ConSiDERS-The-Human Evaluation Framework: Rethinking Human Evaluation for Generative Large Language Models

Aparna Elangovan, Ling Liu, Lei Xu, Sravan Bodapati, Dan Roth
AWS AI Labs
{aeg,lingliun,leixx,sravanb,drot}@amazon.com

Abstract

In this position paper, we argue that human evaluation of generative large language models (LLMs) should be a multidisciplinary undertaking that draws upon insights from disciplines such as user experience research and human behavioral psychology to ensure that the experimental design and results are reliable. The conclusions from these evaluations, thus, must consider factors such as usability, aesthetics, and cognitive biases. We highlight how cognitive biases can lead evaluators to conflate fluency with truthfulness, and how cognitive uncertainty affects the reliability of rating scores such as Likert. Furthermore, the evaluation should differentiate the capabilities and weaknesses of increasingly powerful large language models – which requires effective test sets. The scalability of human evaluation is also crucial to wider adoption. Hence, to design an effective human evaluation system in the age of generative NLP, we propose the ConSiDERS-The-Human evaluation framework, consisting of 6 pillars – Consistency, Scoring Criteria, Differentiating, User Experience, Responsible, and Scalability.

1 Introduction

Generative tasks in natural language processing (NLP) have to rely on human evaluation, as the current set of automated metrics does not correlate well with human judgment (Gao and Wan, 2022; Deutsch et al., 2022). Human evaluation tends to be expensive and difficult to repeat or reproduce (Belz et al., 2023, 2020). Even more importantly, an all-too-common scenario is that the evaluation method is fundamentally misaligned with the problem statement (Hämäläinen and Alnajjar, 2021). In the age of generative large language models (LLMs) with increasing capabilities that can generate fluent content that even fools humans (Clark et al., 2021), ensuring that the human evaluation is set up appropriately to measure the right aspects and reach the right conclusions is crucial.

In this position paper, we first argue that to design and interpret the results of human evaluation accurately, the evaluation pipeline needs to be human-centric in the age of generative AI, accounting for human evaluators and their cognitive biases. The field of user experience (UX) takes into account the emotional states of a user, a.k.a. how a user feels (Marques et al., 2021; Hartson and Pyla, 2012). It is a well-known fact in UX that users tend to be heavily influenced by aesthetic aspects, while actual function or usability takes second place when users perceive a system as useful, leading to the notion “what is beautiful is useful” (Sonderegger and Sauer, 2010; Tuch et al., 2012; Hamborg et al., 2014). Aesthetics also extends to language. Factors such as fluency can affect the evaluation, outweighing the actual content or substance (Reber, 2011). Studies in human-computer interaction (HCI), cognitive, and social psychology have demonstrated that processing fluency – the ease with which information is perceived and processed in the human mind – has a positive effect on evaluation (Preßler et al., 2023; Tsai and Thomas, 2011; Greifeneder and Bless, 2017).
Current state-of-the-art (SOTA) LLMs tend to be quite fluent and produce content that is easy to read and understand; as a result, users can conflate fluency and usefulness. Therefore, we need to closely examine our human evaluation before reaching conclusions such as “the LLM can perform function <x> similar to or better than a trained professional”. NLP evaluation procedures, therefore, at the very least must delineate style vs. substance. Secondly, the effectiveness of the test set in measuring the capabilities of a model is critical, as ineffective test sets cannot adequately evaluate these models – a common theme that has surfaced in many leaderboards and public datasets (Tedeschi et al., 2023; Elangovan et al., 2021).

Hence, in this position paper, we make the following contributions:

1) We propose a framework – a structure for organizing and contextualizing human evaluation – that can be customized and adapted to specific contexts. Our proposed framework, the ConSiDERS-The-Human evaluation framework, has 6 pillars (see the checklist in Appendix A.1):

• Consistency of human evaluation: The findings of human evaluation must be reliable and generalizable.
• Scoring Criteria: The scoring criteria must include general-purpose criteria, such as readability, as well as criteria tailored to fit the goal of the target tasks or domains.
• Differentiating: The evaluation test sets must be able to differentiate the various capabilities as well as the weaknesses of generative LLMs.
• User Experience: The evaluation must take into account user experience, including their emotions & cognitive biases, when designing experiments and interpreting results.
• Responsible: The evaluation needs to account for responsible AI, including aspects such as bias, safety, robustness, and privacy capabilities of the model.
• Scalability: Human evaluation must be scalable for pragmatic widespread adoption.

2) We make the case for why UX and the psychology of cognitive biases should be at the forefront of human evaluation. In the last 20 years, fewer than 7% of the papers (only 16 papers) with “human” and “eval” in their title available in the ACL Anthology mention user-experience-related keywords in either the title or the abstract (see query in Appendix A.2.6).

3) We highlight how neglecting the role of cognitive biases in human evaluation can lead to incorrect conclusions from the study. We, therefore, provide specific recommendations to mitigate the effects of common cognitive biases. We also provide tips to troubleshoot and improve consistency issues in human evaluation.

In the rest of this paper, we introduce the necessary background concepts in Section 2 and explore each of the 6 pillars in detail in Section 3.

2 Background concepts

2.1 Usability

Usability, according to ISO 9241-11:2018, is defined as “the extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use” (Barnum, 2020). Usability testing includes the following five elements, or the 5 Es of usability (Niranjanamurthy et al., 2014; Barnum, 2020).

1) Easy to learn: This aims to address questions such as a) How easy is it for users to complete basic tasks the first time they use the system? b) When users return to the design after a period of not using it, how well are they able to recollect how to use the system?
2) Efficiency: How quickly can experienced users accomplish tasks?
3) Effective: How completely and accurately is the work completed or are the goals reached?
4) Error tolerant: How many errors do users make, how critical are these errors, and how easily can they recover from the errors?
5) Engaging / Satisfaction: How much does the user like using the system?

These 5 Es are crucial when designing human evaluation solutions to obtain reliable evaluation results.

2.2 UX and HCI

User experience (UX), according to ISO 9241-110:2010, is defined as “a person’s perceptions and responses that result from the use and/or anticipated use of a product, system or service” (Mirnig et al., 2015). UX thus expands beyond the 5 Es of usability to take into account the broader emotions experienced by users when using the system (Marques et al., 2021; Hartson and Pyla, 2012). In other words, UX takes into account usability as well as the users’ feelings as to how products “dazzle their senses, touch their hearts and stimulate their minds” (Marques et al., 2021). HCI is the UX when humans interact with computer systems, including user interfaces and how information is presented on the digital screen. Commercial end-user-facing LLMs such as ChatGPT have dazzled the minds of their users. Users’ emotions, including perceived usability or usefulness, tend to be heavily influenced by aesthetics (Hartson and Pyla, 2012). Aesthetics influences the user heavily initially or in one-off tests, but over time aesthetics plays a much lesser role and usability becomes crucial (Andreas Sonderegger and Sauer, 2012). The field of UX, therefore, attempts to disambiguate between the perceptions that users form as a result of aesthetics and the need to measure actual function as defined by the 5 Es of usability, whilst embracing user emotions.

2.2.1 Measuring UX: Perception vs. Performance

UX feedback can be qualitative, such as user interviews, or quantitative metrics. Quantitative UX metrics include performance-based metrics, such as the time to complete a task and the errors the users encounter, and perception-based self-reported metrics collected through rating scales and preferences (Tullis and Albert, 2013). A Likert scale is a type of attitude scale, a special case of a rating scale, that measures the degree to which a person agrees/disagrees with a given statement (Taherdoost, 2019). In NLG, performance-based metrics designed to measure the impact of the end system are considered extrinsic evaluations, while intrinsic evaluations attempt to evaluate properties of the NLG text (van der Lee et al., 2019). Measuring aspects such as fluency is an intrinsic evaluation, usually done through a rating scale or preference tests, where the evaluator is asked which model’s output they prefer. Rating scales and preference tests are based on user perception and are therefore subject to cognitive biases.

2.3 Cognitive biases

Cognitive biases are systematic errors in human judgment, or aspects that drive irrational behavior (Tversky and Kahneman, 1974; Ellis, 2018). They are usually a result of relying on heuristics to make a decision. There are many types of cognitive biases; their sources can be broadly categorized into a) too much information, b) lack of information or lack of understanding or meaning associated with the information, c) the need to act or make judgments fast, and d) information that is remembered or recalled (Azzopardi, 2021).
For example, when a user is presented with a long list (too much information) during information retrieval, the user may simply click on the first link to quickly filter out information, due to position bias (Azzopardi, 2021). There are several studies on the effects of cognitive biases on information search and retrieval (Lau and Coiera, 2007; White, 2013) and crowdsourcing (Eickhoff, 2018; Santhanam et al., 2020). For instance, White (2013) finds that evaluation of search and retrieval systems is impacted by confirmation bias – people’s unconscious tendency to prefer information that confirms their beliefs and to disregard evidence that refutes it. There are over 180 different types of cognitive biases identified (Azzopardi, 2021; Tversky and Kahneman, 1974), resulting from a range of factors – from how questions are framed (Choi and Pak, 2005) to prior beliefs (White, 2013) – that attempt to explain the heuristics humans use to make decisions. These heuristics also impact how humans evaluate LLMs.

3 ConSiDERS-The-Human framework

3.1 User Experience

Cognitive biases play a key role in how humans judge or rate a system. Despite this, there is little reporting of the influence of these biases in NLG tasks with human evaluation (Schoch et al., 2020). In this section, we make the case as to why UX and the psychology of cognitive biases are crucial components of human evaluation in NLP. Since UX and the psychology of cognitive biases are entire fields on their own, it is impossible to cover all the details in this paper. We highlight the significant impact of cognitive biases on human evaluation of NLG tasks that can lead to incorrect conclusions. Processing fluency – the ease with which information is processed by the human mind (Tsai and Thomas, 2011) – affects factors such as the perceived truthfulness and usefulness of statements. These lessons from psychology also apply to NLG human evaluation, where the human evaluation strategy needs to isolate the effects of linguistic fluency from aspects such as factual correctness. As presented in the introduction, the notion “what is beautiful is useful” (Sonderegger and Sauer, 2010; Hamborg et al., 2014) also extends to language, where information that is presented in an easy-to-process manner can be perceived as true (Schwarz, 2006). To be clear, we are not calling for linguistic fluency and coherence to be trivialized. On the contrary, we highlight how powerful their influence is on human judgment and evaluation. In the following Section 3.1.1, we provide a few tips on how to isolate such effects in NLG evaluation.

3.1.1 Recommendations to mitigate common cognitive biases in NLG evaluation

1. Cognitive uncertainty in user feedback, including rating schemes: Explicit user feedback, such as 1-5 rating scales and preference-based tests, is inherently subject to cognitive uncertainty; the same user can therefore change their rating on the same item when asked again at a later point in time, even within a few minutes of the initial rating (Jasberg and Sizov, 2020; Kotkov et al., 2022; Amatriain et al., 2009a). Uncertainty in user feedback is a well-known problem in recommendation systems, where a user’s rating is considered to be noisy (Hill et al., 1995). Jasberg and Sizov (2020) demonstrate the scale of the problem: 65% of users change their rating even within short intervals between re-rated items.
Intra-user rating consistency tends to be higher when the ratings are extreme, e.g., very good or very bad, whereas in-between ratings tend to have lower consistency (Amatriain et al., 2009a,b). Hence, cognitive uncertainty is more likely to affect model evaluations where the outputs are neither very good nor very bad and the ratings fall near the mid-point, e.g., 3 on a rating scale of 1 to 5. Similar problems can also surface in preference tests, where users choose “A is marginally better than B” when they don’t have strong opinions.

Mitigation: Rating denoising algorithms in recommendation systems obtain multiple ratings from the same user and attempt to keep only some of the ratings (Joorabloo et al., 2022; Amatriain et al., 2009b). For instance, Amatriain et al. (2009b) propose to keep only those intra-user ratings whose difference is less than a predefined threshold and to choose the mildest (most neutral) rating as the final rating for that user. The intuition here is that the mildest rating is not likely to affect the items recommended to the user. NLP human evaluation can potentially leverage such denoising algorithms. Some studies have reported that binary preference-based evaluation has better consistency compared to rating (Belz and Kow, 2010).

2. Conflating fluency with attributes such as truthfulness: In psychology, information that the mind processes with subjective ease is more likely to be judged as true (Koch and Forgas, 2012; Reber and Schwarz, 1999). This subjective ease can be due to factors including how the information is presented, how frequently it is repeated, and cues of familiarity – for example, statements from native speakers can seem truer than those delivered with a foreign accent (Brashier and Marsh, 2020). Thus, humans rely on shortcuts and draw inferences on aspects such as truthfulness from feelings. Conflating fluency with truthfulness is a result of a cognitive bias called the halo effect. The halo effect is the influence of a global evaluation on evaluations of individual attributes (Nisbett and Wilson, 1977). This particularly affects NLP scoring criteria such as factual completeness, salience, and truthfulness, where these individual attributes can be impacted by the overall global attribute – linguistically fluent, easy-to-process information. Therefore, the reliability of rating schemes particularly affects tasks that require rigorous, detailed inspection, such as truthfulness or factual completeness. This cognitive bias has been largely ignored when using rating scales, including Likert, to evaluate the factual correctness of LLMs. As a result, experimenters can inadvertently conclude that LLMs are as factually comprehensive in performing a given function as a trained professional such as a doctor. We find that 9 out of the 19 papers we sampled from top-tier medical journals use the perception-based Likert scale to evaluate factual correctness and completeness (details in Appendix A.4), indicating how widespread these practices are.

Mitigation: Tasks such as fact-checking and factual completeness that require inspecting individual traits must ideally be split into atomic facts, as detailed in Section 3.2, to isolate the impact of fluency and ease of information processing from that of the facts themselves.

3. Over-reliance on initial information: Anchoring bias, or over-reliance on an initial piece of information (Tversky and Kahneman, 1974; Furnham and Boo, 2011), affects how models are scored using preference or rating tests.
For instance, when performing preference tests, if the model presented first on the screen is always the same, then the initial perception of evaluators can have a significant impact on the rest of the evaluation.

Mitigation: It is important to shuffle the display order so that the order, such as the sequence of human evaluation tasks, doesn’t give away the model. In preference tests, within each task where the outputs of, say, 2 models (e.g., model A vs. B) are compared, the assignment of the underlying models to A and B must also be randomly shuffled so that A does not always refer to the same model.

4. Perception vs. performance: Most self-reported feedback, such as user ratings, tends to be based on human perception (Tullis and Albert, 2013). While perception-based metrics, which represent how a user feels, are necessary to measure subjective aspects such as readability, they do not capture functional performance-based metrics such as efficiency. Studies have shown that users can dislike a system that performs well or like a system that does not perform well (Bailey, 1993). For instance, in an experiment where participants were asked to choose between one-level and two-level menus for sorting categories, participants preferred the two-level menu, even though during actual use the one-level menu was much faster and less error-prone (Bailey, 1993; Hayhoe, 1990). This demonstrates the discrepancy between reported preferences and measurements of efficiency. In the context of LLM evaluation, consider a hypothetical application scenario where an LLM-generated output is used to automatically draft email responses. In this scenario, how much a user prefers the output of a model and how useful that output actually is in boosting productivity (time spent drafting emails) need not be correlated. The main challenge with conducting usability studies is that the software system needs to be built to study how it impacts an end user’s productivity. In addition, confounding factors such as a poorly designed UI can reduce productivity, and these aspects need to be taken into account when designing and analyzing usability studies to understand the impact of an LLM-generated output. Liebling et al. (2022) highlight a similar problem in evaluating large-scale machine translation, where the end user’s experience can be quite different from simply evaluating the model output.

Mitigation: NLP tasks that are meant to assist end users must eventually conduct usability studies to capture performance metrics such as efficiency.

3.2 Consistency of human evaluation

Reproducibility of experiments in science is a widespread challenge and has even led to the term “replication crisis” being coined (Baker, 2016). Human evaluation is no exception to this challenge, where less than 5% of human evaluations are repeatable (Belz et al., 2023). Despite the challenges, consistency cannot be ignored, as poor reproducibility can point to core design problems. Broadly speaking, non-random or systematic inconsistencies primarily arise from 5 main design flaws: 1) ill-defined evaluation guidelines provided to the annotators, 2) high task complexity, 3) evaluators who are not well qualified or suited to the task, 4) a small number of evaluators and/or a small evaluation set, and 5) rating scales such as the Likert. We specifically need to be able to differentiate between random and non-random inconsistencies, as human evaluators are subject to decision errors/outcomes depending on their cognitive state.
Random errors can neither be predicted nor controlled (Sukumar and Metoyer, 2018). Thus, understanding the role of non-random variations due to system design is key to improving the consistency of evaluation.

1) Ill-defined or complex evaluation guidelines: Ill-defined guidelines are often ambiguous, incomplete, do not address boundary cases, and do not provide adequate examples (Gadiraju et al., 2017). To illustrate this point, Pradhan et al. (2022) use the example of a seemingly simple task, “Is there a dog in this image?”, where the authors point out how even this simple task can elicit several clarification questions such as “Does the dog need to be a real animal?”, “What if the dog is only partially visible in the image?” and “What about a wolf?”. The authors further suggest a 3-stage workflow to improve annotation guidelines: Stage 1 involves workers identifying ambiguous samples; Stage 2 involves labeling a few ambiguous examples to add as clarifying examples in the instructions; and Stage 3 involves workers performing the actual annotation using the revised guidelines with the clarifying examples. Overly long annotation guidelines might even require training the annotators, and hence annotators with task-relevant experience tend to be preferred (Rottger et al., 2022). Wu and Quinn (2017) find that using simple vocabulary and a logical ordering of instructions can improve guideline quality. Improving guidelines alone may not be sufficient, as enhancing the user interface design can help reduce the cognitive load of the annotator, which in turn can improve the accuracy of annotation tasks (Alagarai Sampath et al., 2014). A simple way to identify deficiencies in guidelines is to have “experts” independently evaluate a set of task items using the guideline and compute the IRA score using an appropriate metric such as a Kappa score. Low IRA can indicate potential problems with the annotation guidelines, in which case revising the guidelines iteratively can lead to reasonable agreement (Iskender et al., 2021). The 5 Es of usability listed in Section 2.1 are an important guide when designing human evaluation solutions, including aspects such as how quickly human evaluators can complete their tasks while minimizing errors, and how easily the annotation guidelines can be followed and memorized. For example, an overly detailed, hard-to-remember evaluation guideline can result in poor usability – affecting ease of learning, efficiency, and error tolerance – leading to poor inter-rater agreement and/or very slow evaluation turnaround times.

2) High task complexity: Tasks that involve high cognitive load for the evaluator can lead to lower agreement (Kim and Park, 2023; Pommeranz et al., 2012; Liu et al., 2023). High cognitive load can even simply involve asking the evaluator to assign a rating of 1-5 (Freitag et al., 2021). One way of mitigating this is to simplify the task. Simplification is key to obtaining consistent results. An example of task simplification to evaluate the factual completeness or saliency of a model-generated summary is to break a long text into hierarchical units of facts (Liu et al., 2023) using protocols such as Pyramid (Nenkova and Passonneau, 2004). Breaking a reference summary into atomic facts allows the evaluators to verify whether each fact is present, and the total number of facts captured in the summary can be summed up automatically to compute the overall score.
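To make the aggregation step concrete, here is a minimal sketch of this kind of fact-level scoring, assuming the atomic facts have already been extracted and each evaluator records a binary judgment per fact; the names and numbers below are illustrative, not taken from the paper.

```python
from statistics import mean

def fact_recall(judgments):
    """judgments: list of booleans, one per atomic fact in the reference,
    indicating whether an evaluator found that fact in the model summary."""
    return sum(judgments) / len(judgments)

# Hypothetical example: a reference summary decomposed into 4 atomic facts,
# each independently checked by 3 evaluators against the model summary.
per_evaluator_judgments = [
    [True, True, False, True],   # evaluator 1
    [True, True, False, False],  # evaluator 2
    [True, True, True, True],    # evaluator 3
]

# Per-evaluator recall, then averaged across evaluators for the final score.
scores = [fact_recall(j) for j in per_evaluator_judgments]
print(f"fact-level recall per evaluator: {scores}")
print(f"mean fact-level recall: {mean(scores):.2f}")
```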
Thus, the fact-level recall score is likely to yield much more consistent and reliable results than a grading scheme such as the Likert scale, which would involve asking the evaluator to rate the completeness, framing the problem as “How complete do you think the summary is on a scale of 1-5?”. A flip side to breaking long text into atomic units is that it can lead to a loss of information when measuring certain types of criteria; e.g., qualitative aspects such as coherence cannot be evaluated using atomic facts.

3) Ill-suited evaluators: Ill-trained annotators can also be a source of inconsistency, a scenario usually encountered when using crowdsourced workers to evaluate specialized tasks. Annotators who demonstrate poor attention can be identified using a set of attention check questions (Agley et al., 2022). Similarly, ill-trained or ill-qualified workers can be identified using an “exam set”, a test set for which answers are known. Lower IRA can also be a result of variation in the skills or qualifications of the evaluators (Artstein, 2017).

4) Small number of evaluators and/or a small test set: A low number of participants, i.e., the sample size of the evaluators, is one of the key contributors to poor reproducibility (Maxwell et al., 2015; Button et al., 2013). Using a larger pool of evaluators can mitigate experimenter bias introduced when using very small groups (Sukumar and Metoyer, 2018). A small test set is another source of replication problems. A caveat here is that a large group of evaluators and a large test set are necessary but not sufficient to ensure that the study is free of experimental bias, as selection bias can result in a large sample that is not representative of the target population (Kaplan et al., 2014).

5) Rating scales such as the Likert: As discussed in Section 3.1.1, self-reported user feedback using rating schemes is inherently noisy and, therefore, unreliable. Despite 50% of human evaluations relying on the Likert scale (van der Lee et al., 2021), there has been little investigation into the controversies surrounding it in NLP. These include the Likert scale’s consistency issues (Leung, 2011), the aggregation & interpretation of its scores (Bishop and Herron, 2015; Willits et al., 2016), and the methods to compute IRA (O’Neill, 2017), all of which the NLP community needs to research further.

3.2.1 IRA: Importance and caveats

IRA, or inter-rater agreement, in human evaluation measures how well two or more evaluators agree on the scores or preferences they assign independently. While it is important to measure IRA to detect problems in the design, especially given that only 18% of the papers using human evaluation report IRA (Amidei et al., 2019), there are several caveats on the use of IRA metrics called out in the medical community, which we discuss below. Gisev et al. (2013) provide guidance on when to use which IRA measure, e.g., Krippendorff’s-α vs. Cohen’s-κ, depending on experiment design factors such as the number of annotators and the type of variable (e.g., ordinal, nominal, etc.). Despite such high-level guidance, which IRA measure to use and how to interpret it remains contentious (ten Hove et al., 2018; McHugh, 2012). Using an inappropriate metric, such as Fleiss-κ for interval data, is common in NLG (Amidei et al., 2019). Furthermore, even when an appropriate class of IRA metric is used, the scores can range from poor to almost perfect depending on the IRA measure chosen (ten Hove et al., 2018). An intuitive measure of inter-rater agreement is percent agreement.
However, the main criticism is that this does not take chance agreement into account (McHugh, 2012). The question to ask when using IRA metrics such as a kappa statistic is: why and when does chance agreement matter? Some tasks genuinely have class imbalance, e.g., span annotation tasks for named entity recognition, where unmarked spans form the majority class, which would lead to inflated percentage agreement (Artstein, 2017). Another reason is the assumption that some annotators might be making random guesses when they don’t know the answer, and that the majority of the raters may NOT be making deliberate choices (McHugh, 2012). Hence, various IRA measures of the Kappa (e.g., Cohen’s-κ) or Alpha (e.g., Krippendorff’s-α) family estimate chance agreement or disagreement empirically when computing IRA. However, if we consider a group of conscientious raters, does chance agreement matter for rating model outputs? For instance, if a model is a high performer and the most common rating is a 4, should disagreeing on a minority rating, such as a 1 vs. a 2, drastically reduce the IRA? We further demonstrate, using a toy example in Appendix A.3, how disagreement on minority labels can substantially reduce IRA measured using Krippendorff’s-α while percentage agreement barely changes. What is considered “low” IRA can vary from task to task, as complex tasks or difficult samples tend to have low IRA (Kim and Park, 2023). This is crucially important for interpreting IRA and is particularly relevant for NLG evaluation. NLG tasks have typically reported relatively low IRA, e.g., an average Krippendorff’s-α of 0.62 (Amidei et al., 2019), while the standard interpretation is that experiments with scores below 0.67 must be deemed unreliable (Marzi et al., 2024). For instance, rating “How good is the generated story?” is more likely to have a lower IRA than “Is 99 the largest 2-digit number?”. Hence, while it is mandatory to measure IRA, it is important to ensure that the scores are interpreted in the context of the task. Performing detailed analysis, including computing IRA with multiple metrics such as a baseline percentage agreement and visualizing item-wise agreement scores, can help in analyzing the results. Researchers have also called for further investigation to understand the usefulness of a given measure for a given problem, demonstrating the challenges in selecting an appropriate IRA measure (ten Hove et al., 2018).
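The contrast between percent agreement and a chance-corrected coefficient can be reproduced in a few lines of code. The sketch below is an illustrative example in the same spirit (it is not the example from Appendix A.3): it computes pairwise percent agreement and Krippendorff's-α, here with a nominal distance function, for two raters who agree on every item except a single minority-rated one.

```python
from collections import defaultdict
from itertools import permutations

def percent_agreement(units):
    """Fraction of rater pairs that assign identical ratings, across all items."""
    agree = total = 0
    for ratings in units:
        for a, b in permutations(ratings, 2):
            total += 1
            agree += a == b
    return agree / total

def krippendorff_alpha(units, distance=lambda a, b: float(a != b)):
    """Krippendorff's alpha via the coincidence matrix; `distance` defaults
    to the nominal metric (0 if the two ratings are equal, 1 otherwise)."""
    coincidence = defaultdict(float)
    for ratings in units:
        m = len(ratings)
        if m < 2:
            continue
        for a, b in permutations(ratings, 2):
            coincidence[(a, b)] += 1.0 / (m - 1)
    totals = defaultdict(float)
    for (a, _), count in coincidence.items():
        totals[a] += count
    n = sum(totals.values())
    observed = sum(count * distance(a, b) for (a, b), count in coincidence.items())
    expected = sum(totals[a] * totals[b] * distance(a, b)
                   for a in totals for b in totals) / (n - 1)
    return 1.0 - observed / expected

# Two raters, ten items: unanimous 4s except one disagreement on a minority rating.
units = [[4, 4]] * 9 + [[1, 2]]
print(f"percent agreement:    {percent_agreement(units):.2f}")   # 0.90
print(f"Krippendorff's alpha: {krippendorff_alpha(units):.2f}")  # ~0.49
```

Percent agreement barely registers the single disagreement, while the chance-corrected coefficient drops sharply because almost all observed ratings fall in one category; switching the distance function to an interval metric raises the coefficient again, underscoring why the choice of IRA measure and its interpretation must match the task.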
3.3 Scoring criteria

Scoring criteria refers to “what aspects to score”. The 4 common scoring criteria covered in the NLP literature are (a) linguistic fluency – the quality of a single sentence, (b) coherence – the overall flow or readability, (c) relevance – the importance of the content, and (d) factuality – factual correctness (Fabbri et al., 2021; Gao and Wan, 2022). Also note that the nomenclature used to indicate a given criterion need not be consistent across studies, e.g., fluency vs. naturalness (van der Lee et al., 2019). Evaluation criteria also need to be customized across NLP tasks; this is necessary as (a) the criteria will vary between domains and tasks and (b) the generated text almost always needs to be evaluated against multiple criteria (Burchardt, 2013; Freitag et al., 2022). For instance, Freitag et al. (2022) propose the use of Multidimensional Quality Metrics (MQM) for evaluating machine translation. MQM is a generic framework for evaluating translation quality and provides a catalog of over 100 issues or error types, organized in a hierarchy, that evaluators can check for (Burchardt, 2013). This hierarchical categorization of errors enables granular as well as coarse-grained analysis of translation quality, and it can be adapted for NLG evaluations. In addition to the exhaustive evaluation categorization provided in the MQM framework, responsible AI (RAI) must be factored into human evaluation. Sun et al. (2024) propose 6 categories for RAI evaluation – truthfulness, safety, fairness, robustness, privacy, and machine ethics. Domain-specific customization and extensions also form an integral part of evaluation. For instance, evaluating the effectiveness of an LLM for a domain such as legal should include additional criteria such as case analysis and charge damages calculation (Fei et al., 2023). See the conceptual view in Table 1.

High-level category    Sub-criteria
Core NLP               Fluency
Core NLP               Factuality
Core NLP               Relevance
Core NLP               Coherence
Domain Specific        ...
Responsible AI         Bias & Fairness
Responsible AI         Privacy
Responsible AI         Safety
Responsible AI         Robustness

Table 1: Logical view of high-level scoring criteria

3.4 Differentiating

“To differentiate is to identify the differences between things” (vocabulary.com). The test sets used in evaluation must be able to differentiate between the various capabilities as well as the weaknesses of generative LLMs. For instance, when the test sets are relatively easy, most models can achieve high scores and seem very capable. Conversely, when the test sets are very difficult, models might achieve low scores, making the models seem ineffective. Therefore, tests that are ineffective in differentiating capabilities can lead to (a) incorrect conclusions about the capabilities of a model, (b) poor calibration or ranking of various models, or (c) an inability to identify any improvements or degradation between model versions, despite spending significant resources retraining new models. Hence, constructing effective tests that can differentiate model capabilities is crucial; otherwise evaluation simply results in wasted effort.

Traditionally, models have been evaluated using public datasets and benchmarks such as GLUE (Wang et al., 2019b) and SuperGLUE (Wang et al., 2019a). These evaluations are not without their share of problems, as models can exploit weaknesses in the official test sets, relying on shortcuts or spurious correlations – such as the length of the input – to predict the target label, achieving high performance that is nevertheless not generalizable beyond the official test set (McCoy et al., 2019; Elangovan et al., 2023; Gururangan et al., 2018). In addition to these problems, evaluating the current generation of SOTA LLMs poses further challenges. 1) LLMs are trained on billions of tokens available on the internet, and the training data used is rarely well documented; hence, these LLMs may have consumed public benchmark test sets as part of their training data (Magar and Schwartz, 2022; Sainz et al., 2023). 2) The models are generative, producing natural language output unlike a model trained on a classification task; this makes it difficult to automate evaluation, and thus they are far more reliant on human evaluation. 3) The LLMs are capable of following natural language instructions to solve a vast variety of tasks; hence, they need to be evaluated on a range of instructions provided as prompts that emulate end-user use cases.
Emulating end-user use cases is crucial, as there are fundamental differences between the tasks in NLP benchmarks and the kinds of questions or problems users can ask an LLM. Firstly, end users tend to be very creative and prompt the LLM with all kinds of queries. This diversity is particularly important in safety-critical applications such as medicine, as evaluation blind spots can potentially lead to harmful consequences. Secondly, the same message or question can be framed (prompted) in many ways. Hence, test scenarios need to consider the various natural ways in which users write their intentions when interacting with an LLM. If these aspects are not taken into account when constructing a test set, solely relying on traditional NLP tasks such as sentiment analysis can result in traditional SOTA models outperforming ChatGPT (Kocoń et al., 2023), when clearly this is not reflective of the end-user experience. Hence, we argue that the test cases used for evaluating LLMs need to represent end-user scenarios, in addition to standard NLP tasks. Benchmarks such as Big-Bench (Big bench authors, 2023) take a step in this direction by enabling GitHub contributors to add new tasks to the benchmark. Furthermore, the same prompts may not be effective against all LLMs; hence, prompts might often have to be customized for individual models, making benchmarking non-trivial. In light of these new challenges, curating effective test sets is a critical problem that the NLP community must tackle. Benchmarks such as HELM (Liang et al., 2023) evaluate LLMs using multiple public datasets across various tasks, such as question answering (QA) and sentiment analysis, and measure aspects beyond accuracy, including toxicity and bias. While these are steps in the right direction, the effectiveness of public test sets such as the IMDB dataset (Maas et al., 2011) for sentiment analysis or XSum (Narayan et al., 2018) for summarization in measuring the capabilities of SOTA LLMs can be limited for the reasons discussed earlier. These datasets simply do not sufficiently represent end-user use cases. XSum also has hallucinated content in over 75% of its gold summaries (Maynez et al., 2020), demonstrating further weaknesses in the test sets themselves. Robustness testing is also a key aspect of evaluation, as models can be vulnerable to basic perturbations such as capitalization, whitespace, and prompt formatting (Sclar et al., 2024) and need not understand basic linguistic concepts such as negation (Rogers, 2021). While efforts such as DynaBench, which curates progressively harder test sets (Kiela et al., 2021), and behavioral testing of models (Ribeiro et al., 2020) are promising approaches for curating effective test sets, curation still needs more research to make significant progress in targeting end-user use cases when evaluating LLMs.
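To illustrate this kind of robustness probing, the sketch below generates simple surface-level variants of a prompt (casing, whitespace, and formatting changes) whose model outputs can then be compared for consistency; the prompt and the particular set of perturbations are illustrative assumptions, not taken from the paper or from Sclar et al. (2024).

```python
def surface_perturbations(prompt: str) -> dict[str, str]:
    """Generate simple surface-level variants of a prompt for robustness testing.
    A robust model should behave consistently across all of them."""
    return {
        "original": prompt,
        "upper_case": prompt.upper(),
        "lower_case": prompt.lower(),
        "extra_whitespace": "  " + "  ".join(prompt.split()) + "  ",
        "no_trailing_punctuation": prompt.rstrip(".?! "),
        "reformatted": f"Instruction:\n{prompt}\nAnswer:",
    }

# Hypothetical usage: send every variant to the model under test and check
# whether the answers (or their scores) stay consistent across variants.
for name, variant in surface_perturbations(
        "Classify the sentiment of this review: 'The plot was dull.'").items():
    print(f"{name:>24}: {variant!r}")
```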
3.5 Responsible

In human evaluation, the Responsible pillar has to consider two aspects: a) Is the model behavior responsible? b) Do the human evaluators introduce bias into the evaluation results? While there is no formal definition of responsible AI, at the very least it entails fairness, safety, truthfulness, and privacy (Sun et al., 2024). Searching for papers with “responsible” in their title returns 20 papers; expanding the search to include terms such as “bias” and “privacy” returns ≈1000, or 1% of the papers in the ACL Anthology, while none of the human evaluation papers mention responsible-AI-related keywords in their title or abstract (query in Appendix A.2.8) – a telltale sign that more work is needed.

1. Is the model behavior responsible? Answering this question requires evaluating the model for bias, its ability to withstand privacy attacks, truthfulness, and whether its responses are safe. Bias is “prejudice in favor of or against one thing, person, or group”. Bias affects groups by gender, race, culture, religion, geography, and disability (Esiobu et al., 2023). Thus, the tests sent for human evaluation need to specifically cater to these cases and report performance on the various subgroups to ensure that the LLM behaves responsibly. This requires curating metadata for the tests, such as race, to be able to report across these segments. Generative LLMs are susceptible to leaking private details such as personally identifiable information (PII) from the training data, including when subject to privacy attacks (Vakili and Dalianis, 2023; Li et al., 2023). One method to mitigate such privacy leaks is to obfuscate PII such as names and locations in the training data using techniques such as pseudonymization – recognizing privacy-sensitive information and replacing it with realistic substitutes (Vakili and Dalianis, 2023) – and differential-privacy-based approaches that add noise to the input (Chen et al., 2023). Safety ensures that the models do not generate content that harms a person’s physical safety or mental health (Mei et al., 2023; Rusert et al., 2022). Red teams, groups of people authorized to imitate an adversary’s attack or exploitation capabilities, are used to evaluate the robustness, safety, and privacy of LLMs (Perez et al., 2022; Radharapu et al., 2023).

2. Do the human evaluators introduce bias into the evaluation results? As discussed previously, when a substantial portion of evaluation relies on human perception, the diversity of the human evaluators plays a key role in how representative the results are of the wider population. Despite this, less than 3% of papers report demographic information about their evaluators (van der Lee et al., 2021). With a small group of evaluators – or even with a large one – aspects such as selection bias can result in biased results. Hence, we call for human evaluation to consider evaluator demographics to mitigate bias effects in the evaluation.

3.6 Scalability

Human evaluation is expensive, yet large volumes of test cases are necessary to differentiate model capabilities and ensure consistency. Hence, optimizations to reduce cost and time are crucial for wider adoption. Automating parts of human evaluation can reduce cost. For instance, automation can potentially help shortlist a set of candidates for human evaluation. While the shortcomings of n-gram-based automated evaluations using metrics like ROUGE (Lin, 2004) are well studied (Deutsch et al., 2022), approaches such as using LLMs to evaluate LLMs (Lin and Chen, 2023; Chiang and Lee, 2023) need further exploration. Firstly, the effectiveness of LLM-based evaluation is measured using its correlation with human evaluation; hence, the human evaluation procedures need to be strengthened first to draw comprehensive and robust conclusions, creating a chicken-and-egg problem. Secondly, using LLMs to evaluate LLMs should take into account the differences between perception-based metrics and evaluations that rely on facts, as discussed in Section 3.1.1.
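Since the case for an LLM-based (or any automatic) evaluator rests on how well its scores track human judgments, a common check is to correlate the two over the same set of outputs. The following is a minimal sketch using SciPy; the score lists are made-up placeholders, not results from the paper.

```python
from scipy.stats import pearsonr, spearmanr

# Hypothetical per-output scores for the same 8 generations,
# one set from human evaluators and one from an LLM-based judge.
human_scores = [4.0, 3.5, 2.0, 5.0, 1.5, 3.0, 4.5, 2.5]
llm_judge_scores = [3.8, 3.6, 2.5, 4.9, 2.0, 2.8, 4.2, 3.0]

# Pearson captures linear agreement; Spearman captures rank agreement,
# which is often what matters when the judge is used to rank systems or outputs.
pearson, _ = pearsonr(human_scores, llm_judge_scores)
spearman, _ = spearmanr(human_scores, llm_judge_scores)
print(f"Pearson r:  {pearson:.3f}")
print(f"Spearman rho: {spearman:.3f}")
```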
Studies that attempt to reduce the turnaround times of human evaluation itself are limited: fewer than 50 papers in the last 20 years mention “cost” or “scale” together with human eval in the title (see Appendix A.2.7 for the search query). Levinboim et al. (2021) report using coarse-grained (as opposed to fine-grained) caption annotations from crowdsourced users to be able to scale, a method that can be adopted for human evaluation. Huang et al. (2023) propose identifying effective test samples to reduce the cost of human evaluation in conversation systems. Designing a UI that enables the human annotators to work efficiently (Efficiency, one of the 5 Es of usability in Section 2.1) can reduce time. Given the limited amount of work in this area of scalability of human evaluation, we call for further research.

4 Conclusion & Moving forward

We presented the ConSiDERS-The-Human evaluation framework to keep up with the increasing capabilities of LLMs. We highlight the effects of human emotions and cognitive biases on evaluation, given how commonly perception-based metrics are used to evaluate aspects such as truthfulness. We, hence, encourage researchers in NLP to collaborate with their counterparts in UX, HCI & psychology to ensure that the evaluation measures the right things the right way and that the results are interpreted accurately. We also call for further research in critical areas – including curating effective test sets, the scalability of human evaluation, and responsible AI components such as privacy, bias, robustness, and safety considerations – when evaluating increasingly powerful & ubiquitous generative LLMs.

5 Limitations

Human evaluation is a challenge, especially given the increasing capabilities of SOTA LLMs. Firstly, LLMs have many potential applications, and effectively evaluating each application or domain might need customization. Our main aim in this paper is to provide a generic framework extensible to specific domains or applications. Effectively customizing it for individual cases might require trial and error. Secondly, perception, whilst important, cannot solely dictate how the quality of LLMs is measured. Humans use heuristics to make decisions, and evaluation has to cater to these heuristics. While there are over 180 cognitive biases, in this paper we highlight only a few that can impact evaluation; we note this limitation in Section 2.3 as well.

Acknowledgements

We would like to thank our colleagues Seonwoo Min and Han-Chin Shing for reviewing this paper and providing feedback.

References

Jon Agley, Yunyu Xiao, Rachael Nolan, and Lilian Golzarri-Arroyo. 2022. Quality control questions on amazon’s mechanical turk (mturk): A randomized trial of impact on the usaudit, phq-9, and gad-7. Behavior Research Methods, 54(2):885–897.

Harini Alagarai Sampath, Rajeev Rajeshuni, and Bipin Indurkhya. 2014. Cognitively inspired task design to improve user performance on crowdsourcing platforms. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’14, page 3665–3674, New York, NY, USA. Association for Computing Machinery.

Xavier Amatriain, Josep M. Pujol, and Nuria Oliver. 2009a. I like it... i like it not: Evaluating user ratings noise in recommender systems. In User Modeling, Adaptation, and Personalization, pages 247–258, Berlin, Heidelberg. Springer Berlin Heidelberg.

Xavier Amatriain, Josep M. Pujol, Nava Tintarev, and Nuria Oliver. 2009b. Rate it again: increasing recommendation accuracy by user re-rating.
In Proceedings of the Third ACM Conference on Recommender Systems, RecSys ’09, page 173–180, New York, NY, USA. Association for Computing Machinery. Jacopo Amidei, Paul Piwek, and Alistair Willis. 2019. Agreement is overrated: A plea for correlation to assess human evaluation reliability. In Proceedings of the 12th International Conference on Natural Language Generation, pages 344–354, Tokyo, Japan. Association for Computational Linguistics. Andreas Uebelbacher Andreas Sonderegger, Gerold Zbinden and Juergen Sauer. 2012. The influence of product aesthetics and usability over the course of time: a longitudinal field experiment. Ergonomics, 55(7):713–730. PMID: 22506866. Ron Artstein. 2017. Inter-annotator Agreement, pages 297–313. Springer Netherlands, Dordrecht. Leif Azzopardi. 2021. Cognitive biases in search: A review and reflection of cognitive biases in information retrieval. In Proceedings of the 2021 Conference on Human Information Interaction and Retrieval, CHIIR ’21, page 27–37, New York, NY, USA. Association for Computing Machinery. Robert W. Bailey. 1993. Performance vs. preference. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 37(4):282–286. Monya Baker. 2016. 1,500 scientists lift the lid on reproducibility. Nature, 533(7604):452–454. Carol M Barnum. 2020. Usability testing essentials: Ready, set... test! Morgan Kaufmann. Anja Belz and Eric Kow. 2010. Comparing rating scales and preference judgements in language evaluation. In Proceedings of the 6th International Natural Language Generation Conference. Association for Computational Linguistics. Anya Belz, Simon Mille, and David M. Howcroft. 2020. Disentangling the properties of human evaluation methods: A classification system to support comparability, meta-evaluation and reproducibility testing. In Proceedings of the 13th International Conference on Natural Language Generation, pages 183–194, Dublin, Ireland. Association for Computational Linguistics. Anya Belz, Craig Thomson, Ehud Reiter, and Simon Mille. 2023. Non-repeatable experiments and nonreproducible results: The reproducibility crisis in human evaluation in NLP. In Findings of the Association for Computational Linguistics: ACL 2023, pages 3676–3687, Toronto, Canada. Association for Computational Linguistics. Big bench authors. 2023. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research. Phillip A Bishop and Robert L Herron. 2015. Use and misuse of the likert item responses and other ordinal measures. Int. J. Exerc. Sci., 8(3):297–302. Nadia M. Brashier and Elizabeth J. Marsh. 2020. Judging truth. Annual Review of Psychology, 71(1):499– 515. PMID: 31514579. Aljoscha Burchardt. 2013. Multidimensional quality metrics: a flexible system for assessing translation quality. In Proceedings of Translating and the Computer 35, London, UK. Aslib. 1146 Katherine S. Button, John P. A. Ioannidis, Claire Mokrysz, Brian A. Nosek, Jonathan Flint, Emma S. J. Robinson, and Marcus R. Munafò. 2013. Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5):365–376. Sai Chen, Fengran Mo, Yanhao Wang, Cen Chen, JianYun Nie, Chengyu Wang, and Jamie Cui. 2023. A customized text sanitization mechanism with differential privacy. In Findings of the Association for Computational Linguistics: ACL 2023, pages 5747– 5758, Toronto, Canada. Association for Computational Linguistics. Cheng-Han Chiang and Hung-yi Lee. 2023. 
A closer look into using large language models for automatic evaluation. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 8928– 8942, Singapore. Association for Computational Linguistics. Bernard C K Choi and Anita W P Pak. 2005. A catalog of biases in questionnaires. Prev. Chronic Dis., 2(1):A13. Elizabeth Clark, Tal August, Sofia Serrano, Nikita Haduong, Suchin Gururangan, and Noah A. Smith. 2021. All that’s ‘human’ is not gold: Evaluating human evaluation of generated text. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7282–7296, Online. Association for Computational Linguistics. Daniel Deutsch, Rotem Dror, and Dan Roth. 2022. Reexamining system-level correlations of automatic summarization evaluation metrics. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 6038–6052, Seattle, United States. Association for Computational Linguistics. Carsten Eickhoff. 2018. Cognitive biases in crowdsourcing. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, WSDM ’18, page 162–170, New York, NY, USA. Association for Computing Machinery. Aparna Elangovan, Jiayuan He, Yuan Li, and Karin Verspoor. 2023. Principles from clinical research for nlp model generalization. Aparna Elangovan, Jiayuan He, and Karin Verspoor. 2021. Memorization vs. generalization : Quantifying data leakage in NLP performance evaluation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1325–1335, Online. Association for Computational Linguistics. Geoffrey Ellis. 2018. So, What Are Cognitive Biases?, pages 1–10. Springer International Publishing, Cham. David Esiobu, Xiaoqing Tan, Saghar Hosseini, Megan Ung, Yuchen Zhang, Jude Fernandes, Jane DwivediYu, Eleonora Presani, Adina Williams, and Eric Smith. 2023. ROBBIE: Robust bias evaluation of large generative language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 3764–3814, Singapore. Association for Computational Linguistics. Alexander R. Fabbri, Wojciech Kry´sci´nski, Bryan McCann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. SummEval: Re-evaluating Summarization Evaluation. Transactions of the Association for Computational Linguistics, 9:391–409. Zhiwei Fei, Xiaoyu Shen, Dawei Zhu, Fengzhe Zhou, Zhuo Han, Songyang Zhang, Kai Chen, Zongwen Shen, and Jidong Ge. 2023. Lawbench: Benchmarking legal knowledge of large language models. Dayne Freitag, John Cadigan, John Niekrasz, and Robert Sasseen. 2022. Accelerating human authorship of information extraction rules. In Proceedings of the First Workshop on Pattern-based Approaches to NLP in the Age of Deep Learning, pages 45–55, Gyeongju, Republic of Korea. International Conference on Computational Linguistics. Markus Freitag, George Foster, David Grangier, Viresh Ratnakar, Qijun Tan, and Wolfgang Macherey. 2021. Experts, errors, and context: A large-scale study of human evaluation for machine translation. Transactions of the Association for Computational Linguistics, 9:1460–1474. Adrian Furnham and Hua Chu Boo. 2011. A literature review of the anchoring effect. The Journal of SocioEconomics, 40(1):35–42. Ujwal Gadiraju, Jie Yang, and Alessandro Bozzon. 2017. 
Clarity is a worthwhile quality: On the role of task clarity in microtask crowdsourcing. In Proceedings of the 28th ACM Conference on Hypertext and Social Media, HT ’17, page 5–14, New York, NY, USA. Association for Computing Machinery. Mingqi Gao and Xiaojun Wan. 2022. DialSummEval: Revisiting summarization evaluation for dialogues. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5693–5709, Seattle, United States. Association for Computational Linguistics. Natasa Gisev, J. Simon Bell, and Timothy F. Chen. 2013. Interrater agreement and interrater reliability: Key concepts, approaches, and applications. Research in Social and Administrative Pharmacy, 9(3):330–338. Rainer Greifeneder and Herbert Bless. 2017. The interplay of cognition and feelings: Fluency. Social cognition, pages 145–164. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 1147 2018. Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107–112, New Orleans, Louisiana. Association for Computational Linguistics. Mika Hämäläinen and Khalid Alnajjar. 2021. The great misalignment problem in human evaluation of NLP methods. In Proceedings of the Workshop on Human Evaluation of NLP Systems (HumEval), pages 69–74, Online. Association for Computational Linguistics. Kai-Christoph Hamborg, Julia Hülsmann, and Kai Kaspar. 2014. The interplay between usability and aesthetics: More evidence for the “what is usable is beautiful” notion. Advances in Human-Computer Interaction, 2014:946239. Rex Hartson and Pardha S Pyla. 2012. The UX Book: Process and guidelines for ensuring a quality user experience. Elsevier. Douglas Hayhoe. 1990. Sorting-based menu categories. International Journal of Man-Machine Studies, 33(6):677–705. Will Hill, Larry Stead, Mark Rosenstein, and George Furnas. 1995. Recommending and evaluating choices in a virtual community of use. In Proceedings of the SIGCHI conference on Human factors in computing systems, pages 194–201. Chen Huang, Peixin Qin, Wenqiang Lei, and Jiancheng Lv. 2023. Reduce human labor on evaluating conversational information retrieval system: A humanmachine collaboration approach. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 10876–10891, Singapore. Association for Computational Linguistics. Neslihan Iskender, Tim Polzehl, and Sebastian Möller. 2021. Reliability of human evaluation for text summarization: Lessons learned and challenges ahead. In Proceedings of the Workshop on Human Evaluation of NLP Systems (HumEval), pages 86–96, Online. Association for Computational Linguistics. Kevin Jasberg and Sergej Sizov. 2020. Human uncertainty in explicit user feedback and its impact on the comparative evaluations of accurate prediction and personalisation. Behaviour & Information Technology, 39(5):544–577. Nima Joorabloo, Mahdi Jalili, and Yongli Ren. 2022. Improved recommender systems by denoising ratings in highly sparse datasets through individual rating confidence. Information Sciences, 601:242–254. Robert M Kaplan, David A Chambers, and Russell E Glasgow. 2014. Big data and large sample size: a cautionary note on the potential for bias. Clin. Transl. Sci., 7(4):342–346. 
Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vidgen, Grusha Prasad, Amanpreet Singh, Pratik Ringshia, Zhiyi Ma, Tristan Thrush, Sebastian Riedel, Zeerak Waseem, Pontus Stenetorp, Robin Jia, Mohit Bansal, Christopher Potts, and Adina Williams. 2021. Dynabench: Rethinking benchmarking in NLP. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4110–4124, Online. Association for Computational Linguistics. NamHyeok Kim and Chanjun Park. 2023. Interannotator agreement in the wild: Uncovering its emerging roles and considerations in real-world scenarios. Alex S. Koch and Joseph P. Forgas. 2012. Feeling good and feeling truth: The interactive effects of mood and processing fluency on truth judgments. Journal of Experimental Social Psychology, 48(2):481–485. Jan Koco´n, Igor Cichecki, Oliwier Kaszyca, Mateusz Kochanek, Dominika Szydło, Joanna Baran, Julita Bielaniewicz, Marcin Gruza, Arkadiusz Janz, Kamil Kanclerz, Anna Koco´n, Bartłomiej Koptyra, Wiktoria Mieleszczenko-Kowszewicz, Piotr Miłkowski, Marcin Oleksy, Maciej Piasecki, Łukasz Radli´nski, Konrad Wojtasik, Stanisław Wo´zniak, and Przemysław Kazienko. 2023. Chatgpt: Jack of all trades, master of none. Information Fusion, 99:101861. Denis Kotkov, Alan Medlar, Umesh Raj Satyal, Alexandr V. Maslov, Mats Neovius, and Dorota Glowacka. 2022. Rating consistency is consistently underrated: an exploratory analysis of movie-tag rating inconsistency. Proceedings of the 37th ACM/SIGAPP Symposium on Applied Computing. Annie Y.S. Lau and Enrico W. Coiera. 2007. Do People Experience Cognitive Biases while Searching for Information? Journal of the American Medical Informatics Association, 14(5):599–608. Shing-On Leung. 2011. A comparison of psychometric properties and normality in 4-, 5-, 6-, and 11-point likert scales. Journal of Social Service Research, 37(4):412–421. Tomer Levinboim, Ashish V. Thapliyal, Piyush Sharma, and Radu Soricut. 2021. Quality estimation for image captions based on large-scale human evaluations. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3157–3166, Online. Association for Computational Linguistics. Haoran Li, Dadi Guo, Wei Fan, Mingshi Xu, Jie Huang, Fanpu Meng, and Yangqiu Song. 2023. Multi-step jailbreaking privacy attacks on ChatGPT. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 4138–4153, Singapore. Association for Computational Linguistics. 1148 Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Alexander Cosgrove, Christopher D Manning, Christopher Re, Diana AcostaNavas, Drew Arad Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue WANG, Keshav Santhanam, Laurel Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri S. Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Andrew Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, and Yuta Koreeda. 2023. Holistic evaluation of language models. Transactions on Machine Learning Research. Featured Certification, Expert Certification. 
Daniel Liebling, Katherine Heller, Samantha Robertson, and Wesley Deng. 2022. Opportunities for humancentered evaluation of machine translation systems. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 229–240, Seattle, United States. Association for Computational Linguistics. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Yen-Ting Lin and Yun-Nung Chen. 2023. LLM-eval: Unified multi-dimensional automatic evaluation for open-domain conversations with large language models. In Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023), pages 47– 58, Toronto, Canada. Association for Computational Linguistics. Yixin Liu, Alex Fabbri, Pengfei Liu, Yilun Zhao, Linyong Nan, Ruilin Han, Simeng Han, Shafiq Joty, Chien-Sheng Wu, Caiming Xiong, and Dragomir Radev. 2023. Revisiting the gold standard: Grounding summarization evaluation with robust human evaluation. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4140–4170, Toronto, Canada. Association for Computational Linguistics. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics. Inbal Magar and Roy Schwartz. 2022. Data contamination: From memorization to exploitation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 157–165, Dublin, Ireland. Association for Computational Linguistics. Leonardo Marques, Patrícia Gomes Matsubara, Walter Takashi Nakamura, Bruna Moraes Ferreira, Igor Scaliante Wiese, Bruno Freitas Gadelha, Luciana Martinez Zaina, David Redmiles, and Tayana Uchôa Conte. 2021. Understanding UX better: A new technique to go beyond emotion assessment. Sensors (Basel), 21(21):7183. Giacomo Marzi, Marco Balzano, and Davide Marchiori. 2024. K-alpha calculator–krippendorff’s alpha calculator: A user-friendly tool for computing krippendorff’s alpha inter-rater reliability coefficient. MethodsX, 12:102545. Scott E Maxwell, Michael Y Lau, and George S Howard. 2015. Is psychology suffering from a replication crisis? what does “failure to replicate” really mean? Am. Psychol., 70(6):487–498. Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906–1919, Online. Association for Computational Linguistics. Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428–3448, Florence, Italy. Association for Computational Linguistics. Mary L McHugh. 2012. Interrater reliability: the kappa statistic. Biochem. Med. (Zagreb), 22(3):276–282. Alex Mei, Sharon Levy, and William Wang. 2023. ASSERT: Automated safety scenario red teaming for evaluating the robustness of large language models. 
In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 5831–5847, Singapore. Association for Computational Linguistics. Alexander G. Mirnig, Alexander Meschtscherjakov, Daniela Wurhofer, Thomas Meneweger, and Manfred Tscheligi. 2015. A formal analysis of the iso 9241-210 definition of user experience. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems, CHI EA ’15, page 437–450, New York, NY, USA. Association for Computing Machinery. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don’t give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium. Association for Computational Linguistics. Ani Nenkova and Rebecca Passonneau. 2004. Evaluating content selection in summarization: The pyramid method. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: 1149 HLT-NAACL 2004, pages 145–152, Boston, Massachusetts, USA. Association for Computational Linguistics. M Niranjanamurthy, Archikam Nagaraj, Himaja Gattu, and Puneeth K Shetty. 2014. Research study on importance of usability testing/user experience (ux) testing. International Journal of Computer Science and Mobile Computing, 3(10):78–85. Richard Nisbett and Timothy Wilson. 1977. The halo effect: Evidence for unconscious alteration of judgments. Journal of Personality and Social Psychology, 35:250–256. Thomas A. O’Neill. 2017. An overview of interrater agreement on likert scales for researchers and practitioners. Frontiers in Psychology, 8. Ethan Perez, Saffron Huang, Francis Song, Trevor Cai, Roman Ring, John Aslanides, Amelia Glaese, Nat McAleese, and Geoffrey Irving. 2022. Red teaming language models with language models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 3419–3448, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Alina Pommeranz, Joost Broekens, Pascal Wiggers, Willem-Paul Brinkman, and Catholijn M. Jonker. 2012. Designing interfaces for explicit preference elicitation: a user-centered investigation of preference representation and elicitation process. User Modeling and User-Adapted Interaction, 22(4):357– 397. Vivek Krishna Pradhan, Mike Schaekermann, and Matthew Lease. 2022. In search of ambiguity: A three-stage workflow design to clarify annotation guidelines for crowd workers. Front. Artif. Intell., 5:828187. Jan Preßler, Lukas Schmid, and Jörn Hurtienne. 2023. Statistically controlling for processing fluency reduces the aesthetic-usability effect. In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, CHI EA ’23, New York, NY, USA. Association for Computing Machinery. Bhaktipriya Radharapu, Kevin Robinson, Lora Aroyo, and Preethi Lahoti. 2023. AART: AI-assisted redteaming with diverse data generation for new LLMpowered applications. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track, pages 380–395, Singapore. Association for Computational Linguistics. Rolf Reber. 2011. 223Processing Fluency, Aesthetic Pleasure, and Culturally Shared Taste. In Aesthetic Science: Connecting Minds, Brains, and Experience. Oxford University Press. Rolf Reber and Norbert Schwarz. 1999. Effects of perceptual fluency on judgments of truth. 
Consciousness and Cognition: An International Journal, 8(3):338– 342. Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Behavioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4902– 4912, Online. Association for Computational Linguistics. Anna Rogers. 2021. Changing the world by changing the data. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2182–2194, Online. Association for Computational Linguistics. Paul Rottger, Bertie Vidgen, Dirk Hovy, and Janet Pierrehumbert. 2022. Two contrasting data annotation paradigms for subjective NLP tasks. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 175–190, Seattle, United States. Association for Computational Linguistics. Jonathan Rusert, Zubair Shafiq, and Padmini Srinivasan. 2022. On the robustness of offensive language classifiers. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7424–7438, Dublin, Ireland. Association for Computational Linguistics. Oscar Sainz, Jon Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, and Eneko Agirre. 2023. NLP evaluation in trouble: On the need to measure LLM data contamination for each benchmark. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 10776–10787, Singapore. Association for Computational Linguistics. Sashank Santhanam, Alireza Karduni, and Samira Shaikh. 2020. Studying the effects of cognitive biases in evaluation of conversational agents. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI ’20, page 1–13, New York, NY, USA. Association for Computing Machinery. Stephanie Schoch, Diyi Yang, and Yangfeng Ji. 2020. “this is a problem, don’t you agree?” framing and bias in human evaluation for natural language generation. In Proceedings of the 1st Workshop on Evaluating NLG Evaluation, pages 10–16, Online (Dublin, Ireland). Association for Computational Linguistics. Norbert Schwarz. 2006. on judgments of truth & beauty. Daedalus, 135(2):136–138. Melanie Sclar, Yejin Choi, Yulia Tsvetkov, and Alane Suhr. 2024. Quantifying language models’ sensitivity to spurious features in prompt design or: How i learned to start worrying about prompt formatting. In The Twelfth International Conference on Learning Representations. 1150 Andreas Sonderegger and Juergen Sauer. 2010. The influence of design aesthetics in usability testing: Effects on user performance and perceived usability. Applied Ergonomics, 41(3):403–410. Special Section: Recycling centres and waste handling – a workplace for employees and users. Poorna Talkad Sukumar and Ronald Metoyer. 2018. Towards designing unbiased replication studies in information visualization. In 2018 IEEE Evaluation and Beyond - Methodological Approaches for Visualization (BELIV), pages 93–101. 
Lichao Sun, Yue Huang, Haoran Wang, Siyuan Wu, Qihui Zhang, Chujie Gao, Yixin Huang, Wenhan Lyu, Yixuan Zhang, Xiner Li, Zhengliang Liu, Yixin Liu, Yijue Wang, Zhikun Zhang, Bhavya Kailkhura, Caiming Xiong, Chaowei Xiao, Chunyuan Li, Eric Xing, Furong Huang, Hao Liu, Heng Ji, Hongyi Wang, Huan Zhang, Huaxiu Yao, Manolis Kellis, Marinka Zitnik, Meng Jiang, Mohit Bansal, James Zou, Jian Pei, Jian Liu, Jianfeng Gao, Jiawei Han, Jieyu Zhao, Jiliang Tang, Jindong Wang, John Mitchell, Kai Shu, Kaidi Xu, Kai-Wei Chang, Lifang He, Lifu Huang, Michael Backes, Neil Zhenqiang Gong, Philip S. Yu, Pin-Yu Chen, Quanquan Gu, Ran Xu, Rex Ying, Shuiwang Ji, Suman Jana, Tianlong Chen, Tianming Liu, Tianyi Zhou, William Wang, Xiang Li, Xiangliang Zhang, Xiao Wang, Xing Xie, Xun Chen, Xuyu Wang, Yan Liu, Yanfang Ye, Yinzhi Cao, Yong Chen, and Yue Zhao. 2024. Trustllm: Trustworthiness in large language models. Hamed Taherdoost. 2019. What Is the Best Response Scale for Survey and Questionnaire Design; Review of Different Lengths of Rating Scale / Attitude Scale / Likert Scale. Post-Print hal-02557308, HAL. Simone Tedeschi, Johan Bos, Thierry Declerck, Jan Hajiˇc, Daniel Hershcovich, Eduard Hovy, Alexander Koller, Simon Krek, Steven Schockaert, Rico Sennrich, Ekaterina Shutova, and Roberto Navigli. 2023. What’s the meaning of superhuman performance in today’s NLU? In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12471– 12491, Toronto, Canada. Association for Computational Linguistics. Debby ten Hove, Terrence D. Jorgensen, and L. Andries van der Ark. 2018. On the usefulness of interrater reliability coefficients. In Quantitative Psychology, pages 67–75, Cham. Springer International Publishing. Claire I Tsai and Manoj Thomas. 2011. When does feeling of fluency matter? how abstract and concrete thinking influence fluency effects. Psychological Science, 22(3):348–354. Alexandre N. Tuch, Sandra P. Roth, Kasper Hornbæk, Klaus Opwis, and Javier A. Bargas-Avila. 2012. Is beautiful really usable? toward understanding the relation between usability, aesthetics, and affect in hci. Computers in Human Behavior, 28(5):1596– 1607. Tom Tullis and Bill Albert. 2013. Chapter 3 - planning. In Tom Tullis and Bill Albert, editors, Measuring the User Experience (Second Edition), second edition edition, Interactive Technologies, pages 41–62. Morgan Kaufmann, Boston. Amos Tversky and Daniel Kahneman. 1974. Judgment under uncertainty: Heuristics and biases. Science, 185(4157):1124–1131. Thomas Vakili and Hercules Dalianis. 2023. Using membership inference attacks to evaluate privacypreserving language modeling fails for pseudonymizing data. In Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa), pages 318–323, Tórshavn, Faroe Islands. University of Tartu Library. Chris van der Lee, Albert Gatt, Emiel van Miltenburg, and Emiel Krahmer. 2021. Human evaluation of automatically generated text: Current trends and best practice guidelines. Computer Speech & Language, 67:101151. Chris van der Lee, Albert Gatt, Emiel van Miltenburg, Sander Wubben, and Emiel Krahmer. 2019. Best practices for the human evaluation of automatically generated text. In Proceedings of the 12th International Conference on Natural Language Generation, pages 355–368, Tokyo, Japan. Association for Computational Linguistics. vocabulary.com. https://www.vocabulary.com/ dictionary/differentiate. Accessed: 2024-4-5. 
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019a. Superglue: A stickier benchmark for general-purpose language understanding systems. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019b. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Ryen W. White. 2013. Beliefs and biases in web search. In 36th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2013), Dublin, Ireland, pages 3–12. Best Paper Award. Fern K Willits, Gene L Theodori, and AE Luloff. 2016. Another look at likert scales. Journal of Rural Social Sciences, 31(3):6. Meng-Han Wu and Alexander Quinn. 2017. Confusing the crowd: Task instruction quality on amazon mechanical turk. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 5(1):206–215. 1151 A Appendix A.1 ConSiDERS-The-Human Evaluation Checklist Consistency Ill-defined or complex evaluation guidelines 1. Were expert annotators independently able to follow the annotation guidelines and achieve higher inter-annotator agreement? 2. Is there a mechanism for evaluators to report issues such as ambiguity in the guideline? High task complexity 3. Can you simplify the task by breaking down the task into easier tasks? 4. Are you asking multiple questions in a single task? If yes, split each question into separate tasks so that a) the answer to one question doesn’t bias another one, b) the annotators feel positively motivated that they can complete a given task fast. Ill-suited evaluators 5. Is there a qualification exam for evaluators? 6. Do you insert exam and attention check quality control metrics during evaluation to identify the quality of individual annotators as part of each batch of evaluation? 7. Does your task need fresh annotators? For instance, if your task is to measure how good the LLM’s instructions are, you need to make sure that the annotators are new and not used to how to complete the task. Small number of evaluators and/or test set 8. Is it possible to include more evaluators? 9. Is it possible to increase the size of the test set? Rating scales such as the Likert 10. Are you trying to measure perception or non-perception-based aspects such as truthfulness? If you are measuring nonperception-based metrics, avoid the Likert scoring, and modify the task to collect many more objective metrics such as extracting facts and verifying each fact. Inter-rater agreement 11. Which primary IRA measure did you use? 12. Does the primary measure take into account how the evaluation is designed, such as the aspects defined by Gisev et al. (2013)? 13. Do you expect your task to be inherently unbalanced? If not, do you report baseline percentage agreement to verify if the observed chance estimation lowers the overall IRA? 14. Is the item-wise IRA higher for some items and not the others? The ones with lower IRA might be pointing to higher complexity tasks. Report on the distribution of items IRA to troubleshoot the problem. 15. Are the qualifications of the human evaluators similar? If not, the disparity in the evaluators’ skills can lead to lower IRA. 16. Do you continuously measure IRA for each evaluation? 
If yes, did you observe a sudden change in IRA? It might be due to changes in annotators (e.g., adding new annotators) or changes in guidelines or tasks. New guidelines and tasks take a few iterations to settle. Scoring Criteria 1. Is the model evaluated on typical dimensions: Fluency, Coherence, Relevance, Factuality? 2. What multi-dimensional domain-specific criteria were the model evaluated on? 3. What are the responsible AI criteria the model was evaluated on? Differentiating 1. How many test cases (test examples) did you use to evaluate each of the criteria? 2. What were the end user use cases, e.g., legal document summarization, were tested? If so, report the end user test cases, their corresponding number of tests, and the scores per user case per criteria. 3. Do you suspect that some of the test cases may have already been used during LLM training? If not sure, answer not sure. If yes, where possible, report the percentage of test cases that may have been impacted. 4. Did you run robustness tests to evaluate model weaknesses? What were the scenarios covered, e.g., semantic preserving perturbations - such as sensitivity to white spaces, synonyms, etc? User Experience 1. Were rating denoising algorithms applied on the rating-based metrics, such as the Likert, to account for the cognitive uncertainty of the human evaluators? 2. Did you split the content into atomic facts for criteria, such as factual completeness and truthfulness, that require inspecting individual traits of the model-generated text? If so, detail the criteria. 3. Is the Likert scale used appropriately to measure perception-based aspects such as readability? 4. During human evaluation, were examples shuffled properly so that the evaluator cannot tell which model generated which example? 5. Did you test the actual usability of the model’s output, in the context of the end-user application (aka extrinsic evaluation)? If so, briefly describe the details. Responsible 1. Was safety testing performed? If yes, how many test cases were used and what were the test scenarios? 2. Was privacy testing performed? If yes, how many test cases were used and what were the test scenarios? 3. Was the model tested for bias? If yes, what subgroups such as gender or disability was the model evaluated for? 1152 4. How many evaluators were used? What was the diversity of evaluators or several evaluators grouped by aspects such as education qualifications, race, religion, gender, age groups, and nationalities? Scalability 1. Were parts of the human evaluation automated? If yes, which aspects did you attempt to automate, and what types of automated metrics were used? 2. If you used LLM-based automation, please provide the details of the LLM and prompts. 3. Did you optimize the usability aspects for the annotators to reduce annotation time? If so, provide a summary of how this was achieved. A.2 Meta-analysis of papers available in ACL Anthology A.2.1 Papers with “human” and “eval” in either abstract or title We search for abstracts or titles containing keywords “human” and “eval”, with ≈3900 papers, where 900 of those published in 2023 as detailed in Figure 1. (title|abstract=human and title|abstract=eval) Figure 1: Yearly publications of papers with keywords “human” and “eval” in title/abstract A.2.2 Papers with “human” and “eval” in the title only We search for titles containing keywords “human” and “eval”, this results in around 238 papers as shown in Figure 2, reducing the number from 3900 when we include abstracts as seen above. 
( (title=human and title=eval) ) A.2.3 Papers with phrase “human eval” in either abstract or title We search for abstracts or titles containing phrases human evaluation, and human - evaluation. Around 1300 papers were published. ( (title|abstract =[h|H]uman\s?-?\s?[e|E]val) ) A.2.4 Papers with keywords “human” and “eval” in title/abstract and usability keywords in either abstract or title We search for titles or abstracts that mention human evaluation and contain usability keywords (usability, hci, user experience, and human computer). This results in 172 papers. The problem with this query is that it is too noisy as a result of looking for “human” and “eval” in either the title or abstract. For instance, 1153 Figure 2: Yearly publications of papers with keywords “human” and “eval” in title while results contain some human evaluation, the paper’s primary focus is not on the design of human evaluation itself. Code Listing 1: Jabref search query for human eval and usablity ( (title|abstract=human and title|abstract=eval) and ( (title|abstract=usability) or (title|abstract =[uU]ser\s[eE]xperience) or (title|abstract=user and title|abstract=studies) or (title|abstract=hci) or (title|abstract=human and title|abstract=computer) ) ) A.2.5 Papers with phrase “human eval” in either abstract/title and usability keywords Code Listing 2: Jabref search query for phrase human eval in title and usablity in title or abstract ( ( (title|abstract =[h|H]uman\s?-?\s?[e|E]val) ) and ( (title|abstract=usability) or (title|abstract =[uU]ser\s[eE]xperience) or (title|abstract=user and title|abstract=studies) or (title|abstract=hci) or (title|abstract=human and title|abstract=computer) ) ) 1154 Figure 3: Yearly publications of papers with keywords “human” and “eval” in title and usability keywords in the abstract/title A.2.6 Papers with keywords “human” and “eval” in title and usability keywords Out of 238 paper keywords human and eval in the title “(title=human and title=eval )", only 16 mention usability-related keywords in the title or abstract ACL. 
Code Listing 3: Jabref search query for phrase human in title and usablity in title or abstract ( ( (title=human and title=eval ) ) and ( (title|abstract=usability) or (title|abstract=user and title|abstract=research ) or (title|abstract=user and title|abstract=experience ) or (title|abstract=user and title|abstract=studies) or (title|abstract=hci) or (title|abstract=human and title|abstract=computer) ) ) A.2.7 Paper that attempt to scale or optimise human evaluation Code Listing 4: Jabref search query for phrase human in title and cost / scale in title or abstract ( ( (title=human and title=eval ) ) and ( (title|abstract=scal) or (title|abstract=cost ) or (title|abstract=time ) ) ) 1155 Code Listing 5: Jabref search query for auto eval human in title ( (title=auto and title=eval ) ) A.2.8 Paper that attempt to mention responsible AI related terms in their title Code Listing 6: Jabref search RAI terms in title ( ( (title=responsib) or (title=fair) or (title=truth) or (title=trust) or (title=privacy) or (title=bias) or (title=safe) ) and ( not (title=inductive and title=bias)) ) Code Listing 7: Jabref search Human eval in title and RAI in asbtract (title=human and title=eval ) and ( ( (abstract|title=responsib) or (abstract|title=fair) or (abstract|title=truth) or (title=trust) or (abstract|title=privacy) or (abstract|title=bias) or (abstract|title=safe) ) and ( not (abstract|title=inductive and abstract|title=bias)) ) A.3 Krippendorff’s Toy example In this example, we demonstrate with 6 items, 6 raters where each item is rated exactly by 3 raters how simply changing 1 label (row 4, col 2) the Krippendorff’s-α drops from 0.7 to 0.24, while percentage agreement only drops from 94.4 to 88.9. In scenario 3, we again just change 1 label, where the % agreement remains the same at 88%, but Krippendorff’s-α drops from 0.24 to -0.06. import numpy as np import krippendorff from collections import Counter def compute_agreement(str_reliability_data): reliability_data = [] for coder in str_reliability_data: reliability_data.append([np.nan if l == "*" else int(l) for l in coder.split ()]) compute_krippendorff(reliability_data) compute_percentage_agreement(reliability_data) def compute_krippendorff(reliability_data): agreement = krippendorff.alpha(reliability_data=reliability_data , level_of_measurement="nominal") 1156 print(np.matrix(reliability_data)) print("Krippendorff 's alpha for nominal metric: ", round(agreement , 2)) def compute_percentage_agreement(reliability_data): """ Percentage agreement , only for binary values. 
""" item_wise_data = np.array(reliability_data).T item_wise_agreement = [] print("** Percentage agreement **") for item in item_wise_data: row_labels = [l for l in item if not np.isnan(l)] highest_frequency = Counter(row_labels).most_common(1)[0][1] item_percentage = round(100 * highest_frequency / len(row_labels), 2) item_wise_agreement.append(item_percentage) print(row_labels , item_percentage) print("Percentage agreement :", np.mean(item_wise_agreement)) print("*** Scenario 1: Baseline ****") compute_agreement(str_reliability_data=( "1 0 * * * *", # coder A "1 * 1 1 1 *", # coder B "1 0 1 1 1 *", # coder C "* 0 1 1 * 1", # coder D "* * * * 1 1", # coder E "* * * * * 0", # coder F )) print () print("*** Scenario 2: Change just 1 label , results in much lower alpha ****") compute_agreement(str_reliability_data=( "1 0 * * * *", # coder A "1 * 1 1 1 *", # coder B "1 0 1 1 1 *", # coder C "* 1 1 1 * 1", # coder D "* * * * 1 1", # coder E "* * * * * 0", # coder F )) print () print("*** Scenario 3: Same % agreement as Scenario 2, but much lower alpha ****") compute_agreement(str_reliability_data=( "1 1 * * * *", # coder A "1 * 1 1 1 *", # coder B "1 0 1 1 1 *", # coder C "* 1 1 1 * 1", # coder D "* * * * 1 1", # coder E "* * * * * 0", # coder F )) This produces the following output *** Scenario 1: Baseline **** [[ 1. 0. nan nan nan nan] [ 1. nan 1. 1. 1. nan] [ 1. 0. 1. 1. 1. nan] [nan 0. 1. 1. nan 1.] [nan nan nan nan 1. 1.] [nan nan nan nan nan 0.]] Krippendorff 's alpha for nominal metric: 0.7 ** Percentage agreement ** [1.0, 1.0, 1.0] 100.0 [0.0, 0.0, 0.0] 100.0 [1.0, 1.0, 1.0] 100.0 [1.0, 1.0, 1.0] 100.0 [1.0, 1.0, 1.0] 100.0 1157 [1.0, 1.0, 0.0] 66.67 Percentage agreement : 94.445 *** Scenario 2: Change just 1 label , results in much lower alpha **** [[ 1. 0. nan nan nan nan] [ 1. nan 1. 1. 1. nan] [ 1. 0. 1. 1. 1. nan] [nan 1. 1. 1. nan 1.] [nan nan nan nan 1. 1.] [nan nan nan nan nan 0.]] Krippendorff 's alpha for nominal metric: 0.24 ** Percentage agreement ** [1.0, 1.0, 1.0] 100.0 [0.0, 0.0, 1.0] 66.67 [1.0, 1.0, 1.0] 100.0 [1.0, 1.0, 1.0] 100.0 [1.0, 1.0, 1.0] 100.0 [1.0, 1.0, 0.0] 66.67 Percentage agreement : 88.89 *** Scenario 3: Same % agreement as Scenario 2, but much lower alpha **** [[ 1. 1. nan nan nan nan] [ 1. nan 1. 1. 1. nan] [ 1. 0. 1. 1. 1. nan] [nan 1. 1. 1. nan 1.] [nan nan nan nan 1. 1.] [nan nan nan nan nan 0.]] Krippendorff 's alpha for nominal metric: -0.06 ** Percentage agreement ** [1.0, 1.0, 1.0] 100.0 [1.0, 0.0, 1.0] 66.67 [1.0, 1.0, 1.0] 100.0 [1.0, 1.0, 1.0] 100.0 [1.0, 1.0, 1.0] 100.0 [1.0, 1.0, 0.0] 66.67 Percentage agreement : 88.89 A.4 Papers in medical journals that use the Likert scale for evaluating ChatGPT We searched medical journals as follows and manually selected 19 (not cherry-picking) papers that used human evaluation the Likert scale to assess ChatGPT. We find that the Likert scale was used to evaluate factual completeness, saliency correctness in 9/19 papers, 4/19 of papers use the Likert scale appropriately to measure user perception and for the rest of 6/19 we were not sure of what the criteria meant, for details see Table 3. • Nature & sub-journals: Performed search within the nature website using keywords ⟨gpt medical . likert⟩ • Lancet: Performed search within the Lancet website using keywords ⟨gpt likert⟩. • JMIR: Searched for ⟨gpt likert, jmir⟩on Google Scholar and used the first 2 pages of results to filter relavant context. 
1158 Title Relevant quote from paper Dimensions Likert used for factual correctness / completness Nature & Subjournals Harnessing ChatGPT and GPT4 for evaluating the rheumatology questions of the Spanish access exam to specialized medical training The medical experts evaluated the clinical reasoning of the chatbots followed in each of the responses. Their evaluation was based on a 1–5 scale, where a score of 5 indicates that the reasoning was entirely correct and flawless, while a score of 1 signifies that the reasoning was inconsistent or contained significant errors. 1) overall correctness. Yes A pilot study on the efficacy of GPT-4 in providing orthopedic treatment recommendations from MRI reports [in Table 2] Likert scales used for (a) Treatment recommendations are clinically useful and relevantTreatment recommendations are clinically useful and relevant (b) Treatment recommendations are based on scientific and clinical evidence (c) The overall quality of the treatment recommendations 1) overall quality; 2) based on evidence; 3) useful and relevant; 4) up-todate; 5) consistent. Yes Testing the limits of natural language models for predicting human language judgements No details mentioned in the paper 1) overall quality Not sure Availability of ChatGPT to provide medical information for patients with kidney cancer The SERVQUAL model is a research tool that assesses how five dimensions—tangibility, reliability, responsiveness, assurance, and empathy—influence customer perception. The answers to the questions are presented in a five-point Likert scale. SERVQUAL has mainly been used to evaluate the quality of medical services in hospitals and healthcare institutions. 1) tangibility; 2) reliability; 3) responsiveness; 4) assurance; 5) empathy. Yes Explaining machine learning models with interactive natural language conversations using TalkToModel We evaluated the following statements along the 1–7 Likert scale at the end of the survey: Easiness: I found the conversational interface easier to use than the dashboard interface; Confidence: I was more confident in my answers using the conversational interface than the dashboard interface; Speed: I felt that I was able to more rapidly arrive at an answer using the conversational interface than the dashboard interface; Likeliness to use: based on my experience so far with both interfaces, I would be more likely to use the conversational interface than the dashboard interface in the future. 1) Easiness; 2) Confidence; 3) Speed; 4) Likeliness to use. No Evaluating large language models on medical evidence summarization We systematically evaluate the quality of generated summaries via human evaluation. We propose to evaluate summary quality along several dimensions: (1) Factual consistency; (2) Medical harmfulness; (3) Comprehensiveness; and (4) Coherence. These dimensions have been previously identified and serve as essential factors in evaluating the overall quality of generated summaries. ... The order in which the summaries are presented is randomized to minimize potential order effects during the evaluation process. We utilize a 5-point Likert scale for the evaluation of each dimension. 1) Factual consistency; 2) Medical harmfulness; 3) Comprehensiveness; 4) Coherence. 
Yes A large-scale comparison of human-written versus ChatGPT-generated essays The questionnaire covers the seven categories relevant for essay assessment shown below: Topic and completeness; Logic and composition; Expressiveness and comprehensiveness; Language mastery; Complexity; Vocabulary and text linking; Language constructs. ... These categories are used as guidelines for essay assessment 44 established by the Ministry for Education of Lower Saxony, Germany. For each criterion, a seven-point Likert scale with scores from zero to six is defined, where zero is the worst score (e.g. no relation to the topic) and six is the best score (e.g. addressed the topic to a special degree). The questionnaire included a written description as guidance for the scoring. 1) Topic and completeness; 2) Logic and composition; 3) Expressiveness and comprehensiveness; 4) Language mastery; 5) Complexity; 6) Vocabulary and text linking; 7) Language constructs. Yes People devalue generative AI’s competence but not its advice in addressing societal and personal challenges Participants were asked to rate the author competence on three items: The author is knowledgeable of the subject; The text is credible; I intend to follow the provided recommendations. 1) knowledgeable 2) credible 3) willing to follow. Not sure Quality of information and appropriateness of ChatGPT outputs for urology patients The responses generated by ChatGPT were then compared to those provided by a board-certified urologist who was blinded to ChatGPT’s responses and graded on a 5-point Likert scale based on accuracy, comprehensiveness, and clarity as criterias for appropriateness. 1) accuracy; 2) comprehensiveness; 3) clarity. Yes Table 2: Papers containing “GPT”, “medical” and “Likert” from Nature and Subjournals. 1159 Title Relevant quote from paper Dimensions Likert used for factual correctness / completness Lancet Assessing the potential of GPT4 to perpetuate racial and gender biases in health care: a model evaluation study Factual correctness and humanness of letters were assessed by two independent clinicians using a Likert scale ranging from 0 to 10, with 0 representing completely incorrect or inhuman and 10 representing completely correct and human. two dimensions: correctness and humanness. Yes Journal of Medical Internet Research (JMIR) Putting ChatGPT’s Medical Advice to the (Turing) Test: Survey Study Participants were also asked about their trust in chatbots’ functions in patient-provider communication, using a Likert scale from 1-5. ... On average, responses toward patients’ trust in chatbots’ functions were weakly positive (mean Likert score 3.4 out of 5), with lower trust as the health-related complexity of the task in the questions increased. 1) trustworthy Not sure A Generative Pretrained Transformer (GPT)–Powered Chatbot as a Simulated Patient to Practice History Taking: Prospective, Mixed Methods Study To assess how our participants perceived the simulated patient, we used the Chatbot Usability Questionnaire (CUQ). This 16-item questionnaire measures the personality, user experience, error management, and onboarding of a chatbot and has recently been validated. ... For the CUQ, we provided relative numbers of Likert categories. 
1) Personality; 2) User Experience; 3) Error Handling; 4) Onboarding; 5) Other Yes Health Care Trainees’ and Professionals’ Perceptions of ChatGPT in Improving Medical Knowledge Training: Rapid Survey Study The questionnaire was designed according to the Kirkpatrick model, with four dimensions to understand the thoughts of the students: (1) perceived knowledge acquisition (KA), (2) perceived training motivation (TM), (3) perceived training effectiveness (TE), and (4) perceived training satisfaction (TS). Three experts reviewed and edited the questionnaire, which has 12 questions, including 1 open-ended question. A 5-point Likert scale was adopted for all questionnaire items (from 1=strongly disagree to 5=strongly agree). 1) perceived knowledge acquisition (KA); 2) perceived training motivation (TM); 3) perceived training effectiveness (TE); 4) perceived training satisfaction (TS). No Assessing Health Students’ Attitudes and Usage of ChatGPT in Jordan: Validation Study The survey tool was created based on the TAM framework. It comprised 13 items for participants who heard of ChatGPT but did not use it and 23 items for participants who used ChatGPT. ... Each item was evaluated on a 5-point Likert scale with the following responses: strongly agree scored as 5, agree scored as 4, neutral/no opinion scored as 3, disagree scored as 2, and strongly disagree scored as 1. The scoring was reversed for the items implying a negative attitude toward ChatGPT. 13 items No Increasing Realism and Variety of Virtual Patient Dialogues for Prenatal Counseling Education Through a Novel Application of ChatGPT: Exploratory Observational Study Sentences were then appraised by a neonatologist for realism, relevance, and usability for virtual prenatal counseling simulations. Each metric used a 5-point Likert scale, ranging from 1 (the lowest) to 5 (the highest). 1) realism; 2) relevance; 3) usability. Not sure ChatGPT Versus Consultants: Blinded Evaluation on Answering Otorhinolaryngology Case–Based Questions The questions were answered by both ORL consultants and ChatGPT 3. ORL consultants rated all responses, except their own, on medical adequacy, conciseness, coherence, and comprehensibility using a 6-point Likert scale. 1) medical adequacy; 2) conciseness; 3) coherence; 4) comprehensibility Not sure Evaluation of GPT-4’s Chest XRay Impression Generation: A Reader Study on Performance and Perception In a blind randomized reading, 4 radiologists rated the impressions based on “coherence”, “factual consistency”, “comprehensiveness”, and “medical harmfulness”, which were used to generate a radiological score based on a 5-point Likert scale of each dimension. 1) coherence; 2)factual consistency; 3) comprehensiveness; 4) medical harmfulness. Yes Investigating the Impact of User Trust on the Adoption and Use of ChatGPT: Survey Analysis We developed 2 latent constructs based on the question (predictors): Trust and Intent to Use. Participant responses to all the questions were captured using a 4-point Likert scale ranging from 1=strongly disagree to 4=strongly agree. The Actual Use factor, the outcome variable, was captured using a single-item question capturing the frequency of use ranging from 1=once a month to 4=almost every day. 1) trust; 2) intent to use. 
Not sure Exploring the Possible Use of AI Chatbots in Public Health Education: Feasibility Study Medical students’ feedback was collected anonymously at the end of the training experience through a 3-item questionnaire with a Likert scale (1 to 10) regarding their general satisfaction, willingness to repeat the experience, and ease of use of the tool. In particular, the scale of the 3 items can be translated as follows:; Item 1: 1=“dissatisfied with the experience”, 10=“very satisfied”; Item 2: 1=“I would not repeat the experience”, 10=“I would definitely repeat the experience”; Item 3: 1=“the tool is too difficult to be used”, 10=“the tool was very easy to be used”. 1) satisfy; 2) intent to use; 3) easy to use. No Table 3: Papers containing “GPT” and “Likert” from Lancet and JMIR. 1160
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11724–11735 August 11-16, 2024 ©2024 Association for Computational Linguistics

An Entropy-based Text Watermarking Detection Method

Yijian LU1, Aiwei Liu2, Dianzhi Yu1, Jingjing Li1, Irwin King1*
1The Chinese University of Hong Kong 2Tsinghua University
[email protected], [email protected], [email protected], [email protected], [email protected]
*Corresponding Author. 1https://github.com/luyijian3/EWD 2https://github.com/THU-BPM/MarkLLM

Abstract

Text watermarking algorithms for large language models (LLMs) can effectively identify machine-generated texts by embedding and detecting hidden features in the text. Although current text watermarking algorithms perform well in most high-entropy scenarios, their performance in low-entropy scenarios still needs improvement. In this work, we argue that the influence of token entropy should be fully considered in the watermark detection process, i.e., the weight of each token during watermark detection should be customized according to its entropy, rather than setting the weights of all tokens to the same value as in previous methods. Specifically, we propose Entropy-based Text Watermarking Detection (EWD), which gives higher-entropy tokens higher influence weights during watermark detection so as to better reflect the degree of watermarking. Furthermore, the proposed detection process is training-free and fully automated. Our experiments demonstrate that EWD achieves better detection performance in low-entropy scenarios, and that the method is general and can be applied to texts with different entropy distributions. Our code and data are available1. Additionally, our algorithm can be accessed through MarkLLM (Pan et al., 2024)2.

1 Introduction

The rapid advancements in large language models (LLMs) have enabled them to generate high-quality outputs that are indistinguishable from human writing and to achieve strong performance on various real-world generation tasks. However, this gives rise to the potential risk of misuse. Specifically, LLM-generated assignments, including essays and code, pose a threat to academic integrity. Misusing LLMs to generate fake news can also lead to negative societal consequences (Megías et al., 2021). Therefore, algorithms capable of effectively identifying machine-generated content have become essential.

[Figure 1 panels: a watermarked high-entropy text (z-score: 9.29) next to a watermarked low-entropy code snippet (z-score: 2.54), with tokens tagged as high-, mid-, or low-entropy, and a bottom panel plotting the green token ratio within different entropy ranges.]

Figure 1: Compared with watermarked texts consisting mostly of high-entropy tokens, watermarked code consisting mostly of low-entropy tokens contains significantly fewer green tokens, resulting in a small detection z-score. Furthermore, the bottom of the figure shows that the green token ratio decreases as token entropy decreases.

Text watermarking algorithms can effectively alleviate the problem of LLM misuse by embedding and detecting hidden features in LLM-generated content. For example, the watermarking algorithm proposed by Kirchenbauer et al.
(2023a) (KGW) divides the LLM's vocabulary into two lists, green and red, and increases the logits of tokens in the green list by a positive constant (the watermark strength). The modified logits bias the LLM to sample most tokens from the green list. As a result, watermarked text containing a dominant number of green tokens will be recognized by the watermark detector, while human-written text will not. However, successful watermark generation is largely tied to token entropy during text generation. Under the watermark algorithm, the probability of sampling a token from the green list is proportional to its entropy, meaning that low-entropy tokens produce far fewer green tokens than high-entropy tokens, as illustrated in Figure 1. In that case, green tokens will occupy only about half of a watermarked low-entropy text, resulting in a smaller detection score and the text being mistakenly labeled as human-written. Furthermore, to preserve the quality of the output text, the watermark strength in the generator cannot be enlarged excessively to forcefully modify the logits and sample more tokens from the green list (Tu et al., 2023). Consequently, a crucial problem is how to improve the watermark detector so that low-entropy watermarked texts with a limited number of green tokens can still be identified successfully.

Lee et al. (2023) propose a selective watermark detector named SWEET to better reflect the watermark level in low-entropy texts. Specifically, they only detect tokens that exceed a certain entropy threshold, thus excluding tokens that are difficult for the watermark generator to affect. However, this method has three weaknesses. First, it requires a manually set entropy threshold obtained by analyzing samples of human-written code. Second, it does not consider the distribution of token entropy; that is, it treats all tokens that exceed the threshold as the same, while in fact the entropy of these tokens may vary widely. Third, the performance gain from the SWEET detector is limited, and there is still significant room for improvement.

To address the low-entropy watermark detection problem better than previous methods, we introduce an Entropy-based Text Watermarking Detection (EWD) algorithm that fully considers the role of entropy in the detection of a given text. Specifically, we propose that the influence of a token during detection should be proportional to its entropy. Furthermore, to fully reflect this positive relation between token entropy and detection influence, we utilize a monotonically increasing and continuous function to generate influence weights from token entropy. By creating a gap between the weights of high-entropy and low-entropy tokens, we ensure that if the watermark generation algorithm can hardly affect a low-entropy token, the state of that token has minimal impact on the detection result. Overall, our method has three major benefits. First, it requires no changes to the watermark generation process. Second, compared to SWEET, our method is more efficient, as the acquisition of token weights is an independent and training-free process that avoids the need for a human-written dataset or any manual operations. Third, the proposed detection scheme can be applied to machine-generated texts of all entropy levels, not solely low-entropy texts. From the experiments, we demonstrate that the detection accuracy of EWD in low-entropy scenarios surpasses existing baselines.
Furthermore, EWD exhibits versatility in detecting high-entropy texts, as well as robustness against the back-translation watermark-removal attack. In summary, the contributions of our work are as follows:

• We propose an entropy-based text watermarking detection method called EWD.
• We provide a theoretical analysis of the detection accuracy in the low-entropy setting.
• We show that EWD not only improves detection accuracy in the low-entropy scenario, but also provides performance comparable to other methods on high-entropy texts.

2 Related Work

In this section, we review existing work on text watermarking algorithms. Overall, text watermarking can be divided into two categories: watermarking for existing texts and watermarking for LLMs (Liu et al., 2023).

Text Watermarking for Existing Texts. A watermark message can be embedded into existing text through a series of modifications, such as lexical substitution (Munyer and Zhong, 2023; Topkara et al., 2006; Yang et al., 2022; Yoo et al., 2023a; Yang et al., 2023). These methods typically replace words with alternatives to place the watermark. For example, Topkara et al. (2006) used WordNet (Fellbaum, 1998) as their dictionary, while Munyer and Zhong (2023) utilized a pretrained Word2Vec model to select the alternative words. To ensure that the replaced words cause minimal changes to the text semantics, Yang et al. (2022) proposed a BERT-based infill model that generates replacements with regard to the sentence meaning. To improve robustness against possible corruptions, Yoo et al. (2023a) fine-tuned the infill model to generate consistent word distributions given several corrupted texts of the same origin. However, the level of modification to an existing text is often limited in order to protect the quality of the watermarked output. Hence, watermarking for existing text is less favored due to its limited capacity and effectiveness.
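To illustrate how such substitution-based schemes embed and detect a watermark, here is a minimal, hypothetical Python sketch (not any of the published methods above): it deterministically picks among synonym candidates by hashing the preceding word with a secret key, and detection measures how often the text agrees with the key-selected choices. The toy synonym table, the key, and the agreement statistic are illustrative assumptions.

import hashlib

# Toy synonym table; a real scheme would draw candidates from a lexical
# resource such as WordNet or a neural infill model, as cited above.
SYNONYMS = {
    "big": ["big", "large"],
    "quick": ["quick", "fast"],
    "smart": ["smart", "clever"],
}
KEY = "secret-key"  # illustrative watermark key


def preferred_choice(prev_word, candidates, key):
    """Hash the previous word together with the key to pick one candidate."""
    digest = hashlib.sha256(f"{key}|{prev_word}".encode()).hexdigest()
    return candidates[int(digest, 16) % len(candidates)]


def embed(text, key=KEY):
    """Replace substitutable words with the key-selected synonym."""
    words = text.split()
    out = []
    for i, w in enumerate(words):
        cands = SYNONYMS.get(w.lower())
        if cands and i > 0:
            out.append(preferred_choice(words[i - 1], cands, key))
        else:
            out.append(w)
    return " ".join(out)


def detect(text, key=KEY):
    """Fraction of substitutable words that match the key-selected choice."""
    words = text.split()
    hits, total = 0, 0
    for i, w in enumerate(words):
        for cands in SYNONYMS.values():
            if w.lower() in cands and i > 0:
                total += 1
                hits += int(w.lower() == preferred_choice(words[i - 1], cands, key))
                break
    return hits / total if total else 0.0


if __name__ == "__main__":
    original = "the quick dog made a big jump and a smart move"
    marked = embed(original)
    print(marked)           # synonyms flipped toward the key-selected choices
    print(detect(marked))   # 1.0 for this watermarked toy text
    print(detect(original)) # around 0.5 on average for unmarked text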
Also, to enhance the unforgeability of text watermarks, Liu et al. (2024a) introduced a neural network for text generation and detection. In this work, we focus only on the detection of low-entropy texts. Lee et al. (2023) designed a selective watermark generation and detection approach for low-entropy code generation. However, it requires a human-written code dataset to manually determine the entropy threshold. To sum up, there is still no effective method for the low-entropy detection problem.

3 Preliminaries

3.1 Text Generation Process of LLMs

Here we introduce the necessary concepts used in this paper. An LLM $M$ takes a prompt as input and outputs subsequent tokens as the corresponding response. Specifically, we denote the initial prompt as $x_{prompt}$. Then, at the $l$-th step, the input to the LLM is $x_{prompt}$ together with the already-generated tokens $T_{:(l-1)}$. The LLM generates a logits distribution $P_{LLM}(x_{prompt}, T_{:(l-1)})$, consisting of $P_{LLM}(v \mid x_{prompt}, T_{:(l-1)})$ over each token $v$ of its vocabulary, based on the joint input.

3.2 Text Watermarking

The general watermarking scheme used in this paper is KGW, which modifies the logits generation process of an original LLM $M$ to obtain the watermarked LLM, denoted as $\tilde{M}$. Adding a small value (the watermark strength δ) to the logits of green-list tokens makes the LLM prefer those tokens in the output text, so the modification is likely to produce a green token. The detection algorithm calculates the watermark level using the following equation,
$z = (|s|_G - \gamma|T|) / \sqrt{\gamma(1-\gamma)|T|}$, (1)
where $|s|_G$ is the number of green tokens in text $T$ and γ is the green-list ratio in the model vocabulary. The algorithm outputs Detect(T) = 1 if the value is greater than a given threshold, and Detect(T) = 0 otherwise.

3.3 Token Entropy and Low-entropy Scenario

The entropy of a token measures how spread out the logits generated by the LLM are when sampling that token. In this paper, we follow the spike entropy from KGW,
$SE(k) = \sum_{v} \frac{p_v}{1 + \tau p_v}$, (2)
where $p_v$ is the logits value of each token $v$ in the vocabulary when sampling the token $k$, and τ is a scalar. The entropy reaches its minimal value if the logits concentrate on a single token and its maximal value given a uniform distribution. The watermark generation process is more effective on a high-entropy token, where the logits distribution is nearly uniform, so that the modification can easily produce a green-list token. On the other hand, a low-entropy token is almost deterministic and hard to affect. Therefore, the green-list ratio for a low-entropy token is much lower, resulting in fewer green tokens in the output. Consequently, low-entropy watermarked texts pose a threat to detection accuracy, as they are easily identified as human-written.

4 Proposed Method

In this section, we first explain the motivation by analysing the watermark generation effect on low-entropy tokens, and then provide a detailed explanation of the proposed EWD.

4.1 Motivation

Entropy plays an important role in the watermark generation process, yet its role in the detection process has been underestimated. In the KGW generator, a positive constant is added to the logits of all green-list tokens. The spread of the logits, represented by the entropy, matters significantly during this process.
When the logits resemble a uniform distribution, the logits modification will very likely produce a green-list token. However, when the logits concentrate on a single token, the watermark embedding process hardly affects the token sampling, and a green token is produced with a much lower probability, close to the green-list ratio γ. Since entropy determines the watermark generation process, it should also affect watermark detection. However, in the KGW detector, all tokens have the same influence, no matter how their entropy varies.

4.2 Entropy-based Text Watermarking Detection

Our proposed EWD aims to fully consider the role of entropy in the watermark detection process. Specifically, we first propose a positive influence-entropy relation and then utilize entropy for detection weight customization.

The influence of a token during detection should be proportional to its entropy. As explained in the previous sections, a high-entropy token is easier for the watermark generator to affect, whereas a low-entropy token is not. Since a low-entropy token is hard to affect, whether it is sampled from the green list or not should have a limited influence on the overall watermark level. Similarly, when an easy-to-affect (high-entropy) token is not green, it should contribute more to showing that the text $T$ is not watermarked. Therefore, we introduce a positive influence-entropy relation to determine how much a token $t$ can influence the detection result:
$I(\mathrm{Detect}(T) = \mathbb{1}(t \in G) \mid t) \propto SE(t)$, (3)
where $T$ is the given text and $G$ is the green list.
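As a concrete illustration of these quantities, the following minimal Python sketch computes the spike entropy of Eq. (2) for a next-token distribution and maps it to a detection weight in the spirit of Eq. (4) below, taking f to be the identity so that W = SE - C0. The vocabulary size, the value of τ, and the example distributions are illustrative assumptions rather than the paper's exact configuration.

import numpy as np


def spike_entropy(probs, tau=1.0):
    """Spike entropy of Eq. (2): sum_v p_v / (1 + tau * p_v).

    It is maximal for a uniform distribution and minimal when the
    probability mass concentrates on a single token.
    """
    probs = np.asarray(probs, dtype=float)
    return float(np.sum(probs / (1.0 + tau * probs)))


def linear_weight(se, c0):
    """A monotonically increasing weight with f taken to be the identity."""
    return se - c0


if __name__ == "__main__":
    vocab_size = 8
    tau = 1.0

    uniform = np.full(vocab_size, 1.0 / vocab_size)        # high-entropy step
    peaked = np.array([0.93] + [0.01] * (vocab_size - 1))  # low-entropy step

    # C0 is the minimal spike entropy, attained by a one-hot distribution.
    c0 = spike_entropy(np.eye(1, vocab_size, 0).ravel(), tau)

    for name, dist in [("uniform", uniform), ("peaked", peaked)]:
        se = spike_entropy(dist, tau)
        print(f"{name:8s} spike entropy = {se:.3f}, weight = {linear_weight(se, c0):.3f}")

In this toy setting the nearly uniform step receives several times the weight of the peaked step, which is exactly the gap between high-entropy and low-entropy tokens that the detector relies on.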
Table 1 summarizes the weight features of different methods and how well they reflect the proposed influence-entropy relation. Pseudocode of our proposed detection method is provided in Algorithm 1. We first acquire the text entropy by computing the model logits of each token. The next step is to determine the influence weight of each token, which is the output of a designed function ComputeWeight with the entropy values as inputs. After obtaining the entropy-based weights for each token, we apply the standard detection procedure of Kirchenbauer et al. (2023a) to determine the group of green tokens, i.e., computing the green lists using the detection key and previously generated tokens. Finally, we sum up the weights of the green tokens, |s|_G, and calculate the z-score by

z' = \frac{|s|_G - \gamma \sum_{i=m}^{|T|-1} W_i}{\sqrt{\gamma (1-\gamma) \sum_{i=m}^{|T|-1} W_i^2}},  (5)

given |s|_G and the weight W_i of each detected token in text T. The detector gives a positive result if the z-score is greater than the given threshold. Notably, the proposed algorithm is a fully automated process that eliminates the need for the manual operations of the SWEET method.

5 Theoretical Analysis

In this section, we theoretically analyze the Type-I and Type-II errors of our proposed detection method. Specifically, we compare EWD with two previous detection methods, KGW and SWEET. To do so, we adopt the same steps as in KGW (Kirchenbauer et al., 2023a). Each token in both watermarked and non-watermarked texts can either be green or red, so we can model it as a binomial distribution. While Zhao et al. (2023) set the window size in the watermark generator to zero (one universal green list for all tokens), which exhibits strong token dependence, i.e., the same tokens in the text have the same color, in our case, where a nonzero window size is adopted, the token dependence is significantly weakened. For example, the same token with a different prefix may exhibit a different sampling result. Therefore, to simplify and facilitate the theoretical analysis, we make the reasonable assumption that the sampling of each token is independent. Hence, the sum of green tokens in the text can be approximated by a normal distribution N(µ, σ²), with a different mean µ and variance σ² for each detection scheme. Finally, we calculate and compare the probability of the sum surpassing or falling below the detection threshold |s̃|_G, which is the |s|_G value in each detection function at a fixed z.

5.1 Type-I Error

Type-I error measures the likelihood that a human-written text is incorrectly identified as being generated by a watermark algorithm. In a human-written text T, each token is independent of the watermark algorithm, so its probability of being included in the green list is γ. Therefore, KGW's sum of green tokens is approximated by N(γ|T|, γ(1−γ)|T|). For SWEET, which only considers high-entropy tokens, this distribution is modelled as N(γ|T̃|, γ(1−γ)|T̃|), where T̃ is the text excluding low-entropy tokens. Our method EWD assigns weights to tokens based on entropy, leading to a weighted binomial distribution. We can also approximate it by a normal distribution with mean µ = γ Σ_{i=m}^{|T|−1} W_i and variance σ² = γ(1−γ) Σ_{i=m}^{|T|−1} W_i². The theoretical Type-I errors of all detection methods are the same: when the detection threshold is set to 2, the probability of |s|_G under each method surpassing its detection threshold |s̃|_G is identically 2.28%.
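For reference, the scoring step of Algorithm 1 (Eq. (5)) can be sketched as follows. This is a simplified illustration rather than the released implementation: it assumes the per-token spike entropies and green-list membership have already been computed (the keyed hashing of the previous m tokens is abstracted away) and uses the linear weight function as an example.

```python
import numpy as np

def ewd_z_score(green_mask: np.ndarray,
                spike_entropies: np.ndarray,
                gamma: float,
                c0: float,
                m: int = 1) -> float:
    """Weighted z-score of Eq. (5).

    green_mask[i]      -- 1 if the i-th token falls in its green list, else 0
    spike_entropies[i] -- spike entropy of the i-th token under the language model
    The first m tokens are skipped because they lack a full prefix window.
    """
    w = spike_entropies[m:] - c0                  # linear entropy-based weights, Eq. (4)
    s_g = float(np.sum(w * green_mask[m:]))       # |s|_G: summed weights of green tokens
    mean = gamma * np.sum(w)
    std = np.sqrt(gamma * (1.0 - gamma) * np.sum(w ** 2))
    return (s_g - mean) / std

# Toy inputs (random placeholders, not real detector outputs).
rng = np.random.default_rng(0)
se = rng.uniform(0.55, 0.95, size=200)            # per-token spike entropies
green = rng.integers(0, 2, size=200)              # green-list membership per token
z = ewd_z_score(green, se, gamma=0.5, c0=0.5)
print("watermarked" if z > 4.0 else "not watermarked")   # 4.0 is an assumed threshold
```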
5.2 Type-II Error

Type-II error measures the likelihood that a machine-generated text is incorrectly identified as human-written. For the analysis, we assume that a sequence generated by the watermark algorithm has a length of |T| = 200 tokens and that the watermark algorithm has hyper-parameters γ = 0.5 and δ = 2. Originally, KGW samples over 500 watermark-generated news scripts and reports an average spike entropy of 0.807. In this section, we focus specifically on cases with a much lower average spike entropy. Based on our data, we approximate the distribution of spike entropy in low-entropy watermarked texts with a power-law distribution f(x) = a x^{a−1}, where the parameters are a = 0.106, loc = 0.566, scale = 0.426. Under this distribution, the average spike entropy is 0.608. The following probability from KGW gives a lower bound on the probability of a token k being sampled from the green list:

P[k \in G] \ge \frac{\gamma \alpha}{1 + (\alpha - 1)\gamma} SE(k),  (6)

where α = exp(δ). We utilize this probability later in the calculation of µ and σ².

KGW. Based on Eq. (6), the statistical µ and the upper bound of σ² can be obtained as Σ_k P[k ∈ G] and Σ_k P[k ∈ G] · (1 − P[k ∈ G]). When the average entropy SE* given by the simulated power-law distribution is 0.608, we approximate |s|_G by N(107.10, 49.70). If we set a threshold of z = 2 (corresponding to |s̃|_G = 114.14) for detection, the false negative rate exceeds 84.1%. This means that KGW is not good at detecting watermarked texts in cases of low entropy.

SWEET. Following SWEET, we set the entropy threshold at 0.695 and exclude all tokens below this threshold during detection. The average spike entropy SE* and the remaining text length |T̃| then change to 0.82 and 24, respectively. Using the equations from KGW, we obtain the approximation N(17.48, 4.84). If we set a threshold of z = 2 (corresponding to |s̃|_G = 17.02) for detection, the false negative rate exceeds 41.7%.

EWD. EWD assigns higher influence weights to tokens with high spike entropy. Note that in the experiments we use several weight functions; here, for analysis purposes, we use a linear weight function for each token k: W(k) = SE(k) − C_0, where C_0 is a constant. Based on Eq. (6), the statistical µ and σ² can be obtained as Σ_k P[k ∈ G] · W(k) and Σ_k W(k)² P[k ∈ G] · (1 − P[k ∈ G]). Therefore, we can approximate the |s|_G detected by EWD with N(5.79, 0.31). If we set a threshold of z′ = 2 (corresponding to |s̃|_G = 5.55) for detection, the false negative rate exceeds 33.4% (details in Appendix A).

Table 2: Theoretical Type-I and Type-II errors of the different methods in low-entropy detection scenarios, in percentages (%). Lower values indicate better detection accuracy.

Methods | Type-I Error | Type-II Error
KGW     | 2.28         | 84.1
SWEET   | 2.28         | 41.7
EWD     | 2.28         | 33.4

Table 2 summarizes the calculated theoretical Type-I and Type-II errors of the different methods in low-entropy detection scenarios. Overall, our method achieves the lowest Type-II error among all methods while maintaining the same Type-I error.

6 Experiments

In this section, we first introduce the tasks and datasets used for watermarked text generation. Then, we compare the performance of our EWD against several baselines on those datasets. Next, we analyze the role of our proposed entropy-based weight and study the performance of EWD using different weight-computing functions. Finally, we evaluate the detection performance without the original prompt, the detection speed, and the robustness of our EWD.
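Before turning to the experiments, note that the numbers in Table 2 follow directly from the normal approximations above; the short SciPy sketch below reproduces them as a sanity check using the quoted means, variances, and thresholds.

```python
from scipy.stats import norm

# Type-I: for human text the standardized score is N(0, 1), so a threshold of
# z = 2 gives the same false positive rate for every detector.
print(f"Type-I error: {norm.sf(2.0):.4f}")        # 0.0228 -> 2.28%

# Type-II: probability that the (weighted) green count |s|_G of a watermarked
# text falls below the detection threshold, under the normal approximations above.
approximations = {
    # method: (mu, sigma^2, threshold |s~|_G at z = 2)
    "KGW":   (107.10, 49.70, 114.14),
    "SWEET": (17.48,  4.84,  17.02),
    "EWD":   (5.79,   0.31,  5.55),
}
for name, (mu, var, threshold) in approximations.items():
    fnr = norm.cdf(threshold, loc=mu, scale=var ** 0.5)
    print(f"{name:6s} Type-II error ~ {fnr:.2f}")  # roughly 0.84, 0.42, and 0.33
```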
6.1 Experiment Settings Tasks and Datasets. There are various text generation tasks in the field of Natural Language Processing, and here we aim to generate watermarked text of both low-entropy and high-entropy distributions. Since our focus is on comparing different detection methods, we utilize the same KGW watermark generation algorithms in all the following tasks and datasets. For low-entropy scenarios, we target at the task of code generation by adopting two datasets following Lee et al. (2023), HumanEval (Chen et al., 2021) and MBPP (Austin et al., 2021). The two datasets contain test cases of python programming problems and corresponding reference answers, which are utilized as human-written samples. Code generation is conducted by StarCoder (Li et al., 2023). During evaluation, the length of both samples are restricted to be at least 15 tokens. The first common high-entropy scenario is the news report generation. Our input prompts to the pre-trained OPT-1.3B (Zhang et al., 2022) are from the C4 (Raffel et al., 2020) dataset. Besides, we include another high-entropy data-to-text generation task by utilizing Wu et al. (2021)’s version of the Rotowire dataset (Wiseman et al., 2017). The dataset consists of NBA game records, presented as tables, and human-written descriptions. We finetune a T5-small (Raffel et al., 2023) model for text generation and input the table information to the fine-tuned model. We generate 200 samples of length 200 ± 5 tokens for each high-entropy tasks, using binomial sampling and beam search. Baselines and Evaluation Metrics. Two detection methods are selected as baselines. The first is KGW detector. The second is SWEET’s detector, which excludes low-entropy tokens during detection. Note that SWEET proposes both selective watermark generation and detection, while here we only adopt the detection part. The selection of the entropy threshold follows the instructions in SWEET. For evaluation, we set the false positive rate (FPR) under 5%, following Zhao et al. (2023)’s instruction. We report the true positive rate (TPR) and F1 score for each method. Hyper-parameters. For the KGW generator, γ and δ is set to 0.5 and 2, respectively. Binomial 11729 Table 3: Main results on different datasets using KGW, SWEET and our EWD for detection. Code Detection (Low-entropy Scenario) HumanEval MBPP Methods 1%FPR 5%FPR Best 1%FPR 5%FPR Best TPR F1 TPR F1 F1 TPR F1 TPR F1 F1 KGW 0.331 0.494 0.414 0.567 0.772 0.115 0.205 0.348 0.498 0.744 SWEET 0.455 0.622 0.667 0.778 0.842 0.409 0.577 0.650 0.765 0.865 EWD 0.466 0.633 0.692 0.797 0.859 0.567 0.720 0.744 0.830 0.878 Rotowire (High-entropy Scenario) Sampling Beam-search Methods 1%FPR 5%FPR Best 1%FPR 5%FPR Best TPR F1 TPR F1 F1 TPR F1 TPR F1 F1 KGW 1.000 0.995 1.000 0.976 1.000 1.000 0.995 1.000 0.976 1.000 SWEET 1.000 0.995 1.000 0.976 1.000 1.000 0.995 1.000 0.976 1.000 EWD 1.000 0.995 1.000 0.976 1.000 1.000 0.995 1.000 0.976 1.000 C4 (High-entropy Scenario) Sampling Beam-search Methods 1%FPR 5%FPR Best 1%FPR 5%FPR Best TPR F1 TPR F1 F1 TPR F1 TPR F1 F1 KGW 0.985 0.990 1.000 0.980 0.992 0.995 0.993 1.000 0.980 0.997 SWEET 1.000 0.997 1.000 0.978 1.000 0.995 0.993 1.000 0.976 0.997 EWD 1.000 0.997 1.000 0.978 1.000 0.995 0.993 1.000 0.976 0.997 sampling sets the sampling temperature to a common value of 0.7. Beam-search sampling sets the number of beams to 8. To avoid excessive repetition of text, "no_repeated_ngrams" is set to 16. 6.2 Main Results Before detection, we measure the generation quality of all generated texts. 
Specifically, we are able to achieve 31.0% and 30.4% in pass@1(Chen et al., 2021) for the HumanEval and MBPP dataset, respectively, comparable to Lee et al. (2023)’s. For the Rotowire and C4 dataset, our generated output can achieve 7.24 and 3.25 in text perplexity (PPL) using OPT-6.7B (Zhang et al., 2022). Table 3 shows that for the low-entropy scenario, watermarked code detection, our detector would achieve the best detection accuracy against other baselines. Specifically, for HumanEval and MBPP, respectively, our method could achieve a 2.5% and 9.4% improvement than SWEET in TPR while maintaining a low-than 5% FPR. For the high-entropy scenarios using the Rotowire and C4 datasets, the detection performance of our method is overall very similar to other baselines, sometimes even better than some baselines. 6.3 Empirical Analysis Impact of Entropy-based Weights. In Figure 2, we provide a few visualizations to show the reason why EWD surpasses other baselines in detection performance. Figure 2(a) shows the z-scores of both watermarked and human texts in different datasets, detected by different methods. Our EWD method would result in overall higher z-scores for watermarked texts, and slightly lower z-scores for human texts. The z-score increase is more significant in low-entropy code generation tasks than in the Rotowire dataset. By enlarging the gap between the z-scores of watermarked and human texts, our detector could better distinguish the watermarked texts. In Figure 2(b), we plot the relationship of a token’s entropy-based weight and probability of that token being sampled from the green list. In human-written texts, there does not exist a relationship between the probability and token weights, explaining why the z-scores of human text remain low. Meanwhile, in watermarked texts, the probability is proportional to the token weight, which means the tokens assigned with a large weight are very likely to be sampled from the green list and contribute to a larger z-score. This proves that our entropy-based weights can help more accurately reflect the watermark level in low-entropy texts. 11730 (a) (b) Figure 2: Subfigure (a) shows the z-scores of watermarked and human texts in the Rotowire, HumanEval and MBPP datasets, respectively, each being detected with 3 different methods. Subfigure (b) shows the relationship between token weights and the probability of being green in both watermarked and human texts. 0.0 0.2 0.4 0.6 0.8 1.0 Normalized Entropy 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 Weight Functions Sigmoid 2 Sigmoid 4 Sigmoid 8 Sigmoid 10 Linear Exp 2 Exp 4 Exp 8 Exp 10 1 2 4 8 10 Function Strength 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 F1(FPR<1%) HumanEval Sigmoid (EWD) HumanEval Linear (EWD) HumanEval Exp (EWD) MBPP Sigmoid (EWD) MBPP Linear (EWD) MBPP Exp (EWD) HumanEval (SWEET) MBPP (SWEET) (a) 0.0 0.2 0.4 0.6 0.8 1.0 Normalized Entropy 0.00 0.25 0.50 0.75 1.00 1.25 1.50 1.75 2.00 Weight Functions Sigmoid 2 Sigmoid 4 Sigmoid 8 Sigmoid 10 Linear Exp 2 Exp 4 Exp 8 Exp 10 1 2 4 8 10 Function Strength 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 F1(FPR<1%) HumanEval Sigmoid (EWD) HumanEval Linear (EWD) HumanEval Exp (EWD) MBPP Sigmoid (EWD) MBPP Linear (EWD) MBPP Exp (EWD) HumanEval (SWEET) MBPP (SWEET) (b) Figure 3: We utilize two additional weight functions other than Linear and measured their performance in the code detection datasets. Subfigure (a) is a visualization of all studied functions, with normalized spike entropy as input. 
Subfigure (b) shows each function’s detection F1 under 1% FPR with comparison to the SWEET baseline. Each data point can correspond to the illustrated function on the left figure by looking at its shape and x-axis value. Performance with Different Weight Functions. In the experiments, we utilize a linear function to gain the entropy-based weights for each token. We also study two more monotonic increasing functions, the Sigmoid and the Exponential function. As shown in Figure 3(a), we further change the function strength to 4 different values in each function, which represents how far the function deviates from being linear, for example, "Sigmoid 10" deviates from linear more than "Sigmoid 8". The abovementioned functions are applied in Eq. (4) as f. Specifically, each function’s gradient is different. For our custom Sigmoid functions, the gradient starts off high and gradually decreases, meaning that tokens with relatively low entropy will have significant differences in weights, while tokens with relatively high entropy will exhibit smaller differences. Similarly, for our custom Exponential functions, the gradient starts off low and gradually increases, meaning that tokens with relatively low entropy will have smaller differences in weight, while tokens with relatively high entropy will exhibit larger differences. Figure 3(b) demonstrates that the two listed functions with different strength could surpass the SWEET baseline in terms of detection F1. This reflects the versatility of our method, as it is applicable to more functions. Performance without Original Prompts. The entropy calculation during detection of our EWD 11731 Table 4: Detection performance of watermarked code using the general prompt instead of original prompts. Detection w/ General Prompt Methods F1 under 5%FPR Best F1 KGW 0.498 0.741 SWEET 0.580 0.762 EWD 0.600 0.764 Table 5: This table shows the average time taken to generate 200-token texts using KGW generator, as well as the average time taken for detection using different methods, measured in seconds. The experiment is conducted on a NVIDIA RTX A6000 GPU. Method Generation Detection KGW 0.0391 SWEET 9.489 0.0830 EWD 0.0831 and SWEET is conducted with the original prompts. There exists many real-world scenarios where the original prompts are available, e.g., some programming assignment/exams, in such cases, our detection method would produce the most accurate entropy and detection performance. However, it is possible that the original prompts are not available during detection. Therefore, we design experiments on MBPP dataset where we use a general prompt "def solution(*args):\n”’Generate a solution\n”’" to replace the original prompt during entropy calculation. The detection result can be found in Table 4. Even without the original prompt, our EWD could achieve better detection performance against other baselines. However, the detection performance by both SWEET and EWD has dropped compared to using original prompts. Detection Speed. Compared to the KGW detection, both SWEET and our proposed EWD would demand extra time to compute token entropy. Specifically, our detector would also compute weights given the token entropy before calculating the final z-score. According to Table 5, the detection time per text is 0.0391, 0.0830 and 0.0831 seconds for KGW, SWEET and our EWD, respectively. In spite of being double the value of KGW, our detector remains highly efficient, and the difference with SWEET is virtually insignificant. 
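To complement the weight-function study above, the sketch below gives one plausible way to parameterize the Sigmoid and Exponential families; the paper does not spell out the exact forms, so these are illustrative guesses rather than the implementation used in the experiments.

```python
import numpy as np

def make_weight_fn(kind: str, strength: float):
    """Monotonically increasing candidates for f in Eq. (4), on normalized entropy in [0, 1].

    The exact "Sigmoid k" / "Exp k" parameterizations are not given in the paper,
    so the forms below are illustrative guesses; `strength` controls how far the
    curve deviates from the linear baseline.
    """
    if kind == "linear":
        return lambda x: x
    if kind == "sigmoid":
        # gradient starts high and decays: low-entropy tokens are separated strongly
        return lambda x: 2.0 / (1.0 + np.exp(-strength * x)) - 1.0
    if kind == "exp":
        # gradient starts low and grows: high-entropy tokens are separated strongly
        return lambda x: (np.exp(strength * x) - 1.0) / (np.exp(strength) - 1.0)
    raise ValueError(f"unknown weight function: {kind}")

x = np.linspace(0.0, 1.0, 5)
for kind in ("linear", "sigmoid", "exp"):
    f = make_weight_fn(kind, strength=8.0)
    print(kind, np.round(f(x), 3))
```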
Performance against the Back-translation Attack. Watermarked texts usually would be edited before being detected, which would remove part of the watermark and cause detection performance drop. Here we also evaluate the baseline methods and our EWD under this setting by utilizing back-translation as the attack to remove watermark. Table 6: Detection performance of back-translated watermarked texts using different detection methods. Detection w/ Back-translation Methods 1%FPR 5%FPR TPR F1 TPR F1 KGW 0.894 0.940 0.950 0.952 SWEET 0.890 0.938 0.930 0.940 EWD 0.900 0.943 0.942 0.947 Specifically, we firstly generate watermarked texts using the C4 dataset, then translate them from English to French, and later back to English again for detection. In Table 6, our EWD would achieve the best detection accuracy under 1% FPR and similar results with KGW under 5% FPR. 7 Conclusion In this work, we propose a text watermark detection method called EWD that fully considers the influence of token entropy. We first establish the positive influence-entropy relation in the detection process, then utilize entropy for weight customization to fully reflect the proposed relation. We provide theoretical analysis on the detection properties of different methods in low-entropy scenarios. Experiments validate the detection performance of our method in the low-entropy code generation task with a number of weight functions. Furthermore, in high-entropy scenarios, our method would achieve similar performance as previous works in detection accuracy, detection speed and robustness. Limitations Our method mainly includes two limitations. The first one is that the low-entropy datasets tested are limited. We utilize two code generation datasets for evaluation. In the future, we aim to include more low-entropy tasks and datasets. Secondly, our method should be theoretically effective for various types of watermark methods, such as token sampling-based methods. In the future, we aim to implement our method on more types of text watermarking frameworks. Acknowledgement We sincerely appreciate the insightful comments from all reviewers. Their help has greatly contributed to the improvement of this work. The work described in this paper was partially supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (CUHK 14222922, RGC GRF 2151185). 11732 References Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. 2021. Program synthesis with large language models. arXiv preprint arXiv:2108.07732. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374. Miranda Christ, Sam Gunn, and Or Zamir. 2023. Undetectable watermarks for language models. arXiv preprint arXiv:2306.09194. Jaiden Fairoze, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, and Mingyuan Wang. 2023. Publicly detectable watermarking for language models. Cryptology ePrint Archive, Paper 2023/1661. https://eprint.iacr.org/2023/ 1661. Christiane Fellbaum. 1998. WordNet: An electronic lexical database. MIT press. Zhiwei He, Binglin Zhou, Hongkun Hao, Aiwei Liu, Xing Wang, Zhaopeng Tu, Zhuosheng Zhang, and Rui Wang. 2024. Can watermarks survive translation? on the cross-lingual consistency of text watermark for large language models. 
arXiv preprint arXiv:2402.14007. John Kirchenbauer, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers, and Tom Goldstein. 2023a. A watermark for large language models. In International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pages 17061–17084. PMLR. John Kirchenbauer, Jonas Geiping, Yuxin Wen, Manli Shu, Khalid Saifullah, Kezhi Kong, Kasun Fernando, Aniruddha Saha, Micah Goldblum, and Tom Goldstein. 2023b. On the reliability of watermarks for large language models. arXiv preprint arXiv:2306.04634. Rohith Kuditipudi, John Thickstun, Tatsunori Hashimoto, and Percy Liang. 2023. Robust distortion-free watermarks for language models. arXiv preprint arXiv:2307.15593. Taehyun Lee, Seokhee Hong, Jaewoo Ahn, Ilgee Hong, Hwaran Lee, Sangdoo Yun, Jamin Shin, and Gunhee Kim. 2023. Who wrote this code? watermarking for code generation. arXiv preprint arXiv:2305.15060. Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, et al. 2023. Starcoder: may the source be with you! arXiv preprint arXiv:2305.06161. Aiwei Liu, Leyi Pan, Xuming Hu, Shuang Li, Lijie Wen, Irwin King, and Philip S. Yu. 2024a. An unforgeable publicly verifiable watermark for large language models. In The Twelfth International Conference on Learning Representations. Aiwei Liu, Leyi Pan, Xuming Hu, Shiao Meng, and Lijie Wen. 2024b. A semantic invariant robust watermark for large language models. In The Twelfth International Conference on Learning Representations. Aiwei Liu, Leyi Pan, Yijian Lu, Jingjing Li, Xuming Hu, Lijie Wen, Irwin King, and Philip S Yu. 2023. A survey of text watermarking in the era of large language models. arXiv preprint arXiv:2312.07913. David Megías, Minoru Kuribayashi, Andrea Rosales, and Wojciech Mazurczyk. 2021. Dissimilar: Towards fake news detection using information hiding, signal processing and machine learning. In Proceedings of the 16th International Conference on Availability, Reliability and Security, pages 1–9. Travis Munyer and Xin Zhong. 2023. Deeptextmark: Deep learning based text watermarking for detection of large language model generated text. arXiv preprint arXiv:2305.05773. Leyi Pan, Aiwei Liu, Zhiwei He, Zitian Gao, Xuandong Zhao, Yijian Lu, Binglin Zhou, Shuliang Liu, Xuming Hu, Lijie Wen, et al. 2024. Markllm: An opensource toolkit for llm watermarking. arXiv preprint arXiv:2405.10051. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485–5551. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2023. Exploring the limits of transfer learning with a unified text-to-text transformer. Jie Ren, Han Xu, Yiding Liu, Yingqian Cui, Shuaiqiang Wang, Dawei Yin, and Jiliang Tang. 2023. A robust semantics-based watermark for large language model against paraphrasing. arXiv preprint arXiv:2311.08721. Umut Topkara, Mercan Topkara, and Mikhail J Atallah. 2006. The hiding virtues of ambiguity: quantifiably resilient watermarking of natural language text through synonym substitutions. In Proceedings of the 8th workshop on Multimedia and security, pages 164–174. Shangqing Tu, Yuliang Sun, Yushi Bai, Jifan Yu, Lei Hou, and Juanzi Li. 2023. 
Waterbench: Towards holistic evaluation of watermarks for large language models. arXiv preprint arXiv:2311.07138. 11733 Lean Wang, Wenkai Yang, Deli Chen, Hao Zhou, Yankai Lin, Fandong Meng, Jie Zhou, and Xu Sun. 2023. Towards codable text watermarking for large language models. arXiv preprint arXiv:2307.15992. Sam Wiseman, Stuart M Shieber, and Alexander M Rush. 2017. Challenges in data-to-document generation. arXiv preprint arXiv:1707.08052. Xueqing Wu, Jiacheng Zhang, and Hang Li. 2021. Textto-table: A new way of information extraction. arXiv preprint arXiv:2109.02707. Xi Yang, Kejiang Chen, Weiming Zhang, Chang Liu, Yuang Qi, Jie Zhang, Han Fang, and Nenghai Yu. 2023. Watermarking text generated by black-box language models. arXiv preprint arXiv:2305.08883. Xi Yang, Jie Zhang, Kejiang Chen, Weiming Zhang, Zehua Ma, Feng Wang, and Nenghai Yu. 2022. Tracing text provenance via context-aware lexical substitution. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 11613–11621. KiYoon Yoo, Wonhyuk Ahn, Jiho Jang, and Nojun Kwak. 2023a. Robust multi-bit natural language watermarking through invariant features. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2092–2115. KiYoon Yoo, Wonhyuk Ahn, and Nojun Kwak. 2023b. Advancing beyond identification: Multi-bit watermark for language models. arXiv preprint arXiv:2308.00221. Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068. Xuandong Zhao, Prabhanjan Ananth, Lei Li, and Yu-Xiang Wang. 2023. Provable robust watermarking for ai-generated text. arXiv preprint arXiv:2306.17439. 11734 A Proof of Type-II Error for EWD Here we demonstrate how we obtain the theoretical Type-II error for EWD. First of all, we attempt to calculate the theoretical µ and σ2 based on Eq. (6). The theoretical µ is the weighted sum of green tokens in the watermarked text, using |s|Gk to represent the weighted number of green tokens from each token k,thus we can obtain: µ = X k |s|Gk, (7) = X k P[k ∈G] · W(k), (8) = TE γαSE 1 + (α −1)γ (SE −C0), (9) = T γα 1 + (α −1)γ (E[SE2] −C0E[SE]). (10) Given the simulated power law of spike entropy, we can obtain E[SE2] and E[SE] as 0.377 and 0.608, respectively, producing µ equal to 5.79. And for σ2, we first obtain the variance for each token k: σk2 = E[(|s|Gk −E[|s|Gk])2], (11) = E[|s|G 2 k] −E[|s|Gk]2, (12) = P[k ∈G] · W(k)2 −P[k ∈G]2 · W(k)2, (13) = W(k)2 · (P[k ∈G])(1 −P[k ∈G]). (14) Therefore, if we represent γα 1+(α−1)γ by C1, the variance of the whole text can be formulated as: σ2 = X k W(k)2 · (P[k ∈G])(1 −P[k ∈G]), (15) = TE[W(k)2C1SE(1 −C1SE)], (16) = TC1E[(SE −C0)2SE(1 −C1SE)], (17) = TC1(−C1E[SE4] + (2C0C1 + 1)E[SE3] −(2C0 + C02C1)E[SE2] + C02E[SE]). (18) Given the simulated power law of spike entropy, we can obtain E[SE4] and E[SE3] as 0.24 and 0.158, respectively, producing σ equal to 0.56. When the detection threshold z′ is set as 2 (corresponding to |˜s|G = 5.55), the probability of |s|G ∼N(5.79, 0.31) falling below 5.55 is 33.4%, i.e., the Type-II error. 11735
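As a further check on this computation, the low-order moments of the fitted power-law spike-entropy distribution can be reproduced with SciPy, assuming its powerlaw parameterization matches the reported a, loc, and scale; the results agree with the values quoted above up to rounding.

```python
from scipy.stats import powerlaw

# Fitted spike-entropy distribution from Section 5.2: density a * x^(a-1),
# shifted by loc and scaled by scale (SciPy's powerlaw parameterization).
se_dist = powerlaw(0.106, loc=0.566, scale=0.426)

print(round(se_dist.mean(), 3))      # E[SE]   ~ 0.607 (quoted above as 0.608)
print(round(se_dist.moment(2), 3))   # E[SE^2] ~ 0.376 (quoted above as 0.377)
```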
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11736–11748 August 11-16, 2024 ©2024 Association for Computational Linguistics Enhancing Explainable Rating Prediction through Annotated Macro Concepts Huachi Zhou1, Shuang Zhou1, Hao Chen1, Ninghao Liu2, Fan Yang3, Xiao Huang1,∗ 1The Hong Kong Polytechnic University, Hong Kong, China 2University of Georgia, Georgia, USA 3University of Wake Forest, North Carolina, USA [email protected], {csszhou,xiaohuang}@comp.polyu.edu.hk, [email protected] [email protected] [email protected] Abstract Generating recommendation reasons for recommendation results is a long-standing problem because it is challenging to explain the underlying reasons for recommending an item based on user and item IDs. Existing models usually learn semantic embeddings for each user and item, and generate the reasons according to the embeddings of the user-item pair. However, user and item IDs do not carry inherent semantic meaning, thus the limited number of reviews cannot model users’ preferences and item characteristics effectively, negatively affecting the model generalization for unseen user-item pairs. To tackle the problem, we propose the Concept Enhanced Explainable Recommendation framework (CEER), which utilizes macro concepts as the intermediary to bridge the gap between the user/item embeddings and the recommendation reasons. Specifically, we maximize the information bottleneck to extract macro concepts from user-item reviews. Then, for recommended user-item pairs, we jointly train the concept embeddings with the user and item embeddings, and generate the explanation according to the concepts. Extensive experiments on three datasets verify the superiority of our CEER model. 1 Introduction In recommender systems, rating prediction is a crucial component for filtering user interest items from a vast pool of candidates. However, many recommender systems only offer rating prediction results without providing any explanations (Pan et al., 2022; Zhang et al., 2022). Personalized and appropriate explanations can greatly arouse user interest and enhance the overall user experience. Therefore, there is potential to deliver accurate rating predictions while generating personalized and suitable reasons for recommended items (Balog et al., 2019; Wang et al., 2022a). ∗Xiao Huang is the corresponding author. Current explainable rating prediction models typically involve learning user and item embeddings and then generating explanations based on these embeddings (Chen et al., 2019, 2020b). For instance, they attribute semantic meanings to each user and item ID by reconstructing the associated review sentence using user and item embeddings (Wang et al., 2023; Gao et al., 2019). CETP (Li et al., 2021a) goes a step further by incorporating knowledge graphs as auxiliary information and leveraging graph neural networks to enhance the item embeddings. Similarly, MMCT utilizes multi-modal information to aid in training the user and item embeddings (Liu et al., 2023). However, existing models still highly rely on the embeddings of user and item. Nevertheless, training user and item embeddings of explainable rate prediction face the following limitations: (i). training sparsity: The observed user-item pairs are sparse in comparison to all possible combinations. 
This sparsity issue prevents users and item embeddings from capturing sufficient semantic information by solely reconstructing the available reviews, thus negatively affecting the model’s generalization in generating explanations for unseen user-item pairs. (ii). Inference sparsity: The absence of available information in the inference stage makes it challenging to generate appropriate explanations. There are some efforts that explicitly put the collected item features as prompting information to alleviate this issue (Li et al., 2020b; Cheng et al., 2023). However, it is hard to predict in advance which specific features will appear in the explanations. It is challenging to solve the sparsity problems due to two reasons. (i) ID insufficient semantics: The initial embeddings for users and items lack semantic meanings, and are not effectively recognized by language models. The insufficient initial semantics exacerbate the learning of a detailed and accurate mapping to the semantic space. 11736 (ii) Mixed high-level semantics: The user reviews are complicated and may involve multiple highlevel semantics. For example, a review that “I hate this guitar since the quality of the guitar is bad." could be identified by three high-level semantics: “negative emotion”, “instrument”, and “quality”. It is difficult to assist language models in recognizing user focus by providing an item feature. To this end, we propose a framework named Concept Enhanced Explanation Recommendation (CEER). Our framework explicitly annotates highlevel semantics, i.e., macro concepts that abstract item characteristics with similar semantics in reviews and embeds the potentially matched macro concepts into the user and item embeddings. The mined macro concepts enrich the semantic meaning of user and item embeddings without requiring additional data to learn. Our key contributions are summarized below: • To improve the explanation generation, we leverage the macro concepts to address the issue of inadequate semantic information in user and item embeddings. • To build macro concepts, we discover the micro characteristics of the item attended by the user from the reviews by maximizing the information bottleneck. • We devise three tasks to enrich user and item embeddings motivated by integrating the macro concept into representations space. • Extensive experiments on three real-world datasets prove the superiority of the proposed framework CEER. 2 Preliminary Problem Formulation. We denote the user set and item set as U = {u1, u2, ..., u|U|} and V = {v1, v2, ..., v|V|}, respectively. The rating score rij assigned towards the interaction (ui, vj) characterizes the degree of user preference. The observed reviews are organized within a matrix X, where each element in the matrix represents a review and Xij represents the review for the rating rij. Xij is defined as a sentence denoted as [x1..., xm, ..., xM], where xm is the mth word in Xij. During inference, the available information for each interaction is limited to the user and item ID. In this work, we define explanation generation as a sequence-tosequence prediction task. Specifically, given a useritem pair, we apply the user and item embeddings as soft prompts, asking the transformer to generate explanations, where each word in the explanation is predicted given the preceding words and soft prompt. For example, given the soft prompt [ui, vj], the transformer is trained to predict the next word x1. 
After multiple generation steps, the output from the transformer forms the sequence [x1..., xm, ..., xM]. The notations are put into Appendix A.1 3 Methodology This section introduces two primary components of our framework: a macro concept annotator and an explanation generator. The first component involves two steps: identifying the informative micro characteristics of items in the reviews and using LLMs to annotate macro concepts. The second component utilizes three tasks based on the annotated concepts to enhance user and item embeddings and facilitate explanation generation. The architecture sketch is provided in Figure 1. 3.1 Macro Concept Annotation Micro Characteristics Identification. We represent each macro concept as a group of semantically similar micro characteristics of items that are carefully curated from the reviews. To serve as the micro characteristics, the selected words are required to be informative and capable of justifying the rationale behind recommendations. To measure the informativeness of the words, we need to first quantify the informative level of the words in the corpus. Pre-trained language models (PLMs) are trained on vast amounts of text data and can recognize which words are more informative in conveying the explanation. To learn the informative level quantification, we maximize the following objective, i.e., information bottleneck (TISHBY, 2000) M(Xij, rij): max M(Xij, rij) = max I(Tij, rij)−τ·I(Tij, Xij), (1) where I(·, ·) represents mutual information between two variables; τ is a trade-off hyperparameter; Tij ∈RM×d signifies the intermediate word representation from PLMs whose informative levels can be measured. Here, d denotes the word dimension in the frozen PLM, which, for instance, is 768 when using BERT (Devlin et al., 11737 Joint Learning I hate this guitar hate this guitar since Transformer Concept Emb. User Emb. Item Emb. Word Emb. ij X Noise minimize error ... ... user-item pair Macro Concept Annotation + ... ... s p m r          Seq2seq Training LLM Word Selection Positive Emotion Macro Concept Micro Characteristics like love favor prefer ... Negative Quality bad poor inferior deficient ... guitar violin piano cello ... I hate this guitar since the quality of the guitar is bad. I like this guitar. It is quite cheap for me. Reviews Users Items Encoder Macro Concept Annotation like guitar cheap hate guitar bad guitar Instrument bad inferior dislike hate Instrument Instrument Figure 1: The figure shows the proposed CEER framework. The lower part demonstrates how we annotate the macro concepts. The upper part shows how to leverage the annotated concepts to enrich user and item embeddings. 2019) as the frozen PLM. In Eq. (1), the first term maximally preserves the information related to rating prediction while the second term (Alemi et al., 2016) serves as the regularization term that constrains the amount of information in intermediate representations encoded from Xij. The variational approximation of the information bottleneck of traditional approaches applies restrictive constraints to the pre-trained language models as the second term (Tu and Li, 2022). The posterior distribution of the intermediate representations may not be standard Gaussian distribution (Tu et al., 2022). Some recent efforts have tried more flexible distributions, e.g., sample-based representations of variational distributions (Fang et al., 2019, 2022). 
They could learn expressive intermediate representations but are unable to explicitly quantify the informative level of the words. To achieve this goal, we inject noise into word representations (Guan et al., 2019). The extent to which these representations tolerate noise can serve as a reflection of their informativeness, as informative words play an important role in understanding the corpus and are sensitive to the introduced noise. To facilitate the second-term calculation, we synthesize the word representations with noise as Tij: Tij = (1 −σ) ⊙PLM(Xij) + σ ⊙ϵ, (2) where PLM(·) provides the original word representations from PLMs, and ϵ ∈RM×d denotes the random noise independently sampled from standard Gaussian distribution while the learnable vector σ ∈RM [0,1] controls the magnitude of noise. The minimization of I(Tij, Xij) is calculated as: min H(Xij) −H(Xij|Tij) ≈max log σ, (3) where we assume P(Xij|Tij) ≈P(Tij) and follows the multivariate Gaussian distribution with zero covariance. Next we could use 1 −σxm to identify the informative level of word xm in Xij. Then the informative level for each word in Xij is: [1 −σx1, ..., 1 −σxm, ..., 1 −σxM ] . (4) To calculate the first term, we transform word representations into ratings and minimize the difference between predictions and labels. In this process, we employ a rating-specific message-passing encoder for transformation (Shuai et al., 2022). After obtaining σ, we exploit the LLMs to perform the macro concept annotation for top k important micro characteristics. Here we use LLM ChatGPT, i.e., gpt-3.5-turbo and take one demonstration following (Lou et al., 2023; Zhang et al., 2023) as an example: [Task Description] User: Organize the following keywords into groups based on their semantic relationships, and give a concept name to each group. [Demonstrations] Input: [Expensive, Cheap]. Output: {Economy: [Expensive, Cheap]} [Test Instance] Input: [Guitar, Hate, Quality]. Output: {Instrument: [Guitar], Negative Emotion: [Hate], Quality: [Quality]}. In this way, we obtain the macro concept labels for k informative micro characteristics from the historical reviews and create a padded concept label for the remaining words. The words in Xij are labeled as {cx1, ..., cxm, ..., cxM }, where cxm is the macro concept ID for word xm. 11738 3.2 Explanation Generator In this component, our goal is to utilize the annotated macro concepts to enrich user/item embeddings. Specifically, we embed the relationships between users/items and concepts, as well as concepts and reviews, in representations through three tasks. These three tasks collaborate to utilize macro concepts as an intermediary to align user and item embeddings with the review embeddings. 3.2.1 User/item-concept Relationship Modeling The first task user/item-concept relationship modeling utilizes user and item embeddings to aid in predicting macro concepts in the associated review. To train the prediction of macro concept distribution, we need to craft macro concept labels Wij =  w1, ..., wq, ..., w|C|  ∈R|C| for review Xij. The macro concept label wq is determined by the cumulative informative level of the words in Xij associated with that concept cq: M X m=1 I(cxm = cq)(1 −σxm), (5) where the indicator I(·) take value 1 if the condition holds otherwise 0. 
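To make the label construction of Eq. (5) concrete, a small sketch is given below; the word-to-concept assignments and the noise gates σ are toy values for illustration, not outputs of the actual annotator.

```python
import numpy as np

def concept_labels(word_concepts, sigma, num_concepts):
    """Eq. (5): w_q = sum_m 1[c_{x_m} = c_q] * (1 - sigma_{x_m}).

    word_concepts[m] -- macro-concept ID of the m-th word in the review
    sigma[m]         -- learned noise gate; (1 - sigma[m]) is the word's informativeness
    """
    w = np.zeros(num_concepts)
    for concept_id, s in zip(word_concepts, sigma):
        w[concept_id] += 1.0 - s
    return w

# Toy review: "I hate this guitar since the quality of the guitar is bad."
# Concepts: 0 = padded, 1 = Negative Emotion, 2 = Instrument, 3 = Quality.
word_concepts = [0, 1, 0, 2, 0, 0, 3, 0, 0, 2, 0, 3]
sigma = np.array([0.9, 0.1, 0.9, 0.2, 0.9, 0.9, 0.15, 0.9, 0.9, 0.2, 0.9, 0.1])
print(concept_labels(word_concepts, sigma, num_concepts=4))
# Informative words ("hate", "guitar", "quality", "bad") dominate their concepts,
# while padded stop words contribute little to the label distribution.
```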
After obtaining the macro concept labels for review Xij, we come to train the user/item embeddings to predict the macro concept distribution ˆ Wij in the associated review, defined as: ˆ Wij = MLP(eui, evj), (6) where eui, evj denotes the user and item embeddings respectively. Next, we use Kullback-Leibler divergence loss to penalize the divergence between the predicted macro concept distribution and the labels, defined as: Lp = LKL( ˆ Wij||Wij) = |C| X q=1 wq · log wq ˆwq . (7) After training, the user and item embeddings could be utilized to recognize the potential macro concepts in the inference. Then, we obtain a composite concept embedding by the weighted sum of all macro concept embeddings with the predicted importance: P|C| q=1 ˆwqecq/ P|C| q=1 ˆwq. The composite macro concept embedding is appended behind the user and item embedding to instruct the transformer to generate relevant explanations. 3.2.2 Concept-review Relationship Modeling The second task concept-review relationship modeling brings the words in reviews and associated macro concept embeddings closer together. Through this task, we proceed to enhance the words in reviews’ awareness of the associated macro concept within the representation space. To reach this objective, we reduce the uncertainty about each word mapping to its associated macro concept. For simplicity, we use the inner product to compute the uncertainty and optimize it with the cross-entropy loss, driving the movement of word embeddings to the macro concept embedding: Lm = − M X m=1 |C| X q=1 I(cxm = cq) · log(pxm,cq), (8) where log(pxm,cq) is the uncertainty between the word xm and the macro concept cq, computed by inner product of embedding: pxm,cq = e⊤ xm · ecq. It encourages words and associated concepts to exhibit proximate representations. 3.2.3 User/item-review Relationship Modeling The third task user/item-review relationship modeling assists the transformer in considering words belonging to the same macro concepts when generating the next word. Traditional approaches employ the maximum likelihood function to reconstruct the original reviews. However, it ignores the diversity of the language expression. In practice, each position in the sentence could have multiple word choices, e.g., semantically similar words. To achieve this, we substitute the word in the review with multiple alternatives in the same macro concept. The loss can be reformulated as: Ls = −1 M M X m=1 (log p(xm) + α log p(x′ m)), (9) where x′ m denotes a randomly selected word within the same macro concept as xm and α denotes a hyper-parameter to control the loss term. Here, we only incorporate one substitution. More substitutions may take the risk of introducing noise and generating contextually inappropriate explanations. 3.2.4 Training Objective Besides the above tasks, we jointly perform rating prediction following previous efforts (Li et al., 2023b) using the loss function defined as: Lr = MLP(eui, evj) −rij 2 . (10) 11739 Finally, we unify these tasks in a linear combination form to train the transformer. The joint learning objective is indicated by: L = Ls + βLm + γLp + Lr, (11) where β, γ are trade-off hyper-parameters to balance the effect of different objectives of the model. 4 Experiments In this section, we perform comprehensive experiments to validate the effectiveness of CEER and gain insights into its behavior. 
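For clarity, a simplified PyTorch sketch of the relationship-modeling objectives in Eqs. (7), (8), and (10) is given below. It is an illustration rather than the authors' released code: the label normalization, the concatenation inside the MLP head, and the use of a standard cross-entropy for Eq. (8) are our own simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def concept_prediction_loss(w_hat_logits, w_label):
    """L_p, Eq. (7): sum_q w_q * log(w_q / w_hat_q) over the concept vocabulary."""
    w_label = w_label / w_label.sum(dim=-1, keepdim=True).clamp_min(1e-8)
    return F.kl_div(F.log_softmax(w_hat_logits, dim=-1), w_label, reduction="batchmean")

def concept_word_loss(word_emb, concept_emb, word_concept_ids):
    """L_m, Eq. (8): pull each word embedding toward its macro-concept embedding."""
    logits = word_emb @ concept_emb.t()        # inner products p_{x_m, c_q}
    return F.cross_entropy(logits, word_concept_ids)

def rating_loss(user_emb, item_emb, rating, mlp):
    """L_r, Eq. (10): squared error between the predicted and observed rating."""
    pred = mlp(torch.cat([user_emb, item_emb], dim=-1)).squeeze(-1)
    return ((pred - rating) ** 2).mean()

# L_s, Eq. (9), is the usual next-word likelihood over the explanation, with each
# target word additionally substituted (weighted by alpha) by a random word from
# the same macro concept; it is omitted here for brevity.
```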
We aim to answer the following research questions: RQ1: How well do the explanations and ratings generated by CEER and baselines align with the actual labels in terms of text quality? RQ2: How does the quality of explanations generated by all models in terms of interpretability? RQ3: How do different tasks contribute to the whole model performance? RQ4: What impact do different hyper-parameter settings of the proposed tasks have on CEER? RQ5: How does information bottleneck perform compared with other micro characteristics identification methods for macro concept annotation? Table 1: Detailed datasets statistics. Datasets # Users # Items # Interactions # Reviews # Concepts Instruments 8,292 695 22,831 7,380 425 Home 66,519 27,679 539,403 538,204 551 Automotive 20,886 1,573 57,722 16,866 477 4.1 Dataset Processing In our experiments, we use three real-world Amazon datasets (He and McAuley, 2016), i.e., Musical Instruments, Home and Kitchen, and Automotive, to evaluate the overall performance. The dataset statistics are shown in Table 1. The detailed description of these datasets is illustrated in Appendix A.2. 4.2 Experimental Setup We implement our framework by PyTorch using NLG4RS ecosystem (Li et al., 2020a) and run it on NVIDIA Tesla V100S PCIe, 32GB memory. The learning rate is set as 0.001. All models are optimized by the Adam optimizer and the embedding dimension is set as 64 for a fair comparison. The batch size equals 256. We truncate explanations with a maximum length of 20 and the budget k is set as 5000 for each dataset. We search the number of transformer layers and attention heads in each layer for the base explanation generator from {1, 2, 3, 4, 5}. And we also search the trade-off factor α, β, γ from {1e −5, 1e −4, 1e − 3, 1e −2, 1e −1, 1} and the dropout rate from {0, 0.2, 0.4, 0.6, 0.8}. The search strategy is grid search. We also set the patience threshold on the validation set to stop model training earlier and the optimal performance on the validation set is selected to report performance on the test set. We run each model five times and report the average performance. 4.3 Evaluation Metrics To evaluate the explanations, we categorize all the adopted metrics into two groups, which evaluate the text quality and the quality of interpretation, respectively. In the first text quality evaluation group, we have BLEU-1, BLEU-4, MAE, and RMSE. In the second interpretability evaluation group, we have USR, FCR, FMR, Entail, and Consistency. The detailed introduction of these metrics is illustrated in Appendix A.3. 4.4 Baseline Methods We integrate the following baselines to conduct the comparison: RGCL (Shuai et al., 2022) applies a graph contrastive learning framework to learn useritem rating prediction; NRT (Li et al., 2017) learns explanation generation via gated recurrent units; NETE (Li et al., 2020b) generates explanations with controlled neural templates; PETER (Li et al., 2021b) introduces contextualization loss and item features to enhance the quality of the explanation; SAER (Yang et al., 2022) adopts an extract-andrefine architecture to effectively generate explanations; PEPLER (Li et al., 2022) fuses user/item IDs into soft prompt and fine-tunes GPT-2. To demonstrate the effectiveness of different components, we also introduce three variants of the CEER: 1) P-CEER: it removes all the proposed tasks and adopts the pure transformer to generate explanations. 2) C-CEER: it adds the conceptreview relationship modeling task to the P-CEER. 
3) CW-CEER: it further adds the user/item-review relationship modeling task to the C-CEER. To examine the usefulness of the information bottleneck in micro characteristics identification, we incorporate the following variants: (1) GPT3.5: Directly annotate the macro concepts in the historical reviews with ChatGPT and remove the information bottleneck. (2) β-VAE: Replace the information bottleneck with the β-VAE (Higgins et al., 2017) and use the learned variance as the informative level. 11740 Instrument Home Automotive 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 Performance USR NRT PETER PEPLER NETE CEER (Ours) Instrument Home Automotive 0.00 0.02 0.04 0.06 0.08 0.10 0.12 0.14 0.16 0.18 Performance FCR NRT PETER PEPLER NETE CEER (Ours) Instrument Home Automotive 0.000 0.001 0.002 0.003 0.004 0.005 Performance FMR NRT PETER PEPLER NETE CEER (Ours) Figure 2: Explanation quality in terms of USR, FCR, FMR. Table 2: Text quality comparison of all methods in terms of BLEU-1 and BLEU-4. Methods Instruments Home Automotive BLEU-1 BLEU-4 BLEU-1 BLEU-4 BLEU-1 BLEU-4 NRT (2017) 12.8507 0.7109 12.1058 0.6764 15.3298 0.8062 PETER (2021) 14.9794 0.8345 11.5656 0.5737 15.9109 0.8116 PEPLER (2023) 4.5601 0.2289 8.8215 0.4074 2.4393 0.0706 NETE (2020) 13.8053 0.8949 10.6288 0.6059 14.6157 0.6916 SAER (2022) 13.7924 0.1210 12.5891 0.2763 13.9785 0.0000 CEER 16.1862 1.0489 12.6839 0.6887 16.1159 0.9010 Table 3: Rating prediction of all methods in terms of RMSE and MAE. Methods Instruments Home Automotive RMSE MAE RMSE MAE RMSE MAE RGCL (2022) 0.9224 0.6425 1.0612 0.7703 0.9883 0.6842 NRT (2017) 1.0930 0.6751 1.3312 0.8508 1.2184 0.7312 PETER (2021) 0.9412 0.7229 1.1073 0.8487 1.0259 0.7801 PEPLER (2023) 1.0183 0.6556 1.0792 0.7610 1.2007 0.7006 NETE (2020) 0.9400 0.7139 1.1207 0.8691 1.0261 0.7740 SAER (2022) 1.0970 0.9156 1.1731 0.9409 1.0429 0.8131 CEER 0.9210 0.6680 1.0789 0.7490 0.9693 0.6771 4.5 Text Quality Analysis (RQ1) 4.5.1 Word-level Relevance First, we evaluate the text quality of explanations in terms of BLEU-1 and BLEU-4 and exhibit the results in Table 2. We have several observations. 1) The performance of baselines does not consistently improve with increasing complexity across the three datasets. The NRT model is simple and designed for generating abstractive tips, while the PETER, NETE, and SAER have more complicated architectures. Surprisingly, we find that the complex models do not consistently outperform the simpler NRT model in all cases. Sparse data during inference and training hinders the effective learning of user and item embeddings and limits the ability of advanced language models to generate accurate explanations. Although PEPLER, which fine-tunes GPT-2, fails to yield satisfactory results. This may be because they do not have harvest knowledge about ID during pre-training. 2) The proposed CEER outperforms all baselines across the three datasets. Compared to the baseline methods, the CEER framework consistently produces better results. We infer that this improveInstrument Home Automotive variable 0.075 0.050 0.025 0.000 0.025 0.050 0.075 0.100 0.125 Performance Entail Method NRT PETER PEPLER NETE CEER(Ours) Instrument Home Automotive variable Consistency Method NRT PETER PEPLER NETE CEER(Ours) Figure 3: Explanation quality w.r.t Entail and Consistency. ment is due to the inclusion of macro concepts, which offer higher-level semantics that enhance the learning of ID embeddings and provide additional information for generating explanations. 
4.5.2 Rating Prediction We also examine the recommendation performance and present the results in Table 3. We find that the proposed CEER performs consistently better than the explainable recommendation baselines. And it performs even better than the dedicated rating prediction baseline RGCL in most cases. We infer that incorporating the high-level semantics, i.e., macro concepts refines the user and item representations, leading to improved recommendation performance. As a result, our model exhibits strong potential for real-world applications, as it not only offers explanations with high text quality but also delivers accurate rating predictions, addressing the specific requirements of the system. 4.6 Interpretability Quality Analysis (RQ2) 4.6.1 Personalization We also evaluate the degree of informativeness and diversity, i.e., personalization of explanations, and report the results in Figure 2. Notably, in Subsection 4.6, we only select the explainable baselines with superior performance for the RQ1 (i.e., PEPLER, PETER, NETE, and NRT) for comparison. From the results, we have several observations. First, the higher diversity of explanations does not indicate that the features are accurately 11741 included in each explanation. It indicates that improving informativeness and diversity simultaneously is a challenging task. Second, the proposed model generally outperforms the selected baselines in most cases, especially on the Instrument and Automotive datasets. It indicates that the proposed model can not only generate more diverse explanations but also match the corresponding features for the explanations. 4.6.2 Entail & Consistency We evaluate the interpretability from the sentiment perspective and display the results in Figure 3. We notice that the CEER achieves relatively better performance in producing emotionally coherent and factually accurate explanations in most cases. During training, words become associated with the relevant concepts they belong to. This alignment between positive or negative concepts and the corresponding words enables CEER to consistently generate emotionally coherent explanations. Additionally, user and item representations tend to align with the representations of related concepts. It facilitates the incorporation of related facts into the explanation generation process. 4.7 Ablation Study (RQ3) To explore the individual contributions of different tasks to the overall model performance, we analyze three variants and report the results in Table 4. Notably, in Subsections 4.7 and 4.8, we select a subset of metrics for evaluating text quality and interpretability rather than including all available metrics to save layout space. We have several observations: 1) The P-CEER has the worst performance. It does not include any of the designed tasks. The bad performance underscores the necessity of integrating the designated task to enhance model performance. 2) By taking each designed task gradually, CEER, CWCEER, and C-CEER demonstrate performance improvements compared with its predecessor CW-CEER, and C-CEER, P-CEER correspondingly. It demonstrates the effectiveness of each task in addressing the issue of limited generalization caused by sparsity. The overall results validate the effectiveness of the proposed model. 4.8 Hyper-parameter Sensitivity (RQ4) We further analyze the hyper-parameters α, β, γ in the proposed regularization terms. We conduct experiments on the Automotive dataset and present the results in Figure 4. We have several observations. 
1) Across all selected metrics, the model is sensitive to extreme hyper-parameter values. Hyper-parameters that are too large, such as 1, or too small, such as 1e −5, do not lead to the best performance of the model. 2) Different evaluation metrics require different hyper-parameters to reach their optimal values. This observation implies that it is difficult to have a hyper-parameter setting that could achieve optimal results considering multiple evaluation metrics together for the whole model. 3) Compared with β and γ, α generally prefers relatively small values. We infer that the user/item-review relationship modeling task may act as noise and hinder the modeling of the language pattern in the explanations when α is large. All the results imply that the CEER is sensitive to the hyper-parameter values and it is important to choose appropriate values. 4.9 Characteristics Identification Study (RQ5) In this subsection, we first conduct the comparison between different micro characteristics identification methods and display the results in Table 5. Then we present an example from the Amazon Instrument dataset to intuitively show the identified micro characteristics, annotated macro concept, and the generated explanation in Figure 5. Performance Comparison. We compare the information bottleneck method with two other choices using the same annotation prompt, and explanation generator. The two choices are compared on the Amazon Instrument dataset and the Amazon Home dataset, respectively. From the results, we find that direct annotation without information bottleneck negatively affects the model performance. We infer that although LLMs are proficient at text comprehension, they are not trained to assign consistent macro concepts to similar words, which leads to poor performance. And the β-VAE also performs worse than the information bottleneck in most cases. β-VAE does not quantitatively define the importance of the words. And the learned variance may not be a good indicator to differentiate word importance although the small variance may mean that the meaning of the word does not change a lot across contexts, such as stop words. These results jointly demonstrate the effectiveness of the information bottleneck method. Case Study. We showcase the informative level of each word in the historical reviews associated with 11742 Table 4: Ablation study results on three datasets. Methods Instruments Home Automotive BLEU-1 BLEU-4 USR FCR FMR BLEU-1 BLEU-4 USR FCR FMR BLEU-1 BLEU-4 USR FCR FMR P-CEER 12.8870 0.7512 0.2138 0.0158 0.0038 11.6297 0.6568 0.0577 0.0299 0.0002 14.5324 0.7367 0.4688 0.0311 0.0012 C-CEER 14.6413 0.9538 0.3303 0.0250 0.0051 12.3248 0.6598 0.0652 0.0330 0.0003 14.9229 0.8626 0.5402 0.0424 0.0030 CW-CEER 15.8464 0.8565 0.4904 0.0261 0.0038 12.6099 0.6344 0.0791 0.0475 0.0002 15.4576 0.7608 0.6217 0.0473 0.0027 CEER 16.1862 1.0489 0.5552 0.0281 0.0048 12.6839 0.6887 0.0890 0.0431 0.0003 16.1159 0.9010 0.6120 0.0435 0.0044 1e-05 0.0001 0.001 0.01 0.1 1.0 value 15.0 15.5 16.0 16.5 BLEU-1 Automotive 1e-05 0.0001 0.001 0.01 0.1 1.0 value 0.80 0.85 0.90 0.95 BLEU-4 Automotive 1e-05 0.0001 0.001 0.01 0.1 1.0 value 0.4 0.5 0.6 0.7 0.8 USR Automotive 1e-05 0.0001 0.001 0.01 0.1 1.0 value 0.03 0.04 0.05 0.06 FCR Automotive 1e-05 0.0001 0.001 0.01 0.1 1.0 value 0.002 0.004 0.006 FMR Automotive Figure 4: Hyper-parameter sensitivity results of different regularization terms on Automotive dataset. Figure 5: Examples from the Instrument dataset. the given user-item pair. 
Case Study. We showcase the informative level of each word in the historical reviews associated with the given user-item pair. Then, we show the annotated concepts, associated with the user and item in this example, that drive the model to generate the explanation. First, we visualize the informative level of each word representation in two historical reviews associated with user 543 and item 504, respectively. We observe that the keywords that explain the reason for the rating are assigned significantly lower amounts of noise. For example, in the first review of subfigure (a), the words "costly" and "price" describe the reason for the user's dissatisfaction, and their informative levels are very high. This demonstrates that the first component can select informative words, which in turn reduces the cost of having LLMs annotate concepts.

Table 5: Model performance comparison with different micro characteristics identification methods.

Instrument
Method    BLEU-1   BLEU-4   USR      FCR      FMR
GPT-3.5   15.9572  0.8701   0.4763   0.0233   0.0038
CEER      16.1862  1.0489   0.5552   0.0281   0.0048

Home
Method    BLEU-1   BLEU-4   USR      FCR      FMR
β-VAE     12.3631  0.7028   0.0819   0.0389   0.0003
CEER      12.6839  0.6887   0.0890   0.0431   0.0003

Second, we compare the explanations generated by CEER and P-CEER (i.e., the plain transformer model) for the interaction between user 543 and item 504, shown in subfigure (b). The target rating is 5. We observe that P-CEER only describes the user's positive sentiment towards the item and does not contain any useful information, whereas the explanation generated by CEER reveals that the high rating is due to the price and the positive quality. We also exhibit the top two prioritized concepts and their associated words in subfigure (c). This shows, to a certain extent, that the high-level concept does drive the transformer to generate semantically relevant sentences, and the associated words demonstrate that the LLM can appropriately assign a concept label to semantically similar words.

5 Related Work

5.1 Sentence-based Explanation Generation

Compared with examining which feature plays a more important role in driving the interaction (Wang et al., 2018a, 2022b), sentence-based explanations for rating prediction offer natural-language information about why the user assigns particular ratings (Hua et al., 2023; Wu et al., 2024). Some efforts only manipulate the historical reviews to generate the explanations (Le and Lauw, 2020; Pugoy and Kao, 2021), while others integrate various kinds of auxiliary information into explanation generation. Knowledge graphs provide an organized way to access facts (Chen et al., 2024; Shengyuan et al., 2024); CETP (Li et al., 2021a) selects triples from the knowledge graph and incorporates them into explanation generation. Further, METER (Geng et al., 2022; Liu et al., 2023) incorporates multi-modality information to encourage the model to write more faithful and diverse explanations. LLMs have strong text manipulation abilities (Dong et al., 2024; Zhang et al., 2024); PRAG (Xie et al., 2023) leverages LLMs and reformulates explanation generation as a question-answering task. Additionally, the tips written by users reveal how they feel about the interaction and can also be utilized to generate a comprehensive explanation (Zhu et al., 2023). To generate coherent explanations, AESG incorporates the syntax-graph dependency tree to improve the quality of explanations (Hu et al., 2023).

5.2 Review-based Rating Prediction

Review-based rating prediction involves predicting a numerical rating for a user-item pair by modeling the text of the review provided by the user (Harrag et al., 2019).
An early effort introduces a review-level attention mechanism to model reviews for rating prediction (Chen et al., 2018), but it ignores the fine-grained information in the review. NPA (Wu et al., 2019) leverages user IDs to attentively incorporate word-level and review-level information, improving performance. Further, DAML (Liu et al., 2019) exploits the mutual information between the user and the item extracted from the information that a convolutional neural network attends to. Similarly, CARP (Li et al., 2019) models mutual information and uses a capsule network to extract high-level information. EDMF (Liu et al., 2022) extracts useful features from the review text to further improve performance. Graph neural networks (GNNs) capture global relationships between nodes that are indirectly connected (Dong et al., 2023; Chen et al., 2020a); several efforts adopt GNNs to model user-item relationships, with review features regarded as edge features (Qiao et al., 2022; Shuai et al., 2022).

6 Conclusion and Future Work

In this paper, we present the CEER framework, designed to enrich user and item embeddings with high-level semantics, i.e., macro concepts, to improve explanation generation. The framework identifies micro characteristics of items from associated reviews and utilizes LLMs to annotate macro concepts. These macro concepts act as intermediaries that align the user and item embeddings with review embeddings to obtain more semantic information. We achieve this through three tasks that establish relationships between users/items, macro concepts, and reviews. We conduct extensive experiments, and the results confirm the effectiveness of CEER. Our future work will involve adapting our method to annotate concepts from other data sources, such as images, and enhancing the explanation generation process (Li et al., 2023a).

Acknowledgments

The work described in this paper was fully supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. PolyU 25208322).

Limitations

In this study, we annotate macro concepts from reviews to assist explanation generation. The existing framework is constrained to annotating macro concepts solely from text and lacks the capability to extend to additional data sources such as images. In addition, we annotate macro concepts from a series of micro item characteristics, i.e., words, whose semantic meaning may differ across contexts. Moreover, we employ LLMs for annotating macro concepts and train a small language model for explanation generation instead of fine-tuning LLMs. While a fine-tuned GPT-2 may outperform the proposed framework when reviews are abundant, our framework is relatively more effective in situations with extremely sparse reviews and does not incur significant computational costs.

Ethics Statement

In this study, all the Amazon datasets are publicly available and have been extensively used in research on recommendation systems. All the baseline codes are open-sourced, and we adhere to the ACL Code of Ethics.1

1 https://www.acm.org/code-of-ethics

References

Alexander A Alemi, Ian Fischer, Joshua V Dillon, and Kevin Murphy. 2016. Deep variational information bottleneck. In International Conference on Learning Representations.

Krisztian Balog, Filip Radlinski, and Shushan Arakelyan. 2019. Transparent, scrutable and explainable user models for personalized recommendation. In SIGIR, pages 265–274.
Chong Chen, Min Zhang, Yiqun Liu, and Shaoping Ma. 2018. Neural attentional rating regression with review-level explanations. In WWW, pages 1583– 1592. Hao Chen, Yue Xu, Feiran Huang, Zengde Deng, Wenbing Huang, Senzhang Wang, Peng He, and Zhoujun Li. 2020a. Label-aware graph convolutional networks. In Proceedings of the 29th ACM international conference on information & knowledge management, pages 1977–1980. Shengyuan Chen, Qinggang Zhang, Junnan Dong, Wen Hua, Qing Li, and Xiao Huang. 2024. Entity alignment with noisy annotations from large language models. Zhongxia Chen, Xiting Wang, Xing Xie, Mehul Parsana, Akshay Soni, Xiang Ao, and Enhong Chen. 2020b. Towards explainable conversational recommendation. In IJCAI, pages 2994–3000. Zhongxia Chen, Xiting Wang, Xing Xie, Tong Wu, Guoqing Bu, Yining Wang, and Enhong Chen. 2019. Coattentive multi-task learning for explainable recommendation. In IJCAI, pages 2137–2143. Hao Cheng, Shuo Wang, Wensheng Lu, Wei Zhang, Mingyang Zhou, Kezhong Lu, and Hao Liao. 2023. Explainable recommendation with personalized review retrieval and aspect learning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 51–64. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL, pages 4171–4186. Junnan Dong, Qinggang Zhang, Xiao Huang, Keyu Duan, Qiaoyu Tan, and Zhimeng Jiang. 2023. Hierarchy-aware multi-hop question answering over knowledge graphs. In Proceedings of the ACM Web Conference 2023, pages 2519–2527. Junnan Dong, Qinggang Zhang, Chuang Zhou, Hao Chen, Daochen Zha, and Xiao Huang. 2024. Costefficient knowledge-based question answering with large language models. Le Fang, Chunyuan Li, Jianfeng Gao, Wen Dong, and Changyou Chen. 2019. Implicit deep latent variable models for text generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3946–3956. Xianghong Fang, Jian Li, Lifeng Shang, Xin Jiang, Qun Liu, and Dit-Yan Yeung. 2022. Controlled text generation using dictionary prior in variational autoencoders. In Findings of the Association for Computational Linguistics: ACL 2022, pages 97–111. Jingyue Gao, Xiting Wang, Yasha Wang, and Xing Xie. 2019. Explainable recommendation through attentive multi-view learning. In AAAI, pages 3622–3629. Shijie Geng, Zuohui Fu, Yingqiang Ge, Lei Li, Gerard de Melo, and Yongfeng Zhang. 2022. Improving personalized explanation generation through visualization. In ACL, pages 244–255. Chaoyu Guan, Xiting Wang, Quanshi Zhang, Runjin Chen, Di He, and Xing Xie. 2019. Towards a deep and unified understanding of deep neural models in nlp. In International conference on machine learning, pages 2454–2463. PMLR. Fouzi Harrag, AbdulMalik Al-Salman, and Alaa Alqahtani. 2019. Prediction of reviews rating: A survey of methods, techniques and hybrid architectures. Journal of Digital Information Management, 17(3):164. Ruining He and Julian J. McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In WWW, pages 507–517. Irina Higgins, Loic Matthey, Arka Pal, Christopher P Burgess, Xavier Glorot, Matthew M Botvinick, Shakir Mohamed, and Alexander Lerchner. 2017. 
beta-vae: Learning basic visual concepts with a constrained variational framework. ICLR (Poster), 3. Yidan Hu, Yong Liu, Chunyan Miao, Gongqi Lin, and Yuan Miao. 2023. Aspect-guided syntax graph learning for explainable recommendation. IEEE Transactions on Knowledge and Data Engineering, 35(8):7768–7781. Wenyue Hua, Lei Li, Shuyuan Xu, Li Chen, and Yongfeng Zhang. 2023. Tutorial on large language models for recommendation. In Proceedings of the 17th ACM Conference on Recommender Systems, RecSys 2023, Singapore, Singapore, September 1822, 2023, pages 1281–1283. ACM. Trung-Hoang Le and Hady W. Lauw. 2020. Synthesizing aspect-driven recommendation explanations from reviews. In IJCAI, pages 2427–2434. Chenliang Li, Cong Quan, Li Peng, Yunwei Qi, Yuming Deng, and Libing Wu. 2019. A capsule network for recommendation and explaining what you like and dislike. In SIGIR, pages 275–284. 11745 Junyi Li, Wayne Xin Zhao, Zhicheng Wei, Nicholas Jing Yuan, and Ji-Rong Wen. 2021a. Knowledge-based review generation by coherence enhanced text planning. In SIGIR, pages 183–192. Lei Li, Li Chen, and Yongfeng Zhang. 2020a. Towards controllable explanation generation for recommender systems via neural template. In WWW Demo. Lei Li, Yongfeng Zhang, and Li Chen. 2020b. Generate neural template explanations for recommendation. In CIKM, pages 755–764. Lei Li, Yongfeng Zhang, and Li Chen. 2021b. Personalized transformer for explainable recommendation. In ACL, pages 4947–4957. Lei Li, Yongfeng Zhang, and Li Chen. 2022. Personalized prompt learning for explainable recommendation. arXiv, abs/2202.07371. Lei Li, Yongfeng Zhang, and Li Chen. 2023a. Personalized prompt learning for explainable recommendation. ACM Trans. Inf. Syst., 41(4):103:1–103:26. Lei Li, Yongfeng Zhang, and Li Chen. 2023b. Prompt distillation for efficient llm-based recommendation. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, CIKM 2023, Birmingham, United Kingdom, October 21-25, 2023, pages 1348–1357. ACM. Piji Li, Zihao Wang, Zhaochun Ren, Lidong Bing, and Wai Lam. 2017. Neural rating regression with abstractive tips generation for recommendation. In SIGIR, pages 345–354. Donghua Liu, Jing Li, Bo Du, Jun Chang, and Rong Gao. 2019. DAML: dual attention mutual learning between ratings and reviews for item recommendation. In KDD, pages 344–352. Hai Liu, Chao Zheng, Duantengchuan Li, Xiaoxuan Shen, Ke Lin, Jiazhang Wang, Zhen Zhang, Zhaoli Zhang, and Neal N. Xiong. 2022. EDMF: efficient deep matrix factorization with review feature learning for industrial recommender system. IEEE Transactions on Industrial Informatics, 18(7):4361–4371. Zhuang Liu, Yunpu Ma, Matthias Schubert, Yuanxin Ouyang, Wenge Rong, and Zhang Xiong. 2023. Multimodal contrastive transformer for explainable recommendation. IEEE Transactions on Computational Social Systems. Renze Lou, Kai Zhang, and Wenpeng Yin. 2023. Is prompt all you need? no. A comprehensive and broader view of instruction learning. arXiv, abs/2303.10475. Jianmo Ni, Jiacheng Li, and Julian J. McAuley. 2019. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In EMNLP, pages 188–197. Sicheng Pan, Dongsheng Li, Hansu Gu, Tun Lu, Xufang Luo, and Ning Gu. 2022. Accurate and explainable recommendation via review rationalization. In WWW, pages 3092–3101. Reinald Adrian Pugoy and Hung-Yu Kao. 2021. Unsupervised extractive summarization-based representations for accurate and explainable collaborative filtering. In ACL, pages 2981–2990. 
Pengpeng Qiao, Zhiwei Zhang, Zhetao Li, Yuanxing Zhang, Kaigui Bian, Yanzhou Li, and Guoren Wang. 2022. Tag: Joint triple-hierarchical attention and gcn for review-based social recommender system. IEEE Transactions on Knowledge and Data Engineering. Chen Shengyuan, Yunfeng Cai, Huang Fang, Xiao Huang, and Mingming Sun. 2024. Differentiable neuro-symbolic reasoning on large-scale knowledge graphs. Advances in Neural Information Processing Systems, 36. Jie Shuai, Kun Zhang, Le Wu, Peijie Sun, Richang Hong, Meng Wang, and Yong Li. 2022. A reviewaware graph contrastive learning framework for recommendation. In SIGIR, pages 1283–1293. N TISHBY. 2000. The information bottleneck method. In Proceedings of the 37-thAnnual Allerton Conference on Communication, 2000. Haoqin Tu and Yitong Li. 2022. An overview on controllable text generation via variational auto-encoders. CoRR, abs/2211.07954. Haoqin Tu, Zhongliang Yang, Jinshuai Yang, Siyu Zhang, and Yongfeng Huang. 2022. Adavae: Exploring adaptive gpt-2s in variational auto-encoders for language modeling. CoRR, abs/2205.05862. Hongwei Wang, Fuzheng Zhang, Jialin Wang, Miao Zhao, Wenjie Li, Xing Xie, and Minyi Guo. 2018a. Ripplenet: Propagating user preferences on the knowledge graph for recommender systems. In CIKM, pages 417–426. Linlin Wang, Zefeng Cai, Gerard de Melo, Zhu Cao, and Liang He. 2023. Disentangled cvaes with contrastive learning for explainable recommendation. In AAAI, pages 13691–13699. Peng Wang, Renqin Cai, and Hongning Wang. 2022a. Graph-based extractive explainer for recommendations. In WWW, pages 2163–2171. Shendi Wang, Haoyang Li, Caleb Chen Cao, Xiao-Hui Li, Ng Ngai Fai, Jianxin Liu, Xun Xue, Hu Song, Jinyu Li, Guangye Gu, and Lei Chen. 2022b. Tower bridge net (tb-net): Bidirectional knowledge graph aware embedding propagation for explainable recommender systems. In ICDE, pages 3268–3279. Xiting Wang, Yiru Chen, Jie Yang, Le Wu, Zhengtao Wu, and Xing Xie. 2018b. A reinforcement learning framework for explainable recommendation. In ICDM, pages 587–596. 11746 Chuhan Wu, Fangzhao Wu, Mingxiao An, Jianqiang Huang, Yongfeng Huang, and Xing Xie. 2019. NPA: neural news recommendation with personalized attention. In KDD, pages 2576–2584. Xuansheng Wu, Haiyan Zhao, Yaochen Zhu, Yucheng Shi, Fan Yang, Tianming Liu, Xiaoming Zhai, Wenlin Yao, Jundong Li, Mengnan Du, et al. 2024. Usable xai: 10 strategies towards exploiting explainability in the llm era. arXiv preprint arXiv:2403.08946. Zhouhang Xie, Sameer Singh, Julian J. McAuley, and Bodhisattwa Prasad Majumder. 2023. Factual and informative review generation for explainable recommendation. In AAAI, pages 13816–13824. Aobo Yang, Nan Wang, Renqin Cai, Hongbo Deng, and Hongning Wang. 2022. Comparative explanations of recommendations. In WWW, pages 3113–3123. Aobo Yang, Nan Wang, Hongbo Deng, and Hongning Wang. 2021. Explanation as a defense of recommendation. In WSDM, pages 1029–1037. Qinggang Zhang, Junnan Dong, Hao Chen, Xiao Huang, Daochen Zha, and Zailiang Yu. 2023. Knowgpt: Black-box knowledge injection for large language models. arXiv preprint arXiv:2312.06185. Qinggang Zhang, Junnan Dong, Hao Chen, Wentao Li, Feiran Huang, and Xiao Huang. 2024. Structure guided large language model for sql generation. arXiv preprint arXiv:2402.13284. Wei Zhang, Junbing Yan, Zhuo Wang, and Jianyong Wang. 2022. Neuro-symbolic interpretable collaborative filtering for attribute-based recommendation. In WWW, pages 3229–3238. Jihua Zhu, Yujiao He, Guoshuai Zhao, Xuxiao Bu, and Xueming Qian. 2023. 
Joint reason generation and rating prediction for explainable recommendation. IEEE Transactions on Knowledge and Data Engineering, 35(5):4940–4953.

A Appendix

A.1 Notations

In this work, matrices are symbolized by uppercase bold letters (e.g., E), vectors are represented by lowercase bold letters (e.g., e), and scalars are represented by lowercase letters (e.g., $r_{ij}$). We denote sets with calligraphic letters (e.g., U) and denote their size using cardinality (e.g., |U|). The i-th row of matrix E is denoted as $e_i$, whereas the element located in the i-th row and j-th column is denoted as $E_{ij}$.

A.2 Dataset Details

These datasets consist of customer reviews and ratings of items. Following previous practice (Li et al., 2020b; Ni et al., 2019), we regard the sentence in the review that explains the purchase motivation as the explanation. We extract explanations and item features (Yang et al., 2021) from the given reviews using ChatGPT; both are used in the evaluation metrics introduced below. To ensure that the users and items under evaluation appear during training, we keep only users and items that have at least one record in the training set (Li et al., 2021b). The interactions are first sorted by their timestamps and then divided into training, validation, and test sets with a proportion of 8 : 1 : 1. The rating scores in the three datasets range from negative to positive sentiment over {1, 2, 3, 4, 5}.

A.3 Evaluation Metrics

Here, the symbol ↑ after a metric means that a higher score is better, while ↓ means that a lower score is better. In the first group, we view the explanations as plain text and evaluate text quality. (i) We adopt the widely used BLEU-1 (↑) and BLEU-4 (↑) metrics to measure the word-level relevance of the generated explanations to the ground-truth explanations. (ii) The user and item serve as the soft prompt to generate explanations, with their representations containing textual information. We adopt two classic metrics, MAE (↓) and RMSE (↓), to evaluate the ratings predicted from the user and item representations with Eq. (10) (Li et al., 2017, 2020b, 2021b). In the second group, we examine the interpretability of the generated explanations from two aspects. (i) To measure the degree of personalization, we adopt the Unique Sentence Ratio (USR, ↑), Feature Coverage Ratio (FCR, ↑), and Feature Matching Ratio (FMR, ↑), following previous efforts (Li et al., 2020b, 2021b). Specifically, USR assesses the diversity of generated explanations, while FCR and FMR measure the diversity and accuracy of generated features. (ii) To examine semantic-level relevance, we adopt Entail (↑) (Xie et al., 2023) to estimate the extent to which the generated explanation entails the label explanation. To measure the emotional consistency between the predicted rating and the generated explanation, we adopt Consistency (↑) (Wang et al., 2018b). Consistent with prior practice (Xie et al., 2023), we employ a pre-trained entailment model to assess whether the generated explanations entail the ground truths. Similarly, following previous work (Wang et al., 2018b), we employ a sentiment classification model to estimate sentiment scores for the generated explanations and calculate their correlation with the predicted ratings.
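As an illustration of the personalization metrics above, the sketch below computes USR, FCR, and FMR for a batch of generated explanations. It follows the definitions popularized by Li et al. (2020b, 2021b) as described here, but the implementation details (tokenization, substring-based feature matching) are assumptions rather than the authors' code.

```python
def personalization_metrics(generated, gt_features, all_features):
    """USR, FCR, FMR for a list of generated explanations.

    generated:    list[str], one explanation per test interaction
    gt_features:  list[str], the ground-truth feature for each interaction
    all_features: set[str], the full feature vocabulary of the dataset
    (Matching is done with simple substring tests here, which is an assumption.)
    """
    # USR: fraction of distinct sentences among all generated explanations
    usr = len(set(generated)) / len(generated)

    # FCR: how many distinct features are covered by the generations
    covered = {f for text in generated for f in all_features if f in text}
    fcr = len(covered) / len(all_features)

    # FMR: fraction of explanations mentioning their ground-truth feature
    fmr = sum(f in text for text, f in zip(generated, gt_features)) / len(generated)
    return usr, fcr, fmr
```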
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11749–11765, August 11-16, 2024. ©2024 Association for Computational Linguistics

How to Engage Your Readers?♦ Generating Guiding Questions to Promote Active Reading

Peng Cui1, Vilém Zouhar1, Xiaoyu Zhang2, Mrinmaya Sachan1
ETH Zürich Department of Computer Science1, ETH AI Center2
[email protected]

Abstract

Using questions in written text is an effective strategy to enhance readability. However, what makes a good active reading question, what the linguistic role of these questions is, and what their impact on human reading is remain understudied. We introduce GUIDINGQ, a dataset of 10K in-text questions from textbooks and scientific articles. By analyzing the dataset, we present a comprehensive understanding of the use, distribution, and linguistic characteristics of these questions. Then, we explore various approaches to generate such questions using language models. Our results highlight the importance of capturing inter-question relationships and the challenge of question position identification in generating these questions. Finally, we conduct a human study to understand the implications of such questions for reading comprehension. We find that the generated questions are of high quality and are almost as effective as human-written questions in terms of improving readers' memorization and comprehension. github.com/eth-lre/engage-your-readers

1 Introduction

Questions play an important role in reading comprehension. Through actively raising questions and seeking answers from the content during reading, readers can deeply engage with the text and achieve better comprehension (Bharuthram, 2017; Syamsiah et al., 2018). However, asking good questions is challenging and requires complex skills. How can we facilitate readers' active thinking and questioning during reading?♥1 An effective approach could be presenting valuable questions explicitly in the text, which is a recognized strategy to enhance readability and engage readers (Haggan, 2004). An example is shown in Figure 1.

[1] Superscript symbols indicate the roles of questions in this paper, detailed in Section 3.

Figure 1: We generate interconnected questions with diverse rhetorical functions during reading to engage readers and improve comprehension. (The figure shows an example passage annotated with questions Q1 "Are the Values Central to Business Ethics Universal?", Q2 "Is there a set of universal values that all can endorse?", Q3 "Are there 'human values' that apply everywhere despite differences in time, place, and culture?", Q4 "Can business ethics provide one (solution to corruption)?", and Q5 "What, then, is missing from humanistic business?", labeled with roles such as Arouse Interest, Frame Purpose, and Organize Discourse.)

The writer first uses a title question (Q1) to arouse the interest of potential readers. Then, a group of questions (Q2-3) surfaces in the beginning to introduce the central topics to be explored.
In what follows, more questions (Q4-5) are raised and elaborated as the discussion progresses, holding the reader's attention throughout the reading. From a linguistic perspective, these questions not only build up a coherent discourse structure (Curry and Chambers, 2017), but also serve as a communicative device that constructs a virtual dialogue between the writer and potential readers, thereby making the text more engaging and interactive (Hyland, 2002). In this paper, we refer to these in-text questions as guiding questions. Despite being widely used, there is little understanding of the effect of such questions on human reading. To fill this gap, we analyze how expert writers use guiding questions and explore how to model these questions with advanced language models. Further, we hypothesize that these author-posed questions can complement and encourage readers' spontaneous self-questioning, thereby fulfilling the goal of active reading. Based on these motivations, we address the following research questions:

• RQ1: What is the use, distribution, and role of guiding questions in formal writing?♣
• RQ2: How well can language models understand and generate guiding questions?♣
• RQ3: What is the effect of these questions on human reading comprehension?♣

To answer these questions, we start by curating GUIDINGQ, a dataset of 10,577 guiding questions from research articles and textbooks. Since the two source texts are written by expert writers, we assume the questions are carefully designed to enhance readability and are thus ideal for our research. Through qualitative and quantitative analysis of GUIDINGQ, we summarize the question roles based on their discourse and interactional effects, and present their usage and distribution across the domains (RQ1, Sections 3 and 4). Then, we explore various approaches to model these questions from three interrelated aspects (RQ2, Section 5): identifying question positions (where to ask), predicting question focus (what to ask), and finally generating the questions (how to ask). Our results highlight the importance of inter-question relationships and the challenge of question position identification in generating guiding questions. Finally, to validate whether and how the generated guiding questions can facilitate active reading (RQ3, Section 6), we conduct a carefully designed human study where participants read articles with or without questions, complete a post-reading summarization test, and later evaluate the questions. The results demonstrate that the generated questions are not only of high quality but can also help readers produce better summaries, indicating improved memory retention and understanding of the high-level information in the article.

2 Related Work

Here we focus on discussing the linguistic role of questions. Questions can be used as a tool for discourse planning. The Question Under Discussion (QUD) framework uses questions to interpret the discourse relationship between textual units within a document (Van Kuppevelt, 1995; Roberts, 2012; Benz and Jasinskaja, 2017). For example, the relationship between "Sa: A night of largely peaceful protests ended early Monday in a bloody" and "Sb: Hours earlier, Egypt's new interim leadership had narrowed in on a compromise candidate to serve as the next prime minister." can be described by the question "What happened before the clash?", where Sa, which elicits the question, is called the anchor sentence and Sb is the answer sentence.
Studying QUD with modern NLP techniques is a relatively new field where most efforts focus on data construction (De Kuthy et al., 2018; Westera et al., 2020; Ko et al., 2022; Wu et al., 2023). While QUD provides a more flexible way to represent discourse connections than frameworks with predefined, fixed relationships (e.g., elaboration, condition) such as Rhetorical Structure Theory (RST; Mann and Thompson, 1988), how to use these implicit questions to assist readers is not straightforward. In this paper, we instead focus on questions that are explicitly presented in a text by the author, which we call guiding questions. We argue such questions are more powerful: besides their discourse effect, these questions directly interact with readers and therefore have the potential to influence readers' behaviors (Curry and Chambers, 2017). For example, articles titled with questions are found to obtain more downloads (Jamali and Nikzad, 2011). In terms of the goal, QUD strives to explore the dense space of questions to interpret exhaustive intra-document relationships, while we aim to distill sparse but crucial questions to engage readers without overwhelming them. We highlight the question position identification task as a preliminary step before generating questions. This dimension is neglected by previous QUD or generic Question Generation (QG) studies. However, when questions are used as a communicative tool to interact with readers during reading, when and where to raise them is arguably important. We hope this study sheds light on a greater understanding of the implications of questions in reading comprehension.

3 Taxonomy of Guiding Questions

This section describes our taxonomy of guiding questions. The taxonomy is built upon the discussion in Hyland (2002), which we adapt based on our dataset (Section 4). In particular, we classify questions into five different roles based on their different discourse or interactional effects. Their definitions and examples are provided below.

• Arouse Interest.♦ The first category refers to questions that appear in titles. Since the title is generally the reader's first encounter with a text, formulating it as an intuitive question can grab the reader's attention.

Questions in titles: How do Philosophers arrive at truth? Is there no quantum form of Einstein Gravity? Why do house-hunting ants recruit in both directions?

Figure 2: The construction pipeline of GUIDINGQ. Important outputs are highlighted. (The figure illustrates the annotation steps S1–S6 — question extraction, question completion, question answering, evidence extraction, role identification, and quality review — applied to textbook chapters and research articles, carried out by heuristics/algorithms, an LLM, and human annotators.)

Note that although the title question usually reveals the main topic of an article, the breadth of the content is not always encapsulated by it.

• Frame Purpose.♣ This type of question often surfaces and clusters in the beginning section to foreground the central topics to be explored.

Several aspects of this theory need further investigation. Is it possible to achieve predictable refractive changes? Can this be achieved through an intact epithelium? ... This paper describes the use of a novel device ...

Writers pose these questions to provide an agenda for the article and then pick them up again in later sections to direct readers through the reading.

• Organize Discourse.♠ Questions can also serve as subheadings to structure the text, guiding readers by explicitly introducing shifts in information and identifying what will be discussed in the ensuing section.

What are the advances of telecommuting? The term telecommuting emerged in the 1970s to ... What are the drawbacks of telecommuting? In 2013, Yahoo's then-CEO, Marissa Mayer, ended ... What are the ethical challenges of telecommuting? ...

Noticeably, such questions usually appear multiple times throughout an article, collectively creating a sense of progression toward a greater understanding of the topic.

• Establish Claim.♥ Another use of questions is to introduce and emphasize the writer's arguments rather than to seek the reader's interaction or viewpoint.

What contributes to a corporation's positive image over the long term? Many factors contribute, including a reputation for treating customers and employees fairly and for engaging in business honestly.

A distinct feature of such questions is that the writer often provides a clear answer (i.e., the argument), usually close to the question, thereby limiting the reader's alternative interpretations to the preferred one.

• Provoke Thought.★ Finally, there are some "genuine" questions that do not anticipate specific responses within the text. Therefore, they can facilitate the reader's active thinking to the greatest extent.

If the technological resources of today's governments had been available to the East German Stasi and the Romanian Securitate, would those repressive regimes have fallen? How much privacy and freedom should citizens sacrifice to feel safe? [END]

It is worth noting that different roles are not mutually exclusive. For instance, an AROUSE INTEREST question may also provoke thoughts and vice versa. Nevertheless, we focus on understanding the main role of a question.

4 GUIDINGQ DATASET

In this section, we first discuss the choice and rationale of source texts (§ 4.1), followed by the construction pipeline (Figure 2) of the GUIDINGQ dataset (§ 4.2). Then, we present a series of distributional features of guiding questions (§ 4.3).

4.1 Source Texts

We select scientific articles and textbooks as the source texts to build the dataset. Our choice is based on two considerations. First, their writer-reader discourses have a clear communicative intent, either peer-to-peer or teacher-to-student, which can motivate the use of questions.
Second, they are formal texts written by experts, ensuring that the questions presented in them are strategically used to enhance readability. Specifically, we use textbook chapters collected from OpenStax2, an online free publisher (Singh et al., 2023), and research papers from the arXiv and PubMed datasets (Cohan et al., 2018).

[2] openstax.org

4.2 Construction Pipeline

We describe the main steps of collecting and annotating GUIDINGQ below.

S1: Question Extraction. We start by extracting questions from the source texts by detecting interrogative marks. We only keep documents with at least three questions, indicating that the writer actively used questions in the writing.

S2: Question Completion. Since the extracted questions are part of the source texts, they are not always semantically complete due to omissions or unclear pronouns, e.g., "What central point might constitute such a code?" Therefore, we first identify and complete such questions based on the context, as this is a prerequisite for understanding their meaning.

S3: Question Answering. Next, we generate the answer to each question. In particular, the answer should be detailed enough and based solely on the article. Therefore, we use the article's words as the answer whenever possible. If a question is not discussed in the article (e.g., a PROVOKE THOUGHT question), we label it as "NO ANSWER."

S4: Evidence Extraction. Given a produced answer, we automatically extract supporting sentences from the article as evidence. We do this by greedily searching for a set of sentences that has the maximum Rouge score with the answer, which is the standard way to find Oracle sentences for extractive summarization systems (Nallapati et al., 2017); a sketch of this greedy search is given at the end of this subsection. For questions without answers as per S3, we directly label them as "NO EVIDENCE."

S5: Question Role Identification. Finally, we identify the role of each question. We make this the last step because the information collected in the previous steps (complete question, answer, evidence) helps in understanding the role of a question. Recent studies have shown that LLMs with carefully designed prompts are comparable to or even better than human annotators (He et al., 2023; Zhang et al., 2023; Törnberg, 2023). To reduce human effort and enable data scaling in the future, we use ChatGPT to complete the annotation steps S2, S3, and S5. See prompts in Tables 8, 9, and 10.

S6: Quality Control. We adopt a human-LLM co-annotation paradigm (Li et al., 2023). Concretely, we ask the model to provide a confidence level for each of its outputs. Examples below a certain level are reviewed by human annotators, followed by an overall quality review, detailed in Appendix A.
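The greedy search in S4 can be sketched as follows. This is an illustrative reconstruction of the standard oracle-extraction procedure (Nallapati et al., 2017) rather than the authors' exact code; the use of the rouge_score package, the ROUGE-1/2 average, and the sentence budget are assumptions.

```python
from rouge_score import rouge_scorer

_scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2"], use_stemmer=True)

def _gain(selected, answer):
    """Average ROUGE-1/2 F1 of the currently selected evidence vs. the answer."""
    scores = _scorer.score(answer, " ".join(selected))
    return (scores["rouge1"].fmeasure + scores["rouge2"].fmeasure) / 2

def greedy_evidence(article_sentences, answer, max_sentences=5):
    """Greedily add the sentence that most improves ROUGE with the answer."""
    selected, best = [], 0.0
    while len(selected) < max_sentences:
        candidate, candidate_score = None, best
        for sent in article_sentences:
            if sent in selected:
                continue
            score = _gain(selected + [sent], answer)
            if score > candidate_score:
                candidate, candidate_score = sent, score
        if candidate is None:   # no sentence improves the score: stop
            break
        selected.append(candidate)
        best = candidate_score
    return selected
```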
4.3 GUIDINGQ Analysis

The statistics of the GUIDINGQ dataset are summarized in Table 1. In general, textbook chapters use questions slightly more frequently than research articles, possibly because teacher-to-student interactions are more inclined to involve questions than peer-to-peer ones.

Table 1: Statistics of the GUIDINGQ dataset.
                           Textbook   Research Articles
# documents                621        1,501
# avg. words / doc.        2,404      2,140
# questions                3,593      6,964
# avg. words / question    13.8       21.0
# avg. questions / doc.    5.79       4.63

What is the distribution of question roles?♠ In Figure 3, we compare the distributions of different questions on the two subsets. As can be seen, research articles contain more FRAME PURPOSE and ORGANIZE DISCOURSE questions, while textbooks favor PROVOKE THOUGHT questions, possibly due to their educational purpose. Besides, AROUSE INTEREST (title) questions are more common in articles than in textbooks, although both proportions are small. A possible reason is that research articles exist in a competitive environment where potential readers are confronted with a large number of papers, under which circumstances interrogative titles could help attract readers (Haggan, 2004; Jamali and Nikzad, 2011).

Figure 3: Distribution of the different question roles (Arouse Interest, Frame Purpose, Organize Discourse, Establish Claim, Provoke Thought) in the textbook and article subsets.

How diversely do human writers use guiding questions?♠ To understand this, we measure the number of question roles used in an article. The results are shown in Table 2. Since research articles contain fewer questions per article, their question roles are slightly less diverse than those in textbooks. Overall, the majority of articles (70%+) use fewer than three types of questions, suggesting room for improvement in human questioning strategies.

Table 2: Distribution (%) of the number of unique question roles in one article.
# Question Types    1      2      3      4     5
Textbooks           21.1   44.8   26.8   7.0   0.2
Articles            35.2   45.5   17.2   2.0   <0.1

Where are guiding questions asked?♠ We divide articles into five equally sized segments and calculate the percentage of questions that appear in each part. As shown in Figure 4, there is a strong positional bias among different questions. FRAME PURPOSE questions mostly appear in the first and second segments. ORGANIZE DISCOURSE questions are largely located in the middle segments, while PROVOKE THOUGHT questions tend to emerge in the last segment, consistent with their expected functions. ESTABLISH CLAIM questions are skewed towards the end of the text. A possible explanation is that writers make more and more conclusive arguments as the discussion progresses.

Figure 4: Position distribution of the different questions across five equally sized document segments. We omit AROUSE INTEREST questions as they are in titles by definition.

Where are guiding questions answered?♠ We measure the distance between a question and its farthest evidence sentence (if any) in Figure 5. This can be considered the scope over which a question acts on the discourse of the document, i.e., from when the question is raised to when it is closed. In this sense, PROVOKE THOUGHT questions go beyond the content, as they do not have specific answers. We find that the two subsets show a similar distribution. FRAME PURPOSE questions serve as the outline of a document and therefore have the largest range. ORGANIZE DISCOURSE questions are usually answered within one or two paragraphs, consistent with their subheading role. In contrast, ESTABLISH CLAIM questions have prompt answers as the claims.

Figure 5: The average distance (in terms of sentence numbers) between a question and its farthest evidence.

Figure 6: Distribution of RST discourse relationships (e.g., Span, Elaboration, Same-unit, Joint, Attribution, Explanation, Contrast) with respect to questions and overall units. Relationships accounting for less than 1% are omitted.
How are questions related to other text units?♠ As guiding questions are integral to the article, we also analyze the discourse relationships between questions and other text units3 within the same document using an RST (Mann and Thompson, 1988) parser4. In Figure 6, we compare the distribution of discourse relationships concerning questions with those of all text units. The analysis reveals that questions exhibit a higher proportion of concrete relationships, such as "elaboration" and "explanation," and a lower proportion of general relationships, such as "span" and "same-unit," compared to other units. This suggests that incorporating questions and their answers in writing might add to the coherence of the discourse structure.

[3] Following RST, we consider the Elementary Discourse Unit (EDU) as the minimal unit.
[4] github.com/EducationalTestingService/rstfinder

5 Guiding Question Generation

In this section, we first describe our methods to model guiding questions and then report experimental results.

5.1 Task Formulation

Given a document $D = \{s_1, ..., s_n\}$ of n sentences, our goal is to learn a sequence of questions $Q = \{q_1, ..., q_m\}$, their positions in the article $P = \{p_1, ..., p_m\}$, and (optionally) their answer information $A = \{a_1, ..., a_m\}$. In particular, $1 \le p_i \le n$ is the index of the sentence after which $q_i$ should be asked. We call these sentences anchor sentences $\{s_{p_1}, ..., s_{p_m}\}$.

5.2 Data Preparation

To construct training examples, we remove questions from articles and reconstruct them based on the corrupted articles, so that the model can learn to use guiding questions as human writers do. Concretely, given an article D, we extract its questions and locate their positions to obtain Q and P. For each question $q_i$, its answer information is a set of keywords $a_i = \{w_1, ..., w_{|a_i|}\}$ extracted from its evidence sentences obtained during annotation (Section 4.2). We take this form to exclude redundant information in the full answer and to reduce input (output) length for efficiency. Since deleting sentences would create incoherence and reduce learning P to identifying where sentences were removed, we use gpt-3.5-turbo-1106 to assess the coherence of the missing positions and, if necessary, eliminate the incoherence by making small edits around them. Since there could still be nuanced differences at edited positions, we perform the same "delete and smooth" operation on 1% of randomly selected non-question sentences as noise, detailed in Appendix B.1.

5.3 Modeling

In what follows, we describe three popular QG paradigms considered for our task: Pipeline, Multitask, and Joint Generation (Ushio et al., 2023). All approaches are unified as a text generation task and use Flan-T5 (Chung et al., 2022) as the backbone model, which we finetune on our dataset.

Pipeline. We decompose the task into three subtasks to learn {P, A, Q} with independent models. First, a Position Predictor (PP) identifies the positions of questions. A naive way is to directly generate the position indices $\{p_1, ..., p_m\}$ conditioned on D. However, this requires learning a mapping between sentences and numerical symbols. Instead, we opt to identify the anchor sentences by training PP to copy them from D:

$\tilde{C} = \arg\max_{C} P_{\theta_{PP}}(C \mid D), \quad (1)$

where the output $\tilde{C} = [s_{p_1} | \ldots | s_{p_m}]$ is a concatenation of anchor sentences separated by "|". P can be obtained by relocating the copied sentences in D. When there is no exact match, we use BM25 to get the most similar sentence.
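A minimal sketch of this index-recovery step is given below. It is an illustration under stated assumptions, not the released code; the rank_bm25 package is used here as one possible BM25 implementation, and the whitespace tokenization is an assumption.

```python
from rank_bm25 import BM25Okapi

def recover_positions(generated, article_sentences):
    """Map each generated anchor sentence back to a sentence index in D.

    generated: the Position Predictor's output string, anchors separated by "|"
    article_sentences: list of sentences s_1..s_n of the document
    """
    tokenized = [s.lower().split() for s in article_sentences]
    bm25 = BM25Okapi(tokenized)

    positions = []
    for anchor in (a.strip() for a in generated.split("|") if a.strip()):
        if anchor in article_sentences:          # exact match first
            positions.append(article_sentences.index(anchor))
        else:                                     # fall back to BM25 retrieval
            scores = bm25.get_scores(anchor.lower().split())
            positions.append(int(max(range(len(scores)), key=scores.__getitem__)))
    return sorted(set(positions))
```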
Then, we highlight the target position in D by inserting a special mark [Question] after the anchor sentence and use an Answer Extractor (AE) to generate the answer keywords $a_i$:

$D^{(i)} = [s_1, ..., s_{p_i}, \texttt{[Question]}, ..., s_n], \quad (2)$

$\tilde{a}_i = \arg\max_{a} P_{\theta_{AE}}(a \mid D^{(i)}), \quad (3)$

where $D^{(i)}$ is the document marked at position $p_i$. Finally, a Question Generator (QG) generates questions based on the predicted positions and extracted answers:

$\tilde{q}_i = \arg\max_{q} P_{\theta_{QG}}(q \mid D^{(i)}, a_i). \quad (4)$

Note that the PP model predicts all positions in one pass, while the other two generate their outputs one by one.

MultiTask. The multitask model still consists of the three components described above. However, instead of independently training three models, we train a unified model for all tasks in a multitask learning manner. In practice, we mix the training examples of the three tasks and distinguish them by adding different task prefixes before the inputs.

Joint Model. As observed in Figure 1, guiding questions tend to be related to each other. Therefore, we consider jointly generating all questions at the same time. To be specific, we use a template function to convert {P, A, Q} into a flattened sequence $T(P, A, Q) = \{t(p_1, a_1, q_1) | t(p_2, a_2, q_2) \ldots\}$, where $t(p, a, q)$ = "Position: $s_p$ # Answer: $a$ # Question: $q$". The model is trained to directly generate the whole target sequence T based on the article D:

$\tilde{T} = \arg\max_{T} P_{\theta_{JT}}(T \mid D). \quad (5)$

In doing this, each question is also conditioned on the previous ones, enabling the model to learn inter-question connections. For the joint model, we also introduce a variant that additionally generates question roles. This is done by inserting "Role: question role" between the position and answer fields of each entry. We denote this model as JointR. Since the above methods are all formalized as text generation tasks, their training objectives take a similar form:

$L = -\sum_{i=1}^{|Y|} \log P_\theta(y_i \mid y_1, ..., y_{i-1}, X), \quad (6)$

where X and Y are the input and output sequences of the corresponding task. See Appendix B.2 for training details.
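To illustrate the joint model's target format, the following is a small sketch of the linearization and its inverse. The exact delimiters and field handling are assumptions based on the template described above, not the authors' code.

```python
def linearize(positions, answers, questions, sentences, roles=None):
    """Flatten (P, A, Q) into the joint model's target string; roles enables the JointR variant."""
    entries = []
    for i, (p, a, q) in enumerate(zip(positions, answers, questions)):
        role = f"Role: {roles[i]} # " if roles else ""
        entries.append(f"Position: {sentences[p]} # {role}Answer: {', '.join(a)} # Question: {q}")
    return " | ".join(entries)

def parse(target):
    """Recover (anchor sentence, answer keywords, question) triples from a generated sequence."""
    triples = []
    for entry in target.split(" | "):
        fields = dict(f.split(": ", 1) for f in entry.split(" # ") if ": " in f)
        triples.append((fields.get("Position"),
                        [w.strip() for w in fields.get("Answer", "").split(",")],
                        fields.get("Question")))
    return triples
```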
Table 3: Results of finetuned and prompt-based models on the GUIDINGQ dataset. Main takeaways: 1) human guiding questions are interrelated (as per Dist-N); 2) joint generation with question roles performs the best among fine-tuned models; 3) generated questions can capture the main message of human questions (as per BertScore).

Textbook subset
Model            # Q    Rouge-L  Meteor  BertScore  Dist-1/2 (↓)
GPT-4 (0-shot)   7.58   15.7     18.9    84.3       64.4/48.9
Pipeline 250M    1.73   12.9     13.7    77.4       71.4/51.6
Multitask 250M   1.65   15.4     15.2    78.1       70.6/49.0
Joint 250M       3.41   16.9     19.3    82.8       66.8/47.5
JointR 250M      3.77   18.3     20.8    84.5       65.4/46.9
JointR 780M      3.81   30.5     28.1    87.7       65.6/47.1
JointR 3B        3.95   36.7     34.8    88.7       65.6/46.8
JointR 11B       3.95   56.4     49.5    92.1       64.1/46.5
Reference        5.46   100.     100.    100.       61.7/46.3

Scientific subset
Model            # Q    Rouge-L  Meteor  BertScore  Dist-1/2 (↓)
GPT-4 (0-shot)   5.57   14.3     19.7    82.8       68.4/49.1
Pipeline 250M    1.69   12.7     7.93    79.5       66.1/48.3
Multitask 250M   1.84   13.7     9.26    80.8       58.9/43.8
Joint 250M       2.08   12.2     9.89    81.0       78.9/51.3
JointR 250M      2.16   13.1     10.2    81.0       75.2/49.8
JointR 780M      2.40   12.4     9.38    82.5       71.3/48.9
JointR 3B        2.46   11.7     9.59    82.6       69.1/46.9
JointR 11B       2.55   13.5     10.3    82.9       68.5/47.2
Reference        4.50   100.     100.    100.       66.7/48.9

5.4 Automatic Evaluation

The main results are presented in Table 3, where we also include zero-shot prompted GPT-4 (gpt-4-1104-preview) with the prompt in Table 13. We report the average number of generated questions (# Q), Rouge-L (Lin, 2004), Meteor (Banerjee and Lavie, 2005), and BertScore (Zhang et al., 2020). Besides, we use Dist-N (1/2) (Li et al., 2016), the percentage of distinct n-grams in the generated questions, to measure the balance between diversity and relevance of guiding questions. Since there is no one-to-one mapping between generated and ground-truth questions, we concatenate all the questions into a single sequence to compute the reference-based metrics.

Overall, fine-tuned models generate fewer questions than the references, while zero-shot GPT-4 generates more. We found that Flan-T5 tends to output short sequences; we therefore attribute this to the different pre-training paradigms. Jointly generating question roles boosts performance, which is expected as the roles are indicative of a series of distributional features. The best fine-tuned models achieve remarkable BertScores, suggesting that the generated questions successfully replicate the main information of human questions. For the textbook dataset, the joint model generally performs the best among the fine-tuned approaches. In particular, it best resembles the Dist-1/2 results of human questions, while the others tend to generate more independent questions (higher Dist-1/2). This suggests that the success of this model comes from better learning the inter-question relationships. As for the scientific set, the multitask model achieves results competitive with the joint model. We conjecture that the more complex content and sparser questions increase the difficulty of learning the question relationships, which could diminish the advantage of joint generation. Finally, we scale up the parameter size of the best-performing model, JointR, up to 11B. Scaling the model size results in a significant performance gain on the textbook set but little on the scientific set, demonstrating the challenge of understanding scientific language with LLMs.

Question Position. Since the generated questions are different from the references, naively matching their positions (e.g., with recall or precision) is not suitable. Instead, we evaluate a question's position by measuring how well it fits the context using perplexity. Specifically, let p be the index of question q. We insert q at p and calculate the average word perplexity of the sentences $[s_{p-3}, s_{p-2}, s_{p-1}, q, s_{p+1}, s_{p+2}, s_{p+3}]$, each conditioned on its previous three sentences. The results are presented in Figure 7.

Figure 7: Average word perplexity of the question (index p) and its surrounding sentences on the textbook and scientific subsets. We omit questions by 0-shot GPT-4 as we find their positions are sensitive to prompts and temperatures.

Interestingly, human questions in textbooks create a local peak of PPL in the context. We explain this phenomenon through the connection between surprisal and salience: text units with higher salience are usually less expected, making them stand out from the context and attract the reader's attention (Rácz, 2013; Zarcone et al., 2016; Blumenthal-Dramé et al., 2017). This is consistent with the interactional purpose of using questions. However, none of the models replicate this salience effect, either lowering the PPL (joint model) or increasing the PPL without a quick decline (pipeline/multitask).
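The windowed perplexity evaluation above can be sketched as follows. This is an illustrative reimplementation, not the authors' evaluation script; the choice of GPT-2 as the scoring model is an assumption, and token-level rather than word-level perplexity is used for simplicity.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def sentence_ppl(context_sents, target_sent):
    """Perplexity of target_sent conditioned on the preceding sentences."""
    ctx_ids = tok(" ".join(context_sents), return_tensors="pt").input_ids
    tgt_ids = tok(" " + target_sent, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, tgt_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : ctx_ids.shape[1]] = -100           # score only the target sentence
    with torch.no_grad():
        loss = lm(input_ids, labels=labels).loss   # mean NLL over target tokens
    return math.exp(loss.item())

def window_ppl(sentences, question, p, width=3):
    """PPL of the question and its +-width neighbours, each given its 3 preceding sentences."""
    doc = sentences[:p] + [question] + sentences[p:]   # insert after the p-th sentence (0-indexed; an assumption)
    out = {}
    for offset in range(-width, width + 1):
        i = p + offset
        if 0 <= i < len(doc):
            out[offset] = sentence_ppl(doc[max(0, i - 3):i], doc[i])
    return out
```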
The scientific articles show a different pattern, where human questions have lower PPL than their surrounding context. We conjecture this is because scientific articles usually make sufficient discussion before proposing a question; as a consequence, their questions are highly predictable from the prior sentences. Fine-tuned models manage to replicate this effect, possibly because research articles and their question usage are usually structured (e.g., proposing questions in the introduction) and thus easier to learn.

6 Human Study

Finally, we conduct a between-group human study to investigate the impact of guiding questions on reading comprehension. Participants are asked to read articles with (or without) questions, after which we gather their feedback and analyze their information retention and understanding to gain a holistic view of the effect of guiding questions.

Table 4: Average scores given by participants. "Quality" is evaluated on each single question, while "Usefulness" is evaluated on all questions of an article.
                              Reference   Generated
Quality      Relevance        4.0         4.2
             Position         4.2         4.3
             Importance       4.4         4.2
Usefulness   Engaging         3.9         4.0
             Understanding    4.2         4.1
             Overall          4.3         3.9

6.1 Experiment Design

Procedure. The human study runs on the crowdsourcing platform Prolific and consists of 4 stages.
1. Demographic Questionnaire: We collect participants' demographic information (Appendix C.1) and consent before the experiment.
2. Article Reading: Participants read an assigned article within 20 minutes, with or without questions displayed beside the article. Details of the reading interface are in Appendix C.2.
3. Comprehension Test: After reading, participants are asked to write a summary of at least 100 words without access to the article.
4. Evaluation: Finally, we ask participants to rate the usefulness and quality of the questions (if any).

Participants. We select 45 participants, with the criterion being at least a C1 command of English. Each participant received 10 GBP/hour on average. We randomly assign them to one of three groups:
• Control: reading w/o questions.
• Reference: reading w/ expert-written questions.
• Generated: reading w/ generated questions.

Test Articles. We use the JointR model to generate questions for five textbook chapters5 selected from different domains spanning Business, Philosophy, Sociology, Political Science, and Psychology. A few small adaptations are made to make them better fit the study (e.g., length reduction). To focus on question quality, we generate the same number of questions as the reference6. Each article is assigned to 3 participants to average out the effect of individual articles.

[5] We only consider textbook passages because scientific articles are challenging for general readers.
[6] We do this by truncating over-generated questions or disallowing the <eos> token until we get enough questions.

Table 5: Quality scores (mean ± std.) of summaries from the three groups over 5 runs.
Summary Quality (1-5)   Coherence    Consistency   Informativeness
Ctrl.                   2.38±0.09    2.52±0.08     2.31±0.09
Gen.                    3.14±0.08    3.28±0.08     2.98±0.05
Ref.                    3.28±0.04    3.23±0.06     2.99±0.08

6.2 Question Evaluation

Participants evaluate the intrinsic quality of each question from three aspects: relevance, position, and importance (detailed in Table 14). From a later survey, we found that participants may have different interpretations of these dimensions despite our instructions. Nevertheless, the results in the first section of Table 4 show that the average quality of the generated questions is on par with (if not better than) that of the ground-truth ones.
6.3 Effect on Reading Comprehension

User Perceived Usefulness. Results in the second part of Table 4 show that the generated questions are as good as the human questions in perceived usefulness. See the questionnaire in Table 14.

Improved Memorization and Comprehension. We use summary quality as a proxy to measure participants' memorization and comprehension. The evaluation covers three dimensions, Coherence, Consistency, and Informativeness, and is based on GPT-4, which has shown a superior correlation with human annotations on summary evaluation (Liu et al., 2023); see Appendix C.3 for details. As shown in Table 5, the reference and generated groups achieve comparable scores, and both outperform the control group by a large margin. An additional between-group summary analysis, including summary time, length, and n-gram overlap, is shown in Table 15. An intuitive explanation for the improved quality is that users memorize question-related information and incorporate it into the summary. To verify this, we measure the entailment score between summaries and the answers to the guiding questions using BertScore recall ($\mathrm{BS}_r$):

$\mathrm{ENTSCORE} = \frac{1}{|S|} \sum_{s \in S} \frac{1}{|Q_s|} \sum_{q \in Q_s} \mathrm{BS}_r(s, a_q), \quad (7)$

where s is a summary, $Q_s$ is the set of guiding questions of the source article of s, and $a_q$ is the answer to q.

Table 6: Entailment score between summaries and the answers to the guiding questions, measured with BertScore recall (×100).
Sum. \ Ans.   Reference   Generated
Reference     55.2        –
Generated     –           56.3
Control       52.8        52.5

The results in Table 6 show that the reference and generated groups incorporated more answer information than the control group, indicating improved memorization. However, the difference in recall scores is not as pronounced as the difference in summary quality (Table 5). This suggests that readers do not simply compile question-answer pairs into summaries. Therefore, we believe the improvement in summary quality also reflects a deeper understanding facilitated by the guiding questions.

Reading Time. The reading speeds of the different groups are shown in Table 7. When questions are displayed next to the articles, users spend more time reading. On the one hand, this is a signal of enhanced engagement; on the other hand, it potentially implies an extra cognitive load caused by the guiding questions.

Table 7: Reading speeds of the different user groups measured in words per second (w/s).
                        Auth.   Gen.   Contr.
Reading speed (w/s)     1.00    0.98   1.39

Participant Feedback. Finally, we gathered user feedback through a preliminary interview-based study, detailed in Appendix C.5. In summary, participants confirmed the benefits of guiding questions and discussed various aspects of these questions, highlighting both consistent preferences and nuanced complexities. We hope these insights will inspire future research in this area.

7 Conclusion

This paper studies the discourse and interactional roles of guiding questions in textbooks and scientific articles. We explore various approaches for modeling these questions, providing insights into how to model this task and highlighting challenges to be solved. We validate our results with human studies, which demonstrate that reading with guiding questions can improve readers' high-level memorization and understanding.

Broader Impacts

In this study, we analyzed the use of questions in academic and educational articles, demonstrating their benefits for reading comprehension. While questions can enhance engagement, they can also increase readers' cognitive load, as evidenced by longer reading times (Table 7).
Additionally, questions may introduce unintended nuances for communication, such as creating unequal social relationships (Hyland, 2002). Therefore, it is important to be aware of these mixed effects when using guiding questions in writing. Limitations We summarize the limitations of this study into the following open questions. Is our question role taxonomy generalizable to other domains?★ Our investigation of the role of guiding questions is initially focused on textbooks and scientific articles. However, different domains might use questions differently. Nevertheless, our analysis (Section 4.3) uncovers distributional features that are indicative of question functions, such as their positions and question-answer relationships. These findings offer insights that could be generalized to understand the roles of questions in broader contexts. How to align guiding questions with individual preferences?★ Our model aims to replicate guiding questions crafted by human writers. However, these questions may not always resonate with individual readers, given their different reading goals and prior knowledge. We expect that personalized generation (Cui and Sachan, 2023), which takes into account user profiles, would yield more helpful questions. How has the role of questions evolved?★ It is important to note that the use of questions could change over time. For instance, Ball (2009); Jiang and Hyland (2022) have analyzed the distribution shift of questions in titles over the past decades. In this study, we did not take the temporal dimension into account, and the conclusions are based on contemporary texts. Therefore, the findings of this paper may not remain consistent in the future. References Rafael Ball. 2009. Scholarly communication in transition: The use of question marks in the titles of scientific articles in medicine, life sciences and physics 1966–2005. Scientometrics, 79(3):667–679. Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72, Ann Arbor, Michigan. Association for Computational Linguistics. Anton Benz and Katja Jasinskaja. 2017. Questions under discussion: From sentence to discourse. Sharita Bharuthram. 2017. Facilitating active reading through a self-questioning strategy: Student and tutor experiences and reflections of the strategy use. Journal for Language Teaching, 51(2):85–103. Alice Blumenthal-Dramé, Adriana Hanulíková, and Bernd Kortmann. 2017. Perceptual linguistic salience: Modeling causes and consequences. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416. Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. 2018. A discourse-aware attention model for abstractive summarization of long documents. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 615–621, New Orleans, Louisiana. Association for Computational Linguistics. Peng Cui and Mrinmaya Sachan. 2023. Adaptive and personalized exercise generation for online language learning. 
In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10184–10198, Toronto, Canada. Association for Computational Linguistics. Niall Curry and Angela Chambers. 2017. Questions in english and french research articles in linguistics: A corpus-based contrastive analysis. Corpus Pragmatics, 1:327–350. Kordula De Kuthy, Nils Reiter, and Arndt Riester. 2018. QUD-based annotation of discourse structure and information structure: Tool and evaluation. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). 11758 Madeline Haggan. 2004. Research paper titles in literature, linguistics and science: Dimensions of attraction. Journal of Pragmatics, 36(2):293–317. Xingwei He, Zhenghao Lin, Yeyun Gong, Alex Jin, Hang Zhang, Chen Lin, Jian Jiao, Siu Ming Yiu, Nan Duan, Weizhu Chen, et al. 2023. Annollm: Making large language models to be better crowdsourced annotators. arXiv preprint arXiv:2303.16854. Ken Hyland. 2002. What do they mean? questions in academic writing. Text & Talk, 22(4):529–557. Hamid R Jamali and Mahsa Nikzad. 2011. Article title type and its relation with the number of downloads and citations. Scientometrics, 88(2):653–661. Feng Kevin Jiang and Ken Hyland. 2022. Titles in research articles: Changes across time and discipline. Learned Publishing. Wei-Jen Ko, Cutter Dalton, Mark Simmons, Eliza Fisher, Greg Durrett, and Junyi Jessy Li. 2022. Discourse comprehension: A question answering framework to represent sentence connections. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11752–11764, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119, San Diego, California. Association for Computational Linguistics. Minzhi Li, Taiwei Shi, Caleb Ziems, Min-Yen Kan, Nancy Chen, Zhengyuan Liu, and Diyi Yang. 2023. CoAnnotating: Uncertainty-Guided work allocation between human and large language models for data annotation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 1487–1505, Singapore. Association for Computational Linguistics. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023. G-eval: NLG evaluation using gpt-4 with better human alignment. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 2511–2522, Singapore. Association for Computational Linguistics. William C Mann and Sandra A Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text-interdisciplinary Journal for the Study of Discourse, 8(3):243–281. Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2017. Summarunner: A recurrent neural network based sequence model for extractive summarization of documents. In Proceedings of the AAAI conference on artificial intelligence, volume 31/1. Péter Rácz. 2013. 
Salience in sociolinguistics: A quantitative approach, volume 84. Walter de Gruyter. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485–5551. Craige Roberts. 2012. Information structure: Towards an integrated formal theory of pragmatics. Semantics and pragmatics, 5:6–1. Noam Shazeer and Mitchell Stern. 2018. Adafactor: Adaptive learning rates with sublinear memory cost. In International Conference on Machine Learning, pages 4596–4604. PMLR. Janvijay Singh, Vilém Zouhar, and Mrinmaya Sachan. 2023. Enhancing textbooks with visuals from the web for improved learning. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 11931–11944, Singapore. Association for Computational Linguistics. Nur Syamsiah, Zainal Rafli, and Sakura Ridwan. 2018. Self–questioning strategy on reading comprehension process. In 5th Asia Pasific Education Conference (AECON 2018), pages 120–129. Atlantis Press. Petter Törnberg. 2023. Chatgpt-4 outperforms experts and crowd workers in annotating political twitter messages with zero-shot learning. arXiv preprint arXiv:2304.06588. Asahi Ushio, Fernando Alva-Manchego, and Jose Camacho-Collados. 2023. An empirical comparison of LM-based question and answer generation methods. In Findings of the Association for Computational Linguistics: ACL 2023, pages 14262–14272, Toronto, Canada. Association for Computational Linguistics. Jan Van Kuppevelt. 1995. Discourse structure, topicality and questioning. Journal of linguistics, 31(1):109– 147. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed Chi, Quoc V Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems, volume 35, pages 24824–24837. Curran Associates, Inc. Matthijs Westera, Laia Mayol, and Hannah Rohde. 2020. TED-Q: TED talks and the questions they evoke. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 1118–1127, Marseille, France. European Language Resources Association. 11759 Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Yating Wu, Ritika Mangla, Greg Durrett, and Junyi Jessy Li. 2023. QUDeval: The evaluation of questions under discussion discourse parsing. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 5344– 5363, Singapore. Association for Computational Linguistics. Peiwen Yuan, Shaoxiong Feng, Yiwei Li, Xinglin Wang, Boyuan Pan, Heda Wang, and Kan Li. 2023. Batcheval: Towards human-like text evaluation. arXiv preprint arXiv:2401.00437. Alessandra Zarcone, Marten Van Schijndel, Jorrig Vogels, and Vera Demberg. 2016. Salience and attention in surprisal-based accounts of language processing. Frontiers in psychology, 7:844. 
Ruoyu Zhang, Yanzeng Li, Yongliang Ma, Ming Zhou, and Lei Zou. 2023. LLMaAA: Making large language models as active annotators. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 13088–13103, Singapore. Association for Computational Linguistics. Tianyi Zhang, Varsha Kishore*, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. In International Conference on Learning Representations. 11760 A Annotation Details We use gpt-3.5-turbo-1106 with a temperature of 0.2 for data annotation. The prompts used for question completion (QC), question answering (QA), and question role identification (QRI) are listed in Tables 8, 9, and 10. [Task Description] You will be given some questions extracted from an article and their surrounding texts. Your task is to check whether these questions are self-contained. For example, “What constitutes such a code?” is not a self-contained question due to the unclear “code”. If a question is not self-contained, complete it based on its context; otherwise, output it as it is. [Input] Question 1: {question 1} Context 1: {context of question 1} ... [Output] (Please strictly organize your output in the following format) Question 1: <complete question 1> ... Table 8: Prompt for question completion. We use the surrounding 10 sentences of a question as its context. [Task Description] You will be given an article and some questions. In particular, these questions are posed in the article and highlighted by a marker “[Question]” before them. Your task is to answer these questions based on the article. Start by reading the article carefully and locating the given questions. For each question, check whether it is answered in the article. If not, output “no answer”; otherwise, provide an answer that should be as detailed as possible and faithful to the article. Finally, provide a confidence level from 1 to 5 for each generated answer, where 1 is the lowest and 5 is the highest. [Input] Article: {article with questions marked} Question 1: {question 1} ... [Output] (Please strictly organize your output in the following format) Answer 1: <answer to question 1> Confidence 1: <confidence level, 1-5> ... Table 9: Prompt for question answering. We ask the model to provide confidence levels for outputs of QA and QRI as these two tasks are arguably more challenging. 437 questions have a confidence level of 1 on both tasks. An expert anno[Task Description] You will be given an article and some questions. In particular, these questions are presented in the article and play different roles. Your task is to identify their role. The definition and example of each question role are described below. {Definitions and examples of question roles} Make sure you understand the above instructions clearly. Start by reading the article carefully and locating the positions of the given questions. Check each question-answer pair and write down your analysis about its role based on the above definition. Finally, output its role and provide a confidence level from 1 to 5 for your judgment, where 1 is the lowest and 5 is the highest. [Input] Article: {article} Question 1: {question 1} Answer 1: {answer to question 1} . . . [Output] Analysis 1: <analysis for the role of question 1> Role 1: <role of question 1> Confidence 1: <confience of output, 1-5> Table 10: Prompt for question role identification. tator (author of this work) manually validated these questions. 
Note that our goal is not to annotate as many examples as possible, but to check whether there are unknown question roles beyond our taxonomy and analyze common bad cases to improve prompts. We found that most cases with low confidence are because of their mixed roles. For example, when a question introduces an argument and meanwhile starts related discussions, it could be both ESTABLISH CLAIM and ORGANIZE DISCOURSE question. For such ambiguous cases, we recommend determining their main roles based on their question-answer relationships, as shown in Figure 5. For example, if the abovementioned question has an immediate answer, its main role should be ESTABLISH CLAIM . Note that our taxonomy is to provide a holistic understanding of question functions rather than a hard classification system. Task Criteria Yes (%) QC The question is self-contained 90.5% QA The answer is overall acceptable 83.6% QRI The identified role is correct or 79.4% Table 11: Quality review of automatic annotation. Finally, we sample 10 research articles and 10 textbook chapters with 116 questions in total and 11761 review the accuracy of annotation. As shown in Table 11, decent results are observed on all tasks. We plan to scale the dataset and update the annotation with more powerful LLMs available in the future. B Experiment Details B.1 Article Processing To process an article into the training format, we delete questions from the article and eliminate incoherence by prompting gpt-3.5-turbo-1104 with the instruction in Table 12. To reduce the cost, we build the paragraph using the 5 sentences before and after the deleted question, which is enough to assess or restore the coherence of the local context according to our qualitative inspection. When performing this operation on random sentences, we disallow sentences around (distance<10) any already deleted or question sentences to be selected in order to avoid severe incoherence. [Task Description] Given a paragraph where a sentence has been removed and replaced with "[MASK]," your task is to assess whether the paragraph remains coherent without the missing sentence. If yes, simply remove the [MASK] token. If not, please edit the text around [MASK] to restore its coherence. You can only make necessary and minimal edits, leaving the majority of the paragraph verbatim. You can not introduce new information or change or remove existing information. [Input] Input Paragraph: {paragraph with a missing sentence} [Output] Coherent paragraph: <coherent paragraph> Table 12: Prompt for coherence maintenance. B.2 Training setup We split the dataset into 90% training and 10% test sets. Our implementations are based on the Transformers Library (Wolf et al., 2020). In concrete, for all approaches, we fine-tune the Flan-T5 for up to 10 epochs with a learning rate of 5e −5 and batch size of 32. Following Raffel et al. (2020), we employ the AdaFactor (Shazeer and Stern, 2018) optimizer and do not use warm-up. An early stop strategy is applied when the loss on the validation set does not decrease in three continuous epochs. We use 4 Nvidia Tesla A100 cards with 40 GB GPU memory for training. One epoch takes around half an hour. At inference, we use beam search [Task Description] Given an article, your task is to incorporate several questions into the text to enhance its readability and make it more engaging. 
For each question, first determine its position in the article by copying the sentence after which the question should be raised, then provide a set of keywords of the answer to the question. Finally, generate the target question. [Input] {article} [Output] Output 1: Position: <sentence precedes the question> Answer Keywords: <keywords separated by “,”> Question: <question 1> Output 2: ... Table 13: Prompt for GPT-4 question generation decoding with a beam size of 4. All evaluations are conducted with the default parameters in their public implementations. Since the articles in our dataset are relatively long, we truncate them and keep sentences whose indices fall within [max(0, p0 −5), min(∣D∣, pm + 5)], where ∣D∣is the number of sentences in article D, and p0, pm are the index of the first and last question respectively. In other words, we keep at least 10 sentences as the context for predicting each question. If the truncated document exceeds the model’s context length, we split it into segments of roughly the same length and run each segment separately. C Human Study Details C.1 Demographic Information of Participants The average age was 29 years, with 25 female and 20 male participants. The simplified ethnicity distribution is: 23 white, 15 black, and 7 Asian. All information is on a self-identification basis. C.2 Reading Interface Figure 8 shows a screenshot of the reading interface. To highlight guiding questions, we display them on the right side of the article. In order to measure paragraph-level reading time, participants need to click the "Finished" button at the end of each paragraph to reveal the next one. 11762 Figure 8: Main reading interface. We highlight reference or generated questions on the right side. In order to measure paragraph-level reading time, participants have to click Finished at the end of each paragraph in order to reveal the next one. Engaging The questions helped in keeping me engaged with the text. Understanding The questions improved my understanding of the structure and main ideas of the article. Overall Overall, I prefer to have such questions during the reading. Relevant Is the question is relevant to the context? Position Is the question raised at an appropriate position and not distracting? Important Is the question important to the central topic of the article? Table 14: Evaluation questions used in the human study. C.3 Summary Evaluation We prompt GPT-4 (gpt4-1104-preview) with a temperature of 0.7 to evaluate the quality of collected summaries. The prompt is shown in Table C.3, where we adopt the chain-of-thought (Wei et al., 2022) and batch evaluation (Yuan et al., 2023) prompting strategy. C.4 Additional Summary Analysis We provide additional analysis about collected summaries in Table 15, including summary time, length, and n-gram overlap between summaries and their sources. We can observe the reference and generated group produced longer summaries than the control group and spent longer time accordingly. The authentic group’s summaries and the generated group’s summaries are similar in quality, but the former is more concise. This can be attributed to the greater significance of reference questions, consistent with user ratings in Table 4. In addition, summaries from both the reference and specifically generated groups show a larger n-gram Sum. Length Sum. Time N-gram Overlap (%) uni-gram bi-gram tri-gram Contr. 117 619 70.8 6.22 0.69 Auth. 123 627 74.1 6.45 0.71 Gen. 
146 745 76.0 8.84 1.44 Table 15: Averaged summary length (words), summarization time (seconds), and n-gram overlap between summaries and articles. overlap, providing further evidence of improved memorization. C.5 Observations from Preliminary Study Before the reported human study, we conducted preliminary interviews and follow-up surveys with 15 participants, five from each group, as detailed in Sec. 6.1. The aim is to validate the experiment design and gather user preferences and feedback on presenting questions during their article reading experience. We refer to participants as group_id (R/C/G) + user_id (1-5), where R, G, and C stand for Reference, Generated, and Controlled. Many participants expressed positive feedback on the generated questions, noting that “the questions are easy to understand” (G5). Notably, compared to the author-curated questions with relatively complex terms and sentences, our generated questions had simpler vocabulary and shorter sentences and facilitated quicker context comprehension. Some participants even found it helpful to use the questions to “remember the content of the whole article” and “better understand it again” (G1). We summarize the observations from the interview into three points. 11763 Prompt Template [Task Description] Your will be given three summaries written for an article titled “{title}”. Please act as an impartial judge and evaluate the quality of these summaries in terms of {metric}. Please make sure you read and understand the following instructions carefully. Please keep this document open while reviewing and refer to it as needed. [Evaluation Criteria] {metric description} [Evaluation Steps] Read the source article carefully and identify the main topic and key points. Read each summary carefully. Check if the summary meets the above criteria and provide an explanation for your judgment. Assign a {metric} score for each summary on a scale of 1 (lowest) to 5 (highest). Decimal scores are particularly encouraged. [Input] Article: {article} Summary 1: {summary_1} Summary 2: {summary_2} Summary 3: {summary_3} [Output] (Please strictly organize your output in the following format) Explanations: Analysis of summary 1: <your analysis> Analysis of summary 2: <your analysis> Analysis of summary 3: <your analysis> Scores: Score for summary 1: <score only, 1 - 5, preferably decimal> Score for summary 2: <score only, 1 - 5, preferably decimal> Score for summary 3: <score only, 1 - 5, preferably decimal> {metric description} Coherence (1-5) - the collective quality of all sentences. The summary is well-structured and well-organized. The summary should not just be a heap of related information, but should build from sentence to sentence to a coherent body of information about a topic. Consistency (1-5) - the factual and conceptual alignment between the summary and the source article. The summary faithfully and precisely conveys the messages and ideas from the source material, without distortion or misinterpretation. Informativeness (1-5): the extent to which the summary encapsulates the essential and relevant information. Informativeness is not merely about including various pieces of information, but selecting the most crucial elements that offer a comprehensive understanding of the topic. Table 16: Prompt template and metric descriptions for summary evaluation. 11764 Consistent Question Preference. 
While each participant has their unique criteria for defining a good question, we have observed a consistent preference for questions that are both intriguing and informative. Among the 10 participants exposed to supported questions, whether authentic or generated, seven conveyed a preference for questions that stimulate thought and reflection. These questions may offer insights (R5) or highlight specific details (G1), but they must be inherently challenging and motivate reading (G5). Conversely, participants tend to dislike questions that are too easy or have immediate answers within the article (R4, R5, G5). This observation provides valuable insights for refining our future question generation process by allowing us to better control the level of difficulty and align with participants’ preferences. Relevance, Distractibility, and Helpfulness. In the survey, we asked participants to assess the relevance, distractibility, and helpfulness of each question on a 5-point Likert scale. Half believe a question’s relevance depends on whether the following sentences answer it. Meanwhile, shorter questions tend to be less distracting, with their distractibility rating inversely proportional to perceived helpfulness. Interestingly, conflicting viewpoints arose, with some participants considering certain questions neither distracting but useless (R1) or helpful yet irrelevant (G1, G4). This nuanced understanding of human complexity calls for more research in human-AI collaboration research to find adaptive question generation solutions. Question Position and Scenario Matters. The two observations mentioned above may vary slightly depending on the question’s position and the reading scenario. Notably, five participants experienced a “cold start” during the reading task, expressing difficulty in “getting into the context of an article at first” (R1). Therefore, their initial preference leans towards easier and more relevant questions at the article’s beginning. As they familiarize themselves with the context after reading a few paragraphs, their inclination shifts towards more intriguing and divergent questions. Furthermore, participants’ standards for a good question may fluctuate in different scenarios. Distinctions were made between first-time reading versus content reviewing (R2), learning scenarios versus examination scenarios (R4), and serious learning versus casual reading (G4). These considerations could serve as additional factors in our future iterations. 11765
2024
632
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11766–11781 August 11-16, 2024 ©2024 Association for Computational Linguistics Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective Zihao Yue Renmin University of China [email protected] Liang Zhang Renmin University of China [email protected] Qin Jin* Renmin University of China [email protected] Abstract Large Vision-Language Models (LVLMs) often suffer from multimodal hallucinations, wherein they may create content that is not present in the visual inputs. In this paper, we explore a new angle of this issue: overly detailed training data hinders the model’s ability to timely terminate generation, leading to continued outputs beyond visual perception limits. By investigating how the model decides to terminate generation with EOS, the special end-of-sentence token, we find that the model assesses the completeness of the entire sequence by comparing the generated text with the image. This observation suggests that the model possesses an inherent potential of making proper EOS decisions based on its visual perception to avoid overly lengthy outputs. To take advantage of such potential, we explore two methods to mitigate multimodal hallucinations: a training objective that enables the model to reduce hallucinations by learning from regular instruction data, and a data filtering strategy to prevent harmful training data from exacerbating model hallucinations. Both methods significantly improve the hallucination performance of LVLMs, without requiring any additional data or knowledge.1 1 Introduction Ever since Large Vision-Language Models (LVLMs) (Yin et al., 2023a) were achieved through bridging vision encoders with Large Language Models (LLMs) (Brown et al., 2020; Zhao et al., 2023a), they have been plagued by the problem of multimodal hallucinations, i.e., their text outputs may include unfaithful content to the visual inputs, such as non-existent objects (Rohrbach et al., 2018), which greatly harms the reliability of their applications. Extensive research has shed light on the origins of multimodal hallucinations, including *Corresponding Author. 1https://github.com/yuezih/less-is-more Training Data: Q: Can you describe the main features of this image for me? A: In the image, a man is skillfully catching air while skateboarding in a public park in a city, seemingly flying through the air … Additionally, there are two benches visible in the park, one closer to the left and another further back … overly detailed Figure 1: Top: An example from the LLaVA instruction data. The training data can be overly detailed to exceed the model’s visual perception limits. Bottom: Average log-likelihood of the LLaVA (7b) model predicting EOS at positions labeled as EOS during instruction tuning. Training the model with overly detailed data leads to a decrease in its tendency to stop generation. the inability of vision encoders to represent finegrained visual details (Jiang et al., 2023b; Tong et al., 2024), model reliance on inherent parametric knowledge such as language priors and statistical biases (Leng et al., 2023; Zhou et al., 2023), and pervasive hallucinations in the training data itself (Yu et al., 2023a; Liu et al., 2023a). In response to these insights, a variety of strategies have been proposed to mitigate hallucinations in LVLMs (Yin et al., 2023b; Zhai et al., 2023; Zhao et al., 2023b). 
Although significant progress has been made, in this paper, we highlight a crucial but often overlooked source of hallucinations: the excessively detailed training data. For example, in the detailed image captioning task, the caption data for an image typically integrates rich visual semantics from multiple human annotations or vision expert models, and is rewritten into lengthy paragraphs by LLMs (Liu et al., 2023c), as shown in Fig. 1. These training data, while high-quality and meeting our expectations for detail, may exceed the 11766 visual perception capability of LVLMs, especially for subtle image features such as small or easily confusable objects. When trained with such data, in an attempt to fit the detail level and length distribution of ground truth captions, the model may risk expressing details that it cannot discern from the image, and therefore exhibit hallucinations. Ideally, models should be trained to terminate generation upon reaching their visual perception limits to avoid hallucinations. However, because gauging such limits is not trivial, it is difficult to provide explicit supervision to teach models to stop generation timely, and it is impossible to construct training data that well matches model capabilities. Fortunately, we can draw inspiration from a closer examination of the model’s decisions regarding the generation of the end-of-sentence (EOS) token. We first employ a saliency-based method to analyze how information flows from the context to the target position where the model predicts the next word. We discover that in predicting the EOS token, the model tends to rely more on all preceding sentences rather than just the current sentence. This leads to a hypothesis that the model assesses the completeness of the entire sequence when deciding whether to terminate the generation. Then, by manipulating the context, we observe that the model’s tendency to predict EOS clearly varies depending on the semantic completeness of the generated text relative to the visual input. For instance, reducing visual context (easier to reach textual completeness) makes the model more likely to end the generation, whereas concealing textual context (further away from textual completeness) prompts continued generation. This confirms the hypothesis above and implies that such a completeness assessment is accomplished by comparing the generated text with the perceived visual information. These observations suggest that the model inherently holds the potential to make timely EOS decision to terminate the generation based on its visual perception. When the model decides to end the generation, it indicates that the current generated context sufficiently captures the visual information it can perceive, and any further outputs may exceed the model’s visual perception limits, possibly leading to hallucinations. To unlock such potential of models, we explore two approaches to enhance the model for better EOS decisions. (1) A learning objective for model training, termed Selective EOS Supervision. Simply modified from the Maximum Likelihood Estimation (MLE), it enables the model to mitigate hallucinations through learning from regular instruction data. It is applicable both for further training to reduce hallucinations in existing models, and for initial instruction tuning (Ouyang et al., 2022) to alleviate the onset of hallucinations. 
Specifically, by briefly further training on the original instruction data, the sentence-level and instance-level hallucinations of LLaVA-1.5 (Liu et al., 2023b) are reduced by 26% and 27%, respectively. (2) A data filtering strategy based on Scoring EOS Supervision, to eliminate harmful training data that can impair the model’s ability to end sequences. We design two metrics to assess the positive and negative impact of data on the model’s EOS tendency, and combine them to rank and filter the training data. Experimental results show that removing a small portion of the data can significantly reduce the hallucinations of models trained on it. These findings further validate our hypothesis and provide simple yet effective solutions for mitigating multimodal hallucinations in LVLMs. 2 EOS Decision In autoregressive language models, sequences are completed through continuous next-token prediction (NTP) . The termination of the process is achieved by introducing a special end-of-sentence (EOS) token, denoted as vEOS, into the vocabulary. At each NTP step, the model chooses between a regular content token and vEOS, deciding whether to continue the sequence generation or end it, which we refer to as EOS decision (Newman et al., 2020). In this section, we delve into how LVLMs reach EOS decisions. Specifically, in Section 2.1, we analyze the contextual information that the model relies on to predict vEOS; in Section 2.2, we explore how the model adjusts its tendency to terminate generation with the multimodal input. Corresponding findings are further discussed in Section 2.3. 2.1 Information Basis of EOS Decision We first investigate where the information for EOS decisions comes from in the context. Given that the context usually contains a long paragraph with multiple sentences, we group the context tokens into three parts: image tokens, preceding sentences, and the current sentence, to observe their respective contributions to the model’s prediction decision. For comparison, we also examine the information for non-EOS target predictions occurring in the 11767 0 5 10 15 20 25 30 Layer 0.0 0.2 0.4 0.6 0.8 1.0 Information Flow from image from current sentence from preceding sentences 17% 41% 42% 0 5 10 15 20 25 30 Layer 0.0 0.2 0.4 0.6 0.8 1.0 Information Flow from image from current sentence from preceding sentences 16% 14% 58% Figure 2: Significance of the information flows from different parts of the input context to the target position during the prediction of a random token (left) and the EOS token (right). The significance refers to the proportion of these information flows out of a layer’s total flows. middle of a sequence, where the model needs to predict a regular content token. Since the context exposed for vEOS prediction is the entire sequence, for a fair comparison, non-EOS targets are randomly selected from the last 10 tokens of the last sentence in the sequence. This ensures that they have access to all previous sentences while perceiving a sufficient portion of the current sentence. We adopt the saliency score (Simonyan et al., 2013; Michel et al., 2019) as the metric for investigation. The saliency score of a token represents the sensitivity of the model to this token, i.e., how much its change affects the model prediction. As suggested by Wang et al. (2023b), we use the saliency score to quantify the information flow between tokens. 
Concretely, we feed the model with the first n−1 tokens to predict the n-th target token through a forward pass, and obtain the crossentropy loss L(x) at the n-th target position. The saliency matrix I is given by I = A ⊙∂L(x) ∂A , where A denotes the self-attention score matrix of the language model, ⊙means element-wise product, and I(i, j) reflects the significance of the information flow from the j-th token to the i-th token. We compare the information flow patterns of EOS predictions and non-EOS predictions to elucidate the information basis of EOS decisions. Implementation details. We choose the 7b version of LLaVA-1.5 as the model, containing a language decoder with 32 layers and 32 attention heads. The saliency matrix per layer is derived by averaging across all heads. The data used for investigation comes from Detail23K, a subset of the LLaVAInstruction dataset (Liu et al., 2023c), containing 23K detailed image descriptions for instruction tuning. For each run, we calculate the expectation over a random sample of 500 data entries. Results. As illustrated in Fig. 2, we first observe a pronounced information flow from contextual tokens to the target position, especially at higher layers (near the output). This implies a clear information aggregation pattern for model prediction. Then, we want to figure out where the information used for prediction comes from. As shown in Fig. 2 (left), for non-EOS predictions, the significance of information flows from the current sentence is comparable to that from previous sentences, despite the latter being significantly longer. This indicates that the current sentence is of great importance to model predictions. However, when the model is tasked with predicting vEOS, as depicted in Fig. 2 (right), the significance of information flows from previous sentences significantly increases and dominates. This suggests that the model, when predicting vEOS, places more emphasis on integrating information from all already generated content. This distinctive behavior indicates that the model’s EOS decision is related to the current state of the entire sequence. Thus, we speculate that the model might be actively assessing the completeness of its text generation relative to its visual input, i.e., whether the current text is sufficient to describe its perceived visual information. 2.2 Semantic Comparison for EOS Decision To validate the hypothesis from Section 2.1, we intervene in the multimodal input context and analyze the model’s tendency for EOS predictions. Please note that the EOS decision does not solely occur at the final position of a sequence but at every position. However, for a well-trained language model, EOS predictions typically occur at the end of each sen11768 Figure 3: The predictive probability of vEOS at various target positions, fitted by exponential functions. Position denotes the relative location i/N of the i-th target token among all N target tokens in the sequence. tence, i.e., the position right after the period. Hence, we focus on these target positions. We employ the same data and model mentioned in Section 2.1 for analysis, and obtain the conditional probabilities of vEOS at various target positions through a forward pass. Fig. 3 (dotted line) illustrates the model’s expected EOS tendency at each target position. A clear trend is that such a tendency increases as the sequence lengthens, implying the correlation between the textual richness and the EOS tendency. 
However, this correlation could stem from a variety of factors, for example, the length bias in training data can prompt the model to rely on positional embeddings for vEOS predictions. To ablate potential disturbances, we additionally design three context manipulation methods: • Visual reduction (image−): Applying a Gaussian noise mask to the input image, to reduce recognizable semantics in the image. • Visual augmentation (image+): Concatenating the image with a random one, to introduce visual information not described in the current text.2 • Textual reduction (text−): Using an attention mask to hide a portion of the exposed text. Here, we mask the first 30 tokens to ensure the coherence of the adjacent context for vEOS predictions in the end part of the sequence. These manipulation methods enable the augmentation or reduction of the multimodal contextual semantics without altering the sequence length. Results. As illustrated in Fig. 3, the reduction of image information through noise notably increases the model’s tendency to predict vEOS. Conversely, introducing new image information or concealing text information, both implying a reduction in the 2We also implement a variant that replaces the input image with a random new one instead of concatenation, to avoid increasing the absolute information richness (see Appendix B.2). relative textual completeness, lead to a decreased tendency of vEOS prediction. These observations further support our conjecture that the model tends to assess the completeness of the current text to make an EOS decision, particularly, by comparing the generated text to the input image. Specifically, the more completely the image is described, the more likely the model is to terminate generation. 2.3 Discussion Our investigation on the information basis and the model’s intrinsic criteria for EOS decisions reveal that models consider the current state of the entire sequence (Section 2.1) and assess the completeness of generated text relative to the image (Section 2.2). These findings suggest that while models may fit the training data length distribution and generate text beyond their capability limits, they still retain the inherent potential to adjust generation length according to visual perception. When the model tends to terminate generation, it can imply that the currently generated text adequately describes the visual information that the model can perceive. In Section 3, we explore how this potential can be harnessed to mitigate multimodal hallucinations. 3 Mitigating Multimodal Hallucinations Inspired by the preceding analysis, we propose two approaches to mitigate multimodal hallucinations: (1) a learning objective, namely Selective EOS Supervision (Section 3.1), which unlocks the model’s capability to make EOS decisions at proper positions, thereby mitigating hallucinations; (2) a data filtering strategy, namely Scoring EOS Supervision (Section 3.2), which eliminates training data that may hinder the model’s capability to terminate generation in a timely manner. 3.1 Selective EOS Supervision for Training The instruction tuning of LVLMs typically utilizes Maximum Likelihood Estimation (MLE) as the training objective. Given the visual content v and previous tokens w<, the model predicts a probability distribution P V = {p1, p2, · · · , p|V|} over the vocabulary V to determine the next word, where pj represents the probability of the j-th word in V. 
The model parameter θ is optimized to maximize the likelihood of the label word indexed y, with the loss function defined as: LMLE = −log(py|v, w<; θ). 11769 !! !" ⋯!# ⋯!$%& maximize !! !" ⋯!' ⋯ maximize ! = #$% ! ≠#$% Figure 4: Illustration of the probability distribution derived from our proposed Selective EOS Supervision. Arrows indicate the maximizing and minimizing effects of the training objective on the probability of each word. When the label is not EOS, the EOS token is excluded from the probability distribution. With such an objective, two optimization situations would happen regarding the vEOS prediction: first, when the label is vEOS, the model’s tendency to predict vEOS will be enhanced; second, when the label is not vEOS, and if the model assigns some probability to vEOS, it will be penalized, becoming less likely to predict vEOS. Recalling our analysis in Section 2, the model’s tendency for vEOS prediction implies that the current text adequately represents its perceived visual information. Thus, in the second situation, stopping generation is the right choice. However, as the corresponding label is not vEOS but a regular content token, the model will be discouraged from stopping and encouraged to continue generating content that may exceed its visual perception limits. Therefore, we aim to selectively preserve the first optimization situation, allowing the model to learn when to end generation, while minimizing the second optimization situation, to prevent compromising the model’s EOS decision ability due to overly detailed training data. To achieve the aforementioned goal, we implement a minor modification to MLE. Concretely, at positions where the label is not vEOS, we exclude vEOS from the calculation of probability distribution. This means that the label’s probability is determined using a modified softmax operation: py = softmax∗(zy) = exp(zy) P j∈V\{vEOS} exp(zj), where z denotes logits. Since vEOS does not participate in the probability distribution, it will not be suppressed by maximizing the label’s probability, as depicted in Fig. 4. This modification prevents MLE from undermining the model’s inherent tendency to predict vEOS. For positions where the label is vEOS, we retain vanilla MLE as objective, allowing the model to learn when to end sequences. 3.1.1 Experimental Settings Models and datasets. Our training objective can be applied to any LVLMs with an EOS token and optimized by MLE. As a representative, we select two widely used open-source LVLMs, LLaVA (Liu et al., 2023c) and MiniGPT (Zhu et al., 2023) series. Among them, LLaVA, LLaVA-1.5 (Liu et al., 2023b), and MiniGPT-v2 (Chen et al., 2023a) are trained with data recipes that include the LLaVAInstruction dataset. Thus, we validate our method with these models by fine-tuning them on LLaVAInstruction. Additionally, our experiment results show that a smaller subset of LLaVA-Instruction150K which contains detailed captions, Detail23K, has a similar effect but brings significant computational efficiency. Thus, most of our experiments are conducted with Detail23K. If not specified, models undergo one epoch of training with LoRA (Hu et al., 2022). Other training details remain consistent with the models’ official documentation. Evaluation. 
Following previous works (Huang et al., 2023; Yu et al., 2023a), we evaluate model hallucination with Caption Hallucination Assessment with Image Relevance (CHAIR) (Rohrbach et al., 2018), a framework that quantifies object hallucination in image captions by comparing generated objects to the ground truth objects. The sentence-level score, CHAIRS, represents the proportion of captions that contain hallucinations, and the instance-level score, CHAIRI, denotes the frequency of hallucinated objects relative to all mentioned objects by the model. In addition, we measure an object recall to evaluate the semantic richness of generated captions. Our CHAIR tests involve 500 images randomly chosen from the MSCOCO validation set (Lin et al., 2014). We also adopt another metric FaithScore (Jing et al., 2023) to evaluate caption hallucination. It verifies the consistency between atomic facts in the caption and the input image with LLMs and visual expert models, for which we employ ChatGPT (OpenAI, 2023) and OFA (Wang et al., 2022). It also provides a sentence-level score, FaithScoreS. Baselines. Since our method facilitates models to timely terminate generation, which often results in shorter responses, we incorporate baselines that simply reduce the generation length, including sequence truncating and decoding with a length penalty. The truncating method keeps only the initial R% of words in each caption, and the decoding method adopts an exponential length penalty 11770 Table 1: Hallucination performance of different models. w/ Cap. and w/ Inst. denote fine-tuning models with the detailed caption subset, Detail23K, and with the full LLaVA-Instruction-150K, respectively. Faith: FaithScore. Row Model Method Length CHAIRS ↓ CHAIRI ↓ Recall ↑ Faith ↑ FaithS ↑ 1 LLaVA-1.5 (7b) 100.6 50.0 15.4 77.1 87.0 68.8 2 VCD 100.4 48.6 14.9 77.3 87.1 70.2 3 OPERA 98.6 47.8 14.6 76.8 88.0 72.6 4 OPERA (fast) 85.3 48.6 14.5 76.7 87.7 71.3 5 Ours (w/ Inst.) 76.2 36.8 11.3 74.3 88.4 73.0 6 Ours (w/ Cap.) 79.7 40.2 12.3 75.7 89.3 72.3 7 LLaVA-1.5 (13b) 100.9 47.2 13.0 77.3 87.6 73.1 8 Ours (w/ Cap.) 85.1 36.8 11.4 75.3 88.8 72.8 9 LLaVA (7b) 57.8 35.4 13.8 64.8 86.9 67.4 10 Ours (w/ Cap.) 39.9 27.0 13.2 57.1 88.9 71.6 11 MiniGPTv2 (7b) 87.2 38.0 11.1 66.3 85.6 67.8 12 Ours (w/ Cap.) 62.2 27.0 9.8 66.6 89.9 76.0 72 73 74 75 76 77 Recall 11 12 13 14 15 16 CHAIRI Truncate Len. Penalty Ours Figure 5: Hallucination vs. Recall performance of LLaVA-1.5 (7b). Ours: the models fine-tuned on Inst. and Cap. respectively with our training objective. to adjust the score of the EOS token during generation. Varying the truncating proportion or the length penalty factor leads to different generation length, and affects both hallucination and recall performance. Additionally, we include two recently proposed plug-in methods: (1) Visual Contrastive Decoding (VCD) (Leng et al., 2023), which contrasts the output distributions derived from the original and noisy visual inputs, to reduce the influence of the model’s parametric knowledge. (2) Over-Trust Penalty and Retrospection-Allocation (OPERA) (Huang et al., 2023), a decoding strategy that penalizes the model’s over-reliance on certain tokens and allows roll-back when needed. We test VCD at different noise steps of 200, 500, 700, and 999, and report the optimal results. For OPERA, our implementation follows the suggestions in their released code, including two hyperparameter configurations for standard and fast inference. 3.1.2 Results Versus the original model. 
As shown in Table 1, after a single training epoch on the detailed caption 0.0 0.2 0.4 0.6 0.8 1.0 Epoch 12 14 16 CHAIRI 30 40 50 60 CHAIRS CHAIRI (MLE) CHAIRI (Ours) CHAIRS (MLE) CHAIRS (Ours) Figure 6: CHAIR performance trends of LLaVA-1.5 (7b) throughout training on LLaVA-Instruction-150K. subset, Detail23K, using our learning objective, all models tend to produce shorter captions and notably reduce hallucinations at both the sentence and instance levels (r1 vs r6, r7 vs r8, etc.). This improvement is even more significant when using the full 150K instruction data (r1 vs r5), resulting in a 26.4% and 26.6% decrease in CHAIRS and CHAIRI of LLaVA-1.5 (7b), respectively. While our method leads to some decrease in recall (e.g., −1.8% of LLaVA-1.5 (7b) and −2.6% of LLaVA1.5 (13b)), we view this as a beneficial compromise since the models become more conservative and less likely to “guess” uncertain visual content. More analysis can be found in Appendix B.3. Versus baselines. As demonstrated in Fig. 5, the truncating and length-penalty decoding baselines, with varying length-controlling configurations, effectively reduce hallucinations at the cost of Recall. However, these methods fall short of ours for either being less effective in alleviating hallucinations or resulting in more significant recall loss. Our method also outperforms existing methods, i.e., VCD and OPERA, as shown in Table 1. Our proposed method does not require additional data 11771 Table 2: CHAIR evaluation results of the LLaVA (7b) models instruction-tuned from scratch. Loss Length CHAIRS CHAIRI Recall MLE 57.8 35.4 13.8 64.8 Ours 36.1 24.2 11.6 55.9 Combined 42.7 26.6 11.0 57.5 construction or other expert models. Furthermore, unlike decoding methods, it does not slow down inference and remains technically compatible with various decoding strategies. Therefore, it presents a viable, practical supplement or alternative to current methods. Versus MLE. To confirm that the improvement in model hallucination performance results from our modification to MLE, we also conduct a comparison by further training the model using the vanilla MLE. As shown in Fig. 6, the performance of the model optimized by MLE varies throughout the training and remains at the original level. This variation suggests that different training samples impact the model differently; some may enhance the model’s EOS tendency while others do the opposite, collectively preserving the model’s initial generation habits. In contrast, with our modified learning objective, the degree of model hallucination steadily decreases throughout one epoch of training, with the model eventually significantly outperforming its MLE counterpart. This indicates that our selective supervision consistently enhances the model’s EOS decision with varying data inputs. Instruction tuning from scratch. Beyond further training existing models, our method is also compatible with instruction-tuning new models, starting from a vision-language aligned yet not instruction-tuned state. Table 2 presents the results of fine-tuning LLaVA for 3 epochs with LLaVAInstruction-150K. With our learning objective, the model’s sentence-level and instance-level hallucinations are reduced by 31.6% and 15.9%, respectively. Combining our learning objective with the vanilla MLE at a 1:1 ratio achieves a more balanced performance between hallucinations and recall. 
3.2 Scoring EOS Supervision for Training Data Filtering As the preceding analysis shows, learning from overly detailed data can impair a model’s ability to predict vEOS, so an intuitive solution is to filter out such “harmful” training data. As described in Section 3.1, there exist two optimization situations regarding vEOS prediction, corresponding to a positive effect that enhances the model’s EOS tendency when the label is vEOS and a negative effect otherwise. We thus design two metrics to quantitatively evaluate the two effects on models when trained with a certain data sample: Spos= − N X i=1 [yi =vEOS] log(pvEOS|v, w<; θ∗); Sneg= − N X i=1 [yi ̸=vEOS] log(1−pvEOS|v, w<; θ∗). Here, θ∗is a reference model used for evaluating the data. For positions where the label is vEOS, we define Spos as the cross-entropy loss of the reference model predicting the label. A large crossentropy loss indicates that, on this particular training data, the model fails to predict vEOS to end the sequence, and the feedback from the training loss will enhance the model to learn this capability. Thus, Spos quantifies the positive effect of the data on the model’s EOS prediction. Conversely, for positions where the label is not vEOS, if the model tends to predict vEOS, this tendency will be undesirably suppressed. Particularly, a larger pvEOS leads to a more significant negative effect, especially when pvEOS approaches 1. Therefore, as defined above, Sneg serves to estimate the negative effect of the data on the model’s EOS decision. Intuitively, our goal is for Spos to be as high as possible, indicating strong penalties for the model’s inability to predict vEOS where it should, and for Sneg to be as low as possible, reflecting minimal suppression on the model’s EOS tendency. Therefore, we calculate a composite score Sfinal =Sneg−Spos to estimate the “harmfulness” of the data. It is recommended to remove the highest-scoring data parts from training to achieve a more desired outcome for appropriate EOS decisions. With the shared goal of preserving the model’s EOS decision capability, our data filtering strategy can serve as an alternative to the Selective EOS Supervision described in Section 3.1. 3.2.1 Experimental Settings Data filtering. We apply the proposed data filtering strategy to the LLaVA-Instruction-150K dataset. The model used for scoring Sfinal is the instructiontuned version of LLaVA-1.5 (7b). We test three data filtering ratios, ranging from 10% to 30%, to remove data with the highest Sfinal. Additionally, we evaluate a random filtering strategy where 20% 11772 Table 3: CHAIR evaluation results of models trained with different data. CS/I denotes CHAIRS/I. Row Train. Data Model Performance Filter Len. Len. CS CI Recall 1 Original 178.3 57.8 35.4 13.8 64.8 2 10% 171.7 63.7 35.4 14.0 64.5 3 20% 168.2 45.5 27.0 10.6 58.9 4 30% 166.7 49.2 29.4 11.7 58.0 5 Random 178.2 68.9 35.5 11.8 61.9 6 Reversed 176.8 100.6 46.6 18.9 68.6 of the data is removed randomly, as well as a reversed filtering strategy, with 20% of data with the lowest Sfinal being removed. Models. We fine-tune the LLaVA (7b) model from scratch with the filtered data to validate their effectiveness. Following common practice, all models are trained for 3 epochs with a batch size of 128. We adopt QLoRA (Dettmers et al., 2023) to reduce computational load. Note that in this subsection, models are trained with the vanilla MLE. 
3.2.2 Results As shown in Table 3, by removing a small proportion, i.e., 20%, of “harmful” data from the original training set, the model significantly reduces learning hallucinations during instruction tuning, resulting in the sentence-level and instance-level hallucinations reduced by 23.7% and 23.2%, respectively. In contrast, the reversed filtering which removes the least “harmful” data leads to opposite effects, greatly exacerbating the model hallucination, while the random removal brings no improvements in sentence-level hallucination performance. This shows that our criteria used for data filtering well reflect the impact of the data on the model’s ability to end generation. Another interesting finding is that filtering the data does not bring a big change in the length of the training data, but it does significantly affect the length of the model generation. For instance, the reversed filtering leaves the average length of the training data almost unchanged, but the average length of model generation nearly doubles. This implies that the impact of our data filtering strategy does not come from changing the length distribution of the training data. Instead, it affects the model through manipulating the EOS supervision, further validating our motivation. 3.3 Discussion In this section, to fully exploit the potential of the model to properly end generation according to its visual perception, we propose to mitigate hallucinations by not suppressing its inherent EOS tendency, through a training objective (Section 3.1) and a data filtering strategy (Section 3.2). From a practical perspective, the former approach has the merits of broader applicability and easier deployment, especially when used to further train existing models. The latter has greater compatibility since filtered data can be paired with various training methods. 4 Related Works Hallucination Origins. Investigations on the causes of multimodal hallucination in LVLMs identify three main factors: (1) Limited visual representations. For example, the visual encoders commonly employed in LVLMs depict abstract features while struggling to capture fine-grained visual details (Jiang et al., 2023b; Tong et al., 2024; Wang et al., 2023a). Jiang et al. (2023a) observe a modality gap persists between visual and textual features, despite vision-language alignment. (2) Models’ over-reliance on parametric knowledge, such as statistical biases and language priors, rather than on visual evidence (Zhou et al., 2023; Leng et al., 2023; Liu et al., 2023a; Zhai et al., 2023; Guan et al., 2023). (3) Inferior data for instruction tuning. This includes insufficient visual supervision (Chen et al., 2023b), a lack of positive/negative human feedback (Yu et al., 2023b), and the presence of hallucinations within the training data (Yu et al., 2023a; Liu et al., 2023a). Our paper identifies a new source of hallucinations: overly detailed training data hinders the model’s inherent EOS decision ability, further enriching existing explanations. Mitigation Solutions. An effective way to reduce hallucinations is to construct high-quality data, including employing automatic data cleaning pipelines (Yu et al., 2023a), generating (Liu et al., 2023a) or rewriting (Wang et al., 2024) training data with LLMs, and integrating human feedback into annotations (Gunjal et al., 2023). 
Training approaches view hallucinatory data as negative examples, and adopt preference optimization (Zhai et al., 2023; Zhao et al., 2023b; Sun et al., 2023; Yu et al., 2023b; Li et al., 2023a) or contrastive learning (Jiang et al., 2023a) to enhance models’ resistance to hallucinations. Inference strategies focus on the decoding process, suppressing models’ 11773 reliance on parametric biases (Leng et al., 2023) or penalizing inferior attention patterns (Huang et al., 2023). Other works explore posthoc-fixing ways to rectify hallucinations in model outputs, by training a revisor model (Zhou et al., 2023), employing expert models (Yin et al., 2023b), and prompting the original model for self-correction (Lee et al., 2023). In this paper, we propose a new learning objective and a data filtering strategy, belonging to the training and data perspectives, respectively. 5 Conclusion This paper investigates the multimodal hallucination issue in large multimodal models. We suggest that overly detailed training data can prevent the model from stopping generation at the appropriate time, thus leading to hallucinated outputs. By examining the model’s inner behavior of EOS prediction, we discover that the model inherently holds the potential to terminate generation based on its visual perception limits. To enhance such potential, we develop two approaches, a learning objective for training models and a data filtering strategy for selecting training data, both of which facilitate the model learning to timely terminate generation and significantly reduce hallucinations. Limitations This work presents a novel perspective on the origins of multimodal hallucinations in large multimodal models with corresponding solutions. However, it faces several limitations. First, it focuses solely on generative tasks, i.e., detailed image description, without covering hallucinations in broader tasks like classification-oriented Visual Question Answering (VQA). Second, our solutions are examined only on multimodal models, though technically they could also be applied to unimodal large language models. We leave this possibility for future exploration. Third, our solutions mitigate hallucinations by enhancing the model’s ability to timely conclude sequences. While effective, they address only the simplest source among various causes of hallucinations. Fully solving the problem of hallucination remains a substantial challenge. Ethics Statement This work focuses on reducing hallucinations in large multimodal models to enhance their reliability and trustworthiness. We have carefully considered the ethical implications of our work and anticipate no significant ethical concerns. This work was carried out using publicly available and commonly used data and models, and our findings may inherit the biases and limitations carried in these resources. Acknowledgements We thank all reviewers for their insightful comments and suggestions. This work was partially supported by the Beijing Natural Science Foundation (No. L233008), the National Natural Science Foundation of China (No. 62072462), and the Outstanding Innovative Talents Cultivation Funded Programs 2023 of Renmin University of China. References Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. 
In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Jun Chen, Deyao Zhu, Xiaoqian Shen, Xiang Li, Zechun Liu, Pengchuan Zhang, Raghuraman Krishnamoorthi, Vikas Chandra, Yunyang Xiong, and Mohamed Elhoseiny. 2023a. Minigpt-v2: large language model as a unified interface for vision-language multi-task learning. ArXiv preprint, abs/2310.09478. Zhiyang Chen, Yousong Zhu, Yufei Zhan, Zhaowen Li, Chaoyang Zhao, Jinqiao Wang, and Ming Tang. 2023b. Mitigating hallucination in visual language models with visual supervision. ArXiv preprint, abs/2311.16479. Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. 2023. Qlora: Efficient finetuning of quantized llms. ArXiv preprint, abs/2305.14314. Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, et al. 2023. Mme: A comprehensive evaluation benchmark for multimodal large language models. ArXiv preprint, abs/2306.13394. Tianrui Guan, Fuxiao Liu, Xiyang Wu, Ruiqi Xian, Zongxia Li, Xiaoyu Liu, Xijun Wang, Lichang Chen, Furong Huang, Yaser Yacoob, Dinesh Manocha, and Tianyi Zhou. 2023. Hallusionbench: An advanced diagnostic suite for entangled language hallucination & visual illusion in large vision-language models. arXiv e-prints, pages arXiv–2310. Anisha Gunjal, Jihan Yin, and Erhan Bas. 2023. Detecting and preventing hallucinations in large vision language models. ArXiv preprint, abs/2308.06394. 11774 Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. Lora: Low-rank adaptation of large language models. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net. Qidong Huang, Xiaoyi Dong, Pan Zhang, Bin Wang, Conghui He, Jiaqi Wang, Dahua Lin, Weiming Zhang, and Nenghai Yu. 2023. Opera: Alleviating hallucination in multi-modal large language models via over-trust penalty and retrospection-allocation. ArXiv preprint, abs/2311.17911. Chaoya Jiang, Haiyang Xu, Mengfan Dong, Jiaxing Chen, Wei Ye, Ming Yan, Qinghao Ye, Ji Zhang, Fei Huang, and Shikun Zhang. 2023a. Hallucination augmented contrastive learning for multimodal large language model. ArXiv preprint, abs/2312.06968. Dongsheng Jiang, Yuchen Liu, Songlin Liu, Xiaopeng Zhang, Jin Li, Hongkai Xiong, and Qi Tian. 2023b. From clip to dino: Visual encoders shout in multimodal large language models. ArXiv preprint, abs/2310.08825. Liqiang Jing, Ruosen Li, Yunmo Chen, Mengzhao Jia, and Xinya Du. 2023. Faithscore: Evaluating hallucinations in large vision-language models. ArXiv preprint, abs/2311.01477. Seongyun Lee, Sue Hyun Park, Yongrae Jo, and Minjoon Seo. 2023. Volcano: mitigating multimodal hallucination through self-feedback guided revision. ArXiv preprint, abs/2311.07362. Sicong Leng, Hang Zhang, Guanzheng Chen, Xin Li, Shijian Lu, Chunyan Miao, and Lidong Bing. 2023. Mitigating object hallucinations in large visionlanguage models through visual contrastive decoding. ArXiv preprint, abs/2311.16922. Lei Li, Zhihui Xie, Mukai Li, Shunian Chen, Peiyi Wang, Liang Chen, Yazheng Yang, Benyou Wang, and Lingpeng Kong. 2023a. 
Silkie: Preference distillation for large visual language models. ArXiv preprint, abs/2312.10665. Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. 2023b. Evaluating object hallucination in large vision-language models. ArXiv preprint, abs/2305.10355. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In Computer Vision– ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740–755. Springer. Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, and Lijuan Wang. 2023a. Mitigating hallucination in large multi-modal models via robust instruction tuning. ArXiv preprint, abs/2306.14565. Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. 2023b. Improved baselines with visual instruction tuning. ArXiv preprint, abs/2310.03744. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2023c. Visual instruction tuning. ArXiv preprint, abs/2304.08485. Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 14014–14024. Benjamin Newman, John Hewitt, Percy Liang, and Christopher D. Manning. 2020. The EOS decision and length extrapolation. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 276–291, Online. Association for Computational Linguistics. OpenAI. 2023. Introducing chatgpt. https://openai. com/blog/chatgpt. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744. Anna Rohrbach, Lisa Anne Hendricks, Kaylee Burns, Trevor Darrell, and Kate Saenko. 2018. Object hallucination in image captioning. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4035–4045, Brussels, Belgium. Association for Computational Linguistics. Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2013. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034. Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, LiangYan Gui, Yu-Xiong Wang, Yiming Yang, et al. 2023. Aligning large multimodal models with factually augmented rlhf. ArXiv preprint, abs/2309.14525. Shengbang Tong, Zhuang Liu, Yuexiang Zhai, Yi Ma, Yann LeCun, and Saining Xie. 2024. Eyes wide shut? exploring the visual shortcomings of multimodal llms. ArXiv preprint, abs/2401.06209. Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, et al. 2023a. Evaluation and analysis of hallucination in large vision-language models. ArXiv preprint, abs/2308.15126. 11775 Lean Wang, Lei Li, Damai Dai, Deli Chen, Hao Zhou, Fandong Meng, Jie Zhou, and Xu Sun. 2023b. Label words are anchors: An information flow perspective for understanding in-context learning. ArXiv preprint, abs/2305.14160. Lei Wang, Jiabang He, Shenshen Li, Ning Liu, and Ee-Peng Lim. 2024. 
Mitigating fine-grained hallucination by fine-tuning large vision-language models with caption rewrites. In International Conference on Multimedia Modeling, pages 32–45. Springer. Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. 2022. OFA: unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 23318–23340. PMLR. Shukang Yin, Chaoyou Fu, Sirui Zhao, Ke Li, Xing Sun, Tong Xu, and Enhong Chen. 2023a. A survey on multimodal large language models. ArXiv preprint, abs/2306.13549. Shukang Yin, Chaoyou Fu, Sirui Zhao, Tong Xu, Hao Wang, Dianbo Sui, Yunhang Shen, Ke Li, Xing Sun, and Enhong Chen. 2023b. Woodpecker: Hallucination correction for multimodal large language models. ArXiv preprint, abs/2310.16045. Qifan Yu, Juncheng Li, Longhui Wei, Liang Pang, Wentao Ye, Bosheng Qin, Siliang Tang, Qi Tian, and Yueting Zhuang. 2023a. Hallucidoctor: Mitigating hallucinatory toxicity in visual instruction data. ArXiv preprint, abs/2311.13614. Tianyu Yu, Yuan Yao, Haoye Zhang, Taiwen He, Yifeng Han, Ganqu Cui, Jinyi Hu, Zhiyuan Liu, Hai-Tao Zheng, Maosong Sun, et al. 2023b. Rlhf-v: Towards trustworthy mllms via behavior alignment from finegrained correctional human feedback. ArXiv preprint, abs/2312.00849. Bohan Zhai, Shijia Yang, Xiangchen Zhao, Chenfeng Xu, Sheng Shen, Dongdi Zhao, Kurt Keutzer, Manling Li, Tan Yan, and Xiangjun Fan. 2023. Halleswitch: Rethinking and controlling object existence hallucinations in large vision language models for detailed caption. ArXiv preprint, abs/2310.01779. Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023a. A survey of large language models. ArXiv preprint, abs/2303.18223. Zhiyuan Zhao, Bin Wang, Linke Ouyang, Xiaoyi Dong, Jiaqi Wang, and Conghui He. 2023b. Beyond hallucinations: Enhancing lvlms through hallucinationaware direct preference optimization. ArXiv preprint, abs/2311.16839. Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, and Huaxiu Yao. 2023. Analyzing and mitigating object hallucination in large vision-language models. ArXiv preprint, abs/2310.00754. Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. 2023. Minigpt-4: Enhancing vision-language understanding with advanced large language models. ArXiv preprint, abs/2304.10592. A Additional Implementation Details A.1 Experiment in Figure 1 For the trend depicted in Fig. 1, the model undergoes fine-tuning on the LLaVA-Instruction-150K dataset over 3 epochs and is evaluated with the same data used in Section 2. At the initial stages of training, the model shows notable fluctuations in performance due to the lack of prior fitting to the instruction data. To better demonstrate how data affects the model’s EOS tendency, we focus on the latter half of the training period, where performance begins to stabilize. A.2 Context Manipulation In Section 2.2, we reduce the semantics of images by overlaying them with a Gaussian noise mask. This involves gradually introducing minor amounts of Gaussian noise over T steps, mirroring the forward diffusion process used in image generation tasks (Ho et al., 2020). Our implementation follows that of Leng et al. 
(2023), and we set T to 500 for analysis. A.3 Computation Table 4 presents the computational cost of model training on a setup with 8 NVIDIA RTX A6000 GPUs. Our proposed training objective and data filtering strategy do not introduce a noticeable increase in training costs. In this work, all experimental results are derived from single runs, with greedy search as the decoding strategy. B Additional Results B.1 Information Aggregation Pattern In Section 2.1, we observe a significant information aggregation from the context to the target position during token prediction. This section further clarifies the details of this information aggregation pattern. At lower layers (near the input), the information from regular content tokens within a sentence converges at the sentence’s end, typically a period, 11776 Table 4: The computational burden of model training. Train. Strategy means either further training models that have already been fine-tuned for instruction following or instruction tuning from scratch. PEFT stands for the Parameter-Efficient Fine-Tuning (PEFT) strategies we adopt. Train. Strategy PEFT Model Data Epoch(s) Total GPU Time Further LoRA LLaVA-1.5 (7b) Cap. 1 ∼0.6 h LLaVA-1.5 (7b) Inst. 1 ∼9.0 h LLaVA-1.5 (13b) Cap. 1 ∼3.0 h LLaVA (7b) Cap. 1 ∼0.6 h MiniGPTv2 (7b) Cap. 1 ∼2.5 h From-scratch QLoRA LLaVA (7b) Inst. 3 ∼11.0 h 0 5 10 15 20 25 30 Layer 0.0 0.2 0.4 0.6 0.8 1.0 Relative Information Flow EOS Predictions others to periods periods to vEOS among others 0 5 10 15 20 25 30 Layer 0.0 0.2 0.4 0.6 0.8 1.0 Relative Information Flow Non-EOS Predictions others to periods periods to target among others Figure 7: Relative significance of the information flow from regular content tokens (others) to periods, from periods to the target position for prediction, and among others. The significance is averaged over information flow targets and normalized across these three aspects for clearer comparison. seemingly summarizing the entire sentence. At higher layers (near the output), this “summarized” information then aggregates to the target position for the next token prediction. Following Wang et al. (2023b), we illustrate these effects in Fig. 7, where such effects occur for both EOS and non-EOS predictions. This observation closely aligns with the findings by Wang et al. (2023b) in in-context learning (ICL), where the labels of in-context demonstrations act as "anchors" that aggregate information at lower layers and provide it for the final prediction at higher layers. This hierarchical information aggregation pattern elucidates how information moves within contexts and underpins our analysis in SecFigure 8: The predictive probability of the EOS token at different target positions within a sequence. tion 2.1. We hope these observations can shed some light on future research. B.2 Context Manipulation In Section 2.2, we design three context manipulation methods to analyze how the model adjusts its EOS tendency according to these interventions. In addition to these methods, we also implement a variant of visual augmentation (image+), where we replace the input image with a random new one instead of concatenating a random image with the input image. This method can also decrease the relative completeness of the text, while not necessarily increasing the absolute information richness. The results in Fig. 
8 demonstrate a similar impact from both variants, suggesting that the model does not merely compare the absolute semantic richness of the text and the image, but assesses the relative semantic completeness of the text to the image, i.e., whether the existing text encompasses the perceived visual information. This observation further supports our conjecture. B.3 Selective EOS Supervision EOS prediction tendency. In Fig. 9, we illustrate the EOS prediction tendency (average probability) of the LLaVA-1.5 (7b) model during further training on Detail23K. With Selective EOS Supervision 11777 0.0 0.2 0.4 0.6 0.8 1.0 Epoch 0.006 0.008 0.010 0.012 0.014 0.016 Avg. EOS Probability Ours MLE Figure 9: The average probability of the LLaVA-1.5 (7b) model predicting the EOS token at each position within the minibatch during further training. Table 5: Hallucinated and correct objects “omitted” from the original model outputs by our methods. Method #Halluc. #Correct Halluc. Rate↑ Ours (w/ Inst.) 263 104 71.7% Ours (w/ Cap.) 244 93 72.4% Table 6: Average correct and hallucinated object counts of generated captions. Original model: LLaVA-1.5 (7b). Model #Correct↑ #Hallucinated↓ Original model 2.45 0.90 Ours (w/ Cap.) 2.40 0.63 Ours (w/ Inst.) 2.36 0.55 proposed in Section 3.1, the model’s tendency to predict EOS rises and stabilizes, while the model optimized by MLE shows no change in this behavior. This suggests that the proposed training objective effectively helps the model regain its capability to timely conclude sequences. Dissecting omitted content. As our method reduces the generated content to alleviate hallucinations, it is interesting to investigate what is “omitted” by our method from the originally generated captions, specifically, how many “omitted” objects are correct and how many are hallucinations. We extract the generated objects from the outputs of both the original model and our further trained models, using the same technique as in the CHAIR evaluation. Then, we focus on these objects that are mentioned by the original model but not by our models, which are “omitted” from the original captions. As the results of the Halluc. Rate (hallucinated object rate of omission) in Table 5 shows, nearly 3/4 of the “omitted” objects are hallucinations, implying that such an omission is beneficial. Furthermore, we analyze the average counts of correct and hallucinated objects in the model generation, as a supplement to the CHAIR metrics, 0.0 0.2 0.4 0.6 0.8 1.0 Spos 0 20000 40000 0.0 0.2 0.4 0.6 0.8 1.0 Sneg 0 20000 40000 60000 2.0 1.5 1.0 0.5 0.0 0.5 1.0 1.5 2.0 Sfinal = Sneg Spos 0 10000 20000 Figure 10: The score distributions of Spos, Sneg, and Sfinal in the LLaVA-Instruction-150K dataset. to demonstrate more comprehensively how our method impacts the quality of model generation. As shown in Table 6, our method reduces hallucinations while largely preserving the correct content. B.4 Scoring EOS Supervision Data Score Distributions. In Section 3.2, we discuss two metrics, Spos and Sneg, which are summed over positions labeled as EOS and nonEOS, respectively. A natural concern is that the number of non-EOS positions far exceeds that of EOS positions, raising the question of whether the combined Sfinal might be dominated by Sneg. To clarify this, we examine the score distributions within the LLaVA-Instruction-150K dataset. As illustrated in Fig. 10, the value distributions of Spos and Sneg are comparable in magnitude, and the Sfinal distribution is approximately normal with zero mean. 
This indicates a balance between Spos and Sneg with neither metric dominating, and the top Sfinal scores necessitate both high Sneg and low Spos. Thus, by maintaining a straightforward formulation of Sfinal = Sneg −Spos without introducing a balancing hyperparameter, the contributions of both metrics are reflected. Initial experiments also reveal that relying solely on Sneg for data filtering increases hallucinations, as it can lead to mistakenly removing data with high Spos; whereas using Spos alone does reduce hallucina11778 Table 7: MME and POPE evaluation results of baselines and models trained with our proposed two methods. For LLaVA-1.5 (7b), we compare the original model (Baseline), the model trained with MLE, and the one with Selective EOS Supervision (Ours). For LLaVA (7b), Baseline and Ours refer to the models trained with the original data and the data filtered by our Scoring EOS Supervision, respectively. Model Method MME POPE Perception Cognition F1 Accuracy Precision Recall LLaVA-1.5 (7b) Baseline 1,516.1 348.2 85.9 86.9 94.0 79.1 MLE 1,470.9 372.5 86.1 87.0 93.6 79.7 Ours 1,490.4 367.9 86.0 86.8 93.5 79.5 LLaVA (7b) Baseline 883.1 263.6 73.3 63.8 58.8 97.5 Ours 910.9 260.0 71.2 59.7 56.0 98.1 tions, but is not as effective as Sfinal and will bring greater recall loss. Balancing both metrics yields the most desirable outcomes. B.5 MME and POPE Evaluation As mentioned in Limitations, our proposed techniques focus on mitigating hallucinations in generative tasks by adjusting the models’ propensity for appropriately concluding outputs. However, these methods are not directly transferable to addressing hallucination problems in broader Visual Question Answering (VQA) tasks, such as those evaluated in the MME (Fu et al., 2023) and POPE (Li et al., 2023b) benchmarks. The MME benchmark assesses the model’s capabilities in terms of perception and cognition, whereas POPE concentrates on object hallucinations. Both benchmarks challenge models with Yes-or-No questions. As shown in Table 7, our methods do not yield performance gains on these benchmarks. The effectiveness of our approaches in generative tasks suggests that a model’s failure to timely stop generation is an important hallucination source. However, addressing this issue alone does not fundamentally solve all hallucination problems as the origins of multimodal hallucinations are multifaceted. This area remains open for further investigation. B.6 Qualitative Results We present qualitative examples of our methods, Selective EOS Supervision in Fig. 11 and Scoring EOS Supervision in Fig. 12. The baseline models often produce hallucinations towards the end of their outputs, as they try to include too many details from the image, sometimes beyond their visual perception limits. This also explains why simply truncating sequences can reduce hallucinations. However, with our methods, the models better retain the innate ability to stop generation right after covering what they can visually perceive. This prevents the generation of overly lengthy, inaccurate, or irrelevant outputs that lower the overall quality and information density of the generated content, echoing the principle that “less is more.” 11779 Baseline: The image features a person wearing a wetsuit, standing in the water and holding a sailboard. The person is positioned in the middle of the scene, with the sailboard prominently visible in front of them. The water appears to be choppy, adding to the dynamic nature of the scene. 
In the background, there are several boats of varying sizes scattered across the water, creating a lively atmosphere. Some of the boats are closer to the person with the sailboard, while others are further away, providing a sense of depth to the image. Ours: The image features a person wearing a wetsuit, standing in the water and holding a sail. The person appears to be preparing to windsurf or engage in a water sport that involves a sail. The sail is large and positioned in front of the person, with the water surrounding them. Baseline: The image captures a young man skillfully performing a trick on a skateboard, jumping into the air while riding it. He is wearing a plaid shirt and appears to be in a park-like setting. The skateboard is positioned underneath him, with the wheels visible as he jumps. In the background, there are a few other people present, possibly watching the skateboarder or engaging in their own activities. The scene also features a bench, a chair, and a potted plant, adding to the park-like atmosphere. Ours: The image captures a young man skillfully performing a trick on his skateboard, jumping into the air with the skateboard beneath his feet. He is wearing a plaid shirt and appears to be in a park-like setting. The skateboarder is the main focus of the scene, with his impressive stunt drawing attention. Baseline: The image features a train station with a train on the tracks. A man is standing next to the train, holding a green trash can, likely preparing to clean the train. Another man is standing in the train, possibly a passenger or a worker. There are several other people in the scene, some of them standing near the train, while others are scattered around the station. A handbag can be seen placed on the ground, possibly belonging to one of the people in the scene. Ours: The image features a train station with a train on the tracks. A man in a blue shirt is standing next to the train, holding a green trash can. He appears to be in the process of unloading the trash can from the train. Another man is standing in the open door of the train, possibly waiting to get off or assisting with the unloading process. Baseline: The image features a wooden dining table with a tray holding three delicious donuts. The donuts are arranged in a visually appealing manner, with one donut placed in the center and the other two positioned on either side. The table appears to be in a restaurant setting, as there is a bench situated nearby. In addition to the donuts, there is a cup placed on the table, possibly containing a beverage to accompany the sweet treats. The overall scene conveys a pleasant and inviting atmosphere, perfect for enjoying a snack or dessert. Ours: The image features a wooden dining table with three donuts placed on a napkin. The donuts are of different sizes and are arranged in a way that they are stacked on top of each other. The table is positioned near a window, allowing natural light to illuminate the scene. Baseline: The image captures a man skillfully riding a wave on a surfboard in the ocean. He is wearing a wetsuit and appears to be enjoying the thrill of the sport. The surfer is positioned in the center of the scene, with the surfboard beneath him. In the background, there are several cars parked along the shore, likely belonging to beachgoers or spectators. The cars are scattered around the area, with some closer to the water and others further back. The scene is a lively representation of a day at the beach, with people enjoying various activities. 
Ours: The image captures a man in a black wetsuit skillfully riding a wave on a surfboard. He is positioned in the center of the scene, with the surfboard beneath him as he expertly navigates the water. The wave he is riding is quite large, providing an exciting challenge for the surfer. Baseline: The image features a man sitting at a desk, working on a laptop computer. He is surrounded by numerous books, which are scattered around the desk and on the floor. Some books are placed close to the laptop, while others are located further away. In addition to the laptop and books, there are a few other items on the desk, such as a mouse and a cell phone. A clock can also be seen on the wall, indicating the time. The scene suggests a focused and studious environment, with the man likely engaged in work or research. Ours: The image features a man sitting at a desk, working on a laptop computer. He is surrounded by numerous books, which are scattered across the desk and on the shelves behind him. The books vary in size and are placed in different orientations, creating a busy and studious atmosphere. Figure 11: Qualitative results of the LLaVA-1.5 (7b) model (Baseline) and its counterpart further trained on LLaVA-Instruction-150K with Selective EOS Supervision (Ours). 11780 Baseline: The image shows a bathroom with a white toilet and a white sink. The toilet is located on the left side of the bathroom, and the sink is situated on the right side. Ours: The image shows a white toilet with a white seat and a white tank. The toilet is located in a bathroom, and it has a window above it. Baseline: A person is holding a sandwich in their hand, and they are eating it. The sandwich is a large, juicy steak sandwich with a bun. There are two people in the image, one of them is eating the sandwich, and the other person is standing nearby. The sandwich is placed on a plate, and there are two bottles on the table. Ours: A man is holding a sandwich in his hand, and he is eating it. The sandwich is a large, juicy steak sandwich with a bun and meat. Baseline: In the image, a group of people are gathered around a large dining table, enjoying a meal together. There are several chairs around the table, and a few people are sitting in them. The table is set with various items, including cups, plates, and utensils. There are also a few bottles on the table, and a vase is placed in the center of the table. The scene appears to be a social gathering or a business meeting, as people are engaged in conversation and enjoying each other's company. Ours: In the image, there is a large dining table with a variety of dishes and utensils, including forks, knives, and spoons. The table is set for a formal dinner, with a white tablecloth and a centerpiece. There are multiple people sitting around the table, enjoying their meal. Baseline: The image shows a snow-covered street with a one-way sign and a traffic light. The street is empty, and the snow is piled up on the sidewalk. There are two cars parked on the street, one on the left side and the other on the right side. The cars are parked close to each other, and they are both facing the same direction. Ours: The image shows a snow-covered street with a one-way sign and a traffic light. The traffic light is green, indicating that it is safe to proceed. Baseline: The image shows a bathroom with a large mirror and a sink. The sink is made of white marble and has a faucet on it. The mirror is large and has a frame around it. 
The bathroom is clean and well-maintained, with a towel rack and a toilet in the background. Ours: The image shows a bathroom with a sink and a mirror. The sink is made of white marble and has a faucet on it. The mirror is large and has a frame around it. Baseline: A man is sitting at a desk with a laptop computer in front of him. He is wearing glasses and appears to be working on the computer. There are several books on the desk, and a cup is placed on the desk as well. The man is sitting in a chair, and the desk is surrounded by various items, including a lamp and a clock. Ours: In the image, a man is sitting at a desk with a laptop computer in front of him. He is wearing glasses and appears to be working on the computer. There are several books and papers on the desk, and a lamp is also present. Figure 12: Qualitative results of the LLaVA (7b) model trained with original LLaVA-Instruction-150K data (Baseline) and with the data filtered by Scoring EOS Supervision (Ours). 11781
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11782–11794
August 11-16, 2024 ©2024 Association for Computational Linguistics

Integrate the Essence and Eliminate the Dross: Fine-Grained Self-Consistency for Free-Form Language Generation

Xinglin Wang1*, Yiwei Li1*, Shaoxiong Feng2, Peiwen Yuan1, Boyuan Pan2, Heda Wang2, Yao Hu2, Kan Li1†
1 School of Computer Science, Beijing Institute of Technology  2 Xiaohongshu Inc
{wangxinglin,liyiwei,peiwenyuan,likan}@bit.edu.cn
{shaoxiongfeng2023,whd.thu}@gmail.com
{panboyuan,xiahou}@xiaohongshu.com
*Equal contribution. †Corresponding author.

Abstract

Self-consistency (SC), leveraging multiple samples from LLMs, shows significant gains on various reasoning tasks but struggles with free-form generation due to the difficulty of aggregating answers. Its variants, UCS and USC, rely on sample selection or voting mechanisms to improve output quality. These methods, however, face limitations due to their inability to fully utilize the nuanced consensus knowledge present within multiple candidate samples, often resulting in suboptimal outputs. We propose Fine-Grained Self-Consistency (FSC) to address these limitations by extracting and integrating segment-level commonalities from candidate samples, enhancing the performance of LLMs both in open-ended and reasoning tasks. Based on this, we present two additional strategies: candidate filtering, which enhances overall quality by identifying highly similar candidate sets, and merging, which reduces input token requirements by combining similar samples. The effectiveness of FSC is demonstrated through extensive experiments on various tasks, including summarization, code generation, and mathematical reasoning, using GPT-3.5-turbo and GPT-4. The results indicate significant improvements over baseline methods, showcasing the potential of FSC to optimize output quality by effectively synthesizing fine-grained consensus knowledge from multiple samples.1

1 Our code and data have been released on https://github.com/WangXinglin/FSC

1 Introduction

The remarkable success of large-scale language models (LLMs) has transformed the landscape of natural language processing, showcasing significant improvements across a wide range of tasks, from reasoning tasks with distinct answers such as arithmetic and commonsense reasoning (Wei et al., 2022) to free-form generation tasks like code generation (Austin et al., 2021) and summarization (Goyal et al., 2022). However, LLMs may still generate suboptimal samples in challenging tasks. Efforts to improve output quality involve selecting the best response from multiple samples based on specific criteria. This includes using trained models for reranking outputs (Ravaut et al., 2023) and employing LLMs to evaluate the responses (Liu et al., 2023b). However, both approaches require additional models and overlook the knowledge present among the candidates. Wang et al. (2023) introduce self-consistency (SC) to improve performance without additional models, which mitigates noise from individual sampling by employing a voting mechanism across multiple samples. Unfortunately, SC is limited to tasks where the final answer can be aggregated through precise matching. How to aggregate the answers for free-form problems remains unclear. Recently, some works seek to evolve the idea of self-consistency into open-ended generative tasks.
UCS (Jain et al., 2023) calculates the overlap of unigrams between candidates and then selects the final answer with the highest value. Alternatively, USC (Chen et al., 2023) leverages the capabilities of LLMs instead of rule-based criteria to choose the most consistent one. However, both approaches still rely on selection or voting mechanisms, which do not align well with the nature of free-form generation tasks, i.e., the final quality is determined by the entirety of the output content rather than by specific individual tokens. Therefore, sample-level selection methods only yield suboptimal outputs, primarily for two reasons: (1) they are unable to incorporate consensus knowledge from unselected samples, which, despite their lower overall quality, may contain locally valuable information that could enhance the quality of the selected sample; and (2) they cannot eliminate low-quality segments from selected samples to further improve overall quality. In short, such coarse-grained selection methods suppress or overlook the role of fine-grained consensus within the candidate samples.

Table 1 and Figure 1 depict scenarios that select-based SC methods struggle to address. The former shows that both methods perform poorly on code generation when the quality of the candidates is low. The latter illustrates, with a summarization case, a scenario in which the semantic information of the ground truth cannot be fully covered by any single candidate sample. Thus, regardless of which sample is selected, only a suboptimal output can be obtained.

Table 1: Accuracy on the code generation benchmark HumanEval with GPT-3.5-turbo. We generate 5 samples for each problem. The "5/0" distribution pertains to cases where all five generated programs are erroneous, while the "4/1" distribution indicates that one sample is correct and the other four are incorrect.
Distribution  USC     UCS     Random
5 / 0         0.0 %   0.0 %   0.0 %
4 / 1         13.9 %  13.9 %  20.0 %

To address this issue and better leverage the consensus knowledge among multiple samples for open-ended problems, we propose Fine-Grained Self-Consistency (FSC). Specifically, after generating multiple candidate samples, FSC extracts the segment-level common elements within them by taking full advantage of the LLM's text comprehension and contrasting capabilities. Subsequently, it synthesizes these consensus elements to produce an optimized output, effectively integrating the essence from the candidate samples. Based on this, we further propose two strategies: candidates filtering and merging. The former employs automated metrics to identify candidate sets that exhibit high similarity, thereby enhancing the overall quality of the candidates. The latter merges samples that are highly similar, effectively reducing the number of input tokens required. We evaluate the proposed FSC with GPT-3.5-turbo and GPT-4 on a wide range of tasks, including summarization, code generation, and mathematical reasoning. The results show that the proposed method outperforms the baselines by a large margin. Additional experiments indicate that the filtering strategy can further enhance performance by selecting better candidates, and that the merging strategy can reduce cost while maintaining performance. To summarize, this work includes the following contributions:

• We propose Fine-Grained Self-Consistency, designed to fully leverage the consensus knowledge extracted within multiple samples for tasks involving free-form generation.
• Two strategies, candidates filtering and merge, are devised to improve performance and minimize costs. • Extensive experiments show that our proposed method can surpasses competitive baselines. 2 Related Work Consistency-Based Response Selection Approaches. The literature presents a variety of consistency-based response selection approaches, typically incorporating a voting procedure to select the most frequently occurring response (Wang et al., 2023; Zhou et al., 2022; Portillo Wightman et al., 2023; Yue et al., 2023; Li et al., 2023b). The self-consistency approach proposed by Wang et al. (2023) demonstrates that by generating multiple responses for the same task and selecting the reasoning path leading to the most common final answer, the performance of chain-of-thought reasoning can be improved. Candidate responses can also be derived from different prompt variants corresponding to the same problem (Zhou et al., 2022; Portillo Wightman et al., 2023; Yue et al., 2023). For open-ended generation tasks, Jain et al. (2023) propose the n-gram consistency score to measure the pairwise similarity between candidate responses. The consistency score for each response is calculated as the sum of the pairwise similarity scores. Chen et al. (2023) propose leveraging LLMs themselves to directly select the most consistent answer among multiple candidates without an explicit definition of the pairwise similarity. In this work, we take a closer look into the coarse-grained limitations faced by selection based self-consistency methods on open-ended generation tasks, and propose Fine-Grained Self-Consistency to fully leverage the segment-level consensus knowledge extracted within multiple samples so as to gain better consistency output. Response improvement with multiple candidates. Recent studies show that the LLM can enhance its own prediction output based on candidate responses. Zheng et al. (2023) demonstrate 11783 Completions: (1)Andy Murray, British No 1 and two-time Grand Slam champion, is set to marry his girlfriend of almost 10 years, Kim Sears, in his hometown of Dunblane, Scotland. (2)When they were teenagers at the US Open in New York in 2005, Andy Murray and his long-time girlfriend Kim Sears met and they will get married in the Scottish town of Dunblane on Saturday. (3)Andy Murray and his girlfriend Kim Sears are set to get married in Dunblane, Scotland this Saturday. The couple, who have been together for over nine years, announced their engagement in November of last year. (4)Murray and Sears, who met when they were teenagers at the US Open in New York in 2005, confirmed their engagement last November after more than nine years together. (5)Andy Murray and his long-term girlfriend Kim Sears are set to get married in the Scottish town of Dunblane on Saturday. Ground truth: Andy Murray and long-time girlfriend Kim Sears will tie the knot in the Scottish town of Dunblane on Saturday. The British No 1 and his partner met when they were teenagers at the US Open in New York in 2005. Murray and Sears confirmed their engagement last November after more than nine years together. FSC: Andy Murray, the British No 1 and two-time Grand Slam champion, is set to marry his girlfriend of almost 10 years, Kim Sears, in his hometown of Dunblane, Scotland. The couple met as teenagers at the US Open in New York in 2005 and got engaged last November. Figure 1: Case from DailyMail with GPT-3.5-turbo. Each color represents a distinct semantic information. 
The input document is provided in the Appendix A.5 due to space constraints. Question LLM Responses Criteria of consistency Selected response … 1 2 n 3 Selection based self-consistency Question LLM Responses Integrate and generate … 1 2 n 3 Final response i Fine-grained self-consistency Extract common part of each segment Common part I II III IV V I II III IV V segment is the common part among all responses segment is the unique part among all responses Figure 2: Illustration of selection based self-consistency and proposed fine-grained self-consistency. that the given a trajectory of previously generated solutions, the LLM can iteratively produce superior solutions for an optimization task. Other researches focus on aggregating multiple reasoning chains and prompting the LLM to generate a better final response, which shows performance enhancement in multi-hop question answering (Yoran et al., 2023), mathematical reasoning (Yang et al., 2023), machine translation (Zhang et al., 2024) and code generation (Huang et al., 2023). Instead of prompting the LLM to generate a superior response, our proposed FSC focuses on extracting and integrating consistency, eliminating the need for post-answer guidance. We demonstrate that FSC effectively leverages multiple responses to enhance performance across a range of tasks. 3 Method The core idea behind fine-grained self-consistency is to measure and extract the commonality of each response generated by LLM at the segment level, and then fuse to generate the final response with superior commonality. Figure 2 illustrates the idea of select-based self-consistency and the proposed fine-grained self-consistency (FSC). Specifically, select-based self-consistency ranks the responses generated by LLM according to specific consistency criteria (Jain et al., 2023; Chen et al., 2023), and the top-ranked response is selected as the output. However, these methods do not assess the consistency at a more fine-grained level within the response. On the other hand, FSC first measures the commonality of each segment of the responses generated by LLM, and extracts the corresponding common part, then fuses the common part and generates the final output, achieving a more refined consistency sample output. We present the overall workflow of Fine-grained self-consistency (FSC) in Figure 3, Including FSC with two plugins, candidates filtering and merge. 11784 input responses … Filter … filtered responses … … merged responses LLM I have generated the following responses to the question {question} Response 0: {response_0} Response 1: {response_1} … Please integrate and generate the final response based on the major consensus of generated responses. Final response FSC prompt sample belongs to the part with higher commonality among all responses. sample belongs to the part with lower commonality among all responses. Candidates Filtering Merge FSC Question LLM cluster Ini7al responses input responses Plug in Plug in Figure 3: Overview of the proposed fine-grained self-consistency (FSC) workflow. 3.1 Fine-Grained Self-Consistency Considering that it is difficult to divide and measure the consistency of the segments in the responses based on prior knowledge, we seek to utilize the comparative and integrative capabilities of LLMs to inherently extract the fine-grained common part and integrate consistency knowledge from different responses. 
As shown in Figure 3, for the multiple input responses, we concatenate them all and construct a prompt with an instruction asking the LLM to extract the major consensus of generated responses, then integrate and generate the final response. In this way, FSC measures commonality at a finer granularity, utilizing, and integrating the consistency knowledge from different responses, alleviating the coarse-grained limitations of the selection based self-consistency. 3.2 Candidates filtering As shown in Figure 3, considering the varying quality of responses generated by LLM, we propose candidates filtering strategy to measure the consistency of generated responses at the sample level, eliminating responses with low consistency to ensure the overall quality of the candidate responses. Specifically, given the similarity function Sim, we can define a sample-level selfconsistency score SCSim(i) for each response i, given by SCSim(i) = 1 N−1 PN j=1,j̸=i Sim(j, i). Here, N represents the number of responses, and we use ROUGE (Lin, 2004) for our similarity function Sim. In the end, we take the Top-k responses according to SCSim as the filtered candidates set, where k is a hyperparameter. 3.3 Merge As shown in Figure 3, to reduce the computation cost of FSC, we propose merging semantically similar candidates to decrease the number of responses to be integrated by the LLM. Furthermore, due to the limitations of the LLM input length, the number of samples inputted to the LLM for consistency extraction is limited. By merging, we can provide the LLM with samples that contain more diverse knowledge, broadening the LLM’s knowledge field of view. specifically, To merge similar responses, we employ the K-Medoids clustering algorithm based on their semantic similarity. We categorize all responses into c clusters, each encompassing a set of similar results. Then we select the centroids of each cluster as representative responses and discard the remaining ones. It ensures the selected response has diverse knowledge and reduces the cost of FSC. 4 Experiment 4.1 Evaluation setup 4.1.1 Benchmarks We evaluate FSC on the following variety of tasks: 11785 Code generation We conduct experiments on three widely used code generation benchmarks, including HumanEval (Chen et al., 2021), HumanEval+ (Liu et al., 2023a) for Python code generation and BIRD-SQL dataset (Li et al., 2023a) for text-to-SQL generation. HumanEval (Chen et al., 2021) is a hand-written Python programming problem, which is further enhanced by HumanEval+ (Liu et al., 2023a) through the addition of more unit tests. BIRD-SQL (Li et al., 2023a) is a much more challenging dataset consisting of text-to-SQL tasks across 37 professional domains, derived from 95 databases with a total size of 33.4 GB. Text summarization We conduct our experiments on two widely used text summarization datasets: DailyMail for short text summarization (Nallapati et al., 2016) and SummScreen for long text summarization (Chen et al., 2022). In the DailyMail dataset, each input is an ∼800 words news article, and each reference output is a humanwritten summary of the article with ∼55 words. In SummScreen, every input is a transcript of a TV show episode with ∼5, 600 words, and each reference output is a ∼100 words human-written recap of the episode. We follow Nallapati et al. (2016) and measure ROUGE 1, ROUGE 2, and ROUGEL2 which measure n-gram overlap with the reference summary, and we also measure BERTScore F13 (Zhang et al., 2019). 
Mathematical reasoning We introduce the widely used GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al.) datasets to verify the generalizability of the proposed method on tasks with answer of fixed form. GSM8K consists of 8,500 grade school math word problems, and MATH consists of 12,500 challenging mathematics problems from high school competitions. 4.1.2 Baselines We compare FSC to the following self-consistency methods: (1) Random selects one answer randomly from multiple responses with temperature > 0; (2) UCS (Jain et al., 2023)4 calculates the overlap of unigrams between candidates and then selects the final answer with highest value; (3) USC (Chen 2We use the implementation of https://github.com/ pltrdy/rouge. 3We use the bert-base-uncased version for evaluation. 4As token probabilities cannot be obtained from GPT-3.5turbo and GPT-4, we implemented UCS based on its unigram version. et al., 2023) utilizes LLMs to choose the most consistent one as the final answer. (4) SC (Wang et al., 2023) is the standard self-consistency decoding with answer extraction, which mitigates noise from individual sampling by employing a voting mechanism across multiple samples. Specifically, random select represents the performance of the LLM itself when the temperature > 0, while UCS and USC are two strong baselines for selection based selfconsistency methods. Since the outputs of code generation and summarization tasks are in freeform, we evaluate SC on mathematical reasoning benchmarks where the final answers can be compared through exact match. 4.1.3 Implementation details We conduct experiments using GPT-3.5-turbo and GPT-4 models5. We set the temperature as 0.8 for both GPT-3.5-turbo (ChatGPT)6 and GPT-4 (Achiam et al., 2023) models to generate 50 initial responses for all benchmarks. For summarization and python code generation, the initial samples are generated with zero-shot prompting, thus the output formats are diverse. For BIRD-SQL, we used the 1-shot chain-of-thought prompt following Li et al. (2023a), which improves the performance. Considering the cost of experiments, we randomly select 1,000 samples from the test splits of DailyMail and SummScreen respectively to form our text summarization benchmarks. We set the temperature of FSC and USC as 0 to ensure the reproducibility of the results. Unless otherwise specified, we set the default number of input responses as 5 for all baselines. All experiments are repeated five times and the average performance is reported. 4.2 Main results 4.2.1 Code generation As shown in Table 2, we present the results on HumanEval, HumanEval+ and BIRD-SQL respectively. Besides the execution accuracy, we follow Li et al. (2023a) to also evaluate the valid efficiency score on BIRD-SQL, which measures the efficiency of the generated SQL queries. We show that FSC outperforms all baselines on execution accuracy by a significant margin across all datasets, while also generating more efficient SQL code. 5we use the "2023-05-15" version of API for both. 6https://chat.openai.com 11786 Model Method HumanEval HumanEval+ BIRD-SQL Accuracy Accuracy Accuracy Efficiency GPT-3.5-turbo Random 69.8 63.2 43.1 43.9 UCS 70.1 ↑0.3 63.2 ↑0.0 43.6 ↑0.5 44.5 ↑0.6 USC 70.5 ↑0.4 66.3 ↑3.1 43.6 ↑0.5 44.6 ↑0.7 FSC 74.5 ↑4.7 68.4 ↑5.2 45.3 ↑2.2 46.0 ↑2.1 GPT-4 Random 82.5 76.4 49.5 50.6 UCS 82.3 ↓0.2 76.7 ↑0.3 50.5 ↑1.0 51.8 ↑1.2 USC 86.1 ↑3.6 80.9 ↑4.5 50.9 ↑1.4 51.6 ↑1.0 FSC 87.1 ↑4.6 82.8 ↑6.4 51.4 ↑1.9 52.2 ↑1.6 Table 2: The results on three code generation benchmarks. 
The improvements are calculated between each methods and Random. The best performance for each dataset are shown in bold. Model Method DailyMail Summscreen Rouge1 Rouge2 RougeL BertScore Rouge1 Rouge2 RougeL BertScore GPT-3.5-turbo Random 37.3 14.1 38.4 60.6 18.3 2.1 16.8 48.8 UCS 36.9 14.0 38.2 60.5 16.9 2.0 16.4 48.3 USC 37.7 14.3 38.7 60.8 19.3 2.3 17.2 49.3 FSC 38.6 14.4 39.0 60.9 20.1 2.1 17.4 49.6 GPT-4 Random 37.5 14.5 38.9 61.0 19.1 2.6 17.3 49.5 UCS 36.9 14.3 38.6 60.9 18.6 2.4 17.0 49.3 USC 37.6 14.6 39.0 61.1 19.3 2.7 17.5 49.6 FSC 38.1 14.7 39.3 61.2 19.5 2.9 17.8 50.0 Table 3: Results on summarization benchmarks. FSC consistently improves over the baselines on summary quality. Dataset UCS USC FSC DailyMail 19.5% 26.5% 54.0% Summscreen 19.3% 31.5% 49.2% Table 4: Comparison of GPT-4 score between different methods on GPT-3.5-turbo. For each test data, we use GPT-4 to score the quality of summaries generated by each method and count the proportion of the highest evaluation values obtained by each method on the entire dataset. Model Method GSM8K MATH GPT-3.5-turbo Random 75.9 35.0 UCS 77.2 35.6 USC 80.6 39.3 SC 82.0 41.9 FSC 81.0 39.5 GPT-4 Random 87.5 50.2 UCS 87.6 50.7 USC 88.1 54.8 SC 88.8 55.5 FSC 91.3 56.0 Table 5: Accuracy on mathematical reasoning benchmarks. FSC performance is comparable to SC. The best and second best performance for each dataset are shown in bold and underline. 4.2.2 Text summarization Table 3 presents the results for summarization benchmarks. In both datasets FSC consistently improves over the baselines across all metrics, which demonstrates that FSC can improve performance in both short and long text summarization tasks simultaneously. Considering that LLMs demonstrate better consistency with humans in evaluation tasks (Liu et al., 2023b; Yuan et al., 2023), we employ GPT-4 as an evaluator to assess the generated summaries, following Liu et al. (2023b). As shown in Table 4, the experimental results indicate that FSC is superior to both UCS and USC. 4.2.3 Mathematical reasoning As shown in Table 5, besides selection based selfconsistency methods, we compare FSC against the standard self-consistency (SC). For SC, we employ a regular expression matching to extract the final answer on GSM8K, and reuse the answer parsing code from Li et al. (2023b) for MATH. Overall, FSC consistently improves over the selection based self-consistency methods UCS and USC, and the performance is generally comparable to SC, which needs answer parsing to perform the voting. surprisingly, FSC outperform SC on GPT-4, which demonstrates that FSC is not simply dependent on 11787 Method N HumanEval HumanEval+ DailyMail Accuracy Accuracy Rouge1 Rouge2 RougeL BertScore UCS 10 70.7 62.5 38.2 14.4 39.2 60.9 USC 10 71.0 65.9 38.5 14.5 39.1 61.1 FSC 5 74.5 68.4 38.6 14.4 39.0 60.9 FSC+Filter 10 75.5 69.2 38.7 14.6 39.3 61.0 Table 6: Results of candidates filtering on HumanEval, HumanEval+ and DailyMail with GPT-3.5-turbo. N represents the number of input responses. The best performance for each dataset is shown in bold. Method N HumanEval HumanEval+ Save DailyMail Save Accuracy Accuracy Rouge1 Rouge2 RougeL BertScore FSC 5 74.5 68.4 38.6 14.4 39.0 60.9 FSC 4 74.3 67.8 20.0% 38.8 14.2 38.8 60.8 20.0% FSC+Merge 5 75.3 68.0 25.4% 38.8 14.0 38.6 60.8 26.5% FSC+Filter+Merge 5 75.6 68.5 40.0% 38.6 14.4 38.9 60.9 53.2% Table 7: Results of merge on HumanEval, HumanEval+ and DailyMail with GPT-3.5-turbo. 
"Save" represents how many candidate responses input into the LLM are saved compared to the default settings of FSC (N=5). We calculate "Save" through relative difference percentage. The best performance for each dataset is shown in bold. statistical measures of final reasoning answers, and its analysis and integration of various reasoning paths are effective. These results suggest that FSC could be further generalized to tasks where answer extraction is feasible for voting. 4.3 Effect of candidates filtering As shown in Table 6, we compare the performance of FSC combined with candidates filtering strategy (denoted as FSC+Filter) with FSC itself and selection based self-consistency baselines. Specifically, FSC+Filter performs candidate filtering on the initial N responses to obtain N 2 filtered responses, and then applies FSC to the filtered responses to get the final output. The results show that candidates filtering consistently improves FSC performance across all test benchmarks, indicates that candidates filtering obtains a higher-quality response candidate set through screening. On the other hand, the performance of FSC+Filter surpasses UCS and USC under the same number of response inputs on all test datasets (except for BertScore on DailyMail), demonstrating the superiority of FSC combined with candidates filtering strategy. 4.4 Effect of merge We present the results of FSC combined with merge strategy in Table 7. The results show that the Merge strategy can significantly reduce the number of FSC input responses (20.0% on HumanEval and HumanEval+; 26.5% on Daily Mail), with minimal performance loss. Compared to saving costs by reducing the number of input responses (N = 4 for FSC), the Merge strategy saves more costs and maintains better performance (except for Rouge2 and RougeL on DailyMail), demonstrating its effectiveness. Furthermore, we combine FSC with both Filter and Merge strategies and achieve the best performance on HumanEval and HumanEval+, saving 40.0% of the cost. Besides, we save 53.2% of the cost on DailyMail with minimal performance drop. The results demonstrate the superiority of the combination of two proposed strategies. 4.5 Discussion 4.5.1 Different number of input responses As shown in Figure 4, we examine the effect of using different numbers of responses (denoted as n) in FSC on GPT-4 model.7 The results show that FSC consistently benefits from more input responses with n ≤8. However, the performance of FSC decreases with n = 16, which could be due to the difficulty in understanding long-context when the prompt includes a larger number of input responses, as well as the length constraint of LLMs. Nevertheless, we believe that using a limited number of input responses (e.g., 5) strikes an ideal balance between task accuracy and computational cost. In such case, FSC reliably enhances performance across all benchmarks. 7Due to budget constraints, we do not conduct the experiment on text summarization benchmark. 11788 0 15 30 45 60 75 90 5 8 3 1 16 HumanEval+ HumanEval BIRD-SQL 49.2 51.7 50.7 51.4 50.9 82.8 87.0 86.7 87.4 84.9 76.2 82.8 83.2 82.5 78.4 (a) Results on code generation. (b) Results on mathematical reasoning. The top numbers are FSC accuracies, and the bottom numbers are the differences to SC accuracies. n 0 15 30 45 60 75 90 5 8 3 1 16 GSM8K MATH 50.4 56.0 57.6 57.1 53.6 89.5 91.3 91.7 90.4 87.4 (-0.1) (+1.0)(+2.5) (+2.6) (+2.2) (+0.1) (+0.3) (+0.5) (+0.2) (-1.8) n Figure 4: FSC results on GPT-4 model with different number of input responses. 
Distribution GPT-3.5-turbo GPT-4 Random UCS USC FSC Random UCS USC FSC 5 / 0 0.0% 0.0% 0.0% 1.1% 0.0% 0.0% 0.0% 2.2% 4 / 1 20.0% 13.9% 13.9% 27.9% 20.0% 23.1% 16.6% 29.1% 3 / 2 40.0% 35.8% 47.1% 62.2% 40.0% 31.0% 51.7% 55.1% 2 / 3 60.0% 68.3% 56.6% 70.0% 60.0% 65.9% 87.2% 82.9% 1 / 4 80.0% 82.1% 86.9% 91.6% 80.0% 73.6% 91.6% 97.2% Table 8: Accuracy on code generation benchmark HumanEval with GPT-3.5-turbo and GPT-4. Please refer to Table 1 for the definition of distribution. We mark values lower than random performance in red and values higher than random in green. The best performance is highlighted in bold. Method HumanEval HumanEval+ Random 19.7 16.9 UCS 20.3 17.2 USC 26.2 21.9 FSC 29.9 24.4 Table 9: Accuracy on HumanEval and HumanEval+ with Mistral-7B-Instruct-v0.2. 4.5.2 Robustness to noise in input responses Table 8 shows the accuracy on the HumanEval benchmark under different distributions. We define noise as the the proportion of erroneous examples in the input responses, (Distribution 5/0 corresponds to the maximum noise). While the performance of UCS and USC lag behind random select when the input noise is high, FSC consistently surpasses random select by a large margin under all distributions, proving that FSC has better robustness. Furthermore, the accuracy of FSC is greater than 0 when the distribution is 5/0, indicating that FSC can still recover the correct answer when all input responses are wrong. This demonstrates that FSC is capable of integrating the correct knowledge from different input responses and eliminating the wrong part to achieve the correct final response. 4.5.3 Generalizability on open-source small model As shown in Table 9, We validate the generalization of FSC on the open-source small model Mistral-7BInstruct-v0.28 in code generation tasks. We set the temperature for baseline sampling to 0.2 and kept the rest of the implementation completely consistent with the main experiment. The experimental results indicate that FSC has the potential to work effectively on smaller models. 4.5.4 Segment-level consensus of FSC To provide additional evidence that FSC can incorporate a higher level of segment-level consensus, we carry out quantitative experiments. For all the generated candidates, we construct a pool of 4grams (as representations of segments), and then 8https://huggingface.co/mistralai/ Mistral-7B-Instruct-v0.2 11789 Win rate FSC vs UCS FSC vs USC DailyMail 79.2% 67.1% Summscreen 87.6% 81.8% Table 10: Comparison of fine-grained consensus obtained by different methods with GPT-4. calculate the overlap between the 4-grams of the final sample and the pool. We compare our method against two key baselines by computing the win rate. As shown in Table 10, the results demonstrate that our method can integrate more fine-grained segments from the candidate set, thereby generating samples of higher quality. 5 Conclusion In this work, we propose Fine-Grained SelfConsistency (FSC), which fully leverages the segment-level consensus knowledge extracted within multiple samples to overcome the coarsegrained limitations faced by selection based selfconsistency methods. To improve performance and minimize costs, we further propose two strategies called candidates filtering and merge. Extensive experiments demonstrate that FSC notably boosts the performance on diverse range of tasks, exhibits superior robustness against noise in input responses, and can be generalized to those tasks where answer extraction is feasible through voting. 
Additional experiments confirm that the proposed candidates filtering and merge strategies can further enhance the performance of FSC while reducing the required computational cost. Limitations Despite the remarkable performance gain on variety of tasks, the current implementation of FSC still suffers from the following limitation: As illustrated in Section 4.5.1, While self-consistency can be applied to any number of samples, the number of samples supported by FSC is bounded by the context length of the underlying LLM. That is to say, FSC would be limited in tasks that require lengthy responses, such as story generation, long text translation, etc. Ethics Statement All of the datasets used in this study were publicly available, and no annotators were employed for our data collection. We confirm that the datasets we used did not contain any harmful content and was consistent with their intended use (research). We have cited the datasets and relevant works used in this study. Acknowledgements This work is supported by the Beijing Natural Science Foundation, China (Nos. 4222037, L181010). We thank the anonymous reviewers for their constructive comments. References Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774. Jacob Austin, Augustus Odena, Maxwell I. Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie J. Cai, Michael Terry, Quoc V. Le, and Charles Sutton. 2021. Program synthesis with large language models. CoRR, abs/2108.07732. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374. Mingda Chen, Zewei Chu, Sam Wiseman, and Kevin Gimpel. 2022. SummScreen: A dataset for abstractive screenplay summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8602–8615, Dublin, Ireland. Association for Computational Linguistics. Xinyun Chen, Renat Aksitov, Uri Alon, Jie Ren, Kefan Xiao, Pengcheng Yin, Sushant Prakash, Charles Sutton, Xuezhi Wang, and Denny Zhou. 2023. Universal self-consistency for large language model generation. CoRR, abs/2311.17311. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168. Tanya Goyal, Junyi Jessy Li, and Greg Durrett. 2022. News summarization and evaluation in the era of GPT-3. CoRR, abs/2209.12356. Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. Sort, 2(4):0–6. 11790 Baizhou Huang, Shuai Lu, Weizhu Chen, Xiaojun Wan, and Nan Duan. 2023. Enhancing large language models in coding through multi-perspective self-consistency. arXiv preprint arXiv:2309.17272. Siddhartha Jain, Xiaofei Ma, Anoop Deoras, and Bing Xiang. 2023. Self-consistency for open-ended generations. CoRR, abs/2307.06857. Jinyang Li, Binyuan Hui, Ge Qu, Binhua Li, Jiaxi Yang, Bowen Li, Bailin Wang, Bowen Qin, Rongyu Cao, Ruiying Geng, et al. 2023a. Can llm already serve as a database interface? 
a big bench for largescale database grounded text-to-sqls. arXiv preprint arXiv:2305.03111. Yiwei Li, Peiwen Yuan, Shaoxiong Feng, Boyuan Pan, Xinglin Wang, Bin Sun, Heda Wang, and Kan Li. 2023b. Escape sky-high cost: Early-stopping selfconsistency for multi-step reasoning. In The Twelfth International Conference on Learning Representations. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Jiawei Liu, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. 2023a. Is your code generated by chatgpt really correct? rigorous evaluation of large language models for code generation. arXiv preprint arXiv:2305.01210. Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023b. G-eval: NLG evaluation using gpt-4 with better human alignment. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 2511–2522, Singapore. Association for Computational Linguistics. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Ça˘glar Gulçehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pages 280–290, Berlin, Germany. Association for Computational Linguistics. Gwenyth Portillo Wightman, Alexandra Delucia, and Mark Dredze. 2023. Strength in numbers: Estimating confidence of large language models by prompt agreement. In Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023), pages 326–362, Toronto, Canada. Association for Computational Linguistics. Mathieu Ravaut, Shafiq Joty, and Nancy Chen. 2023. Unsupervised summarization re-ranking. In Findings of the Association for Computational Linguistics: ACL 2023, pages 8341–8376, Toronto, Canada. Association for Computational Linguistics. Mirac Suzgun, Luke Melas-Kyriazi, and Dan Jurafsky. 2023. Follow the wisdom of the crowd: Effective text generation via minimum Bayes risk decoding. In Findings of the Association for Computational Linguistics: ACL 2023, pages 4265–4293, Toronto, Canada. Association for Computational Linguistics. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V. Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In NeurIPS. Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V Le, Denny Zhou, and Xinyun Chen. 2023. Large language models as optimizers. arXiv preprint arXiv:2309.03409. Ori Yoran, Tomer Wolfson, Ben Bogin, Uri Katz, Daniel Deutch, and Jonathan Berant. 2023. Answering questions by meta-reasoning over multiple chains of thought. arXiv preprint arXiv:2304.13007. Peiwen Yuan, Shaoxiong Feng, Yiwei Li, Xinglin Wang, Boyuan Pan, Heda Wang, and Kan Li. 2023. Batcheval: Towards human-like text evaluation. arXiv preprint arXiv:2401.00437. Murong Yue, Jie Zhao, Min Zhang, Liang Du, and Ziyu Yao. 2023. Large language model cascades with mixture of thoughts representations for cost-efficient reasoning. arXiv preprint arXiv:2310.03094. 
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. arXiv preprint arXiv:1904.09675. Wenqi Zhang, Yongliang Shen, Linjuan Wu, Qiuying Peng, Jun Wang, Yueting Zhuang, and Weiming Lu. 2024. Self-contrast: Better reflection through inconsistent solving perspectives. arXiv preprint arXiv:2401.02009. Chuanyang Zheng, Zhengying Liu, Enze Xie, Zhenguo Li, and Yu Li. 2023. Progressive-hint prompting improves reasoning in large language models. arXiv preprint arXiv:2304.09797. Chunting Zhou, Junxian He, Xuezhe Ma, Taylor BergKirkpatrick, and Graham Neubig. 2022. Prompt consistency for zero-shot task generalization. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 2613–2626, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. 11791 A Appendix A.1 Performance of FSC under different temperatures To verify the generalizability of FSC under different temperature settings, we conduct experiments based on GPT-3.5-turbo under different temperature settings on HumanEval+ dataset. As shown in Table 11, the results show that FSC exhibits consistent generalization under different temperature settings, with more significant performance improvements compared to the baselines when the temperature is higher. We hypothesize that this could be due to the fact that as the temperature increases, the diversity of the samples also increases, thereby enriching the knowledge from the input samples that FSC is able to integrate. Temperature 0.2 0.4 0.6 0.8 1.0 Random 63.5 64.1 64.2 63.2 63.6 UCS 63.2 63.9 64.1 63.2 64.2 USC 64.9 65.7 66.9 66.5 66.8 FSC 66.1 66.9 68.4 68.4 69.6 Table 11: Accuracy on HumanEval+ under different temperatures with GPT-3.5-turbo. A.2 Sampling cost of FSC Under the setting of input sample size N=10, we conduct a statistical analysis of the token cost9 on the HumanEval+ dataset based on GPT-3.5-turbo. As shown in Table 12, the results show that the token cost of FSC+filter+merge is comparable to that of USC, while FSC achieves a significant performance improvement. Method Prompt Completion Price Acc UCS 0 62.5 USC 1207 9 0.53 65.9 FSC+filter+merge 724 128 0.55 69.2 Table 12: Comparison of the cost of USC, UCS and FSC+filter+merge. A.3 Case study of FSC To gain a more intuitive understanding of the working mechanism of FSC, and to conduct a qualitative analysis of the consistency of FSC’s final output, we present the case of FSC on HumanEval benchmark. Figure 5 shows the final output of FSC on 9we convert the token cost into price according to https: //openai.com/pricing and report the average cost for every thousand samples. HumanEval_130 when the input responses are all wrong. Specifically, for all error input responses, FSC incorporates consensus knowledge from each input and eliminates low-quality segments, ultimately recovering the correct solution. Our analysis indicates that FSC is capable of achieving finegrained commonality extraction and obtaining outputs with better consistency compared to selection based self-consistency methods. A.4 Comparision with Minimum Bayes Risk Decoding (MBDR) Suzgun et al. (2023) propose MBDR method, achieving sample selection by calculating the BertScore between the generated samples. As shown in Table 13, we reproduce the MBDR according to the original paper, and compare it with FSC on our code generation benchmark. 
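For reference, the sketch below shows how an MBDR-style selector of the kind described above can be implemented: each candidate is scored by its average BERTScore F1 against the remaining candidates, and the candidate with the highest average is returned (minimum-risk selection with BERTScore as the utility). The bert-score package with its default English model is used here; whether the reproduction used the same model, rescaling, or batching settings is not stated in this excerpt, so treat the snippet as an approximation rather than the exact baseline.

```python
from bert_score import score  # pip install bert-score


def mbdr_select(candidates: list[str], lang: str = "en") -> str:
    """Pick the candidate with the highest mean BERTScore F1 to all others."""
    if len(candidates) < 2:
        return candidates[0]
    cands, refs = [], []
    for i, c in enumerate(candidates):
        for j, r in enumerate(candidates):
            if i != j:          # compare every ordered pair of distinct candidates
                cands.append(c)
                refs.append(r)
    _, _, f1 = score(cands, refs, lang=lang, verbose=False)
    k = len(candidates) - 1     # comparisons per candidate, stored contiguously
    means = [float(f1[i * k:(i + 1) * k].mean()) for i in range(len(candidates))]
    return candidates[max(range(len(candidates)), key=means.__getitem__)]
```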
A.5 Input document for Figure 1 Input: “The wedding of the year in Scotland takes place on Saturday when British No 1 and two-time Grand Slam champion Andy Murray marries Kim Sears, his girlfriend of almost 10 years, in his hometown of Dunblane. Murray and Sears, both aged 27, met when the pair were teenagers during the US Open in 2005. Murray was playing in only his second Grand Slam tournament, while Sears was travelling with her tennis coach father Nigel. The couple got back together in 2010 following a brief split and after having to field constant questions over the years on when he would propose, their engagement was confirmed last November. Andy Murray kisses his new girlfriend Kim Sears in the crowd after winning his first ATP World Tour title in San Jose in February 2006 . Murray pictured walking alongside Sears on the streets of London during the Wimbledon Championships in June 2006 . Murray watches his brother Jamie in action at London‘s Queen’s Club in June 2007 alongside Sears and mother Judy (left) Murray watches his brother in action at Wimbledon in 2007 alongside Sears and Carlos Mier (right), who will be one of Murray‘s three best men . Murray and Sears watching British boxer Amir Khan in action during a title fight at the ExCel Arena in London in February 2008 . Murray and Sears attend the exhibition match held to mark the launch of the new Wimbledon Centre Court roof in May 2009 . Murray and Sears attend a Burberry fashion show alongside Serena Williams (second left) and Sarah Jessica Parker (left) in September 2010 . Murray was the best man 11792 Model Method HumanEval HumanEval+ BIRD-SQL GPT-3.5-turbo MBDR 71.5 65.6 44.1 FSC 74.5 68.4 45.3 GPT-4 MBDR 84.4 78.6 50.8 FSC 87.1 82.8 51.4 Table 13: Comparison of accuracy between FSC and MBDR on code generation tasks. for the wedding of his brother Jamie (right) and wife Alejandra (second right) at Cromlix House in October 2010 . Television viewers are well used to the sight of Sears, pictured here at Wimbledon in 2011, showing her emotions during Murray’s matches . Murray looks dejected as he and Sears wait for transport after the Brit lost the Wimbledon 2012 final to Roger Federer . There were happier moments just weeks later though as Murray celebrates with Sears after beating Federer to win Olympic gold in London . Murray then won his first Grand Slam title at the US Open in New York in September 2012 by beating Novak Djokovic in an epic final . Murray and Sears laugh with television host Jimmy Fallon before an appearance on the show following his US Open victory . Murray and Sears pose for photographers as they arrive for a Burberry fashion show during London Fashion Week in September 2012 . Murray leans over to kiss Sears after becoming the first British man in 77 years to win the men‘s singles title at Wimbledon in July 2013 . Murray and Sears pose with the famous trophy during the Wimbledon Champion’s Dinner at a hotel in Park Lane later that evening . Murray and Sears stand outside Buckingham Palace in October 2013 after the British No 1 was awarded an OBE by Prince William . Murray and Sears outside Dunblane High School after the local hero received the Freedom of Stirling at his former school in April 2014 . Murray and Sears watch golf as the couple stroll by the fairways of Ridgewood Country Club in New Jersey during The Barclays in August 2014 . 
Sears poked fun at the reaction to her foul-mouthed rant during the Australian Open this year by wearing this t-shirt for Murray‘s final.” 11793 def tri(n): """Everyone knows Fibonacci sequence, it was studied deeply by mathematicians In the last couple centuries. However, what people don't know is. Tribonacci sequence. Tribonacci sequence is defined by the recurrence: tri(1) = 3; tri(n) = 1 + n / 2, if n is even; tri(n) = tri(n - 1) + tri(n - 2) + tri(n + 1), if n is odd. For example: tri(2) = 1 + (2 / 2) = 2; tri(4) = 3; tri(3) = tri(2) + tri(1) + tri(4) = 2 + 3 + 3 = 8 You are given a non-negative integer number n, you have to a return a list of the first n + 1 numbers of the Tribonacci sequence. Examples: tri(3) = [1, 3, 2, 8] """ def tri(n): # Define initial values tri_seq = [0, 3, 2] # Calc ulateremaining numbers for i in range(3, n + 1): if i % 2 == 0: tri_seq.append(1 + i / 2) else: tri_seq.append(tri_seq[i - 1] + tri_seq[i - 2] + tri_seq[i - 3]) return tri_seq[:n + 1] def tri(n): # initialize the sequence with the base cases seq = [0, 3] # calculate the rest of the sequence for i in range(2, n + 1): if i % 2 == 0: # if i is even seq.append(1 + i / 2) else: # if i is odd seq.append(seq[i - 1] + seq[i - 2] + 2) return seq def tri(n): if n < 0: return [] # Initialize the array with base cases sequence = [0] * (n + 2) sequence[0] = 1 sequence[1] = 3 for i in range(2, n + 2): if i % 2 == 0: sequence[i] = 1 + i // 2 else: sequence[i] = sequence[i - 1] + sequence[i - 2] + sequence[i + 1] return sequence[:n + 1] def tri(n): # initialize the list with first two elements tribonacci = [1, 3] # calculate the next elements for i in range(2, n + 1): if i % 2 == 0: tribonacci.append(1 + i / 2) else: tribonacci.append(tribonacci[i - 1] + tribonacci[i - 2] + tribonacci[i - 1]) return tribonacci def tri(n): if n == 1: return [3] elif n == 2: return [3, 2] elif n == 3: return [3, 2, 8] else: tri_list = tri(n - 1) if n % 2 == 0: tri_list.append(1 + n / 2) else: tri_list.append(tri_list[-1] + tri_list[-2] + (1 + (n + 1) / 2)) return tri_list def tri(n): if n < 0: return "Invalid input" tribonacci = [1, 3] for i in range(2, n + 1): if i % 2 == 0: tribonacci.append(1 + i / 2) else: tribonacci.append(tribonacci[i - 1] + tribonacci[i - 2] + tribonacci[i + 1] return tribonacci[:n + 1] Response 1 Response 2 Response 3 Response 4 Response 5 FSC output HumanEval_130 Figure 5: The input responses and final output of FSC with GPT-3.5-turbo on HumanEval_130. We mark the correct parts in green and the wrong parts in red for each input response. 11794
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11795–11810 August 11-16, 2024 ©2024 Association for Computational Linguistics More frequent verbs are associated with more diverse valency frames: Efficient principles at the lexicon-grammar interface Siyu Tao Saarland University [email protected] Lucia Donatelli Vrije Universiteit Amsterdam [email protected] Michael Hahn Saarland University [email protected] Abstract A substantial body of work has provided evidence that the lexicons of natural languages are organized to support efficient communication. However, existing work has largely focused on word-internal properties, such as Zipf’s observation that more frequent words are optimized in form to minimize communicative cost. Here, we investigate the hypothesis that efficient lexicon organization is also reflected in valency, or the combinations and orders of additional words and phrases a verb selects for in a sentence. We consider two measures of valency diversity for verbs: valency frame count (VFC), the number of distinct frames associated with a verb, and valency frame entropy (VFE), the average information content of frame selection associated with a verb. Using data from 79 languages, we provide evidence that more frequent verbs are associated with a greater diversity of valency frames, suggesting that the organization of valency is consistent with communicative efficiency principles. We discuss our findings in relation to classical findings such as Zipf’s meaning-frequency law and the principle of least effort, as well as implications for theories of valency and communicative efficiency principles. 1 1 Introduction The idea that functional pressures shape the organization of natural language lexicons has a long tradition, a prominent early example being Zipf’s law of abbreviation: the length of the word is inversely related to the frequency of its use, motivated by the “principle of least effort” (Zipf, 1935, 1949). Further evidence has been provided by a functionalist line of research studying the emergence of the lexicon from the frequency of use (e.g. Bybee, 1998; Hopper and Bybee, 2001) as well as more 1Code for reproducing our results is available under GNU GPL Version 2 at https://github.com/siyutao/ verbal-valency-ud recent work that formalizes such ideas using information theory (e.g. Regier et al., 2015; Wu et al., 2019; Mahowald et al., 2022; Pimentel et al., 2021). Common across this literature is the idea that the lexicon reflects a pressure for efficient language use under the constraints posed by human cognition. Evidence has been adduced from domains such as word lengths (Piantadosi et al., 2011; Pimentel et al., 2023), morphological complexity (Wu et al., 2019), orthographic forms (Mahowald et al., 2018), and the partitioning of semantic space into word meanings (e.g. Regier et al., 2015; Kemp et al., 2018; Zaslavsky et al., 2018). However, a key aspect of the lexicon has gone unaddressed in this literature: words are not used in isolation but systematically integrated within the broader structure of a sentence. Verbs in particular play a key role in determining the syntactic and semantic structure of a sentence, and the study of valency features prominently in theoretical linguistics (e.g. Chomsky et al., 1970; Levin and Rappaport Hovav, 2005). In broad strokes, the valency of a verb indicates the type and number of dependents it takes. 
Certain features of valency are shared across many but not all languages, e.g., a majority of verbs in many languages minimally requires a subject (Chomsky, 1982; McCloskey, 1994); and a transitive category of verbs may take on one or more objects in addition (Bowers, 2002). Typological literature has documented the many ways in which valency differs across verbs within a language, and across languages (e.g. Hartmann et al., 2013). Fig. 1 shows how the sentences equivalent to English The teacher helped the children with the homework are rendered differently in several languages, with dependency relations of verb annotated according to Universal Dependencies (UD, v.2 Nivre et al., 2020) guidelines. We observe both cross-lingual similarities and differences: for example, there is a subject (UD relation nsubj), an ob11795 ject (obj) and an oblique dependent (obl) in both English and Finnish, but all three dependents are marked for case in Finnish (nominative, partitive and inessive, respectively). Other languages use other combinations of relation types to express the same meaning; for instance, the verb has only two dependents in Japanese and Chinese: in Japanese the verb takes homework as obj and in Chinese the sentence must be formulated with another verb as a clausal complement (ccomp). Within a language, different verbs allow different valency frames: specific configurations of morphosyntactic features they appear with. For example, Fig. 2 shows some other valency frames observed with the English verb help (also known as its diathesis alternations). There is general consensus that the syntactic frames can at least in part be predicted by semantics (e.g. Fillmore, 1968; Levin, 1993), but divergent explanations of the findings have been offered (Gropen et al., 1989; Goldberg, 1992). A substantial body of work has also classified verbs within a language based on their diathesis alternations (e.g. Levin, 1993). Here, we investigate whether valency properties of verbs show signatures of efficient organization and test the hypothesis that verbs will systematically differ in the diversity of valency frames they are associated with, so that high-frequency verbs will be associated with a greater diversity of frames. A range of theoretical considerations converge on this prediction. Our hypothesis is related to Zipf’s meaning-frequency law, which states that more frequent words tend to have more meanings and can be derived assuming two other Zipfian laws, the law for word frequencies and the law of meaning distribution (Zipf, 1945; Ferrer-iCancho and Vitevitch, 2018). It has been empirically verified in corpus-based studies for various languages, among others Dutch, English (Baayen and Prado Martin, 2005) and Turkish (Ilgen and Karaoglan, 2007). Psycholinguistic evidence has similarly suggested a correlation between semantic ambiguity and verb frequency – more frequent verbs tend to be more ambiguous semantically (Hoffman et al., 2013). As the syntactic frames of a verb correlate with its semantics, a corollary of the meaning-frequency law is that the more frequency verbs will have more diverse valency frames, i.e., our hypothesis. Like other Zipfian observations of statistical regularities in languages, our hypothesis follows from the principle of least effort. The greater semantic ambiguity associated with the verbs requires more freedom with respect to the syntactic frames to allow speakers to convey nuanced meanings or contextual information more efficiently. 
From a comprehension perspective, this kind of pattern also ensures frames are predictable when verbs are not, reducing spikes in information density. We argue that similar considerations apply from the perspective of language production and learning. We operationalize valency at a lexeme-level, and use two information-theoretic metrics of the diversity of valency frames: (i) valency frame count (VFC) that measures the number of distinct frames associated with a verb; and (ii) valency frame entropy (VFE) that measures the average surprisal of valency frames as conditioned by the verb, determined by the diversity of frames associated with that verb. This allows us to quantitative test our hypothesis regarding the correlation between these metrics and the diversity of valency frames in experiments and show that effects predicted by learnability and efficiency considerations shape valency systems of languages cross-linguistically. 2 Background and Motivation The Notion of Valency The notion of valency, and closely related notions such as argument structure, subcategorization and diathesis alternation, have received a variety of formalizations across formal linguistic theories (e.g. Tesnière, 1959; Chomsky, 1965; Fillmore, 1982; Levin, 1993; Goldberg, 1995). While many theories make a formal distinction between arguments and adjuncts (Chomsky et al., 1970; Ackema, 2015), others treat them in a similar fashion (van Noord and Bouma, 1994). For this study, we abstract away from such theoretical questions, and adopt a broad definition of verb valency within UD, a dependency grammar framework. In doing so, we aim for a broad descriptive basis for cross-linguistic comparison, as UD classifies dependency relations into cross-linguistically meaningful types (cf. Figures 1–2); pragmatically, this also allows applicable operationalization on large-scale cross-linguistic usage data available as UD treebanks. We formalize as valency the set of such relation types of dependents a verb co-occurs with, and a valency frame as a set of UD relations corresponding to dependents of a verb (see § 4.1). Computational Studies of Valency A key line of computational research on valency has cre11796 老師 幫助 孩子們 完成 了 作業 teacher help child-pl complete perf homework ccomp nsubj nsubj aux obj (a) Chinese The teacher helped the children with the homework nsubj obj obl case (b) English Opettaja auttoi lapsia läksyissä teacher help.past child.pl.part homework.pl.inessive case=Nom case=Par case=Ine nsubj obj obl (c) Finnish Der Lehrer hat den Kindern bei den Hausaufgaben geholfen det teacher aux det.pl child.pl prep det.pl homework.pl help.ptcp case=Nom Case=Dat Case=Dat nsubj aux obl obl case (d) German 先生 は 子供たち の 宿題 を 手伝った teacher top child-pl gen homework obj help-past nsubj case obj case (e) Japanese Öğretmen çocuklara ödevlerine yardımcı oldu teacher child.pl.dat homework.pl.3poss.dat helper become-past case=Nom case=Dat case=Dat yardımcı oldu advcl obl obl nsubj (f) Turkish Figure 1: Example sentences in 6 languages showing relevant dependency relations. In each language, the verb expressing “helping” is associated with a different set of UD relation types linking it to its dependents. The witness helped with the investigation nsubj obl case (a) It helped to have a fire extinguisher expl xcomp (b) We were helped by a total stranger nsubj aux obl case (c) Figure 2: Further examples of the verb “help” in English, showing its diathesis alternations. 
ated datasets of the valency frames associated with verbs, such as FrameNet (Baker et al., 1998; Fillmore and Baker, 2015) and VerbNet (KipperSchuler, 2005; Kipper et al., 2006, 2008). Mu et al. (2017) automatically classifies verbs in terms of their valency as listed in such resources to recover classifications from the theoretical literature (Levin, 1993). Cross-linguistically, Baker and Lorenzi (2020); Ellsworth et al. (2021) explores the alignment of frames based on FrameNet. In contrast to their approach, our operationalization of valency frame is morphosyntactic in nature, directly in terms of UD relations. This allows us to match valency frames with large scale usage data across many languages and for the full set of verbs appearing in any given treebank – something that is not available when operationalizing frames using curated lexical resources such as FrameNet. Theories of Communicative and Processing Efficiency One justification for efficient organization of the lexicon comes from learnability considerations. For instance, it has been suggested that infrequent irregular verbs are poorly acquired, leading to morphological irregularity being focused on high-frequency words (e.g. Bybee, 1998; Wu et al., 2019). Similar considerations apply to valency: When learning from a finite sample, a learner should be able to acquire a larger variety of frames for frequent verbs, as more data is available for them. Thus, if learnability affects the organization of valency, one expects that infrequent verbs should be associated with smaller diversity of frames. Another key justification for efficient organization of the lexicon (and language in general) comes from online processing, in both language production and comprehension. On the production side, accessibility, i.e. the ease of retrieval, has been shown to play a key role in shaping language form and structure (Kathryn Bock and Warren, 1985; MacDonald, 2013). To the extent that verbs may be planned in advance of their arguments in sentence production (e.g. Momma and Ferreira, 2019), reducing the diversity of frames associated with a verb should increase the accessibility of any of these frames. Such a pressure should be stronger for the long tail of low-frequency verbs, as producers will be less attuned to their production. Conversely, being compatible with a larger number of frames could make a verb more versatile and therefore encourage its use. 11797 On the comprehension side, we expect consequences for sentence interpretation as well as the distribution of information density. Psycholinguistic evidence shows that argument structure information is used in online comprehension to eliminate implausible interpretations (Boland et al., 1995). The more frequent verbs, being more ambiguous or vague (Hoffman et al., 2013), would require additional information to disambiguate or narrow down the meaning. Additionally, the Uniform Information Density (UID) hypothesis (Fenk and FenkOczlon, 1980; Levy and Jaeger, 2006) posits that language speakers prefer a more even distribution of surprisal values across utterances in order to maximize but not overload the capacity of the communication channel. Less frequent verbs will, on average, have high surprisal; high entropy in the valency frames associated with it would lead to an undesirable spike in overall surprisal. 3 Data and Methodology 3.1 Data We draw on Universal Dependencies (UD) as the source of data. 
It provides a cross-linguistically consistent system for annotating morphosyntactic information within a dependency grammar framework (de Marneffe et al., 2021). We use the UD v2.11 release, and include all languages with at least 10,000 tokens across treebanks. We exclude L2 learner corpora and code-switched corpora. Korean corpora are excluded due to the lack of verb lemmatization, which is necessary for empirically estimating valency. In total, 79 languages out of the 138 languages remained. 3.2 Quantifying Diversity of Frames and Verbs We hypothesize a positive relation between a verb's frequency and the diversity of frames that it occurs with. In order to test this, we propose two measures of the diversity of frames associated with a verb. The simplest metric of the diversity of frames associated with a verb is valency frame count (VFC): the number of distinct frames that a verb occurs with in the corpus. By definition, VFC gives equal weight to more or less frequent frames. As discussed in §2, information-theoretic models of language comprehension such as UID predict an effect of the entropy of the frames, conditioned on the verb. Our second measure formalizes this. Let R be the set of UD relations under consideration. Let F be a random variable ranging over subsets of R – that is, over frames. Let V be a random variable ranging over the verb lemmata v in a language. A dependency treebank defines, for each verb v, a probability distribution P(f|V = v) over different frames proportional to their frequencies of occurrence. We formalize the valency frame entropy (VFE) as H[F|V = v], the entropy of the frame F conditioned on a verb v: −∑_{f⊆R} P(f|V = v) log2 P(f|V = v) (1) The measure we use follows the subcategorization entropy measure as used by Linzen et al. (2013) and Linzen and Jaeger (2015) as a predictor of online processing costs. These two measures are equivalent in information-theoretic terms, but we adapt our measure further to include so-called adjuncts (non-core dependencies in UD) that are not considered part of subcategorization frames in most generative grammars (Chomsky, 1965). Valency frame entropy equals log(VFC) in the special case where all frames that appear with a verb do so at equal frequencies. However, keeping VFC constant, VFE is lowered when the frames appear at more skewed frequencies. 4 Experiment 4.1 Dependency Relations As we take a broad view on the notion of valency, we correspondingly include a broad range of UD relations (Figure 6). Definitions of valency vary across the literature, but conveniently UD classifies the relevant set of relations. We included the UD v2 relations defined by UD as core arguments and non-core dependents. Core arguments, including – among others – subjects and objects, are most clearly associated with argument structure in typical theories of valency and syntax. Non-core dependents are other possible dependents of a clausal predicate, including – among others – oblique nominals and adverbial modifiers. UD further includes nominal dependents – dependents of nouns – which are by definition of no relevance to our study. Finally, UD also has the categories coordination, headless, loose, special, and other; we do not consider any of these to be relevant: for instance, coordination (conj and cc) defines a symmetric relationship rather than a hierarchical dependency structure; and special lacks cross-lingual applicability. We examine correlations estimated from various subsets in Figure 5.
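To make the operationalization concrete, the sketch below shows one way to extract valency frames from a CoNLL-U treebank and compute VFC and the naive plug-in VFE of eq. (1), whose estimation biases are the subject of the next subsection. It uses the third-party conllu package; restricting to the 16 core-argument and non-core-dependent relations follows Section 4.1, while stripping language-specific subtypes from relation labels (e.g. nsubj:pass → nsubj) is an illustrative assumption rather than a detail reported in the paper.

```python
from collections import Counter, defaultdict
from math import log2

from conllu import parse_incr  # pip install conllu

# Core arguments (6) and non-core dependents (10) of UD v2, as in Sec. 4.1.
RELATIONS = {
    "nsubj", "obj", "iobj", "csubj", "ccomp", "xcomp",            # core
    "obl", "vocative", "expl", "dislocated", "advcl", "advmod",   # non-core
    "discourse", "aux", "cop", "mark",
}


def frame_distributions(conllu_path: str):
    """Map each verb lemma to a Counter over its observed valency frames.

    A frame is the frozenset of (subtype-stripped) relations of the verb's
    direct dependents in one sentence."""
    frames = defaultdict(Counter)
    with open(conllu_path, encoding="utf-8") as f:
        for sentence in parse_incr(f):
            deps = defaultdict(set)
            for tok in sentence:
                if isinstance(tok["id"], int):  # skip multiword/empty tokens
                    rel = str(tok["deprel"]).split(":")[0]
                    if rel in RELATIONS:
                        deps[tok["head"]].add(rel)
            for tok in sentence:
                if isinstance(tok["id"], int) and tok["upos"] == "VERB":
                    frames[tok["lemma"]][frozenset(deps[tok["id"]])] += 1
    return frames


def vfc(counter: Counter) -> int:       # valency frame count
    return len(counter)


def vfe(counter: Counter) -> float:     # naive plug-in estimate of eq. (1)
    total = sum(counter.values())
    return -sum((c / total) * log2(c / total) for c in counter.values())
```

Verb frequency can then be read off as the total count per lemma and correlated with the two measures.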
4.2 Controlling for Confounds via Cross-Entropy and Subsampling Estimating count and entropy measures on finite samples is a difficult problem, and popular estimators for such quantities are generally biased (e.g. Paninski, 2003). As our goal here is to test a hypothesized relationship, we aim for estimators that avoid any bias towards false positives in the direction of our hypothesis – that is, estimators that lead to conservative estimates for the testing of our hypothesis. Indeed, testing our hypothesized relationship using naive estimators of VFC and VFE faces a circularity confound: if there are few observations for a verb in a corpus, the number of distinct frames observed with it will necessarily be small, leading to an underestimation bias for entropy and frame count. As precisely these verbs are expected to have low frame entropy and low frame count under our hypotheses, it may lead to spurious correlations. Stated differently, low-frequency verbs might have a low measured diversity of frames simply due to lack of observations. We use two strategies to eliminate this confound. Estimation via Cross-Entropy Our first strategy is to empirically estimate the valency frame entropy using the cross-entropy. That is, for each verb, we randomly split its observations in the corpora into two halves, resulting in two empirical distributions P(F|V = v) and P′(F|V = v). The cross-entropy between them, conditioned on a verb v, is: −∑_{f⊆R} P(f|V = v) log2 P′(f|V = v) (2) We use Laplace smoothing with α = 1 over the full set of frames to ensure well-definedness. How Cross-Entropy Counteracts the Confound Whereas naive estimation creates a bias in the direction of our hypothesis (more frequent verbs exhibit higher valency frame entropy), cross-entropy-based estimation introduces a conservative bias against our hypothesis: when a verb has very few observations, the two halves both yield very noisy estimates of the actual distribution, increasing cross-entropy. In contrast, for verbs with many observations, both halves have sufficient data to closely estimate the distribution, eliminating the overestimation bias affecting low-frequency verbs. In order to understand this formally, we consider two verbs, each with k frames and identical distributions P(F|V = v), but where v1 has N observations, and v2 has 2N observations. In the limit of an infinite corpus, both verbs would have the same values for both VFC and VFE. However, when N is not much larger than k, there is a substantial probability that not all k frames were observed; that is, the diversity of the frames will be underestimated. Such underestimation is more likely to affect the less frequent verb v1; hence, the entropy of the observed distribution will tend to be smaller for v1 than for v2. Underestimating both VFE and VFC more strongly for low-frequency verbs, this phenomenon introduces a circularity confound in the direction of our proposed relationship. The bias reverses under cross-entropy estimation: cross-entropy (2) reflects the true entropy (1) plus D_KL(P||P′). This KL divergence converges to zero as the number of observations increases, and will tend to be larger for v1 than for v2. The bias is thus again stronger for the infrequent verb v1, but now is an overestimation bias, pointing in the direction opposite to our proposed relationship.
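A minimal sketch of this split-half estimator is given below. Two details are assumptions on our part, since the excerpt does not spell them out: the Laplace-smoothed "full set of frames" is taken to be the inventory of frames attested anywhere in the sample, and smoothing is applied only to the half that appears inside the logarithm.

```python
import random
from collections import Counter
from math import log2


def vfe_cross_entropy(observed_frames, frame_inventory, alpha=1.0, seed=0):
    """Split-half cross-entropy estimate of eq. (2) for a single verb.

    observed_frames: one frame (e.g. a frozenset of UD relations) per
        occurrence of the verb; assumes at least two observations.
    frame_inventory: the full set of frames used for Laplace smoothing.
    """
    frames = list(observed_frames)
    random.Random(seed).shuffle(frames)
    half = len(frames) // 2
    p = Counter(frames[:half])      # empirical P from the first half
    q = Counter(frames[half:])      # Laplace-smoothed P' from the second half
    p_total = sum(p.values())
    q_total = sum(q.values()) + alpha * len(frame_inventory)
    return -sum(
        (c / p_total) * log2((q[f] + alpha) / q_total)
        for f, c in p.items()
    )


# Toy usage: a verb observed six times with three distinct frames.
obs = ([frozenset({"nsubj", "obj"})] * 3
       + [frozenset({"nsubj", "obl"})] * 2
       + [frozenset({"nsubj"})])
print(vfe_cross_entropy(obs, frame_inventory=set(obs)))
```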
Cross-entropy estimation controls for the circularity confound in the sense that a systematic pattern in the direction of our hypothesized relationship must reflect a pattern in the underlying true (population-level) VFE values, as the estimation bias points in the opposite direction. Cross-entropy estimation has two limitations: it replaces one bias by another, and it applies only to VFE, not to VFC. Subsampling As a second way of mitigating the circularity confound, we performed subsampling such that (i) for each verb with frequency above the subsampling threshold ∆, a sample of ∆observations is taken, and (ii) verbs with frequency below ∆are excluded from the analysis. Considering again verbs v1, v2 with the same distribution over frames but different counts N, 2N, subsampling caps observations of both verbs at ∆. If both verbs are included (i.e., N ≥∆), they are on equal footing in estimation of VFE and VFC – the estimation bias is the same for v1 and v2. Subsampling is applicable to both VFE and VFC, and eliminates effects of verb frequency on the estimation bias. However, verbs with insufficient observations need to be excluded, reducing the number of verbs for which correlations of these measures with verb frequency can be tested. We determine the subsampling threshold ∆for 11799 0 200 400 600 800 1000 1200 # of verbs 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 Spearman's afr akk grc hbo ara hye bam eus bel bul bxr yue cat zho lzh cop hrv ces dan nld eng myv est fao fin fra glg deu aln got ell heb hin hun isl ind gle ita jpn kaz kpv kmr lat lav lit mlt pcm sme nor chu orv fro fas pol qpm por ron rus san gla srp slk slv spa swe tam tha tur ukr urd uig vie hyw wol sjo 0.5 1.0 1.5 2.0 2.5 3.0 3.5 total # of tokens in UD 1e6 (a) results using subsampling 0 2000 4000 6000 8000 10000 # of verbs 0.65 0.70 0.75 0.80 0.85 0.90 0.95 Spearman's afr akk grc hbo ara hye bam eus bel bul bxr yue cat zho lzh cop hrv ces dan nld eng myv est fao fin fra glg deu aln got ell heb hin hun isl ind gle ita jpn kaz kpv kmr lat lav lit mlt glv gun pcm sme nor chu orv fro fas pol qpm por ron rus san gla srp slk slv spa swe tam tha tur ukr hsb urd uig vie cym hyw wol sjo 0.5 1.0 1.5 2.0 2.5 3.0 3.5 total # of tokens in UD 1e6 (b) results not using subsampling Figure 3: Scatter plots showing languages by their ISO 639 codes with the number of verbs across languages on the x-axis, and Spearman’s rank correlation between VFE and frequency rank of verbs on the y-axis. Color shows the total number of tokens in UD. Results use subsampling in subfigure 3a and full data in subfigure 3b; all use cross-entropies. Note that the y-axes have different ranges in (a) and (b) to maximize visibility of the individual languages. each language by applying a ratio of 0.1 to the maximum frequency of any verb in that language, but limit the absolute value to between a ceiling of 25 and a floor of 10 to avoid wide fluctuations between languages, which would have ranged from 2 in Mbya Guarani to 663 in German. On average for 79 languages, subsampling filters out 91.6% of all verbs (SD = 4.7%). The number of verbs filtered out ranges from Russian (9083 out of 10410 verbs, 87.3%) to Mbya Guarani (108 out of 113 verbs, 95.6%). We include only languages where more than 10 verbs remain after subsampling, excluding a further four languages2. 5 Results We report primarily results with both subsampling and cross-entropy estimation, as these choices lead to the most conservative estimates for testing our hypothesis. 
The effect of subsampling is examined below. We quantify the hypothesized relationships using Spearman’s rank correlation coefficient (Spearman, 1904). The results show robust correlation between frequency and our valency metrics: when using VFE, we observe at least moderate correlations (ρ ≥.40, following Schober et al., 2018) in 66 out of 75 languages and strong to very strong correlations (defined as ρ ≥.70) in 18. The remaining 9 languages show only weak correlations3. The mean Spearman’s ρ is .59, with a standard 2Manx (10), Mbya Guarani (5), Upper Sorbian (10), Welsh (9) 3They are Akkadian (.17), Ancient Hebrew (.27), Bambara (.39), Chinese (.39), Classical Chinese (.39), Japanese (.20), Sanskrit (.38), Turkish (.36), and Urdu (.22). deviation of .15. Using VFC measure results in similar correlations, marginally stronger than VFE on average across the 75 languages (mean ρ = .67, SD = .14), of which 71 show at least moderate correlations, and 31 strong to very strong. We note that our correlation hypothesis takes the form of an overall trend rather than a strict rule. A more frequent verb is not guaranteed nor expected to always have higher diversity of frames. Outliers to this trend among high-frequency verbs illustrate this: in English, verbs like arrive (frequency = 237, VFC = 13, VFE = 3.94) have significantly lower VFC and VFE values than others with similar frequency, e.g. call (frequency = 238, VFC = 18, VFE = 5.22) and work (frequency = 221, VFC = 18, VFE = 5.23). Fig. 3a shows how the correlation coefficients vary across the languages and in relation to the sample sizes, using VFE. Larger sample sizes, either measured by the number of verb lexemes or by the overall UD corpora size, reduce cross-lingual variance by allowing for better estimations of the entropy and count4 measures. Effects of subsampling Our subsampling procedure as described in §3.2 eliminates the circularity confound by fixing the sample size, albeit at the expense of statistical power. Results without subsampling validate our cautious approach, especially for VFC: the mean ρ between VFC and frequency rank is a staggering 0.996 (SD=.004), i.e., almost perfect correlations for all languages. Performing subsampling but then including verbs below the cutoff results in similar numbers (mean ρ =.996, 4A version of the plot using the VFC measure, which shows a similar pattern, is shown in Appendix, Fig. 7a. 11800 0 250 500 750 frequency rank 0 1 2 3 4 5 frame entropy Chinese 0 250 500 750 1000 frequency rank 0 1 2 3 4 5 6 frame entropy English 0 250 500 750 frequency rank 0 1 2 3 4 5 6 frame entropy Finnish 0 250 500 750 1000 frequency rank 2 3 4 5 6 frame entropy German 0 200 400 600 800 frequency rank 0 1 2 3 4 5 frame entropy Japanese 0 250 500 750 1000 frequency rank 0 1 2 3 4 5 6 frame entropy Turkish 0 5 10 15 20 frame count 0 5 10 15 20 25 frame count 0 5 10 15 20 25 frame count 5 10 15 20 25 frame count 0 5 10 15 20 frame count 0 5 10 15 20 25 frame count frame entropy frame count Figure 4: Scatter plots showing verbs with their frequency rank (up to 1000) on the x-axis, VFE and VFC on left and right y-axes. Results use subsampling, the vertical line indicates the subsampling threshold, i.e. only verbs to the left of the line are used in calculating the correlation coefficient in Fig. 3; entropies are estimated as cross-entropies. See Appendix, Fig. 8 for results for all languages, Fig. 9 for results without subsampling. SD = .073 for VFC). 
These results are arguably a reflection of the circularity confound, eliminated using subsampling. A visual confirmation of this effect is possible when we plot individual verb lexemes by their frequency ranks on the x-axis and VFE and VFC on y-axes. As seen in Fig. 4, subsampling introduces a flattening effect to the VFE and VFC metrics of verbs with frequency above the subsampling threshold ∆(i.e., verbs to the left of the dotted lines), such that the plot density makes the correlations less visually clear (which can nevertheless be statistically confirmed); in contrast, the clean correlations for the excluded verbs with frequency below ∆(i.e., verbs to the right) reflect how circularity confound would have clouded our analysis without subsampling. For the VFE measure, cross-entropy estimation alone reverses the circularity confound, even without subsampling (§ 3.2). The stronger correlation (mean ρ =.849, SD = .073) we observe without subsampling helps to confirm that our hypothesis apply to less frequent verbs that were elsewhere excluded. Fig 3b shows cross-linguistic variation in the correlation coefficient (VFE) without subsampling. Compared to the subsampled version (Fig 3a), we also observe a greater correlation between the sample size and correlation strength, as the larger number of less frequent verbs for languages with a larger sample size would mean a stronger confound. 5 10 15 core (6) non-core (10) Subset Size 0.0 0.1 0.2 0.3 0.4 0.5 0.6 Correlation metric frame entropy frame count Figure 5: Correlations between frequency rank and VFE / VFC, for the randomly sampled subsets of relations, and core-only and non-core-only relations. Results use subsampling. Impact of UD Relations We now investigate whether the selection of the UD relations impact the results, i.e., whether certain UD relations drive 11801 0 1 UD relation 0.14 0.16 0.18 0.20 0.22 Spearman's nsubj obl advmod xcomp mark advcl ccomp aux discourse obj iobj dislocated csubj vocative expl cop Figure 6: Effects of the 16 relation labels on VFE correlation strength. See Fig. 10 for a version of the plot using VFC. Solid lines indicate core relations; dashed ones indicate non-core relations. the observed correlations more than others. We randomly sampled subsets of sizes 5, 10, 15 from the full set of 16 UD relations, and compared their correlation results with those of a subset including only the 6 core UD relations and of another including only the 10 non-core UD relation. Across the random subsets, larger sets lead to stronger correlations (Figure 5). These results suggest that the observed correlation reflects an across-the-board statistical pattern across a larger set of relation types, and not an artifact of any particular relation type. Next, in order to compare the effects of the specific relations, we fitted a separate linear mixedeffects model for each relation, predicting the regression coefficient from the presence or absence of that relation in a random subset, with random adjustments for each language. We fitted the model on the set of 1000 randomly sampled subsets of size 5. The models are defined formally in Appendix A. Fig. 6 shows each relation as a linear function with the fixed effect coefficient as the slope and the fixed effect intercept. The value at UD relation = 1 is therefore the predicted correlation coefficient when the relation is present. 
Note that a negative slope in the figure only indicates that including the relation instead of other relations on average has a negative effect on correlation strength, i.e., it has a less positive effect than average on correlation strength. Relations differ in their impact. The presence of nsubj and obl have the strongest positive effect on the correlation strength. The latter is expected, as obl encompasses a variety of features often part of frame encoding, including case markings on adjuncts, passive verb use, and dative alternation. The strong positive effect of nsubj is more surprising, but the variety of non-predicative uses of verbs, which often involve the absence of or different case markings on nsubj, may provide an explanation. There is no evidence that core and non-core relations have systematically different effects based on this metric. 6 Discussion Results across 79 languages provide strong support to our hypothesis of a positive correlation between a verb’s frequency and the diversity of its valency frame. By controlling for the circularity confound, we are able to determine that this correlation stands even after we exclude the effect of the more frequent verbs having more opportunities to appear. The cross-linguistic coverage further confirms that the results indicate a general trend and are not specific to individual languages. While the correlational nature of our findings precludes us from drawing definitive conclusions as to which causal pathway is behind the observed regularities, the results match predictions made by efficiency-based theories and thus corroborate them. As we have detailed in §2, considerations from learnability in language acquisition, online production and comprehension converge to predict the correlation between verb frequency and valency frame diversity and provide possible casual explanations for them. In particular, our results complement a number of existing studies focusing a correlation between word frequency and number of meanings (as a measure of semantic ambiguity) (Hoffman et al., 2013) but test our hypothesis independently on the morphosyntax; seen from a processing perspective, the valency frames may serve as additional morphosyntactic disambiguation for verbs with multiple or broad meanings, suggesting a close interaction between lexical semantics and morphosyntax. In this way, our study extends previous work on natural language lexicons and shows that they are optimized for efficiency not just in word-internal proprieties but also in their interface with grammar. Additionally, our findings show that different syntactic relations differ in their impact on the correlation, but do not support a clear-cut distinction between core and non-core dependents, otherwise understood as arguments and adjuncts. The quantitative trend was supported both by core and 11802 non-core dependents, and larger sets of relations strengthened the observed correlation. This is evidence in support of a more graded distinction between different verb dependencies rather than a binary one. While the predicted pattern holds across languages, we observe variation in correlation strength between languages. Understanding whether these differences reflect typologically meaningful variation is an interesting task for future work. One possibility is that the UD annotation scheme does not sufficiently account for cross-linguistic differences in argument encoding: e.g. 
in Chinese, many outlier verbs that have high frequency but low VFE are coverbs, which serve semantic functions similar to prepositions in English while being syntactically verbs (e.g., 自 'from', 隨 'follow / with'). They have narrow semantic functions and are associated with a relatively small number of valency frames compared to similarly frequent verbs. This brings into question the strictness of word category boundaries, and a fuller picture may need to better situate the verb category within the overall lexicon.
7 Conclusion
Across 79 languages, we have provided evidence for a cross-linguistic quantitative trend in the organization of valency: more frequent verbs are associated with a larger diversity of valency frames, whether estimated by count-based or entropy-based metrics. Crucially, we showed that this pattern is not driven by differences in the number of observations between high- and low-frequency verbs, and persists even when equalizing the number of observations available for each verb. Extending a recent line of research studying the lexicon's efficiency for human communication and language use, these results suggest that such considerations apply not just to word-internal structure, but also to the way words are integrated within the sentence.
Limitations
In §2, we derived our prediction from efficiency considerations in both language production and language comprehension. However, our methods did not allow us to determine which of these pressures are ultimately responsible for the observed pattern. Indeed, it is an ongoing debate to what extent apparently efficient properties of language reflect optimization for production or comprehension. It is possible that, in the context of valency organization, these perspectives will make distinct predictions. For instance, theories based on comprehension might induce asymmetries between dependents before and after the head. Teasing these apart in order to determine the contributions of production and comprehension is an interesting problem, beyond our scope here. A second limitation is that, even though we considered data from 79 languages, we are not able to make truly universal claims. Some language families, such as the Indo-European languages, are substantially over-represented in available corpus data, whereas languages from some parts of the world, in particular Africa and indigenous languages of the Americas, are under-represented, both in the number of languages and language families and in the size of the corpora. A third limitation is that corpus data is finite and that, in particular, the circularity confound forced us to introduce biases into the estimation in order to eliminate it. While our results unambiguously show that the predicted correlation exists, a more precise estimation will be affected by these factors. Fourth, we study a very simple operationalization of valency in terms of UD relations, far simpler than the sophisticated representations typically used in formal linguistic theories. This is due to the fact that such sophisticated representations are not currently available at scale and across languages. While we did verify that the pattern held across languages, we opted for cross-linguistic applicability over language-specific analysis.
Acknowledgments
ST would like to thank Annemarie Verkerk for her help and patient guidance in getting this project off the ground. We also thank the anonymous reviewers and action editor for their helpful feedback and suggestions for our paper.
References
Peter Ackema. 2015. Arguments and Adjuncts, volume 1 of Handbücher zur Sprach- und Kommunikationswissenschaft / Handbooks of Linguistics and Communication Science (HSK), pages 246–273. De Gruyter Mouton, Germany.
R. Harald Baayen and Fermin Moscoso Del Prado Martin. 2005. Semantic Density and Past-Tense Formation in Three Germanic Languages. Language, 81(3):666–698.
Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet Project. In 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 1, pages 86–90, Montreal, Quebec, Canada. Association for Computational Linguistics.
Collin F. Baker and Arthur Lorenzi. 2020. Exploring Crosslinguistic Frame Alignment. In Proceedings of the International FrameNet Workshop 2020: Towards a Global, Multilingual FrameNet, pages 77–84, Marseille, France. European Language Resources Association.
Julie E Boland, Michael K Tanenhaus, Susan M Garnsey, and Greg N Carlson. 1995. Verb argument structure in parsing and interpretation: Evidence from wh-questions. Journal of Memory and Language, 34(6):774–806.
John Bowers. 2002. Transitivity. Linguistic Inquiry, 33(2):183–224.
Joan Bybee. 1998. The emergent lexicon. In Chicago Linguistic Society, volume 34, pages 421–435.
Noam Chomsky. 1965. Aspects of the Theory of Syntax. The MIT Press, Cambridge, MA.
Noam Chomsky. 1982. Some concepts and consequences of the theory of government and binding. MIT Press.
Noam Chomsky, Roderick A Jacobs, and Peter S Rosenbaum. 1970. Remarks on nominalization. 1970, 184:221.
Marie-Catherine de Marneffe, Christopher D. Manning, Joakim Nivre, and Daniel Zeman. 2021. Universal Dependencies. Computational Linguistics, 47(2):255–308.
Michael Ellsworth, Collin Baker, and Miriam R. L. Petruck. 2021. FrameNet and Typology. In Proceedings of the Third Workshop on Computational Typology and Multilingual NLP, pages 61–66, Online. Association for Computational Linguistics.
August Fenk and Gertraud Fenk-Oczlon. 1980. Konstanz im Kurzzeitgedächtnis - Konstanz im sprachlichen Informationsfluß? Zeitschrift für experimentelle und angewandte Psychologie, 27:400–414.
Ramon Ferrer-i-Cancho and Michael S. Vitevitch. 2018. The origins of Zipf's meaning-frequency law. Journal of the Association for Information Science and Technology, 69(11):1369–1379.
Charles J. Fillmore. 1968. The Case for Case. In Emmon Bach and Robert T. Harms, editors, Universals in Linguistic Theory, pages 21–119. Rinehart and Winston, New York.
Charles J. Fillmore. 1982. Frame semantics. In Linguistic Society of Korea, editor, Linguistics in the Morning Calm, pages 111–137. Hanshin Publishing Company, Seoul, Korea.
Charles J. Fillmore and Collin Baker. 2015. A Frames Approach to Semantic Analysis. In Bernd Heine and Heiko Narrog, editors, The Oxford Handbook of Linguistic Analysis. Oxford University Press.
Adele E. Goldberg. 1992. The inherent semantics of argument structure: The case of the English ditransitive construction. 3(1):37–74.
Adele E. Goldberg. 1995. Constructions: A Construction Grammar Approach to Argument Structure. Cognitive Theory of Language and Culture Series. University of Chicago Press, Chicago, IL.
Jess Gropen, Steven Pinker, Michelle Hollander, Richard Goldberg, and Ronald Wilson. 1989. The learnability and acquisition of the dative alternation in English. Language, pages 203–257.
Iren Hartmann, Martin Haspelmath, and Bradley Taylor. 2013. The Valency Patterns Leipzig online database.
Max Planck Institute for Evolutionary Anthropology.
Paul Hoffman, Matthew A Lambon Ralph, and Timothy T Rogers. 2013. Semantic diversity: A measure of semantic ambiguity based on variability in the contextual usage of words. Behavior Research Methods, 45:718–730.
Paul J. Hopper and Joan L. Bybee. 2001. Frequency and the Emergence of Linguistic Structure. Typological Studies in Language. John Benjamins Publishing Company, Amsterdam.
Bahar Ilgen and Bahar Karaoglan. 2007. Investigation of Zipf's 'law-of-meaning' on Turkish corpora. In 2007 22nd International Symposium on Computer and Information Sciences, pages 1–6.
J. Kathryn Bock and Richard K. Warren. 1985. Conceptual accessibility and syntactic structure in sentence formulation. Cognition, 21(1):47–67.
Charles Kemp, Yang Xu, and Terry Regier. 2018. Semantic typology and efficient communication. Annual Review of Linguistics, 4(1):109–128.
Karin Kipper, Anna Korhonen, Neville Ryant, and Martha Palmer. 2006. Extending VerbNet with Novel Verb Classes. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC'06), Genoa, Italy. European Language Resources Association (ELRA).
Karin Kipper, Anna Korhonen, Neville Ryant, and Martha Palmer. 2008. A large-scale classification of English verbs. Language Resources and Evaluation, 42(1):21–40.
Karin Kipper-Schuler. 2005. VerbNet: A Broad-Coverage, Comprehensive Verb Lexicon. Ph.D. thesis, University of Pennsylvania, USA.
Beth Levin. 1993. English Verb Classes and Alternations: A Preliminary Investigation. Univ. of Chicago Press, Chicago, Ill.
Beth Levin and Malka Rappaport Hovav. 2005. Argument Realization. Research Surveys in Linguistics. Cambridge University Press, Cambridge.
Roger Levy and T. Florian Jaeger. 2006. Speakers optimize information density through syntactic reduction. In Proceedings of the 19th International Conference on Neural Information Processing Systems, NIPS'06, pages 849–856, Cambridge, MA, USA. MIT Press.
Tal Linzen and T Florian Jaeger. 2015. Uncertainty and expectation in sentence processing: Evidence from subcategorization distributions. Cognitive Science, 40(6):1382–1411.
Tal Linzen, Alec Marantz, and Liina Pylkkänen. 2013. Syntactic context effects in visual word recognition: An MEG study. The Mental Lexicon, 8(2):117–139.
Maryellen MacDonald. 2013. How language production shapes language form and comprehension. Frontiers in Psychology, 4.
Kyle Mahowald, Isabelle Dautriche, Mika Braginsky, and Ted Gibson. 2022. Efficient communication and the organization of the lexicon.
Kyle Mahowald, Isabelle Dautriche, Edward Gibson, and Steven T Piantadosi. 2018. Word forms are structured for efficient use. Cognitive Science, 42(8):3116–3134.
James McCloskey. 1994. Subjects and subject positions in Irish. Linguistics Research Center, Cowell College, UCSC.
Shota Momma and Victor S Ferreira. 2019. Beyond linear order: The role of argument structure in speaking. Cognitive Psychology, 114:101228.
Jesse Mu, Joshua K. Hartshorne, and Timothy O'Donnell. 2017. Evaluating hierarchies of verb argument structure with hierarchical clustering. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 986–991. Association for Computational Linguistics.
Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Jan Hajič, Christopher D. Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020.
Universal Dependencies v2: An Evergrowing Multilingual Treebank Collection. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4034–4043, Marseille, France. European Language Resources Association.
Liam Paninski. 2003. Estimation of entropy and mutual information. Neural Computation, 15(6):1191–1253.
Steven T. Piantadosi, Harry Tily, and Edward Gibson. 2011. Word lengths are optimized for efficient communication. Proceedings of the National Academy of Sciences, 108(9):3526–3529.
Tiago Pimentel, Clara Meister, Ethan Gotlieb Wilcox, Kyle Mahowald, and Ryan Cotterell. 2023. Revisiting the optimality of word lengths. arXiv preprint arXiv:2312.03897.
Tiago Pimentel, Irene Nikkarinen, Kyle Mahowald, Ryan Cotterell, and Damián E. Blasi. 2021. How (non-)optimal is the lexicon? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 4426–4438. Association for Computational Linguistics.
Terry Regier, Charles Kemp, and Paul Kay. 2015. Word meanings across languages support efficient communication. The Handbook of Language Emergence, pages 237–263.
Patrick Schober, Christa Boer, and Lothar A. Schwarte. 2018. Correlation Coefficients: Appropriate Use and Interpretation. Anesthesia & Analgesia, 126(5):1763.
C. Spearman. 1904. The Proof and Measurement of Association between Two Things. The American Journal of Psychology, 15(1):72–101.
Lucien Tesnière. 1959. Éléments de syntaxe structurale. C. Klincksieck, Paris.
Gertjan van Noord and Gosse Bouma. 1994. Adjuncts and the processing of lexical rules. In COLING 1994 Volume 1: The 15th International Conference on Computational Linguistics.
Shijie Wu, Ryan Cotterell, and Timothy O'Donnell. 2019. Morphological Irregularity Correlates with Frequency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5117–5126, Florence, Italy. Association for Computational Linguistics.
Noga Zaslavsky, Charles Kemp, Terry Regier, and Naftali Tishby. 2018. Efficient compression in color naming and its evolution. Proceedings of the National Academy of Sciences, 115(31):7937–7942.
George Kingsley Zipf. 1935. The Psycho-Biology of Language: An Introduction to Dynamic Philology. Houghton Mifflin, Oxford, England.
George Kingsley Zipf. 1945. The Meaning-Frequency Relationship of Words. The Journal of General Psychology.
George Kingsley Zipf. 1949. Human Behavior and the Principle of Least Effort. Addison-Wesley Press, Oxford, England.
A Linear Mixed-Effect Model
Formally, for the i-th relation across K languages, we fit a linear mixed-effect model with N different randomly subsampled subsets of relations, y = α^(i) + β^(i) X + Z u + ε, where y ∈ [−1, 1]^N is the outcome variable, i.e., the strength of the rank correlation between verb frequency and valency frame entropy; X ∈ {0, 1}^N indicates, for each of the N subsets, whether the i-th relation is in that subset; α^(i) is the fixed-effect intercept; β^(i) is the fixed-effect coefficient; Z is an N × K design matrix of the random effect; u is a K × 1 vector of the random effect of language; and ε is an N × 1 column of residuals.
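As a concrete illustration, the following is a minimal sketch of how one such per-relation model could be fit with statsmodels in Python; the synthetic data, column names, and effect sizes are purely illustrative assumptions and do not reproduce the authors' actual pipeline.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per randomly sampled subset of relations:
#   rho          = Spearman correlation obtained for that subset (outcome y)
#   has_relation = 1 if the i-th relation (e.g., nsubj) is in the subset (predictor X)
#   language     = grouping factor for the random intercept (u)
rng = np.random.default_rng(0)
n_rows, n_langs = 1000, 20
df = pd.DataFrame({
    "language": rng.integers(0, n_langs, n_rows),
    "has_relation": rng.integers(0, 2, n_rows),
})
df["rho"] = (0.30 + 0.05 * df["has_relation"]        # fixed effect of the relation
             + 0.02 * (df["language"] / n_langs)     # language-level variation
             + rng.normal(0.0, 0.05, n_rows))        # residual noise

# y = alpha_i + beta_i * X + Z u + eps, with a random intercept per language.
model = smf.mixedlm("rho ~ has_relation", data=df, groups=df["language"])
fit = model.fit()
print(fit.params["has_relation"])  # fixed-effect slope beta_i for this relation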
B Additional Figures
Figure 7: Scatter plots showing the relationship between Spearman's rank correlation (between valency frame count and frequency rank of verbs) and the number of verbs across languages. Subfigure 7a uses subsampling; subfigure 7b uses the full data; entropies are estimated as cross-entropies. Color shows the total number of tokens for each language in the UD.
Figure 8: Scatter plots showing the relationship between valency frame entropy or frame count and frequency rank of verbs. Results use subsampling, with the vertical line showing the subsampling threshold, i.e., only verbs to the left of the line are used in calculating the correlation coefficient in Fig. 1; entropies are estimated as cross-entropies.
Figure 9: Scatter plots showing the relationship between valency frame entropy or frame count and frequency rank of verbs. Results do not use subsampling; entropies are estimated as cross-entropies.
Figure 10: Effects of the inclusion of the 16 relation labels on Spearman's rank correlation coefficient between valency frame count and frequency rank, using the VFC metric.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11811–11822, August 11-16, 2024 ©2024 Association for Computational Linguistics
Quantifying Generalizations: Exploring the Divide Between Human and LLMs' Sensitivity to Quantification
Claudia Collacciani, Giulia Rambelli and Marianna Bolognesi
University of Bologna, Italy
{claudia.collacciani2, giulia.rambelli4, m.bolognesi}@unibo.it
Abstract
Generics are expressions used to communicate abstractions about categories. While conveying general truths (e.g., Birds fly), generics have the interesting property of admitting exceptions (e.g., penguins do not fly). Statements of this type help us organize our knowledge of the world, and form the basis of how we express it (Hampton, 2012; Leslie, 2014). This study investigates how Large Language Models (LLMs) interpret generics, drawing upon psycholinguistic experimental methodologies. Understanding how LLMs interpret generic statements serves not only as a measure of their ability to abstract but also arguably plays a role in their encoding of stereotypes. Given that generics interpretation necessitates a comparison with explicitly quantified sentences, we explored i.) whether LLMs can correctly associate a quantifier with the generic structure, and ii.) whether the presence of a generic sentence as context influences the outcomes of quantifiers. We evaluated LLMs using both Surprisal distributions and prompting techniques. The findings indicate that models do not exhibit a strong sensitivity to quantification. Nevertheless, they seem to encode a meaning linked with the generic structure, which leads them to adjust their answers accordingly when a generalization is provided as context.
1 Introduction
Generic generalizations, or simply generics, are statements such as Birds fly, Dogs are mammals, or Clocks are round, which convey information about categories. They are a powerful way through which language allows us to communicate and learn abstract knowledge that extends beyond the present context and direct experience. We use generic sentences to express our knowledge about the world, including stereotypes or prejudices (e.g., Men are better at math than women). Generics are fundamental to human cognition because they help us conceptualize the properties we associate with different categories, organizing our experience of the world (Chatzigoga, 2019). The most notable feature of generics is that they allow for exceptions (Krifka et al., 1995). For example, Birds fly is considered true by speakers even if there are birds that cannot fly (e.g., penguins): in this case, therefore, the corresponding universal statement (All birds fly) is false. Different generalizations tolerate exceptions to varying degrees, according to their different semantic content: Dogs are mammals requires for its truth that all dogs be mammals; Ducks lay eggs is judged true even if only mature female ducks lay eggs, while Mosquitoes carry malaria refers to an even smaller minority (about 1 percent of mosquitoes carry malaria). There are generics that might be better paraphrased with all, others with most, and others with some; however, unlike quantified statements, they do not explicitly convey information about how many category members possess the predicated property. Given this property, the meaning of generics can be considered "underspecified": humans' correct interpretation is derived through world knowledge and pragmatic abilities (Tessler and Goodman, 2019).
The main questions that cognitive and psycholinguistic studies conducted on generics seek to answer are whether generics are a default mechanism, whether there exists a generic bias, and what is the relationship between genericity and prevalence, i.e., to what proportion of category members the property predicated by the generic applies (Cimpian et al., 2010; Leslie et al., 2011; Khemlani et al., 2012; Prasada et al., 2013, among others). These studies investigate the nature of generalizations by contrasting generic and overtly quantified sentences (Chatzigoga, 2019); in this sense, quantifiers are used to make explicit the underspecified meaning of generics. The present paper investigates the interpretation of generalizations in different Large Language Models (LLMs). Since psycholinguistic experiments conducted on humans involve the comparison with quantified expressions, we also used quantifiers as a means of unraveling generics comprehension, to directly compare humans' and models' interpretations. We aim to explore the capability of LLMs to interpret generalizations that differ in semantic content as humans do. As we mentioned, this ability in people is closely related to world knowledge, which allows us to interpret 'underspecified' and implicitly quantified sentences by making use of our prior information. For this reason, investigating what is encoded in models with regard to the generic form and its relation to quantification is crucial to understanding whether they effectively grasp the level of inclusiveness of a conceptual category, based on the context in which it is used, and whether they can leverage the power of quantifiers to replicate human-like distinctions, thereby enhancing their capacity to comprehend and interpret natural language accurately. The capability of LLMs to interpret generic sentences is not only an index of their capacity for abstraction but is also arguably involved in their encoding of stereotypes, since generics are one of the most powerful ways through which language conveys them (Beukeboom and Burgers, 2019). In what follows, we present two experiments that try to answer our research questions (RQs):
RQ1: Are LLMs capable of interpreting generic sentences according to their semantic content? Generalizations can have different implicit quantificational values depending on their semantic content. Humans are able to derive their correct meaning thanks to their world knowledge. In our first experiment (cf. §4), we investigate whether LLMs are able to do the same through two different methodologies.
RQ2: Do LLMs have a linguistic default interpretation associated with the generic form? People seem to have a default interpretation associated with the form of generics (Cimpian et al., 2010). In our second experiment (cf. §5), we conduct an exploratory analysis of how models interpret generalizations aside from their content, i.e., whether they seem to have encoded linguistic knowledge associated with the generic form.
2 Related works
Most of the NLP literature dealing with genericity in language has focused on the building of resources geared towards distinguishing generic expressions that refer to whole categories from their non-generic counterparts that refer to specific exemplars (Friedrich et al., 2015; Govindarajan et al., 2019; Uryupina et al., 2020; Collacciani et al., 2024, among others).
More recently, the usefulness of generic sentences as a resource to retrieve common sense knowledge, exploitable to boost performance in various NLP applications, has been proposed and demonstrated by Bhakthavatsalam et al. (2020) and Nguyen et al. (2023). However, to the best of our knowledge, there are no studies investigating the interpretation of linguistic generalizations by LLMs, except for the recent works by Ralethe and Buys (2022), which addresses the generic overgeneralization effect, and Collacciani and Rambelli (2023), which investigates generics interpretation, building on psycholinguistic experimental designs. Both works, however, only focus on Masked Language Models such as BERT and RoBERTa. We will use quantifiers to investigate generics comprehension, placing them at the beginning of bare generic sentences to explicitly specify their quantificational value. The most relevant studies that have evaluated model predictions following quantifiers are Kalouli et al. (2022), which focuses on logical quantifiers such as all, every, and some, and Michaelov and Bergen (2023) and Gupta (2023), which focus on few- and most-type quantifiers; the other few works involving quantifiers focus on predicting the quantifier itself (Pezzelle et al., 2018; Talmor et al., 2020). The present work will contrast LLMs' predictions on generic sentences and sentences quantified by no, few, some, most, and all, investigating which quantifiers seem to best approximate the meaning of the generic form. Therefore, our work aims not only to understand LLMs' generics interpretation but also to contribute to the exploration of LLMs' knowledge of quantifiers, adding the systematic comparison with generic sentences as a novel element.
3 Materials and Methods
Dataset For this study, we created a dataset in which each generic sentence is paired with the correct quantifier, i.e., the quantifier that humans would prefer to make explicit the implicit quantification value of the generic sentence (the dataset is available at: https://osf.io/ahspu/?view_only=1f789e020b7346338c53b684943dc9f1). From now on, we will refer to this quantifier as the Human Quantifier, while we will use the label LLM Quantifier to indicate each quantifier when paired with the sentences for the LLMs evaluation. To assemble our dataset, we employed sentences from different existing resources. In the first place, we looked at the Herbelot and Vecchi (2016) dataset, consisting of concept-feature pairs from McRae et al. (2005), such as airplane has-engines or ant is-black, labeled by native speakers through quantifiers. For each pair in the norms, annotators were asked to provide a label expressing how many members of the category possess the property in question, choosing among the natural language quantifiers no, few, some, most, all. We selected only those pairs on which all three annotators agreed on the same quantifier. From this dataset, we sampled 500 sentences annotated with some and all, plus 97 sentences annotated with most. Sentences annotated by humans with some, most, and all are those that can be considered as true generalizations, in their generic form. However, in order to better understand whether they are correctly interpreted by LLMs, we decided to also add to the dataset sentences quantifiable with few and no, which are characterized by implausible or impossible category-property pairs.
In this case, the effect of the quantifier is to reverse the truth value of the sentence (from implausible to plausible and from impossible to possible): because of this feature, these sentences will be useful as a touchstone to evaluate the others. First, we included a sample of 500 concept-property pairs extracted from the COMPS dataset (Misra et al., 2023). We selected 500 cases in which negative-sample-type is equal to random (i.e., for which the similarity between the acceptable and unacceptable concept for a certain property is equal to 0), and used the unacceptable concept to form our sentences. In these cases, the Human Quantifier would always be no because the predicated property is unacceptable, as in Unicycles clean dishes. Additionally, we selected 240 stimuli originally constructed by Urbach and Kutas (2010) for a psycholinguistic task and recently used by Michaelov and Bergen (2023) and Gupta (2023) for LLMs evaluation. These stimuli consist of 120 typical subject-verb pairs (called "backbone phrases") completed by both a typical and an atypical object, as in postmen carry mail vs. postmen carry oil. In the original psycholinguistic experiment of Urbach and Kutas (2010), these sentences were alternatively modified by most-type and few-type quantifiers in order to collect offline plausibility ratings and record brain activity (using EEG) for the different conditions. Following the plausibility ratings of the original experiment, we included these stimuli by annotating sentences with a typical object with the quantifier most, while sentences with an atypical object are annotated with few. Our final dataset consists of 1,837 sentences. Even if the dataset is not completely balanced, we believe that the stimuli should be sufficient to observe tendencies in intra- and inter-conditions. Table 1 shows the structure of our dataset, along with some examples.
Table 1: Structure of our dataset.
Human Quantifier | Generic sentences | Example
NO | 500 | Unicycles clean dishes.
FEW | 120 | Smugglers transport umbrellas.
SOME | 500 | Oranges are used for juice.
MOST | 217 | Clocks are round.
ALL | 500 | Whales are mammals.
Total | 1837 |
Models We conducted our experiments on BERT-large-uncased (Devlin et al., 2019), a bi-directional masked language model, GPT2-xl (Radford et al., 2019), and two open-source pre-trained generative LLMs and their instruction-tuned variants: Llama-2 (Touvron et al., 2023) and Mistral (Jiang et al., 2023), both with 7 billion parameters. (We only focus on open LLMs for reproducibility reasons and because we are interested in comparing the base and the instruct versions of the very same models.)
4 Are LLMs Capable of Interpreting Generic Sentences According to Their Semantic Content?
4.1 Surprisal In our first experiment, we measure the Surprisal of each of the sentences in our dataset modified by each of the five quantifiers considered (no, few, some, most, all). We use the Surprisal of the overall sentence (S_s), defined as the sum of the Surprisals of each token (S_t) normalized by the length of the sentence: S_s = (Σ_{t∈S} S_t) / count(t), where S_t is the negative log-probability of the occurrence of a token given its context and count(t) is the number of tokens in the sentence. The Surprisal scores were extracted using the Minicons library, v. 0.2.33 (Misra, 2022).
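To make the scoring concrete, here is a minimal sketch of length-normalized sentence surprisal computed directly with Hugging Face transformers. The paper extracts its scores with the Minicons library; this hand-rolled version only illustrates the quantity being compared, and the choice of GPT-2 and natural-log units are assumptions for the example (the first token receives no score because it has no left context).

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_surprisal(sentence: str) -> float:
    # Token-level surprisal: negative log-probability of each token given its left context.
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)   # predictions for tokens 2..T
    targets = ids[0, 1:]
    token_surprisals = -log_probs[torch.arange(targets.size(0)), targets]
    # Normalize the summed surprisal by the number of scored tokens (sentence length).
    return token_surprisals.mean().item()

print(sentence_surprisal("All whales are mammals."))
print(sentence_surprisal("No whales are mammals."))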
Let us consider the examples in Table 1: a LLM is considered accurate if 1. (a) Ss (No unicycles clean dishes) < Ss (Few/Some/Most/All unicycles clean dishes.) (b) Ss (Few smugglers transport umbrellas.) < Ss (No/Some/Most/All smugglers transport umbrellas.) and so on. In addition to the five quantified conditions, we also extracted the Surprisal of the bare generic sentence, without quantifier. In the case of the generic condition, we expect it to have i) higher Surprisal than the sentences preceded by no and few for the sentences for which the Human Quantifier is no and few, whilst ii) having an approximately equivalent Surprisal score to the sentence preceded by all, some, and most. Sentences annotated with no and few quantifiers are semantically impossible or implausible sentences and, therefore, should be ‘surprising’ unless they are preceded by the respective Human Quantifier, which reverses the truth value of the sentences: 2. (a) Ss (Unicycles clean dishes) > Ss (No unicycles clean dishes.) (b) Ss (Smugglers transport umbrellas) > Ss (Few smugglers transport umbrellas) In contrast, sentences annotated with some, most, and all refer to semantically plausible events. The generic versions of these sentences are implicitly quantified, that is, semantically equivalent to the respective quantified sentence. Consequently, there should be no difference in the Surprisal scores between the bare generic and the quantified versions: 3. (a) Ss (Oranges are used for juice) ≃Ss (Some oranges are used for juice) (b) Ss (Clocks are round) ≃ Ss (Most clocks are round) (c) Ss (Whales are mammals) ≃Ss (All whales are mammals) Results Following the above assumptions, we computed the accuracy of each model separately for each Human Quantifier class, reported in Figure 1. On the left (Accuracy QUANT), we report Figure 1: Heatmaps of Accuracy values per Human Quantifier on Surprisals, for each LLM. accuracy values computed following (1): a model is correct if the sentence with the lowest Surprisal is the one with the same quantifier of the specific Human Quantifier class. As the plot reveals, the highest accuracy is obtained for the Human Quantifier all (especially for GPT2 and Mistral models), followed by the Human Quantifier some. On the right (Accuracy GEN), we compare the Surprisal of a generic sentence (without quantifier) with its version modified by the specific Human Quantifier: an LLM is considered accurate if it fulfills the conditions in (2) and (3). For some, most, and all classes, we considered as approximately equal a Surprisal of ± 1 std3. Similarly, we observe that accuracy scores are higher for some and all classes, but, in this case, we obtain high accuracy even for the class most. On the contrary, the scores are low for no and few classes. To inspect the behavior of LLMs in more detail, we examined the distributions of the Surprisal values inside each Human Quantifier class. Figure 2 reports the distributions for GPT2-xl and Mistral, as the other LLMs analyzed (BERT-large and Llama) show the same trends (all boxplots are in Appendix A). For each Human quantifier (x-axis), a boxplot represents the Surprisals of a sentence with a specific quantifier (e.g., “No unicycles clean dishes” vs. “All unicycles clean dishes”).We can observe two main trends: by looking at the average mean of each Human Quantifier group, we notice that the no and few classes tend to have higher Surprisals in general, regardless of the LLM Quantifier condition. 
Figure 2: Sentence surprisal distributions per Human Quantifier and LLM Quantifier, for GPT2-xl and Mistral.
We can hypothesize that this happens because these sentences contain words that do not usually co-occur with each other, precisely because they are meant to identify properties that are impossible or implausible for the categories in question (Kauf et al., 2023). However, the overall meaning of these sentences should become less surprising when introduced by the appropriate Human Quantifier (no or few), since it has the effect of reversing the truth value of the sentence, as illustrated in (2). Nevertheless, the presence of the quantifier does not modulate the Surprisal scores as theoretically expected. Looking inside each LLM Quantifier group, we notice that the Surprisal distribution is the same across the five groups, and we do not see the reversal of the ratios among the distributions that should occur if the quantifier meaning was properly taken into account. In other words, if the quantifiers' meaning were correctly taken into consideration, the sentences with the lowest score should be the ones with the same quantifier as the target class (cf. (1)). Conversely, the average Surprisal of generic sentences (GEN) should be similar to the Surprisal of sentences quantified with some, most, and all in their respective classes (cf. (3)), while it should be higher than that of no and few in their respective classes (cf. (2)). However, sentences quantified with some, most, and all tend to have lower Surprisals in all five conditions across all LLMs (with the partial exception of Llama, in which the subgroups are roughly all at the same level). This inspection can help us interpret the accuracy values: the fact that the models perform better on the some, most, and all classes seems to be due to a general preference for these quantifiers over the others in all cases, rather than a real grasp of the meaning of the quantifiers, also with respect to the generic sentences. What we just observed leads us to point out that the recent results reported by Gupta (2023) on quantifier comprehension in LLMs may be misleading. In their experimental paradigm, the accuracy of Surprisal is calculated on sets of minimal pairs, such as S(Most postmen carry mail) < S(Few postmen carry mail) and S(Most postmen carry oil) > S(Few postmen carry oil). In this task, the two complementary conditions are satisfied by the two opposite outcomes. The accuracy values they report are consistently around 0.5, which means the model satisfies the conditions in about half of the cases. In light of our results, this outcome seems to be due to a general agnostic preference of LLMs for one quantifier over the other: most has a tendency to always have a lower Surprisal than few, regardless of what would be the correct Human Quantifier, just as the other quantifiers tend to maintain their position in the reciprocal distribution. Our results align with those of Michaelov and Bergen (2023), as well as with previous studies on the sensitivity of LLMs' probability values to negation and logical quantifiers (Ettinger, 2020; Kassner and Schütze, 2020; Kalouli et al., 2022). In the next section, we will discuss a possible explanation for these outcomes and propose an alternative method for investigating the LLMs' interpretation of generic sentences through quantifiers.
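For clarity, the surprisal-based accuracy criteria in (1)-(3) used above can be summarized in a short sketch; the dictionary layout below (one surprisal per quantified condition, plus the bare generic and the gold Human Quantifier) is a hypothetical representation of the evaluation data, not the authors' code.

QUANTIFIERS = ["no", "few", "some", "most", "all"]

def quant_accuracy(item: dict) -> bool:
    # Condition (1): the version with the gold Human Quantifier must have the lowest surprisal.
    gold = item["human_quantifier"]
    scores = {q: item[q] for q in QUANTIFIERS}
    return min(scores, key=scores.get) == gold

def gen_accuracy(item: dict, std: float) -> bool:
    gold, generic = item["human_quantifier"], item["generic"]
    if gold in ("no", "few"):
        # Condition (2): the bare generic should be more surprising than the gold version.
        return generic > item[gold]
    # Condition (3): the bare generic should be roughly as surprising (within +/- 1 std).
    return abs(generic - item[gold]) <= std

example = {"human_quantifier": "all", "generic": 3.1,
           "no": 4.0, "few": 3.9, "some": 3.3, "most": 3.2, "all": 3.0}
print(quant_accuracy(example), gen_accuracy(example, std=0.5))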
4.2 Prompting From the analysis of Surprisals, it emerged that LLMs are unable to correctly interpret generic sentences through quantifiers with respect to their semantic content (i.e., their Human Quantifier). However, we want to point out that the Surprisals, as well as the probability values produced by LLMs, are an index of the LLM's 'online' decision-making process: in this sense, they are somewhat comparable to human brain activity in response to linguistic stimuli, and they have indeed been used in works comparing them to brain responses such as the N400 amplitude (Ettinger, 2020; Michaelov and Bergen, 2023; Gupta, 2023). It is interesting to note that there is experimental evidence in psycholinguistic works showing that the manipulation of both no-some-all (Fischler et al., 1984; Kounios and Holcomb, 1992) and few-most (Urbach and Kutas, 2010), which leads to a reversal of the truth values of the modified sentences (e.g., from the cited studies: [All/some/no] gems are rubies vs. [All/some/no] rubies are gems; [Most/Few] farmers grow [crops/worms]), while well taken into account in offline plausibility judgments, does not similarly reverse the N400 amplitudes in incremental sentence comprehension. This is considered by Urbach and Kutas (2010) as a dissociation between the patterns of quantifier and typicality effects for the offline and online measures. Given these considerations, the LLMs' online processing (measured through Surprisal), which reveals low sensitivity to quantifiers but good sensitivity to typicality (as shown in the previous paragraph and similar to Michaelov and Bergen (2023)), is not that dissimilar to that of humans. In other words, both humans and LLMs, in online processing, are more sensitive to the plausibility of the predicated property for a given category (i.e., the fact that Postmen carry mail is more plausible than Postmen carry oil) than to the presence of the correct quantifier. Therefore, we decided to test our dataset through another methodology that is possibly more comparable to offline plausibility judgments (as are the Human Quantifier class annotations on our dataset): metalinguistic prompting. For this task, we tested the instruction-tuned variants of Llama-2 and Mistral, using the same hyperparameters (temperature=0, do_sample=False, top-k=10, max-tokens=50, frequency and presence penalty=0). We used two different versions of prompting strategies, both in zero-shot settings, since we are interested in eliciting the knowledge already encoded in each model (examples of each prompt are reported in Appendix B). In the first condition, models were asked to choose the most truthful sentence from the list of its quantified versions for each of the sentences in our dataset. In the second condition, the only difference is that the generic form of the sentence is also presented among the options; we will call the first version QUANT and the second QUANT+GEN.
Results Figure 3 reports the percentages in which each different option (LLM Quantifier condition) occurs in the LLMs' responses per Human Quantifier class, in both the QUANT and QUANT+GEN prompting versions.
Figure 3: Percentages of occurrence for each option (LLM Quantifier) per Human Quantifier class in the LLMs' responses when prompted. For each LLM, we show the responses to both QUANT and QUANT+GEN prompting.
Both language models show the same trends.
When they are given only the quantified sentences as options (QUANT prompting), they take a 'conservative' stance, overextending the existential quantifier some over all classes. Interestingly, we can observe a trend whereby, moving from the Human Quantifier class no to all, there is a progressive extension of the quantifiers most and all, and a simultaneous reduction of no and few in the responses. When the generic form is also provided among the options (QUANT+GEN prompting), this is often preferred, especially by Mistral. Even in this case, we can observe a progression in the extension of the generic form from left to right. This progressive trend seems to suggest that the instruction-tuned models are able to partially discriminate between different classes on the basis of their semantic content and have encoded some kind of meaning associated with quantifiers and the generic form, although not a particularly refined one. However, the accuracy (Table 2) remains overall unsatisfactory. In this case, the accuracy values were computed considering the model accurate if its choice matched the Human Quantifier class: Accuracy = N(LLM Quantifier = Human Quantifier) / N(total). Furthermore, in the QUANT+GEN version, the choice of the generic form was considered accurate if the Human Quantifier class was some, most, or all, given that these are the cases for which the generic expression is acceptable (cf. (3)). As in the previous experiment, the accuracy is not consistently good across all classes, but only on those for which there is a strong general preference, i.e., when there is overextension (e.g., some).
Table 2: Prompting accuracy per Human Quantifier class on the QUANT and QUANT+GEN versions.
Human Quantifier | Llama QUANT | Llama QUANT+GEN | Mistral QUANT | Mistral QUANT+GEN
no | .278 | .082 | .690 | .124
few | .092 | .125 | .000 | .008
some | .746 | .498 | .818 | .384
most | .198 | .097 | .000 | .009
all | .000 | .076 | .158 | .080
5 Do LLMs Have a Linguistic Default Interpretation of the Generic Form?
Our second study is more exploratory and aims to investigate the relationship between generalizations and quantification from a more formal point of view. We draw inspiration from the work of Cimpian et al. (2010), who found that people, when presented with a generic sentence about a novel category (Morseths have silver fur.) and asked to estimate how many members of the category possess the characteristic predicated by the generic, tend to assign very high percentages (on average, very close to 100 percent). From that, we can infer that people have a default interpretation of the generic form: if informed about a made-up category (one that lacks associations to properties in their minds) through a generic form, humans tend by default to extend the predicated property to all members of the category. Since models do not seem to encode the world knowledge necessary to interpret generics on account of their semantic content as humans do, we decided to test them on a similar paradigm. Our aim is to comprehend whether and how LLMs encode a default interpretation associated with the generic form. As in Section 4.2, we used prompting to test the instruction-tuned variants of Llama-2 and Mistral, using the same parameter configurations presented in the previous experiment. Our prompting strategy for this task is inspired by the experimental design of Cimpian et al. (2010); an example is shown in Appendix B.
This prompting strategy is analogous to the QUANT condition used in the previous section, since the options are exactly the same; the only difference is that, in this case, the generic sentence is given as a premise in an entailment condition (e.g., Birds fly, therefore...), with the aim of exploring whether this leads the models to reshape their response accordingly.
Results Figure 4 reports the percentages in which each different option (LLM Quantifier condition) occurs in the LLMs' responses per Human Quantifier class in the entailment condition.
Figure 4: Percentages of occurrence for each option (LLM Quantifier) per Human Quantifier in the LLMs' responses when prompted on the entailment condition.
If we compare them with the results in Figure 3 for the QUANT condition (in which the options were exactly the same), we can observe that in Llama there is a strong reduction of no and few and a large overextension of the quantifier most, as well as the emergence of all on the last classes; in Mistral, no and few have practically disappeared and, although there is still an overextension of some, there is a strong increase of most and all. Overall, the generic sentence provided as a premise seems to lead both models to skew toward "strong" positive quantifiers (most and all), at the expense of the negative ones.
6 General Discussion
This paper offers both quantitative and qualitative insights into how LLMs interpret generics, employing experimental designs that utilized quantified expressions to probe the comprehension of generic statements. Our two experiments were conceived to evaluate two related but separate abilities: first, the models' capacity to accurately recognize the common knowledge implied in generic statements (i.e., whether they can generalize a property to the right level of inclusiveness of categories); secondly, their ability to comprehend generalizations irrespective of their content, specifically, whether they incorporate any linguistic cues linked to the generic form. To the best of our knowledge, we are the first to perform this investigation with recent LLMs, including their instruction-tuned variants, and to test them with prompting methodologies. The experiment illustrated in §4 was designed to investigate whether LLMs are capable of interpreting generic sentences according to their semantic content through quantifiers (RQ1). We observed that Surprisals do not seem to be particularly sensitive to the effect of quantifiers on sentence meaning, thus preventing us from using them as an explicit marker of the interpretation of generic sentences that differ in semantic content. However, it is possible that this outcome is not due to a complete insensitivity of the models to the meaning of quantifiers as much as to the method employed. In fact, the measurement of Surprisals could be more akin to measurements of human online processing (such as recordings of brain activity) than to offline judgments (such as the annotations we have on our dataset). Interestingly, the Surprisal of the models with respect to the effect of quantification does indeed seem to follow a similar pattern to that emerging from comparable studies on human N400 potentials (Fischler et al., 1984; Kounios and Holcomb, 1992; Urbach and Kutas, 2010). Therefore, we investigated the behavior of the models through prompting, which mirrors offline human judgments. The analyzed outcomes suggest that the instruction-tuned models have encoded some kind of meaning associated with quantifiers and the generic form, although not a particularly refined one.
LLMs judge the choice of most and all, as well as of the generic form, over the others as more suitable as the semantic content of the sentence goes from impossible/implausible category-property pairs (no/few classes) to plausible category-property pairs (some/most classes), to necessary category-property pairs (all class). However, their comprehension of the meaning of generic and quantified sentences with respect to their semantic content does not seem to be particularly accurate. LLMs tend to take a very 'conservative' stance, preferring the intermediate quantifier some when given only quantified sentences as options, and the generic sentence itself (inherently vague) when this is added among the options (in this regard, it should be kept in mind that for our experiments we used temperature=0, which makes models' responses more focused and deterministic). This could be due to the fact that explicit quantification is actually a relatively rare phenomenon in naturally occurring text, on which LLMs are exclusively trained, while underspecified constructions like generic sentences are much more frequent (Herbelot and Copestake, 2011; Herbelot and Vecchi, 2016). Moreover, the different quantifiers all appear in the same syntactic positions and in superficially very similar contexts; the choice of one or the other is inextricably linked to our extralinguistic knowledge of the categories and the properties predicated on them, something LLMs do not possess. For this reason, we conducted a last study to explore the models' interpretation of generalization and quantification aside from the semantic content of the predications, i.e., whether they seem to have encoded linguistic knowledge associated with the generic form (RQ2). We found that, when a generic sentence is provided as a premise in an entailment condition, instruction-tuned models tend to reshape the distributions of the different quantifiers in their responses (cf. Figure 3 vs. Figure 4), skewing their preferences toward "strong" positive quantifiers (most and all). People behave similarly (interpreting a property predicated by a generic as applicable to virtually all members of the category) when tested on novel categories, for which they have no prior understanding. However, in a real language setting, humans undeniably modulate their interpretations of generalizations through their knowledge of real categories.
7 Conclusions
In conclusion, this study observed that i) LLMs seem not to have the world knowledge necessary for comprehending the meaning of generic and quantified sentences with respect to their semantic content in a human-like way; ii) LLMs overextend the truth of a generic sentence, when this is presented as an assumption, to most/all members of real-world categories, regardless of the meaning of the predication. This behavior could play a role in their encoding of stereotypes, which could be a potentially harmful bias. Overall, we believe that further investigations are needed to clarify the interpretation of generics in Language Models and, more generally, the role that this phenomenon has in their behavior.
Acknowledgements
Funded by the European Research Council, Abstraction project (attributed to Marianna Bolognesi. Grant agreement: ERC-2021-STG-101039777). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency.
Neither the European Union nor the granting authority can be held responsible for them.
Limitations
Prompting strategies In this study, we assessed models under a conservative condition by employing a low temperature. Future research could explore the responses of the same models under higher temperatures, investigating how enhancing the linguistic creativity of LLMs impacts their performance in the presented tasks. Another limitation pertains to the prompts utilized. We evaluated all LLMs using the query "Tell me which of the following is the most truthful sentence" on the first prompting task, and "What is the correct completion?" for the second one, in each case followed by a list of the options. While we experimented with different prompts before choosing this format, we did not quantitatively investigate whether alternative queries could enhance the accuracy of the models, nor did we explore whether different examples within the prompt could yield different results.
Study on English The current dataset and research are exclusively centered on English. Extending the dataset to include other languages would be advantageous. However, we currently face a scarcity of resources for other languages annotated with comparable linguistic information.
Ethics Statement
The resources used to build our dataset (Herbelot and Vecchi, 2016; Misra et al., 2023; Urbach and Kutas, 2010) are publicly available. We will release the dataset used in the present experiments and the obtained results in the OSF project of this study (https://osf.io/ahspu/?view_only=1f789e020b7346338c53b684943dc9f1). For reasons of replicability, we chose to use only LLMs freely available through Hugging Face. Given a limited GPU, we relied on 7-billion-parameter models and used quantization techniques to reduce memory and computational costs, using the bitsandbytes library. However, the experiments presented require a considerable memory and computational cost, especially for the prompting tasks. In addition, there is still a significant ethical concern regarding Language Models (LLMs). These models have been demonstrated to produce inaccurate information, potentially generating offensive material when prompted with certain inputs. However, it appears that LLMs fine-tuned with specific instructions have undergone training to mitigate the harmful nature of their responses. Nevertheless, some responses may still contain objectionable content. Any showcases of LLMs' linguistic capabilities should not suggest their safety or alignment with human preferences and values.
References
Camiel J Beukeboom and Christian Burgers. 2019. How stereotypes are shared through language: a review and introduction of the Social Categories and Stereotypes Communication (SCSC) framework. Review of Communication Research, 7:1–37.
Sumithra Bhakthavatsalam, Chloe Anastasiades, and Peter Clark. 2020. GenericsKB: A knowledge base of generic statements. arXiv preprint arXiv:2005.00660.
Dimitra Lazaridou Chatzigoga. 2019. Genericity. In The Oxford Handbook of Experimental Semantics and Pragmatics, pages 156–177. Oxford University Press.
Andrei Cimpian, Amanda C Brandone, and Susan A Gelman. 2010. Generic statements require little evidence for acceptance but have powerful implications. Cognitive Science, 34(8):1452–1482.
Claudia Collacciani and Giulia Rambelli. 2023. Interpretation of generalization in masked language models: An investigation straddling quantifiers and generics. Proceedings of the 9th Italian Conference on Computational Linguistics - CLiC-it 2023.
Claudia Collacciani, Andrea Amelio Ravelli, and Marianna Bolognesi. 2024. Specifying genericity through inclusiveness and abstractness continuous scales. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 15126–15136.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of NAACL.
Allyson Ettinger. 2020. What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. Transactions of the Association for Computational Linguistics, 8:34–48.
Ira Fischler, Paul A Bloom, Donald G Childers, A Antonio Arroyo, and Nathan W Perry Jr. 1984. Brain potentials during sentence verification: Late negativity and long-term memory strength. Neuropsychologia, 22(5):559–568.
Annemarie Friedrich, Alexis Palmer, Melissa Peate Sørensen, and Manfred Pinkal. 2015. Annotating genericity: a survey, a scheme, and a corpus. In Proceedings of the 9th Linguistic Annotation Workshop, pages 21–30.
Venkata Govindarajan, Benjamin Van Durme, and Aaron Steven White. 2019. Decomposing generalization: Models of generic, habitual, and episodic statements. Transactions of the Association for Computational Linguistics, 7:501–517.
Akshat Gupta. 2023. Probing quantifier comprehension in large language models: Another example of inverse scaling. In Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pages 56–64.
James A Hampton. 2012. Generics as reflecting conceptual knowledge. Recherches linguistiques de Vincennes, (41):9–24.
Aurelie Herbelot and Ann Copestake. 2011. Formalising and specifying underquantification. In Proceedings of the Ninth International Conference on Computational Semantics (IWCS 2011).
Aurélie Herbelot and Eva Maria Vecchi. 2016. Many speakers, many worlds. LiLT (Linguistic Issues in Language Technology), 13.
Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7B.
Aikaterini-Lida Kalouli, Rita Sevastjanova, Christin Beck, and Maribel Romero. 2022. Negation, coordination, and quantifiers in contextualized language models. In Proceedings of the 29th International Conference on Computational Linguistics, pages 3074–3085, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Nora Kassner and Hinrich Schütze. 2020. Negated and misprimed probes for pretrained language models: Birds can talk, but cannot fly. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7811–7818, Online. Association for Computational Linguistics.
Carina Kauf, Anna A Ivanova, Giulia Rambelli, Emmanuele Chersoni, Jingyuan Selena She, Zawad Chowdhury, Evelina Fedorenko, and Alessandro Lenci. 2023. Event Knowledge in Large Language Models: The Gap Between the Impossible and the Unlikely. Cognitive Science, 47(11):e13386.
Sangeet Khemlani, Sarah-Jane Leslie, and Sam Glucksberg. 2012. Inferences about members of kinds: The generics hypothesis. Language and Cognitive Processes, 27(6):887–900.
John Kounios and Phillip J Holcomb. 1992.
Structure and process in semantic memory: evidence from event-related brain potentials and reaction times. Journal of experimental psychology: General, 121(4):459. Manfred Krifka, Francis Jeffry Pelletier, Gregory Carlson, Alice ter Meulen, Gennaro Chierchia, and Godehard Link. 1995. Genericity: An introduction. In Greg N. Carlson and Francis Jeffry Pelletier, editors, The Generic Book, pages 1–124. University of Chicago Press. Sarah-Jane Leslie. 2014. Carving up the social world with generics. Oxford studies in experimental philosophy, 1. Sarah-Jane Leslie, Sangeet Khemlani, and Sam Glucksberg. 2011. Do all ducks lay eggs? the generic overgeneralization effect. Journal of Memory and Language, 65(1):15–31. Ken McRae, George S Cree, Mark S Seidenberg, and Chris McNorgan. 2005. Semantic feature production norms for a large set of living and nonliving things. Behavior research methods, 37(4):547–559. James Michaelov and Benjamin Bergen. 2023. Rarely a problem? language models exhibit inverse scaling in their predictions following few-type quantifiers. In Findings of the Association for Computational Linguistics: ACL 2023, pages 14162–14174, Toronto, Canada. Association for Computational Linguistics. Kanishka Misra. 2022. minicons: Enabling Flexible Behavioral and Representational Analyses of Transformer Language Models. arXiv preprint arXiv:2203.13112. Kanishka Misra, Julia Rayz, and Allyson Ettinger. 2023. COMPS: Conceptual minimal pair sentences for testing robust property knowledge and its inheritance in pre-trained language models. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2928– 2949, Dubrovnik, Croatia. Association for Computational Linguistics. 11820 Tuan-Phong Nguyen, Simon Razniewski, Aparna Varde, and Gerhard Weikum. 2023. Extracting cultural commonsense knowledge at scale. In Proceedings of the ACM Web Conference 2023, pages 1907–1917. Sandro Pezzelle, Shane Steinert-Threlkeld, Raffaella Bernardi, and Jakub Szymanik. 2018. Some of them can be guessed! exploring the effect of linguistic context in predicting quantifiers. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 114–119, Melbourne, Australia. Association for Computational Linguistics. Sandeep Prasada, Sangeet Khemlani, Sarah-Jane Leslie, and Sam Glucksberg. 2013. Conceptual distinctions amongst generics. Cognition, 126(3):405–422. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language Models Are Unsupervised Multitask Learners. OpenAI Blog, 1(8):9. Sello Ralethe and Jan Buys. 2022. Generic overgeneralization in pre-trained language models. In Proceedings of the 29th International Conference on Computational Linguistics, pages 3187–3196, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. Alon Talmor, Yanai Elazar, Yoav Goldberg, and Jonathan Berant. 2020. olmpics-on what language model pre-training captures. Transactions of the Association for Computational Linguistics, 8:743–758. Michael Henry Tessler and Noah D Goodman. 2019. The language of generalization. Psychological review, 126(3):395. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open Foundation and Fine-tuned Chat Models. arXiv preprint arXiv:2307.09288. Thomas P Urbach and Marta Kutas. 2010. 
Quantifiers more or less quantify on-line: ERP evidence for partial incremental interpretation. Journal of Memory and Language, 63(2):158–179. Olga Uryupina, Ron Artstein, Antonella Bristot, Federica Cavicchio, Francesca Delogu, Kepa J. Rodriguez, and Massimo Poesio. 2020. Annotating a broad range of anaphoric phenomena, in a variety of genres: the ARRAU corpus. Natural Language Engineering, 26(1):95–128.
A Analysis of LLMs Surprisals
Figure 5: Sentence surprisal distributions per Human Quantifier and LLM Quantifier, for each LLM analyzed.
B Prompting strategies
We report an example for each of the three prompting strategies used. For each of them, the options were randomized for each iteration.
• Section 4.2
QUANT version: Tell me which of the following is the most truthful sentence: No birds fly. Few birds fly. Some birds fly. Most birds fly. All birds fly.
QUANT+GEN version: Tell me which of the following is the most truthful sentence: Birds fly. No birds fly. Few birds fly. Some birds fly. Most birds fly. All birds fly.
• Section 5
What is the correct completion? Birds fly, therefore... no birds fly. few birds fly. some birds fly. most birds fly. all birds fly.
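The Limitations and Ethics sections state that the prompting experiments used freely available 7-billion-parameter instruction-tuned models, loaded through Hugging Face with bitsandbytes quantization and queried at a low temperature with the fixed questions shown in this appendix. The sketch below is a minimal, hypothetical reconstruction of that loop: the model name, the 4-bit settings, and greedy decoding as a stand-in for "low temperature" are assumptions, not the authors' exact configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL = "mistralai/Mistral-7B-Instruct-v0.2"  # assumption: any open 7B instruct model

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),  # bitsandbytes quantization
    device_map="auto",
)

def build_quant_prompt(noun: str, predicate: str, include_bare: bool = False) -> str:
    """Build the QUANT / QUANT+GEN query from Appendix B.
    (The appendix notes the option order was randomized; shuffling is omitted here.)"""
    options = [f"{q} {noun} {predicate}." for q in ["No", "Few", "Some", "Most", "All"]]
    if include_bare:  # QUANT+GEN prepends the bare-plural generic, e.g. "Birds fly."
        options = [f"{noun.capitalize()} {predicate}."] + options
    return ("Tell me which of the following is the most truthful sentence: "
            + " ".join(options))

prompt = build_quant_prompt("birds", "fly", include_bare=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=30, do_sample=False)  # greedy decoding
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```

In practice the decoded string would still need light parsing to map the model's answer back onto one of the listed quantified sentences.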
2024
636
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11823–11835 August 11-16, 2024 ©2024 Association for Computational Linguistics Can Large Language Models Interpret Noun-Noun Compounds? A Linguistically-Motivated Study on Lexicalized and Novel Compounds Giulia Rambelli University of Bologna [email protected] Emmanuele Chersoni The Hong Kong Polytechnic University [email protected] Claudia Collacciani University of Bologna [email protected] Marianna Bolognesi University of Bologna [email protected] Abstract Noun-noun compounds interpretation is the task where a model is given one of such constructions, and it is asked to provide a paraphrase, making the semantic relation between the nouns explicit, as in carrot cake is “a cake made of carrots.” Such a task requires the ability to understand the implicit structured representation of the compound meaning. In this paper, we test to what extent the recent Large Language Models can interpret the semantic relation between the constituents of lexicalized English compounds and whether they can abstract from such semantic knowledge to predict the semantic relation between the constituents of similar but novel compounds by relying on analogical comparisons (e.g., carrot dessert). We test both Surprisal metrics and prompt-based methods to see whether i.) they can correctly predict the relation between constituents, and ii.) the semantic representation of the relation is robust to paraphrasing. Using a dataset of lexicalized and annotated noun-noun compounds, we find that LLMs can infer some semantic relations better than others (with a preference for compounds involving concrete concepts). When challenged to perform abstractions and transfer their interpretations to semantically similar but novel compounds, LLMs show serious limitations1. 1 Introduction Noun-noun compounds represent an important challenge for all the applications related to Natural Language Understanding, given the implicit semantic relation assumed between the two components, namely: head and modifier (Nakov, 2008b). Their correct interpretation is an essential step for several Natural Language Processing applications such as question answering, machine trans1Data and code available at: https://osf.io/67k9u/ ?view_only=258fa2570d984372ad104e19d77f71bb lation, and information extraction. For example, if a question answering system is asked something about birthday cake, it must understand that the user is talking about a cake made for birthdays; while if it is asked about a carrot cake, it must understand that the query refers to a cake made with carrots (not for carrots). The capacity to grasp the semantic connection underlying the pairing of two terms in a compound represents a form of abstraction inherent to human cognition, applicable to concrete and abstract concepts alike (concrete such as carrot cakes and abstract such as bank loans). This skill is often wielded even for never-encountered-before compounds (Van Jaarsveld and Rattink, 1988). Previous research stressed the role of structured world knowledge in the interpretation of compounds (Wisniewski and Love, 1998; Ó Séaghdha, 2008), which includes the knowledge of the constituent entities and their potential relations. 
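The abstract and introduction above describe a Surprisal-based evaluation: each candidate paraphrase of a compound is scored by the model's length-normalized negative log-probability (formalized later in Section 3.2), and the compound counts as correctly interpreted when the gold paraphrase receives the lowest score. A minimal sketch of that selection step follows; it uses plain Hugging Face transformers rather than the minicons library the authors report, and GPT-2 plus the "<compound> is <paraphrase>" frame are stand-in assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")          # small stand-in for the 7B models
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def sentence_surprisal(sentence: str) -> float:
    """Mean negative log-likelihood per predicted token (in nats),
    which approximates the paper's length-normalized sentence Surprisal."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean cross-entropy over predicted tokens
    return loss.item()

def pick_paraphrase(compound: str, paraphrases: list[str]) -> str:
    """Return the candidate paraphrase with the lowest surprisal."""
    scored = {p: sentence_surprisal(f"{compound} is {p}.") for p in paraphrases}
    return min(scored, key=scored.get)

candidates = ["a cake made of carrots", "a cake intended for carrots", "a cake located in carrots"]
print(pick_paraphrase("carrot cake", candidates))
```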
Moreover, people are able to interpret novel compounds by abstracting from knowledge based on past experiences with similar conceptual combinations (Gagné and Spalding, 2006b; Gagné and Shoben, 1997, 2002, among others) and to extend them by relying on analogical comparisons (Krott, 2009). Can the modern Large Language Models (LLMs) do the same? The main goal of our study is to propose a more refined methodology to understand when and how LLMs are capable of performing abstractions that humans routinely do, namely: understanding the semantic relation existing between the two components of a lexicalized compound and then extending such relation to novel compounds that are constructed in such a way to maintain the semantics of the original components. To do so, we manually manipulated existing compounds by replacing one of the two terms (head or modifier) with their hypernym, namely a word denoting a superordinate concept (Cruse, 1986). This allowed us to gener11823 compound coarse-grained (Tratz, 2011) fine-grained (Tratz, 2011) Hatcher-Bourque (Pepper, 2022) paraphrase (Pepper, 2021) plastic bag containment SUBSTANCE -MATERIALINGREDIENT COMP(OSITION)-R a bag that is composed of plastic trash bag containment CONTAIN CONT(AINMENT)-R a bag that contains trash supermarket shelf loc_part_whole LOCATION LOCATION a shelf that is located in a supermarket car door loc_part_whole WHOLE+ PART_OR _MEMBER_OF PARTONOMY a door that is part of a car food company purpose CREATEPROVIDEGENERATESELL PRODUCTION a company that produces food bank loan causal CREATORPROVIDERCAUSE_OF PROD(UCTION)-R a loan that a bank produces research group purpose PERFORM& ENGAGE_IN PURPOSE a group intended for research art class topical TOPIC TOPIC-R a class that is about art wind turbine topical MEAN US(A)G(E)-R a turbine that uses wind Table 1: Semantic relations of Tratz (2011) and their mapping onto the Hatcher-Borque classification. ate novel compounds such as birthday dessert and event cake, based on the lexicalized birthday cake. To test the LLMs’ ability to understand the semantics of lexicalized and novel compounds, we assess whether Surprisal, a metric directly based on the log probabilities of the LLMs, is able to differentiate between the possible interpretations of a compound. We hypothesize that LLMs may be accurate in recognizing the correct semantic relation holding between the two components of a lexicalized compound. Moreover, if we were to observe any differences in the performance across different types of compounds, we would argue that such differences may be (at least partially) explained by the concreteness of the compound, in line with previous psychological findings showing that concrete concepts are processed more easily than abstract ones (Jessen et al., 2000). As a complement to Surprisal analyses, we performed a metalinguistic prompt asking to identify the correct interpretation of a compound from a list of options. We relied on LLMs trained with Instruction tuning, a method that has recently been proposed to enhance the generalization capability of LLMs, and assessed the performance of some of the most popular architectures on this task. Contributions Our contributions can be summarized as follows: 1. To the best of our knowledge, we are the first to investigate compound interpretation with the most recent LLMs, including instructiontuned variants; 2. 
We introduce a dataset designed to manipulate compounds at several levels of linguistic information and present a methodology to generate novel compounds that could be helpful for future investigations. 2 Related Work The problem of the interpretation of compounds has generally been addressed via two different tasks: the first one is the classification in a limited inventory of ontological/semantic relations holding between the two nouns (Nastase and Szpakowicz, 2003), and the second one is the free generation of a paraphrase describing the same relations (Hendrickx et al., 2013; Shwartz and Waterson, 2018; Shwartz and Dagan, 2019). With the introduction of Transformer-based language models, several studies have proposed to investigate their internal representations to understand how the constituent meanings are composed (Ormerod et al., 2023; Mileti´c and Schulte im Walde, 2023; Buijtelaar and Pezzelle, 2023, among others), and if and to what extent they are able to generalize to interpret unseen compounds (Li et al., 2022). Coil and Shwartz (2023) proposed a few-shot model based on GPT-3 (Brown et al., 2020) to tackle interpretation, and they were able to achieve almost perfect performance on a SemEval noun compounds benchmark by Hendrickx et al. (2013). However, by measuring the n-gram overlap between the generated paraphrases and the C4 corpus (Raffel et al., 2020), they found that GPT-3 might just be parroting word sequences seen in the 11824 training data, and the strategy turned out to be less effective with rare or novel compounds. Is the knowledge encoded in recent LLMs including instruction-tuned ones- sufficient to interpret the relation between constituent nouns and to generalize the interpretations to novel compounds? Language models retain a non-trivial amount of knowledge about the world, and this is reflected in the log probability scores that they assign to real-world situations and events described by natural language sentences (Pedinotti et al., 2021; Kauf et al., 2023); moreover, the recent progress on instruction tuning led to even better alignment with conceptual representations in the human brain (Aw et al., 2023). Therefore, our investigation will focus on three of the most popular LLMs (Llama-2, Falcon, and Mistral), both in their Base and in their Instruct version, to see if instruction tuning leads to performance improvements also in the interpretation of compounds. 3 Do LLMs Grasp Semantic Relations in Lexicalized Noun Compounds? 3.1 Data For our experiment, we selected compounds from two previously released datasets. Tratz (2011) gathered in his dataset around 19K compositional noun compounds human-annotated with a semantic relation (37 fine-grained relations, 12 coarsegrained relations). Conversely, Muraki et al. (2023) collected concreteness ratings for over 60K multiword expressions from 2,825 online participants. Expressions were rated from 1 to 5, where 1 indicates that the expression was very abstract and 5 that the expression was very concrete2. In order to use the concreteness ratings collected by Muraki as a predictor for the LLM accuracy in identifying the correct semantic relation between head and modifier, we retained only the compounds from Tratz associated with concreteness ratings in Muraki. The intersection of the two datasets resulted in 2,268 noun-noun compounds annotated with word and bigram frequency (extracted from enTenTen20 corpus; cf. 
Jakubíˇcek et al., 2013; Suchomel, 2020), concreteness score, semantic relation class, and the semantic type of the compound (provided by three annotators who followed the coding scheme of Villani et al., 2024). We be2’Concreteness’ refers to the degree to which the concept denoted by a word refers to a perceptible entity (Brysbaert et al., 2014). lieve that the more linguistic features are added to a compound, the more we can shed light on which factors influence LLMs’ plausibility of noun compounds. Additionally, we associated a paraphrase created for each compound for the following reasons. Using abstract semantic categories to describe compounds is considered problematic because i.) it is unclear which relation inventory is the best one, ii) such relations capture only part of the semantics (e.g., classifying malaria mosquito as CAUSE obscures the fact that mosquitos do not directly cause malaria, but just transmit it), and iii.) multiple relations are possible (Nakov, 2008a). Therefore, common compound datasets used in NLP typically provide linguistic paraphrases of compounds produced by human annotators. However, if multiple paraphrases are reported for each compound, this causes an exponential generation of similar paraphrases in the data; for instance, golf course can be “course for golf,” “course for playing golf,” “course for the game of golf,” etc. (from Hendrickx et al., 2013). We decided to follow a different approach to reduce the variability of paraphrases. We converted Tratz’s relations into the Hatcher-Bourque classification (Pepper, 2022), a classification of semantic relations suitable for typologically different languages. The classification comprises 17 low-level relations, and some of them can be reversible (the first word of the compound, usually the modifier, is the semantic head). These relations are grouped according to the three high-level relations (similarity, containment, and direction). We chose this classification not just because it was conceived to be cross-linguistically consistent but also because Pepper (2021) proposed an Excelbased tool for the computer-assisted analysis of semantic relations called the “Bourquifier”. For instance, the relation USAGE, which expresses the relation between something that is “used” and the entity (“user”) that uses it, can be translated as “an H that an M uses” (e.g., a lamp oil is “(an) oil that a lamp uses"). Conversely, animal doctor is annotated with the semantic class PURPOSE and expresses the relation between an entity and its purpose, and it is paraphrased as “a doctor intended for animals." We used the Bourquifier as a template to create compound paraphrases; as a result, compounds classified under the same semantic relation have a similar paraphrase. For the present study, we selected only com11825 Relation Count Mean Conc COMP-R 85 4.47 CONT-R 54 4.49 LOCATION 107 4.15 PARTONOMY 16 4.58 PROD-R 13 3.18 PRODUCTION 47 4.34 PURPOSE 270 4.01 TOPIC-R 66 3.30 USG-R 10 4.24 Table 2: Statistics of frequency and mean concreteness ratings for the LNC dataset. pounds with a clear map between Tratz and Hatcher-Bourque classifications from the overall dataset, disregarding ambiguous compounds or odd paraphrases. The final subset consists of 668 lexicalized (and compositional) noun-noun compounds (henceforth, LNC) and contains compounds for nine semantic relations. 
Table 1 illustrates the final relations with the associated paraphrase, while Table 2 describes the distribution of semantic relations together with their mean concreteness. In addition, we used the dataset of Nakov (2008b), which contains 250 compounds annotated with 16 semantic classes (coming from the classification by Levi (1978)) and humanproposed paraphrasing verbs (see Table 3). For our purposes, we selected the most frequently produced verb expressing the correct underlying relation for each compound and created a short sentence. For example, beacon grease becomes “(a) grease that comes from (a) bacon." This dataset serves as a diagnostic test for the evaluation of our dataset. Specifically, we assess whether LLMs show higher performance when asked to recognize paraphrases that are generated spontaneously by humans instead of those generated from the Bourquifier templates. 3.2 Methods Models We evaluated three open-source LLMs and their instruction-tuned variant: Llama-2 (Touvron et al., 2023), Falcon (Almazrouei et al., 2023), and Mistral (Jiang et al., 2023). All models are open-source, pre-trained autoregressive text models with 7 billion parameters. As a baseline, we selected BERT-large-uncased (Devlin et al., 2019), a bi-directional masked language model, Relation Verb Count ABOUT involve 18 BE be 42 CAUSE1 cause 8 CAUSE2 be caused by 17 FOR contain 16 FROM come from 22 HAVE1 contain 14 HAVE2 come from 14 IN occur in 22 MAKE1 make 4 MAKE2 be made of 20 NOMIN:ACT be made by 15 NOMIN:AGENT give 6 NOMIN:PATIENT work for 5 NOMIN:PRODUCT be made by 11 USE use 16 Table 3: Descriptive statistics for Nakov dataset. We report the most frequent verbal expression associated with each of the 16 semantic relations. and GPT2-xl (Radford et al., 2019), an autoregressive one.3 Tasks The aim of this study is to evaluate whether LLMs are able to correctly identify the semantic relation underlying noun-noun compounds. We propose not to make the model generate the correct paraphrase but to pick the correct one from a list of possible paraphrases. From the LNC dataset, we used the Bourquifier templates to make implausible paraphrases of the compound. For the Nakov dataset, we selected the most frequent verbal phrase associated with each relation (Table 3) and used it to create the distractors. We designed two complementary tasks to evaluate the ability to interpret compounds: i.) direct probability measures and ii.) metalinguistic prompting. In the first task, we compute the Surprisal at the sentence level. The Surprisal St of the single token ti is defined as the negative of the log probability of ti, conditioned on the preceding sentence tokens w<i. The Surprisal of the overall sentence (Ss) is then defined as the sum of the Surprisals of each token (St), normalized by the length of the sentence: Ss = PT t∈S St count(t) (1) For BERT, a bidirectional masked language model, the Surprisal of sentences was computed using a modified version of the metric by Kauf 3We only focus on open LLMs i.) for reproducibility reasons, and ii.) because we are interested in comparing the Base and the Instruct version of the very same models. 11826 baselines LLMs (Base) LLMs (Instruct) BERT-large GPT2-xl Llama-2 Falcon Mistral Llama-2 Falcon Mistral LNC Acc 0.262 0.338 0.401 0.433 0.403 0.448 0.38 0.428 MRR 0.509 0.542 0.583 0.595 0.569 0.599 0.557 0.592 Nakov Acc 0.484 0.548 0.592 0.568 0.6 0.632 0.56 0.648 MRR 0.641 0.682 0.722 0.707 0.73 0.746 0.698 0.756 Table 4: Surprisal results on the LNC and Nakov datasets. 
and Ivanova (2023). In short, each sentence token is sequentially masked, the Surprisal score is retrieved by using the sentence context in a masked language modeling setting, and then the partial scores finally get summed; additionally, for out-of-vocabulary words, all the tokens within the word also get masked, and not just the target one (this helps to avoid the probability overestimation of rare words). The Surprisal scores were extracted using the minicons library v. 0.2.33 (Misra, 2022). Our assumption is that the correct paraphrase of a given compound (goodNC) should have a lower Surprisal score than the scores of all incorrect alternatives (badNC). ∀s ∈badNC, S(goodNC) < S(s) (2) As a more natural way of evaluating the performance of instruct-tuned models, we decided to prompt them to select the best paraphrases for a given compound. Specifically, a model is asked to choose a correct paraphrase from a list of expressions (full example in Appendix A): Which is the most likely description of "olive oil"? 1. an oil that uses olives; 2. an oil that is part of olives; ... 9. an oil that is composed of olives We ran three different versions of prompting strategies: zero-shot (no examples of the task are provided), one-shot, and three-shot learning (one and three examples are provided, respectively). Since we observed inconsistent output from the zero-shot prompting, we just reported results for the other two settings. For this task, we selected only the instruction-tuned variants of Llama-2, Falcon, and Mistral and used the same hyperparameters for all models4. All experiments were run on Colab TPU and A100. 4Temperature: 0, do_sample: False, top-k: 10, top-p: 5, max-tokens: 50, frequency and presence penalty: 0. 3.3 Results Surprisals Table 4 reports LLMs’ performance over the two datasets. We computed two different performance metrics: i.) Accuracy, the proportion of compounds where the model assigns the lowest Surprisal to the correct paraphrase, and ii.) Mean Reciprocal Rank (MRR). For this metric, we ranked the paraphrases in terms of their Surprisal (from the smallest values to the largest ones) and computed the multiplicative inverse of the rank of the correct answer (1 if it is in the first place, 0.5 for the second, and so on). The overall Accuracy of recent LLMs is higher than the two baselines (BERT: 26,2%; GPT2: 33,8%), with BERT performing poorly. The MRR scores generally align with Accuracy. Instruction-tuned variants are not consistently better than their Base variants: Llama-2 Instruct reaches a statistical significance of the improvement over the Base model, but the opposite trend is observed for Falcon, whose instruction-tuned version performs statistically worse than its Base counterpart. Finally, Mistral’s improvement of the Instruct model over the Base one does not reach statistical significance. Considering the instruction-tuned models, Llama-2 gains the highest performance (44,8%), but there is no statistical difference with Mistral (42,2%), while both models are statistically better than Falcon (38%)5. Overall, 200 compounds are always correctly categorized by Llama-2, Falcon, and Mistral. To further gather an idea of which semantic relations are commonly mistaken by all models and to identify similar patterns across their Surprisal distributions, we additionally computed each class’s accuracy. 
In this case, per-class Accuracy is considered as the proportion of compounds where the model assigns the lowest Surprisal to the paraphrase of the correct class over 5We determine the significance of differences between model accuracies with McNemar’s Chi-Square Test, applied to a 2x2 contingency matrix containing the number of correct and incorrect answers. Statistical significance is reached when p-value < 0.01. 11827 Figure 1: LNC dataset: Percentage of semantic relations chosen by the model compared to the gold semantic relation (in the x-axis). the total compounds annotated with that semantic relation/class in the gold standard. We transformed the results as a stacked barplot (Figure 1): each column represents the original semantic class of compounds, while the colors in each bar represent the percentage of semantic relations ‘chosen’ by each model, i.e., the corresponding paraphrase with the lowest Surprisal. If the bar has the same color as the original category, the model consistently tends to assign the lowest score to the correct class; otherwise, it is possible to investigate which errors the model is making, especially if it is biased towards some relations. By looking at Figure 1, the analysis by category reveals an interesting trend across LLMs: some semantic relations have higher Accuracy (COMP-R and PRODUCTION are almost perfect), whilst others are commonly mistaken (PURPOSE, PROD-R, and TOPIC-R). It is worth noticing that the semantic relations that are less understood are also the ones referring to less concrete referents (the average of concreteness ratings is 3.18 for PROD-R and 3.30 for TOPIC-R, cf. Table 2). A binomial generalized linear mixed model demonstrates that there is a positive, significant effect between Accuracy as dependent variable and concreteness as the independent variable (coefficient= 0.703, SE=0.133, p<0.001), showing that accuracy increases with concreteness (AIC: 894.7 BIC: 908.2). The evaluation of the Nakov dataset gives similar results (cf. Appendix B): Instruction-tuned LLMs have higher performance (Llama-2: 63,2%, Mistral: 64,8%). In this case, the compounds that are accurately recognized are from the semantic classes of FROM (between 86-94% of accuracy), CAUSE2 (between 84-94% of accuracy), MAKE2 (between 80-92% of accuracy), and NOMINALIZATION_PATIENT (but this group consists of 5 compounds only). While accuracy scores are higher than those computed for our LNC dataset, this outcome does not demonstrate that a more naturalistic input changes the Surprisal distributions. Prompting The results of the prompting experiment are in line with Surprisal scores. As reported in Table 5, Mistral obtains the highest values, reaching 59% of Accuracy in the 1-shot setting. It is interesting to notice that adding examples to the prompt negatively affects the models’ answers. Considering the best variant, PRODUCTION is almost always identified correctly (96%), but its counterpart PROD-R is hardly chosen (15%). The evaluation of Nakov compounds (Table 6) is in line with the LNC dataset, and Mistral performs very well in both settings (one-shot:80%, threeshot:75%). Overall, the best model is more confused with the ABOUT relation (just 61% of accuracy). Finally, the models sometimes tend to justify their choice, giving us an idea of what their interpretation is. 
Interestingly, they do not hallucinate but answer coherently even when they fail to 11828 model 1-shot 3-shot Llama-2-7B-chat-hf .41 .18 Mistral-7B-Instruct .59 .56 Falcon-7B-Instruct .15 .14 Table 5: Prompt Accuracy over the LNC dataset. model 1-shot 3-shot Llama-2-7B-chat-hf .42 .33 Mistral-7B-Instruct .80 .75 Falcon-7B-Instruct .15 .21 Table 6: Prompt Accuracy over the Nakov dataset. select the preferred option. This qualitative analysis of the answers further confirms that instructiontuned LLMs can provide definitions similar to human ones but do not always process the underlying relation encoded into the semantics of compounds. 4 Are LLMs Generalizing Semantic Relations over Novel Compounds? Interpreting a novel compound (e.g., birthday dessert) involves both the conceptual and lexical systems; one must: i.) access the concepts denoted by the words and ii.) select a relation (e.g., a dessert intended for a birthday) to form a unified conceptual representation (Gagné and Spalding, 2006b). Coil and Shwartz (2023) observed that even for rare compounds, GPT-3 is able to generalize and make sense of new concepts, but the model tends to parrot incorrect paraphrases from the training set more often than correct ones. We hereby designed and explored a diagnostic dataset to investigate how LLMs deal with novel compound interpretation. Instead of relying on randomly generated infrequent combinations, we manipulated our original dataset of lexicalized compounds by replacing the head or the modifier with one of its hypernyms in order to answer the following questions: i.) Can LLMs generalize (i.e., can they abstract) an implicit semantic relation that ties the two constituents of a conventional compound and transfer it to a semantically similar but novel compound? ii.) Does LLMs’ performance change as a function of the type of the component (head or modifier) being replaced for the construction of the novel compound? 4.1 Data and Methods From the original dataset, we extracted the hypernyms of the head and modifier using WordNet 3.0 (Fellbaum, 2010)6. Only hypernyms occurring more than 1000 times in the enTenTen20 corpus were selected. The frequency of the new bigram (the novel compound) was then calculated, and only meaningful expressions with a frequency of occurrence lower than 30 were retained as novel compounds. For instance, given the compound apple orchard (“an orchard that produces apples”), we created the compounds pome orchard (“an orchard that produces pomes”) as a novel compound with the same head (sameHead) but replaced modifier, and apple parcel (“a parcel that produces apples”), as a same modifier (sameMod) but replaced head novel compound. This diagnostic dataset, which we named Novel Nominal Compounds (NNC), consists of 64 novel compounds covering four semantic relations: CONTAINMENT-R, LOCATION, PRODUCTION, and PURPOSE. 4.2 Results Surprisals We computed the Surprisal scores on the novel compounds’ paraphrases containing the original semantic relation (e.g., “pome orchard is an orchard that produces pomes”) and compared them with the Surprisals of the corresponding distractor paraphrases (e.g., “pome orchard is an orchard that is located in pomes”), following the same methodology presented in the previous experiment. As expected, the results are lower than the previous experiment. 
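Section 4.1 above generates novel compounds by swapping the head or the modifier for a WordNet hypernym, keeping only hypernyms attested more than 1,000 times in enTenTen20 and novel bigrams occurring fewer than 30 times in that corpus. A rough sketch of that pipeline with NLTK is given below; the corpus-frequency lookup is a toy placeholder (real counts come from enTenTen20 via Sketch Engine), and restricting to the first noun sense is a simplifying assumption.

```python
from nltk.corpus import wordnet as wn   # requires: nltk.download('wordnet')

# Toy frequency table standing in for enTenTen20 lookups; unknown expressions default to 0.
TOY_FREQ = {"pome": 1500, "pome orchard": 3}

def corpus_frequency(expression: str) -> int:
    return TOY_FREQ.get(expression, 0)

def hypernym_lemmas(noun: str, min_freq: int = 1000) -> list[str]:
    """Single-word hypernym lemmas of the first noun sense,
    kept only if frequent enough in the reference corpus."""
    lemmas = set()
    for synset in wn.synsets(noun, pos=wn.NOUN)[:1]:
        for hyper in synset.hypernyms():
            lemmas.update(l.name() for l in hyper.lemmas() if "_" not in l.name())
    return [l for l in lemmas if corpus_frequency(l) > min_freq]

def novel_compounds(modifier: str, head: str, max_bigram_freq: int = 30) -> dict:
    """Swap either constituent for a hypernym and keep only rare (hence novel) bigrams."""
    same_head = [f"{m} {head}" for m in hypernym_lemmas(modifier)]   # e.g. "pome orchard"
    same_mod = [f"{modifier} {h}" for h in hypernym_lemmas(head)]    # e.g. "apple parcel"
    keep = lambda bigrams: [b for b in bigrams if corpus_frequency(b) < max_bigram_freq]
    return {"sameHead": keep(same_head), "sameMod": keep(same_mod)}

# With the toy table above this yields {'sameHead': ['pome orchard'], 'sameMod': []}.
print(novel_compounds("apple", "orchard"))
```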
An aspect to notice is that the models tend to assign the lower score to the paraphrase of the original semantic relation more often when the head is fixed (blue bar) than when the modifier is fixed (orange bar, Figure 2). This is valid for Llama-2 (Base and Instruct), Falcon Base, and Mistral Instruct. It is worth noticing that BERT performs better than some of the larger models and shows the opposite trend. Prompting We observe that Llama-2 and Falcon perform poorly on this task, while Mistral achieves good performance, obtaining accuracy scores of .578 (1-shot) and .531 (3-shot) for the sameHead part of the NNC dataset and .469 (1-shot) and .30 (3-shot) for the sameMod. Considering just the results for this model, we observe that changing the head or the modifier affects the ability of 6We queried WordNet by relying on NLTK package, version 3.8.1. (Bird et al., 2009). 11829 Figure 2: Surprisal Accuracy over the NNC dataset. sameHead sameMod model 1 3 1 3 shot shot shot shot Llama-2-7B-chat-hf .156 .172 .141 .219 Mistral-7B-Instruct .578 .531 .469 .30 Falcon-7B-Instruct .047 .063 .079 .047 Table 7: Prompt accuracy over the NNC dataset. the model to recognize good paraphrases differently. For example, the CONTAINMENT-R relation (CONT-R) is not particularly problematic when the novel word is the modifier (accuracy 1shot: .474, 3-shot: .737). For instance, the novel compound equipment box (from glove box) is correctly paraphrased as “a box that contains equipments” by the two versions of Mistral. However, performance drops when changing the head (accuracy 1-shot: .37, 3-shot: .05). Given the previous example, Mistral (3-shot setting) associated to the compound glove container (from glove box) the paraphrase “a container intended for gloves” instead of “a container that contains gloves,” which should be expected if the model retains the same semantic relation of the original compound. From a qualitative analysis, we observed a tendency for the model to answer with the PURPOSE category instead of the appropriate one; indeed, this category gets the highest number of correct answers among the four classes (1-shot: .82; 3-shot: .53). 5 General Discussion This paper evaluated recent LLMs on their ability to interpret Noun-Noun compounds and, specifically, to correctly identify the semantic relation underlying existing and novel compounds. For the interpretation of existing compounds (LNCs), we released a dataset that assembles several linguistic and conceptual features associated with each compound, extracted from previous resources or added by the authors (concreteness, semantic type, semantic relation from different classifications) together with a limited set of paraphrases generated from Pepper (2022)’s classification. LLMs accuracy was tested on both the Surprisal scores and metalinguistic knowledge extracted by prompting strategies. In both settings, the models showed different performance levels in the identification of different semantic relations. Some relations like PRODUCTION are easy to recognize; that is, its paraphrase is the most expected (considering Surprisal scores) and more frequently identified in a metalinguistic prompt task. Moreover, compounds characterized by higher concreteness were interpreted more accurately overall, as hypothesized. This effect may be explained by the so-called concreteness effect (Jessen et al., 2000), which suggests that concrete concepts are processed faster and more easily than abstract ones. 
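The concreteness effect discussed above was quantified in Section 3.3 with a binomial generalized linear mixed model relating per-compound accuracy to concreteness ratings. The minimal sketch below fits a plain binomial GLM with statsmodels as an approximation; the paper's random-effects structure is not specified in this excerpt, so it is omitted, and the data frame holds toy values rather than the paper's results.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per compound: 1/0 accuracy of the model's choice plus the human concreteness rating.
df = pd.DataFrame({
    "correct":      [1, 1, 0, 1, 0, 0, 1, 0],              # toy values
    "concreteness": [4.5, 4.3, 3.1, 4.6, 2.9, 3.3, 4.2, 3.0],
})

glm = smf.glm("correct ~ concreteness", data=df, family=sm.families.Binomial()).fit()
print(glm.summary())   # a positive, significant coefficient indicates a concreteness effect
```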
Previous studies reported that LLMs generate compound definitions that highly resemble human-generated paraphrases, reaching an almost perfect performance. However, the analyses presented here reveal that they are not as perfect when asked to identify the correct paraphrase, given alternatives. Our outcomes confirm what was observed by Coil and Shwartz (2023): LLMs’ performance can largely be attributed to parroting definitions or parts of definitions extracted from the training corpora. However, it is unclear to what extent LLMs extract the relational linguistic patterns they learn from corpora and use them to hypothesize about the most likely relationship underpinning a noun compound. In other words, while the models can somehow interpret the semantic relation underlying compounding, there is still a question far from being completely answered: what linguistic properties make compounds more or less difficult to interpret by LLMs? For this reason, we believe that more effort should be made in designing a comprehensive dataset of noun-noun compounds annotated with different factors influencing the plausibility of the noun compounds. The second experiment represents the first attempt to model novel compounds to understand LLMs’ abilities to abstract and transfer knowl11830 edge. According to previous studies, the interpretation of a novel combination relies on previous language experience (Gagné and Shoben, 1997, 2002; Gagné and Spalding, 2006a, among others). That is, people are able to interpret novel compounds by abstracting from their knowledge of past experiences with similar conceptual combinations, which provide an analogical basis for the production and interpretation of novel compounds (Krott, 2009) 7. We manipulated a subset of lexicalized compounds by replacing the modifier or the head word with a hypernym and observed how much harder it is for the LLMs to interpret the generated compounds. As expected, language models are challenged by this task, but we observe that they still look for a suboptimal solution. For instance, they choose the PURPOSE relation, which has a more general paraphrase (indended for) than other relations (such as LOCATION or CONTAINER). We believe that this task could provide a window into a specific aspect of the creative abilities of LLMs. In conclusion, the present study illustrates that there are still questions unanswered regarding how LLMs interpret compounding. Future works will focus on expanding both the LNC and the NNC datasets, including more linguistic features and evaluating the acceptability of selected paraphrases with human judgments. Acknowledgements GR and MB’s work is funded by the European Union (GRANT AGREEMENT: ERC-2021STG-101039777). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them. EC was supported by a GRF grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. PolyU 15612222). We would like to thank the three anonymous reviewers for their constructive comments and suggestions. 7On the possible role of analogy compositionality process, see also the vector composition approach presented in Rambelli et al. (2022). Limitations The work focuses only on English The present dataset and work are focused only on English. 
Expanding the dataset to other languages would be beneficial, but we currently lack the same amount of resources for other languages annotated with the same amount of linguistic information, such as concreteness ratings and semantic relations. However, we chose Pepper (2022) classification precisely because it has been implemented to be suitable across languages, and the Bourquifier templates could be easily converted into other languages. Additionally, the methodology presented to generate novel compounds could be replicated for other languages by relying on language-specific WordNet versions released within the OpenMultiWordNet project (Bond and Paik, 2012; Bond et al., 2016), accessible through the NLTK package. Prompting strategies are conservative For the present study, we evaluated models in a conservative setting by using a low temperature. Further studies could investigate how the same models with higher temperatures answer, that is, how augmenting the linguistic creativity of LLMs affects models’ performance on compound interpretations. An additional limitation concerns the prompt used. We evaluated all LLMs on the question “Which is the most likely description of COMPOUND?” followed by a list of possible paraphrases. However, we did not test whether other questions could improve the models’ accuracy, nor did we explore whether different examples within the prompt could yield varied outcomes. Finally, the examples presented in oneand few-shot settings are the same independently of whether the target question has the same semantic relation as the prompt. This could, of course, affect the final results. For time and computation constraints, we did not test how the models behave with different semantic relations in the prompt. Comparing LLMs’ performance over humans’ judgments A limitation of this dataset comes from the annotations of Tratz (2011). We used an aggregated version of this dataset, so it is impossible to determine the degree of agreement across annotators for each compound. However, literature reports that some expressions show greater entropy of conceptual relations, i.e., greater competition between possible underlying semantic re11831 lations (Benjamin and Schmidtke, 2023). This information could be useful for a more fine-grained evaluation of LLMs’ performance. A related consideration is that, when collecting paraphrases for compounds, there can be various relationships with different degrees of acceptability (Spalding and Gagné, 2014; Benjamin and Schmidtke, 2023), while we simplify by assuming there is only one correct relationship. While it was out of the scope of the present paper, we would further investigate these hypotheses and collect the acceptability of paraphrases for both lexicalized and novel compounds. Ethics Statement Data The datasets used to build our LNC dataset are publicly available online. Concreteness ratings of Muraki et al. (2023) can be downloaded from the authors’ OSF project: https://osf.io/ksypa/. For the Tratz (2011) dataset, we used the data released by Shwartz and Dagan (2018) at https://github.com/vered1986/panic/tree/ master/classification/data. (Nakov, 2008b) dataset is available from the SIGLEX-MWE archive (https://multiword.sourceforge. net/PHITE.php%3Fsitesig%3DFILES%26page% 3DFILES_20_Data_Sets) under Creative Commons Attribution 3.0 Unported License. We will release all additional data and code used in the present experiment. Models For reasons of replicability, we used only open-access models available from huggingface. 
Given a limited GPU, we relied on 7 billion parameter models and used quantization techniques to reduce memory and computational costs (we used the bitsandbytes library). There are well-known ethical concerns about LLMs, which have been shown to produce factually incorrect output, which may generate offensive content if prompted with certain inputs. Instruction-tuned LLMs have been trained to reduce the harm of model responses, as we also observed in our analyses. For instance, when asked to choose the correct paraphrase, the Llama2 answered: “It is important to clarify that child pornography is a criminal and morally reprehensible activity. Therefore, none of the descriptions provided accurately describe child pornography. Instead, it is essential to understand that child pornography involves the production..”. However, some responses may still contain offensive content. Finally, any demonstrations of LLMs’ linguistic generalizations should not imply that they are safe to use or that they can be expected to behave in a way that is aligned with human preferences and values. References Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru, Mérouane Debbah, Étienne Goffinet, Daniel Hesslow, Julien Launay, Quentin Malartic, et al. 2023. The Falcon Series of Open Language Models. arXiv preprint arXiv:2311.16867. Khai Loong Aw, Syrielle Montariol, Badr AlKhamissi, Martin Schrimpf, and Antoine Bosselut. 2023. Instruction-tuning Aligns LLMs to the Human Brain. arXiv preprint arXiv:2312.00575. Shaina Benjamin and Daniel Schmidtke. 2023. Conceptual Combination during Novel and Existing Compound Word Reading in Context: A Self-paced Reading Study. Memory & Cognition, pages 1–28. Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python: Analyzing Text with the Natural Language Toolkit. O’Reilly Media, Inc. Francis Bond and Kyonghee Paik. 2012. A Survey of Wordnets and their Licenses. In Proceedings of the Global WordNet Conference. Francis Bond, Piek Vossen, John Philip McCrae, and Christiane Fellbaum. 2016. Cili: The Collaborative Interlingual Index. In Proceedings of the 8th Global WordNet Conference. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language Models Are Few-shot Learners. Advances in Neural Information Processing Systems, 33:1877–1901. Marc Brysbaert, Amy Beth Warriner, and Victor Kuperman. 2014. Concreteness Ratings for 40 Thousand Generally Known English Word Lemmas. Behavior Research Methods, 46:904–911. Lars Buijtelaar and Sandro Pezzelle. 2023. A Psycholinguistic Analysis of BERT’s Representations of Compounds. In Proceedings of EACL. Jordan Coil and Vered Shwartz. 2023. From Chocolate Bunny to Chocolate Crocodile: Do Language Models Understand Noun Compounds? In Findings of ACL. D Alan Cruse. 1986. Lexical Semantics. Cambridge University Press. 11832 Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of NAACL-HLT. Christiane Fellbaum. 2010. WordNet. In Theory and Applications of Ontology: Computer Applications, pages 231–243. Springer. Christina L Gagné and Edward J Shoben. 1997. Influence of Thematic Relations on the Comprehension of Modifier–noun Combinations. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23(1):71. 
Christina L Gagné and Edward J Shoben. 2002. Priming Relations in Ambiguous Noun-noun Combinations. Memory & Cognition, 30(4):637–646. Christina L Gagné and Thomas L Spalding. 2006a. Conceptual Combination: Implications for the Mental Lexicon. The Representation and Processing of Compound Words, pages 145–168. Christina L Gagné and Thomas L Spalding. 2006b. Using Conceptual Combination Research to Better Understand Novel Compound Words. SKASE Journal of Theoretical Linguistics, 3(2):9–16. Iris Hendrickx, Preslav Nakov, Stan Szpakowicz, Zornitsa Kozareva, Diarmuid O Séaghdha, and Tony Veale. 2013. SemEval-2013 Task 4: Free Paraphrases of Noun Compounds. Proceedings of SemEval. Miloš Jakubíˇcek, Adam Kilgarriff, Vojtˇech Kováˇr, Pavel Rychl`y, and Vít Suchomel. 2013. The TenTen Corpus Family. In Proceedings of the International Corpus Linguistics Conference (CL), pages 125–127. Frank Jessen, Reinhard Heun, Michael Erb, D-O Granath, Uwe Klose, Andreas Papassotiropoulos, and Wolfgang Grodd. 2000. The Concreteness Effect: Evidence for Dual Coding and Context Availability. Brain and Language, 74(1):103–112. Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7B. Carina Kauf and Anna Ivanova. 2023. A Better Way to Do Masked Language Model Scoring. In Proceedings of ACL. Carina Kauf, Anna A Ivanova, Giulia Rambelli, Emmanuele Chersoni, Jingyuan Selena She, Zawad Chowdhury, Evelina Fedorenko, and Alessandro Lenci. 2023. Event Knowledge in Large Language Models: The Gap Between the Impossible and the Unlikely. Cognitive Science, 47(11):e13386. Andrea Krott. 2009. The Role of Analogy for Compound Words. In Analogy in Grammar: Form and Acquisition. Oxford University Press. Judith N Levi. 1978. The Syntax and Semantics of Complex Nominals. Academic Press, New York. Siyan Li, Riley Carlson, and Christopher Potts. 2022. Systematicity in GPT-3’s Interpretation of Novel English Noun Compounds. In Findings of EMNLP. Filip Mileti´c and Sabine Schulte im Walde. 2023. A Systematic Search for Compound Semantics in Pretrained BERT Architectures. In Proceedings of EACL. Kanishka Misra. 2022. minicons: Enabling Flexible Behavioral and Representational Analyses of Transformer Language Models. arXiv preprint arXiv:2203.13112. Emiko J Muraki, Summer Abdalla, Marc Brysbaert, and Penny M Pexman. 2023. Concreteness Ratings for 62,000 English Multiword Expressions. Behavior Research Methods, 55(5):2522–2531. Preslav Nakov. 2008a. Noun Compound Interpretation Using Paraphrasing Verbs: Feasibility Study. In Artificial Intelligence: Methodology, Systems, and Applications: 13th International Conference, AIMSA 2008, Varna, Bulgaria, September 4-6, 2008. Proceedings 13, pages 103–117. Springer. Preslav Nakov. 2008b. Paraphrasing Verbs for Noun Compound Interpretation. In Proceedings of the LREC Workshop on Multiword Expressions. Vivi Nastase and Stan Szpakowicz. 2003. Exploring Noun-modifier Semantic Relations. In Proceedings of IWCS. Diarmuid Ó Séaghdha. 2008. Learning Compound Noun Semantics. Technical report, University of Cambridge. Mark Ormerod, Barry Devereux, and Jesús Martínez del Rincón. 2023. How is a “Kitchen Chair” like a “Farm Horse”? Exploring the Representation of Noun-Noun Compound Semantics in Transformer-based Language Models. 
Computational Linguistics, pages 1–33. Paolo Pedinotti, Giulia Rambelli, Emmanuele Chersoni, Enrico Santus, Alessandro Lenci, and Philippe Blache. 2021. Did the Cat Drink the Coffee? Challenging Transformers with Generalized Event Knowledge. In Proceedings of *SEM. Steve Pepper. 2021. The Bourquifier: An Application for Applying the Hatcher-Bourque Classification (Version 3)[MS Excel]. https://www.academia. edu/83122396. Steve Pepper. 2022. Hatcher-Bourque: Towards a Reusable Classification of Semantic Relations. In Binominal Lexemes in Cross-Linguistic Perspective, pages 303–354. De Gruyter Mouton. 11833 Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models Are Unsupervised Multitask Learners. OpenAI Blog, 1(8):9. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Textto-Text Transformer. Journal of Machine Learning Research, 21:1–67. Giulia Rambelli, Emmanuele Chersoni, Philippe Blache, and Alessandro Lenci. 2022. Compositionality as an Analogical Process: Introducing ANNE. In Proceedings of the COLING Workshop on Cognitive Aspects of the Lexicon. Vered Shwartz and Ido Dagan. 2018. Paraphrase to Explicate: Revealing Implicit Noun-compound Relations. In Proceedings of ACL. Vered Shwartz and Ido Dagan. 2019. Still a Pain in the Neck: Evaluating Text Representations on Lexical Composition. Transactions of the Association for Computational Linguistics, 7:403–419. Vered Shwartz and Chris Waterson. 2018. Olive Oil Is Made of Olives, Baby Oil Is Made for Babies: Interpreting Noun Compounds Using Paraphrases in a Neural Model. In Proceedings of NAACL. Thomas L Spalding and Christina L Gagné. 2014. Relational Diversity Affects Ease of Processing even for Opaque English Compounds. The Mental Lexicon, 9(1):48–66. Vít Suchomel. 2020. Better Web Corpora for Corpus Linguistics and NLP. Ph.D. thesis, Masaryk University. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open Foundation and Fine-tuned Chat Models. arXiv preprint arXiv:2307.09288. Stephen Tratz. 2011. Semantically-enriched Parsing for Natural Language Understanding. Ph.D. thesis, University of Southern California. Henk J Van Jaarsveld and Gilbert E Rattink. 1988. Frequency Effects in the Processing of Lexicalized and Novel Nominal Compounds. Journal of Psycholinguistic Research, 17:447–473. Caterina Villani, Adele Loia, and Marianna M. Bolognesi. 2024. The Semantic Content of Concrete, Abstract, Specific, and Generic Concepts. Language and Cognition, page 1–28. Edward J Wisniewski and Bradley C Love. 1998. Relations versus Properties in Conceptual Combination. Journal of Memory and Language, 38(2):177–202. A Experiment 1-Prompt Example We report below an example of the prompt used as an example (1-shot setting) for the LNC dataset. Which is the most likely description of "olive oil"? 1. an oil that uses olives; 2. an oil that is part of olives; 3. an oil that olives produce; 4. an oil that produces olives; 5. an oil that contains olives; 6. an oil that is about olives; 7. an oil that is composed of olives; 8. an oil that is located in olives; 9. an oil intended for olives We report below an example of the prompt used as an example (1-shot setting) for the Nakov dataset. Which is the most likely description of “pumpkin pie"? 
1. a pie that uses a pumpkin; 2. a pie that is caused by a pumpkin; 3. a pie that is made from a pumpkin; 4. a pie that gives a pumpkin; 5. a pie that comes from a pumpkin; 6. a pie that is made by a pumpkin; 7. a pie that causes a pumpkin; 8. a pie that is a pumpkin; 9. a pie that involves a pumpkin B Experiment 1-Additional Analyses As for the LNC dataset, we plot the distribution of semantic relations with the lowest Surprisal scores inside each class for the Nakov dataset. Figure 3 allows us to grasp common errors across LLMs. 11834 Figure 3: Navok dataset: Distribution of semantic relations with the lowest Surprisal scores for each relation. C Experiment 2 - Additional results NNC sameHead sameMod BERT-large .219 .167 GPT2-xl .100 .094 Llama-2 (Base) .133 .109 Falcon (Base) .217 .125 Mistral (Base) .117 .125 Llama-2 (Instruct) .283 .141 Falcon (Instruct) .150 .141 Mistral (Instruct) .283 .156 Table 8: Surprisal accuracy of instruction-based models on the NNC dataset, distinguishing when we substitute the first word (sameHead) or the second word (sameMod) of a compound with a hypernym. 11835
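Appendix A above lists the nine candidate descriptions presented for each LNC compound; these follow the Bourquifier-style templates of Table 1, with the correct relation's paraphrase mixed among distractors generated from the other relations. The sketch below reconstructs that prompt assembly; the exact template wording, the hard-coded article, and the shuffling of options are assumptions based on the examples shown, not the authors' released code.

```python
import random

# Paraphrase templates per Hatcher-Bourque relation (wording follows Table 1 / Appendix A).
TEMPLATES = {
    "USG-R":      "an {head} that uses {mod}",
    "PARTONOMY":  "an {head} that is part of {mod}",
    "PROD-R":     "an {head} that {mod} produce",
    "PRODUCTION": "an {head} that produces {mod}",
    "CONT-R":     "an {head} that contains {mod}",
    "TOPIC-R":    "an {head} that is about {mod}",
    "COMP-R":     "an {head} that is composed of {mod}",
    "LOCATION":   "an {head} that is located in {mod}",
    "PURPOSE":    "an {head} intended for {mod}",
}

def build_prompt(compound: str, head: str, mod: str, seed: int = 0) -> str:
    # Article choice ("a"/"an") is hard-coded for simplicity; it fits heads like "oil".
    options = [t.format(head=head, mod=mod) for t in TEMPLATES.values()]
    random.Random(seed).shuffle(options)
    numbered = "; ".join(f"{i}. {o}" for i, o in enumerate(options, 1))
    return f'Which is the most likely description of "{compound}"? {numbered}'

print(build_prompt("olive oil", "oil", "olives"))
```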
2024
637
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11836–11850 August 11-16, 2024 ©2024 Association for Computational Linguistics CharacterEval: A Chinese Benchmark for Role-Playing Conversational Agent Evaluation Quan Tu1, Shilong Fan2, Zihang Tian1, Tianhao Shen3, Shuo Shang4, Xin Gao5∗, Rui Yan1∗ 1Gaoling School of Artificial Intelligence, Renmin University of China 2Beijing University of Posts and Telecommunications 3Tianjin University 4University of Electronic Science and Technology of China 5King Abdullah University of Science and Technology 1{quantu,tzh2003,ruiyan}@ruc.edu.cn [email protected] [email protected], [email protected], [email protected] Abstract Recently, the advent of large language models (LLMs) has revolutionized generative agents. Among them, Role-Playing Conversational Agents (RPCAs) attract considerable attention due to their ability to emotionally engage users. However, the absence of a comprehensive benchmark impedes progress in this field. To bridge this gap, we introduce CharacterEval, a Chinese benchmark for comprehensive RPCA assessment, complemented by a tailored high-quality dataset. The dataset comprises 1,785 multi-turn role-playing dialogues, encompassing 11,376 examples and featuring 77 characters derived from Chinese novels and scripts. It was carefully constructed, beginning with initial dialogue extraction via GPT-4, followed by rigorous human-led quality control, and enhanced with in-depth character profiles sourced from Baidu Baike. CharacterEval employs a multifaceted evaluation approach, encompassing thirteen targeted metrics on four dimensions. To facilitate the convenient evaluation for these subjective metrics in CharacterEval, we further developed CharacterRM, a role-playing reward model based on human annotations, which has a higher correlation with human judgment compared to GPT-4. Comprehensive experiments on CharacterEval demonstrate that Chinese LLMs exhibit more promising capabilities than GPT-4 in Chinese role-playing conversation. Source code, data source, and reward model will be publicly accessible at https://github.com/ morecry/CharacterEval. 1 Introduction The development of large language models (LLMs) has marked the beginning of a new era in conversational AI (Zhao et al., 2023; Chang et al., 2023), and opened up a wide range of application possibilities, particularly in agent-based interactions (Park ∗*Corresponding authors:Xin Gao and Rui Yan ([email protected], ruiyan @ruc.edu.cn) et al., 2023; Wang et al., 2023a; Gao et al., 2023). The automated agents, equipped with the emerging capabilities of LLMs such as planning (Silver et al., 2022; Ge et al., 2023; Song et al., 2023), reasoning (Wei et al., 2022; Wang et al., 2022), and in-context learning (Dong et al., 2022; Brown et al., 2020), can perform complex tasks for humans without any supervision. Among the diverse agents, the Role-Playing Conversational Agent (RPCA), designed to offer emotional value instead of productivity, attracts an amount of interest. RPCA represents a unique category within the realm of conversational agents, distinguished by their capability for immersive interaction (Li et al., 2023). 
Different from traditional dialogue systems, which typically focus on chit-chat (Yan et al., 2022), knowledge-based (Chen et al., 2020), personalized (Zheng et al., 2019) and empathetic dialogue (Ma et al., 2020), RPCAs engage users in dynamic scenarios, where LLM agents are assumed as specific characters or roles, often derived from existing composition such as novels, films, cartoons, and games. The development of connections between fictional characters and humans has the potential to not only deepen the impact of cultural works but also improve human engagement. Furthermore, RPCAs hold significant application value in their ability to offer emotional value to users, positioning fictional characters as virtual friends. The multifaceted nature of RPCAs has sparked considerable attention, leading to a surge in both research (Shao et al., 2023; Wang et al., 2023d; Tu et al., 2023; Zhou et al., 2023) and application development (e.g., Character AI1, Tongyi Xingchen2 and Glow3). However, these implementations of RPCAs vary significantly in both approach and objectives, presenting a challenge in systematically assessing and comparing their capabilities. There1https://beta.character.ai 2https://xingchen.aliyun.com/xingchen 3https://www.glowapp.tech/ 1 11836 秀才,昨天晚上吃了没有? Xiucái, did you have dinner last night? (无精打采)吃不吃也就是那么回事儿。 (Lethargically) Eating or not, it's all the same anyway. 昨天的帐都清干净没有? Did you settle all the accounts from yesterday? 应该是吧。(回头拿账本看)啊,清干净了。 I suppose so. (turns back to check the ledger) Ah, it's all cleared up. 啥叫应该是?(拿过账本看,拍在柜台上,指着)这 叫清干净了?What do you mean 'suppose so'?(Grabs the ledger, slaps it on the counter, and points at it) Is this what you call 'cleaned up'? (看账本)啊? (Looking at the ledger) Huh? 场景:佟湘玉在柜台处质问吕秀才昨天的账目是否清理干净, 随后两人就工作中的问题发生了争执。 Scene: Tong Xiangyu, at the counter, questions Lv Xiucai whether he has cleared yesterday's accounts properly. Subsequently, the two of them get into an argument over issues related to their work. 姓名: 佟湘玉 Name: Xiangyu Tong 性别: 女 Gender: Female 年龄:二十七 Age: 27 身份:同福客栈掌柜 Identity: Manager of the Tongfu Inn 志向: 将同福客栈做成连锁企业... Aspiration: hope to develop Tongfu Inn into a chain enterprise. 人物经历: 佟湘玉是龙门镖局镖头的 女儿,原本下嫁衡山派掌门,却在 途中成了寡妇... Background: Tong Xiangyu is the daughter of the chief escort of the Longmen Escort Agency. She was originally betrothed to the head of the Hengshan Sect, but became a widow on the way to her wedding. 名言: "我不是黄蓉,我不会武功,我 没有靖哥哥和完美的爱情…" Quote: "I am not Huang Rong; I don't know martial arts; I don't have a 'Brother Jing' nor a perfect love...". ...... 姓名: 吕秀才 Name: Xiucai Lv 性别: 男 Gender: Male 年龄:二十四 Age: 24 身份:同福客栈帐房 Identity: Accountant of the Tongfu Inn 性格: 博学多才,有些自负... Personality: knowledgeable and talented, but a bit arrogant 爱好: 读书、思考哲学 Hobby: Reading, thinking philosophy 人物经历: 前朝知府之孙,自幼聪明, 屡试不中科举。24岁时穷困潦倒,卖 祖产尚咨客栈,后在同福客栈打工... Background: He is the grandson of a former county magistrate, was intelligent from a young age but repeatedly failed the imperial examinations. At the age of 24, facing poverty, he sold his family estate. Later, he found work at the Tongfu Inn... 名言: "子曾经曰过…" Quote: "Confucius once said..." ...... ...... Figure 1: An example of the CharacterEval, including the dialogue, scene, and character’s profile. fore, we propose the CharacterEval, a Chinese role-playing conversation benchmark for advancing RPCA development. To develop a benchmark, the primary problem is the construction of a dataset. 
While there are existing datasets (Shao et al., 2023; Wang et al., 2023d; Tu et al., 2023; Zhou et al., 2023; Chen et al., 2023; Shen et al., 2023), their quality is concerning, which are either generated by LLMs or suffering from significant noise due to the extractive methods. These limitations render the evaluation results unreliable for the RPCA’s actual capabilities. To address it, we constructed a Chinese role-playing conversation dataset comprising 1,785 multi-turn roleplaying dialogues, encompassing 11,376 examples and 77 leading characters, drawn from diverse Chinese novels and scripts. Our process began with the collection of well-known sources across various genres. After that, GPT-4 was employed to extract dialogue scenes, utterances, and behaviors of the leading roles of these sources. Following basic preprocessing and the removal of dialogues with fewer turns, we invited annotators to assess the quality of the dialogues. Their task was to identify and retain high-quality dialogues while discarding those of lower quality. Additionally, we crawled detailed character profiles from Baidu Baike4, composing a comprehensive dataset for RPCA evaluation. The example from the dataset is as Figure 1 shows. Otherwise, role-playing conversation is a compli4https://baike.baidu.com/ cated task that requires not only mimicking a character’s behavior and utterance but also maintaining the character’s knowledge, as well as excellent multi-turn ability. Considering this, we proposed a multifaceted evaluation approach including thirteen specific metrics on four dimensions for a fair and thorough assessment of RPCAs, Our evaluation approach considered conversational ability, character consistency, and role-playing attractiveness, and utilized a personality back-testing method to evaluate the personality accuracy of an RPCA. To assess conversational ability, we measured conversational fluency, coherence, and consistency at both the sentence and conversation levels (Chen et al., 2017). Character consistency is the most crucial in role-playing conversation. Hence, we evaluated knowledge and persona consistency to measure how vividly an RPCA can simulate a character. This involves assessing knowledge exposure, accuracy, and hallucination for knowledge consistency, and evaluating behavior and utterance consistency for persona consistency. Considering that RPCAs are entertainment-oriented, role-playing attractiveness is also an important element. We assessed this through human-likeness, communication skills, expression diversity, and empathy. Finally, we introduced personality back-testing. With the collected Myers-Briggs Type Indicator(MBTI) (Myers, 1962) personality types as a reference, we let RPCAs do the MBTI assessment and calculate the MBTI accuracy (personality back-test) as implemented in Wang et al. (2023b). 2 11837 For convenient re-implementation, we invited 12 annotators to score responses generated by different models for the subjective metrics in our evaluation system. Based on the human judgments, we developed a role-playing reward model—CharacterRM, whose correlation with humans could surpass stateof-the-art LLM GPT-4. On CharacterEval, We conducted comprehensive evaluations for existing LLMs, encompassing both open- and closedsource models. Experimental results show the broad prospect of existing Chinese LLM while GPT-series models do not take the predominance in Chinese role-playing conversation. 
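The personality back-test mentioned above reduces to a simple accuracy computation once each RPCA has answered the MBTI questionnaire for its character. A minimal sketch follows; whether CharacterEval counts exact four-letter matches or per-dimension matches is an assumption here, so both variants are shown, and the character names and labels are placeholders.

```python
# A hedged sketch of the personality back-test: compare the MBTI type an RPCA
# produces for each character with the reference type collected from the archive.
# Whether accuracy counts exact four-letter matches or per-dimension matches is
# an assumption; names and labels below are placeholders, not real annotations.

def mbti_type_accuracy(pred: dict[str, str], gold: dict[str, str]) -> float:
    """Fraction of characters whose predicted four-letter type matches exactly."""
    return sum(pred.get(name) == label for name, label in gold.items()) / len(gold)

def mbti_dimension_accuracy(pred: dict[str, str], gold: dict[str, str]) -> float:
    """Fraction of matching letters over the four MBTI dimensions (E/I, S/N, T/F, J/P)."""
    hits = total = 0
    for name, label in gold.items():
        guess = pred.get(name, "????")
        hits += sum(a == b for a, b in zip(guess, label))
        total += len(label)
    return hits / total

gold = {"character_a": "ESFJ", "character_b": "INTP"}   # placeholder labels
pred = {"character_a": "ENFJ", "character_b": "INTP"}
print(mbti_type_accuracy(pred, gold), mbti_dimension_accuracy(pred, gold))
```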
In summary, our contributions are as follows: • We create a large-scale, high-quality dataset for RPCA evaluation, consisting of 1,785 multi-turn role-playing dialogues, and 11,376 examples, featuring 77 leading characters from diverse Chinese novels and scripts. • We propose CharacterEval, a new benchmark for RPCAs, which contains a comprehensive set of evaluation principles, encompassing thirteen specific metrics on four dimensions. • We develop CharacterRM, a role-playing reward model for evaluating RPCAs in several subjective metrics, achieving better performance than GPT-4 in correlation with humans. • We conducted thorough evaluations of existing LLMs on CharacterEval, including openand closed-source, and derived valuable findings from the results. 2 Related Work 2.1 Knowledge-based Dialogue Knowledge-based dialogue systems integrate external knowledge resources, such as knowledge graphs or unstructured documents, into dialogue systems (Zhao et al., 2020; Li et al., 2020). Recent efforts have focused on improving the understanding and utilization of knowledge within these dialogues. For instance, Xue et al. (2023) introduced K-DIAL, which incorporates additional Feed-Forward Network (FFN) blocks into Transformers (Vaswani et al., 2017) to enhance factual knowledge expression and consistency in dialogue. Similarly, Chen et al. (2020) proposed a knowledge distillation-based training strategy to optimize the knowledge selection decoder. While these methods significantly advance knowledge selection and utilization, they primarily address general knowledge. Role-playing dialogues, however, demand a more intricate approach, encompassing personalized knowledge, style, behavior, etc. 2.2 Personalized Dialogue Personalized dialogue systems, which generate responses based on specific personas, represent another relevant area of research (Den Hengst et al., 2019; Zhong et al., 2022). Zheng et al. (2019) pioneered this field by creating the first large-scale personalized dialogue dataset, complete with persona labels. This dataset has spurred further advancements in the field. Additionally, Zheng et al. (2020) developed a pre-trained personalized dialogue model, which could generate coherent responses using persona-sparse dialogue. Although these studies begin to explore persona in dialogue, the personal profiles they utilize are typically limited to short-term, person-related information like name, age, and location, which are considered personalized knowledge in essence. 2.3 Character-based Dialogue The most closely related research to this work involves recent developments in character-based dialogue systems, which aim to mimic the behavior and utterance style of specific characters (Shao et al., 2023; Wang et al., 2023d; Zhou et al., 2023). Shao et al. (2023) gathered character profiles from Wikipedia and generated character-based dialogues by prompting ChatGPT (OpenAI, 2022). Wang et al. (2023d) used GPT-4 to create character descriptions and developed detailed instructions for prompting ChatGPT to produce character-based dialogues. However, these approaches primarily rely on ChatGPT’s generative capabilities and may not accurately reflect the true personality of the characters. Li et al. (2023) addresses this by extracting role-playing dialogues from novels, scripts, and games, which better preserve the characters’ original traits. Despite this, their approach suffers from a lack of human-in-the-loop refinement and a scarcity of multi-turn dialogues in the dataset. Otherwise, Chen et al. 
(2023) develop a role-playing dataset focused on Harry Potter. However, the scarcity of diversity makes it hard to comprehensively evaluate the generalized RPCA. 3 11838 3 Problem Formulation The Role-Playing Conversational Agent (RPCA) is designed to engage in conversations with users by emulating specific characters. These characters are defined by their knowledge, behavior, and style of response. To achieve this, the RPCA utilizes a character profile, denoted as P, and the current dialogue context represented as Cn = [q1, r1, q2, r2, . . . , qn]. Here, qi and ri correspond to the i-th question and response in the dialogue, respectively. The goal for the RPCA is to generate a response rn that is consistent with the character’s profile, which can be represented as: rn = RPCA(Cn, P), (1) where rn is composed of two elements: behavior and utterance. The behavior aspect is enclosed in brackets and provides a detailed description of the character’s actions, expressions, and tone. This separation allows for a fine-grained evaluation of the RPCA’s ability to not only generate appropriate utterances but also unique behavioral traits. 4 Data Collection In this section, we detail the methodology for constructing the character-based, multi-turn dialogue dataset with high quality. Prior to initiating data collection, adherence to the following four principles is important: • Fidelity to Source Material: It is crucial that all dialogues are in line with the original works, ensuring the character’s authenticity. • Diversity in Distribution: The dataset must encompass a wide range of scenarios to thoroughly assess the role-playing capabilities. • Multi-Turn Feature: The dataset should predominantly consist of multi-turn dialogues, rather than being limited to single-turn ones. • Human-in-the-Loop: Active human involvement is necessary to guarantee the quality, as reliance solely on LLMs is insufficient. The pipeline of data collection includes four steps: plot division, dialogue extraction, quality filtering, and human annotation. Plot Division: The plots in narrative text such as novels and scripts are extremely complex, making it challenging to divide the text into meaningful chunks. Using the sentence tokenization tool, without considering semantics, will result in breaking a conversation mid-way. To address this, we first employ GPT-4 to identify plot twists—sentences that signify the end of a continuous plot. These plot twists are then used to segment the text into chunks, each containing a complete plot. Dialogue Extraction: Once we have the plot chunks, GPT-4 is utilized again, this time to extract role-playing dialogues. We design prompts for GPT-4 to perform information extraction, preserving characters’ utterances, behaviors, and scenes from the plots. Quality Filtering: Dialogues in novels and scripts often involve more than two characters. Simply retaining dialogues between two characters and omitting others will distort the dialogue structure. Therefore, we opt to preserve dialogues following an ABAB pattern (dialogue between two characters) until a third character joins. This approach, while straightforward, helps maintain the original dialogue structure more effectively. Besides, we only keep the dialogue exceeding five turns (six sentences) reserved, filtering the short dialogues. Human Annotation: Although LLMs have the capability to perform basic information extraction tasks, the randomness still affects data quality. 
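The quality-filtering rule described above can be made concrete with a short sketch: keep the alternating two-speaker (ABAB) prefix of each extracted dialogue, cut it off when a third character joins, and drop dialogues that end up too short. The data structure, the helper names, and the handling of a repeated speaker are illustrative assumptions, not the authors' code.

```python
# A minimal sketch of the rule described under Quality Filtering: keep the
# alternating two-speaker (ABAB) prefix of an extracted dialogue, cut it off
# when a third character joins, and drop dialogues that end up too short.

def abab_prefix(utterances: list[dict]) -> list[dict]:
    """Truncate a dialogue to the alternating prefix of its first two speakers."""
    kept, speakers = [], []
    for utt in utterances:
        speaker = utt["speaker"]
        if speaker not in speakers:
            speakers.append(speaker)
        if len(speakers) > 2:                        # a third character joins
            break
        if kept and kept[-1]["speaker"] == speaker:  # assumed: broken alternation
            break
        kept.append(utt)
    return kept

def keep_dialogue(utterances: list[dict], min_sentences: int = 6) -> bool:
    """Retain only dialogues whose filtered prefix exceeds five turns (six sentences)."""
    return len(abab_prefix(utterances)) >= min_sentences

example = [{"speaker": "A", "text": "..."}, {"speaker": "B", "text": "..."},
           {"speaker": "A", "text": "..."}]
print(keep_dialogue(example))  # False: too short after filtering
```

Even after this kind of structural filtering, the extraction noise noted above remains.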
To mitigate this, we invite human annotators to assess the coherence and quality of dialogues and eliminate any problematic instances. 5 Evaluation Metric Different from traditional chatbots, we contend that RPCAs require a more comprehensive evaluation framework to assess their role-playing capabilities. As shown in Figure 2, we have devised a four-dimensional evaluation system, which includes conversational ability, character consistency, role-playing attractiveness, and personality backtesting, including thirteen metrics. 5.1 Conversational Ability Basic conversational ability is the first consideration in role-playing conversation. Inspired by previous neural metrics, which evaluate the responses based on well-trained neural models, we introduce a similar approach to assess the fundamental conversational abilities of RPCAs. We focus on three key objectives for generated responses: fluency, coherency, and consistency (Zhang et al., 2021; Mesgar et al., 2020). 4 11839 Conversational Ability Character Consistency Role-playing Attractiveness Personality Back-Testing Fluency Coherency Consistency Know-Exposure Know-Accuracy Know-Hallucination Persona-Behavior Persona-Utterance Human-likeness Communication Skills Expression Diversity Empathy MBTI Accuracy Figure 2: Evaluation system of CharacterEval. “Know-” is the abbreviation of “Knowledge”. • Fluency (Flu.) measures the grammatical correctness of a response, indicating whether a response is readable and free from obvious grammatical errors. • Coherency (Coh.) evaluates the topic relevance between the response and the context. Generally, when the user submits a query on a specific topic, an RPCA should respond following the topic instead of providing an irrelevant response. • Consistency (Cons.) assesses the stability of RPCAs during a conversation. Responses of an RPCA should not contradict their own responses in previous turns. 5.2 Character Consistency Character consistency plays a crucial role in evaluating the role-playing ability of the RPCAs. It will bring the most intuitive experience to users when the character consistency of RPCAs varies. Specifically, we evaluate character consistency from two levels, knowledge consistency and persona consistency. The former evaluates if an RPCA could respond based on the character’s knowledge, which includes knowledge exposure, accuracy, and hallucination metrics. The latter assesses if a RPCA’s reflection is in line with the character’s personality, including the behavior and utterance metrics. • Knowledge-Exposure (KE). For assessing the informativeness of a response, it’s crucial for an RPCA to reflect knowledge in its responses, as this supports the subsequent evaluation of its knowledge expression capabilities. • Knowledge-Accuracy (KA). Once the RPCA demonstrates the ability to generate responses with specific knowledge, it’s important to assess whether this knowledge aligns with the character. The goal is for the RPCA to accurately generate responses based on the knowledge from the character’s profile. • Knowledge-Hallucination (KH). Drawing inspiration from recent studies on hallucinations in LLMs (Rawte et al., 2023; Zhang et al., 2023), we include knowledge hallucination in the evaluation of role-playing dialogue. To enhance the user experience, the RPCA should maintain consistency with the character’s identity and avoid responding to queries involving unknown knowledge. • Persona-Behavior (PB). 
A character’s behaviors, typically described within brackets, improve the embodied feeling of users by portraying fine-grained actions, expressions, and tones. Consistent behavior is indicative of an effective RPCA. • Persona-Utterance (PU). Alongside behavior, a character’s speaking style is also important. Each character has unique expression habits. Therefore, the RPCA’s utterances should align with these habits to adeptly mimic the character. 5.3 Role-playing Attractiveness As a conversational agent in the entertainment field, it is essential for an RPCA to be sensitive to the user’s emotions. Therefore, we introduce the character attractiveness dimension to assess the attraction of an RPCA during conversation. From the user’s perspective, we consider four key dimensions: human-likeness, communication skills, expression diversity, and empathy. • Human-Likeness (HL). In the era of publicly available LLMs, these models often suffer from a perceived ’machine-like’ quality in their responses. Most LLMs, designed primarily for information seeking, tend to provide robotic and emotionless answers. However, in role-playing conversations, it is crucial for the 5 11840 RPCA to exhibit a more human-like persona to minimize user resistance. • Communication Skills (CS). In human society, the ability to skilfully communicate, often referred to as Emotional Quotient (EQ), significantly influences an individual’s likability. Accordingly, users are more likely to engage with an RPCA that demonstrates higher EQ, mirroring the popularity of individuals with strong communication skills in daily life. • Expression Diversity (ED). The dialogues within CharacterEval are sourced from existing novels, scripts, and various literary works, featuring characters with rich and diverse expressive abilities in both their behaviors and utterances. Therefore, an RPCA should strive to express this diversity in conversation to provide users with a more immersive experience. • Empathy (Emp.). While the primary role of an RPCA is not that of an emotional counselor, its ability to express empathy can significantly impact its favorability of users. Evaluating empathy in role-playing conversations advances the RPCA to come across as a more warm and friendly conversational partner. 5.4 Personality Back-Testing Following the recent works on LLM personality testing (Pan and Zeng, 2023; Huang et al., 2023; Wang et al., 2023c), we conducted personality back-testing to assess the role-playing capability of the RPCA within the context of personality dimensions. In this study, we employed the MyersBriggs Type Indicator (MBTI) (Myers, 1962), a well-established personality classification method. To obtain the necessary labels, we collected MBTIs of characters featured in CharacterEval from an archive website5, which hosts a substantial character’s MBTIs. Using these MBTIs as ground truth, we evaluated the accuracy of the MBTI assessment 6 of RPCAs. 6 Experiment 6.1 Dataset Statistic We split our CharacterEval into the training set and test set based on examples instead of conversations, where an example is composed of a tu5https://www.personality-database.com/ 6https://www.16personalities.com/ 5 10 15 20 25 30 35 40 45 Turns 0.0 0.2 0.4 0.6 0.8 1.0 Accumulated Ratio Ratio = 0.8 Figure 3: Turns distribution of examples in test set. Training Test # Characters 77 # Conversations 1,785 Avg. Turns / Conv. 9.28 Avg. Tokens / Conv. 369.69 # Examples 6,811 4,564 Table 1: The statistic of CharacterEval dataset. 
ple (Character, Context, Response). The statistic of the dataset is as Table 1 shows. Specifically, we display the turns distribution of test set in Figure 3 to explore the dataset feature. It is notably that over 20% examples have more than 10 turn in dialogue. These statistic demonstrates the multi-turn property of CharacterEval, satisfying the evaluation of RPCA’s performance at longer turns. 6.2 Experimental Setting CharacterEval employs a comprehensive set of fine-grained subjective metrics (twelve metrics in conversational ability, character consistency, and role-playing attractiveness dimensions) to assess the multi-dimensional capabilities of an RPCA. However, it is important to note that a single evaluated example may not adequately represent all facets of RPCAs. Therefore, we introduce annotators to sparsely evaluate the performance matrix. This approach entails that each example in CharacterEval is assessed using a subset of these subjective metrics, leading to more differentiated evaluation results. Then, based on these selected metrics for each example, we recruit 12 annotators to score responses generated by different models on a fivepoint scale. The human judgments are used to develop a role-playing reward model (CharacterRM), with Baichuan2-13B-base as the backbone. Experi6 11841 Models Specialized Model Size Open Source Primarily Language Creator ChatGLM3 ✗ 6B ✓ zh Tsinghua & Zhipu AI XVERSE ✗ 7B, 13B ✓ zh XVERSE Qwen ✗ 7B, 14B ✓ zh Alibaba Inc. InternLM ✗ 7B, 20B ✓ zh SenseTime & Shanghai AI lab Baichuan2 ✗ 7B, 13B ✓ zh Baichuan Inc. CharacterGLM ✓ undisclosed ✗ zh Tsinghua & Lingxin Xingchen ✓ undisclosed ✗ zh Alibaba Inc. MiniMax ✓ undisclosed ✗ zh MiniMax Inc. BC-NPC-Turbo ✓ undisclosed ✗ zh Baichuan Inc. GPT-3.5 ✗ undisclosed ✗ en OpenAI GPT-4 ✗ undisclosed ✗ en OpenAI Table 2: LLMs evaluated in our experiments. Metric Char-RM 1-shot 2-shot 3-shot Flu. 0.613 0.475 0.571 0.560 Coh. 0.607 0.493 0.577 0.604 Cons. 0.573 0.563 0.484 0.483 KE 0.509 0.241 0.332 0.407 KA 0.336 0.239 0.182 0.187 KH 0.411 0.377 0.380 0.332 PB 0.879 0.253 0.305 0.244 PU 0.472 0.394 0.432 0.563 HL 0.497 0.271 0.308 0.318 CS 0.686 0.489 0.350 0.371 ED 0.765 0.209 0.298 0.301 Emp. 0.385 0.407 0.403 0.371 Overall 0.631 0.362 0.385 0.375 Table 3: Pearson correlation coefficient (Pearson, 1901) with human judgments of GPT-4 and our CharacterRM (abbr. Char-RM). We report the performance of GPT4 under different settings: 1-shot, 2-shot, and 3-shot. Bold indicates the highest score. mental result shows that our CharacterRM exhibits a higher correlation with humans than GPT-4, as Table 3 shows. Although the performance of GPT4 will improve with the number of demonstration increase, the cost of it makes evaluation hard to implement. Consequently, we utilize our CharacterRM for subsequent evaluation of subjective metrics. In the personality back-test, we collect 54 ground MBTIs of characters in our dataset. The RPCAs should answer the MBTI questionnaires and then the accuracy will be computed. 6.3 Evaluated LLMs In this work, we assess the performance of 10 baselines with different parameters, encompassing both open-source and closed-source models. For the open-source models, we evaluate their chatversion instead of base-version. For the closedsource models, we utilize their official APIs to conduct performance evaluations. Specifically, we employ the gpt-4 version as the GPT-4, and gpt-3.5-turbo-1106 version as GPT-3.5 in our experiments. 
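The agreement numbers in Table 3 are plain Pearson correlation coefficients between automatic scores (CharacterRM or few-shot GPT-4) and human ratings, computed per metric. A minimal sketch is below; how scores are grouped per metric and whether the Overall row pools scores (as done here) or averages the per-metric coefficients are assumptions for illustration.

```python
# A minimal sketch of the agreement measure reported in Table 3: Pearson
# correlation between automatic scores and human ratings, computed per metric.
from scipy.stats import pearsonr

def metric_correlations(auto_scores: dict[str, list[float]],
                        human_scores: dict[str, list[float]]) -> dict[str, float]:
    """Pearson r per metric, plus an Overall value over the pooled scores."""
    result, pooled_auto, pooled_human = {}, [], []
    for metric, human in human_scores.items():
        auto = auto_scores[metric]
        result[metric] = pearsonr(auto, human)[0]
        pooled_auto += auto
        pooled_human += human
    result["Overall"] = pearsonr(pooled_auto, pooled_human)[0]
    return result

# Toy numbers, only to show the call signature.
human = {"Flu.": [5, 4, 3, 4], "Coh.": [4, 4, 2, 5]}
auto = {"Flu.": [4.6, 4.1, 2.8, 4.2], "Coh.": [4.2, 3.9, 2.5, 4.8]}
print(metric_correlations(auto, human))
```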
Among the evaluated models, CharacterGLM, MiniMax, Xingchen, and BC-NPC-Turbo are tailored for role-playing conversations, while the remaining models are designed for general chat applications. Notably, GPT-4 and GPT-3.5 stand out as the only two models trained primarily on English corpora. We consistently employ the same prompt for each model, with minor adjustments made only for closed-source models. Significantly, GPT-3.5 demonstrates the weakest performance in CharacterEval. Its tendency to generate overly safe responses, such as "I am just an AI assistant and cannot perform role-playing," highlights its limitations for role-playing applications. This issue stems from the over-alignment by RLHF (Christiano et al., 2017), making it unsuitable for dynamic role-playing interactions.

6.4 Overall Performance

The results across four dimensions are clearly illustrated in Figure 4. BC-NPC-Turbo outperforms in three of these dimensions, whereas GPT-4 is distinguished in personality back-testing. Models specifically designed for role-playing dialogues, such as Xingchen, MiniMax, and BC-NPC-Turbo, demonstrate superior outcomes due to their targeted training. In the realm of open-source models, InternLM-20B and Baichuan2-13B show impressive potential. Despite lacking specialized customization for role-playing conversations, these models present commendable results in most evaluation dimensions. In contrast, GPT-4's effectiveness diminishes in Chinese role-playing conversations. Its primary training on English corpora limits its adaptability in complex role-playing scenarios and its deep understanding of Chinese culture.

Figure 4: The comprehensive comparison of LLMs on the four dimensions (Conversational Ability, Character Consistency, Role-playing Attractiveness, Personality Back-Testing). Since CharacterGLM cannot complete personality back-testing, we mark the result using "X" instead.

6.5 Detailed Result

The detailed performance across thirteen metrics is presented in Table 4. Regarding conversational capabilities, BC-NPC-Turbo exhibits superior performance, evidenced by its excellent conversational consistency, as well as comparative fluency and coherency. In contrasting open-source and closed-source models, it is difficult to declare a definitive winner in this dimension. However, when we compare models of the same series, such as Qwen-7B versus Qwen-14B and XVERSE-7B versus XVERSE-13B, it becomes obvious that an increase in the number of parameters can enhance conversational abilities. In the category of models with fewer than 10 billion parameters, Baichuan2-7B and InternLM-7B demonstrate comparable competencies. Among the role-playing specialized models, MiniMax stands out for its performance and only falls behind BC-NPC-Turbo. In contrast, GPT-4 and GPT-3.5 do not exhibit a marked superiority in this dimension. Furthermore, it is posited that complex role-playing conversations and scenarios in Chinese might challenge the GPT series, potentially leading to their diminished performance. In terms of character consistency, the most crucial aspect for role-playing conversations, BC-NPC-Turbo still leads significantly.
It exhibits the highest accuracy in knowledge accuracy, minimal knowledge hallucinations, and consistent utterances and behaviors when acting as a character. Otherwise, MiniMax also shows notable performance, compared with the open-sourced models and remaining closed models. Once again, the GPT series falls short compared to Chinese LLMs in this dimension. Nonetheless, it is important to acknowledge that GPT-4 excels in knowledge exposure, underlining its strengths in knowledge-intensive tasks. Despite this, in the realm of knowledge accuracy, particularly concerning the understanding of Chinese classical characters, GPT-4 does not exhibit distinct superiority. Furthermore, BC-NPC-Turbo stands out in roleplaying attractiveness, as demonstrated by its outstanding human-likeness and diverse expressions. As a state-of-the-art LLM, GPT-4 exhibits remarkable performance in communication skills, significantly surpassing other models. This reflects its powerful generalization ability, even in the Chinese role-playing scenario. Interestingly, InternLM-20B emerges as the leader in empathy, highlighting its unique potential to provide emotional support. Similar conclusions are also observed in the personality back-test, where BC-NPC-Turbo, MiniMax, and GPT-4 demonstrate comparable levels of accuracy. In this particular dimension, the models are required to respond to multi-choice questions that are designed to reveal the underlying values of the roles they are portraying. Since this task does not demand extensive expression in the character’s text style, GPT-4 exhibits the best performance. This result highlights their ability to accurately embody a character’s fundamental personality traits and values. 7 Conclusion In this work, we aim to build a comprehensive benchmark to evaluate recent Role-Playing conversational Agents (RPCAs). We introduce GPT-4 to extract the dialogues from the existing novels and scripts, proceeding with strict human filtering. After a series of processing, we release a high-quality multi-turn role-playing dataset. Besides, we construct a comprehensive evaluation system to assess the multi-dimensional ability of RPCAs. We also collect human annotation to train a character-based 8 11843 Character Consistency Personality KE KA KH PB PU Avg. Back-Testing ChatGLM3-6B 2.016 2.792 2.704 2.455 2.812 2.556 0.532 XVERSE-7B 1.834 2.774 2.763 2.564 2.887 2.564 0.620 Baichuan2-7B 1.813 2.849 2.929 2.830 3.081 2.700 0.625 Qwen-7B 1.956 2.728 2.633 2.605 2.780 2.540 0.606 InternLM-7B 1.782 2.800 2.781 2.719 3.016 2.620 0.630 XVERSE-13B 1.977 2.828 2.862 2.579 2.915 2.632 0.630 Baichuan2-13B 1.802 2.869 2.946 2.808 3.081 2.701 0.639 Qwen-14B 1.988 2.800 2.811 2.744 2.900 2.649 0.620 InternLM-20B 1.945 2.916 2.920 2.753 3.041 2.715 0.648 CharacterGLM 1.640 2.819 2.738 2.301 2.969 2.493 Xingchen 1.636 2.768 2.743 2.772 3.055 2.595 0.630 MiniMax 1.835 2.910 2.944 2.774 3.125 2.718 0.685 BC-NPC-Turbo 1.802 2.964 2.993 2.910 3.151 2.764 0.681 GPT-3.5 1.716 2.339 2.212 1.921 2.316 2.101 0.653 GPT-4 2.250 2.855 2.785 2.721 2.873 2.697 0.694 Conversational Ability Role-playing Attractiveness Flu. Coh. Cons. Avg. HL CS ED Emp. Avg. 
ChatGLM3-6B 3.269 3.647 3.283 3.399 3.064 2.932 1.969 2.993 2.739 XVERSE-7B 3.393 3.752 3.518 3.554 3.395 2.743 2.013 2.936 2.772 Baichuan2-7B 3.551 3.894 3.827 3.757 3.670 2.728 2.115 2.984 2.874 Qwen-7B 3.187 3.564 3.229 3.327 3.036 2.791 2.052 2.838 2.679 InternLM-7B 3.527 3.823 3.744 3.698 3.546 2.622 2.070 2.897 2.784 XVERSE-13B 3.444 3.811 3.559 3.605 3.319 2.939 2.045 3.018 2.830 Baichuan2-13B 3.596 3.924 3.864 3.795 3.700 2.703 2.136 3.021 2.890 Qwen-14B 3.351 3.765 3.510 3.542 3.354 2.871 2.237 2.970 2.858 InternLM-20B 3.576 3.943 3.717 3.745 3.582 2.885 2.132 3.047 2.911 CharacterGLM 3.414 3.717 3.737 3.623 3.738 2.265 1.966 2.812 2.695 Xingchen 3.378 3.807 3.754 3.646 3.757 2.272 2.100 2.799 2.732 MiniMax 3.609 3.932 3.811 3.784 3.768 2.672 2.150 3.017 2.902 BC-NPC-Turbo 3.578 3.898 3.916 3.798 3.836 2.643 2.336 2.971 2.946 GPT-3.5 2.629 2.917 2.700 2.749 2.565 2.422 1.660 2.526 2.293 GPT-4 3.332 3.669 3.343 3.448 3.143 3.184 2.153 3.010 2.873 Table 4: Detailed evaluation results on CharacterEval. The best performances are highlighted in bold, while suboptimal ones are marked with underline. It is notable that the score for CharacterGLM in personality back-testing is unavailable, hence it is replaced by a “-”. reward model to measure the subjective metrics, for later convenient re-implementation. Extensive experimental results indicate that Chinese LLMs entail more promising capabilities than GPT-4 in Chinese role-playing conversations. Limitations The CharacterEval benchmark for Role-Playing Conversational Agents (RPCAs) in Chinese presents several limitations: (1) Dataset Diversity: The dataset primarily focuses on characters from specific Chinese novels and scripts, which may not fully represent the diversity of role-playing scenarios; (2) Subjectivity in Evaluation: Despite using a multifaceted approach, the evaluation’s reliance on subjective human judgment can lead to inconsistent outcomes; (3) Cultural and Linguistic Scope: The benchmark’s focus on Chinese dialogues limits its applicability to other linguistic and cultural contexts. These limitations highlight the need for ongoing updates to the dataset and evaluation methods, as well as efforts to broaden the benchmark’s cultural and linguistic relevance. Ethical Consideration In developing CharacterEval, a benchmark for assessing Chinese Role-Playing Conversational Agents (RPCAs), we have carefully considered several ethical dimensions to ensure our research adheres to high ethical standards: 9 11844 (1) Data Privacy and Permissions: We confirm that all materials used, especially dialogues derived from copyrighted Chinese novels and scripts, have been utilized in non-commercial purpose, respecting copyright laws and privacy policies. (2) Fairness and Transparency in Annotation: In creating the CharacterRM (role-playing reward model), we have implemented a rigorous selection and training process for our annotators to ensure the fairness and transparency of their contributions. We have taken measures to address potential biases and ensure the annotations are consistent, highquality, and reflective of diverse perspectives. (3) Responsible Use of RPCAs: Aware of the potential for emotional engagement and the risks associated with the misuse of AI-generated content, we will outlin ethical guidelines for the deployment of RPCAs. Our research includes safeguards to prevent the misuse of these agents, ensuring they are used in ways that are beneficial and respectful to users. 
Acknowledgement This work was supported by the National Natural Science Foundation of China (NSFC Grant No. 62122089), Beijing Outstanding Young Scientist Program NO. BJJWZYJH012019100020098, and Intelligent Social Governance Platform, Major Innovation & Planning Interdisciplinary Platform for the “Double-First Class” Initiative, Renmin University of China, the Fundamental Research Funds for the Central Universities, and the Research Funds of Renmin University of China. References Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Kaijie Zhu, Hao Chen, Linyi Yang, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, et al. 2023. A survey on evaluation of large language models. arXiv preprint arXiv:2307.03109. Hongshen Chen, Xiaorui Liu, Dawei Yin, and Jiliang Tang. 2017. A survey on dialogue systems: Recent advances and new frontiers. Acm Sigkdd Explorations Newsletter, 19(2):25–35. Nuo Chen, Yan Wang, Haiyun Jiang, Deng Cai, Yuhan Li, Ziyang Chen, Longyue Wang, and Jia Li. 2023. Large language models meet harry potter: A dataset for aligning dialogue agents with characters. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 8506–8520. Xiuyi Chen, Fandong Meng, Peng Li, Feilong Chen, Shuang Xu, Bo Xu, and Jie Zhou. 2020. Bridging the gap between prior and posterior knowledge selection for knowledge-grounded dialogue generation. In Proceedings of the 2020 conference on empirical methods in natural language processing (EMNLP), pages 3426–3437. Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30. Floris Den Hengst, Mark Hoogendoorn, Frank Van Harmelen, and Joost Bosman. 2019. Reinforcement learning for personalized dialogue management. In IEEE/WIC/ACM International Conference on Web Intelligence, pages 59–67. Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, and Zhifang Sui. 2022. A survey for in-context learning. arXiv preprint arXiv:2301.00234. Chen Gao, Xiaochong Lan, Zhihong Lu, Jinzhu Mao, Jinghua Piao, Huandong Wang, Depeng Jin, and Yong Li. 2023. S3: Social-network simulation system with large language model-empowered agents. arXiv preprint arXiv:2307.14984. Yingqiang Ge, Wenyue Hua, Jianchao Ji, Juntao Tan, Shuyuan Xu, and Yongfeng Zhang. 2023. Openagi: When llm meets domain experts. arXiv preprint arXiv:2304.04370. Jen-tse Huang, Wenxuan Wang, Man Ho Lam, Eric John Li, Wenxiang Jiao, and Michael R Lyu. 2023. Chatgpt an enfj, bard an istj: Empirical study on personalities of large language models. arXiv preprint arXiv:2305.19926. Cheng Li, Ziang Leng, Chenxi Yan, Junyi Shen, Hao Wang, Weishi MI, Yaying Fei, Xiaoyang Feng, Song Yan, HaoSheng Wang, et al. 2023. Chatharuhi: Reviving anime character in reality via large language model. arXiv preprint arXiv:2308.09597. Linxiao Li, Can Xu, Wei Wu, Yufan Zhao, Xueliang Zhao, and Chongyang Tao. 2020. Zero-resource knowledge-grounded dialogue generation. Advances in Neural Information Processing Systems, 33:8475– 8485. Yukun Ma, Khanh Linh Nguyen, Frank Z Xing, and Erik Cambria. 2020. A survey on empathetic dialogue systems. Information Fusion, 64:50–70. 
Mohsen Mesgar, Edwin Simpson, and Iryna Gurevych. 2020. Improving factual consistency between a response and persona facts. arXiv preprint arXiv:2005.00036. 10 11845 Isabel Briggs Myers. 1962. The myers-briggs type indicator: Manual (1962). OpenAI. 2022. Openai: Introducing chatgpt. Keyu Pan and Yawen Zeng. 2023. Do llms possess a personality? making the mbti test an amazing evaluation for large language models. arXiv preprint arXiv:2307.16180. Joon Sung Park, Joseph O’Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. 2023. Generative agents: Interactive simulacra of human behavior. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, pages 1–22. Karl Pearson. 1901. Liii. on lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin philosophical magazine and journal of science, 2(11):559–572. Vipula Rawte, Amit Sheth, and Amitava Das. 2023. A survey of hallucination in large foundation models. arXiv preprint arXiv:2309.05922. Yunfan Shao, Linyang Li, Junqi Dai, and Xipeng Qiu. 2023. Character-llm: A trainable agent for roleplaying. arXiv preprint arXiv:2310.10158. Tianhao Shen, Sun Li, and Deyi Xiong. 2023. Roleeval: A bilingual role evaluation benchmark for large language models. arXiv preprint arXiv:2312.16132. Tom Silver, Varun Hariprasad, Reece S Shuttleworth, Nishanth Kumar, Tomás Lozano-Pérez, and Leslie Pack Kaelbling. 2022. Pddl planning with pretrained large language models. In NeurIPS 2022 Foundation Models for Decision Making Workshop. Chan Hee Song, Jiaman Wu, Clayton Washington, Brian M Sadler, Wei-Lun Chao, and Yu Su. 2023. Llm-planner: Few-shot grounded planning for embodied agents with large language models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2998–3009. Quan Tu, Chuanqi Chen, Jinpeng Li, Yanran Li, Shuo Shang, Dongyan Zhao, Ran Wang, and Rui Yan. 2023. Characterchat: Learning towards conversational ai with personalized social support. arXiv preprint arXiv:2308.10278. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, et al. 2023a. A survey on large language model based autonomous agents. arXiv preprint arXiv:2308.11432. Xintao Wang, Yaying Fei, Ziang Leng, and Cheng Li. 2023b. Does role-playing chatbots capture the character personalities? assessing personality traits for role-playing chatbots. arXiv preprint arXiv:2310.17976. Xintao Wang, Yunze Xiao, Jen tse Huang, Siyu Yuan, Rui Xu, Haoran Guo, Quan Tu, Yaying Fei, Ziang Leng, Wei Wang, et al. 2023c. Incharacter: Evaluating personality fidelity in role-playing agents through psychological interviews. arXiv preprint arXiv:2310.17976. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171. Zekun Moore Wang, Zhongyuan Peng, Haoran Que, Jiaheng Liu, Wangchunshu Zhou, Yuhan Wu, Hongcheng Guo, Ruitong Gan, Zehao Ni, Man Zhang, et al. 2023d. Rolellm: Benchmarking, eliciting, and enhancing role-playing abilities of large language models. arXiv preprint arXiv:2310.00746. 
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824– 24837. Boyang Xue, Weichao Wang, Hongru Wang, Fei Mi, Rui Wang, Yasheng Wang, Lifeng Shang, Xin Jiang, Qun Liu, and Kam-Fai Wong. 2023. Improving factual consistency for knowledge-grounded dialogue systems via knowledge enhancement and alignment. arXiv preprint arXiv:2310.08372. Rui Yan, Juntao Li, Zhou Yu, et al. 2022. Deep learning for dialogue systems: Chit-chat and beyond. Foundations and Trends® in Information Retrieval, 15(5):417–589. Chen Zhang, Yiming Chen, Luis Fernando D’Haro, Yan Zhang, Thomas Friedrichs, Grandee Lee, and Haizhou Li. 2021. Dynaeval: Unifying turn and dialogue level evaluation. arXiv preprint arXiv:2106.01112. Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, et al. 2023. Siren’s song in the ai ocean: A survey on hallucination in large language models. arXiv preprint arXiv:2309.01219. Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023. A survey of large language models. arXiv preprint arXiv:2303.18223. Xueliang Zhao, Wei Wu, Chongyang Tao, Can Xu, Dongyan Zhao, and Rui Yan. 2020. Low-resource knowledge-grounded dialogue generation. arXiv preprint arXiv:2002.10348. 11 11846 Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2023. Judging llm-as-a-judge with mt-bench and chatbot arena. arXiv preprint arXiv:2306.05685. Yinhe Zheng, Guanyi Chen, Minlie Huang, Song Liu, and Xuan Zhu. 2019. Personalized dialogue generation with diversified traits. arXiv preprint arXiv:1901.09672. Yinhe Zheng, Rongsheng Zhang, Minlie Huang, and Xiaoxi Mao. 2020. A pre-training based personalized dialogue generation model with persona-sparse data. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 9693–9700. Hanxun Zhong, Zhicheng Dou, Yutao Zhu, Hongjin Qian, and Ji-Rong Wen. 2022. Less is more: Learning to refine dialogue history for personalized dialogue generation. arXiv preprint arXiv:2204.08128. Jinfeng Zhou, Zhuang Chen, Dazhen Wan, Bosi Wen, Yi Song, Jifan Yu, Yongkang Huang, Libiao Peng, Jiaming Yang, Xiyao Xiao, et al. 2023. Characterglm: Customizing chinese conversational ai characters with large language models. arXiv preprint arXiv:2311.16832. 12 11847 A Robustness Analysis To evaluate the robustness of RPCAs, we select a range of models—InternLM-20B, MiniMax, BCNPC-Turbo, and GPT-4—for analysis. We aim to assess their effectiveness in different stages of a conversation. As illustrated in Figure 5, there is a noticeable trend where most models demonstrate a decline in performance as conversations progress. Remarkably, InternLM-20B maintains consistent performance in terms of character consistency and conversational ability. This could be attributed to the fact that these models, primarily designed for role-playing, have not significantly focused on longer dialogue sequences. This oversight is likely due to the challenges associated with collecting extensive role-playing conversation data. Similarly, GPT-4 exhibits a declined trend under longer conversations, affected by the complex Chinese role-playing scenarios. 
Our findings indicate that future advancements in RPCA development should focus on enhancing capabilities for longer conversational scenarios, ensuring more stable and consistent role-playing interactions.

Figure 5: Model performance across the different stages of the conversation, comparing InternLM-20B, MiniMax, BC-NPC-Turbo, and GPT-4 on (a) Character Consistency and (b) Conversational Ability.

B Evaluation Result by GPT-4

Although GPT-4 has demonstrated the self-enhancement bias (Zheng et al., 2023) and has a lower correlation with human judgement (Table 3), we present the evaluation result by GPT-4 in a 2-shot setting for reference, as shown in Table 5.

C FAQs

Q1: Only Chinese novels and scripts limit the diversity of role-playing scenarios and characters. We are in the process of expanding the size of our dataset and will include a broader range of language types. Please stay tuned, as this work is intended to be a starting point, not the endpoint. Your comment is valuable in guiding our efforts to enhance the diversity and richness of our dataset.

Q2: Subjective metrics rely on a limited number of human annotations, potentially introducing biases. We are committed to continuously augmenting our dataset with additional human annotations to ensure our evaluation results are more reliable, consistent, and unbiased. We acknowledge the importance of addressing these concerns and are actively working to improve the robustness of our evaluation methodology.

Q3: Since the data sources are from the public domain (e.g., TV shows), a comprehensive analysis of data leakage should be conducted. While our data is indeed derived from actual domains, we hold that evaluating role-playing capabilities by humans for entirely novel characters poses a significant challenge due to the lack of prior knowledge about these characters. Therefore, our dataset construction leverages existing data sources to test the model's ability to utilize known information for role-playing. Additionally, our evaluation metrics are designed to be reference-free, thereby minimizing the impact of data leakage. This approach ensures a more accurate assessment of a model's ability to engage in role-playing scenarios while relieving concerns related to data leakage.

Q4: The data only contains Chinese data, making it of limited interest to the broader audience. As mentioned in response to Q1, we are currently expanding our dataset to include a wider range of language types. This expansion is part of our ongoing effort to make our research more inclusive and relevant to a global audience. We believe that diversifying the dataset will significantly enhance its utility and appeal, facilitating broader applications and research in role-playing

Character Consistency Personality KE KA KH PB PU Avg.
Back-Testing ChatGLM3-6B 4.437 4.411 4.175 4.462 4.431 4.383 0.532 XVERSE-7B 4.498 4.655 4.533 4.593 4.651 4.586 0.62 Baichuan2-7B 4.506 4.665 4.531 4.633 4.686 4.604 0.625 Qwen-7B 4.303 4.375 4.257 4.415 4.413 4.353 0.606 InternLM-7B 4.367 4.497 4.403 4.454 4.638 4.472 0.63 XVERSE-13B 4.709 4.812 4.611 4.743 4.802 4.735 0.63 Baichuan2-13B 4.672 4.841 4.733 4.771 4.812 4.766 0.639 Qwen-14B 4.637 4.644 4.530 4.674 4.688 4.635 0.62 InternLM-20B 4.699 4.734 4.568 4.676 4.735 4.682 0.648 CharacterGLM 4.157 4.679 4.450 4.495 4.640 4.484 Xingchen 4.366 4.638 4.488 4.650 4.704 4.569 0.63 MiniMax 4.692 4.827 4.674 4.776 4.849 4.763 0.685 BC-NPC-Turbo 4.478 4.811 4.655 4.730 4.833 4.701 0.681 GPT-3.5 3.793 3.858 3.549 3.837 3.866 3.781 0.653 GPT-4 4.924 4.923 4.899 4.912 4.906 4.913 0.694 Conversational Ability Role-playing Attractiveness Flu. Coh. Cons. Avg. HL CS ED Emp. Avg. ChatGLM3-6B 4.160 4.552 4.182 4.298 4.360 3.620 3.410 3.570 3.740 XVERSE-7B 4.591 4.725 4.392 4.569 4.601 3.608 3.331 3.535 3.769 Baichuan2-7B 4.636 4.760 4.596 4.664 4.608 3.497 3.240 3.610 3.739 Qwen-7B 4.201 4.540 4.025 4.255 4.333 3.606 3.362 3.379 3.670 InternLM-7B 4.468 4.599 4.189 4.418 4.420 3.396 3.075 3.312 3.551 XVERSE-13B 4.708 4.812 4.559 4.693 4.736 3.736 3.533 3.758 3.941 Baichuan2-13B 4.724 4.847 4.631 4.734 4.726 3.559 3.246 3.670 3.800 Qwen-14B 4.500 4.758 4.439 4.566 4.613 3.631 3.531 3.612 3.847 InternLM-20B 4.497 4.798 4.579 4.625 4.669 3.559 3.399 3.602 3.807 CharacterGLM 4.562 4.538 4.297 4.466 4.429 3.267 2.931 3.032 3.415 Xingchen 4.558 4.677 4.326 4.520 4.584 3.339 3.076 3.155 3.539 MiniMax 4.733 4.819 4.580 4.710 4.735 3.511 3.304 3.557 3.777 BC-NPC-Turbo 4.685 4.770 4.452 4.636 4.581 3.437 3.157 3.355 3.633 GPT-3.5 3.656 3.788 3.873 3.772 3.710 3.162 2.795 3.251 3.230 GPT-4 4.630 4.850 4.656 4.712 4.796 3.947 3.806 3.883 4.108 Table 5: Detailed evaluation results on CharacterEval. The 12 subjective metrics in conversational ability, character consistency and role-playing attractiveness dimensions are evaluated by GPT-4. The best performances are highlighted in bold, while sub-optimal ones are marked with underline. It is notable that the score for CharacterGLM in personality back-testing is unavailable, hence it is replaced by a “-”. and character analysis. CharacterEval is intended to be a starting point, not the endpoint. Q5: The entire evaluation process is unclear, including how all evaluation metrics, human judgments, CharacterRM, and GPT4 are scored. Can these scoring results truly represent the original meanings of the metrics? Our evaluation process is primarily based on predefined annotated examples, which can be found in https://github. com/morecry/CharacterEval. (1) Regarding the human annotation, we recruited 12 annotators including 8 males and 4 females with age from 18 to 26, divided into two groups. Each group scores all samples across specified metrics following the guidelines of the predefined annotated examples. Consequently, each (Role information, context, response, metric) tuple receives two scores. In cases of scoring discrepancies, the respective annotators discuss to reach a consensus on the score. (2) Regarding the GPT-4 evaluation, we employ the predefined annotated examples as demonstration for In-context learning to compute the scores. The API version we used is gpt-4-preview-1106. (3) CharacterRM is developed based on the human annotation results on training set of CharacterEval. The annotated scores are normalized to [0,1] by dividing by 5. 
We then enhance Baichuan-13B-Base with a classifier, applying a template to the (Role information, context, response, metric) tuple and processing it through Baichuan-13B-Base and the classifier. We choose the classifier output of the last token and normalize it with a sigmoid function to obtain the reward score. The training aims to fit these normalized human scores using mean squared error loss. We use LoRA to optimize the CharacterRM with the following configuration: LoRA rank 32, LoRA alpha 16, LoRA dropout 0.05, trainable layers are all W_pack and o_proj layers, learning rate 2e-5, 5 epochs, batch size 8, and bfloat16 compute dtype. The inference follows a similar procedure as training.
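The recipe described in the answer to Q5 can be summarized in a short sketch: a LoRA-tuned backbone, a one-dimensional classifier on the last token, a sigmoid mapping the output into [0, 1], and an MSE loss against the normalized human scores. This is a reconstruction under stated assumptions (checkpoint id, right padding), not the authors' released code.

```python
# A hedged sketch of the CharacterRM scoring head described in the FAQ answer.
import torch
import torch.nn as nn
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

class CharacterRM(nn.Module):
    def __init__(self, backbone: nn.Module, hidden_size: int):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Linear(hidden_size, 1)   # the "classifier" on top of the LM

    def forward(self, input_ids, attention_mask):
        out = self.backbone(input_ids=input_ids, attention_mask=attention_mask,
                            output_hidden_states=True)
        hidden = out.hidden_states[-1]                         # (batch, seq, dim)
        # last non-padded token per sequence (assumes right padding)
        last = hidden[torch.arange(hidden.size(0)), attention_mask.sum(dim=1) - 1]
        return torch.sigmoid(self.head(last)).squeeze(-1)      # reward in [0, 1]

base = AutoModelForCausalLM.from_pretrained(
    "baichuan-inc/Baichuan2-13B-Base",   # checkpoint id is a placeholder
    torch_dtype=torch.bfloat16, trust_remote_code=True)
hidden_size = base.config.hidden_size
lora = LoraConfig(r=32, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["W_pack", "o_proj"], task_type="CAUSAL_LM")
model = CharacterRM(get_peft_model(base, lora), hidden_size)
loss_fn = nn.MSELoss()   # fit predictions to human scores divided by 5
```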
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11851–11861 August 11-16, 2024 ©2024 Association for Computational Linguistics Generative Cross-Modal Retrieval: Memorizing Images in Multimodal Language Models for Retrieval and Beyond Yongqi Li1, Wenjie Wang2*, Leigang Qu2, Liqiang Nie3, Wenjie Li1*, Tat-Seng Chua2 1The Hong Kong Polytechnic University 2National University of Singapore 3Harbin Institute of Technology (Shenzhen) {liyongqi0,wenjiewang96,leigangqu,nieliqiang}@gmail.com [email protected] [email protected] Abstract The recent advancements in generative language models have demonstrated their ability to memorize knowledge from documents and recall knowledge to respond to user queries effectively. Building upon this capability, we propose to enable multimodal large language models (MLLMs) to memorize and recall images within their parameters. Given a user query for visual content, the MLLM is anticipated to “recall” the relevant image from its parameters as the response. Achieving this target presents notable challenges, including inbuilt visual memory and visual recall schemes within MLLMs. To address these challenges, we introduce a generative cross-modal retrieval framework, which assigns unique identifier strings to represent images and involves two training steps: learning to memorize and learning to retrieve. The first step focuses on training the MLLM to memorize the association between images and their respective identifiers. The latter step teaches the MLLM to generate the corresponding identifier of the target image, given the textual query input. By memorizing images in MLLMs, we introduce a new paradigm to crossmodal retrieval, distinct from previous discriminative approaches. The experiments demonstrate that the generative paradigm performs effectively and efficiently even with large-scale image candidate sets. The code is released at https://github.com/liyongqi67/GRACE. 1 Introduction Recently, we have witnessed the explosive development of generative large language models (LLMs), such as GPT series (Radford et al., 2019; Brown et al., 2020) and LLaMA (Touvron et al., 2023a,b). Undergone extensive pretraining on document corpora and instruction tuning, these language models have demonstrated an impressive ability to memorize a lot of knowledge in their parameters and *Corresponding authors. Who is Sheldon Cooper? Sheldon Cooper is a fictional character from the TV show “The Big Bang Theory” … What does Sheldon Cooper look like? Sheldon is tall and slender with short, light brown hair. He has a fair complexion and a somewhat boyish appearance...... GPT4 What if LLMs could return images? Figure 1: Real cases from GPT4 illustrate the necessity of visual outputs for LLMs. effectively recall them to answer users’ instructions and queries. As shown in Figure 1, GPT41 could directly respond to the user’s question, “Who is Sheldon Cooper?”, without any external document or database. Building upon the advancements of LLMs, multimodal LLMs (MLLMs) (Alayrac et al., 2022; Li et al., 2023a; Liu et al., 2023; Zhu et al., 2023; Huang et al., 2023) have been developed to expand the capabilities beyond text and allow users to express their needs using visual input. Despite the impressive capabilities of LLMs and MLLMs, their responses are limited to textual outputs. For instance, a user might ask, “What does Sheldon Cooper look like?” as shown in Figure 1. 
While the MLLM tries to describe the person’s appearance, it is often said that “an image is worth a thousand words.” It would greatly enhance the response capabilities of MLLMs if they could give visual outputs, like a photograph in this case. A straightforward solution is to enhance MLLMs with external image synthesis tools, like diffusion models (Dhariwal and Nichol, 2021; Ho et al., 2020) and Generative Adversarial Networks (Goodfellow et al., 2020), for visual output capabilities. 1https://openai.com/gpt-4. 11851 However, a significant challenge with these modules is their propensity to produce unrealistic or hallucinatory images, which cannot accurately describe real-world images, such as a photograph of “Sheldon Cooper”. The integration of an image retrieval module (Radford et al., 2021) seems a more viable solution. Nonetheless, such a combination often encounters a transition gap between two independent modules (Lewis et al., 2020). Considering the massive benefits of LLMs in memorizing textual knowledge, a bold and innovative idea emerges: Is it possible to equip MLLMs with the ability to memorize visual information within their parameters for retrieval and beyond? In this light, we formulate a generative cross-modal retrieval task: given a user query for visual content, MLLMs are expected to recall desired images from their parameters directly as the response. Accomplishing this task poses a significant challenge, necessitating the presence of two essential abilities of MLLMs: 1) Visual memory. As the prerequisite requirement, the MLLM model must possess the capability to memorize visual information within its parameters. This goes beyond simply encoding images into dense vectors within a vector database. It necessitates a distinct, differentiable, and integrated visual memory scheme within MLLMs’ parameters. 2) Visual recall. Given a textual query, the MLLM should be able to recall the relevant visual information from the complicated visual memory bank. Above this, for user comprehension, the activated visual information must be grounded to the complete and original images rather than mere patches or fragmented visuals. In this work, we propose a novel GeneRAtive Cross-modal rEtrieval framework, GRACE, to overcome the above issues. GRACE assigns images unique identifiers, where each identifier is a distinct string representing an image. Based on the identifiers, GRACE comprises two training steps, as illustrated in Figure 2. 1) Learning to memorize. Given an image, the MLLM is trained to generate the corresponding identifier string via the standard text generation loss. The goal of this phase is for the MLLM to effectively learn and memorize the associations between the visual content of images and their respective identifiers. 2) Learning to retrieve. The MLLM is trained to generate the identifier string of the relevant image while given a textual query. In this way, the MLLM learns to associate user queries with visual memory. After the two training steps above, GRACE enables generative cross-modal retrieval: given a textual query, the MLLM generates an identifier string corresponding to a real image. We delve into GRACE from various perspectives, including different identifier types, effectiveness, and efficiency of the generative paradigm. We evaluate GRACE on text-image matching datasets to verify the feasibility of generative cross-modal retrieval. 
Without any image's visual information during inference, GRACE performs comparably to advanced two-tower approaches (e.g., CLIP (Radford et al., 2021)) and demonstrates higher efficiency on large-scale image collections. It is acknowledged that, as a new retrieval paradigm, GRACE still lags behind one-tower approaches. However, one-tower approaches are only applicable to the ranking stage due to their low efficiency, while GRACE and CLIP are specifically designed for the retrieval stage. Through comprehensive analysis, we hope to understand its capabilities and limitations. We believe exploring generative cross-modal retrieval holds great significance.
• Benefiting from inbuilt visual memory within MLLMs, GRACE introduces a new paradigm to cross-modal retrieval. GRACE transforms the original matching problem into a generation problem, eliminating the need for negative samples during training and for a retrieval index during inference. No matter the size of the image set, the retrieval efficiency remains constant. This new cross-modal retrieval paradigm leaves much room for investigation.
• Inbuilt visual memory serves retrieval, yet its utility extends beyond mere retrieval. In Section 4.5, we demonstrate that the MLLM can describe the memorized images and even answer questions about them, just like humans do. This opens up the possibility of injecting personalized visual experiences of humans into MLLMs for them to memorize and understand an individual's journey, and accomplish more visual tasks.
2 Related Work
2.1 Cross-modal Retrieval
The current cross-modal retrieval (text-image matching) approaches can be categorized into the two-tower framework and the one-tower framework based on how modality interaction is handled. The one-tower framework (Chen et al., 2020; Diao et al., 2021; Lee et al., 2018; Qu et al., 2021) embraces fine-grained cross-modal interactions to achieve matching between fragments (e.g., objects and words). In the two-tower framework (Chen et al., 2021; Faghri et al., 2017; Zheng et al., 2020; Qu et al., 2020), images and texts are independently mapped into a joint feature space in which semantic similarities are calculated via the cosine function or Euclidean distance. Both frameworks formulate cross-modal retrieval as a discriminative problem, which relies on a discriminative loss and negative samples to learn an embedding space. In this work, we explore a new generative paradigm for cross-modal retrieval.
2.2 Generative Retrieval
Generative retrieval is an emerging retrieval paradigm in text retrieval, which generates identifier strings of passages as the retrieval target. Instead of generating entire passages, this approach uses identifiers to reduce the amount of useless information and make it easier for the model to memorize and learn (Li et al., 2024). Different types of identifiers have been explored in various search scenarios, including passage titles (Web URLs), numeric IDs, and substrings of passages, as shown in previous studies (De Cao et al., 2020; Tay et al., 2022; Li et al., 2023b,c,d; Sun et al., 2024). Generative retrieval has gained considerable attention in text retrieval, as it can take advantage of powerful generative language models. Zhang et al. (2023) proposed a generative image-to-image retrieval framework. However, facilitating cross-modal retrieval (text-to-image retrieval) in a generative way is still an untapped problem.
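For contrast with the generative formulation explored in this work, the following is a minimal sketch of how a two-tower model scores a query against a pre-encoded image index; the embedding dimensions and random tensors are placeholders, not any specific library's API.

```python
import torch
import torch.nn.functional as F

def two_tower_scores(text_emb: torch.Tensor, image_index: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between one query embedding and N pre-encoded images.

    text_emb:    shape (d,)   -- produced by a text encoder
    image_index: shape (N, d) -- produced offline by an image encoder
    Ranking cost grows linearly with N, which is the efficiency bottleneck
    discussed later; a generative retriever instead decodes an identifier
    with a cost that does not depend on N.
    """
    text_emb = F.normalize(text_emb, dim=-1)
    image_index = F.normalize(image_index, dim=-1)
    return image_index @ text_emb  # (N,) similarity scores

# Toy usage with random vectors standing in for real encoders.
scores = two_tower_scores(torch.randn(512), torch.randn(10_000, 512))
top5 = scores.topk(5).indices
```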
2.3 Multimodal Language Model We have witnessed the explosive development of generative language models, such as GPT (Radford et al., 2019; Brown et al., 2020) and LLaMA (Touvron et al., 2023a), that demonstrate remarkable capabilities in instruction following and in-context learning. Building upon the advancements of LLMs, MLLMs (Alayrac et al., 2022; Li et al., 2023a; Liu et al., 2023; Zhu et al., 2023; Huang et al., 2023) have been developed to enable LLMs to process images as input. Despite the success of MLLMs in various vision-language tasks, they currently lack the ability to unify cross-modal retrieval into their application. In this work, we propose a generative cross-modal retrieval framework that empowers MLLMs to retrieve relevant images from their parameters given textual queries. 3 Method 3.1 Preliminary Task definition. Generative cross-modal retrieval defines new requirements, i.e., removing visual input during inference, for cross-modal retrieval, but could be evaluated with original cross-modal tasks. Text-to-image retrieval aims to retrieve relevant images from a database DI when given a textual query q. Multimodal language model. As our method is conducted based on multimodal language models, it is essential to give relevant background of multimodal language models. Multimodal language models could be regarded as generative language models that incorporate image inputs, including GPT4V2, BILP (Li et al., 2023a), flamingo (Alayrac et al., 2022), and Kosmos (Huang et al., 2023). Considering factors including convenience and model sizes, we have chosen Flamingo as the backbone for our method and took the open-flamingo implementation (Awadalla et al., 2023). Flamingo consists of three main components: a generative language model, a visual encoder, and cross-attention layers. The visual encoder is responsible for extracting patch features from the input images. The generative language model receives text input that includes a special token, “<image>”, which indicates the presence of an image. Through the cross-attention layers, the “<image>” token could attend to the patch features extracted by the visual encoder. This allows Flamingo to predict the next text token based on all previous text tokens and the most recent image. For more detailed information, please refer to the original paper on Flamingo. 3.2 Overview In this work, we present GRACE, a novel generative cross-modal retrieval framework, as illustrated in Figure 2. As previously discussed, addressing the challenges of visual memory and visual recall is essential for generative cross-modal retrieval. Towards this objective, GRACE assigns unique iden2https://openai.com/research/ gpt-4v-system-card. 11853 Visual Encoder Language Model MLLM id: 12138 Predict the identifier for the <image> Visual Encoder Language Model MLLM id: 12138 Predict the image identifier  corresponding to the given query: A girl hops across the river, from rock to rock. Visual Encoder Language Model MLLM id: 13768 Predict the image identifier  corresponding to the given query:  Three dogs play together in a field. (a) Learning to memorize (b) Learning to retrieve (c) Inference Figure 2: Illustration of our proposed generative cross-modal framework, GRACE, which involves two training steps. (a) Learning to memorize: GRACE trains an MLLM model to memorize images into its parameters. (b) Learning to retrieve: GRACE trains the model to generate the target image’s identifiers given queries. 
(c) Inference: The MLLM directly generates identifiers as the retrieval results. String identifier '13768' Numeric identifier '1 3 7 6 8' Semantic identifier 'Three dogs play together in the field' Structured identifier 'C_1_3 C_2_1' Atomic identifier 'image_13768' (a) Different identifier types C_1_1 C_1_2 C_1_3 C_2_1 C_2_2 : C_1_3 C_2_1 : C_1_3 C_2_2 Images Cluster (b) Structured identifiers Figure 3: (a) depicts an image accompanied by various identifier types. (b) shows the formation of structured identifiers, where each image’s identifier is represented as its unique path within a cluster tree. tifiers to images in the dataset DI. This strategy allows the model to learn mappings from images to their respective identifiers, facilitating visual memory. Moreover, the model could generate identifiers as retrieval results rather than generate real images. Representing images as identifiers underpins our training scheme, which is divided into two core steps: “learning to memorize” and “learning to retrieve”. The two training steps are designed to enable the model to effectively memorize images in parameters and subsequently learn to recall them in response to textual queries. 3.3 Image Identifiers Image identifiers are crucial for the whole framework, and we explore the following different types of identifiers: String identifier. We randomly shuffle the images in DI, and assign them digital numbers ranging from 1 to |DI|. It is noted that the digital numbers are represented as strings in MLLMs and may be tokenized into multiple tokens determined by the tokenizer. For instance, an image may be assigned the identifier “13768” and tokenized into two tokens: “13” and “768”. Numeric identifier. Similar to the string identifier, the numeric identifier ranges from 1 to |DI|. However, we include spaces in the numeric identifier, resulting in the tokenization into individual digits. For example, an image with the identifier “1 3 7 6 8” will be tokenized into the sequence of tokens “1”, “3”, “7”, “6”, and “8”. It is worth noting that the numeric identifier only utilizes ten tokens from the vocabulary to represent images, but the sequence length is typically longer than that of the string identifier. Semantic identifier. Since the identifiers are utilized to represent images, image captions that describe the content of images can be considered as identifiers. These image captions are naturally token sequences that can be learned by multimodal language models. Some images in |DI| belong to the test set, and their captions should not be utilized. To avoid data leaks, we train an image caption model based on the training set and generate captions for the images in the test set as their identifiers. Structured identifier. We assign structure identifiers to images using an unsupervised clustering approach. We utilize the image encoder in CLIP to obtain the embeddings of images. Subsequently, we apply the k-means algorithm (Ahmed et al., 2020) to cluster these embeddings, resulting in all images being grouped into k clusters. Each document is then assigned an identifier based on the number of their cluster IDs. For clusters that contain more than a certain number of documents (denoted as c), we recursively apply the algorithm (Tay et al., 2022). In this process, the identifier of the next level is appended to the existing identifier, forming a hierarchical structure. We represent each cluster using special tokens, such as "C_1_3", 11854 which indicates the third cluster in the first level. 
These special tokens are added to the token vocabulary of the multimodal language model. Similar images tend to have similar structured identifiers, meaning they have similar paths in the cluster tree. Atomic identifier. We assign a dedicated token as its identifier to identify each image uniquely. We expand the token vocabulary by introducing new tokens to ensure compatibility with the existing tokens. Each image is then assigned a special token, such as "I_13768", which is a complete token in the vocabulary and will not be further tokenized into sub-tokens. This approach allows us to avoid any conflicts with the original tokens while providing a distinct identifier for each image. We present the various types of identifiers for the same image in Figure 3, highlighting their distinct characteristics. It is evident that different identifier types possess different attributes. String, numeric, and atomic identifiers do not provide any prior knowledge about the image content, whereas semantic and structured identifiers do. Furthermore, the use of structured and atomic identifiers necessitates the inclusion of new tokens in the vocabulary, whereas the other identifier types do not require such modifications. 3.4 Learning to Memorize We have represented images in the dataset DI using unique identifiers, that is, as a sequence of tokens. Then we train a multimodal language model, denoted as MLLM, to encapsulate these images within its parameters. Specifically, for an image i ∈DI, we train the model to associate this image with its corresponding identifier, denoted as I. This process is formulated as follows: I = MLLM(i; inst-m), (1) where inst-m is a textual instruction given as “Predict the identifier for the <image>”. Here, “<image>” is a placeholder token in Flamingo, designed to focus on the visual features of the input. This learning to memorize step allows the model to learn the mappings from visual inputs to their corresponding identifiers, to effectively encode imagelevel visual memories within its parameters. 3.5 Learning to Retrieve Merely memorizing images within its parameters is insufficient for the MLLM. The model must be capable of recalling the corresponding images in response to users’ queries. To achieve this, we train the MLLM to predict the appropriate identifier when given a specific query q. This process is outlined as follows: I = MLLM(q; inst-r), (2) where inst-r is a textual instruction, “Predict the image identifier corresponding to the given query”. 3.6 Inference Post-training, the MLLM model could retrieve images akin to text generation. The process involves inputting a query into the MLLM, and then the model predicts several identifier strings through beam search. Since each identifier uniquely corresponds to an image, the generation results are the retrieval results. Constrained generation. To confine the generation to within-corpus results and ensure they fall within the test set, we implement constrained beam search in the MLLM. This approach leverages a Trie, a form of k-ary search tree, for efficient key location within a set. Specifically, we store all image identifiers into the Trie. The Trie structure, upon receiving a prefix string, suggests potential tokens found in the identifiers. This mechanism ensures that every generated identifier accurately matches an existing image’s identifier. Furthermore, we employ beam search (Sutskever et al., 2014), a widelyused technique, for generating multiple identifiers concurrently. 
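A minimal sketch of the Trie-based constraint described above is given below, written as a prefix-allowed-tokens function of the kind accepted by typical HF-style generate() APIs; the toy identifier token sequences and the wiring assumptions are ours, not the authors' released code.

```python
from collections import defaultdict

class IdentifierTrie:
    """Prefix tree over the tokenized image identifiers."""

    def __init__(self):
        self.children = defaultdict(IdentifierTrie)
        self.is_end = False

    def add(self, token_ids):
        node = self
        for t in token_ids:
            node = node.children[t]
        node.is_end = True

    def allowed(self, prefix):
        node = self
        for t in prefix:
            if t not in node.children:
                return []            # prefix does not match any identifier
            node = node.children[t]
        return list(node.children)   # next tokens that stay inside the corpus

def build_prefix_fn(trie, prompt_len):
    """Shaped like the prefix_allowed_tokens_fn hook of common generation
    APIs: given the tokens decoded so far, return the token ids the beam may
    emit next so that every completed sequence is a valid image identifier."""
    def prefix_allowed_tokens_fn(batch_id, input_ids):
        decoded = input_ids.tolist()[prompt_len:]  # identifier tokens only
        return trie.allowed(decoded)
    return prefix_allowed_tokens_fn

# Toy usage: three identifiers tokenized as digit sequences.
trie = IdentifierTrie()
for ident in ([1, 3, 7, 6, 8], [1, 3, 2, 3, 4], [9, 9, 1, 0, 0]):
    trie.add(ident)
assert sorted(trie.allowed([1, 3])) == [2, 7]
```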
These identifiers are each assigned a language model score, facilitating the creation of a ranked list based on these scores. Consequently, the ranked identifiers correspond to a ranked list of images. 4 Experiments 4.1 Datasets and Baselines We evaluated our proposed generative cross-modal retrieval framework, GRACE, on two commonlyused datasets: Flickr30K (Young et al., 2014) and MS-COCO (Lin et al., 2014). Flickr30K contains 31,783 images sourced from Flickr. Each image is associated with five human-annotated sentences. We adopted the data split used by Li et al., comprising 29,783 images for training, 1,000 for validation, and 1,000 for testing. MS-COCO comprises 123,287 images, and each MS-COCO image comes with five sentences of annotations. We followed the dataset split proposed in (Lee et al., 2018), utilizing 113,287 images for training, 5,000 for 11855 Paradigm Methods Flickr30K MS-COCO (5K) R@1 R@5 R@10 R@1 R@5 R@10 Two-tower VSE++ (Faghri et al.) 39.6 70.1 79.5 30.3 59.4 72.4 Dual-path (Zheng et al.) 39.1 69.2 80.9 25.3 53.4 66.4 CAMERA (Qu et al.) 58.9 84.7 90.2 39.0 70.5 81.5 CLIP (Radford et al.) 58.4 81.5 88.1 37.8 62.4 72.7 GRACE Numeric Identifier 22.5 28.9 29.4 0.03 0.14 0.28 String Identifier 30.5 39.0 40.4 0.12 0.37 0.88 Semantic Identifier 22.9 34.9 37.4 13.3 30.4 35.9 Structured Identifier 37.4 59.5 66.2 16.7 39.2 50.3 Atomic Identifier 68.4 88.9 93.7 41.5 69.1 79.1 Table 1: Performance of text-to-image retrieval on Flickr30K and MS-COCO (5K) datasets. The best results in each group are marked in Bold, while the second-best ones are underlined. One-tower approaches demonstrate superior performance on the two datasets, but they are not considered as baselines due to their high computational overhead, which makes them impractical for the retrieval stage. validation, and 5,000 for testing. Consistent with prior studies (Young et al., 2014; Chen et al., 2021), we evaluated our method using the standard recall metric R@K where K is set to 1, 5, and 10. Considering the efficiency and applicability, we compared GRACE with two-tower approaches, including VSE++ (Faghri et al., 2017), Dualpath (Zheng et al., 2020), CAMERA (Qu et al., 2020), and CLIP (Radford et al., 2021), as our baseline models. One-tower approaches usually have heavy computational overhead, focusing on the ranking stage rather than the retrieval stage. Therefore, we did not include them as baselines. Implement Details are detailed in Appendix A. 4.2 Overall Results The summarized comparisons are presented in Table 1. Analysis of this table led to the following observations: 1) GRACE demonstrated the capability to recall relevant images in response to textual queries without input of image content. This underscores the feasibility of generative cross-modal retrieval. 2) We also noticed variability in performance among GRACE with different identifiers. Specifically, numeric and string identifiers yielded very low performance on the MS-COCO dataset. This poor performance can be attributed to the lack of pre-knowledge provided by these identifiers to the MLLM. The inconsistent correlation between similar images and their identifiers makes it challenging for the MLLM to memorize and establish accurate relationships, especially as the dataset size increases. Furthermore, numeric identifiers underperform string identifiers, likely due to their requirement for more generation steps, which increases the chance of errors. 
3) In contrast, semantic identifiers, which are based on the image's content, showed better results than numeric and string identifiers. However, their effectiveness was somewhat limited due to the minimal differentiation among semantic identifiers for different images. This was particularly problematic in cases where images shared the same captions, causing the model to generate semantically correct but contextually incorrect identifiers. 4) Structured identifiers achieved good performance by effectively utilizing the image's embedding information through a clustering approach. This hierarchical structure significantly enhanced the MLLM's ability to memorize all images in the dataset. 5) Finally, atomic identifiers were found to be the most effective, even outperforming the CLIP model. This approach assigns a unique token in the vocabulary to each image, ensuring distinct identification. However, this method also has its challenges, as increasing the number of images directly enlarges the vocabulary size of the MLLM, potentially impacting scalability. These findings highlight the importance of identifier types in generative cross-modal retrieval and shed light on the trade-offs involved in different approaches.
4.3 Ablation Study
Our approach integrates two key training steps: learning to memorize and learning to retrieve. Does the “learning to memorize” phase significantly enhance retrieval performance? During the inference stage, we employed constrained generation to ensure the prediction of valid identifiers. How crucial is constrained generation to the overall retrieval process? To address these questions, we performed experiments by selectively omitting the “learning to memorize” step and the constrained generation process. The outcomes of these experiments are detailed in Table 2.

GRACE                          Flickr30K
                               R@1    R@5    R@10
Numeric Identifier             22.5   28.9   29.4
  w/o learning to memorize     18.2   24.3   24.9
  w/o constrained generation   7.72   16.7   21.1
String Identifier              30.5   39.0   40.4
  w/o learning to memorize     26.1   33.3   34.6
  w/o constrained generation   10.9   22.3   28.0
Semantic Identifier            22.9   34.9   37.4
  w/o learning to memorize     19.3   31.2   34.3
  w/o constrained generation   0.6    2.3    3.0
Structured Identifier          37.4   59.5   66.2
  w/o learning to memorize     36.5   61.1   68.2
  w/o constrained generation   10.2   22.3   29.3
Table 2: Ablation study results for GRACE. The term “w/o learning to memorize” indicates the omission of the “learning to memorize” training step, and “w/o constrained generation” refers to free generation without any restriction during the inference stage.

In our experiments, we observed a slight decrease in performance when the “learning to memorize” training step was removed. This suggests that, while important, this step is not the sole contributor to effective retrieval. Intriguingly, the “learning to retrieve” phase can be considered another form of memorization, where the model focuses on the image's description rather than its visual content. As a result, the model retains some capability to recall correct images even without the “learning to memorize” step. However, a significant decline in performance was noted upon removing the constrained generation step. This can be attributed to two primary factors. (1) Generation of out-of-corpus identifiers: without constrained generation, the model tends to predict identifiers that do not correspond to any image in the corpus.
This issue is especially pronounced with semantic identifiers, where the model may generate any textual description, leading to inaccurate retrieval. (2) Prediction of identifiers belonging to the training set: for other types of identifiers, while the model still predicts special tokens corresponding to valid identifiers, it often predicts images in the training set. Many images in the training set could also be relevant to the given textual query, significantly increasing the difficulty of recalling the correct image in the test set.
4.4 Efficiency Analysis
In large-scale cross-modal retrieval, efficiency emerges as a crucial factor. This is why the one-tower framework, effective for small-scale ranking, falls short in the retrieval stage. To address this, we conducted experiments comparing the efficiency of CLIP and GRACE. CLIP can pre-encode all images into vectors, incurring most of its inference cost from text encoding and calculating the similarity between text embeddings and image embeddings. In contrast, the generative framework necessitates generating identifiers. We assessed the query latency of both CLIP and GRACE under varying image set sizes, with detailed results presented in Figure 4.
Figure 4: The efficiency of CLIP and GRACE varies with image set size, measured in terms of queries processed per second. As the image set size increases, GRACE demonstrates superior efficiency.
Our findings are as follows. Firstly, CLIP's inference speed decreases progressively as the image set grows, owing to the escalating number of similarity calculations required. Secondly, the inference speed of our generative framework remains nearly constant, a result of encoding all images directly into its parameters. Thirdly, when the image set exceeds a certain threshold (about 150,000 images), our generative framework surpasses CLIP in terms of inference speed, and this advantage grows as the image set continues to increase. Lastly, these findings underscore that the generative framework is not only capable of large-scale image retrieval but can also perform comparably to two-tower approaches.
4.5 Beyond Cross-modal Retrieval
We enable the MLLM to memorize images within its parameters using unique identifiers. Once the images are adequately memorized, the MLLM can produce the corresponding images (identifiers) to respond to users' queries, as illustrated in Figure 5 (a).
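The instruction formats used in these interactions, taken from the cases shown in Figure 5, can be templated as below; the small wrapper function and its name are illustrative assumptions rather than an interface exposed by the trained model.

```python
# Instruction templates for interacting with memorized images, following the
# cases in Figure 5. A text-only generation call on the trained MLLM would
# consume the resulting strings; that call itself is not shown here.

RETRIEVE_TMPL = "Predict the image id given query: {query}"
DESCRIBE_TMPL = "Please describe the context of the image {image_id}."
QA_TMPL = "In image {image_id}, {question}"

def make_instruction(kind: str, **kwargs) -> str:
    templates = {"retrieve": RETRIEVE_TMPL,
                 "describe": DESCRIBE_TMPL,
                 "qa": QA_TMPL}
    return templates[kind].format(**kwargs)

print(make_instruction("describe", image_id="3756"))
print(make_instruction("qa", image_id="2523", question="what is this animal doing?"))
```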
Figure 5: Cases of interaction with memorized images for an MLLM include retrieving the memorized images, describing them, and answering questions about them, based on specific instructions. Note that the MLLM responds to user instructions without any image input, relying solely on memorized visual information.
While the visual memory in the MLLM facilitates image retrieval, its applications are not restricted to retrieval alone: after additional instruction tuning, the model can also interact with its memorized images. We present two examples in Figure 5 (b) and Figure 5 (c), respectively.
• Describing memorized images: As the MLLM has successfully memorized certain images, it is capable of providing a description of an image's content when prompted. As depicted in the examples shown in Figure 5, when given an instruction such as “please describe the context of the image 3756”, the model is able to provide a description of the image, albeit not in great detail.
• QA over memorized images: Similarly, the model is capable of answering some questions about the memorized images. Given an instruction consisting of the image identifier and a question, the model can answer based solely on memorization, without any image input.
4.6 Beam Size Analysis
We conducted experiments to analyze the beam size of GRACE, as detailed in Appendix B.
5 Conclusion and Future Work
In this paper, we delved into a novel memorization mechanism for the MLLM to memorize images within its parameters. Building upon the inbuilt visual memory within the MLLM, we proposed a generative cross-modal retrieval framework, which introduces a fresh paradigm in cross-modal retrieval. This paradigm transforms the original matching problem into a generation problem, eliminating the need for negative samples during training and image indexing during inference. Our experiments demonstrate that the generative paradigm performs effectively and efficiently even with large-scale image sets. Furthermore, we showcased the MLLM's ability to interact (e.g., describe and answer questions) with memorized images, following specific instructions. Moving forward, we aim to further develop this topic from the following perspectives. On the one hand, although our generative framework achieves comparable performance to previous cross-modal retrieval approaches, there are still challenges to address, such as the limitations of current identifiers. Exploring more effective identifiers, like visual tokens (Van Den Oord et al., 2017), would help to enhance generative cross-modal retrieval further. On the other hand, since we have enabled MLLMs to memorize and interact with images, it opens up the possibility of injecting personalized visual experiences of humans into MLLMs for them to memorize and understand an individual's journey, and accomplish more visual tasks.
Acknowledgments
The work described in this paper was supported by National Natural Science Foundation of China (62076212), Research Grants Council of Hong Kong (PolyU/5210919, PolyU/15207821, and PolyU/15207122), and PolyU internal grants (ZVQ0).
Limitations
This work introduces a new paradigm in text-image retrieval, but it also has some limitations to be addressed. 1) The evaluation of GRACE's image retrieval ability on Flickr30K and MS-COCO was compared with two-tower baselines.
However, it is important to note that Flickr30K and MS-COCO are also used as benchmarks for text-image ranking approaches, where one-tower frameworks have dominated. This may confuse newcomers to the field, as they may perceive GRACE and two-tower approaches as lagging behind the one-tower framework. However, it should be noted that GRACE and two-tower approaches focus on image retrieval, placing high demands on retrieval efficiency, while one-tower approaches are primarily suitable for the ranking stage, allowing for more time-consuming calculations to improve performance. 2) The identifiers currently used by GRACE are not as satisfactory as expected, only yielding results comparable to previous methods. However, as a pioneering work, the main significance of this work lies in validating the feasibility of generative cross-model retrieval. Further research is expected to enhance this paradigm. Ethics Statement The datasets used in our experiment are publicly released and labeled through interaction with humans in English. In this process, user privacy is protected, and no personal information is contained in the dataset. The scientific artifacts that we used are available for research with permissive licenses. And the use of these artifacts in this paper is consistent with their intended use. Therefore, we believe that our research work meets the ethics of ACL. References Mohiuddin Ahmed, Raihan Seraj, and Syed Mohammed Shamsul Islam. 2020. The k-means algorithm: A comprehensive survey and performance evaluation. Electronics, 9(8):1295. Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. 2022. Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35:23716–23736. Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, et al. 2023. Openflamingo: An open-source framework for training large autoregressive vision-language models. arXiv preprint arXiv:2308.01390. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Hui Chen, Guiguang Ding, Xudong Liu, Zijia Lin, Ji Liu, and Jungong Han. 2020. Imram: Iterative matching with recurrent attention memory for crossmodal image-text retrieval. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12655–12663. Jiacheng Chen, Hexiang Hu, Hao Wu, Yuning Jiang, and Changhu Wang. 2021. Learning the best pooling strategy for visual semantic embedding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 15789–15798. Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2020. Autoregressive entity retrieval. In International Conference on Learning Representations. Prafulla Dhariwal and Alexander Nichol. 2021. Diffusion models beat gans on image synthesis. Advances in neural information processing systems, 34:8780– 8794. Haiwen Diao, Ying Zhang, Lin Ma, and Huchuan Lu. 2021. Similarity reasoning and filtration for imagetext matching. In Proceedings of the AAAI conference on artificial intelligence, volume 35, pages 1218–1226. Fartash Faghri, David J Fleet, Jamie Ryan Kiros, and Sanja Fidler. 2017. 
Vse++: Improving visualsemantic embeddings with hard negatives. arXiv preprint arXiv:1707.05612. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2020. Generative adversarial networks. Communications of the ACM, 63(11):139–144. Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models. Advances in neural information processing systems, 33:6840– 6851. 11859 Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Qiang Liu, et al. 2023. Language is not all you need: Aligning perception with language models. arXiv preprint arXiv:2302.14045. Kuang-Huei Lee, Xi Chen, Gang Hua, Houdong Hu, and Xiaodong He. 2018. Stacked cross attention for image-text matching. In Proceedings of the European Conference on Computer Vision, pages 201– 216. Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems, 33:9459–9474. Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. 2023a. Blip-2: Bootstrapping language-image pretraining with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597. Kunpeng Li, Yulun Zhang, Kai Li, Yuanyuan Li, and Yun Fu. 2019. Visual semantic reasoning for imagetext matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4654–4662. Yongqi Li, Xinyu Lin, Wenjie Wang, Fuli Feng, Liang Pang, Wenjie Li, Liqiang Nie, Xiangnan He, and TatSeng Chua. 2024. A survey of generative search and recommendation in the era of large language models. arXiv preprint arXiv:2404.16924. Yongqi Li, Nan Yang, Liang Wang, Furu Wei, and Wenjie Li. 2023b. Generative retrieval for conversational question answering. Information Processing & Management, 60(5):103475. Yongqi Li, Nan Yang, Liang Wang, Furu Wei, and Wenjie Li. 2023c. Learning to rank in generative retrieval. arXiv preprint arXiv:2306.15222. Yongqi Li, Nan Yang, Liang Wang, Furu Wei, and Wenjie Li. 2023d. Multiview identifiers enhanced generative retrieval. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6636–6648. ACL. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In Computer Vision– ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740–755. Springer. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2023. Visual instruction tuning. arXiv preprint arXiv:2304.08485. Leigang Qu, Meng Liu, Da Cao, Liqiang Nie, and Qi Tian. 2020. Context-aware multi-view summarization network for image-text matching. In Proceedings of the 28th ACM International Conference on Multimedia, pages 1047–1055. Leigang Qu, Meng Liu, Jianlong Wu, Zan Gao, and Liqiang Nie. 2021. Dynamic modality interaction modeling for image-text retrieval. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1104–1113. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. 
Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748–8763. PMLR. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. 2020. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In Proceedings of the International Conference on Knowledge Discovery & Data Mining, pages 3505–3506. Weiwei Sun, Lingyong Yan, Zheng Chen, Shuaiqiang Wang, Haichao Zhu, Pengjie Ren, Zhumin Chen, Dawei Yin, Maarten Rijke, and Zhaochun Ren. 2024. Learning to tokenize for generative retrieval. Advances in Neural Information Processing Systems, 36. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. Advances in neural information processing systems, 27. Yi Tay, Vinh Q Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, et al. 2022. Transformer memory as a differentiable search index. arXiv preprint arXiv:2202.06991. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023a. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. Aaron Van Den Oord, Oriol Vinyals, et al. 2017. Neural discrete representation learning. Advances in neural information processing systems, 30. 11860 Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67–78. Yidan Zhang, Ting Zhang, Dong Chen, Yujing Wang, Qi Chen, Xing Xie, Hao Sun, Weiwei Deng, Qi Zhang, Fan Yang, et al. 2023. Irgen: Generative modeling for image retrieval. arXiv preprint arXiv:2303.10126. Zhedong Zheng, Liang Zheng, Michael Garrett, Yi Yang, Mingliang Xu, and Yi-Dong Shen. 2020. Dual-path convolutional image-text embeddings with instance loss. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 16(2):1–23. Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. 2023. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592. A Implement Details We selected the open-flamingo (Awadalla et al., 2023) with the 3B parameters as our model’s backbone. The visual encoder of the open-flamingo is a 12-layer visual transformer, while its language model is based on MPT-1B3. We adopted the deepspeed (Rasley et al., 2020) training framework to train the model on 4×24GB NVIDIA A5000 GPUs. We froze the visual encoder and fine-tuned the language model as well as cross-attention layers. We employed the Adam optimizer, setting a learning rate of 1e-4 and a batch size of 64 for each GPU. On the Flickr30K dataset, our training included 1,000K steps for learning to memorize and 3,000K steps for learning to retrieve. 
For the MS-COCO dataset, these numbers were increased to 2,000K and 6,000K steps, respectively. To accelerate the memorization of images without associated queries, we adopt a common technique from generative text retrieval and generate pseudo-queries for these images. More specifically, we train the MLLM by feeding it images and generating the corresponding queries from the training set, and then utilize the trained model to predict pseudo-queries for images in the test set. We add these pseudo pairs to the training set for the training phases of GRACE. We trained GRACE several times to confirm that the improvement is not a result of random chance, and we report the middle run. The training duration was approximately 12 hours for Flickr30K and 24 hours for MS-COCO.
3 https://huggingface.co/anas-awadalla/mpt-1b-redpajama-200b.
B Beam Size Analysis
GRACE relies on beam search to obtain top-k retrieval results. We conducted detailed experiments to understand the impact of varying beam sizes, and the findings are illustrated in Figure 6. The atomic identifier is excluded from this experiment as it only requires one generation step, so the beam size will not affect its performance.
Figure 6: The retrieval performance of different identifiers varies by beam size in terms of Recall@1 and Recall@10.
In terms of Recall@1, increasing the beam size exhibits marginal benefits. This observation aligns with our expectations, as the candidates admitted by a larger beam generally score lower, diminishing their likelihood of being the top result. In terms of Recall@10, we observed a notable improvement in performance with the expansion of the beam size. This enhancement is attributed to the inclusion of candidates that would otherwise be missed in scenarios with a more constrained beam size.
C Image Tokenization

GRACE               Flickr30K
                    R@1    R@5    R@10
Image tokenization  50.4   73.2   76.9
Table 3: Results for GRACE with image tokenization on Flickr30K.

As a part of future work, we also assess the effectiveness of using visual tokens as identifiers. Specifically, we follow the IRGen method (Zhang et al., 2023) to discretize each image into 4 visual tokens. The results of this evaluation are presented in Table 3. It is evident from the results that visual tokens as identifiers outperform most other types of identifiers. This is attributed to the fact that visual tokens are derived from the content of the image, demonstrating the potential of image tokenization in cross-modal retrieval.
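Returning to the pseudo-query augmentation described in Appendix A, a rough sketch of that loop is given below; `query_generator` is a hypothetical handle on the caption/query-generation model, not an actual API of the released code.

```python
# Rough sketch of the pseudo-query augmentation in Appendix A: a model is
# first trained to produce queries from training images, then used to label
# test-set images so every image has at least one (query, identifier) pair
# for the retrieval stage. `query_generator` is a hypothetical callable.

def augment_with_pseudo_queries(test_images, identifiers, query_generator):
    pseudo_pairs = []
    for image, identifier in zip(test_images, identifiers):
        pseudo_query = query_generator(image)     # e.g. a caption-style query
        pseudo_pairs.append({
            "prompt": "Predict the image identifier corresponding to the "
                      f"given query: {pseudo_query}",
            "target": identifier,
        })
    return pseudo_pairs
```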
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1161–1172 August 11-16, 2024 ©2024 Association for Computational Linguistics Linguistically Conditioned Semantic Textual Similarity Jingxuan Tu and Keer Xu and Liulu Yue Bingyang Ye and Kyeongmin Rim and James Pustejovsky Department of Computer Science Brandeis University Waltham, Massachusetts, USA {jxtu,keerxu,liuluyue,byye,krim,jamesp}@brandeis.edu Abstract Semantic textual similarity (STS) is a fundamental NLP task that measures the semantic similarity between a pair of sentences. In order to reduce the inherent ambiguity posed from the sentences, a recent work called Conditional STS (C-STS) has been proposed to measure the sentences’ similarity conditioned on a certain aspect. Despite the popularity of C-STS, we find that the current C-STS dataset suffers from various issues that could impede proper evaluation on this task. In this paper, we reannotate the C-STS validation set and observe an annotator discrepancy on 55% of the instances resulting from the annotation errors in the original label, ill-defined conditions, and the lack of clarity in the task definition. After a thorough dataset analysis, we improve the C-STS task by leveraging the models’ capability to understand the conditions under a QA task setting. With the generated answers, we present an automatic error identification pipeline that is able to identify annotation errors from the C-STS data with over 80% F1 score. We also propose a new method that largely improves the performance over baselines on the C-STS data by training the models with the answers. Finally we discuss the conditionality annotation based on the typed-feature structure (TFS) of entity types. We show in examples that the TFS is able to provide a linguistic foundation for constructing C-STS data with new conditions. 1 Introduction Semantic textual similarity (STS) is an essential NLP task that measures the semantic similarity between two sentences (Agirre et al., 2012). It is also a popular benchmark for developing tasks such as text embedding learning (Conneau and Kiela, 2018; Reimers and Gurevych, 2019; Thakur et al., 2021) and language understanding (Wang et al., 2018). While the STS datasets have been developed and improved over the past years (Agirre et al., 2013, 2014, 2015, 2016; Cer et al., 2017), the task itself Figure 1: A problematic example from the C-STS dataset. The binarity of the condition cannot be mapped to a 5-point similarity scale. The label can be subjective depending on how much inference is made from the context. No guideline on the scenario when the information regarding the condition is missing. still suffers from sentence ambiguity and subjectivity to judgment (Deshpande et al., 2023). A new task called Conditional STS (C-STS) has been proposed to resolve those issues (Deshpande et al., 2023). It is designed to disambiguate the similarity between two sentences by measuring the similarity on a given condition. An accompanying dataset was also proposed to test models on the CSTS task. Despite the popularity of C-STS, we observe certain limitations in the C-STS dataset that could hinder the understanding and proper evaluation of models on this task. As illustrated in Figure 1, these limitations primarily revolve around annotation errors, ill-defined conditions, and a general lack of clarity in task definition. 
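For reference, a single C-STS instance pairs two sentences with a condition and a 1–5 similarity label. The record below reuses the tennis example discussed later in Table 1; the field names are an assumption about the released data format, not the dataset's actual schema.

```python
# One C-STS instance, shown as a plain record. The sentences and condition
# come from the example later shown in Table 1; field names are assumed.
instance = {
    "sentence1": "Female tennis player, standing on one foot after returned the ball.",
    "sentence2": "A tennis player is getting ready to hit the ball at a tennis match.",
    "condition": "number of people",
    "label": 3,   # original label; this paper's reannotation gives 5 (equal count)
}
```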
Taking into account the significance of these issues, we intend to improve the C-STS dataset by addressing the existing problems that we observed. We start by reannotating the C-STS validation set. By identifying an apparent annotation error rate of 55%1 in their validation set, we analyze the provenance of the errors and discrepancies between the original and relabeled datasets. To further explore the utility of the condition and how it is understood by language models, we treat 1Calculated from the comparison between the original and reannotated labels. 1161 it as a Question Answering (QA) task and leverage large language models (LLMs) to generate the answer to the question that is constructed from the condition. We find that the LLM-generated answers can better capture the similarity between two sentences and fit closely to our reannotated labels by having a higher Spearman’s Correlation. Based on this finding, we propose an approach to identify potential annotation errors from the C-STS dataset utilizing the LLM-generated answers, achieving over 80% F1 score on the validation set. We also propose a new method to improve the C-STS task by training the models with the answers. We show that both supervised and generative models can efficiently and effectively learn the condition information encoded in the answers, improving the performance over baselines by a large margin. Finally, we discuss a new annotation specification of the conditionality that aims to improve the formulation of the conditions with a more concrete semantic base. We use the entity type identified from the sentence pair as the surface condition text that is described by its underlying typed-feature structure (TFS) (Carpenter, 1992; Copestake, 2000; Penn, 2000). We exemplify that TFS-based conditions can be successfully adopted to sentence pairs from the current C-STS dataset. We summarize the main contributions of this paper as threefold. We reannotate the C-STS validation set and propose an error identification pipeline that can be applied to the whole dataset to identify potential annotation errors and ambiguities; we propose a QA-facilitated method that largely improves the model performance on the C-STS task; we discuss using TFS as a new annotation specification to improve the conditionality in C-STS dataset with a more concrete semantic base. We make the source code and dataset publicly available.2 2 Related Work and Background Semantic similarity tasks The semantic similarity between texts has long been a key issue in understanding natural language. Agirre et al. (2012) proposed the first STS task that measures the similarity between a sentence pair. Following this, Agirre et al. (2013) proposed the second STS task that covered more text genres in the dataset. There are also STS tasks (Agirre et al., 2014, 2015, 2016; Cer et al., 2017) with focuses on measuring sentence similarity under a multilingual and cross-lingual 2https://github.com/brandeis-llc/L-CSTS setting. Abdalla et al. (2023) proposed a new text similarity task that measures the semantic relatedness between two sentences. Conditional STS More related to our work, the C-STS task (Deshpande et al., 2023) introduced conditions on top of the traditional STS tasks, and it measured the sentence similarity regarding the given condition. The C-STS dataset includes 18,908 instances. Each instance contains a sentence pair, a condition, and a scalar for the similarity score on the 5-point Likert scale (Likert, 1932). 
In this paper, we conduct the annotation and experiments on the C-STS validation set that consists of 2,834 instances. Deshpande et al. (2023) evaluated the C-STS dataset on different baselines such as SimCSE (Gao et al., 2021) and GPT models (Brown et al., 2020; OpenAI, 2023) by training or prompting with the sentence pairs and the conditions directly. However, our QA-based method uses the generated answers as the model input. QA-facilitated tasks Question answering tasks are useful for extracting and inferring information from the texts that is relevant to the question. Recent work utilized QA to improve other NLP tasks. Eyal et al. (2019) and Deutsch et al. (2021) applied QA as an automatic evaluation metric for the summarization. Gunasekara et al. (2021) used QA to improve the summarization directly. Other works involved the application of QA for data augmentation (Mekala et al., 2022) and question generation (Tu et al., 2022a,b). In this paper, we apply QA to generate condition-based answers for error identification and to improve models on the C-STS task. Figure 2: Distribution of top 10 frequent features and entities from the conditions in the C-STS dataset. For the singleton with no explicit mention of the feature, we default the condition features from this group to type. 1162 3 Reannotating C-STS We analyze the dataset and describe the annotation process for relabeling the C-STS validation set.3 3.1 Condition analysis We analyze the composition of the condition texts in the dataset. We observe that the majority of the conditions are short phrases in the format of [feature] of [entity] (e.g., the color of animals) or simply a singleton [entity] (e.g., the hobby). We plot the distributions of frequent features and entities from the condition texts in the full C-STS dataset in Figure 2, we notice that the dataset is skewed by having a long tail distribution of the conditions. The top 10 frequent features and entities appear in 88% and 45% of the total conditions respectively. Within the top 10 lists, Type and Number are the dominating features; People and Animal are the dominating entities. 3.2 Annotation Analysis To understand how conditions affect the human judgment of sentence similarity, we conduct a pilot annotation study on 150 instances sampled based on the frequency of condition features (e.g., type, number, etc.) in the C-STS training set. Annotators are asked to reannotate those instances following the public C-STS annotation guideline. We measure the agreement between the original labels and reannotated labels and find a low agreement of 40% accuracy (exact label match) and 50.4 Spearman’s Correlation. We characterize the common issues that cause the annotation divergence in Table 1. The similarity of the sentences under number conditions cannot be mapped to a 5-point scale due to the binarity of the value comparison (e.g., 1 = 1, 1 ̸= 5). This issue also happens to other condition features such as gender and age. The condition can also be ambiguous, especially when it is a singleton. In the second example, the similarity between the two mentions of the table can be subjective, based either on the type of table or multiple features associated with the table such as shape, size, etc. The original C-STS task does not specify how much inference from the context is allowed to judge the similarity. This increases the label inconsistency between the annotators. 
In the third example, although the room type is not explicitly mentioned in the first sentence, we can still confidently infer that it is a bathroom because of the mention of toilet and sink in the context. In the last example, the condition can be invalid if the information regarding the condition cannot be extracted or inferred from the sentence.
3.3 Condition-aware Annotation on C-STS
We reannotate the C-STS validation set to fix common annotation errors and resolve the aforementioned issues that cause the low agreement score. The annotation was done by 4 researchers and graduate students from the linguistics and computer science departments of a US-based university. Each annotator is familiar with the C-STS annotation guideline and is well trained through the trial annotation on the 150-instance pilot set. To resolve the issues identified from the pilot study, we ask annotators to follow the additional annotation rules detailed below.
Incommensurable mapping For binary conditions, only labels 5, 1, or 3 are permitted, representing equal, unequal, and possibly equal (e.g., comparing 3 and several under a number condition).
Ambiguous condition Given the intuition that type is always the primary feature in comparing the similarity between two entities, if conditions are singletons or have no features, we default them to the type of [entity].
Inference degree Annotators are only allowed to draw direct inferences about implicitly mentioned information with high confidence. For example, snow hill indicates the weather, tennis indicates the instruments being used, etc.
Invalid condition We annotate invalid instances with the label -1 and exclude those instances from the reannotated dataset.
After removing 214 instances with invalid conditions, we create a relabeled C-STS validation set that consists of 2,620 samples. Figure 3 shows the label distribution of the original and relabeled C-STS validation set. Compared to the original set, the new annotation contains more extreme labels such as 1 and 5, while labels in between such as 2, 3, and 4 are less frequent. This is due to the high frequency of instances with binary condition features in the original dataset.

Sentence Pair | Condition | Label (original / reannotated) | Issue
Female tennis player, standing on one foot after returned the ball. / A tennis player is getting ready to hit the ball at a tennis match. | number of people | 3 / 5 | Incommensurable Mapping
An oak table and chairs in a dining room, with a doorway to the kitchen. / Three couches positioned around a coffee table in a living room. | the table | 1 / 3 | Ambiguous Condition
A toilet in a stall with a sink attached and a console attached to the lid. / A bathroom with tiled walls, a toilet, sink and a garbage can in it. | room type | 2 / 4 | Inference Degree
A cat hissing as it attempts to fit itself into a bowl that it is to big to fit in. / A black cat with a red tag sitting down with a bookshelf in the background. | color of animal | 2 / -1 | Invalid Condition
Table 1: Examples with common issues that cause the judgment divergence between the original and reannotated labels. Text that is relevant to the conditions is highlighted.

Figure 3: The similarity score distribution of the original and relabeled C-STS validation set.
3 The original labels of the C-STS test set are not publicly available, so we use the validation split for further annotation and experiments in this paper.
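The additional annotation rules above lend themselves to a simple consistency check. The sketch below encodes the binary-condition and invalid-condition rules; it is our own illustration rather than tooling actually used by the annotators.

```python
# Sanity check for relabeled scores under the added annotation rules.
# The set of "binary" condition features is an assumption based on the
# examples discussed in the text (number, gender, age).
BINARY_FEATURES = {"number", "gender", "age"}

def valid_label(condition_feature: str, label: int) -> bool:
    """-1 marks an invalid condition (the instance is later dropped);
    binary features may only receive 1, 3, or 5; everything else 1-5."""
    if label == -1:
        return True
    if condition_feature in BINARY_FEATURES:
        return label in {1, 3, 5}
    return 1 <= label <= 5

assert valid_label("number", 3)
assert not valid_label("number", 4)
assert valid_label("color", 2)
```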
4 QA for C-STS With the consideration on scaling the relabeling task to the full C-STS dataset, we explore effective approaches to identifying potential mislabeled instances automatically. We apply QA as a pre-task for identifying information from the sentences that is relevant to the condition, and leverage LLMs to generate answers from condition-transformed questions. We show that the generated answers correlate better to the reannotated labels, and can be used as a reliable intermediate resource for identifying potential annotation errors from the original C-STS dataset. 4.1 Answer Generation GPT prompting For each instance, we start by transforming its condition into a question with the format What is [condition]?.4 In order to generate high-quality answers, we conduct the QA task with LLMs under a prompting fashion. Each prompt consists of a brief instruction, the original sentence and the condition-transformed question (Appendix A.1). In the instruction, we ask the model to summarize each answer into a word or phrase to reduce potential noise and hallucinated content (Bouyamourn, 2023). We experiment with GPT-3.5 (Brown et al., 2020) and GPT-4 (OpenAI, 2023) to generate answers. We use the OpenAI API with versions gpt-3.5-turbo-1106 and 4We convert the condition text to lowercase and remove the period at the end of the text. gpt-4-0125-preview. Answer quality analysis We evaluate the quality of the answers that are generated from the two models on 200 instances sampled from the C-STS validation set. We ask two annotators to measure the quality of the answer on a 5-point Likert scale from unrelated to very accurate. Answers from GPT-3.5 have a Spearman’s Correlation of 4.54 and accuracy of 75% (labels with 5), While the Spearman and accuracy of GPT-4 answers are 4.07 and 69.5% respectively. We observe from the data that although GPT-4 is a more recent and proficient model, it tends to avoid generating direct answers with no clear instructions or contexts. Although this mechanism can help the model reduce hallucination, it misses what we consider as “relevant answers” in our task. (1) A baby is swaddled and someone is putting his hands on it. [the age of child] GPT-3.5: Infant, GPT-4: Not specified In example 1, the answer from GPT-3.5 is infant, which implies a rough range of the age, while GPT4 refuses to generate an answer because the actual child age is not explicitly mentioned in the text. Since our task involves measuring the text similarity of answers, we choose to use GPT-3.5 because it tends to generate more content-enriched answers than GPT-4. Answer correlation analysis Given the high quality of the GPT-generated answers, we evaluate the correlation between the answers and the reannotated labels. We first run GPT-3.5 to generate the answers for all the instances from the C-STS validation set. Then we encode each answer into an embedding using the GPT embedding encoder with version text-embedding-ada-002, and compute the cosine similarity between the embeddings of the answers from the same instance. We calculate the Spearman’s Correlation between the answer 1164 Figure 4: Answer generation and error identification pipeline on the C-STS validation set. similarities and reannotated labels (55.44), and between the original labels and reannotated labels (49.22). The result shows that the cosine similarity of GPT-generated answers correlates more closely with the reannotated labels, and thus can better reflect the similarity of the sentence pairs on the given condition. 
4.2 Error Identification With the analysis that answers generated from GPT3.5 are of high quality and correlate better with the reannotated labels, we propose a method to automatically identify potential annotation errors from the C-STS dataset. As illustrated in Figure 4, the generated answers are the input to our error identification pipeline that consists of three steps: (1) clustering answers into groups with different topics; (2) encoding and ranking answer pairs in each cluster and mapping the similarity ranks to the label from 1 to 5; (3) identifying error candidates based on the difference between original and new rank labels. Answer clustering Most of the generated answers can be grouped into different topics (e.g, answers related to numbers, colors, etc.), we apply K-means (Arthur and Vassilvitskii, 2007) to cluster the answer pairs. Each answer pair is concatenated and then encoded into a single embedding for the clustering. The purpose of this step is to rank the similarity of the answers more accurately by clustering similar answer pairs together. Answer similarity mapping We encode each answer from a pair into an embedding and compute the cosine similarity between the two embeddings. Within each answer cluster, We rank all the answer pairs based on their embedding similarity and map the ranking of each answer pair to a new label (called rank label) on a scale from 1 to 5 based on a predefined ranking distribution. Error candidate selection We identify the potential error candidates by comparing the original Distribution No Cluster Clustered F1 Spear. F1 Spear. Best K Even 74.3 52.9 74.5 60.8 3 Original 75.4 55.4 76.6 59.6 10 Reannotated 79.4 49.1 82.4 65.7 10 Table 2: Error identification results with or without kmeans clustering under different distribution approaches on the test split of the relabeled C-STS validation set. The results under clustered are from the cluster number (Best K) selected in the range of 1-20 that yields the highest F1 score. labels to the rank labels. For each answer pair, if its rank label is different from the original label, we choose this instance as a candidate. 4.3 Evaluation We evaluate our error identification pipeline on the C-STS validation set. We use half of the dataset to generate answer clusters and ranking distributions, and the other half to evaluate three ranking distributions with different number of clusters. For the even distribution, we map the sorted answer pair rankings evenly to rank labels from 1 to 5 (e.g, 20% of lowest ranked instances has rank label 1). For the original or Reannotated label distribution, we map the rankings based on the distribution of original/reannotated labels. We use precision, recall, and F1 as the evaluation metrics. 4.3.1 Results We show the evaluation results in Table 2. Our pipeline is effective in identifying potential annotation errors with the baseline F1 of 74.3, significantly higher than the 55% error rate in the C-STS validation set. The distribution from the reannotated label performs the best among all three distributions, suggesting the label consistency in the relabeled data. Comparing to no cluster, ranking within the answer clusters improves the most on the reannotated label distribution, boosting F1 from 79.4 to 82.4. As a positive byproduct, clustering 1165 Sentence Pair Condition Response Issue A couple of small pieces of cake sitting on top of a white plate. A paper plate with a sandwich and a slice of pizza, both partly eaten. 
the amount of plates Two small pieces of cake One plate GPT hallucination A group of people on a hill looking a city and two people are flying a kite. Man playing with kite far up in the sky, showing only the deep blue sky. the kite Recreational activity High-flying entertainment Vague condition A plate full of cut salad with a fork and a glass full of cold drink with ice. A dish which contains cauliflower and meat is on top of a wooden tray. presence of meat No meat Meat included Semantic mismatch Table 3: Examples with common issues that cause the unidentified annotation errors. can also improve the Spearman’s Correlation between the rank labels and reannotated labels, indicating that answers clustered into optimal number of groups can help produce more accurate and correlated rank mapping of the similarity. Overall, we show the effectiveness of our error identification pipeline. It can also be easily extended and applied to identify errors from the rest of the C-STS dataset with the help from the generated answers. 4.3.2 Analysis We briefly characterize the cases where annotation errors are not identified by the pipeline. The complete examples are shown in Table 3. Incorrect answer The incorrectness of the generated answers can be caused by: (1) hallucination from the GPT model; (2) vague or invalid condition. For example, in the sentence A couple of cake ... a white plate., the GPT wrongly answers two pieces of cake to the condition the amount of plates. In another example, the condition the kite is not specific enough for both annotators and the GPT model, so the generated answer is highly depended on subjectivity. These mistakes lead to an inaccurate answer similarity and thus cause a misaligned rank label. Semantic mismatch Error identification relies on the ranking of the answer pair similarity. However, answers even with the opposite meaning can have a high similarity in the embedding space. In an example, the answers are No meat and Meat included, which are opposite to each other. However, the embeddings of the two answers have a high similarity due to the overlapped token meat. 5 Experiments We propose a new method that can improve the C-STS task by utilizing the LLM-generated answers. Instead of using the sentence pairs and the conditions directly as the model input, our method decomposes C-STS into two subtasks: generating answers that encode the essential semantic information about the condition, and learning the simModel Method Spearman Pearson Biencoder SimCSEBASE 49.6 48.5 SimCSELARGE 71.7 70.8 QA-SimCSEBASE 73.9 73.4 QA-SimCSELARGE 75.9 75.4 Triencoder SimCSEBASE 0.8 1.7 SimCSELARGE 12.8 13.2 QA-SimCSEBASE 73.9 73.4 QA-SimCSELARGE 73.4 73.1 Crossencoder SimCSEBASE 37.2 38.7 SimCSELARGE 43.0 43.3 QA-SimCSEBASE 71.4 71.1 QA-SimCSELARGE 72.9 72.3 GPT-3.5 Base Prompt 9.1 13.5 QA Prompt 66.1 64.8 GPT-4 Base Prompt 64.2 63.3 QA Prompt 64.4 60.2 Table 4: Evaluation results on the test split of the C-STS relabeled set. We compare our methods (QA-based) with baselines under different model settings. ilarity score between the answer pair. We evaluate our method and compare with baseline models (Deshpande et al., 2023) under both fine-tuning and prompting settings. We randomly select 70% instances from the reannotated C-STS validation set for training and the remaining 30% for testing. 5.1 Model Setup Fine-tuning models We evaluate our method on three encoding configurations, cross-encoder, biencoder (Reimers and Gurevych, 2019) and triencoder (Deshpande et al., 2023). 
Unlike the baselines that encode sentences directly, our method chooses to encode the answers on all three encoding configurations. Cross-encoder encodes the concatenation of the answer pair and condition all together, while bi-encoder concatenates the condition to each answer and encodes them separately. Tri-encoder has three encoders that encode sentences and the condition all separately. We use supervised SimCSE (Gao et al., 2021), one of the best-performing embedding models as the base sentence encoder for all three encoding configurations. 1166 We fine-tune each model on the training set and evaluate on the testing set. We use Spearman’s Correlation as the primary evaluation metric. Prompting models We compare our method with LLM baselines under a zero-shot prompt learning setting. We evaluate the results on GPT-3.5 and GPT-4. Similar to the experiments on fine-tuned models, instead of asking about the sentences directly, we formulate the prompt by instructing the model to score the similarity between the answers regarding the condition (Appendix A.1). 5.2 Results We show the model results in Table 4. For all models, our method (QA-based) improves the baselines by a large margin (73.1 improvement on triencoder), indicating the usefulness of model learning with answers. Fine-tuned models generally perform better than LLMs models, suggesting that the C-STS task is sensitive to the in-domain training. Compared with the large version of the models, our method makes more improvement to the base models. For example in the bi-encoder setting, the improvement for large and base model baselines is 4.2 and 24.3 respectively. This indicates the effectiveness of the method, especially on models with smaller sizes and fewer parameters. This may be due to the answers being already encoded with relevant semantic information, thus reducing the reasoning complexity for the small models. This similar pattern also applies to LLM baselines, where GPT-4 performs much better than GPT-3.5 on the base prompt. However on the QA prompt, GPT-3.5 achieves slightly better result than GPT-4. QA transforms C-STS into an easier sentence similarity task that enables the use of more cost or resource efficient models without harming the performance much. Figure 5: Model (SimCSE with bi-encoder and GPT) evaluation results on original and relabeled C-STS validation set. Method Fine-tuning No condition Inference SimCSEBASE 49.6 -0.4 1.2 SimCSELARGE 71.7 4.4 1.0 QA-SimCSEBASE 73.9 73.3 49.4 QA-SimCSELARGE 75.9 73.5 54.7 Table 5: Evaluation results (Spearman) from bi-encoder models under different training settings. Fine-tuning: using models fine-tuned on the training data; No condition: encoding answers only for fine-tuning; Inference: inferencing results on the untuned models. 5.3 Analysis Curse from the mislabeled data We evaluate the effect of different label sets on the model performance. In Figure 5, we show the model results on the C-STS validation set with original or reannotated labels. We notice that for all models, performance on the reannotated labels is increased by at least 40% over the original label (except GPT3.5). This suggests that the original C-STS dataset may not be able to truly reflect the capability of the existing models. The noise and vagueness from the original labels pose “challenges” to models to learn linguistic patterns. Among the LLM baselines, we notice that C-STS is particularly difficult to GPT3.5 (9.1 Spearman). However, it can be improved significantly with our QA prompt. 
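To make the QA-based bi-encoder of Section 5.1 concrete, a minimal inference-time sketch is given below; the SimCSE checkpoint name, the first-token pooling, and the answer-condition concatenation format are illustrative assumptions rather than the exact experimental setup.

```python
import torch
from transformers import AutoModel, AutoTokenizer

NAME = "princeton-nlp/sup-simcse-roberta-large"  # assumed checkpoint
tok = AutoTokenizer.from_pretrained(NAME)
enc = AutoModel.from_pretrained(NAME).eval()

@torch.no_grad()
def encode(text: str) -> torch.Tensor:
    batch = tok(text, return_tensors="pt", truncation=True)
    # Take the first-token hidden state as the sentence embedding.
    return enc(**batch).last_hidden_state[:, 0]

def qa_bi_encoder_score(answer1: str, answer2: str, condition: str) -> float:
    # Bi-encoder: append the condition to each answer and encode separately.
    e1 = encode(f"{answer1} </s> {condition}")
    e2 = encode(f"{answer2} </s> {condition}")
    return torch.cosine_similarity(e1, e2).item()
```

Correlating these scores with the relabeled judgments gives an untuned reference point; fine-tuning the encoder on the training split yields the QA-SimCSE results reported in Table 4.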
Semantic information encoded with QA We evaluate how much semantic information can be encoded in the generated answers. As shown in Table 5, the condition is critical to the finetuned baselines (-0.4 to 49.6 on the SimCSEBASE). However, training with no condition has minimal effect on the performance with the QA methods (2.4 difference on SimCSELARGE). Under the inference setting, the baselines perform poorly due to the additional reasoning complexity from the conditions. However, our method still shows strong performance even without any fine-tuning. This echoes our previous finding that the QA subtask in C-STS is able to improve the efficiency of model training . Non-GPT answer generation performance We evaluate different answer generation models including multiple versions of Flan-T5 (Chung et al., 2022) and GPT-3.5. We fine-tune the QASimCSEBASE on bi-encoder setting with answers generated from different models, and compare the results in Table 6. Except Flan-T5SMALL, all the other models perform better than the baseline which is fine-tuned on sentences only. The results 1167 Generation Model Size (Param.) Spearman Flan-T5SMALL 77M ↓43.7 Baseline‡ N/A 49.6 Flan-T5BASE 248M ↑53.9 Flan-T5LARGE 783M ↑55.8 Flan-T5XL 2.75B ↑62.3 GPT-3.5 175B ↑66.1 Table 6: Evaluation results from QA-SimCSEBASE biencoder with difference answer generation models. ‡: Fine-tuning on sentences only. suggest that our method is robust and does not necessarily rely on proprietary models such as GPT3.5. Flan-T5XL is able to achieve comparable results to GPT-3.5 with only 1.5% of the parameter size. Even the base version of Flan-T5 with 248M parameters performs better than the baseline. 6 Discussion: Improving Conditionality Although the idea of C-STS is highly appreciated, we notice that the current C-STS dataset has certain issues ranging from the errors in the annotation to the lack of rigor in the condition definition. In this section, we discuss a new annotation specification of conditionality based on a word’s lexical attribute value features, or Typed Feature Structure (TFS) (Carpenter, 1992; Copestake, 2000), and exemplify the annotation of the new conditionality on several sentence pairs from the C-STS dataset. 6.1 TFS as the Condition A TFS is a data structure that can be used to represent systematic linguistic information about both words and phrases in language. For lexical items, the feature structure is defined as a set of attributes and their values for a word type. Each feature can have a distinct value for an object of that type (Pustejovsky and Batiukova, 2019). For example, in the term small table, the value of the feature SIZE for the object table is small. In order to adopt the TFS to construct conditions in the C-STS, we use the word/entity type as the condition, and use the values of feature structure to annotate the similarity score regarding the condition. Consider Figure 6 as an example. We set the entity type Animal as the condition, and create a feature structure for each of the sentences. The final similarity score is calculated from the weighted sum of the individual similarity label annotated for each non-empty feature. The feature type is the primary feature that contributes the most to the final similarity score. It is worth noting that TFS-based conditionality is highly extendable and customizable depending on the annotation needs. The condition can be selected from any node in a linguistic type hierarchy (e.g., animal to mammal to dog in WordNet (Miller, 1994)). 
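Concretely, a TFS-style condition and the weighted-sum scoring of Figure 6 might be represented as follows; the feature names, weights, and per-feature labels are illustrative assumptions rather than a fixed specification.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class TFS:
    """Typed feature structure for one entity mentioned in a sentence."""
    type_: str                                    # primary feature, e.g. "cat"
    features: Dict[str, Optional[str]] = field(default_factory=dict)

def weighted_similarity(per_feature_labels: Dict[str, int],
                        weights: Dict[str, float]) -> float:
    """Combine per-feature similarity labels (1-5) into a final score.

    The primary `type` feature carries the largest weight, mirroring
    Figure 6; the weight values themselves are left to the annotation design.
    """
    used = {f: w for f, w in weights.items() if f in per_feature_labels}
    z = sum(used.values())
    return sum(w * per_feature_labels[f] for f, w in used.items()) / z

# e.g., two "Animal" TFSs whose type matches but whose color/activity differ:
# weighted_similarity({"type": 5, "color": 1, "activity": 2},
#                     {"type": 0.6, "color": 0.2, "activity": 0.2})  # -> 3.6
```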
Correspondingly, the feature set for each condition can also be identified from existing lexical resources such as Schema.org5 and ConceptNet (Speer et al., 2016). Lastly, one can also decide how much weight from each feature needs to be assigned for calculating the final similarity score. We leave the details on the TFS design to future research. Figure 6: Illustration of the feature structures for the condition Animal. Similarity score is calculated from the weighted sum of the similarity label for each feature. 6.2 Qualitative Analysis We apply the TFS to annotate example sentence pairs. TFS conditionality enables a finer-grained mapping from entity semantic similarity to the final label, and mitigates the binarity from the evaluation on a single condition feature. (2) S1: On the plate there is croissant sandwich ... S2: ... next to a hamburger on a green plate. Condition: the food, Score: 2.9 Consider example 2. Although the type of food is different (croissant sandwich v.s. hamburger), other features have high similarity: both food items are in the sandwich category (i.e., category is the supertype of the type of the food) and held in a plate; the amount of food are both one dish. While most of the entity features have similar or identical values, the primary feature balances it out and maps the final score to a relatively low similarity. TFS also improves the clarity and objectivity of the conditions. Even when the entity type is the same, other feature values inferred from the sentence are able to differentiate the entities. 5https://schema.org/docs/about.html 1168 (3) S1: A little girl is posing with a baseball bat ... S2: A kid is holding a baseball in a glove ... Condition: the activity, Score: 3.8 In example 3, both sentences indicate the type of the activity is baseball, but participant and instrument are different: the first sentence mentions a girl with a bat; while the second one omits the gender of the kid and the instrument is glove. 7 Conclusion In this paper, we made a comprehensive analysis and improvement to the C-STS task. With the reannotation effort on the original C-STS data, we identified and resolved annotation errors and discrepancies that could hinder the evaluation of the task. We showed that C-STS can be naturally treated as a two-step reasoning task. We applied QA to accomplish the first reasoning step by producing high-quality and correlated answers, and showed that the generated answers can be used effectively to automatically identify annotation errors and improve the C-STS task under both supervised and prompting model settings. Finally, we proposed to use the typed-feature structure in C-STS to construct more semantically informed conditions. We believe that our work has led to a better execution of the C-STS task. We hope that our analysis and improvement on the C-STS can facilitate further developments by future researchers. Limitation We reannotate the C-STS validation set and show that our method can improve the model performance on the test split of the validation set. We primarily use our reannotated labels as the gold for the analysis and modeling. Although we have four annotators who are well trained for this work, there might still exist unintentional errors or subjective judgment. Due to the limitations of the resources, we could not scale the manual annotation to the full dataset by the time the paper is written. Finally we decide to use the validation set because the original labels for the test set are not public. 
However, our dataset analysis (§3) on the 150 instances from the train set does show that the errors and issues do exist in the whole C-STS dataset. We believe that our error identification method can facilitate a more efficient reannotation work, and our modeling results and TFS-based conditionality can be generalized to the rest of C-STS data. Acknowledgements This work was supported in part by NSF grant 2326985 to Dr. Pustejovsky at Brandeis University. Thanks to Yifei Wang and Zhengyang Zhou for providing mathematical background knowledge on the error identification section of the paper. Thank you to Jin Zhao for all the support. Thanks the reviewers for their comments and suggestions. The views expressed herein are ours alone. References Mohamed Abdalla, Krishnapriya Vishnubhotla, and Saif Mohammad. 2023. What makes sentences semantically related? a textual relatedness dataset and empirical study. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 782–796, Dubrovnik, Croatia. Association for Computational Linguistics. Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Iñigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, German Rigau, Larraitz Uria, and Janyce Wiebe. 2015. SemEval-2015 task 2: Semantic textual similarity, English, Spanish and pilot on interpretability. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 252–263, Denver, Colorado. Association for Computational Linguistics. Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. SemEval-2014 task 10: Multilingual semantic textual similarity. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 81–91, Dublin, Ireland. Association for Computational Linguistics. Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2016. SemEval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 497–511, San Diego, California. Association for Computational Linguistics. Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. SemEval-2012 task 6: A pilot on semantic textual similarity. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 385– 393, Montréal, Canada. Association for Computational Linguistics. Eneko Agirre, Daniel Cer, Mona Diab, Aitor GonzalezAgirre, and Weiwei Guo. 2013. *SEM 2013 shared 1169 task: Semantic textual similarity. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, pages 32–43, Atlanta, Georgia, USA. Association for Computational Linguistics. David Arthur and Sergei Vassilvitskii. 2007. kmeans++: the advantages of careful seeding. In ACM-SIAM Symposium on Discrete Algorithms. Adam Bouyamourn. 2023. Why LLMs hallucinate, and how to get (evidential) closure: Perceptual, intensional, and extensional learning for faithful natural language generation. 
In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 3181–3193, Singapore. Association for Computational Linguistics. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc. Robert L Carpenter. 1992. The logic of typed feature structures. (No Title). Daniel Cer, Mona Diab, Eneko Agirre, Iñigo LopezGazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1–14, Vancouver, Canada. Association for Computational Linguistics. Hyung Won Chung, Le Hou, S. Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Wei Yu, Vincent Zhao, Yanping Huang, Andrew M. Dai, Hongkun Yu, Slav Petrov, Ed Huai hsin Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction-finetuned language models. ArXiv, abs/2210.11416. Alexis Conneau and Douwe Kiela. 2018. SentEval: An evaluation toolkit for universal sentence representations. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Ann Copestake. 2000. Definitions of typed feature structures. Natural Language Engineering, 6(1):109–112. Ameet Deshpande, Carlos Jimenez, Howard Chen, Vishvak Murahari, Victoria Graf, Tanmay Rajpurohit, Ashwin Kalyan, Danqi Chen, and Karthik Narasimhan. 2023. C-STS: Conditional semantic textual similarity. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 5669–5690, Singapore. Association for Computational Linguistics. Daniel Deutsch, Tania Bedrax-Weiss, and Dan Roth. 2021. Towards question-answering as an automatic metric for evaluating the content quality of a summary. Transactions of the Association for Computational Linguistics, 9:774–789. Matan Eyal, Tal Baumel, and Michael Elhadad. 2019. Question answering as an automatic evaluation metric for news article summarization. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3938–3948, Minneapolis, Minnesota. Association for Computational Linguistics. Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894–6910, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Chulaka Gunasekara, Guy Feigenblat, Benjamin Sznajder, Ranit Aharonov, and Sachindra Joshi. 2021. 
Using question answering rewards to improve abstractive summarization. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 518–526, Punta Cana, Dominican Republic. Association for Computational Linguistics. Rensis Likert. 1932. A technique for the measurement of attitude scales. Dheeraj Mekala, Tu Vu, Timo Schick, and Jingbo Shang. 2022. Leveraging QA datasets to improve generative data augmentation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9737–9750, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. George A. Miller. 1994. WordNet: A lexical database for English. In Human Language Technology: Proceedings of a Workshop held at Plainsboro, New Jersey, March 8-11, 1994. OpenAI. 2023. Gpt-4 technical report. ArXiv, abs/2303.08774. Gerald Penn. 2000. The algebraic structure of attributed type signatures. Ph.D. thesis, Citeseer. James Pustejovsky and Olga Batiukova. 2019. The lexicon. Cambridge University Press. 1170 Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. Robyn Speer, Joshua Chin, and Catherine Havasi. 2016. Conceptnet 5.5: An open multilingual graph of general knowledge. In AAAI Conference on Artificial Intelligence. Nandan Thakur, Nils Reimers, Johannes Daxenberger, and Iryna Gurevych. 2021. Augmented SBERT: Data augmentation method for improving bi-encoders for pairwise sentence scoring tasks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 296–310, Online. Association for Computational Linguistics. Jingxuan Tu, Eben Holderness, Marco Maru, Simone Conia, Kyeongmin Rim, Kelley Lynch, Richard Brutti, Roberto Navigli, and James Pustejovsky. 2022a. SemEval-2022 task 9: R2VQ – competencebased multimodal question answering. In Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022), pages 1244–1255, Seattle, United States. Association for Computational Linguistics. Jingxuan Tu, Kyeongmin Rim, and James Pustejovsky. 2022b. Competence-based question generation. In Proceedings of the 29th International Conference on Computational Linguistics, pages 1521–1533, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics. A Appendix A.1 Prompts We include various prompts that we used for the experiments in this section. Figure 7 shows the prompt for the answer generation on GPT and FlanT5 models. Figure 8 and Figure 9 show the prompts for LLM baselines and our method. We instruct the LLMs to score based on answers instead of questions. A.2 Answer Clustering We plot the change of the error identification results with different number of clusters in Figure 10. The reannotated distribution consistently performs Figure 7: GPT prompt for the answer generation. Figure 8: GPT prompt for the LLM baselines that is adopted from (Deshpande et al., 2023). Figure 9: GPT prompt for our QA method on LLM. 
better than the other two. The optimal cluster number is 10, 10, 3 for reannotation, original, and even distributions respectively. Figure 10: Error identification performance (F1 score) with different number of clusters on three distribution methods. We show the answer topics identified from the clusters in Table 7. We use the cluster setting that produces the best result when the number of clusters is 10 and summarize the topics from each cluster. Most clusters capture the topics or patterns uniformly from the answers, such as color and food. A few clusters have a mix of topics. 1171 1 2 3 4 5 food various topics number color object 6 7 8 9 10 gender, animal location activity room related activity Table 7: Topic from answer Clusters. Model Input Output gpt-3.5-turbo-0125 $0.0005 / 1K tokens $0.0015 / 1K tokens gpt-4-0125-preview $0.01 / 1K tokens $0.03 / 1K tokens text-embedding-ada-002 $0.00010 / 1K tokens N/A Table 8: Pricing of GPT models used in the paper. A.3 Model Details We use OpenAI API to run GPT models. The pricing for the models used in the paper is shown in Table 8. We fine-tune SimCSEBASE and SimCSELARGE with different encoding configurations on a single Titan Xp GPU. We use the same hyperparameter setting with the baseline models in (Deshpande et al., 2023). The training time is less than 10 minutes. We use Flan-T5 with different sizes for inference only. We run Flan-T5SMALL and FlanT5BASE on CPU machines, and run Flan-T5LARGE and Flan-T5XL on a Titan Xp GPU. 1172
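For reference, the clustering and rank-mapping steps of Section 4.2 (and the cluster-number sweep of A.2) can be sketched as follows, assuming answer-pair embeddings and pair similarities have already been computed; the even rank-label mapping is used here for simplicity.

```python
import numpy as np
from sklearn.cluster import KMeans

def rank_labels_even(sims: np.ndarray) -> np.ndarray:
    """Map similarity ranks to labels 1-5 with an even 20% split per label."""
    order = np.argsort(np.argsort(sims))   # 0-based rank of each pair
    return (1 + 5 * order // len(sims)).clip(1, 5)

def error_candidates(pair_embs, sims, original_labels, n_clusters=10):
    """Flag pairs whose rank label disagrees with the original label.

    pair_embs: embeddings of the concatenated answer pairs (for clustering);
    sims: cosine similarity between the two answers of each pair;
    n_clusters=10 matches the best K reported in A.2.
    """
    clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(pair_embs)
    sims = np.asarray(sims)
    rank_label = np.zeros(len(sims), dtype=int)
    for c in range(n_clusters):
        idx = np.where(clusters == c)[0]
        rank_label[idx] = rank_labels_even(sims[idx])
    return np.where(rank_label != np.asarray(original_labels))[0]
```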
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11862–11875 August 11-16, 2024 ©2024 Association for Computational Linguistics Self-Training with Pseudo-Label Scorer for Aspect Sentiment Quad Prediction Yice Zhang1,3, Jie Zeng1∗, Weiming Hu1∗, Ziyi Wang1∗, Shiwei Chen1,2, and Ruifeng Xu1,2,3† 1 Harbin Institute of Technology, Shenzhen, China 2 Peng Cheng Laboratory, Shenzhen, China 3 Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies [email protected],[email protected] Abstract Aspect Sentiment Quad Prediction (ASQP) aims to predict all quads (aspect term, aspect category, opinion term, sentiment polarity) for a given review, which is the most representative and challenging task in aspect-based sentiment analysis. A key challenge in the ASQP task is the scarcity of labeled data, which limits the performance of existing methods. To tackle this issue, we propose a self-training framework with a pseudo-label scorer, wherein a scorer assesses the match between reviews and their pseudo-labels, aiming to filter out mismatches and thereby enhance the effectiveness of selftraining. We highlight two critical aspects to ensure the scorer’s effectiveness and reliability: the quality of the training dataset and its model architecture. To this end, we create a humanannotated comparison dataset and train a generative model on it using ranking-based objectives. Extensive experiments on public ASQP datasets reveal that using our scorer can greatly and consistently improve the effectiveness of self-training. Moreover, we explore the possibility of replacing humans with large language models for comparison dataset annotation, and experiments demonstrate its feasibility.1 1 Introduction Aspect-Based Sentiment Analysis (ABSA) aims to recognize aspect-level opinions and sentiments from user-generated content (Pontiki et al., 2014). This problem has consistently attracted interest owing to its proficiency in distilling and summarizing fine-grained opinions from vast data (Do et al., 2019; Nazir et al., 2022; Zhang et al., 2023a). The most representative and challenging task in ABSA is Aspect Sentiment Quad Prediction (ASQP) (Cai ∗The three authors contribute equally to this work. † Corresponding Authors 1We release our code and data at https://github. com/HITSZ-HLT/ST-w-Scorer-ABSA. Oh, and the service was spot on. (service, service general, spot on, positive) Small would feed most of my family. (NULL, restaurant miscellaneous, Small, positive) I'll come home and throw some spices on it to help it out a bit. (spices, food quality, NULL, positive) I will only return if I have to. (NULL, restaurant general, NULL, positive) Pseudo-label Scorer 0.9996 0.0055 0.0064 0.0081 Figure 1: Illustration of our pseudo-label scorer. et al., 2021; Zhang et al., 2021a). This task formulates aspect-level opinions and sentiments as quadruples, each consisting of an aspect term, aspect category, opinion term, and sentiment polarity. For example, given a review “the food is great and reasonably priced,” the output of ASQP would be {(food, food_quality, great, positive), (food, food _prices, reasonably priced, positive)}. As a fine-grained problem, ABSA faces the challenge of insufficient labeled data, which is particularly severe in the ASQP task. This issue limits the performance of existing models. Many efforts explore data augmentation methods to alleviate this issue. 
They synthesize new samples by modifying existing ones (Li et al., 2020; Hsu et al., 2021), applying self-training techniques (Wang et al., 2021), or utilizing generative methods (Yu et al., 2023; Deng et al., 2023; Zhang et al., 2023b). However, a significant limitation of these methods is that the synthetic samples often inevitably contain mismatches between sentences and labels, which can adversely affect model learning. To reduce such mismatches, this paper introduces a pseudo-label scorer for data augmentation. As illustrated in Figure 1, the scorer assesses the degree of match between the review and its pseudolabel. If we have a sufficiently robust scorer, we can filter out all mismatched samples, thereby sig11862 nificantly enhancing the effectiveness of data augmentation. We propose that the effectiveness and reliability of this scorer hinge on two critical aspects: (1) the quality of the training dataset and (2) its architecture along with the training objective. We discuss these two aspects below. For the first aspect, previous works typically produce negative labels by modifying real labels using heuristic rules (Wang et al., 2021; Mao et al., 2022). However, such negative labels are usually simplistic and patterned, limiting the scorer’s learning. To overcome this limitation, we create a humanannotated comparison dataset. Specifically, we train an ASQP model with existing labeled data, use it to infer several pseudo-labels for unlabeled data, and then have human annotators choose the most appropriate pseudo-labels. The labels chosen by annotators are designated as positive labels, while the rest as negative labels. Our dataset, in contrast to the rule-based datasets, is more challenging and better aligned with human judgment. For the second aspect, previous works formalize label-scoring as a question-answering problem (Wang et al., 2021) or embed the discriminative matching token into the label (Mao et al., 2022). However, our findings suggest that these methods underperform in complex tasks like ASQP, due to their limited capacity to model the interactions between reviews and pseudo-labels. Recent works in preference optimization reveal that the language model itself can serve as a scorer (Rafailov et al., 2023; Yuan et al., 2023). This motivates us to use the conditional likelihoods that a generative model assigns to a pseudo-label as the measure of its quality. Compared with the previous methods, this approach enables the scorer to examine the plausibility of a pseudo-label in a token-by-token fashion, thus offering a more comprehensive and effective scoring. We then fine-tune this scorer on our comparison dataset using ranking-based objectives. Upon developing this pseudo-label scorer, we apply it in a data augmentation framework, specifically opting for the self-training framework due to its simplicity. We conduct extensive experiments on public ASQP datasets to examine its effectiveness and further investigate the following questions: (1) how does the pseudo-label scorer perform using our comparison data and model architecture?; (2) is it feasible to replace humans with large language models to annotate the comparison data?; and (3) how to utilize the scorer to filter out lowquality samples? Furthermore, inspired by Ma et al. (2023), we extend the application of this scorer, employing it as a reranker for multiple candidate labels, and assess its impact and effectiveness. 
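The two ingredients sketched above, likelihood-based scoring with a generative model and ranking-based fine-tuning on comparison data, can be illustrated as follows; this is a minimal sketch with a T5-large backbone, not the exact training code.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-large")

def label_log_likelihood(review: str, label_seq: str) -> torch.Tensor:
    """Sum of log p(y_t | y_<t, x): the match score up to a monotone transform."""
    x = tok(review, return_tensors="pt")
    y = tok(label_seq, return_tensors="pt").input_ids
    logits = model(input_ids=x.input_ids,
                   attention_mask=x.attention_mask,
                   labels=y).logits
    logp = torch.log_softmax(logits, dim=-1)
    return logp.gather(-1, y.unsqueeze(-1)).squeeze(-1).sum()

def listwise_loss(review: str, positive: str, negatives: list) -> torch.Tensor:
    """Listwise ranking objective: -log softmax over candidate log-likelihoods."""
    scores = torch.stack([label_log_likelihood(review, c)
                          for c in [positive, *negatives]])
    return -torch.log_softmax(scores, dim=0)[0]
```

At inference time the same quantity (optionally length-normalized) serves as the match score used for filtering pseudo-labeled data and for reranking candidate labels.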
Our contributions are summarized as follows: (1) To the best of our knowledge, we are the first to apply a pseudo-label scorer to data augmentation in the ASQP task. (2) We investigate how to enhance the scorer’s effectiveness and reliability from both dataset and model architecture perspectives. (3) We empirically demonstrate that the proposed pseudo-label scorer can significantly and consistently enhance the performance of existing models. 2 Background Task Evolution. Aspect-Based Sentiment Analysis (ABSA) is a fine-grained sentiment analysis problem, aiming at recognizing user opinions and sentiments towards specific aspects (Zhang et al., 2023a). Within ABSA, aspects are generally defined as aspect categories or aspect terms. Aspect categories are predefined categories like food_quality and service_general in restaurant reviews, or laptop_portability and display_quality in laptop reviews. Aspect terms are explicit mentions of these aspect categories in the text. Based on this, Pontiki et al. (2014) formally define four subtasks: aspect category detection, aspect category sentiment classification, aspect term extraction, and aspect term sentiment classification. These tasks sequentially identify aspects and determine their corresponding sentiments. Opinion term refers to words or phrases that express subjective sentiments. Sentiment expressions towards aspects often rely on these opinion terms, making them vital clues for recognizing aspects and their sentiment polarities. Consequently, many researchers focus on aspect and opinion coextraction (Wang et al., 2016; Li and Lam, 2017; Fan et al., 2019; Chen et al., 2020; Zhao et al., 2020). Building on this, Peng et al. (2020) propose the Aspect Sentiment Triplet Extraction (ASTE) task, conceptualizing an aspect-level sentiment as a triplet consisting of an aspect term, opinion term, and sentiment polarity. Following ASTE, Cai et al. (2021); Zhang et al. (2021a) extend the triplet by incorporating the aspect category, evolving it into Aspect Sentiment Quad Prediction (ASQP). ASQP 11863 is currently the most representative and challenging task within ABSA. ABSA Methods. Early efforts in ABSA primarily focus on identifying one or two sentiment elements. They mainly design specific model structures to establish interactions between sentiment elements and their contexts, as well as among different sentiment elements (Wang et al., 2016; Li and Lam, 2017; Ma et al., 2017; Chen et al., 2017; Xu et al., 2018; Xue and Li, 2018; Li et al., 2018; Fan et al., 2019; Zeng et al., 2019). Recent efforts have shifted towards compound tasks like ASTE and ASQP, proposing various end-to-end methods. These includes machine reading comprehensionbased methods (Chen et al., 2021; Mao et al., 2021), table-filling methods (Wu et al., 2020; Chen et al., 2022; Zhang et al., 2022), span-based methods (Xu et al., 2021; Li et al., 2022; Cai et al., 2021), and generative methods (Yan et al., 2021; Zhang et al., 2021b,a; Mao et al., 2022; Gao et al., 2022; Hu et al., 2023a,b; Gou et al., 2023). Among these, generative methods have emerged as mainstream due to their universality and capacity to exploit rich label semantics. Generative Aspect-Based Sentiment Analysis, abbreviated as GAS, is a unified generative framework proposed by Zhang et al. (2021b). The core idea of this framework is to transform sentiment elements into a label sequence and then use a SEQ2SEQ model to learn the dependencies between the input text and the label sequence. 
For the ASQP task, we can convert it into a label sequence using the following template: seqlabel = c1 | s1 | a1 | o1 ; · · · ; cn | sn | an | on, where ci denotes the aspect category, si denotes the sentiment polarity, ai denotes the aspect term, and oi denotes the opinion term. 3 Comparison Dataset We need to construct a comparison dataset to facilitate the training and evaluation of the pseudo-label scorer. This dataset comprises samples each containing a review sentence accompanied by several pseudo-labels, where one is the positive label and the others are negative labels. We train the scorer by requiring it to assign high scores to positive labels and low scores to negative labels. Previous works typically produce negative labels using heuristic rules (Wang et al., 2021; Mao et al., 2022). They randomly modify elements in the existing positive labels, such as altering boundaries or conducting substitutions. Such negative labels are patterned and easily distinguishable, limiting the learning potential of the scorer. Therefore, this paper employs human annotators to construct this comparison dataset. 3.1 Data Preparation Aligned with existing ASQP datasets (Cai et al., 2021; Zhang et al., 2021a), we collect reviews from two domains: Restaurant and Laptop. The restaurant reviews are from the Yelp Dataset2, and the laptop reviews are from the Amazon Laptop Dataset3 (Ni et al., 2019). We segment these reviews into individual sentences. Next, we employ the existing labeled dataset to train an ASQP model (Zhang et al., 2021b) and then utilize this model to generate four pseudo-labels for each review sentence via beam search. 3.2 Annotation Process For a review sentence and its four pseudo-labels, annotators are presented with six options. The first four options correspond to the four pseudolabels, the fifth option indicates that none of the pseudo-labels are appropriate, and the sixth option suggests that the review sentence does not express any sentiment or the expressed sentiment is difficult to infer. When annotators choose the fifth option, they are required to write an alternative label. The annotation process is organized into multiple batches, each containing about 200 samples. To ensure accuracy, every sample is independently annotated by three different annotators. In cases of discrepancy among their annotations, a fourth annotator steps in to resolve the inconsistency. Furthermore, at the conclusion of each batch, the four annotators meet to discuss and reconcile any disagreements. Each annotator is provided with annotation guidelines4 and existing labeled ASQP datasets. In instances where conflicts arise between the two, we prioritize adherence to the guidelines. AI Annotation. Choosing the most appropriate label, while much simpler than annotating an ASQP 2https://www.yelp.com/dataset 3https://nijianmo.github.io/amazon/ index.html 4https://alt.qcri.org/semeval2014/ task4/data/uploads/semeval14_absa_ annotationguidelines.pdf and https://alt. qcri.org/semeval2016/task5/data/uploads/ absa2016_annotationguidelines.pdf. 11864 Datasets P1 P2 P3 P4 P5 P6 Total ACOS-Laptop-Comp 853 149 88 44 167 204 1505 ACOS-Rest-Comp 894 109 49 23 122 1003 2200 ACOS-Laptop-Comp-AI 1744 410 213 122 0 259 2748 ACOS-Rest-Comp-AI 1796 276 110 60 0 1620 3862 ASQP-Rest15-Comp-AI 1585 369 157 94 0 1223 3428 ASQP-Rest16-Comp-AI 1560 431 114 76 0 1337 3518 Table 1: Statistics of the comparison datasets. P1-P6 correspond to the number of samples for options 1 to 6, respectively. 
label from scratch, remains a laborious task. Therefore, we explore the feasibility of using ChatGPT as a substitute for human annotators. To ensure the quality of AI annotations, we carefully craft prompts for each ASQP dataset. Additionally, we incorporate three strategies to enhance the annotation process: self-consistency, self-assessment, and rationale augmentation. Details of AI annotation can be found in Appendix A. 3.3 Statistics We construct two human-annotated and four AIannotated comparison datasets. Their basic statistical information is presented in Table 1. In the training phase of the scorer, we exclude samples corresponding to option 6 and reserve a portion of the data as the development set for hyperparameter tuning and model selection. Specifically, for the Restaurant datasets, we set aside 200 samples, and for the Laptop datasets, we allocate 300 samples. Consequently, this leaves around 1,000 training samples in the human-annotated datasets and approximately 2,000 training samples in the AI-annotated datasets. 4 Our Approach 4.1 Pseudo-label Scorer The objective of the pseudo-label scorer is to score the match between a review and a pseudo-label. Previous efforts formalize this scoring task as a question-answering problem (Wang et al., 2021) or embed the discriminative matching token into the label (Mao et al., 2022). However, these methods struggle to effectively capture the interaction between reviews and pseudo-labels. Inspired by recent works in preference optimization (Rafailov et al., 2023; Yuan et al., 2023; Song et al., 2023), we utilize a generative model as the scorer. Given a review sentence x and a pseudo label y, their matching score is quantified by the conditional probability assigned by the generative model: s(x, y) ∝p(y|x) = Y t p(yt|y<t, x). (1) Compared to previous methods, this approach integrates the likelihood of each token in the pseudolabel to derive its overall score, thereby providing a comprehensive and effective scoring. Training. We optimize the pseudo-label scorer on the annotated comparison dataset with the rankingbased training objective. Specifically, we design a simple listwise objective5 as follows: LLIST = −log p(yp|x) Z , (2) Z = p(yp|x) + X yn p(yn|x), (3) where yp denotes the positive label, yn denotes the negative label, and Z is the normalization factor. In addition to the comparison dataset, we also incorporate the original ASQP dataset to further enhance the training of the scorer. Labels from the original ASQP dataset are treated as additional positive labels and are combined with the positive labels from the comparison dataset. We additionally maximize the scores of these positive labels to enhance the scorer. The combined loss function is formulated as follows: L =L1 + αL2, (4) L1 =E(x,Y)∼DCOMPLLIST(x, Y), (5) L2 =E(x,yp)∼DCOMP∪DASQP −log p(yp|x), (6) where DCOMP represents the comparison dataset, DASQP represents the original ASQP dataset, Y denotes the set of several pseudo-labels of the sentence x, and α is a hyperparameter. 4.2 Self-Training with Data Filtering Self-training (Scudder, 1965), a simple and classic semi-supervised technique, can be applied for data augmentation. It consists of three main steps: (1) training an initial model with the existing labeled dataset, (2) using this model to generate pseudolabels for unlabeled data, and (3) finally incorporating these pseudo-labeled data into the labeled dataset. 
However, this method inevitably introduces low-quality pseudo-labels, where the label does not accurately match the given review. To 5Besides the listwise objective, we also explore pointwise and pairwise objectives, which are presented in Appendix B. 11865 overcome this issue, we implement a two-stage filtering process that leverages both the initial model and the pseudo-label scorer. Confidence-based Filtering. We first use the confidence of the initial model in the pseudo-label as the measure of its quality. Thus, we filter out those samples with minimum confidence below a certain threshold. Formally, we retain samples (x, y) satisfying h min t p(yt|y<t, x) i ≥γ1, (7) where γ1 is a hyper-parameter and is empirically set to 0.7. Scorer-based Filtering. Next, we use the pseudolabel scorer to evaluate the remaining samples. We observe that pseudo-labels with low scores are consistently of poor quality. Besides, while samples with high scores generally exhibit good label quality, their sentences tend to be overly simple, offering limited helpfulness for subsequent model training. Therefore, we retain only those samples whose scores fall between thresholds γ2 and γ3, which can be formulated as follows: γ2 ≤s(x, y) ≤γ3. (8) 4.3 Pseudo-label Scorer as Reranker Reranking is originally a concept in information retrieval, referring to the process of rescoring and reranking preliminary candidate results. Ma et al. (2023) show that incorporating a reranking step can enhance performance in information extraction tasks. In this paper, we claim that our pseudo-label scorer can serve as such a reranker. Specifically, for a given review, we first utilize an ASQP model to generate four candidate labels via beam search and then select the best one from these candidates using our pseudo-label scorer. The selected candidate is utilized as the final output. 5 Experiments 5.1 Experiment Setup Datasets. We evaluate our approach on four public ASQP datasets. These datasets originate from the SemEval Challenges (Pontiki et al., 2015, 2016) and Amazon platform during 2017 and 2018. The quad-level annotations are provided by Cai et al. (2021) and Zhang et al. (2021a). Detailed statistics of these datasets are presented in Table 2. Besides, Datasets Train Dev Test #S #Q #S #Q #S #Q ACOS-Laptop 2934 4172 326 440 816 1161 ACOS-Rest 1530 2484 171 261 583 916 ASQP-Rest15 834 1354 209 347 537 795 ASQP-Rest16 1264 1989 316 507 544 799 Table 2: Statistics of four ASQP datasets (Cai et al., 2021; Zhang et al., 2021a). #S and #Q represent the number of sentences and quads. to train the pseudo-label scorer, we construct several comparison datasets, the statistics of which can be found in Table 1. Implementation Details. We utilize T5-large (Raffel et al., 2020) as the backbone for our pseudolabel scorer. During the training phase, we set both the batch size and the number of training epochs to 10. For other hyperparameters, including the learning rate and α, we perform a simple hyperparameter search. Once the scorer is trained, we apply it to score and rank the pseudolabeled samples6. For datasets ACOS-Rest, ASQP-Rest15, and ASQP-Rest16, we retain samples with scores falling within the top 10% to 40%; for the ACOS-Laptop dataset, this range is set from 20% to 50%. From these retained samples, we randomly select 10,000 samples and merge them with the original labeled dataset to form the augmented dataset. To reduce the impact of randomness, we run our approach five times and report the average results. Baselines. 
To validate the effectiveness of the proposed approach, we integrate it into two typical ASQP methods: GAS (Zhang et al., 2021b) and MUL (Hu et al., 2023b). We run these two methods on the augmented dataset and incorporate a reranking step during the inference phase to enhance the predictions. We also benchmark our approach against a range of other methods, including EXTRACT-CLASSIFY (Cai et al., 2021), PARAPHRASE (Zhang et al., 2021a), SEQ2PATH (Mao et al., 2022), DLO/ILO (Hu et al., 2022), LEGOABSA (Gao et al., 2022), MVP (Gou et al., 2023), GENDA (Wang et al., 2023), and CHATGPT (fewshot) (Xu et al., 2023). 6The source for the pseudo-labeled data is identical to that for the comparison data, originating from the Yelp Dataset and Amazon Laptop Dataset. But there is no overlap between them. The ASQP model used for pseudo-labeling is GAS (Zhang et al., 2021b). 11866 Objectives ACOS-Laptop ACOS-Rest Wang et al. (2021) 50.67 56.10 Mao et al. (2022) 64.34 72.00 Ours 67.74 78.50 Table 3: Comparison results of the architecture for the pseudo-label scorer (accuracy, %). Wang et al. (2021) formalize label-scoring as a question-answering problem. Mao et al. (2022) append a discriminative matching token to the label. Annotation Schemes ACOS-Laptop ACOS-Rest NONE 60.53 74.10 HUMANN-1234 63.60 75.00 HUMANN-12345 64.60 76.60 HUMANN-12345* 67.74 78.50 AIANN-1234* 68.67 79.60 Table 4: Experimental results of annotating the comparison dataset (accuracy, %): (1) NONE denotes the approach where neither human nor AI annotations are used, and the pseudo-label with the highest model confidence is selected as the positive label; (2) HUMANN1234 represents the annotation scheme where human annotators choose the best pseudo-label out of four; (3) HUMANN-12345 extends HUMANN-1234 by allowing human annotators to write an additional label when none of the four options are suitable; (4) AIANN-1234 mirrors HUMANN-1234, but with ChatGPT replacing human annotators; (5) methods with * indicate the training of the scorer using both the comparison dataset and the original ASQP dataset. 5.2 Analysis of Pesudo-label Scorer Given the importance of the pseudo-label scorer in our framework, we first undertake an analysis of it, focusing on two key aspects: its model architecture and the training dataset. Model Architecture. We use the conditional likelihood the generative model assigns to a pseudolabel as its scoring metric. To examine the effectiveness of our approach, we conduct experiments on two human-annotated comparison datasets and benchmark our approach against previous methods (Wang et al., 2021; Mao et al., 2022). As Table 3 illustrates, previous methods, especially the question-answering method, perform poorly in the ASQP task. In contrast, our approach achieves a significant advantage, demonstrating its effectiveness. Comparison Dataset. We conduct experiments to compare different annotation schemes and list the results in Table 4. We have the following observaDatasets w/ P6 w/o P6 Kappa Accu Kappa Accu ACOS-Laptop-Comp-AI 62.71 79.20 65.84 86.30 ACOS-Rest-Comp-AI 67.44 80.90 47.12 87.15 Table 5: Consistency between AI- and human-annotated comparison Data (%). P6 refers to samples with option 6 selected. We calculate the consistency both before and after removing these samples. (a) (b) Figure 2: Performance trends of comparison data with increasing data quantity (accuracy, %): (a) results on ACOS-Laptop; (b) results on ACOS-Rest. tions. 
(1) Utilizing humans or AI to annotate the comparison data is crucial, as their performance is noticeably superior to that without annotation. Particularly, allowing human annotators to write a label when no option is suitable can notably enhance performance. (2) Training the scorer with a combination of the comparison data and the original ASQP dataset is more effective than using the comparison data alone. (3) AI-annotated comparison data can achieve even better results than human-annotated comparison data. We conduct a further analysis of AI Annotation. Table 5 presents the consistency between AI- and human-annotated data. Although the consistency is not very high statistically, considering the subjective nature of this task, the quality of AI annotation is acceptable. Additionally, a significant advantage of AI annotation lies in its cost-effectiveness relative to human annotation, enabling the efficient acquisition of a large amount of annotated data. Figure 2 illustrates the performance trends of human- and AI-annotated data relative to their quantities. Although AI-annotated data exhibits 11867 Methods ACOS-Laptop ACOS-Rest ASQP-Rest15 ASQP-Rest16 Pre Rec F1 Pre Rec F1 Pre Rec F1 Pre Rec F1 EXTRACT-CLASSIFY (Cai et al., 2021) 45.56 29.28 35.80 38.54 52.96 44.61 35.64 37.25 36.42 38.40 50.93 43.77 PARAPHRASE (Zhang et al., 2021a) 46.16 47.72 46.93 56.63 59.30 57.93 SEQ2PATH (Mao et al., 2022) 42.97 58.41 DLO (Hu et al., 2022) 43.40 43.80 43.60 60.02 59.84 59.18 47.08 49.33 48.18 57.92 61.80 59.79 ILO (Hu et al., 2022) 44.14 44.56 44.35 58.43 58.95 58.69 47.78 50.38 49.05 57.58 61.17 59.32 LEGO-ABSA (Gao et al., 2022) 45.80 57.70 MVP (Gou et al., 2023) 43.92 61.54 51.04 60.39 GENDA (Wang et al., 2023) 49.74 50.29 50.01 60.08 61.70 60.88 CHATGPT (few-shot) (Xu et al., 2023) 21.72 27.65 24.33 38.39 46.40 42.02 29.66 37.86 33.26 36.09 46.93 40.81 GAS (Zhang et al., 2021b) 43.46 42.69 43.07 59.81 57.51 58.63 47.15 46.01 46.57 57.30 57.82 57.55 + ST 44.35 42.75 43.54 59.95 58.98 59.46 46.86 47.65 47.25 57.95 58.75 58.35 + ST & C-FILTER 45.14 44.00 44.56 60.57 58.82 59.67 49.04 47.77 48.40 59.18 59.72 59.45 + ST & CS-FILTER 46.23 44.41 45.30 63.41 60.00 61.66 + ST & CS-FILTER & RERANK 46.76 45.00 45.86 64.66 61.33 62.95 + ST & CS-FILTER (AI) 46.44 44.01 45.19 62.69 60.24 61.44 50.92 49.86 50.38 60.87 61.30 61.08 + ST & CS-FILTER & RERANK (AI) 47.00 45.05 46.01 63.74 61.25 62.47 51.59 51.90 51.74 62.55 64.31 63.51 MUL (Hu et al., 2023b) 44.38 43.65 44.01 61.22 59.87 60.53 49.12 50.39 49.75 59.24 61.75 60.47 MUL (Our Reproduction) 42.79 41.95 42.45 61.22 59.80 60.50 48.28 49.74 48.99 58.42 60.68 59.52 + ST 43.38 42.98 43.23 61.17 59.89 60.67 47.94 49.21 48.57 57.45 59.37 58.39 + ST & C-FILTER 44.59 43.67 44.13 62.11 60.20 61.14 48.74 49.06 48.90 59.44 61.07 60.25 + ST & CS-FILTER 44.67 43.72 44.19 63.57 60.67 62.09 + ST & CS-FILTER & RERANK 46.88 44.74 45.78 66.18 61.75 63.89 + ST & CS-FILTER (AI) 44.89 44.07 44.47 64.28 61.31 62.76 50.78 51.17 50.97 61.39 62.68 62.03 + ST & CS-FILTER & RERANK (AI) 47.05 45.32 46.17 65.43 61.92 63.63 51.94 52.00 51.97 63.46 64.31 63.88 Table 6: Experimental results on four ASQP datasets (%). C-FILTER indicates the application of confidence-based filtering. CS-FILTER represents the integration of confidence-based and scorer-based filtering. Methods marked with AI indicate that the pseudo-label scorer used is trained on AI-annotated comparison data. 
lower performance at the same quantity, the scalability of AI annotation allows it to catch up and potentially exceed human-annotated data's performance when using more data. For instance, more than 2,000 AI-annotated samples can equal or outperform 1,000 human-annotated samples. Consequently, we can conclude that for the ASQP task, it is feasible to replace humans with AI to annotate the comparison data. 5.3 Analysis of Self-Training Main Results. We develop a self-training framework using the pseudo-label scorer, with the experimental results presented in Table 6. According to these results, our approach substantially and consistently improves the performance of existing ASQP methods (Zhang et al., 2021b; Hu et al., 2023b). Specifically, GAS achieves F1-score improvements of 2.94%, 4.32%, 5.17%, and 5.96% across the four datasets, averaging at 4.60%; MUL achieves F1-score improvements of 3.72%, 3.39%, 2.98%, and 4.36% across these datasets, averaging at 3.61%. Upon integrating our approach, both GAS and MUL outperform previous methods. These results demonstrate the effectiveness of our approach. Furthermore, we have the following observations. (1) The two-stage filtering process, namely CS-FILTER, greatly enhances the effectiveness of self-training. In most datasets, it results in over 2% improvement compared to self-training alone, highlighting the importance of data filtering in the self-training framework. (2) Incorporating a rerank step can further improve performance, by around 1%. (3) Using AI-annotated data in downstream self-training can attain results comparable to those using human-annotated data. This further indicates the feasibility of replacing human annotators with AI for comparison data annotation. (4) ChatGPT performs poorly on the ASQP task, suggesting that using it directly for this task does not fully leverage its capabilities. Conversely, using it for comparison data annotation effectively exploits its strengths. (5) It can be noted that our filtering strategy offers relatively limited improvements on ACOS-Laptop. We attribute this to potential inconsistency between its ASQP annotations and our comparison annotations. A more detailed discussion is available in Further Analysis. Figure 3: Performance of GAS on the augmented dataset under different match scores (F1-score, %). Effect of Match Score. Our approach relies on the match scores output by the pseudo-label scorer for data filtering. We conduct experiments to examine the impact of these scores on self-training performance. Figure 3 illustrates that the performance incrementally increases with increasing match scores. Nevertheless, beyond a certain threshold, further increases in match scores lead to a decline in performance. This phenomenon confirms our hypothesis that samples with too low scores suffer from poor label quality, adversely affecting model learning, and samples with too high scores tend to be overly simple, providing limited helpfulness for subsequent model training. Effect of Data Quantity. The quantity of pseudo-labeled samples is another important factor in the effectiveness of self-training. We conduct experiments to analyze its impact. As illustrated in Figure 4, there is an overall upward trend in performance with increased data quantity. Notably, this trend is more stable and pronounced following two-stage filtering, underscoring the necessity of data filtering. Furthermore, we notice a decrease in self-training performance when the number of augmented samples exceeds 20,000.
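To make the data-filtering step concrete, the following is a minimal sketch (our illustration, not the authors' released code) of the two-stage CS-FILTER described above: a confidence filter on the ASQP model's own predictions, followed by a scorer filter that keeps only pseudo-labeled samples whose match score falls inside a middle band, discarding likely-noisy (too low) and overly easy (too high) samples. The thresholds, the score band, and all field names are illustrative assumptions.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PseudoSample:
    sentence: str
    pseudo_label: str
    confidence: float   # ASQP model's confidence in its own prediction (assumed available)
    match_score: float  # pseudo-label scorer's score for (sentence, pseudo_label)

def cs_filter(samples: List[PseudoSample],
              min_confidence: float = 0.7,
              score_band: Tuple[float, float] = (0.3, 0.9)) -> List[PseudoSample]:
    """Two-stage filtering: confidence-based (C-FILTER), then scorer-based."""
    low, high = score_band
    kept = []
    for s in samples:
        if s.confidence < min_confidence:       # stage 1: drop low-confidence pseudo-labels
            continue
        if not (low <= s.match_score <= high):  # stage 2: drop too-low (noisy) and too-high (trivial) matches
            continue
        kept.append(s)
    return kept

In practice the band boundaries would be tuned on held-out data; the key design choice is that the scorer filter is two-sided rather than a single lower cut-off.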
This suggests that there is a limit to improving performance by simply increasing data quantity. Balancing diversity and label quality to enhance the effectiveness of selftraining warrants further exploration in subsequent research. 5.4 Further Analysis Comparison Data as Additional Labeled Data. One feasible approach for utilizing comparison data is to treat each sample along with its positive label as an additional labeled ASQP sample. We analyze the effectiveness of this approach, and (a) Performance on ACOS-Laptop (b) Performance on ACOS-Rest (c) Performance on ASQP-Rest15 (d) Performance on ASQP-Rest16 Figure 4: Performance of GAS under different numbers of augmented samples (F1-score, %). the results in Table 7 reveal: (1) this approach can improve performance, with human-annotated comparison data outperforming AI-annotated data; (2) under conditions of equal data volume, it is superior to self-training without data filtering, indicating that the quality of comparison data is better than that of pseudo-labeled data; and (3) however, it falls significantly short of self-training with data filtering. These findings suggest that utilizing comparison data to train a pseudo-label scorer is more effective than simply treating it as additional labeled data. Pseudo-label Scorer as the ASQP Model. The pseudo-label scorer is architecturally a generative model and can potentially be used as an ASQP 11869 Methods Laptop Rest Rest15 Rest16 Avg GAS 43.07 58.63 46.57 57.55 + ST & C-FILTER (1k) 43.16 59.10 47.45 59.48 +0.84 + ST & C-FILTER (2k) 43.66 59.03 47.24 58.33 +0.61 + ST & C-FILTER (10k) 44.56 59.67 48.40 59.45 +1.57 + ST & CS-FILTER (10k) 45.30 61.66 50.38 61.08 +3.15 + COMPDATA (1k) 43.51 60.20 + COMPDATA (2K, AI) 42.99 59.77 48.15 59.73 +1.20 Table 7: Experimental results of using comparison data as additional labeled data (F1-scorer, %). The humanand AI-annotated comparison datasets contain about 1,000 and 2,000 samples, respectively. Methods Laptop Rest Rest15 Rest16 GAS 43.07 58.63 46.57 57.55 + Our Approach 45.86 62.95 + Our Approach (AI) 46.01 62.47 51.74 63.51 PSEUDO-LABEL SCORER 43.93 60.66 PSEUDO-LABEL SCORER (AI) 44.06 60.00 51.59 61.49 Table 8: Experimental results of using pseudo-label scorer as the ASQP model (F1-scorer, %). model. We evaluate this possibility and list the results in Table 8. A surprising finding is that directly using the scorer to predict quads achieves good performance, though it generally falls short of using it for filtering and ranking. This suggests that, besides training the scorer, leveraging comparison data to enhance the ASQP model is a promising direction, deserving of in-depth investigation in future research. Assessing Label Quality in ASQP Data. Beyond assessing pseudo-labeled data, our scorer can assess the quality of existing ASQP data. We conduct a statistical analysis of match scores for ASQP samples and list the results in Table 9. Our analysis reveals relatively low match scores for the ACOS-laptop dataset, suggesting either poor annotation quality or low consistency with our comparison data. We manually review 100 samples with match scores low 0.1 and find that 73% of the data contradicts the annotation guidelines, including 44% with errors in aspect category annotations, 8% in aspect or opinion term annotations, and 6% in sentiment annotations. Moreover, we experiment with removing samples with low match scores. The results presented in Table 10 show that this removal not only preserves model performance but enhances it. 
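As a companion to the analysis just described, the audit can be sketched as follows (a hypothetical helper, not the authors' implementation): every (sentence, gold label) pair is scored by the pseudo-label scorer, the fraction of samples below several thresholds is reported, and the lowest-scoring fraction is dropped before retraining. The scoring function score_fn, the thresholds, and the removal ratio are assumptions for illustration.

from typing import Callable, Dict, List, Tuple

def audit_and_prune(
    dataset: List[Tuple[str, str]],            # (sentence, gold quad label) pairs
    score_fn: Callable[[str, str], float],     # trained pseudo-label scorer
    thresholds: Tuple[float, ...] = (0.1, 0.3, 0.5, 0.7, 0.9),
    removal_ratio: float = 0.04,               # e.g. drop the lowest-scoring 4%
) -> Tuple[Dict[float, float], List[Tuple[str, str]]]:
    # score every gold sample and sort from lowest to highest match score
    scored = sorted(((score_fn(s, y), s, y) for s, y in dataset), key=lambda t: t[0])
    n = len(scored)
    # fraction of samples below each threshold (the kind of statistic reported in Table 9)
    stats = {t: sum(score < t for score, _, _ in scored) / n for t in thresholds}
    # remove the lowest-scoring fraction and keep the rest (the experiment behind Table 10)
    pruned = [(s, y) for _, s, y in scored[int(n * removal_ratio):]]
    return stats, pruned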
These findings indicate that our scorer is an effective tool for assessing label quality in existing datasets and that the removal of Laptop Rest Rest15 Rest16 < 0.1 19.12% 3.92% 3.24% 1.50% < 0.3 30.30% 7.32% 5.52% 2.37% < 0.5 40.52% 10.46% 7.07% 3.56% < 0.7 52.56% 14.58% 9.59% 5.46% < 0.9 70.25% 25.88% 15.83% 9.43% Table 9: Statistics of match scores in the ASQP training datasets. Ratio of Removal Laptop Rest Rest15 Rest16 Avg 0% 43.07 58.63 46.57 57.55 2% 43.38 59.44 47.20 58.27 +0.62 4% 43.50 59.51 48.09 59.10 +1.10 6% 43.89 59.24 46.78 58.25 +0.59 8% 43.42 59.38 46.68 58.43 +0.52 10% 44.25 59.09 46.57 59.20 +0.82 Table 10: Performance after removing samples with low match scores in the training set (F1-scorer, %). low-quality samples is advantageous. Analysis of Reranking. We present the analysis of the reranking step in Appendix C. 6 Conclusions In this paper, we introduce a pseudo-label scorer for the Aspect Sentiment Quad Prediction (ASQP) task to reduce mismatches in data augmentation. We propose that the effectiveness and reliability of this scorer hinge on two critical aspects: the quality of the training dataset and its model architecture. To this end, we create both human- and AI-annotated comparison datasets and propose a scoring method based on a generative model. Upon developing this scorer, we apply it to data filtering in a self-training framework and further employ it as a reranker to enhance ASQP models. Detailed experiments and analysis demonstrate the effectiveness of our comparison datasets and the proposed architecture. Furthermore, experimental results on four public ASQP datasets reveal that our scorer significantly and consistently improves the performance of existing methods. 11870 Acknowledgements We thank the anonymous reviewers for their valuable suggestions to improve the quality of this work. We also thank Qianlong Wang for the enlightening discussions that significantly contributed to this work. This work was partially supported by the National Natural Science Foundation of China 62176076, Natural Science Foundation of Guangdong 2023A1515012922, the Shenzhen Foundational Research Funding JCYJ20220818102415032, the Major Key Project of PCL2021A06, and Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies 2022B1212010005. Limitations While our approach significantly enhances the effectiveness of data augmentation and improves the performance of existing ASQP models, it also suffers from the following limitations: • Data augmentation generally comprises two pivotal components: data synthesis and quality control. While this paper focuses primarily on the latter, the former is equally vital for the success of data augmentation. Given that models trained on limited labeled data may underperform in certain categories or contexts, targeted data synthesis can mitigate these issues. A comprehensive exploration of both data synthesis and quality control is essential for developing an effective and robust data augmentation framework. • The implementation of our approach necessitates manually annotated comparison data. Although we could use large language models to replace human annotators, crafting and refining prompts still demands meticulous human expertise and is notably time-intensive. We argue that these limitations offer promising directions for future research. References Hongjie Cai, Rui Xia, and Jianfei Yu. 2021. Aspectcategory-opinion-sentiment quadruple extraction with implicit aspects and opinions. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 340–350, Online. Association for Computational Linguistics. Hao Chen, Zepeng Zhai, Fangxiang Feng, Ruifan Li, and Xiaojie Wang. 2022. Enhanced multi-channel graph convolutional network for aspect sentiment triplet extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2974–2985, Dublin, Ireland. Association for Computational Linguistics. Peng Chen, Zhongqian Sun, Lidong Bing, and Wei Yang. 2017. Recurrent attention network on memory for aspect sentiment analysis. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 452–461, Copenhagen, Denmark. Association for Computational Linguistics. Shaowei Chen, Jie Liu, Yu Wang, Wenzheng Zhang, and Ziming Chi. 2020. Synchronous double-channel recurrent network for aspect-opinion pair extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6515– 6524, Online. Association for Computational Linguistics. Shaowei Chen, Yu Wang, Jie Liu, and Yuelin Wang. 2021. Bidirectional machine reading comprehension for aspect sentiment triplet extraction. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14):12666–12674. Yue Deng, Wenxuan Zhang, Sinno Jialin Pan, and Lidong Bing. 2023. Bidirectional generative framework for cross-domain aspect-based sentiment analysis. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12272–12285, Toronto, Canada. Association for Computational Linguistics. Hai Ha Do, PWC Prasad, Angelika Maag, and Abeer Alsadoon. 2019. Deep learning for aspect-based sentiment analysis: A comparative review. Expert Systems with Applications, 118:272–299. Zhifang Fan, Zhen Wu, Xin-Yu Dai, Shujian Huang, and Jiajun Chen. 2019. Target-oriented opinion words extraction with target-fused neural sequence labeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2509–2518, Minneapolis, Minnesota. Association for Computational Linguistics. 11871 Tianhao Gao, Jun Fang, Hanyu Liu, Zhiyuan Liu, Chao Liu, Pengzhang Liu, Yongjun Bao, and Weipeng Yan. 2022. LEGO-ABSA: A prompt-based task assemblable unified generative framework for multi-task aspect-based sentiment analysis. In Proceedings of the 29th International Conference on Computational Linguistics, pages 7002–7012, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. Zhibin Gou, Qingyan Guo, and Yujiu Yang. 2023. MvP: Multi-view prompting improves aspect sentiment tuple prediction. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4380–4397, Toronto, Canada. Association for Computational Linguistics. Ting-Wei Hsu, Chung-Chi Chen, Hen-Hsen Huang, and Hsin-Hsi Chen. 2021. Semantics-preserved data augmentation for aspect-based sentiment analysis. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4417–4422, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Hanxu Hu, Yunqing Liu, Zhongyi Yu, and Laura PerezBeltrachini. 2023a. 
Improving user controlled tableto-text generation robustness. In Findings of the Association for Computational Linguistics: EACL 2023, pages 2317–2324, Dubrovnik, Croatia. Association for Computational Linguistics. Mengting Hu, Yinhao Bai, Yike Wu, Zhen Zhang, Liqi Zhang, Hang Gao, Shiwan Zhao, and Minlie Huang. 2023b. Uncertainty-aware unlikelihood learning improves generative aspect sentiment quad prediction. In Findings of the Association for Computational Linguistics: ACL 2023, pages 13481–13494, Toronto, Canada. Association for Computational Linguistics. Mengting Hu, Yike Wu, Hang Gao, Yinhao Bai, and Shiwan Zhao. 2022. Improving aspect sentiment quad prediction via template-order data augmentation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 7889–7900, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Kun Li, Chengbo Chen, Xiaojun Quan, Qing Ling, and Yan Song. 2020. Conditional augmentation for aspect term extraction via masked sequence-tosequence generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7056–7066, Online. Association for Computational Linguistics. Xin Li, Lidong Bing, Wai Lam, and Bei Shi. 2018. Transformation networks for target-oriented sentiment classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 946–956, Melbourne, Australia. Association for Computational Linguistics. Xin Li and Wai Lam. 2017. Deep multi-task learning for aspect term extraction with memory interaction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2886–2892, Copenhagen, Denmark. Association for Computational Linguistics. You Li, Yongdong Lin, Yuming Lin, Liang Chang, and Huibing Zhang. 2022. A span-sharing joint extraction framework for harvesting aspect sentiment triplets. Knowledge-Based Systems, 242:108366. Dehong Ma, Sujian Li, Xiaodong Zhang, and Houfeng Wang. 2017. Interactive attention networks for aspect-level sentiment classification. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, IJCAI’17, page 4068–4074. AAAI Press. Yubo Ma, Yixin Cao, YongChing Hong, and Aixin Sun. 2023. Large language model is not a good few-shot information extractor, but a good reranker for hard samples! Yue Mao, Yi Shen, Jingchao Yang, Xiaoying Zhu, and Longjun Cai. 2022. Seq2Path: Generating sentiment tuples as paths of a tree. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2215–2225, Dublin, Ireland. Association for Computational Linguistics. Yue Mao, Yi Shen, Chao Yu, and Longjun Cai. 2021. A joint training dual-mrc framework for aspect based sentiment analysis. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15):13543–13551. Ambreen Nazir, Yuan Rao, Lianwei Wu, and Ling Sun. 2022. Issues and challenges of aspect-based sentiment analysis: A comprehensive survey. IEEE Transactions on Affective Computing, 13(2):845–863. Jianmo Ni, Jiacheng Li, and Julian McAuley. 2019. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 188–197, Hong Kong, China. Association for Computational Linguistics. 
Haiyun Peng, Lu Xu, Lidong Bing, Fei Huang, Wei Lu, and Luo Si. 2020. Knowing what, how and why: A near complete solution for aspect-based sentiment analysis. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):8600–8607. Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Mohammad AL-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orphée De Clercq, Véronique Hoste, Marianna Apidianaki, Xavier Tannier, Natalia Loukachevitch, Evgeniy Kotelnikov, Nuria Bel, Salud María Jiménez-Zafra, and Gül¸sen Eryi˘git. 2016. SemEval-2016 task 5: Aspect based sentiment analysis. In Proceedings of the 10th International 11872 Workshop on Semantic Evaluation (SemEval-2016), pages 19–30, San Diego, California. Association for Computational Linguistics. Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. SemEval-2015 task 12: Aspect based sentiment analysis. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 486–495, Denver, Colorado. Association for Computational Linguistics. Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. SemEval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 27–35, Dublin, Ireland. Association for Computational Linguistics. Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(1). H. Scudder. 1965. Probability of error of some adaptive pattern-recognition machines. IEEE Transactions on Information Theory, 11(3):363–371. Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, and Houfeng Wang. 2023. Preference ranking optimization for human alignment. An Wang, Junfeng Jiang, Youmi Ma, Ao Liu, and Naoaki Okazaki. 2023. Generative data augmentation for aspect sentiment quad prediction. In Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023), pages 128– 140, Toronto, Canada. Association for Computational Linguistics. Qianlong Wang, Zhiyuan Wen, Qin Zhao, Min Yang, and Ruifeng Xu. 2021. Progressive self-training with discriminator for aspect term extraction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 257–268, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Wenya Wang, Sinno Jialin Pan, Daniel Dahlmeier, and Xiaokui Xiao. 2016. Recursive neural conditional random fields for aspect-based sentiment analysis. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 616– 626, Austin, Texas. Association for Computational Linguistics. Zhen Wu, Chengcan Ying, Fei Zhao, Zhifang Fan, Xinyu Dai, and Rui Xia. 2020. Grid tagging scheme for aspect-oriented fine-grained opinion extraction. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2576–2585, Online. Association for Computational Linguistics. Hu Xu, Bing Liu, Lei Shu, and Philip S. Yu. 2018. 
Double embeddings and CNN-based sequence labeling for aspect extraction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 592–598, Melbourne, Australia. Association for Computational Linguistics. Lu Xu, Yew Ken Chia, and Lidong Bing. 2021. Learning span-level interactions for aspect sentiment triplet extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4755–4766, Online. Association for Computational Linguistics. Xiancai Xu, Jia-Dong Zhang, Rongchang Xiao, and Lei Xiong. 2023. The limits of chatgpt in extracting aspect-category-opinion-sentiment quadruples: A comparative analysis. Wei Xue and Tao Li. 2018. Aspect based sentiment analysis with gated convolutional networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2514–2523, Melbourne, Australia. Association for Computational Linguistics. Hang Yan, Junqi Dai, Tuo Ji, Xipeng Qiu, and Zheng Zhang. 2021. A unified generative framework for aspect-based sentiment analysis. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2416–2429, Online. Association for Computational Linguistics. Jianfei Yu, Qiankun Zhao, and Rui Xia. 2023. Crossdomain data augmentation with domain-adaptive language modeling for aspect-based sentiment analysis. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1456–1470, Toronto, Canada. Association for Computational Linguistics. Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. 2023. Rrhf: Rank responses to align language models with human feedback without tears. Biqing Zeng, Heng Yang, Ruyang Xu, Wu Zhou, and Xuli Han. 2019. Lcf: A local context focus mechanism for aspect-based sentiment classification. Applied Sciences, 9(16). Wenxuan Zhang, Yang Deng, Xin Li, Yifei Yuan, Lidong Bing, and Wai Lam. 2021a. Aspect sentiment quad prediction as paraphrase generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9209– 9219, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. 11873 Wenxuan Zhang, Xin Li, Yang Deng, Lidong Bing, and Wai Lam. 2021b. Towards generative aspect-based sentiment analysis. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 504–510, Online. Association for Computational Linguistics. Wenxuan Zhang, Xin Li, Yang Deng, Lidong Bing, and Wai Lam. 2023a. A survey on aspect-based sentiment analysis: Tasks, methods, and challenges. IEEE Transactions on Knowledge and Data Engineering, 35(11):11019–11038. Yice Zhang, Yifan Yang, Meng Li, Bin Liang, Shiwei Chen, and Ruifeng Xu. 2023b. Target-to-source augmentation for aspect sentiment triplet extraction. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12165–12177, Singapore. Association for Computational Linguistics. Yice Zhang, Yifan Yang, Yihui Li, Bin Liang, Shiwei Chen, Yixue Dang, Min Yang, and Ruifeng Xu. 2022. 
Boundary-driven table-filling for aspect sentiment triplet extraction. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 6485–6498, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. He Zhao, Longtao Huang, Rong Zhang, Quan Lu, and Hui Xue. 2020. SpanMlt: A span-based multi-task learning framework for pair-wise aspect and opinion terms extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3239–3248, Online. Association for Computational Linguistics. A Details of AI Annotation We employ ChatGPT7 to replace humans for comparison data annotation. The prompt is depicted in Figure 5. We continuously refine the guidelines and demonstrations in the prompt based on the annotation results. Furthermore, to further enhance annotation quality, we adopt three strategies: • Self-consistency: For each sample, we input it into ChatGPT twice. We keep the sample if the results are consistent. • Self-assessment: We require ChatGPT to evaluate its confidence level on a scale from 1 to 5 after each judgment. We retain only those samples with a confidence level is 5. 7Available at https://chat.openai.com/. The specific model used is gpt-4-1106-preview. Task: Analyze a restaurant review and select the most suitable pseudo-label from the provided options (only codename required). If the review is unrelated to the restaurant (e.g., about beauty salon, medical, repair, or hotel), select 'Irrelevant Domain'. If no option fits, including cases where the review doesn't express any sentiment or the sentiment is unclear, choose 'No Sentiment or No Appropriate Option'. Understand the Pseudo-label Components: - Aspect Category: Consists of an entity label and attribute label, with possible values like {category space}. - Sentiment Polarity: Indicates the sentiment as positive, negative, or neutral. Neutral is used for mild sentiments and doesn't mean objectivity (e.g. “Food was okay, nothing great”). - Aspect Term: The explicit reference in the review to the aspect category, like a named entity, a common noun, or a multi-word term. When an entity is only implicitly referred to (e.g., through pronouns) or inferred in a sentence, it‘s assigned the value 'NULL'. - Opinion Term: The words or phrases expressing sentiments. If the review implies sentiment without explicit words, it's marked as 'NULL'. Key Points to Remember: 1. First, check if the review is relevant and expresses any sentiment. {other guidelines} Examples for Reference: {demonstration examples} Your Task: 1. Review: "There was only 1 time that food came in under a half hour." Pseudo-label options: A. {SERVICE#GENERAL, negative, "NULL", "NULL"} B. {FOOD#STYLE_OPTIONS, negative, "food", "NULL"} C. {SERVICE#GENERAL, neutral, "NULL", "NULL"} D. {FOOD#STYLE_OPTIONS, neutral, "food", "NULL"} Provide your answer as: {"1. Rationale": "Your detailed rationale", "1. Best Choice": "Your choice", "1. Confidence": "Your confidence (1-5)"} Figure 5: Prompt for AI Annotation in ACOS-Rest. • Rationale augmentation: We instruct ChatGPT to provide reasoning and explanation before making its judgment. Additionally, to reduce annotation costs, we integrate four samples into one prompt and annotate them at once. B Ranking Objectives for Training Scorer We optimize the pseudo-label scorer on the annotated comparison dataset and explore three rankingbased training objectives: pointwise, pairwise, and listwise approaches. 
The pointwise approach classifies positive and negative samples separately and can be formulated as follows:
$\mathcal{L}_{\text{POINT}} = -\log p(y_p \mid x) - \sum_{y_n} \log\big(1 - p(y_n \mid x)\big)$, (9)
where $y_p$ denotes the positive label, and $y_n$ denotes the negative label. The pairwise approach focuses on the relative quality of labels. We implement two pairwise training objectives, detailed as follows:
$\mathcal{L}_{\text{PAIR1}} = -\sum_{y_n} \log \sigma\!\Big(\beta \log \frac{p(y_p \mid x)}{p(y_n \mid x)}\Big)$, (10)
$\mathcal{L}_{\text{PAIR2}} = \sum_{y_n} \max\big(0,\; p(y_n \mid x) - p(y_p \mid x)\big)$, (11)
where $\beta$ is a hyper-parameter. Lastly, the listwise approach optimizes the ranking of the entire list. We design a simple listwise training objective, as outlined in Equation 2.
B.1 Experiment Results
Objectives ACOS-Laptop ACOS-Rest
Pointwise 67.93 77.80
Pairwise1 67.13 77.00
Pairwise2 66.87 77.40
Listwise 67.74 78.50
Table 11: Comparison results of four ranking-based objectives (accuracy, %).
Experimental results in Table 11 reveal that the pointwise and listwise objectives outperform the two pairwise objectives, with the listwise objective being slightly better overall. Consequently, we adopt the listwise objective as the default training objective.
C Analysis of Reranking
Laptop Rest Rest15 Rest16 Avg
ST-CS 45.30 61.66 50.38 61.08 54.61
ST-CS & RERANK 45.86 62.95 51.74 63.51 +1.41
ST-CS & RERANK♮ 63.22 75.63 66.27 76.29 +15.75
Table 12: Experimental results of reranking (F1-score, %). ST-CS denotes self-training with confidence- and scorer-based filtering. ♮ indicates the performance achieved using a perfect reranker.
We apply our pseudo-label scorer as a reranker to rescore the candidate labels generated by the ASQP model, selecting the highest-scoring one as the final result. Table 12 shows that this reranking step significantly improves performance on the ASQP task, resulting in an average F1 improvement of 1.41%. Additionally, if we consider a hypothetical perfect reranker that always selects the optimal candidate, Table 12 shows that the performance gain could reach up to 15.75%. This significant potential boost underscores the value of further exploring the reranking step.
Laptop Rest Rest15 Rest16
Best Candidates
1-st 67.70% 69.37% 61.15% 65.11%
2-nd 13.77% 15.44% 17.17% 16.29%
3-rd 10.44% 8.64% 13.48% 11.14%
4-th 8.09% 6.55% 8.19% 7.46%
Scorer's Preferred Choices
1-st 67.62% 75.09% 66.48% 74.08%
2-nd 17.84% 14.03% 17.77% 15.34%
3-rd 8.63% 6.79% 9.20% 6.70%
4-th 5.91% 4.08% 6.55% 5.18%
Table 13: Proportions of the best and preferred candidate labels selected by the reranker.
Furthermore, we rank the four candidate labels obtained via beam search according to their confidence and then analyze the distribution of the best labels and those preferred by our scorer. As illustrated in Table 13, in fewer than 70% of cases, the candidate label with the highest confidence is considered the best. In comparison, our scorer tends to favor candidates with higher confidence, highlighting areas for further improvement in this reranking step.
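To illustrate how the reranking step above can be wired together, here is a hedged sketch in which the pseudo-label scorer rescores the ASQP model's beam-search candidates and the highest-scoring candidate is returned. The model checkpoints are stand-ins, and the scoring function is our simplification (length-normalised conditional log-likelihood of the label given the sentence), not the released implementation.

import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tok = AutoTokenizer.from_pretrained("t5-base")                             # stand-in tokenizer
asqp_model = AutoModelForSeq2SeqLM.from_pretrained("t5-base").to(device)   # stand-in ASQP model
scorer = AutoModelForSeq2SeqLM.from_pretrained("t5-base").to(device)       # stand-in pseudo-label scorer

@torch.no_grad()
def match_score(sentence: str, label: str) -> float:
    """Length-normalised log-likelihood the scorer assigns to a candidate label."""
    enc = tok(sentence, return_tensors="pt").to(device)
    lab = tok(label, return_tensors="pt").input_ids.to(device)
    return -scorer(**enc, labels=lab).loss.item()  # higher = better match

@torch.no_grad()
def rerank(sentence: str, num_candidates: int = 4) -> str:
    enc = tok(sentence, return_tensors="pt").to(device)
    beams = asqp_model.generate(**enc, num_beams=num_candidates,
                                num_return_sequences=num_candidates, max_new_tokens=128)
    candidates = tok.batch_decode(beams, skip_special_tokens=True)
    # pick the candidate the scorer prefers instead of the most confident beam
    return max(candidates, key=lambda c: match_score(sentence, c))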
2024
640
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11876–11896 August 11-16, 2024 ©2024 Association for Computational Linguistics Learning to Generate Answers with Citations via Factual Consistency Models Rami Aly1*, Zhiqiang Tang2, Samson Tan2, George Karypis2 1University of Cambridge 2Amazon Web Services [email protected], {zqtang|samson|karypis}@amazon.com Abstract Large Language Models (LLMs) frequently hallucinate, impeding their reliability in mission-critical situations. One approach to address this issue is to provide citations to relevant sources alongside generated content, enhancing the verifiability of generations. However, citing passages accurately in answers remains a substantial challenge. This paper proposes a weakly-supervised fine-tuning method leveraging factual consistency models (FCMs). Our approach alternates between generating texts with citations and supervised fine-tuning with FCM-filtered citation data. Focused learning is integrated into the objective, directing the fine-tuning process to emphasise the factual unit tokens, as measured by an FCM. Results on the ALCE few-shot citation benchmark with various instruction-tuned LLMs demonstrate superior performance compared to in-context learning, vanilla supervised fine-tuning, and state-of-the-art methods, with an average improvement of 34.1, 15.5, and 10.5 citation F1 points, respectively. Moreover, in a domain transfer setting we show that the obtained citation generation ability robustly transfers to unseen datasets. Notably, our citation improvements contribute to the lowest factual error rate across baselines. 1 Introduction Large language models (LLMs; Brown et al., 2020) have showcased remarkable capabilities in information-seeking tasks. However, concerns persist regarding their trustworthiness, stemming from their opaque nature and factual hallucinations in their responses (Zhang et al., 2023; Mallen et al., 2023). One proposed solution involves the production of citations to retrieved text passages alongside LLM-produced statements (Liu et al., 2023; Rashkin et al., 2023). By integrating citations into generated responses, readers can easily verify * Work done while interning at Amazon Web Services. Question: Who has the highest goals in world football? Retrieved Passages: [1] ...Pelé’s 1281 goals are recognized by FIFA as the highest total achieved by a professional footballer, although the Soccer Statistic Foundation (rssf) recognizes only 767 goals in official mode, occupying the third place after Josef Bican (805) and Romario (772) . . . [2] have beaten Gerd Müller’s record of 85 goals in a year, the Football Association of Zambia . . . Answer (LlaMA-2-Chat-70B): According to IFFHS International Federation of Football History and Statistics, Lionel Messi has the highest goals in world football with 1281 goals [1]. However, some sources, such as the Soccer Statistic Foundation (rssf), recognize only 767 goals in official mode, placing Messi in third place after Josef Bican (805) and Romario (772) [2]. Figure 1: An ALCE-ASQA question with a generated answer prompted via in-context learning. Two error classes are common: information not supported by the sources (red) and incorrect citation to the sources (blue). LLMs statements. The ability to accurately produce citations enables LLMs to generate responses more closely aligned with cited sources, alleviating hallucinations (Gao et al., 2023b; Yue et al., 2023). 
Despite its significance, accurate citation generation proves to be challenging. State-of-the-art LLMs, such as ChatGPT (OpenAI, 2023), and commercial generative chat engines, such as Bing Chat, produce accurate citations only for less than 60% of generated statements (Gao et al., 2023b; Liu et al., 2023). Figure 1 illustrates typical citation errors, including hallucinated statements or citations associated with incorrect claims. Hence, there is a necessity to train LLMs to generate citations accurately. This paper focuses on teaching LLMs to generate citations for retrieval-augmented longform question answering (LFQA), tackling two main challenges: the scarcity of high-quality labeled data at scale and the risk of compromising original language and generalization capacities during fine-tuning for citation generation. 11876 To address these challenges, we present CaLF (Citation Learning via Factual Consistency Models), a fine-tuning strategy that enables LLMs to learn citation generation without sacrificing their language capacities. As illustrated in Figure 2, the cornerstone of our approach is factual consistency models (FCMs; Kryscinski et al., 2020, inter alia), initially introduced as a neural measure of consistency between a claim and its context. We use FCMs to gauge whether cited passages support a generated statement. Our method incorporates FCMs in two designs. Firstly, we propose a weakly-supervised training strategy, where an LLM generates diverse responses with citations, an FCM filters high-quality citation data, and the LLM is fine-tuned on the filtered data. Secondly, we utilize focused learning to adjust the loss contribution of each answer token based on its factual relevance, as measured by an FCM. The intuition is to have the LLM concentrate on tokens related to factual knowledge during fine-tuning, minimizing the impact on its original language capacities. We evaluate CaLF on various LLMs, including Llama2 (Touvron et al., 2023), Mistral-Instruct, and MistralOrca (Jiang et al., 2023). On the ALCE automatic few-shot citation evaluation benchmark (Gao et al., 2023b), CaLF enhances citation metrics over the in-context learning and baseline finetuning, with an average improvement of 34.1 and 15.5 F1, respectively, while maintaining fluency and correctness. All LLMs trained via CaLF , outperform the state-of-the-art model Self-Rag (Asai et al., 2024) and ChatGPT (OpenAI, 2023), with an average improvement of 24.8 and 10.5 citation F1 points, respectively. Domain transfer experiments, testing citation quality on a dataset different from the training dataset, highlight CaLF’s ability to generalize across tasks and domains. Additionally, on the FactScore biography generation benchmark (Min et al., 2023), CaLF demonstrates an improvement in factuality. Finally, human evaluation results indicate that CaLF yields more preferable answers compared to the fine-tuning baseline. 2 Related Work LFQA with Citations To produce citations alongside a response, the generation can be conditioned on a few high-quality in-context examples with embedded citations (Gao et al., 2023b; Li et al., 2023). In contrast, Gao et al. (2023a); Bohnet et al. (2022) propose to edit an already generated A: Tehre exist multiple aa approach for the aareason that the cheifaaa has b[1] Focused Learning Q: Who played Galen in the Planet of the Apes? Retrieved Documents A: In the 1968 film Planet of the Apes, Galen was played by Wright King [2]... Shapley Token Relevance Weights 3. 4. 
Factual Consistency Model Diverse Answer Generation Answer Filtering 2. 1. Retrieved Documents used for iteration k+1 A: In the 1968 film Planet of the Apes, Galen was played by Wright King [2]... A: In the 1968 film Planet of the Apes, Galen was played by Wright King [2]... LLM LLM Figure 2: A schematic view of our iterative citation fine-tuning method CaLF. It uses a factual consistency model to: i) create weakly supervised training instances by filtering diversely sampled responses, ii) adjust the loss contribution of each answer token according to its Shapley relevance for factual consistency prediction. response post-hoc to attribute to retrieved sources, causing computational overhead during inference. Alternatively, proprietary work has explored human preference learning for citation production (Menick et al., 2022; Thoppilan et al., 2022; Nakano et al., 2021). Reinforcement learning from human feedback is expensive and typically more brittle than supervised fine-tuning. Very recently, Asai et al. (2024) incorporate critique tokens, which serve as a feedback and citation mechanism, into GPT-4 obtained instruction-tuning data. These tokens allow for flexible retrieval-augmented generation with citations. In contrast to previous work, CaLF incorporates openly available FCMs as critique models into the fine-tuning process of already instructiontuned LLMs while not modifying inference, maintaining efficiency. Factual Consistency Models FCMs assess whether all the factual information in a claim is consistent w.r.t to the information conveyed in its grounding text. The task shares strong similarities to natural language inference (NLI) (Bowman et al., 2015; Dagan et al., 2005). However, in contrast to NLI, factual consistency is not evaluated on subjec11877 tive or opinionated statements. Work that explores FCMs for improving the generation is mainly constrained to summarization. Aharoni et al. (2023) filter samples from a large-scale dataset according to an FCM while Muller et al. (2023) use an FCM to rerank and select source passages for crosslingual summarization. Tian et al. (2023) incorporate FCMs to improve factuality via direct preference optimization (Rafailov et al., 2023). Instead of using FCMs, Deng et al. (2023) improve the factuality of generations by measuring the cosine similarity between a token’s embedding and relevant knowledge to adjust a token’s loss contribution. Our work is the first to explore the use of model explainability mechanisms, namely Shapley values (Shapley, 1953), for incorporating FCMs directly into the fine-tuning process of an LLM. 3 Preliminaries Task description. Given an information-seeking question q, such as shown in Figure 1, the task is to generate a long-form answer ˆy = {s1, · · · , sn}, consisting of sentences si, conditioned on passages P retrieved from a knowledge base. A long-form answer with citations needs to ensure that one or multiple relevant passages Ci ⊆P are cited in each generated sentence si (indicated by brackets with a passage index, e.g. “[1]” for p1), such that the generated information in si follows from the cited passages Ci. This task definition strictly requires all facts to originate from the retrieved passages, ensuring that ˆy is fully verifiable by P. We further assume the availability of few-shot training samples (q, P, y) ∈D to learn citation production. Generation with LLMs. In this work, we are interested in using LLMs to generate long-form answers with citations. 
An answer ˆy is generated autoregressively, computing the next token distributions conditioned on the question q, retrieved passages P, and the answer generated so far ˆy<t: Q|ˆy| t=1 pθ(ˆyt | q, P, ˆy<t), with θ being the parameterization of the LLM. Consequently, a model is updated on a gold answer y by minimizing the negative log-likelihood loss (NLL): LNLL = −1 | y | |y| X t=1 log pθ(yt | q, P, y<t), (1) Factual Consistency Models. An FCM ϕ measures the factual consistency between a sentence si and a collection of citations Ci: o = ϕ(si, Ci), with o ∈{0, 1} being the binary consistency prediction of the FCM. While FCMs are often modelled via modified NLI models that output a binary prediction directly (Gekhman et al., 2023; Utama et al., 2022), AlignScore (Zha et al., 2023) produces a single calibrated continuous value o = pϕ(consistent | si, Ci) ∈[0, 1]. In our scenario, such a scalar is beneficial for computing Shapley values and for controlling the consistency’s strictness. We refer to Mosca et al. (2022) for background on Shapley values and the SHAP framework in the context of NLP. 4 Citation Learning via FCMs CaLF is a fine-tuning strategy for producing longform answers with citations. CaLF’s main assumption is that an FCM ϕ can be leveraged as a supervision signal to improve citation quality of an LLM F via an iterative training procedure, as illustrated in Figure 2 and Algorithm 1. CaLF alternates between two modes in each iteration k. First, it generates weakly-supervised training data ˆDk to enrich the training corpus by sampling diverse answers from a fine-tuned LLM Fk−1 and filters them via the FCM ϕ (Sec. 4.1). Second, it fine-tunes the LLM Fk−1 on ˆDk + D with a modification of the NLL objective, which re-weights the loss contribution of individual tokens according to their importance for ensuring factual consistency, as measured by the FCM ϕ (Sec. 4.2). The number of iterations is determined dynamically by stopping when the proportion of filtered examples over the candidates does not improve between iterations or once the maximum number of iterations is reached. 4.1 Answer Generation for Training To generate weakly supervised training data (x, P, ˆy) ∈ˆDk with answer ˆy containing citations to passages P, we assume the availability of a collection of information-seeking questions x ∈X and a list of atomic facts A expected in an answer to x. For questions X, an LLM generates a collection of answer candidates ˆY, conditioned on retrieved passages P, selected by an out-of-thebox retrieval system R. As indicated in Algorithm 1, we produce answer candidates using either the fine-tuned model Fk−1 from the previous iteration, or, in the case k = 0, we use in-context prompting with the few-shot examples D. We focus on the generation of diverse answer candidates to enrich the weakly supervised training data. First, we 11878 Algorithm 1 The training procedure of CaLF. Input: LLM F; FCM ϕ; Retriever R; questions X and answer facts A; few-shot examples D; Iterations K. Output: Fine-tuned LLM FK; Citation Data ˆDK. 
1: U0 ←{⊕si∈y(Norm(SHAPϕ(si))) | (x, P, y) ∈D} 2: W0 ←{Align(U, Tϕ(ˆy), TF(ˆy) | U ∈U0} 3: P ←{R(x, KB) | x ∈X} 4: k ←0 5: while k ≤K and | ˆ Dk−1| | ˆYk−1| ≥| ˆ Dk−2 ˆYk−1 | do 6: if k = 0 then ▷Data Generation (§4.1) 7: ˆYk ←Diverse Sampl.IC(F, X, P, D) 8: else 9: ˆYk ←Diverse Sampl.(Fk−1, X, P) 10: end if 11: ˆDk ←{(x, P, ˆy) | ˆy ∈ˆYk ∧Q(ˆy, A, P) = 1} 12: Uk ←{ ⊕ si∈ˆyNorm(SHAPϕ(si))) | (x, P, ˆy) ∈ˆDk} 13: Wk ←{Align(Tϕ(ˆy), TF(ˆy) | U ∈Uk} 14: Fk ←Update F via FL ▷Focused Learning (§4.2) 15: ∇LFL(D + ˆDk, W0 + Wk) 16: k ←k + 1 17: end while use sampling strategies such as nucleus sampling (Holtzman et al., 2020), temperature scaling (Guo et al., 2017), and diverse beam search (Vijayakumar et al., 2018). Second, we consider citation replacements in answer sentences si in ˆy to diversify the answer candidates beyond the output of the LLM. Specifically, for generated citations Ci in a sentence si, we generate two citation replacements, sampled according to the passage probability measured via the retriever R since the direct computation of ϕ(si, Ci) over all citation options is infeasible. Each answer candidate ˆy ∈ˆY is subsequently filtered via an answer quality assurance function Qϕ : ˆy →{0, 1}, measured via the FCM ϕ, to obtain ˆDt with weakly-supervised cited answers ˆy: Q =            1 if Citation-Recall(ˆy, C, ϕ) > Θ ∧Citation-Precision(ˆy, C, ϕ) > Θ ∧Correctness(ˆy, A, ϕ) > Θ 0 else, with C being the citations assigned to sentences in ˆy, and Θ being a dynamically determined quality threshold that adjusts such that the size ˆDt is above a minimum viable size. Citation-Recall(ˆy, C) = 1 n P si∈ˆy ϕ(si, Ci) measures the factual coverage of citations, Citation-Precision(ˆy, C) = 1 |C| P Ci∈C 1 |Ci| P ci,j∈Ci max(ϕ(si, ci,j), 1 − ϕ(Ci\{ci,j},si)) measures the relevance of citations, and Correctness(ˆy, A) measures the proportion of facts A covered in ˆy. These definitions largely align with the ones in the benchmark of Gao et al. (2023b) for evaluating citations. And in the 1974 American science fiction tv series Planet of the Apes, Galen was played by Roddy McDowall [1]. [1] Title: Planet of the Apes. Text: [...] Roddy McDowall returned to the franchise as Galen, a chimpanzee who joins the astronauts Cited Passages Factual Consistency Model Generated Answer (2 sentences) A: In the 1968 film Planet of the Apes, Galen was played by Wright King [2]. [2] Title: Planet of the Apes (1968 film). Text: animal psychologist Zira (Kim Hunter) and surgeon Galen (Wright King) [...] SHAP ... King Wright by played was en Gal , Ap es the of was played by R oddy McD ow all . s1 s2 C1 C2 o(s2) o(s1) ... King by played was en Gal , Ap es the of was played by R oddy McD ow all . Wright 2. Weight Normalization W 3. Token Alignment ... ing by played was en Galen , Apes the of was played by R oddy McDowall ow all . Wright es K 1. Shapley Token Relevance 1.0 0.5 1.0 0.5 1.0 0.5 U Figure 3: The computation of relevance weights W for rescaling the loss according to Eq. 2. We first use SHAP to measure the token importance for predicting ϕ(si, Ci) = oi. We adjust for differences in scale of Wϕ,i for sentences si and differences in tokenization between the FCM and the LLM. 4.2 Focused Learning for Factual Consistency To emphasize the learning of producing citations, we measure the relevance for each token in an answer for ensuring the factual consistency between y and the retrieved passages P. We subsequently modify the NLL loss computation (see Eq. 
1) for the instruction-tuning of the LLM F by re-weighting the loss contribution of the t−th token according to relevance weights wt ∈W: LFL = −1 | y | |y| X j=1 wtlog pθ(yt | q, P, y<t). (2) In contrast to NLL where the loss is computed as the arithmetic mean over the token-level loss, our focused learning loss LFL emphasizes tokens which are considered of higher importance for ensuring 11879 factual consistency between a generated statement and the cited passages according to an FCM ϕ. The mechanism for obtaining the relevance weights W is illustrated in Figure 3. We first compute and normalize Shapley values for the factual consistency prediction between y and its cited passages using the FCM ϕ, obtaining U. These normalized relevance values for the tokens of the FCM are subsequently mapped to tokens of the LLM F via an alignment algorithm. As seen in Figure 3, given a generated statement such as “In the 1968 film Planet of the Apes, Galen was played by Wright King.”, the relevance weights W consider factual tokens such as Wright and King more important than of and was, emphasizing units of information important for factually accurate citation. Computation of Token Relevance. We first compute relevance weights U over the FCM ϕ by computing Shapley values for the factual consistency prediction between an answer’s sentence si and its citations Ci: oi = ϕ(si, Ci). Shapley values assign importance scores wt to each feature (here token) in si concerning prediction oi. Since Shapley values distribute the prediction score oi along all the tokens of sentence si, the value of oi and the length of si impacts the scale of assigned values, as shown in Figure 3, potentially biasing the loss towards shorter sentences and amplifying idiosyncrasies of the FCM. Thus, we normalize the assigned Shapley values for each sentence via min-max normalization Norm(Ui): { ut−min(Ui) max(Ui)−min(Ui) | ut ∈Ui}, with Ui being token weights for sentence si. Thus, the token with the highest and lowest Shapley value is assigned a relevance score of 0 and 1, respectively.1 The computation of weights U for a given response y can subsequently be summarized as: U = ⊕n i=1Norm(SHAPϕ(si)) (3) with ⊕being a concatenation operator. Since citation tokens do not bear any semantic meaning themselves, they are excluded from the computation of ϕ(si, Ci) and are thus not yet contained in U. Therefore, we further insert a weight of 1 for each citation token of y into U. Feature Importance Mapping. Since the obtained relevance weights U are specific to the tokenizer used for ϕ, an LLM that uses a different 1The exploration of alternative normalization functions is left to future work. tokenization may not directly apply U to re-weight the loss LFL (c.f. Eq. 2). To this end, we define an alignment function that maps the FCM token weights U to LLM token weights W for the same sequence y. The alignment function first maps the shortest possible token span yϕ,l:l+m from the FCM’s tokenizer Tϕ to the span yF,p:p+q from the LLM’s tokenizer TF, with j, l ≥1.2 The relevance score for each LLM token in yF,p:p+q is computed as the average relevance score over the aligned FCM span yϕ,l:l+m: Wp:p+q = { P l≤t<m(ut) |yϕ,l:l+m| | yt ∈ yF,p:p+q} In the example of Figure 3, the weight for the LLM’s token McDowall is computed as the average over the weights for the three aligned FCM tokens (McD, ow, all). 
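A compact sketch (our own illustration, not the CaLF release) of the two pieces described in this section: the per-sentence min-max normalisation of Shapley values and the re-weighted negative log-likelihood of Eq. 2. It assumes the aligned relevance weights W have already been mapped onto the LLM's tokens; tensor shapes and the epsilon guard are assumptions.

import torch
import torch.nn.functional as F

def min_max_normalise(shapley_values: torch.Tensor) -> torch.Tensor:
    """Rescale one sentence's Shapley values to [0, 1] (the Norm step feeding Eq. 3)."""
    lo, hi = shapley_values.min(), shapley_values.max()
    return (shapley_values - lo) / (hi - lo + 1e-8)

def focused_loss(logits: torch.Tensor,   # (seq_len, vocab_size) logits over the answer tokens
                 targets: torch.Tensor,  # (seq_len,) gold answer token ids
                 weights: torch.Tensor   # (seq_len,) aligned relevance weights W (citation tokens set to 1)
                 ) -> torch.Tensor:
    token_nll = F.cross_entropy(logits, targets, reduction="none")  # -log p(y_t | q, P, y_<t)
    return (weights * token_nll).sum() / targets.numel()            # weighted mean over |y| (Eq. 2)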
Further, since both LLM tokens K and ing are aligned to the single token King of the FCM, both tokens are set to the same relevance weight according to our algorithm. 5 Evaluation Datasets & Metrics. We conduct experiments on long-form QA datasets of the ALCE citation benchmark (Gao et al., 2023b), namely their version of ASQA (Stelmakh et al., 2022) and ELI5 (Fan et al., 2019). We further evaluate models on BIO (Min et al., 2023). For our domain transfer experiments, we further consider Hagrid (Kamalloo et al., 2023) as a source for training. Since Hagrid’s answers are generated by GPT-4 without annotations for factual coverage, we do not use it for evaluation as a target. For training, ALCE contains | D |= 4 cited gold instances. For Hagrid, we randomly sample 4 instances for CaLF. Details are in Appendix A.1. We follow the official metrics and nomenclature of Gao et al. (2023b) to measure correctness (via EM Recall), fluency (via MAUVE (Pillutla et al., 2021)), and citation F1 (via an NLI-trained T5-11B model (Honovich et al., 2022)). We further measure Rouge-L (Lin, 2004) and introduce a more strict variation of their correctness metric: passage-grounded correctness (Correct. in P), which only considers information from responses which are supported by the retrieved passages P. Subsequently, this metric ignores factual content produced by an LLM’s parametric memory if not explicitly derivable from the retrieved passages, isolating factual grounding from a model’s parametric memory. We use FactScore (Min et al., 2023) to evaluate the biographies of BIO. Detailed metric 2Note that we implement this mapping function tailored to the specific tokenizers used by our models, see Appendix A.2. 11880 Method ALCE-ASQA ALCE-ELI5 Similarity Fluency Correct. Correct. Citation Similarity Fluency Correct. Correct. Citation Rouge-L MAUVE EM Rec. in P F1 Rouge-L MAUVE EM Rec. in P F1 ChatGPT (Gao et al., 2023b) – 66.6 40.4 – 73.1 – 57.2 12.0 – 50.5 GPT-4 (Gao et al., 2023b) – 67.1 41.3 – 71.9 – 38.4 14.2 – 46.9 AGREE (Ye et al., 2024) – – 40.9 – 75.1 – – – – – Self-RAG 7B (Asai et al., 2024) 35.7 74.3 30.0 – 67.3 16.9 32.6 9.7 5.4 27.6 BP, T5-3B (Fierro et al., 2024) – – 33.8 – 77.8 – – 5.2 – 60.9 Llamav2-7B-chat In-context 35.90.3 77.83.1 35.00.6 25.70.6 49.91.0 20.50.2 36.22.5 17.70.6 10.80.6 38.20.6 Few-shot FT 34.90.4 69.24.3 32.00.4 22.30.7 55.01.8 21.30.2 58.22.2 17.80.6 11.21.1 48.72.9 Ours 37.80.4 86.03.7 37.70.6 29.30.4 70.42.5 20.81.0 59.611.5 17.00.3 11.90.2 66.55.9 Mistral-Instr. 7B In-context 36.70.3 85.52.7 34.40.4 27.80.4 22.30.9 21.60.9 43.84.8 19.10.4 11.10.2 19.50.6 Few-shot FT 38.10.2 87.70.8 36.10.9 29.41.1 66.74.5 20.50.3 48.04.9 15.51.2 10.00.6 49.94.0 Ours 37.21.2 84.57.5 36.41.6 30.01.1 76.21.9 21.80.2 58.24.0 19.50.8 13.10.2 66.04.7 MistralOrca 7B In-context 38.70.1 54.71.8 40.20.3 31.90.2 55.60.8 20.90.1 29.30.8 20.80.4 12.50.5 43.30.5 Few-shot FT 38.41.8 78.614.7 38.43.8 29.94.7 62.63.6 19.41.9 60.513.6 17.31.8 10.91.2 57.76.5 Ours 40.30.2 84.03.3 41.71.2 34.50.5 81.52.5 20.41.5 62.74.6 18.42.1 13.10.7 73.14.2 Table 1: Main results on ALCE using only 4 training samples as D, measured across 3 random seeds. Fine-tuning via CaLF substantially improves citation quality (Citation F1) and correct information grounded in passages (Correct. in P) over alternative training strategies and competitive baselines while maintaining factual coverage (EM Recall), fluency (MAUVE), and recall-oriented similarity (ROUGE-L) to gold responses without citations. descriptions (e.g. 
citation recall and precision) and results are in Appendix A.3. Experimental Setup. Following recommendations for weakly supervised learning (Zhu et al., 2023) and few-shot learning (Alex et al., 2021), we do not consider a validation set for hyperparametertuning, representing real-world scenarios more accurately. Due to computational constraints, we use LoRA (Hu et al., 2022) for parameter-efficient finetuning of CaLF and our fine-tuning baselines. We use Alignscore as our FCM ϕ in all experiments unless otherwise mentioned. A threshold is used to map Alignscore’s output o into a binary prediction for Q. Alignscore is substantially different from the FCM model used for evaluating citations, using a different architecture and training data (c.f. App. A.2). ASQA, Hagrid, and BIO use Wikipedia as their underlying knowledge base while ELI5 uses CommonCrawl. We use the same retrievers as Gao et al. (2023b) and Asai et al. (2024) to maintain comparability. Further implementation details are in Appendix A.2. Baselines. We compare CaLF on identical instruction-tuned LLMs against both in-context prompting and few-shot fine-tuning (Few-shot FT) via Eq. 1, trained on D. In our domain transfer experiments, the entire 335 training samples of Hagrid are used to train the fine-tuning baseline (FT). Furthermore, we consider state-of-the-art models, namely the most powerful in-context prompted baselines from Gao et al. (2023b), ChatGPT (gpt-3.5-turbo-0301) and GPT-4 (gpt-4-0613; 8K context window). Due to context length limitations, in-context prompting uses 2 randomly sampled instances. We further compare against the best results of AGREE (Ye et al., 2024) which use PaLM 2’s text-bison-001. Finally, we evaluate against open-source models, including SelfRAG 7B (Asai et al., 2024), based on Llama2, and Blueprint (BP) (Fierro et al., 2024), based on T53B. 5.1 Main Results Table 1 shows the main in-domain results, with mean and standard deviation computed over three seeds. CaLF improves citation F1 across datasets and models by 34.1 and 15.5 points over both in-context learning (In-context) and baseline finetuning (Few-shot FT), respectively. Impressively, all tested LLMs with CaLF outperform Self-RAG, ChatGPT, and GPT-4 with an average improvement of 24.8, 10.5, and 12.9 citation points, respectively. Moreover, CaLF achieves high citation scores while the overall quality of the response remains high. We observe modest improvement in correctness but substantial gains in grounded correctness, indicating that CaLF is better at including verifiable facts in its responses than our baselines. This contrasts observations for other models: GPT4 trades off improvements in correctness at the cost of citation quality, resulting in ChatGPT having overall higher citation scores than GPT-4. Notably, CaLF improves on correctness over GPT-4 while also producing higher-quality citations than ChatGPT, beating both models at their best-performing metric. Similarly to GPT, BP produces accurate citations but has low correctness, especially on ELI5. 11881 Method Source→Target Similarity Fluency Correct. Correct. Citation Method Source→Target Similarity Fluency Correct. Correct. Citation RougeL MAUVE EM Rec. in P F1 RougeL MAUVE EM Rec. 
in P F1 Self-RAG 7B 35.7 74.3 30.0 – 67.3 Self-RAG 7B 16.9 32.6 9.7 5.4 27.6 Llama2-7B-chat Zero-Shot →ASQA 36.1 47.5 35.6 27.1 31.0 Zero-Shot →ELI5 20.0 26.5 15.3 9.7 24.4 Few-shot FT ELI5→ASQA 37.2 74.0 37.7 31.6 64.9 Few-shot FT ASQA→ELI5 17.1 20.8 11.4 6.8 32.9 Ours ELI5→ASQA 37.1 75.7 36.1 30.4 73.1 Ours ASQA→ELI5 21.3 35.0 18.2 11.0 36.5 MistralOrca-7B Zero-Shot →ASQA 39.0 78.9 39.5 31.6 5.7 Zero-Shot →ELI5 21.3 35.0 22.2 12.6 10.4 Few-shot FT ELI5→ASQA 39.7 90.1 38.5 31.4 71.7 Few-shot FT ASQA→ELI5 20.9 41.1 19.7 10.6 40.4 Few-shot FT Hagrid→ASQA 36.7 66.1 37.8 29.5 51.3 Few-shot FT Hagrid→ELI5 21.3 58.1 20.9 12.7 32.3 Ours ELI5→ASQA 40.1 86.6 40.0 33.2 79.5 Ours ASQA→ELI5 21.2 31.3 20.4 12.5 57.3 Ours Hagrid→ASQA 39.7 80.8 38.6 32.3 80.0 Ours Hagrid→ELI5 21.1 32.3 20.2 12.9 54.6 Table 2: Results for our zero-shot domain transfer setting, when trained on a source dataset (D) and evaluated on a different target dataset without any in-context instances. CaLF’s citation quality (Citation F1) and passage-grounded correctness (Correct. in P) is superior to all baselines, without additional inference costs. Moreover, training via CaLF also leads to an overall improvement in ROUGE-L and MAUVE across models over the fine-tuning baseline by an average of 1.0 and 5.5, respectively. 5.2 Domain Transfer We run domain transfer experiments to evaluate the generalization of CaLF’s citation production, by training an LLM with CaLF on a source dataset and measuring performance on a different target dataset. Table 2 shows our domain transfer results. In every source-target configuration and across instruction-tuned models, CaLF outperforms zeroshot in-context learning, Few-shot FT, FT, and SelfRAG in both citation quality and correctness. CaLF (using MistralOrca-7B) exhibits small variability regarding the training source D, with a citation F1 difference ∆∗→ASQA of 0.5 and ∆∗→ELI5 of 2.7, respectively. While CaLF’s citation quality is comparable in-domain versus in a transfer setting on ASQA (−∆3.9, see Table 1), we observe larger differences on ELI5 (−∆11.9), since for ELI5 the knowledge source and question scope greatly differ from the Wikipedia-based source datasets. 5.3 Factuality We evaluate CaLF’s factual precision using FactScore. Results are shown in Table 3 for state-of-the-art methods taken from Asai et al. (2024), our fine-tuning baseline, and CaLF with MistralOrca-7B. CaLFscores the highest with 83.4, indicating that the improved citation quality translates to higher factual accuracy. Interestingly, while citation recall can be considered a stricter measure than the FactScore, the former is much higher for CaLF. We postulate the difference is caused by retrieval inaccuracies. Subsequently, we adjust predictions to abstain from answering if none of the passages’ titles match with the BIO entity.3 As seen in Table 3, the FactScore improves to 86.1 and 88.9 for our fine-tuning baseline and CaLF, respectively. While these results are promising they also highlight the importance of accurate retrieval and high-quality knowledge bases so that citation production can translate into improved factuality, an observation also made in Menick et al. (2022); Kryscinski et al. (2020). Method FS Citation Recall ChatGPT w/o retrieval 71.8 – Llamav2-13B-chat 79.9 – Self-RAG 7B 81.2 – Self-RAG 13B 80.2 – Few-shot FT 78.7 69.3 Few-shot FT + Retrieval Filtering 86.1 69.3 Ours 83.4 92.7 Ours + Retrieval Filtering 88.9 92.7 Table 3: FactScore (FS) evaluation. MistralOrca7B is used with Few-shot FT and Ours. 
Other results are taken from Asai et al. (2024). + Retrieval Filtering: the model abstains when no passage title matches the entity.
3 Abstaining is explicitly incorporated into FactScore.
6 Discussion
Ablation. Table 4 shows an ablation for the two mechanisms introduced in CaLF: the generation of weakly-supervised training data (WS) and the focused learning loss (LFL). By adding WS, we see an improvement of 18.3 and 16.7 citation recall and precision points on ASQA, respectively. By adding LFL, we further improve on top of WS by 6.3 and 8.2 recall and precision points, respectively. On ELI5, we observe similar improvements.
Method | Correctness in P | Citation Recall | Citation Precision
ASQA
LLM | 25.8 | 57.5 | 55.2
LLM + WS | 29.0 (+3.2) | 75.8 (+18.3) | 71.9 (+16.7)
LLM + WS + LFL | 33.8 (+8.0) | 84.1 (+24.6) | 83.8 (+28.9)
ELI5
LLM | 11.0 | 57.3 | 51.0
LLM + WS | 11.5 (+0.5) | 61.5 (+4.2) | 57.2 (+6.2)
LLM + WS + LFL | 12.5 (+1.5) | 72.1 (+14.8) | 66.6 (+15.6)
Table 4: Ablation for CaLF (MistralOrca-7B). WS: weakly-supervised training, LFL: focused learning.
Iterative Training. Performance of CaLF across iterations is shown in Figure 4 on ASQA. We see the majority of citation performance improvements within the first three iterations, after which citation F1 stabilizes. We further observe consistent improvements to MAUVE, ROUGE-L, and grounded correctness across iterations. Importantly, we do not observe erratic or unstable performance, indicating the robustness of our iterative training procedure. The proportion of filtered examples D̂k over Ỹk, used as our dynamic stopping criterion, matches the citation performance well, improving efficiency by stopping early once performance saturates (here iteration 4).
Figure 4: Evaluation metrics and CaLF's dynamic stopping criterion over the number of iterations on ASQA. (The figure plots ROUGE, MAUVE, correctness, grounded correctness, attribution recall, attribution precision, and the acceptance ratio used as the stopping criterion over iterations 1 to 8.)
Selection of FCM. We further investigate the extent to which the quality of an FCM translates to improved citation production for CaLF. To this end, we replace the FCM with: (i) a DeBERTa-v3 model adjusted for factual consistency (Steen et al., 2023), (ii) the current T5-based state of the art on the TRUE benchmark (Gekhman et al., 2023), (iii) the FCM of Honovich et al. (2022) used for citation evaluation. We constrain D̂t to 32 samples and run without LFL, since for decoders the computation of Shapley values over their vocabulary (prediction) is more involved. Table 5 shows results on the TRUE benchmark (summarization subset) and the citation F1 on ASQA with CaLF. We generally observe that better scores on the TRUE benchmark translate to higher citation scores when the FCM is incorporated into CaLF. The only exception is the FCM of Honovich et al. (2022), which is used in both training and evaluation and scores disproportionately high in terms of citation F1, most likely due to model biases leaking through the evaluation. While Gekhman et al. (2023) performs best on TRUE, it is much more expensive to run at scale for CaLF than AlignScore.
Method | TRUE AUC | Citation F1
Steen et al. (2023) | 79.5 | 72.5
Honovich et al. (2022)† | 82.7 | 81.9
AlignScore (Zha et al., 2023) | 84.8 | 74.0
Gekhman et al. (2023) | 87.8 | 77.7
Table 5: Comparing the quality of an FCM (measured as AUC on the TRUE benchmark) with the citation quality of CaLF when that FCM is used for training. †Citation evaluation model.
Adversarial Baselines.
Long-form QA is notoriously difficult to automatically evaluate (Xu et al., 2023; Krishna et al., 2021). Thus, to assess whether the automatic evaluation metrics of Gao et al. (2023b) can be exploited, we designed several adversarial baselines, such as copying retrieved passages as the answer or citing all passages for every generated sentence. Results are shown in Table 6. Every baseline that attempts to trick one metric performs poorly on another. For instance, copying passages (Copy) results in very poor MAUVE while citing all passages (Ours w/ cite all) results in very low citation precision. A more detailed discussion is shown in Appendix A.3. Human Evaluation. We further conduct a human evaluation to compare the quality of generated responses produced by CaLF with Few-shot FT (using MistalOrca-7B). Human subjects are tasked to judge the models’ generations regarding citation quality, informativeness, coherence, and fluency, following human evaluation in Gao et al. (2023b) and recommendations in Zhong et al. (2022). The latter three metrics are measured using a five-point Likert scale, with five being the highest score. We 11883 Method Similarity Fluency Correct. Correct. Citation Citation Avg length Rouge-L MAUVE EM Rec. in P Recall Precision Ours 40.5 80.1 39.9 33.8 84.1 83.8 71.2 Copy (No length limit) 25.3 11.8 50.6 49.5 98.2 98.7 628.4 Copy (Truncated - 1st paragraph) 34.0 16.8 36.0 34.6 97.9 98.4 210.7 Copy (Truncated - 100 tokens) 33.3 16.2 22.8 20.9 96.1 96.3 100.0 Ours w/ all set to [1] 40.5 80.1 39.9 33.8 49.0 49.5 71.2 Ours w/ cite all 40.5 80.1 39.9 33.8 75.6 36.4 71.2 Ours w/o EOS token 33.0 20.2 46.9 39.7 71.1 81.7 256.0 Table 6: Evaluation of adversarial baselines on ASQA, using MistralOrca-7B. Results are shown for seed 42. Copy: Use retrieved passages directly as a response. w/ all set to [1]: Replacement of all citations in the generated response with citations to the first passage. w/ cite all: Replacement of citations in the generated response with citations to all passages. w/o EOS token: Remove EOS token from training. The average gold answer length is 113 tokens. randomly sample 30 instances for each model from ELI5, ASQA, and BIO, resulting in 180 instances. Results are shown in Table 7. CaLF achieves substantially higher citation quality, fluency, and answer coherence ratings than across datasets with an F1 of 91.5 versus 72.7 for the baseline. While average informativeness ratings are comparable between CaLF and the baseline , on ELI5 particularly we observe low informativeness with CaLF. This is caused by frequent retrieval errors since ELI5 uses a weak retriever (BM25) to efficiently traverse over its large and noisy knowledge base. Instead of using its parametric memory when passages contain little relevant information, we observe that CaLF still produces accurately grounded responses, however, at the cost of less informative content. Method Citation InformaCoherence Fluency F1 tiveness of 100 ↑ Likert Scale 1 to 5 ↑ Few-shot ASQA 65.2 4.07 4.07 4.57 FT ELI5 68.7 4.13 4.07 4.5 BIO 84.3 4.6 4.57 4.67 Avg 72.7 4.27 4.23 4.58 Ours ASQA 93.7 4.67 4.67 4.93 ELI5 82.7 3.3 3.93 4.17 BIO 97.3 4.87 4.77 4.83 Avg 91.5 4.28 4.46 4.64 Table 7: Human evaluation study. Informativeness, coherence, and fluency are judged using a five-point Likert scale. Results confirm automatic evaluation, putting CaLF ahead in terms of citation quality while maintaining high response quality. 
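The citation-manipulation baselines in Table 6 only rewrite the citation markers of an otherwise unchanged response. The following is a minimal sketch of such rewrites, assuming citations are rendered as bracketed passage indices like [2]; the regex, function names, and the [CITE] placeholder are illustrative assumptions rather than the exact procedure used in our experiments.

```python
import re

# Assumed marker format: bracketed passage indices such as "[2]".
CITATION_PATTERN = re.compile(r"\[\d+\]")

def cite_first_only(response: str) -> str:
    """Adversarial variant 'w/ all set to [1]': point every citation at the first passage.

    Note that consecutive duplicates like "[1][1]" can result and could be deduplicated."""
    return CITATION_PATTERN.sub("[1]", response)

def cite_all(response: str, num_passages: int = 3) -> str:
    """Adversarial variant 'w/ cite all': replace each citation run with citations to all passages."""
    all_markers = "".join(f"[{i}]" for i in range(1, num_passages + 1))
    # Collapse runs of consecutive markers first, then substitute once per run.
    collapsed = re.sub(r"(?:\[\d+\])+", "[CITE]", response)
    return collapsed.replace("[CITE]", all_markers)

print(cite_all("Holes is a 1998 novel [2][3]. It won the Newbery Medal [1]."))
# -> "Holes is a 1998 novel [1][2][3]. It won the Newbery Medal [1][2][3]."
```

Because the citation evaluator checks each citation individually, citing every passage for every sentence inflates recall but collapses precision, which is exactly the pattern observed for "Ours w/ cite all" in Table 6.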
Efficiency. At inference time, CaLF matches the efficiency of the few-shot FT baseline, in contrast to previous methods that incur significant overhead, including in-context learning (increased input length), Self-RAG (tree-decoding with critique tokens), and post-hoc editing. The training complexity of CaLF can be described as O(K × |X|), with K being the number of iterations and X the collection of questions for which we generate weakly-supervised data. The generation of diverse answer candidates is computationally the most involved step, while the filtering of answer candidates and the computation of Shapley values are efficient due to the small size of the FCM we use (AlignScore, 355M parameters). To quantify the computational cost, we measured training times using a single A100 40GB GPU. We measure a training time of 13h51min for CaLF and 1h2min for the few-shot FT baseline. While there is a notable disparity in training time, achieving comparable performance to CaLF via regular fine-tuning would necessitate training on significantly more data, resulting in additional training and, importantly, data annotation cost.
7 Conclusion
This paper presented CaLF, a fine-tuning method for LLMs to produce accurate citations alongside generated text. It uses FCMs as a training signal by filtering candidate answers with citations and by re-weighting the LLM's objective function according to the tokens' factual importance. CaLF outperforms all baselines in terms of citation quality and passage-grounded correctness while ensuring that the overall quality of responses remains high, measured via both automated and human evaluation. In a domain transfer setting we further validate the generalizability of CaLF. Moreover, we discuss the benefits of accurate citation production for improving factuality and highlight the importance of each component of CaLF through a systematic ablation. Future work looks at incorporating a learned mechanism into CaLF to abstain from answering if none of the retrieved passages are considered relevant to the question.
Limitations
The assumption that every generated sentence requires a citation is an oversimplification we adopt from Gao et al. (2023b). However, real-world dialogue agents commonly introduce their responses with non-factual phrases or sentences (e.g., "Of course I can help!"). A potential solution is to introduce an extrapolatory label as described in Yue et al. (2023). Furthermore, CaLF is not entirely model-agnostic out-of-the-box, since the tokenization alignment algorithm was designed with the idiosyncrasies of the particular LLMs and FCMs in mind. Moreover, inaccuracies or biases of the factual consistency models could negatively affect the citation quality of CaLF (cf. Sec. 6). A common strategy to mitigate biases involves an alignment stage, where models are explicitly optimized based on preference data. It would be interesting to explore alignment in the context of CaLF applied to highly domain-specific questions, such as those compiled in the recent dataset of Malaviya et al. (2024). Finally, our paper focuses exclusively on improving generation, yet as highlighted in Section 6, high-quality retrieval systems are vital for citations to be effective. When retrieved passages contain non-factual information or lack useful content, the produced answers might not be informative.
While the reliance on retrieved passages for answer production poses a limitation, relying on a language model's intrinsic memory for content generation forfeits the transparency and verifiability benefits inherent in citation production. Balancing citation production and intrinsic memory while preserving the advantages of both remains an ongoing challenge. Future directions might consider a joint fine-tuning procedure of generator and retriever and even capture their rich interactions, such as abstaining from answering questions when retrieved passages are irrelevant. The act of abstaining from answering could serve as a signal to the retrieval system to seek alternative sources of information.
Ethics Statement
Our paper improves the ability of LLMs to produce accurate citations to sources alongside their generated answers, thereby improving the verifiability of LLMs' responses. As noted in Section 5.3, we caution against equating improved citation quality with improved factuality, since even correctly cited passages can be factually incorrect or misleading without additional context about the passage and its source. Citations can therefore invoke a false sense of trust and amplify observations made in Si et al. (2023). It remains important for users to stay critical and reflect on generated responses. By improving citation accuracy, our paper simplifies the verification process but does not eliminate it. Furthermore, Huang and Chang (2024) argue that the incorporation of citations can help to address intellectual property and associated ethical issues in the deployment of LLMs due to the increased verifiability of responses. Finally, we recognize a potential risk of dual use by adversarial actors: similarly to how humans cherry-pick data to support a strong bias or prior, we cannot guarantee that an LLM will not exhibit similar behaviour when manipulated with passages that contain misinformation or miss relevant context.
Acknowledgements
The authors would like to thank the anonymous reviewers for their time and detailed feedback on our paper. We further thank Tony Hu for being part of the human evaluation.
References
Roee Aharoni, Shashi Narayan, Joshua Maynez, Jonathan Herzig, Elizabeth Clark, and Mirella Lapata. 2023. Multilingual summarization with factual consistency evaluation. In Findings of the Association for Computational Linguistics: ACL 2023, pages 3562–3591, Toronto, Canada. Association for Computational Linguistics.
Neel Alex, Eli Lifland, Lewis Tunstall, Abhishek Thakur, Pegah Maham, C. Jess Riedel, Emmie Hine, Carolyn Ashurst, Paul Sedille, Alexis Carlier, Michael Noetel, and Andreas Stuhlmüller. 2021. RAFT: A real-world few-shot text classification benchmark. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).
Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi. 2024. Self-RAG: Learning to retrieve, generate, and critique through self-reflection. In The Twelfth International Conference on Learning Representations.
Steven Bird and Edward Loper. 2004. NLTK: The natural language toolkit. In Proceedings of the ACL Interactive Poster and Demonstration Sessions, pages 214–217, Barcelona, Spain. Association for Computational Linguistics.
Bernd Bohnet, Vinh Q Tran, Pat Verga, Roee Aharoni, Daniel Andor, Livio Baldini Soares, Massimiliano Ciaramita, Jacob Eisenstein, Kuzman Ganchev, Jonathan Herzig, et al. 2022. Attributed question answering: Evaluation and modeling for attributed large language models.
arXiv preprint arXiv:2212.08037. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The pascal recognising textual entailment challenge. In Machine learning challenges workshop, pages 177–190. Springer. Yifan Deng, Xingsheng Zhang, Heyan Huang, and Yue Hu. 2023. Towards faithful dialogues via focus learning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4554–4566, Toronto, Canada. Association for Computational Linguistics. Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. 2019. ELI5: Long form question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3558–3567, Florence, Italy. Association for Computational Linguistics. Constanza Fierro, Reinald Kim Amplayo, Fantine Huot, Nicola De Cao, Joshua Maynez, Shashi Narayan, and Mirella Lapata. 2024. Learning to plan and generate text with citations. Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, and Kelvin Guu. 2023a. RARR: Researching and revising what language models say, using language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 16477–16508, Toronto, Canada. Association for Computational Linguistics. Tianyu Gao, Howard Yen, Jiatong Yu, and Danqi Chen. 2023b. Enabling large language models to generate text with citations. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 6465–6488, Singapore. Association for Computational Linguistics. Zorik Gekhman, Jonathan Herzig, Roee Aharoni, Chen Elkind, and Idan Szpektor. 2023. TrueTeacher: Learning factual consistency evaluation with large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 2053–2070, Singapore. Association for Computational Linguistics. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. 2017. On calibration of modern neural networks. In International conference on machine learning, pages 1321–1330. PMLR. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In International Conference on Learning Representations. Or Honovich, Roee Aharoni, Jonathan Herzig, Hagai Taitelbaum, Doron Kukliansy, Vered Cohen, Thomas Scialom, Idan Szpektor, Avinatan Hassidim, and Yossi Matias. 2022. TRUE: Re-evaluating factual consistency evaluation. 
In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3905–3920, Seattle, United States. Association for Computational Linguistics. Edward J Hu, yelong shen, Phillip Wallis, Zeyuan AllenZhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations. Jie Huang and Kevin Chen-Chuan Chang. 2024. Citation: A key to building responsible and accountable large language models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2022. Unsupervised dense information retrieval with contrastive learning. Transactions on Machine Learning Research. Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral 7b. arXiv preprint arXiv:2310.06825. Ehsan Kamalloo, Aref Jafari, Xinyu Zhang, Nandan Thakur, and Jimmy Lin. 2023. Hagrid: A human-llm collaborative dataset for generative information-seeking with attribution. arXiv preprint arXiv:2307.16883. Kalpesh Krishna, Aurko Roy, and Mohit Iyyer. 2021. Hurdles to progress in long-form question answering. In Proceedings of the 2021 Conference of the North 11886 American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4940–4957, Online. Association for Computational Linguistics. Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9332–9346, Online. Association for Computational Linguistics. Junyi Li, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie, and Ji-Rong Wen. 2023. Helma: A large-scale hallucination evaluation benchmark for large language models. arXiv preprint arXiv:2305.11747. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Nelson Liu, Tianyi Zhang, and Percy Liang. 2023. Evaluating verifiability in generative search engines. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 7001–7025, Singapore. Association for Computational Linguistics. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Conference on Learning Representations. Chaitanya Malaviya, Subin Lee, Sihao Chen, Elizabeth Sieber, Mark Yatskar, and Dan Roth. 2024. ExpertQA: Expert-curated questions and attributed answers. In 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Daniel Khashabi, and Hannaneh Hajishirzi. 2023. When not to trust language models: Investigating effectiveness of parametric and non-parametric memories. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9802–9822, Toronto, Canada. Association for Computational Linguistics. 
Jacob Menick, Maja Trebacz, Vladimir Mikulik, John Aslanides, Francis Song, Martin Chadwick, Mia Glaese, Susannah Young, Lucy CampbellGillingham, Geoffrey Irving, et al. 2022. Teaching language models to support answers with verified quotes. arXiv preprint arXiv:2203.11147. Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Koh, Mohit Iyyer, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2023. FActScore: Fine-grained atomic evaluation of factual precision in long form text generation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12076–12100, Singapore. Association for Computational Linguistics. Sewon Min, Julian Michael, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020. AmbigQA: Answering ambiguous open-domain questions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5783– 5797, Online. Association for Computational Linguistics. Edoardo Mosca, Ferenc Szigeti, Stella Tragianni, Daniel Gallagher, and Georg Groh. 2022. SHAP-based explanation methods: A review for NLP interpretability. In Proceedings of the 29th International Conference on Computational Linguistics, pages 4593– 4603, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. Benjamin Muller, John Wieting, Jonathan Clark, Tom Kwiatkowski, Sebastian Ruder, Livio Soares, Roee Aharoni, Jonathan Herzig, and Xinyi Wang. 2023. Evaluating and modeling attribution for cross-lingual question answering. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 144–157, Singapore. Association for Computational Linguistics. Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. 2021. Webgpt: Browser-assisted questionanswering with human feedback. arXiv preprint arXiv:2112.09332. Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hernández Ábrego, Ji Ma, Vincent Y Zhao, Yi Luan, Keith B Hall, Ming-Wei Chang, et al. 2021. Large dual encoders are generalizable retrievers. arXiv preprint arXiv:2112.07899. R OpenAI. 2023. Gpt-4 technical report. arxiv 2303.08774. View in Article, 2. Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Sean Welleck, Yejin Choi, and Zaid Harchaoui. 2021. MAUVE: Measuring the gap between neural text and human text using divergence frontiers. In Advances in Neural Information Processing Systems. Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. In Thirty-seventh Conference on Neural Information Processing Systems. Hannah Rashkin, Vitaly Nikolaev, Matthew Lamm, Lora Aroyo, Michael Collins, Dipanjan Das, Slav Petrov, Gaurav Singh Tomar, Iulia Turc, and David Reitter. 2023. Measuring attribution in natural language generation models. Computational Linguistics, pages 1–64. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual 11887 Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany. Association for Computational Linguistics. Lloyd S Shapley. 1953. A value for n-person games. In Harold W. Kuhn and Albert W. Tucker, editors, Contributions to the Theory of Games II, pages 307– 317. Princeton University Press, Princeton. 
Chenglei Si, Navita Goyal, Sherry Tongshuang Wu, Chen Zhao, Shi Feng, Hal Daumé III, and Jordan Boyd-Graber. 2023. Large language models help humans verify truthfulness–except when they are convincingly wrong. arXiv preprint arXiv:2310.12558. Julius Steen, Juri Opitz, Anette Frank, and Katja Markert. 2023. With a little push, NLI models can robustly and efficiently predict faithfulness. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 914–924, Toronto, Canada. Association for Computational Linguistics. Ivan Stelmakh, Yi Luan, Bhuwan Dhingra, and MingWei Chang. 2022. ASQA: Factoid questions meet long-form answers. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 8273–8288, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. 2022. Lamda: Language models for dialog applications. arXiv preprint arXiv:2201.08239. Katherine Tian, Eric Mitchell, Huaxiu Yao, Christopher D Manning, and Chelsea Finn. 2023. Finetuning language models for factuality. arXiv preprint arXiv:2311.08401. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. Prasetya Utama, Joshua Bambrick, Nafise Moosavi, and Iryna Gurevych. 2022. Falsesum: Generating document-level NLI examples for recognizing factual inconsistency in summarization. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2763–2776, Seattle, United States. Association for Computational Linguistics. Ashwin Vijayakumar, Michael Cogswell, Ramprasaath Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2018. Diverse beam search for improved description of complex scenes. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). Fangyuan Xu, Yixiao Song, Mohit Iyyer, and Eunsol Choi. 2023. A critical evaluation of evaluations for long-form question answering. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3225–3245, Toronto, Canada. Association for Computational Linguistics. Xi Ye, Ruoxi Sun, Sercan Ö Arik, and Tomas Pfister. 2024. Effective large language model adaptation for improved grounding. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Xiang Yue, Boshi Wang, Ziru Chen, Kai Zhang, Yu Su, and Huan Sun. 2023. Automatic evaluation of attribution by large language models. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 4615–4635, Singapore. Association for Computational Linguistics. Yuheng Zha, Yichi Yang, Ruichen Li, and Zhiting Hu. 2023. AlignScore: Evaluating factual consistency with a unified alignment function. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11328–11348, Toronto, Canada. Association for Computational Linguistics. Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, et al. 2023. 
Siren’s song in the ai ocean: A survey on hallucination in large language models. arXiv preprint arXiv:2309.01219. Ming Zhong, Yang Liu, Da Yin, Yuning Mao, Yizhu Jiao, Pengfei Liu, Chenguang Zhu, Heng Ji, and Jiawei Han. 2022. Towards a unified multidimensional evaluator for text generation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2023– 2038, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Dawei Zhu, Xiaoyu Shen, Marius Mosbach, Andreas Stephan, and Dietrich Klakow. 2023. Weaker than you think: A critical look at weakly supervised learning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14229–14253, Toronto, Canada. Association for Computational Linguistics. 11888 A Appendix A.1 Datasets The four datasets considered for evaluation cover different question-answering tasks and domains: ASQA is a disambiguation task built on top of AmbigQA (Min et al., 2020), ELI5 contains realworld highly open-ended questions and answers from an online forum (Reddit), Hagrid contains entity-specific questions, and BIO is a biography generation task containing simple person-related questions. Hagrid’s training set as provided by Kamalloo et al. (2023) consists of 1,922 GPT-4 generated answers, out of which 335 instances are human-labeled as both attributable and informative. Only these 335 instances are used as training data. For Hagrid, we randomly sample 4 instances for CaLF and use all its 335 training samples for our fine-tuning baseline. We use each datasets’ original training set to sample questions X and atomic facts A. Since Hagrid does not have any annotations for factual coverage, we compute the quality assurance function Q only over the Citation conditions. BIO consists of two evaluation sets. As recommended by the authors and following Self-RAG, we use the second, and more difficult, evaluation set in our experiments. The test set of ALCE-ASQA and ALCE-ELI5 consists of 1000 randomly sampled instances from their respective development set. For more details on the ALCE benchmark, we refer to Gao et al. (2023b). A.2 Experimental Setup Token alignment algorithm We design the token alignment algorithm around the idiosyncrasies of the LLMs’ and FCM’s tokenizers. Llama2, Mistral-Instruct, and MistralOrca use the Llama tokenizer4, a BPE model (Sennrich et al., 2016) based on sentencepiece5. Similarly, the FCM, Alignscore, uses a RoBERTa tokenizer which is derived from the GPT-2 tokenizer, also using BPE. Both Llama and RoBERTa tokenizers treat spaces as parts of the tokens. In contrast to RoBERTa tokenizer, the Llama tokenizer indicates each space via an underscore ("_"). We exclude special tokens of either tokenizer (e.g. "<s>" or "<0x0A>") from the alignment procedure. 4https://huggingface.co/docs/transformers/ main/en/model_doc/llama#transformers. LlamaTokenizer 5https://github.com/google/sentencepiece?tab= readme-ov-file Given a sentence, tokenized both by FCM and LLM, we we first strip the FCM tokens (i.e. remove spaces) and remove underscores for tokens of the LLM, essentially removing all spaces to unify their representations. The algorithm then checks whether the current tokens are equal or subsets of one another, with and without considering the memory of tokens not yet aligned but iterated over by the pointers. For instance, consider the FCM tokens: ’Maw’, ’syn’, ’ram’, and LLM tokens for the same word: is _M’, ’aws’, ’yn’, ’ram’. 
For the first two FCM tokens, none of the LLM tokens can be directly aligned to it. Our algorithm ensures to find the smallest sequence of tokens in both the LLM and FCM that can be aligned to each other (here the spans are ’Mawsyn’ and ’ram’) and assigns the relevance score as described in Section 4.2. We find the smallest sequence by keeping a list of tokens for each FCM and LLM that could not yet be aligned and increment the respective pointers according to which sequence needs to continue (e.g. Checking that [M] + [was] continues [Maw] potentially, requires incrementing the FCM pointer, since an ’s’ is missing in [Maw]. This algorithm is efficient and runs in linear time. Generating Cited Answers for Training. While datasets such as ELI5 have over 250K training instances we could use to generate weaklysupervised training data, we constrained the size of X to 1000 samples, since it is computationally infeasible to run our weakly-supervised data generation procedure on all training samples of the dataset. The threshold Θ is determined dynamically. Starting with a value of 0.9, if the number of samples in ˆD is below 3, we consider the threshold too high for the given LLM and reduce it by 0.1 until the requirement is met. Passage Retrieval. For the retriever R we use GTR (Ni et al., 2021) for ASQA and Hagrid (Wikipedia), BM25 for ELI5 (CommonCrawl6), and Contriever-MS MARCO (Izacard et al., 2022) for BIO (Wikipedia), to maintain comparability with Gao et al. (2023b) and Asai et al. (2024). We use |P| = 3 passages throughout due to context length limitations. We use the same indices as provided by ALCE, subsequently, each passage has a length of 100 tokens. Answer Generation. We use the same instructions/prompts across all models, specifically the 6http://commoncrawl.org 11889 default prompts from ALCE. In contrast to the code of ALCE7, we used the chat templates across all models and baselines8 to render the inputs, since we observed crucial tokens missing during the finetuning procedure otherwise. The only exception is ELI5, where we observed that chat templates resulted in worse performance and used ALCE’s strategy plus relevant formatting tokens instead across all models and baselines. For inference, we use MAP with a beam size of 1, instead of sampling with temperature scaling as done in ALCE, assuming that this results in more factual outputs.9 Factual Consistency Model Our choice of using AlignScore as our FCM ϕ was further motivated by its dissimilarity to the citation evaluation model (Honovich et al., 2022) (in contrast to e.g. TrueTeacher (Gekhman et al., 2023) which in principle performed better but is more similar to the evaluation model). While AlignScore is an encoderbased RoBERTa model, TRUE (Honovich et al., 2022) is an encoder-decoder T5-11B model. Moreover, their training data is different. AlignScore was trained on 15 datasets of various tasks, including natural language inference, summarization, information retrieval, and paraphrasing while TRUE has only seen 6 natural language inference-related datasets. Training & Hyperparameters We set the learning rate to 3−4 and train for a total of 100 steps across all models and experiments. The maximum generation length is set to 256 tokens, however, most generations stay far below this limit (see Appendix A.3. We use adamw (Loshchilov and Hutter, 2019) as the optimizer. We use a batch size of 1 during training with gradient accumulation, resulting in an effective batch size of 4. 
For LoRA, we use a rank r = 4 and apply it to all parts of the attention mechanism. For fine-tuning, we exclude tokens of the prompts from the loss computation that are not part of the gold answer, so we are not fine-tuning the instructions, only the answers that follow after the instruction. The iterative training process stops after at most 8 iterations due to computational constraints. We further down-weight the 7https://github.com/princeton-nlp/ALCE 8https://huggingface.co/docs/transformers/ main/en/chat_templating 9Note that we do not use constrained decoding for citation generation despite its apparent suitability here. Yet, the position of citation markers within a sentence can be rather flexible, which we consider a desired property for naturally produced text. loss contribution of the EOS token to 0.02 since models tend to otherwise produce very short singlesentence responses. Implementation Details We use the Huggingface checkpoints for LLama2-7B10, MistralOrca7B11, and Mistral-Instruct-7B,12 as well as for all FCMs we considered in our experiments, namely AlignScore (RoBERTa-large, 355M parameters) 13, TRUE (T5, 11B parameters)14, (Steen et al., 2023)15 (DeBERTaV3, 304M parameters), and TrueTeacher (T5, 11B parameters)16. The Mistral models are licensed under Apache2.0, AlignScore and (Steen et al., 2023) are licensed under MIT, TrueTeacher is licensed under cc-by-nc-4.0, and Llama2 is licensed under the llama license17. Subsequently, our research is consistent with the licenses’ intended use. The models are intended to be used for English. We use the ALCE data and prompts from their repository18, and Huggingface’s Datasets for Hagrid19. For running the experiments, we used a combination of A100 40GB and A10G with 23GB GPUs. We use the Python package rouge-score20 for computing the ROUGE, and the package mauve-text21 for computing the MAUVE score. We adopt the code of ALCE for the citation and correctness metrics and define passagegrounded correctness ourselves. We use NLTK (Bird and Loper, 2004) for some pre-and postprocessing steps. Baselines. Results from state-of-the-art methods are taken from their respective papers. The only exception is Self-RAG on ELI5, which we have run by ourselves using the authors’ repository and 10https://huggingface.co/meta-llama/ Llama-2-7b-chat-hf 11https://huggingface.co/Open-Orca/ Mistral-7B-OpenOrca 12https://huggingface.co/mistralai/ Mistral-7B-Instruct-v0.1 13https://huggingface.co/yzha/AlignScore, and their repository https://github.com/yuh-zha/ AlignScore 14https://huggingface.co/google/t5_xxl_true_ nli_mixture 15https://huggingface.co/juliussteen/ DeBERTa-v3-FaithAug 16https://huggingface.co/google/t5_11b_ trueteacher_and_anli 17https://github.com/facebookresearch/llama/ blob/main/LICENSE 18https://github.com/princeton-nlp/ALCE 19https://huggingface.co/datasets/miracl/hagrid 20https://pypi.org/project/rouge-score/ 21https://pypi.org/project/mauve-text/ 11890 their models22. At the point of submission, their repository produces results which are worse than reported in their paper, as explained by the authors due to a bug23, which potentially impacts scores reported of Self-RAG on ELI5. While the authors have stated their intentions to fix this issue, the problem was not resolved as of our submission date. We will update their scores once a fix is available. A.3 Evaluation Detailed descriptions for each evaluation metric are shown in Table 8. Results shown in Table 2 are computed with default random seed 42. 
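To make the citation recall and precision definitions summarized in Table 8 concrete, the sketch below shows the sentence-level computation. The entails callable is an assumption standing in for the thresholded NLI-based evaluator ϕeval; edge cases handled by the official ALCE implementation (e.g., marker deduplication) are simplified here.

```python
def citation_scores(sentences, citations, passages, entails):
    """Sentence-level citation recall and precision following the definitions in Table 8.

    sentences: generated sentences s_i
    citations: list of passage-index lists C_i, one per sentence
    passages:  retrieved passage texts P
    entails:   callable (premise, hypothesis) -> bool; stands in for the thresholded
               factual-consistency evaluator phi_eval (an assumption for this sketch)
    """
    supported_sentences, relevant_citations, total_citations = 0, 0, 0
    for s, C in zip(sentences, citations):
        joint = " ".join(passages[c] for c in C)
        # Recall: the sentence must carry at least one citation and be entailed
        # by the concatenation of all cited passages.
        if C and entails(joint, s):
            supported_sentences += 1
        for c in C:
            total_citations += 1
            rest = " ".join(passages[o] for o in C if o != c)
            # A citation is irrelevant iff it does not support the sentence on its
            # own AND the remaining citations already fully support the sentence.
            irrelevant = (not entails(passages[c], s)) and bool(rest) and entails(rest, s)
            if not irrelevant:
                relevant_citations += 1
    recall = supported_sentences / max(len(sentences), 1)
    precision = relevant_citations / max(total_citations, 1)
    return recall, precision
```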
Ablation. Table 9 and Table 10 show the ablation results for CaLF on Mistral-Instruct 7B and Llama2-7B-chat, respectively. The results largely align with the observations made for MistralOrca-7B in Table 4. We observe that passage-grounded correctness either slightly decreases or remains comparable to the baseline when using weakly-supervised training without our focused learning objective LFL. Once the objective is added, correctness improves consistently across datasets and models over the baseline.
Adversarial Baselines. The automated metrics proposed by Gao et al. (2023b) do not explicitly control for the generation of irrelevant information (i.e., factual precision, such as FactScore).24 Subsequently, their metrics favour longer responses for coverage-based measures (i.e., correctness), as also pointed out by Asai et al. (2024). This raises the question of whether the automated metrics deployed can be tricked with trivial responses or certain response patterns. Table 6 shows results across metrics for several such baselines on ASQA. Our approach has an average response length of 71.2 tokens, comparable to typical fine-tuning with 61.0 tokens, both being substantially shorter than the dataset's average gold answer length of 113 tokens. In contrast, the retrieved passages themselves are 521.8 tokens long. Considering these passages as the answer, we indeed achieve a higher correctness score (50.6) while maintaining near-perfect citation scores; however, we observe a substantial decrease in both ROUGE-L and MAUVE scores. This is intuitive since the retrieved passages are very dissimilar in style from the gold answers. Moreover, MAUVE has an explicit length bias (Pillutla et al., 2021). When explicitly biasing CaLF to generate long sequences by removing the EOS token during fine-tuning and setting the generation limit to 256 tokens, we also observe a substantial increase in correctness while maintaining much of the attribution performance. Yet, again we observe a substantial decrease in ROUGE-L and MAUVE, highlighting the importance of keeping all performance metrics high while optimizing attribution, as achieved by CaLF. Finally, we consider replacing the citations made in generated responses from our model with: (i) citations to exclusively the first passage, (ii) citations to all passages. As seen in the table, citation scores are substantially worse than for our approach. While in principle (ii) should present an upper bound on attribution recall, we observe lower scores than our model even here. This can be explained by inaccuracies in the evaluation model, which is biased towards information at the beginning of the premise (scoring much worse for citations to the last passage).
22https://github.com/AkariAsai/self-rag
23https://github.com/AkariAsai/self-rag/issues/4
24The exclusion of such a metric in ALCE is likely due to gold answers not being designed to be factually comprehensive.
A.4 Human Evaluation
Human subjects are tasked to judge the models' generations regarding: (i) citation recall: judgement whether a generated sentence is fully supported by citations, (ii) citation precision: whether a citation partially or fully supports a sentence, (iii) informativeness: whether the generation helps to answer the question, (iv) coherence: whether generated sentences are semantically and syntactically well-connected, (v) fluency: whether all sentences are grammatically correct and well-readable. While (i), (ii), and (iii) are adopted from the human evaluation in Gao et al.
(2023b), we further measure (iv) and (v) as recommended in Zhong et al. (2022). We randomly sample 30 instances for each model for each dataset (ELI5, ASQA, BIO), resulting in 180 instances which are judged via the above criteria. The instances were judged by four subjects, distributed equally (45 samples per subject). Each subject was provided with detailed annotation guidelines, providing examples for each possible annotation option with explanations. The subjects were partially authors and partially volunteers. All subjects were informed how the data would be used and provided consent. All subjects are male under 65 years of age and either from the USA or the UK. 11891 Name Description Computation Gold Answer-based Metrics Fluency (MAUVE) Measures two types of errors: (i) model produces degenerate text (outside of human distribution), (ii) model does not yield diverse text (does not cover human distribution) Rλ = λP + (1 −λ)Q, summarizing KL divergence of KL(P|Rλ) and KL(Q|Rλ, for λ ∈(0, 1), computed via Monte-Carlo estimator (LM embeddings + quantization via k-means). Similarity (ROUGE-L) Measures longest matching sequence of words. Bad approximator for factuality (see e.g. here) Computes F1 for longest common subsequence (LCS). Recall being ratio over reference. Precision ratio over answer. Specialized Metrics Correctness Measures whether atomic units of information from gold answers appear in the generated answer. ASQA: Measures exact match of short answers in generated answers. ELI5: Measures whether atomic statements are consistent with generated answer. Passage-grounded Correctness Measures whether atomic units of information from gold answers appear in the generated answers and whether this information is supported by the retrieved passages. Eliminates to score responses that are correct but not grounded in retrieved passages. ASQA: Measures exact match of short answers in generated answers that can be attributed to retrieved passages. ELI5: Measures whether atomic statements are consistent with generated answer that can be attributed to retrieved passages . Citation Recall Measures the ratio of sentences in generated answers that are consistent with their cited sources/passages. Recall is 1 iff there exists at least one citation and sentence is consistent with citations considered jointly: ϕeval(⊕(Ci), si) = 1. Citation Precision Measures the ratio of citations in generated answer that are not irrelevant. Citation considered irrelevant iff: i) the citation itself does not support the attributed sentence: ϕeval(ci,j, si) = 0 ii) removing the citation does not affect rest of citations to attribute the sentence: ϕeval(⊕(Ci)\{ci,j}, si) = 1. Table 8: Overview of the LFQA evaluation metrics. Correctness, Fluency, and Citation scores are taken from the ALCE (Gao et al., 2023b). Passage-grounded Correctness is a metric we propose to measure information coverage for citable statements (i.e. excluding hallucinated correct information). ϕeval is the evaluation FCM, namely TRUE (Honovich et al., 2022). Correctness Citation Citation Method in P Recall Precision ASQA LLM 28.3 60.9 63.5 LLM + WS 24.1 (-4.2) 72.3 (+11.4) 73.9 (+10.4) LLM + WS + LFL 29.6 (+1.3) 79.2 (+19.7) 80.2 (+16.7) ELI5 LLM 9.1 55.2 40.1 LLM + WS 10.9 (+1.8) 60.9 (+5.4) 59.6 (+19.5) LLM + WS + LFL 12.5 (+3.4) 70.0 (+14.8) 67.0 (+26.9) Table 9: Ablation for CaLF (Mistral-Instruct-7B). WS: weakly-supervised training, LF L: Focused learning. 
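The objective LFL ablated in Tables 4, 9, and 10 builds on a per-token weighted language-modeling loss. As a minimal sketch (assuming a standard causal LM setup), the prompt-token masking and EOS down-weighting to 0.02 described in Appendix A.2 can be written as follows; the FCM-derived relevance weights that constitute the focused part of LFL are omitted, and the function name and normalization are illustrative rather than our exact implementation.

```python
import torch
import torch.nn.functional as F

def weighted_lm_loss(logits, labels, prompt_mask, eos_token_id, eos_weight=0.02):
    """Causal LM loss over answer tokens only, with the EOS contribution down-weighted.

    logits:      (batch, seq_len, vocab) model outputs
    labels:      (batch, seq_len) target token ids
    prompt_mask: (batch, seq_len) bool tensor, True at prompt/instruction positions
    """
    # Standard causal shift: position t predicts token t+1.
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = labels[:, 1:].contiguous()
    shift_prompt = prompt_mask[:, 1:].contiguous()

    per_token = F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
        reduction="none",
    ).view(shift_labels.shape)

    weights = torch.ones_like(per_token)
    weights[shift_labels == eos_token_id] = eos_weight  # down-weight EOS to 0.02
    weights[shift_prompt] = 0.0                          # exclude prompt tokens from the loss
    # The full focused-learning loss would further scale answer-token weights by
    # their FCM-derived factual relevance (not shown in this sketch).
    return (per_token * weights).sum() / weights.sum().clamp(min=1.0)
```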
Correctness Citation Citation Method in P Recall Precision ASQA LLM 23.4 58.7 55.3 LLM + WS 23.1 (-0.3) 69.6 (+10.9) 66.3 (+11.0) LLM + WS + LFL 30.7 (+7.3) 76.0 (+17.3) 72.5 (+17.2) ELI5 LLM 11.3 53.2 46.6 LLM + WS 9.1 (-2.1) 67.1 (+13.9) 66.4 (+19.8) LLM + WS + LFL 11.7 (+0.4) 71.2 (+18.0) 63.2 (+16.6) Table 10: Ablation for CaLF (Llama2-7B-chat). WS: weakly-supervised training, LF L: Focused learning. A.5 Qualitative Examples To qualitatively compare the produced answers by CaLF and our few-shot FT baseline we show a randomly selected output for each evaluation dataset. Results are shown in Figure 5, Figure 6, and Figure 7 for ASQA, ELI5, and BIO, respectively. 11892 Method Rouge-L Fluency Correctness Grounded Correct. Citation Recall Citaion Precision Avg length ASQA ChatGPT (Gao et al., 2023b) – 66.6 40.4 – 73.6 72.5 – Vicuna-13B (Gao et al., 2023b) – 82.6 31.9 – 51.1 50.1 – Self-RAG 7B (Asai et al., 2024) 35.7 74.3 30.0 – 66.9 67.8 – In-context (Llamav2-7B-chat) 35.9 84.1 34.5 25.2 50.4 50.0 93.3 Few-Shot FT (Llamav2-7B-chat) 34.7 71.3 33.1 23.4 58.7 55.3 52.5 In-context (Mistral-Instruct 7B) 36.4 86.6 34.1 28.0 21.7 23.6 75.9 Few-Shot FT (Mistral-Instruct 7B) 37.6 82.5 35.3 28.3 60.9 63.5 105.7 In-context (MistralOrca 7B) 38.8 53.9 40.1 32.4 52.5 61.0 64.5 Few-Shot FT (MistralOrca 7B) 37.5 81.9 37.3 25.8 57.5 55.2 61.0 Ours (Llamav2-7B-chat) 37.9 84.3 37.8 29.5 72.8 72.3 86.9 Ours (Mistral-Instruct 7B) 37.7 87.5 35.2 29.6 79.2 80.2 79.5 Ours (MistralOrca 7B) 40.5 80.1 39.9 33.8 84.1 83.8 71.2 ELI5 ChatGPT (Gao et al., 2023b) – 57.2 12.0 – 51.1 50.0 – Vicuna-13B (Gao et al., 2023b) – 58.2 10.0 – 15.6 19.6 – Self-RAG 7B (Asai et al., 2024) 16.9 32.6 9.7 5.4 23.3 33.9 – In-context (Llamav2-7B-chat) 19.7 37.7 14.1 8.6 39.9 27.6 110.7 Few-Shot FT (Llamav2-7B-chat) 21.3 54.9 17.8 11.3 53.2 46.6 138.4 In-context (Mistral-Instruct 7B) 20.5 62.3 17.4 11.7 43.8 44.9 111.2 Few-Shot FT (Mistral-Instruct 7B) 19.5 37.8 13.8 9.1 55.2 40.1 93.0 In-context (MistralOrca 7B) 20.8 27.7 20.5 12.4 45.4 41.8 94.8 Few-Shot FT (MistralOrca 7B) 20.5 44.9 18.4 11.0 57.3 51.0 100.8 Ours (Llamav2-7B-chat) 21.3 69.5 17.2 11.7 71.2 63.2 141.2 Ours (Mistral-Instruct 7B) 21.8 53.5 18.9 12.5 70.0 67.0 143.8 Ours (MistralOrca 7B) 20.7 68.3 18.6 12.5 72.1 66.6 108.3 Table 11: Main in-domain results on ASQA, ELI5 using CaLF for fine-tuning various instruction-tuned LLMs, using only 4 initial samples D. Results are shown for default random seed 42. Method Source Rouge Fluency Correct. Gr. Correct. Attr. Recall Attr. Precision Length Target Dataset: ASQA Self-RAG 7B Critique tokens 35.7 74.3 30.0 – 66.9 67.8 – Zero-Shot (Llama2-Chat-7B) – 36.1 47.5 35.6 27.1 24.3 42.8 126.9 Zero-Shot (MistralOrca) – 39.0 78.9 39.5 31.6 5.3 6.1 63.6 Few-shot FT (Llama2-Chat-7B) ELI5 37.2 74.0 37.7 31.6 67.9 62.2 150.6 Few-shot FT (MistralOrca) ELI5 39.7 90.1 38.5 31.4 73.8 69.7 94.1 FT (MistralOrca) Hagrid 36.7 66.1 37.8 29.5 50.8 51.8 57.6 Ours (Llama2-Chat-7B) ELI5 37.1 75.7 36.1 30.4 77.8 69.0 141.6 Ours (MistralOrca) ELI5 40.1 86.6 40.0 33.2 80.4 78.5 101.0 Ours (MistralOrca) Hagrid 39.7 80.8 38.6 32.3 78.9 81.1 64.9 Target Dataset: ELI5 Self-RAG 7B Instr. 
tuning w/ critique tokens 16.9 32.6 9.7 5.4 23.3 33.9 – Zero-Shot (Llama2-Chat-7B) – 20.0 26.5 15.3 9.7 17.8 39.0 111.1 Zero-Shot (MistralOrca) – 21.3 35.0 22.2 12.6 8.1 14.4 120.5 Few-shot FT (Llama2-Chat-7B) ASQA 17.1 20.8 11.4 6.8 33.3 32.6 70.4 Few-shot FT (MistralOrca) ASQA 20.9 41.1 19.7 10.6 39.8 41.0 92.1 FT (MistralOrca) Hagrid 21.3 58.1 20.9 12.7 28.6 37.0 112.4 Ours (Llama2-Chat-7B) ASQA 21.3 35.0 18.2 11.0 38.6 34.7 137.7 Ours (MistralOrca) ASQA 21.2 31.3 20.4 12.5 57.4 57.2 97.2 Ours (MistralOrca) Hagrid 21.1 32.3 20.2 12.9 54.0 55.3 101.7 Table 12: Results for our zero-shot domain transfer setting, when trained on a source dataset and evaluated on a different target dataset without any in-context instances. Results are shown for default random seed 42. 11893 Question: What is the genre of holes by louis sachar? Retrieved Passages: [1] Title: Louis Sachar Text: Louis Sachar Louis Sachar ( ; born March 20, 1954) is an American young-adult mystery-comedy author. He is best known for the "Wayside School" series and the award-winner, "Holes". "Holes" won the 1998 U.S. National Book Award for Young People’s Literature and the 1999 Newbery Medal for the year’s "most distinguished contribution to American literature for children". In 2013, it was ranked sixth among all children’s novels in a survey published by "School Library Journal". After graduating from high school, Sachar attended Antioch College for a semester before transferring to University of California, Berkeley, during which time he began helping [2] Title: Holes (novel) Text: American teenager with a criminal history. A new friendship with Ginny, who has cerebral palsy, a reunion with former friend X-Ray, a ticket-scalping scheme, a beautiful pop singer, and a frame-up all test Armpit’s resolve to "Just take small steps and keep moving forward". Holes (novel) Holes is a 1998 young adult mystery comedy novel written by Louis Sachar and first published by Farrar, Straus and Giroux. The book centers around an unlucky teenage boy named Stanley Yelnats, who is sent to Camp Green Lake, a juvenile corrections facility in a desert in Texas, after being falsely accused of theft. [3] Title: Holes (novel) Text: Holes (novel) Holes is a 1998 young adult mystery comedy novel written by Louis Sachar and first published by Farrar, Straus and Giroux. The book centers around an unlucky teenage boy named Stanley Yelnats, who is sent to Camp Green Lake, a juvenile corrections facility in a desert in Texas, after being falsely accused of theft. The plot explores the history of the area and how the actions of several characters in the past have affected Stanley’s life in the present. These interconnecting stories touch on themes such as racism, homelessness, illiteracy, and arranged marriage. The book was both a Few-shot FT Baseline Response: "Holes by Louis Sachar is a genre of young adult mystery comedy novel [2]. It was first published by Farrar, Straus and Giroux and won the 1998 U.S. National Book Award for Young People’s Literature and the 1999 Newbery Medal for the year’s ¨most distinguished contribution to American literature for children¨[1]. The novel centers around an unlucky teenage boy named Stanley Yelnats, who is sent to Camp Green Lake, a juvenile corrections facility in a desert in Texas, after being falsely accused of theft [2]." CaLF Response: The genre of "Holes" by Louis Sachar is a young adult mystery-comedy [1]. The novel explores themes such as racism, homelessness, illiteracy, and arranged marriage [3]. 
Figure 5: ASQA Example of generated answers with citations given a question and retrieved passages. We compare the answers produced by the Few-shot FT baseline model with those generated by CaLF. 11894 Question: Why are soap operas instantly identifiable. Retrieved Passages: [1] Title: New Soap Chronicles of the Sun To Debut On TV5Monde USA - Soap Opera Spy Text: with Soap Opera Spy for all the latest news, updates and spoilers on all of your favorite daytime and primetime television shows right here! Young and the Restless Sharon Case Talks Mariah-Sharon Drama and Why Nick Is The One Days Of Our Lives Spoilers: JJ Pays It Forward – Talks New Nurse Out Of Suicide, Instantly Smitten With Her [2] Title: TV Soap Opera - English, Belonging Text: Number 96, which set a new standard and wholly different tone for Australian TV ’soapies’. Other popular and successful local soap operas included The Box (1974), The Young Doctors (1976), The Restless Years (1977), and Prisoner (1979). Teacher to survey the class and lead class discussion: What television soap operas do you watch and why? What soap character do you most closely identify with and why? Viewing questions: Megan Spencer describes the new phase of Neighbours as breaking with the old tradition of “marketing ourselves back to ourselves”. How important do you think it is for viewers to see their [3] Title: Framing The Hobbit – The Golden Lasso Text: per second. What does this mean? Well normal movies and TV shows are shown at 24 frames per second and our minds naturally fill in the missing bits. We don’t even realize that it’s happening. However, soap operas, home movies, and other videos are shot at 30 frames per second. This is why you can instantly tell a soap opera or reality TV show from other programs just by looking at it for a few seconds. In general, our minds link video and high frame rates to low budget programs. Despite Peter Jackson’s assurance that it will revolutionize movies, many Few-shot FT Baseline Response: Soap operas are instantly identifiable because of their distinct tone and style, which is often characterized by a specific storyline and target audience [1]. Additionally, soap operas are easily distinguishable from other programs by their frame rate, which is 30 frames per second, as opposed to the standard 24 frames per second [3]. CaLF Response: Soap operas are instantly identifiable because they are shown at 30 frames per second, which is different from the 24 frames per second used for normal movies and TV shows [3]. Our minds naturally fill in the missing bits, making it easy to tell a soap opera or reality TV show from other programs just by looking at it for a few seconds [3]. Figure 6: ELI5 Example of generated answers with citations given a question and retrieved passages. We compare the answers produced by the Few-shot FT baseline model with those generated by CaLF. 11895 Question: Tell me a bio of Riccardo Tisci. Retrieved Passages: [1] Title: Riccardo Tisci Text: Riccardo Tisci Riccardo Tisci (; born 1974) is an Italian fashion designer. He studied in Italy at the Design Istituto d’Arte Applicata in Cantù until the age of 17, and then graduated from London’s Central Saint Martins Academy in 1999. In 2005, Tisci was given the title of creative director for Givenchy Women’s haute couture and ready-to-wear lines. In May 2008 he was additionally named as menswear and accessories designer of the Givenchy men’s division. 
In March 2018, it was announced he had been appointed chief creative officer of Burberry, succeeding Christopher Bailey. Tisci’s apparent fascination with Gothic touches (dark, [2] Title: Riccardo Tisci Text: rappers Jay-Z and Kanye West to create the cover for "Watch the Throne", as well as the covers for singles H•A•M and Otis. He has now recently collaborated again with Kanye West for his label GOOD Music on their album "Cruel Summer". Riccardo Tisci Riccardo Tisci (; born 1974) is an Italian fashion designer. He studied in Italy at the Design Istituto d’Arte Applicata in Cantù until the age of 17, and then graduated from London’s Central Saint Martins Academy in 1999. In 2005, Tisci was given the title of creative director for Givenchy Women’s haute couture and ready-to-wear lines. [3] Title: Riccardo Tisci Text: Tisci worked for companies such as Puma and Coccapani before signing a three-year contract with Ruffo Research, a company that has helped launch the careers of several fashion designers, such as Sophia Kokosalaki. Tisci has mentioned in an interview that he would like to work with an Iranian brand, although it is not known how realistic his wish is. Upon the expiration of his contract in July 2004, Tisci spent time living in India, where he began to work on his own collection. In September 2004, during the Milan Fashion Week, Tisci debuted his first Riccardo Tisci Collection for Fall Few-shot FT Baseline Response: Several years after graduating from London’s Central Saint Martins Academy in 1999, Riccardo Tisci was given the title of creative director for Givenchy Women’s haute couture and ready-to-wear lines in 2005 [1]. Later, in May 2008, he was additionally named as menswear and accessories designer of the Givenchy men’s division [1]. And in March 2018, he was appointed chief creative officer of Burberry, succeeding Christopher Bailey [3]. Tisci’s apparent fascination with Gothic touches can be seen in his collaboration with rappers Jay-Z and Kanye West to create the cover for ¨Watch the Throne¨, as well as the covers for singles H•A•M and Otis [2]. He has also collaborated again with Kanye West for his label GOOD Music on their album ¨Cruel Summer¨[2]. CaLF Response: Riccardo Tisci is an Italian fashion designer, born in 1974 [1]. He studied at the Design Istituto d’Arte Applicata in Cantù in Italy until the age of 17 and later graduated from London’s Central Saint Martins Academy in 1999 [1]. Tisci was appointed creative director for Givenchy Women’s haute couture and ready-to-wear lines in 2005 [1]. In May 2008, he was also named as menswear and accessories designer for the Givenchy men’s division [1]. In March 2018, Tisci was appointed chief creative officer of Burberry, succeeding Christopher Bailey [1]. Tisci has collaborated with rappers Jay-Z and Kanye West to create the cover for "Watch the Throne" and the covers for singles H•A•M and Otis [2]. He has also worked with Kanye West for his label GOOD Music on their album "Cruel Summer" [2]. Figure 7: BIO Example of generated answers with citations given a question and retrieved passages. We compare the answers produced by the Few-shot FT baseline model with those generated by CaLF. 11896
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11897–11916 August 11-16, 2024 ©2024 Association for Computational Linguistics Improving Text Embeddings with Large Language Models Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei Microsoft Corporation {wangliang,nanya,xiaolhu,yang.linjun,ranganm,fuwei}@microsoft.com Abstract In this paper, we introduce a novel and simple method for obtaining high-quality text embeddings using only synthetic data and less than 1k training steps. Unlike existing methods that often depend on multi-stage intermediate pretraining with billions of weakly-supervised text pairs, followed by fine-tuning with a few labeled datasets, our method does not require building complex training pipelines or relying on manually collected datasets that are often constrained by task diversity and language coverage. We leverage proprietary LLMs to generate diverse synthetic data for hundreds of thousands of text embedding tasks across 93 languages. We then fine-tune open-source decoder-only LLMs on the synthetic data using standard contrastive loss. Experiments demonstrate that our method achieves strong performance on highly competitive text embedding benchmarks without using any labeled data. Furthermore, when fine-tuned with a mixture of synthetic and labeled data, our model sets new state-of-the-art results on the BEIR and MTEB benchmarks. 1 Introduction Text embeddings are vector representations of natural language that encode its semantic information. They are widely used in various natural language processing (NLP) tasks, such as information retrieval (IR), question answering, semantic textual similarity, bitext mining, item recommendation, etc. In the field of IR, the first-stage retrieval often relies on text embeddings to efficiently recall a small set of candidate documents from a large-scale corpus using approximate nearest neighbor search techniques. Embedding-based retrieval is also a crucial component of retrieval-augmented generation (RAG) (Lewis et al., 2020), which is an emerging paradigm that enables large language models (LLMs) to access dynamic external knowledge without modifying the model parameters. Source attribution of generated text is another important application of text embeddings (Gao et al., 2023) that can improve the interpretability and trustworthiness of LLMs. Previous studies have demonstrated that weighted average of pre-trained word embeddings (Pennington et al., 2014; Arora et al., 2017) is a strong baseline for measuring semantic similarity. However, these methods fail to capture the rich contextual information of natural language. With the advent of pre-trained language models (Devlin et al., 2019), Sentence-BERT (Reimers and Gurevych, 2019) and SimCSE (Gao et al., 2021) have been proposed to learn text embeddings by fine-tuning BERT on natural language inference (NLI) datasets. To further enhance the performance and robustness of text embeddings, state-of-the-art methods like E5 (Wang et al., 2022b) and BGE (Xiao et al., 2023) employ a more complex multi-stage training paradigm that first pre-trains on billions of weakly-supervised text pairs, and then fine-tunes on several high-quality labeled datasets. Existing multi-stage approaches suffer from several drawbacks. Firstly, they entail a complex multistage training pipeline that demands substantial engineering efforts to curate large amounts of relevance pairs. 
Secondly, they rely on manually collected datasets that are often constrained by the diversity of tasks and the coverage of languages. For instance, Instructor (Su et al., 2023) is only trained on instructions from 330 English datasets, whereas BGE (Xiao et al., 2023) only focuses on high-resource languages such as English and Chinese. Moreover, most existing methods employ BERT-style encoders as the backbone, neglecting the recent advances of training better LLMs and related techniques such as context length extension (Rozière et al., 2023). In this paper, we propose a novel method for text embeddings that leverages LLMs to overcome the 11897 limitations of existing approaches. We use proprietary LLMs to generate synthetic data for a diverse range of text embedding tasks in 93 languages, covering hundreds of thousands of embedding tasks. Specifically, we use a two-step prompting strategy that first prompts the LLMs to brainstorm a pool of candidate tasks, and then prompts the LLMs to generate data conditioned on a given task from the pool. To cover various application scenarios, we design multiple prompt templates for each task type and combine the generated data from different templates to boost diversity. For the text embedding models, we opt for fine-tuning powerful open-source LLMs rather than small BERT-style models. Since LLMs such as Mistral (Jiang et al., 2023) have been extensively pre-trained on webscale data, contrastive pre-training that proves to be important for BERT models (Wang et al., 2022b) offers little additional benefit. We demonstrate that Mistral-7B, when finetuned solely on synthetic data, attains competitive performance on the BEIR (Thakur et al., 2021) and MTEB (Muennighoff et al., 2023) benchmarks. This is particularly intriguing considering that this setting does not involve any labeled data. When fine-tuned on a mixture of synthetic and labeled data, our model achieves new state-of-the-art results, surpassing previous methods by a significant margin (+2%). The entire training process requires less than 1k steps. Moreover, we empirically validate that our model can effectively perform personalized passkey retrieval for inputs up to 32k tokens by altering the rotation base of the position embeddings, extending the context length beyond the conventional 512 token limit. Regarding its multilinguality, our model excels on high-resource languages. However, for low-resource languages, there is still room for improvement as current opensource LLMs are not adequately pre-trained on them. 2 Related Work Text Embeddings are continuous lowdimensional representations of text and have been extensively applied to various downstream tasks such as information retrieval, question answering, and retrieval-augmented generation (RAG). Early work on text embeddings includes latent semantic indexing (Deerwester et al., 1990) and weighted average of word embeddings (Mikolov et al., 2013). More recent methods exploit supervision from natural language inference (Bowman et al., 2015) and labeled query-document pairs, such as the MS-MARCO passage ranking dataset (Campos et al., 2016), to train text embeddings (Reimers and Gurevych, 2019; Conneau et al., 2017; Gao et al., 2021). However, labeled data are often limited in terms of task diversity and language coverage. To address this challenge, methods like Contriever (Izacard et al., 2021), OpenAI Embeddings (Neelakantan et al., 2022), E5 (Wang et al., 2022b), and BGE (Xiao et al., 2023) adopt a multi-stage training paradigm. 
They first pre-train on large-scale weakly-supervised text pairs using contrastive loss and then fine-tune on small-scale but high-quality datasets. In this paper, we demonstrate that it is possible to obtain state-of-the-art text embeddings with single-stage training. Synthetic Data Synthetic data generation is a widely studied topic in information retrieval research, with various methods proposed to enhance retrieval systems with artificially created data. For instance, Doc2query (Nogueira et al., 2019), InPars (Bonifacio et al., 2022), and Promptagator (Dai et al., 2022) generate synthetic queries for unlabeled documents, which are then leveraged for document expansion or model training. GPL (Wang et al., 2022a) employs a cross-encoder to produce pseudo-labels for query-document pairs. Similarly, Query2doc (Wang et al., 2023) generates pseudo-documents for query expansion by fewshot prompting LLMs. Unlike these methods, our approach does not rely on any unlabeled documents or queries and thus can generate more diverse synthetic data. Another related line of work focuses on knowledge distillation from black-box LLMs by training on synthetic data generated from them. DINO (Schick and Schütze, 2021) generates synthetic text pairs for semantic textual similarity. Unnatural Instructions (Honovich et al., 2022) is a synthetic instruction following dataset by prompting existing LLMs. Orca (Mukherjee et al., 2023) and Phi (Gunasekar et al., 2023) propose to train better small language models by using high-quality synthetic data from GPT-3.5/4 (OpenAI, 2023). Large Language Models With the popularization of ChatGPT, large language models (LLMs) have demonstrated remarkable capabilities in in11898 You have been assigned a retrieval task: {task} Your mission is to write one text retrieval example for this task in JSON format. The JSON object must contain the following keys: - "user_query": a string, a random user search query specified by the retrieval task. - "positive_document": a string, a relevant document for the user query. - "hard_negative_document": a string, a hard negative document that only appears relevant to the query. Please adhere to the following guidelines: - The "user_query" should be {query_type}, {query_length}, {clarity}, and diverse in topic. - All documents should be at least {num_words} words long. - Both the query and documents should be in {language}. … (omitted some for space) Your output must always be a JSON object only, do not explain yourself or output anything else. Be creative! {"user_query": "How to use Microsoft Power BI for data analysis", "positive_document": "Microsoft Power BI is a sophisticated tool that requires time and practice to master. In this tutorial, we'll show you how to navigate Power BI … (omitted) ", “hard_negative_document”: “Excel is an incredibly powerful tool for managing and analyzing large amounts of data. Our tutorial series focuses on how you…(omitted)” } Brainstorm a list of potentially useful text retrieval tasks. Here are a few examples for your reference: - Provided a scientific claim as query, retrieve documents that help verify or refute the claim. - Search for documents that answers a FAQ-style query on children's nutrition. Please adhere to the following guidelines: - Specify what the query is, and what the desired documents are. - Each retrieval task should cover a wide range of queries, and should not be too specific. 
Your output should always be a python list of strings only, with about 20 elements, and each element corresponds to a distinct retrieval task in one sentence. Do not explain yourself or output anything else. Be creative! ["Retrieve company's financial reports for a given stock ticker symbol.", "Given a book name as a query, retrieve reviews, ratings and summaries of that book.", "Search for scientific research papers supporting a medical diagnosis for a specified disease.“ … (omitted for space)] new session Figure 1: An example two-step prompt template for generating synthetic data with GPT-4. We first prompt GPT-4 to brainstorm a list of potential retrieval tasks, and then generate (query, positive, hard negative) triplets for each task. “{...}” denotes a placeholder that will be replaced by sampling from a predefined set of values. Full prompts are available in Appendix C. struction following and few-shot in-context learning (Brown et al., 2020). However, the most advanced LLMs such as GPT-4 (OpenAI, 2023) are proprietary and have little technical details disclosed. To bridge the gap between proprietary and open-source LLMs, several notable efforts have been made, such as LLaMA-2 (Touvron et al., 2023) and Mistral (Jiang et al., 2023) models. A major limitation of LLMs is that they lack awareness of recent events and private knowledge. This issue can be partly mitigated by augmenting LLMs with information retrieved from external sources, a technique known as retrieval-augmented generation (RAG). On the other hand, LLMs can also serve as foundation models to enhance text embeddings. RepLLaMA (Ma et al., 2023) proposes to fine-tune LLaMA-2 with bi-encoder architecture for ad-hoc retrieval. SGPT (Muennighoff, 2022), GTR (Ni et al., 2022b), and Udever (Zhang et al., 2023a) demonstrate the scaling law of text embeddings empirically, but their performance still falls behind small bidirectional encoders such as E5 (Wang et al., 2022b) and BGE (Xiao et al., 2023). In this paper, we present a novel approach to train state-of-the-art text embeddings by exploiting the latest advances of LLMs and synthetic data. 3 Method 3.1 Synthetic Data Generation Utilizing synthetic data generated by advanced LLMs such as GPT-4 presents a compelling opportunity, especially in terms of enhancing diversity 11899 across a multitude of tasks and languages. Such diversity is essential for developing robust text embeddings that can perform well across different tasks, be it semantic retrieval, textual similarity, or clustering. To generate diverse synthetic data, we propose a simple taxonomy that categorizes embedding tasks into several groups, and then apply different prompt templates to each group. Asymmetric Tasks This category comprises tasks where the query and document are semantically related but are not paraphrases of each other. Depending on the length of the query and document, we further divide asymmetric tasks into four subgroups: short-long match, long-short match, short-short match, and long-long match. For instance, short-long match tasks involve a short query and a long document, which is a typical scenario in commercial search engines. For each subgroup, we design a two-step prompt template that first prompts LLMs brainstorm a list of tasks, and then generates a concrete example conditioned on the task definition. In Figure 1, we show an example prompt for the short-long match subgroup. The full output is available in Table 16. The outputs from GPT-4 are mostly coherent and of high quality. 
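To make the two-step generation flow of Figure 1 concrete, the sketch below shows one plausible implementation; the `chat_completion` helper is a hypothetical stand-in for the GPT-4 API call, the prompt strings are abbreviated, and only a subset of the placeholder pools from Appendix C is shown.

```python
import ast
import json
import random

def chat_completion(prompt: str, temperature: float = 1.0, top_p: float = 1.0) -> str:
    """Hypothetical wrapper around the LLM API used for synthetic data generation."""
    raise NotImplementedError

# Abbreviated versions of the two prompts in Figure 1.
BRAINSTORM_PROMPT = (
    "Brainstorm a list of potentially useful text retrieval tasks. ... "
    "Your output must always be a python list of strings only, with about 20 elements ..."
)
GENERATE_TEMPLATE = (
    "You have been assigned a retrieval task: {task}\n"
    "Your mission is to write one text retrieval example for this task in JSON format. ...\n"
    '- The "user_query" should be {query_type}, {query_length}, {clarity}, and diverse in topic.\n'
    "- All documents should be at least {num_words} words long.\n"
    "- Both the query and documents should be in {language}.\n..."
)

# A subset of the placeholder pools listed in Appendix C (Table 8).
PLACEHOLDERS = {
    "query_type": ["extremely long-tail", "long-tail", "common"],
    "query_length": ["less than 5 words", "5 to 15 words", "at least 10 words"],
    "clarity": ["clear", "understandable with some effort", "ambiguous"],
    "num_words": [50, 100, 200, 300, 400, 500],
    "language": ["English", "French", "Japanese"],  # drawn from the XLM-R language list in practice
}

def two_step_generation(num_examples: int) -> list:
    # Step 1 (new session): brainstorm a pool of candidate task definitions.
    tasks = ast.literal_eval(chat_completion(BRAINSTORM_PROMPT))

    examples = []
    for _ in range(num_examples):
        # Step 2: generate a (query, positive, hard negative) triplet conditioned on a sampled task,
        # with placeholder values drawn at runtime to boost diversity.
        values = {key: random.choice(pool) for key, pool in PLACEHOLDERS.items()}
        prompt = GENERATE_TEMPLATE.format(task=random.choice(tasks), **values)
        try:
            examples.append(json.loads(chat_completion(prompt)))
        except json.JSONDecodeError:
            continue  # outputs that do not conform to the JSON format are discarded
    return examples
```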
In our preliminary experiments, we also attempted to generate the task definition and query-document pairs using a single prompt, but the data diversity was not as satisfactory as the proposed two-step approach. Symmetric Tasks Symmetric tasks involve queries and documents that have similar semantic meanings but different surface forms. We examine two application scenarios: monolingual semantic textual similarity (STS) and bitext retrieval. We design two distinct prompt templates for each scenario, tailored to their specific objectives. Since the task definition is straightforward, we omit the brainstorming step for symmetric tasks. To further boost the diversity of the prompts and thus the synthetic data, we incorporate several placeholders in each prompt template, whose values are randomly sampled at runtime. For example, in Figure 1, the value of “{query_length}” is sampled from the set “{less than 5 words, 5-10 words, at least 10 words}”. To generate multilingual data, we sample the value of “{language}” from the language list of XLM-R (Conneau et al., 2020), giving more weight to high-resource languages. Any generated data that does not conform to the predefined JSON format are discarded during the parsing process. We also remove duplicates based on exact string matching. 3.2 Training Given a relevant query-document pair (q+, d+), we first apply the following instruction template to the original query q+ to generate a new one q+ inst: q+ inst = Instruct: {task_definition} \n Query: {q+} (1) where “{task_definition}” is a placeholder for a one-sentence description of the embedding task. For generated synthetic data, we use the outputs from the brainstorming step. For other datasets, such as MS-MARCO, we manually craft the task definitions and apply them to all the queries in the dataset. We do not modify the document side with any instruction prefix. In this way, the document index can be prebuilt, and we can customize the task to perform by changing only the query side. Given a pretrained LLM, we append an [EOS] token to the end of the query and document, and then feed them into the LLM to obtain the query and document embeddings (hq+ inst, hd+) by taking the last layer [EOS] vector. To train the embedding model, we adopt the standard InfoNCE loss L over the in-batch negatives and hard negatives: min L = −log φ(q+ inst, d+) φ(q+ inst, d+) + X ni∈N (φ(q+ inst, ni)) (2) where N denotes the set of all negatives, and φ(q, d) is a function that computes the matching score between query q and document d. In this paper, we adopt the temperature-scaled cosine similarity function as follows: φ(q, d) = exp(1 τ cos(hq, hd)) (3) τ is a temperature hyper-parameter, which is fixed to 0.02 in our experiments. 4 Experiments 4.1 Statistics of the Synthetic Data Figure 2 presents the statistics of our generated synthetic data. We manage to generate 500k examples with 150k unique instructions using Azure 11900 short-long 167k long-short 122k short-short 13k long-long 17k bitext 89k sts 99k distribution of task types English 43.1% Polish 3.0% Japanese 2.9% Italian 2.9% Russian 2.9% Indonesian 2.9% German 2.9% Persian 2.9% Spanish 2.8% Chinese 2.8% French 2.8% Portuguese 2.8% Dutch 2.8% Arabic 2.7% Others 19.8% distribution of languages Figure 2: Task type and language statistics of the generated synthetic data (see Section 3.1 for task type definitions). The “Others” category contains the remaining languages from the XLM-R language list. # of datasets → Class. Clust. PairClass. Rerank Retr. STS Summ. 
Avg 12 11 3 4 15 10 1 56 Unsupervised Models Glove (Pennington et al., 2014) 57.3 27.7 70.9 43.3 21.6 61.9 28.9 42.0 SimCSEbert-unsup (Gao et al., 2021) 62.5 29.0 70.3 46.5 20.3 74.3 31.2 45.5 Supervised Models SimCSEbert-sup (Gao et al., 2021) 67.3 33.4 73.7 47.5 21.8 79.1 23.3 48.7 Contriever (Izacard et al., 2021) 66.7 41.1 82.5 53.1 41.9 76.5 30.4 56.0 GTRxxl (Ni et al., 2022b) 67.4 42.4 86.1 56.7 48.5 78.4 30.6 59.0 Sentence-T5xxl (Ni et al., 2022a) 73.4 43.7 85.1 56.4 42.2 82.6 30.1 59.5 E5large-v2 (Wang et al., 2022b) 75.2 44.5 86.0 56.6 50.6 82.1 30.2 62.3 GTElarge (Li et al., 2023) 73.3 46.8 85.0 59.1 52.2 83.4 31.7 63.1 BGElarge-en-v1.5 (Xiao et al., 2023) 76.0 46.1 87.1 60.0 54.3 83.1 31.6 64.2 Ours E5mistral-7b + full data 78.5 50.3 88.3 60.2 56.9 84.6 31.4 66.6 w/ synthetic data only 78.2 50.5 86.0 59.0 46.9 81.2 31.9 63.1 w/ synthetic + msmarco 78.3 49.9 87.1 59.5 52.2 81.2 32.7 64.5 Table 1: Results on the MTEB benchmark (Muennighoff et al., 2023) (56 datasets in the English subset). The numbers are averaged for each category. Please refer to Table 17 for the scores per dataset. OpenAI Service 1, among which 25% are generated by GPT-35-Turbo and others are generated by GPT-4. The total token consumption is about 180M. The predominant language is English, with coverage extending to a total of 93 languages. For the bottom 75 low-resource languages, there are about 1k examples per language on average. Please see Table 16 in the appendix for examples of synthetic data. In terms of data quality, we find that a portion of GPT-35-Turbo outputs do not strictly follow the guidelines specified in the prompt templates. Nevertheless, the overall quality remains acceptable, and preliminary experiments have demonstrated the benefits of incorporating this data subset. 1https://oai.azure.com/ 4.2 Model Fine-tuning and Evaluation The pretrained Mistral-7b (Jiang et al., 2023) checkpoint is fine-tuned for 1 epoch using the loss in Equation 2. We follow the training recipe from RankLLaMA (Ma et al., 2023) and utilize LoRA (Hu et al., 2022) with rank 16. To further reduce GPU memory requirement, techniques including gradient checkpointing, mixed precision training, and DeepSpeed ZeRO-3 are applied. For the training data, we utilize both the generated synthetic data and a collection of 13 public datasets, yielding approximately 1.8M examples after sampling. More details are available in Appendix A. To provide a fair comparison with some previous work, we also report results when the only labeled supervision is the MS-MARCO passage 11901 High-resource Languages Low-resource Languages en fr es ru te hi bn sw BM25 (Zhang et al., 2023b) 35.1 18.3 31.9 33.4 49.4 45.8 50.8 38.3 mDPR (Zhang et al., 2023b) 39.4 43.5 47.8 40.7 35.6 38.3 44.3 29.9 mE5base (Wang et al., 2024) 51.2 49.7 51.5 61.5 75.2 58.4 70.2 71.1 mE5large (Wang et al., 2024) 52.9 54.5 52.9 67.4 84.6 62.0 75.9 74.9 E5mistral-7b + full data 57.3 55.2 52.2 67.7 73.9 52.1 70.3 68.4 Table 2: nDCG@10 on the dev set of the MIRACL dataset for both high-resource and low-resource languages. We select the 4 high-resource languages and the 4 low-resource languages according to the number of candidate documents. The numbers for BM25 and mDPR come from Zhang et al. (2023b). For the complete results on all 16 languages, please see Table 6. Retrieval Classification MTEB All 20 30 40 50 60 70 80 90 Performance +8.2 +4.3 +5.7 XLM-R-large + full data original w/ cont. 
pre-train Retrieval Classification MTEB All 20 30 40 50 60 70 80 90 Performance +0.0 +0.2 +0.1 E5-mistral-7b + full data original w/ cont. pre-train Figure 3: Effects of contrastive pre-training. Detailed numbers are in Appendix Table 7. ranking (Campos et al., 2016) dataset. We evaluate the trained model on the MTEB benchmark (Muennighoff et al., 2023). Note that the retrieval category in MTEB corresponds to the 15 publicly available datasets in the BEIR benchmark (Thakur et al., 2021). Evaluation of one model takes about 3 days on 8 V100 GPUs due to the need to encode a large number of documents. Although our model can accommodate sequence length beyond 512, we only evaluate on the first 512 tokens for efficiency. Official metrics are reported for each category. For more details about the evaluation protocol, please refer to the original papers (Muennighoff et al., 2023; Thakur et al., 2021). 4.3 Main Results In Table 1, our model “E5mistral-7b + full data” attains the highest average score on the MTEB benchmark, outperforming the previous state-of-the-art model by 2.4 points. In the “w/ synthetic data only” setting, no labeled data is used for training, and yet the performance remains quite competitive. We posit that generative language modeling and text embeddings are the two sides of the same coin, with both tasks requiring the model to have a deep understanding of the natural language. Given an embedding task definition, a truly robust LLM should be able to generate training data on its own and then be transformed into an embedding model through light-weight fine-tuning. Our experiments shed light on the potential of this direction, and more research is needed to fully explore it. Model BEIR MTEB OpenAI text-embedding-3-large 55.4 64.6 Cohere-embed-english-v3.0 55.0 64.5 voyage-lite-01-instruct 55.6 64.5 UAE-Large-V1 54.7 64.6 E5mistral-7b + full data 56.9 66.6 Table 3: Comparison with commercial models and the model that tops the MTEB leaderboard (as of 202312-22) (Li and Li, 2023). “BEIR” is the average nDCG@10 score over 15 public datasets in the BEIR benchmark (Thakur et al., 2021). “MTEB” is the average score over 56 datasets in the English subset of the MTEB benchmark (Muennighoff et al., 2023). For the commercial models listed here, little details are available on their model architectures and training data. In Table 3, we also present a comparison with several commercial text embedding models. However, due to the lack of transparency and documentation about these models, a fair comparison is not feasible. We focus especially on the retrieval per11902 Query: what is the pass key for Malayah Graves? Doc1: <prefix filler> Malayah Graves's pass key is 123. Remember it. 123 is the pass key for Malayah Graves. <suffix filler> Doc2: <prefix filler> Cesar McLean's pass key is 456. Remember it. 456 is the pass key for Cesar McLean. <suffix filler> …… Figure 4: Illustration of the personalized passkey retrieval task adapted from Mohtashami and Jaggi (2023). The “<prefix filler>” and “<suffix filler>” are repeats of “The grass is green. The sky is blue. The sun is yellow. Here we go. There and back again.” In addition, each document has a unique person name and a random passkey inserted at a random position. The task is to retrieve the document that contains the given person’s passkey from 100 candidates. 
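As a rough illustration of how such passkey documents can be constructed, the sketch below follows the recipe in the Figure 4 caption; the filler repeat count and three-digit passkeys are illustrative choices (the paper varies the overall context length from 256 to 32k tokens), and the function names are ours.

```python
import random

FILLER = ("The grass is green. The sky is blue. The sun is yellow. "
          "Here we go. There and back again. ")

def make_passkey_document(name: str, passkey: int, filler_repeats: int = 200) -> str:
    """One candidate document: the passkey sentences sit at a random position inside repeated filler."""
    split = random.randint(0, filler_repeats)
    info = (f"{name}'s pass key is {passkey}. Remember it. "
            f"{passkey} is the pass key for {name}. ")
    return FILLER * split + info + FILLER * (filler_repeats - split)

def make_retrieval_instance(names: list) -> tuple:
    """A query plus 100 candidate documents; returns the index of the document holding the answer."""
    chosen = random.sample(names, 100)                   # each candidate gets a unique person name
    passkeys = [random.randint(100, 999) for _ in chosen]
    docs = [make_passkey_document(n, k) for n, k in zip(chosen, passkeys)]
    gold = random.randrange(100)
    query = f"what is the pass key for {chosen[gold]}?"
    return query, docs, gold
```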
256 512 1k 2k 4k 8k 16k 32k Context Length 0 20 40 60 80 100 Top1 Accuracy window 4k, base 10^4 window 32k, base 10^4 window 32k, base 10^5 window 32k, base 10^6 Figure 5: Accuracy of personalized passkey retrieval as a function of input context length. For each context length, we randomly generate 50 queries and compute the top-1 accuracy. formance on the BEIR benchmark, since retrievalaugmented generation is an emerging technique to enhance LLM with external knowledge and proprietary data. As Table 3 shows, our model outperforms the current commercial models by a significant margin. 4.4 Multilingual Retrieval To assess the multilingual capabilities of our model, we conduct an evaluation on the MIRACL dataset (Zhang et al., 2023b), which comprises human-annotated queries and relevance judgments across 18 languages. The validation set contains labels for 16 languages. As shown in Table 2, our model surpasses mE5large on high-resource languages, notably on English. Nevertheless, for lowresource languages, our model remains suboptimal compared to mE5base. We attribute this to the fact that Mistral-7B is predominantly pre-trained on English data, and we anticipate that future multilingual LLMs will leverage our method to bridge this gap. To evaluate our model’s cross-lingual retrieval capability, we report Bitext mining results in Table 4. For baselines including mContriever (Izacard et al., 2021), LaBSE (Feng et al., 2022), and mE5 (Wang et al., 2024), we evaluate the results BUCC 2018 4 langs Tatoeba 112 langs mContriever 93.7 37.7 LaBSE 98.8 81.1 mE5base 98.1 68.1 mE5large 98.6 75.7 E5mistral-7b 98.9 70.1 Table 4: Bitext mining results. BUCC 2018 (Zweigenbaum et al., 2018) contains 4 high-resource languages. Tatoeba (Artetxe and Schwenk, 2019) consists of 112 English-centric language pairs. using publicly available checkpoints. Our observations indicate that, similar to the MIRACL retrieval, E5mistral-7b excels in bitext mining for high-resource languages only. 5 Analysis 5.1 Is Contrastive Pre-training Necessary? Weakly-supervised contrastive pre-training is one of the key factors behind the success of existing text embedding models. For instance, Contriever (Izacard et al., 2021) treats random cropped spans as positive pairs for pre-training, while E5 (Wang et al., 2022b) and BGE (Xiao et al., 2023) collect and 11903 Datasets Class. Clust. PairClass. Rerank Retr. STS Summ. Avg E5mistral-7b 78.3 49.9 87.1 59.5 52.2 81.2 32.7 64.5 w/ LLaMA-2 7b init. 76.2 48.1 85.1 58.9 49.6 81.2 30.8 62.9-1.6 w/ msmarco data only 71.6 47.1 86.1 58.8 54.4 79.5 31.7 62.7-1.8 pooling type w/ mean pool 77.0 48.9 86.1 59.2 52.4 81.4 30.8 64.1-0.4 w/ weighted mean 77.0 49.0 86.1 59.2 52.0 81.4 30.2 64.0-0.5 LoRA rank w/ r=8 78.4 50.3 87.1 59.3 53.0 81.0 31.7 64.8+0.3 w/ r=32 78.4 50.3 87.4 59.5 52.2 81.2 30.6 64.6+0.1 instruction type w/o instruction 72.3 47.1 82.6 56.3 48.2 76.7 30.7 60.3-4.2 w/ task type prefix 71.1 46.5 79.7 54.0 52.7 73.8 30.0 60.3-4.2 Table 5: Results on the MTEB benchmark with various hyperparameters. The first row corresponds to the default setting, which employs last-token pooling, LoRA rank 16, and natural language instructions. Unless otherwise stated, all models are trained on the synthetic and MS-MARCO passage ranking data. filter text pairs from various sources. This section re-evaluates the necessity of contrastive pre-training for LLMs, particularly those that have been pre-trained on trillions of tokens. 
Figure 3 shows that contrastive pre-training benefits XLM-Rlarge, enhancing its retrieval performance by 8.2 points when fine-tuned on the same data, which aligns with prior findings. However, for Mistral-7B based models, contrastive pre-training has negligible impact on the model quality. This implies that extensive auto-regressive pre-training enables LLMs to acquire good text representations, and only minimal fine-tuning is required to transform them into effective embedding models. 5.2 Extending to Long Text Embeddings Existing evaluation datasets for text embedding models are typically short, to evaluate the longcontext capability of our model, we introduce a novel synthetic task called personalized passkey retrieval, which is illustrated in Figure 4. This task requires encoding the passkey information in a long context into the embeddings. We compare the performance of different variants by changing the sliding window size and the RoPE rotation base (Su et al., 2024) in Figure 5. The results show that the default configuration with 4k sliding window attains 100% accuracy within 4k tokens, but the accuracy deteriorates quickly as the context length grows. Naively extending the sliding window size to 32k results in worse performance. By changing the RoPE rotation base to 105, the model can achieve over 90% accuracy within 32k tokens. However, this entails a minor trade-off in performance for shorter contexts. A potential avenue for future research is to efficiently adapt the model to longer contexts through lightweight post-training (Zhu et al., 2023). 5.3 Analysis of Training Hyperparameters Table 5 presents the results under different configurations. We notice that the Mistral-7B initialization holds an advantage over LLaMA-2 7B, in line with the findings from Mistral-7B technical report (Jiang et al., 2023). The choice of pooling types and LoRA ranks does not affect the overall performance substantially, hence we adhere to the default setting despite the marginal superiority of LoRA rank 8. On the other hand, the way of adding instructions has a considerable impact on the performance. We conjecture that natural language instructions better inform the model regarding the embedding task at hand, and thus enable the model to generate more discriminative embeddings. Our framework also provides a way to customize the behavior of text embeddings through instructions without the need to fine-tune the model or re-build document index. 6 Conclusion This paper shows that the quality of text embeddings can be substantially enhanced by exploiting LLMs. We prompt proprietary LLMs such as GPT4 to generate diverse synthetic data with instructions in many languages. Combined with the strong language understanding capability of the Mistral model, we establish new state-of-the-art results for nearly all task categories on the competitive MTEB 11904 benchmark. The training process is much more streamlined and efficient than existing multi-stage approaches, thereby obviating the need for intermediate pre-training. For future work, we aim to further improve the multilingual performance of our model and explore the possibility of using open-source LLMs to generate synthetic data. Limitations In comparison to the mainstream BERT-style encoders, the employment of LLMs, such as Mistral7B, for text embeddings results in a significantly increased inference cost. The development of more advanced GPUs and better kernel implementations may enhance the efficiency of the inference process. 
With regards to storage cost, our model is comparatively more expensive, with embeddings of 4096 dimensions. Early successes in reducing embedding dimensions while maintaining competitive performance have been demonstrated through techniques such as Matryoshka representation learning (Kusupati et al., 2022). For synthetic data generation, we rely on manual prompt engineering to elicit high-quality outputs from proprietary LLMs. Automatic prompt optimization presents a promising avenue for improving the quality of synthetic data. Acknowledgements We would like to thank anonymous reviewers for their valuable comments, and ACL 2024 and ACL Rolling Review organizers for their efforts. Opinions expressed in this paper are solely those of the authors and do not represent the views of their employers. References Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2017. A simple but tough-to-beat baseline for sentence embeddings. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zeroshot cross-lingual transfer and beyond. Transactions of the Association for Computational Linguistics, 7:597–610. Luiz Henrique Bonifacio, Hugo Abonizio, Marzieh Fadaee, and Rodrigo Nogueira. 2022. Inpars: Unsupervised dataset generation for information retrieval. Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Daniel Fernando Campos, Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, Li Deng, and Bhaskar Mitra. 2016. Ms marco: A human generated machine reading comprehension dataset. ArXiv preprint, abs/1611.09268. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440– 8451, Online. Association for Computational Linguistics. Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 670–680, Copenhagen, Denmark. Association for Computational Linguistics. 
Zhuyun Dai, Vincent Y Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, Anton Bakalov, Kelvin Guu, Keith Hall, and Ming-Wei Chang. 2022. Promptagator: Few-shot dense retrieval from 8 examples. In The Eleventh International Conference on Learning Representations. DataCanary, hilfialkaff, Lili Jiang, Meg Risdal, Nikhil Dandekar, and tomtung. 2017. Quora question pairs. Scott Deerwester, Susan T Dumais, George W Furnas, Thomas K Landauer, and Richard Harshman. 11905 1990. Indexing by latent semantic analysis. Journal of the American society for information science, 41(6):391–407. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. 2019. ELI5: Long form question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3558–3567, Florence, Italy. Association for Computational Linguistics. Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2022. Languageagnostic bert sentence embedding. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 878–891. Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894–6910, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Tianyu Gao, Howard Yen, Jiatong Yu, and Danqi Chen. 2023. Enabling large language models to generate text with citations. ArXiv preprint, abs/2305.14627. Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio Cesar Teodoro Mendes, Allison Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero C. Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, S. Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, and Yuan-Fang Li. 2023. Textbooks are all you need. ArXiv preprint, abs/2306.11644. Or Honovich, Thomas Scialom, Omer Levy, and Timo Schick. 2022. Unnatural instructions: Tuning language models with (almost) no human labor. ArXiv preprint, abs/2212.09689. Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. Lora: Low-rank adaptation of large language models. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net. Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2021. Towards unsupervised dense information retrieval with contrastive learning. ArXiv preprint, abs/2112.09118. Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral 7b. ArXiv preprint, abs/2310.06825. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. 
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769– 6781, Online. Association for Computational Linguistics. Aditya Kusupati, Gantavya Bhatt, Aniket Rege, Matthew Wallingford, Aditya Sinha, Vivek Ramanujan, William Howard-Snyder, Kaifeng Chen, Sham M. Kakade, Prateek Jain, and Ali Farhadi. 2022. Matryoshka representation learning. In Neural Information Processing Systems. Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Xianming Li and Jing Li. 2023. Angle-optimized text embeddings. ArXiv preprint, abs/2309.12871. Zehan Li, Xin Zhang, Yanzhao Zhang, Dingkun Long, Pengjun Xie, and Meishan Zhang. 2023. Towards general text embeddings with multi-stage contrastive learning. ArXiv preprint, abs/2308.03281. Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, and Jimmy Lin. 2023. Fine-tuning llama for multi-stage text retrieval. ArXiv preprint, abs/2310.08319. Tomas Mikolov, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In ICLR. Amirkeivan Mohtashami and Martin Jaggi. 2023. Landmark attention: Random-access infinite context length for transformers. ArXiv preprint, abs/2305.16300. Niklas Muennighoff. 2022. Sgpt: Gpt sentence embeddings for semantic search. ArXiv preprint, abs/2202.08904. Niklas Muennighoff, Nouamane Tazi, Loic Magne, and Nils Reimers. 2023. MTEB: Massive text embedding benchmark. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2014–2037, Dubrovnik, Croatia. Association for Computational Linguistics. 11906 Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, and Ahmed Hassan Awadallah. 2023. Orca: Progressive learning from complex explanation traces of gpt-4. ArXiv preprint, abs/2306.02707. Arvind Neelakantan, Tao Xu, Raul Puri, Alec Radford, Jesse Michael Han, Jerry Tworek, Qiming Yuan, Nikolas A. Tezak, Jong Wook Kim, Chris Hallacy, Johannes Heidecke, Pranav Shyam, Boris Power, Tyna Eloundou Nekoul, Girish Sastry, Gretchen Krueger, David P. Schnurr, Felipe Petroski Such, Kenny Sai-Kin Hsu, Madeleine Thompson, Tabarak Khan, Toki Sherbakov, Joanne Jang, Peter Welinder, and Lilian Weng. 2022. Text and code embeddings by contrastive pre-training. ArXiv preprint, abs/2201.10005. Jianmo Ni, Gustavo Hernandez Abrego, Noah Constant, Ji Ma, Keith Hall, Daniel Cer, and Yinfei Yang. 2022a. Sentence-t5: Scalable sentence encoders from pre-trained text-to-text models. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1864–1874, Dublin, Ireland. Association for Computational Linguistics. Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hernandez Abrego, Ji Ma, Vincent Zhao, Yi Luan, Keith Hall, Ming-Wei Chang, and Yinfei Yang. 2022b. Large dual encoders are generalizable retrievers. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9844–9855, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Rodrigo Nogueira, Wei Yang, Jimmy Lin, and Kyunghyun Cho. 2019. Document expansion by query prediction. 
ArXiv preprint, abs/1904.08375. OpenAI. 2023. Gpt-4 technical report. ArXiv preprint, abs/2303.08774. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543, Doha, Qatar. Association for Computational Linguistics. Yifu Qiu, Hongyu Li, Yingqi Qu, Ying Chen, QiaoQiao She, Jing Liu, Hua Wu, and Haifeng Wang. 2022. DuReader-retrieval: A large-scale Chinese benchmark for passage retrieval from web search engine. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 5326–5338, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics. Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, I. Evtimov, Joanna Bitton, Manish P Bhatt, Cristian Cantón Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre D’efossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, and Gabriel Synnaeve. 2023. Code llama: Open foundation models for code. ArXiv preprint, abs/2308.12950. Timo Schick and Hinrich Schütze. 2021. Generating datasets with pretrained language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6943– 6951. Hongjin Su, Weijia Shi, Jungo Kasai, Yizhong Wang, Yushi Hu, Mari Ostendorf, Wen-tau Yih, Noah A. Smith, Luke Zettlemoyer, and Tao Yu. 2023. One embedder, any task: Instruction-finetuned text embeddings. In Findings of the Association for Computational Linguistics: ACL 2023, pages 1102– 1121, Toronto, Canada. Association for Computational Linguistics. Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. 2024. Roformer: Enhanced transformer with rotary position embedding. Neurocomputing, 568:127063. Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. 2021. Beir: A heterogeneous benchmark for zero-shot evaluation of information retrieval models. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2). James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809–819, New Orleans, Louisiana. Association for Computational Linguistics. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. ArXiv preprint, abs/2307.09288. Kexin Wang, Nandan Thakur, Nils Reimers, and Iryna Gurevych. 2022a. GPL: Generative pseudo labeling for unsupervised domain adaptation of dense retrieval. 
In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2345–2360, Seattle, United States. Association for Computational Linguistics. Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, 11907 and Furu Wei. 2022b. Text embeddings by weaklysupervised contrastive pre-training. ArXiv preprint, abs/2212.03533. Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, and Furu Wei. 2024. Multilingual e5 text embeddings: A technical report. arXiv preprint arXiv:2402.05672. Liang Wang, Nan Yang, and Furu Wei. 2023. Query2doc: Query expansion with large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9414–9423, Singapore. Association for Computational Linguistics. Shitao Xiao, Zheng Liu, Peitian Zhang, and Niklas Muennighof. 2023. C-pack: Packaged resources to advance general chinese embedding. ArXiv preprint, abs/2309.07597. Xiaohui Xie, Qian Dong, Bingning Wang, Feiyang Lv, Ting Yao, Weinan Gan, Zhijing Wu, Xiangsheng Li, Haitao Li, Yiqun Liu, et al. 2023. T2ranking: A large-scale chinese benchmark for passage ranking. ArXiv preprint, abs/2304.03679. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369–2380, Brussels, Belgium. Association for Computational Linguistics. Xin Zhang, Zehan Li, Yanzhao Zhang, Dingkun Long, Pengjun Xie, Meishan Zhang, and Min Zhang. 2023a. Language models are universal embedders. ArXiv preprint, abs/2310.08232. Xinyu Zhang, Xueguang Ma, Peng Shi, and Jimmy Lin. 2021. Mr. TyDi: A multi-lingual benchmark for dense retrieval. In Proceedings of the 1st Workshop on Multilingual Representation Learning, pages 127–137, Punta Cana, Dominican Republic. Association for Computational Linguistics. Xinyu Crystina Zhang, Nandan Thakur, Odunayo Ogundepo, Ehsan Kamalloo, David AlfonsoHermelo, Xiaoguang Li, Qun Liu, Mehdi Rezagholizadeh, and Jimmy Lin. 2023b. Miracl: A multilingual retrieval dataset covering 18 diverse languages. Transactions of the Association for Computational Linguistics, 11:1114–1131. Dawei Zhu, Nan Yang, Liang Wang, Yifan Song, Wenhao Wu, Furu Wei, and Sujian Li. 2023. Pose: Efficient context window extension of llms via positional skip-wise training. In The Twelfth International Conference on Learning Representations. Pierre Zweigenbaum, Serge Sharoff, and Reinhard Rapp. 2018. Overview of the third bucc shared task: Spotting parallel sentences in comparable corpora. In Proceedings of 11th Workshop on Building and Using Comparable Corpora, pages 39–42. A Implementation Details Baseline Models For results with mE5base and mE5large, we use the public checkpoints available at https://huggingface. co/intfloat/multilingual-e5-base and https://huggingface.co/intfloat/ multilingual-e5-large respectively. For experiments in Table 5, we follow the SGPT (Muennighoff, 2022) paper for the implementation of weighted mean pooling. For the “w/ task type prefix” setting, we prepend “classify: ” for the long-short matching subgroup, and “query: ” for other asymmetric tasks. No prefix is added for symmetric tasks. 
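The two pooling variants compared in Table 5 can be sketched with plain PyTorch tensor operations as follows; the function names are ours, the snippet assumes right-padded batches so that the [EOS] token is the last non-padding position (Section 3.2), and the weighted mean follows the SGPT-style position weighting referenced above.

```python
import torch

def last_token_pool(hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Default pooling: the last-layer hidden state at the final non-padding ([EOS]) position."""
    # hidden_states: (batch, seq_len, dim); attention_mask: (batch, seq_len) with right padding.
    last_pos = attention_mask.sum(dim=1) - 1
    batch_idx = torch.arange(hidden_states.size(0), device=hidden_states.device)
    return hidden_states[batch_idx, last_pos]                    # (batch, dim)

def weighted_mean_pool(hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """SGPT-style position-weighted mean pooling (the "w/ weighted mean" ablation in Table 5)."""
    positions = torch.arange(1, hidden_states.size(1) + 1, device=hidden_states.device).float()
    weights = positions.unsqueeze(0) * attention_mask.float()    # later tokens receive larger weights
    weights = weights / weights.sum(dim=1, keepdim=True)
    return (hidden_states * weights.unsqueeze(-1)).sum(dim=1)    # (batch, dim)

def build_instructed_query(task_definition: str, query: str) -> str:
    """The instruction template from Equation 1; documents are encoded without any prefix."""
    return f"Instruct: {task_definition}\nQuery: {query}"
```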
Training Data For the “E5mistral-7b + full data” setting, our training data comprises generated synthetic data, ELI5 (Fan et al., 2019)(sample ratio 0.1), HotpotQA (Yang et al., 2018), FEVER (Thorne et al., 2018), MIRACL (Zhang et al., 2023b), MSMARCO passage ranking (sample ratio 0.5) and document ranking (sample ratio 0.2) (Campos et al., 2016), NQ (Karpukhin et al., 2020), NLI (Gao et al., 2021), SQuAD (Karpukhin et al., 2020), TriviaQA (Karpukhin et al., 2020), Quora Duplicate Questions (DataCanary et al., 2017)(sample ratio 0.1), MrTyDi (Zhang et al., 2021), DuReader (Qiu et al., 2022), and T2Ranking (Xie et al., 2023)(sample ratio 0.5) datasets. We only include the training set of each dataset. For the datasets without hard negatives, we use mE5base to mine top 100 hard negatives. After sampling, we obtain approximately 1.8 million examples. The entire training process takes fewer than 1k steps to complete. Hyperparameters for Fine-tuning When fine-tuning Mistral-7b 2, the batch size is set to 2048 and the learning rate is 10−4 with 100 step warmup and linear decay. The weight decay is 0.1. We add 1 hard negative for each query-document pair. The fine-tuning process takes roughly 18 hours on 32 V100 GPUs with a maximum sequence length 512. We add LoRA adapters to all linear layers, resulting in a total of 42M trainable parameters. Our implementation is based on the HuggingFace PEFT library at https://github.com/huggingface/peft. 2https://huggingface.co/mistralai/ Mistral-7B-v0.1 11908 nDCG@10 Recall@100 BM25 mDPR mE5base mE5large E5mistral-7b full BM25 mDPR mE5base mE5large E5mistral-7b full ar 48.1 49.9 71.6 76.0 73.3 88.9 84.1 95.9 97.3 96.0 bn 50.8 44.3 70.2 75.9 70.3 90.9 81.9 96.6 98.2 96.0 en 35.1 39.4 51.2 52.9 57.3 81.9 76.8 86.4 87.6 90.2 es 31.9 47.8 51.5 52.9 52.2 70.2 86.4 88.6 89.1 87.5 fa 33.3 48.0 57.4 59.0 52.1 73.1 89.8 91.2 92.9 88.0 fi 55.1 47.2 74.4 77.8 74.7 89.1 78.8 96.9 98.1 96.7 fr 18.3 43.5 49.7 54.5 55.2 65.3 91.5 90.0 90.6 92.8 hi 45.8 38.3 58.4 62.0 52.1 86.8 77.6 92.6 93.9 89.9 id 44.9 27.2 51.1 52.9 52.7 90.4 57.3 87.4 87.9 88.4 ja 36.9 43.9 64.7 70.6 66.8 80.5 82.5 96.0 97.1 95.1 ko 41.9 41.9 62.2 66.5 61.8 78.3 73.7 91.6 93.4 89.4 ru 33.4 40.7 61.5 67.4 67.7 66.1 79.7 92.7 95.5 95.0 sw 38.3 29.9 71.1 74.9 68.4 70.1 61.6 95.6 96.7 95.5 te 49.4 35.6 75.2 84.6 73.9 83.1 76.2 98.0 99.2 95.1 th 48.4 35.8 75.2 80.2 74.0 88.7 67.8 98.0 98.9 96.5 zh 18.0 51.2 51.5 56.0 54.0 56.0 94.4 92.1 93.3 90.1 Avg 39.3 41.5 62.3 66.5 62.9 78.7 78.8 93.1 94.3 92.6 Table 6: nDCG@10 and Recall@100 on the dev set of the MIRACL dataset for all 16 languages. Datasets Class. Clust. PairClass. Rerank Retr. STS Summ. Avg XLM-Rlarge + full data 72.9 38.7 84.5 53.8 42.0 82.3 29.7 58.0 w/ cont. pre-train 77.2 47.3 85.5 58.6 50.2 84.4 30.7 63.7 E5mistral-7b + full data 78.5 50.3 88.3 60.2 56.9 84.6 31.4 66.6 w/ cont. pre-train 78.7 50.1 87.7 60.9 56.9 84.9 30.2 66.7 Table 7: Detailed results for the effects of contrastive pre-training. For the “E5mistral-7b w/ cont. pre-train” setting, we pre-train Mistral-7B following the mE5 recipe for 10k steps. Artifacts The model and dataset release information is available at https://github.com/ microsoft/unilm/tree/master/e5. We release our trained models and evaluation scripts to facilitate reproducibility and further research. 
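A minimal sketch of the fine-tuning setup implied by Appendix A and Equations 2–3 is given below, assuming the HuggingFace PEFT API. Values not stated in the paper (lora_alpha, dropout, and the concrete target-module names standing in for "all linear layers") are illustrative assumptions, and the embeddings passed to the loss are assumed to come from the model's forward pass followed by last-token pooling.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel
from peft import LoraConfig, get_peft_model

base = AutoModel.from_pretrained("mistralai/Mistral-7B-v0.1")
lora_config = LoraConfig(
    r=16,                          # LoRA rank from the paper's default setting
    lora_alpha=32,                 # illustrative value, not reported in the paper
    lora_dropout=0.05,             # illustrative value
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # "all linear layers" (assumed module names)
    task_type="FEATURE_EXTRACTION",
)
# The paper reports roughly 42M trainable parameters with adapters on all linear layers.
model = get_peft_model(base, lora_config)

def info_nce_loss(q_emb: torch.Tensor,
                  pos_emb: torch.Tensor,
                  hard_neg_emb: torch.Tensor,
                  temperature: float = 0.02) -> torch.Tensor:
    """Equation 2: each query is contrasted against its own positive, the in-batch positives
    of the other queries, and one mined hard negative per query."""
    q = F.normalize(q_emb, dim=-1)
    docs = F.normalize(torch.cat([pos_emb, hard_neg_emb], dim=0), dim=-1)   # (2 * batch, dim)
    scores = q @ docs.t() / temperature        # temperature-scaled cosine similarity (Equation 3)
    labels = torch.arange(q.size(0), device=q.device)   # the i-th positive is the i-th row of docs
    return F.cross_entropy(scores, labels)
```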
B Test Set Contamination Analysis To assess the test set contamination on all the datasets in the MTEB benchmark, we perform a string match based analysis between the test set and our training set, disregarding differences in character case and spacing. We categorize the train-test overlaps into three types: • Low entropy texts. These are texts such as “i need a coffee” and “what does that mean”, which are not considered as contamination because they are common expressions that can occur in various contexts. • Question overlap. We identify 4 test set questions in the DBPedia dataset that also appear in the TriviaQA training set. Given that they constitute a minor portion of the test set, their impact on the overall performance is insignificant. • Retrieval corpus overlap. Several retrieval datasets share the same retrieval corpus. For instance, the DBPedia, NQ, and TriviaQA datasets all use Wikipedia passages, even though their query sets are different. This is a standard evaluation practice in the field of information retrieval, and we do not regard it as contamination. In summary, we did not detect substantial contamination risks that could alter the main findings of this paper. Another aspect to consider is the possibility of test set contamination in the training data of Mistral-7B and GPT-4. However, since the training data of these models is not publicly accessible, it is challenging to estimate the degree of such contamination. Given their widespread use in the research community, we believe it is still a valid comparison if other works also employ these models. C Prompts for Synthetic Data Generation For asymmetric tasks, we list the four prompt templates in Table 8, 9, 10, and 11. For symmetric 11909 Brainstorm a list of potentially useful text retrieval tasks. Here are a few examples for your reference: - Retrieve relevant documents for a short keyword web search query that asks for weather information. - Search for documents that answers a FAQ-style query on children’s nutrition. Please adhere to the following guidelines: - Specify what the query is, and what the desired documents are. - Each retrieval task should cover a wide range of queries, and should not be too specific. Your output must always be a python list of strings only, with about 20 elements, and each element corresponds to a distinct retrieval task in one sentence. Do not explain yourself or output anything else. Be creative! You have been assigned a retrieval task: {task} Your mission is to write one text retrieval example for this task in JSON format. The JSON object must contain the following keys: - "user_query": a string, a random user search query specified by the retrieval task. - "positive_document": a string, a relevant document for the user query. - "hard_negative_document": a string, a hard negative document that only appears relevant to the query. Please adhere to the following guidelines: - The "user_query" should be {query_type}, {query_length}, {clarity}, and diverse in topic. - All documents must be created independent of the query. Avoid copying the query verbatim. It’s acceptable if some parts of the "positive_document" are not topically related to the query. - All documents should be at least {num_words} words long. - The "hard_negative_document" contains some useful information, but it should be less useful or comprehensive compared to the "positive_document". - Both the query and documents should be in {language}. 
- Do not provide any explanation in any document on why it is relevant or not relevant to the query. - Both the query and documents require {difficulty} level education to understand. Your output must always be a JSON object only, do not explain yourself or output anything else. Be creative! Table 8: Prompt template for the short-long matching subgroup. For placeholders, “{query_type}” ∈{extremely long-tail, long-tail, common}, “{query_length}” ∈{less than 5 words, 5 to 15 words, at least 10 words}, “{difficulty}” ∈{high school, college, PhD}, “{clarity}” ∈{clear, understandable with some effort, ambiguous}, “{num_words}” ∈{50, 100, 200, 300, 400, 500}. tasks, the prompts templates are available in Table 12 and 13. To generate multilingual data, we sample the value of “{language}” from the language list of XLM-R (Conneau et al., 2020) with higher probability for high-resource languages. When prompting GPT-4/3.5, we set the sampling temperature to 1.0 and the top-p hyperparameter to 1.0, which is higher than the default setting to encourage more diversity. D Instructions for Training and Evaluation We manually write instructions for training datasets, as listed in Table 14. For evaluation datasets, the instructions are listed in Table 15. 11910 Brainstorm a list of potentially useful text classification tasks. Please adhere to the following guidelines: - Tasks should cover a diverse range of domains and task types. Your output must always be a python list of strings only, with about 20 elements, and each element corresponds to a distinct text classification task in one sentence. Do not explain yourself or output anything else. Be creative! You have been assigned a text classification task: {task} Your mission is to write one text classification example for this task in JSON format. The JSON object must contain the following keys: - "input_text": a string, the input text specified by the classification task. - "label": a string, the correct label of the input text. - "misleading_label": a string, an incorrect label that is related to the task. Please adhere to the following guidelines: - The "input_text" should be {num_words} words and diverse in expression. - The "misleading_label" must be a valid label for the given task, but not as appropriate as the "label" for the "input_text". - The values for all fields should be in {language}. - Avoid including the values of the "label" and "misleading_label" fields in the "input_text", that would make the task too easy. - The "input_text" is {clarity} and requires {difficulty} level education to comprehend. Your output must always be a JSON object only, do not explain yourself or output anything else. Be creative! Table 9: Prompt template for the long-short matching subgroup. For placeholders, “{num_words}” ∈{"less than 10", "at least 10", "at least 50", "at least 100", "at least 200"}, “{difficulty}” ∈{high school, college, PhD}, “{clarity}” ∈{clear, understandable with some effort, ambiguous}. Brainstorm a list of text matching tasks where both the queries and the groundtruth documents are very short (one or two sentences, even a short phrase). Here are a few examples: - Given a scientific paper title, retrieve the title of papers that cite the given paper. - Match a word with its definition. - Provided a notable person’s name, identify their occupation or achievement. Your output must always be a python list of strings only, with about 20 elements, and each element corresponds to a distinct task in one sentence. 
Do not explain yourself or output anything else. Be creative! You have been assigned a text matching task: {task} Your mission is to write one example for this task in JSON format. The JSON object must contain the following keys: - "input": a string, a random input specified by the task. - "positive_document": a string, a relevant document for the "input" according to the task. Please adhere to the following guidelines: - The values of all fields should be in {language}. - Both the "input" and "positive_document" should be very short (a sentence or a phrase), avoid substantial word overlaps, otherwise the task would be too easy. - The "input" and "positive_document" should be independent of each other. Your output must always be a JSON object only, do not explain yourself or output anything else. Be creative! Table 10: Prompt template for the short-short matching subgroup. We do not generate negative documents as the matching task is already reasonably difficult. 11911 Brainstorm a list of text matching tasks where the queries are long documents. Here are a few examples: - Given a document that supports a debatable argument, find another document that contains opposite arguments. - Provided a lengthy business proposal, retrieve competitive business strategies in the same industry. Your output must always be a python list of strings only, with about 20 elements, and each element corresponds to a distinct task in one sentence. Do not explain yourself or output anything else. Be creative! You have been assigned a text matching task: {task} Your mission is to write one example for this task in JSON format. The JSON object must contain the following keys: - "input": a string, a random input specified by the task. - "positive_document": a string, a relevant document for the "input" according to the task. Please adhere to the following guidelines: - The values of all fields should be in {language}. - Both the "input" and "positive_document" should be long documents (at least 300 words), avoid substantial word overlaps, otherwise the task would be too easy. - The "input" and "positive_document" should be independent of each other. Your output must always be a JSON object only, do not explain yourself or output anything else. Be creative! Table 11: Prompt template for the long-long matching subgroup. We do not generate negative documents for API latency reasons. Write a {unit} triple with varying semantic similarity scores in JSON format. The semantic similarity score ranges from 1 to 5, with 1 denotes least similar and 5 denotes most similar. Please adhere to the following guidelines: - The keys in JSON are "S1", "S2", and "S3", the values are all strings in {language}, do not add any other keys. - There should be some word overlaps between all three {unit}s. - The similarity score between S1 and S2 should be {high_score}. - The similarity score between S1 and S3 should be {low_score}. - The {unit}s require {difficulty} level education to understand and should be diverse in terms of topic and length. Your output must always be a JSON object only with three keys "S1", "S2" and "S3", do not explain yourself or output anything else. Be creative! Table 12: Prompt template for monolingual STS. For placeholders, “{high_score}” ∈{4, 4.5, 5}, “{low_score}” ∈ {2.5, 3, 3.5}, “{unit}” ∈{sentence, phrase, passage}, “{difficulty}” ∈{elementary school, high school, college}. Write a {unit} triple with one {unit} in {src_lang} and two {unit}s in {tgt_lang} with varying translation qualities in JSON format. 
The triple is denotes as ("S1", "S2", "S3"). The translation quality score ranges from 1 to 5, with higher scores are better. Please adhere to the following guidelines: - The values of "S1" is a string in {src_lang}, the value of "S2" and "S3" are strings in {tgt_lang}. - There should be some word overlaps between "S2" and "S3". - The translation quality score of "S2" with respect to "S1" should be {high_score}. - The translation quality score of "S3" with respect to "S1" should be {low_score}. - "S3" should be grammatical and fluent, but contain some keyword or number translation errors, or miss some information, or contain some redundant information. - "S1" requires {difficulty} level education to understand and should be diverse in terms of topic and length. Your output must always be a JSON object only with three keys "S1", "S2" and "S3", do not explain yourself or output anything else. Be creative! Table 13: Prompt template for bitext retrieval. For placeholders, “{high_score}” ∈{4, 4.5, 5}, “{low_score}” ∈ {1.5, 2, 2.5}, “{unit}” ∈{sentence, phrase, passage}, “{difficulty}” ∈{elementary school, high school, college}. 11912 Dataset Instruction ELI5 Provided a user question, retrieve the highest voted answers on Reddit ELI5 forum HotpotQA Given a multi-hop question, retrieve documents that can help answer the question FEVER Given a claim, retrieve documents that support or refute the claim MIRACL / MrTyDi / NQ / SQuAD / TriviaQA Given a question, retrieve Wikipedia passages that answer the question Retrieve Wikipedia passages that answer the question NLI Given a premise, retrieve a hypothesis that is entailed by the premise Retrieve semantically similar text MS-MARCO Given a web search query, retrieve relevant passages that answer the query Given a web search query, retrieve relevant documents that answer the query Quora Duplicates Given a question, retrieve questions that are semantically equivalent to the given question Find questions that have the same meaning as the input question DuReader / T2Ranking Given a Chinese search query, retrieve web passages that answer the question Table 14: Instructions for each training dataset. 11913 Task Name Instruction AmazonCounterfactualClassif. Classify a given Amazon customer review text as either counterfactual or notcounterfactual AmazonPolarityClassification Classify Amazon reviews into positive or negative sentiment AmazonReviewsClassification Classify the given Amazon review into its appropriate rating category Banking77Classification Given a online banking query, find the corresponding intents EmotionClassification Classify the emotion expressed in the given Twitter message into one of the six emotions: anger, fear, joy, love, sadness, and surprise ImdbClassification Classify the sentiment expressed in the given movie review text from the IMDB dataset MassiveIntentClassification Given a user utterance as query, find the user intents MassiveScenarioClassification Given a user utterance as query, find the user scenarios MTOPDomainClassification Classify the intent domain of the given utterance in task-oriented conversation MTOPIntentClassification Classify the intent of the given utterance in task-oriented conversation ToxicConversationsClassif. 
Classify the given comments as either toxic or not toxic TweetSentimentClassification Classify the sentiment of a given tweet as either positive, negative, or neutral ArxivClusteringP2P Identify the main and secondary category of Arxiv papers based on the titles and abstracts ArxivClusteringS2S Identify the main and secondary category of Arxiv papers based on the titles BiorxivClusteringP2P Identify the main category of Biorxiv papers based on the titles and abstracts BiorxivClusteringS2S Identify the main category of Biorxiv papers based on the titles MedrxivClusteringP2P Identify the main category of Medrxiv papers based on the titles and abstracts MedrxivClusteringS2S Identify the main category of Medrxiv papers based on the titles RedditClustering Identify the topic or theme of Reddit posts based on the titles RedditClusteringP2P Identify the topic or theme of Reddit posts based on the titles and posts StackExchangeClustering Identify the topic or theme of StackExchange posts based on the titles StackExchangeClusteringP2P Identify the topic or theme of StackExchange posts based on the given paragraphs TwentyNewsgroupsClustering Identify the topic or theme of the given news articles SprintDuplicateQuestions Retrieve duplicate questions from Sprint forum TwitterSemEval2015 Retrieve tweets that are semantically similar to the given tweet TwitterURLCorpus Retrieve tweets that are semantically similar to the given tweet AskUbuntuDupQuestions Retrieve duplicate questions from AskUbuntu forum MindSmallReranking Retrieve relevant news articles based on user browsing history SciDocsRR Given a title of a scientific paper, retrieve the titles of other relevant papers StackOverflowDupQuestions Retrieve duplicate questions from StackOverflow forum ArguAna Given a claim, find documents that refute the claim ClimateFEVER Given a claim about climate change, retrieve documents that support or refute the claim CQADupstackRetrieval Given a question, retrieve detailed question descriptions from Stackexchange that are duplicates to the given question DBPedia Given a query, retrieve relevant entity descriptions from DBPedia FEVER Given a claim, retrieve documents that support or refute the claim FiQA2018 Given a financial question, retrieve user replies that best answer the question HotpotQA Given a multi-hop question, retrieve documents that can help answer the question MSMARCO Given a web search query, retrieve relevant passages that answer the query NFCorpus Given a question, retrieve relevant documents that best answer the question NQ Given a question, retrieve Wikipedia passages that answer the question QuoraRetrieval Given a question, retrieve questions that are semantically equivalent to the given question SCIDOCS Given a scientific paper title, retrieve paper abstracts that are cited by the given paper SciFact Given a scientific claim, retrieve documents that support or refute the claim Touche2020 Given a question, retrieve detailed and persuasive arguments that answer the question TRECCOVID Given a query on COVID-19, retrieve documents that answer the query STS* Retrieve semantically similar text. BUCC/Tatoeba Retrieve parallel sentences. SummEval Given a news summary, retrieve other semantically similar summaries Table 15: Instructions used for evaluation on the MTEB benchmark. “STS*” indicates we use the same instructions for all the STS tasks. 
11914 Task group: short-long matching Task definition: The query is a name of a computer software or app, find documents that offer tutorials, guides or user experiences about the software or app. Generated data: { "user_query": "How to use Microsoft Power BI for data analysis", "positive_document": "Microsoft Power BI is a sophisticated tool that requires time and practice to master. In this tutorial, we’ll show you how to navigate Power BI and perform basic data analysis. First, launch the application and log in with a Microsoft account. Next, create a new project using the Wizard on the left-hand toolbar. Import the dataset you wish to work with by clicking on ’Get Data’. Power BI can handle a variety of file formats, including Excel and CSV. Load your data, then use a sequence of drag-and-drop operations to generate charts and (if applicable) maps. If you are confused at any point, consider using the ’Ask a Question’ feature for clarification.", "hard_negative_document": "Excel is an incredibly powerful tool for managing and analyzing large amounts of data. Our tutorial series focuses on how you can achieve maximum efficiency with Excel. In this guide, we’ll walk you through transferring data from your computer to an Excel spreadsheet as well as colour-coding cells to match various criteria. We’ll also explore VLOOKUP, conditional formatting, and pivot tables, essentially transforming you into a spreadsheet wizard. Once you’ve mastered these techniques, consider joining us for our Advanced Data Analysis training to branch out even further." } Task group: long-short matching Task definition: Identifying severity level of customer complaints in support tickets Generated data: { "input_text": "I am writing to express my intense dissatisfaction with one of your products, a TV that has stopped functioning only a month after purchase. This situation yields less satisfaction to me and speaks voluminously about your quality control procedures in assembly lines. I hope this troubling issue etches into your improvement list for invoking earnest attention.", "label": "High Severity", "misleading_label": "Low Severity" } Task group: short-short matching Task definition: Provided a movie quote, find the movie title in which it is said. Generated data: { "input": "I’m going to make him an offer he can’t refuse.", "positive_document": "The Godfather" } Task group: long-long matching Task definition: Provided a legal brief, retrieve other legal documents presenting similar legal arguments. Generated data: { "input": "In the united states district court for the eastern district of pennsylvania. Plaintiff John Doe brings this action against Defendants ABC Corporation and XYZ Inc., alleging negligence which resulted in his injuries. The Plaintiff alleges ABC Corporation negligently designed, manufactured, and marketed a specific product. The Plaintiff also asserts XYZ Inc. negligently failed to inspect the same product, leading to harmful exposure. Plaintiff respectfully requests this honorable court to hold the defendants liable and award damages. All parties are diverse for purposes of 28 U.S.C. § 1332.", "positive_document": "Before the Superior Court of California, County of Los Angeles. In the matter of the lawsuit initiated by the Plaintiff Jane Smith against the Defendants PQR Industries and LMN Enterprises, charging they negligently designed, produced, and advertised a product that directly caused harm to her. 
Furthermore, the Plaintiff has claimed that LMN Enterprises negligently fell short in inspecting the product, leading to unsafe usage. The plaintiff hereby requests this esteemed court to declare the defendants guilty and demands compensation for the damages suffered. According to 28 U.S.C. § 1332, complete diversity exists between the parties to this lawsuit." } Task group: bitext matching Task definition: Retrieve parallel sentences. Generated data: { "S1": "Men bugun uzumli tort yasashni rejalashtirdim.", "S2": "I have planned to make a grape cake today.", "S3": "I am going to bake a strawberry pie today." } Task group: monolingual STS Task definition: Retrieve semantically similar text. Generated data: { "S1": "Tom loves to ride his blue bicycle in the park every morning.", "S2": "Every morning, Tom enjoys riding his blue bike in the park.", "S3": "Tom takes his blue pen to school every day." } Table 16: Random samples for each subgroup of the synthetic data. 11915 Dataset w/ synthetic only w/ synthetic + msmarco w/o synthetic data full data BIOSSES 84.2 81.0 85.4 85.5 SICK-R 78.6 78.5 81.7 82.6 STS12 75.8 74.7 77.9 79.7 STS13 84.3 85.3 88.0 88.4 STS14 80.9 81.2 83.7 84.5 STS15 86.2 86.8 89.5 90.4 STS16 85.0 85.3 86.5 87.7 STS17 87.3 87.7 91.0 91.8 STS22 66.0 67.1 66.2 67.0 STSBenchmark 83.5 84.0 87.8 88.6 SummEval 31.9 32.7 31.9 31.4 SprintDuplicateQuestions 93.5 95.8 96.0 95.7 TwitterSemEval2015 78.0 78.5 81.7 81.6 TwitterURLCorpus 86.5 86.9 87.7 87.8 AmazonCounterfactualClass. 79.6 79.9 77.2 78.7 AmazonPolarityClassification 95.8 95.9 93.9 95.9 AmazonReviewsClassification 56.9 55.5 48.2 55.8 Banking77Classification 86.2 87.0 88.8 88.2 EmotionClassification 49.2 47.6 51.0 49.8 ImdbClassification 94.8 94.9 89.0 94.8 MassiveIntentClassification 79.8 79.9 79.6 80.6 MassiveScenarioClassification 81.7 82.4 82.3 82.4 MTOPDomainClassification 95.6 95.9 95.7 96.1 MTOPIntentClassification 84.9 85.9 83.4 86.1 ToxicConversationsClassification 70.2 70.8 70.9 69.6 TweetSentimentExtractionClass. 63.5 63.4 61.6 63.7 AskUbuntuDupQuestions 64.3 65.3 67.4 67.0 MindSmallReranking 33.1 32.8 32.5 32.6 SciDocsRR 86.0 86.0 85.7 86.3 StackOverflowDupQuestions 52.5 53.7 55.9 54.9 ArxivClusteringP2P 51.4 51.2 47.8 50.5 ArxivClusteringS2S 46.5 44.9 44.6 45.5 BiorxivClusteringP2P 44.5 43.3 36.9 43.5 BiorxivClusteringS2S 40.9 40.1 37.0 40.2 MedrxivClusteringP2P 40.5 39.9 32.6 38.2 MedrxivClusteringS2S 38.0 37.9 32.8 37.5 RedditClustering 56.3 55.9 63.1 57.7 RedditClusteringP2P 66.3 64.8 66.4 66.5 StackExchangeClustering 72.9 72.7 74.5 73.1 StackExchangeClusteringP2P 46.1 45.6 34.3 45.9 TwentyNewsgroupsClustering 52.2 52.5 55.6 54.3 ArguAna 52.2 42.7 62.5 61.9 ClimateFEVER 21.1 28.8 25.2 38.4 CQADupstackAndroidRetrieval 40.8 36.0 44.5 43.0 DBPedia 42.0 43.7 47.7 48.9 FEVER 72.5 83.5 73.1 87.8 FiQA2018 38.1 48.4 54.5 56.6 HotpotQA 48.1 64.0 75.6 75.7 MSMARCO 25.7 45.0 42.9 43.1 NFCorpus 35.5 40.0 35.3 38.6 NQ 53.3 63.5 57.3 63.5 QuoraRetrieval 75.0 79.5 89.5 89.6 SCIDOCS 20.6 15.8 19.0 16.3 SciFact 71.5 71.9 74.7 76.4 Touche2020 25.4 32.5 19.1 26.4 TRECCOVID 82.3 87.3 70.8 87.2 Average 63.1 64.5 64.6 66.6 Table 17: Results for each dataset in the MTEB benchmark. The evaluation metrics and detailed baseline results are available in the original paper (Muennighoff et al., 2023). 11916
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11917–11928, August 11-16, 2024 ©2024 Association for Computational Linguistics

Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning

Tianduo Wang†, Shichen Li‡, Wei Lu†
†StatNLP Research Group, Singapore University of Technology and Design
‡Soochow University
{tianduo_wang,luwei}@sutd.edu.sg, [email protected]

Abstract

Teaching small-scale language models to perform math reasoning is a valuable yet challenging task. Besides obtaining labeled data from human experts, one of the most common ways to collect high-quality data is by sampling from a larger and more powerful language model. Although previous works have demonstrated the effectiveness of this method, such a knowledge distillation paradigm can be costly and unstable, especially considering that many large language models, such as GPT-4 (OpenAI, 2023), are closed-source, proprietary, and their behaviors are unpredictable. In this work, to avoid relying on outputs from large models, we demonstrate that the reasoning abilities of small-scale language models can be enhanced through self-training, which involves training models with their own outputs. We also show that the conventional self-training can be further augmented by an alignment algorithm called Direct Preference Optimization (DPO) (Rafailov et al., 2023). We empirically found that models trained with the DPO objective are capable of making better generations that largely benefit multi-turn self-training. The experimental results show our models outperform the existing models with comparable sizes on the GSM8K benchmark with minimal resource requirements.

1 Introduction

Making language models (LMs) perform mathematical reasoning is a valuable, yet challenging research objective (Hendrycks et al., 2021; Cobbe et al., 2021). Recent works focus on enhancing large-scale LMs' reasoning abilities, e.g., chain-of-thought prompting (Wei et al., 2022b; Kojima et al., 2022), continual pretraining (Azerbayev et al., 2024), or adding external verifiers (Li et al., 2023b). However, the research question of how to improve LMs with smaller sizes (e.g., less than 1 billion parameters) is still under-explored. Recent studies (Fu et al., 2023; Ho et al., 2023; Magister et al., 2023) demonstrate that the reasoning capabilities of smaller LMs can be significantly enhanced through learning from the outputs of larger and more advanced LMs, such as Codex (Chen et al., 2021) and PaLM (Chowdhery et al., 2022). Despite the relative ease of implementation, the associated costs can be substantial.

Figure 1: GSM8K performance vs. computational cost (accuracy in % plotted against compute cost in FLOPs). Our approach outperforms baseline models with comparable sizes while minimizing the required compute. Comparisons include two knowledge distillation techniques, i.e., Codex distillation (Fu et al., 2023) and PaLM distillation (Magister et al., 2023), and a data augmentation method, Calcformer (Kadlčík et al., 2023).

1 Our code and data are released at https://github.com/tianduowang/dpo-st.
The computational demand, measured in floating-point operations (FLOPs), is considerably higher for large LMs. Additionally, the use of proprietary and closed-source large LMs for data annotation can incur significant economic costs. Ho et al. (2023) demonstrated that annotating multiple reasoning chains for a single question with large LMs can markedly enhance the performance of smaller models. Hence, a trade-off between cost and performance exists.

Another line of work focuses on making improvements via self-training (Zelikman et al., 2022; Gulcehre et al., 2023; Singh et al., 2023). Instead of using data generated by larger models, the essence of self-training methods is to make small models learn from their own generations. While these methods demonstrate the effectiveness of utilizing self-generated data, their success largely depends upon the pre-existing abilities of the models. For example, Zelikman et al. (2022) started by few-shot prompting a large model, i.e., GPT-J (Wang and Komatsuzaki, 2021), to self-generate rationales, which is an emergent ability that only comes with sufficiently large models (Wei et al., 2022a). However, it is still unclear whether small-scale LMs can benefit from self-training.

Recently, reinforcement learning from human feedback (RLHF) has emerged as a prominent method to precisely modify LMs' behavior towards human preference (Ouyang et al., 2022; Casper et al., 2023). In this work, we propose to augment the self-training algorithm with an RLHF training process, i.e., Direct Preference Optimization (Rafailov et al., 2023), for its better performance and stability. Our experimental results demonstrate that the proposed method can significantly improve LMs' reasoning capabilities while minimizing the computational costs. We visualize the relationship between the downstream task performance and computational cost over a series of specialized models in Figure 1. The computational costs for each model are estimated following previous practice (Kaplan et al., 2020; Yuan et al., 2023). It can be observed that our method not only achieves the highest accuracy, but also minimizes the computational demand by learning from its own generations.

Overall, the main contributions of this work can be summarized as follows:
• We propose a novel extension to the classic self-training framework with Direct Preference Optimization, and we show its effectiveness through standard math problem-solving tasks.
• We demonstrate that this novel extension improves the reasoning abilities of LMs with minimal computational resource requirements.
• We propose an efficient method for integrating LMs with external tools, significantly improving the performance without sacrificing much inference speed.

Algorithm 1: Self-training for CoT reasoning tasks
Input: pre-trained language model fθ; labeled dataset L = {(x_i, y_i, a_i)} (i = 1, …, l); unlabeled dataset U = {(x_i, a_i)} (i = 1, …, u)
Output: fine-tuned model fθ′
1: Fine-tune fθ on L to get fθ′
2: repeat
3:   Build pseudo-labeled dataset S = {(x_i, ŷ_i, â_i)} (i = 1, …, s), where x_i ∼ U and ŷ_i, â_i ∼ fθ′(·|x_i)
4:   Select Sα ⊂ S where â_i = a_i
5:   Update L ← Sα ∪ L
6:   Train fθ on L to get a new fθ′
7: until convergence or the maximum iteration is reached

2 Background

Math word problem solving  The math word problem solving task can be formulated as a sequence-to-sequence task where the input x is a question asking for an unknown value and the output y is a rationale leading to the answer a (Cobbe et al., 2021). Normally, the answers can be extracted from the rationales via rule-based methods, e.g., regular expressions. A generated rationale ŷ is regarded as correct if the extracted answer â matches the gold answer a.
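To make this rule-based checking concrete, the following sketch extracts a numeric answer from a GSM8K-style rationale. The "####" marker comes from the GSM8K annotation format; the exact patterns, fallback behavior, and numeric tolerance here are illustrative assumptions rather than the implementation used in this work.

```python
import re

def extract_answer(rationale: str):
    """Return the numeric answer in a rationale, or None if none is found."""
    # GSM8K solutions end with '#### <answer>'.
    match = re.search(r"####\s*(-?[\d,\.]+)", rationale)
    if match is not None:
        text = match.group(1)
    else:
        # Fallback for outputs without the marker: take the last number mentioned.
        numbers = re.findall(r"-?\d[\d,]*\.?\d*", rationale)
        if not numbers:
            return None
        text = numbers[-1]
    try:
        return float(text.replace(",", "").rstrip("."))
    except ValueError:
        return None

def is_correct(generated: str, gold_answer: float) -> bool:
    """A generated rationale is correct if its extracted answer matches the gold answer."""
    predicted = extract_answer(generated)
    return predicted is not None and abs(predicted - gold_answer) < 1e-6
```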
Formally, the labeled dataset for a math word problem solving task with l instances can be represented as:

$$L = \{(x_i, y_i, a_i)\}_{i=1}^{l}. \quad (1)$$

A common way of specializing a LM fθ towards math reasoning with the labeled dataset L is supervised fine-tuning (SFT). It optimizes fθ by minimizing the negative log-likelihood loss L_SFT(θ):

$$\mathcal{L}_{\mathrm{SFT}}(\theta) = \mathbb{E}_{(x, y) \sim L} \Big[ -\sum_{t=1}^{T} \log f_\theta(y_t \mid x, y_{1:t-1}) \Big], \quad (2)$$

where T is the length of the rationale y and we use y_t to represent the t-th token in y.

Self-training  Self-training is one of the earliest approaches in semi-supervised learning (Scudder, 1965; Fralick, 1967) that has risen in popularity recently (He et al., 2019; Amini et al., 2022). This method first regards a base model trained with a labeled dataset L as the teacher, and uses it to build a pseudo-labeled dataset S by annotating an unlabeled dataset U. Then, a student model is trained on the combination of L and S and is expected to outperform the teacher model. Such a framework has been shown effective in a wide range of natural language processing tasks, e.g., natural language understanding (Vu et al., 2021) and generation (He et al., 2019). A formal description of a self-training algorithm for CoT reasoning tasks is provided in Algorithm 1. Previous studies have demonstrated that the quality of the pseudo-labels largely impacts the overall performance of the self-training algorithm (He et al., 2019; Amini et al., 2022). For example, Gulcehre et al. (2023) proposed to select high-quality pseudo-labels with a learned reward function. Zelikman et al. (2022) filtered the generated rationales to include only the ones that lead to correct answers. Although many methods have been proposed to select pseudo-labels, few works discuss how to improve the fine-tuned model fθ′ so that more high-quality pseudo-labels can be generated. In this paper, we present a method that enhances fθ′ in each iteration so that higher-quality pseudo-labeled data can be generated.

Direct Preference Optimization  Reinforcement Learning from Human Feedback (RLHF) methods align LMs with human preferences (Ouyang et al., 2022; Bai et al., 2022). The standard RLHF pipeline first trains a reward model from human preference data. Then, the reward model is used to fine-tune language models via a reinforcement learning objective, e.g., Proximal Policy Optimization (Schulman et al., 2017). A recent study proposes Direct Preference Optimization (DPO) (Rafailov et al., 2023) to avoid explicitly training a reward model, so that language models can be directly tuned with human preference data. The DPO pipeline can be described as follows. First, given some prompt x, sample several completions from the reference model πref (normally the model after supervised fine-tuning):

$$y_1, y_2 \sim \pi_{\mathrm{ref}}(\cdot \mid x). \quad (3)$$

Next, construct the DPO dataset D from the completions based on the human preference:

$$D = \{(x^i, y_w^i, y_l^i)\}_{i=1}^{N}, \quad (4)$$

where y_w^i and y_l^i represent the winning and losing completions, respectively. Then, we optimize the language model πθ to minimize L_DPO(πθ; πref), which can be defined as follows:

$$\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = \mathbb{E}_{(x, y_w, y_l) \sim D} \big[ -\log \sigma\big( r(y_w \mid x) - r(y_l \mid x) \big) \big], \quad (5)$$

where r(·|x) = β log (πθ(·|x) / πref(·|x)) and β is a coefficient that controls πθ's deviation from πref.
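To make Equation (5) concrete, the PyTorch-style sketch below computes the DPO loss from summed sequence log-probabilities under the policy and the frozen reference model. The function name and the default value of β are illustrative assumptions, not values taken from this work.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_w: torch.Tensor,
             policy_logp_l: torch.Tensor,
             ref_logp_w: torch.Tensor,
             ref_logp_l: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss of Eq. (5); each input is a batch of summed log-probabilities
    of the winning (w) or losing (l) completion under the policy or reference model."""
    # Implicit rewards r(y|x) = beta * log(pi_theta(y|x) / pi_ref(y|x))
    reward_w = beta * (policy_logp_w - ref_logp_w)
    reward_l = beta * (policy_logp_l - ref_logp_l)
    # -log sigma(r_w - r_l), averaged over the preference pairs
    return -F.logsigmoid(reward_w - reward_l).mean()
```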
Algorithm 2: DPO-augmented self-training
Input: pre-trained language model fθ; labeled dataset L = {(x_i, y_i, a_i)} (i = 1, …, l); unlabeled dataset U = {(x_i, a_i)} (i = 1, …, u)
Output: fine-tuned model fθ′
# Warm-up stage
1: Fine-tune fθ on L to get fθ′
2: repeat
   # DPO step
3:   Generate DPO dataset D = {(x^i, y_w^i, y_l^i)} (i = 1, …, N), where x^i ∼ U and y_w^i, y_l^i ∼ fθ′(·|x^i)
4:   Tune fθ′ with L_DPO on D to get fθd
   # SFT step
5:   Build pseudo-labeled dataset S = {(x_i, ŷ_i, â_i)} (i = 1, …, s), where x_i ∼ U and ŷ_i, â_i ∼ fθd(·|x_i)
6:   Select Sα ⊂ S where â_i = a_i
7:   Update L ← Sα ∪ L
8:   Train fθ on L to get a new fθ′
9: until convergence or the maximum iteration is reached

3 Method

In this section, we first describe the proposed approach. Then, we demonstrate how we integrate an external calculator into the model's decoding process, which significantly improves LMs' performance on the downstream tasks.

3.1 DPO-Augmented Self-Training

Our approach starts with a warm-up stage and is then followed by an iterative process, where each iteration is composed of two sub-steps: a DPO step and an SFT step. The iterative process ends when the model performance converges or reaches the maximum iteration. A formal description of the proposed method is given in Algorithm 2, and an illustration is presented in Figure 2.

Figure 2: An illustration of the proposed DPO-augmented Self-Training framework. The conventional Self-Training method uses the SFT model to generate the pseudo-labels for the next iteration. In contrast, our method first optimizes the SFT model with Direct Preference Optimization (DPO), and uses the DPO model to produce the pseudo-labels.

Warm-up stage  Like classic self-training, we start by fine-tuning the base model fθ to optimize L_SFT(θ) on the labeled data L to obtain a new model fθ′. After this stage, we assume that fθ′ is capable of solving certain math problems. Specifically, given a math question x, fθ′ will generate a rationale ŷ with answer â.

Iterative step 1: DPO step  In this step, we first sample rationales ŷ from the fine-tuned model fθ′ given questions x from U. For each question x, we generate multiple rationales to build the DPO training dataset D. As mentioned, for math problem solving tasks, it is easy to know whether a generated rationale ŷ can be considered correct. We label rationales with correct answers as winning completions, and rationales with incorrect answers as losing completions. Then, we train fθ′ on D to optimize the objective function L_DPO and obtain a DPO model fθd.

Iterative step 2: SFT step  After obtaining fθd, we use it to generate a new pseudo-labeled dataset S for the next-round supervised fine-tuning:

$$S = \{(x, \hat{y}) \mid x \sim U,\ \hat{y} \sim f_{\theta_d}(\cdot \mid x)\}. \quad (6)$$

After generation, we clean S by eliminating rationales with incorrect answers and removing duplicates. Therefore, the pseudo-labeled dataset we obtain in the end is a subset of the original one, i.e., Sα ⊂ S. The final training dataset is the combination of the original labeled dataset L and the newly-generated pseudo-labeled dataset Sα. Notice that during this process, once we collect a new dataset, we train from the original base model fθ instead of continually fine-tuning fθ′, to avoid overfitting, following previous practice (Zelikman et al., 2022; Singh et al., 2023).
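As a compact illustration of Algorithm 2 end to end, the sketch below wires the steps together. The callables passed in (finetune, dpo_tune, sample, check) are hypothetical stand-ins for a concrete training stack, the all-pairs construction of preference triples is an assumption (the pairing strategy is not fully specified here), and the exact-duplicate removal stands in for the Jaccard-based deduplication described in Section 4.1; the sampling counts and temperature mirror the implementation details reported there.

```python
def dpo_augmented_self_training(
    finetune,        # (train_pairs) -> model: supervised fine-tuning from the *base* model
    dpo_tune,        # (model, preference_triples) -> model: DPO training on Eq. (5)
    sample,          # (model, question, k, temperature) -> list of rationales
    check,           # (rationale, gold_answer) -> bool: answer-match check
    labeled,         # list of (question, human_rationale, gold_answer)
    unlabeled,       # list of (question, gold_answer)
    max_iters=3,
):
    """High-level sketch of the DPO-augmented self-training loop (Algorithm 2)."""
    train_pairs = [(q, y) for q, y, _ in labeled]          # warm-up data: human-written rationales
    sft_model = finetune(train_pairs)                      # warm-up stage
    for _ in range(max_iters):
        # DPO step: rank sampled rationales by answer correctness.
        prefs = []
        for q, gold in unlabeled:
            rationales = sample(sft_model, q, k=5, temperature=0.7)
            wins = [r for r in rationales if check(r, gold)]
            losses = [r for r in rationales if not check(r, gold)]
            prefs += [(q, w, l) for w in wins for l in losses]   # pairing strategy is an assumption
        dpo_model = dpo_tune(sft_model, prefs)
        # SFT step: pseudo-label with the DPO model and keep only correct rationales.
        for q, gold in unlabeled:
            for r in sample(dpo_model, q, k=3, temperature=0.7):
                if check(r, gold):
                    train_pairs.append((q, r))
        train_pairs = list(dict.fromkeys(train_pairs))     # simple duplicate removal
        sft_model = finetune(train_pairs)                  # re-train from the base model
    return sft_model
```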
Q: James writes a 3-page letter to 2 different friends twice a week. How many pages does he write a year?
A: He writes each friend 3*2=<<3*2=6>>6 pages a week. So he writes 6*2=<<6*2=12>>12 pages every week. That means he writes 12*52=<<12*52=624>>624 pages a year. #### 624

Figure 3: An example from the GSM8K dataset. The calculation annotations are highlighted in blue. All calculation steps are wrapped within special tokens <<...>>. During decoding, the calculator will be triggered when such patterns exist and the model's output tokens will be overridden by the calculator results. Following Cobbe et al. (2021), the calculation is performed with the Python eval() function.

3.2 Batch Decoding with Calculator

We empirically observed that, in contrast to large LMs, which excel at basic arithmetic calculations (Brown et al., 2020), smaller LMs like Flan-T5-Large exhibit poor performance on similar arithmetic tasks. This deficiency significantly impacts their ability to handle math reasoning tasks. To address this, various studies (Parisi et al., 2022; Schick et al., 2023; Kadlčík et al., 2023) have proposed augmenting these smaller models with an external calculator to enhance their math reasoning capabilities. However, many of these existing methods are limited to a batch size of one during decoding. This constraint substantially reduces the inference speed, which hinders their widespread adoption.

Figure 4: Inference speed comparison between our methods (w/ and w/o calculator) and Calcformer (Kadlčík et al., 2023) with varying batch sizes (x-axis: batch sizes 1, 8, 16, and 32; y-axis: speedup relative to batch size 1). The results are measured on a single NVIDIA A40 GPU.

To alleviate this issue, we propose a simple yet efficient method that enables the use of a large batch size during inference with an external calculator. We adopt the calculator annotation provided in the original GSM8K dataset (Cobbe et al., 2021). Figure 3 shows an example of this annotation and describes how such annotations can be used during decoding. Our models are built with the Transformers library (Wolf et al., 2020). During generation, we adopt a customized LogitsProcessor (https://huggingface.co/docs/transformers/internal/generation_utils#logitsprocessor) to override the model's generation. LogitsProcessor provides an interface to modify the language model's output tokens during generation. To demonstrate the efficiency of the proposed solution, we compare the inference speed of our methods (w/ and w/o calculator) based on Flan-T5-Large against an open-source tool-using method, Calcformer (Kadlčík et al., 2023), based on T5-Large, in Figure 4. We find that when the batch size equals 1, all three methods have a similar inference speed of around 40 tokens per second. However, as the inference batch size increases, the speedup of our methods increases significantly.
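A simplified sketch of such a calculator-style LogitsProcessor is shown below. It is not the implementation used in this work: the regular expression, per-sequence state handling, and direct use of eval() are illustrative assumptions (eval() on model output should be sandboxed in any real deployment).

```python
import re
import torch
from transformers import LogitsProcessor

class CalculatorLogitsProcessor(LogitsProcessor):
    """Force the decoder to emit the evaluated result of an open '<<expr=' pattern."""

    def __init__(self, tokenizer):
        self.tokenizer = tokenizer
        self.forced = {}  # batch index -> queue of token ids still to emit

    def __call__(self, input_ids: torch.LongTensor,
                 scores: torch.FloatTensor) -> torch.FloatTensor:
        for i, seq in enumerate(input_ids):
            queue = self.forced.get(i, [])
            if not queue:
                text = self.tokenizer.decode(seq, skip_special_tokens=True)
                # The text generated so far ends with an open calculation, e.g. '<<3*2='.
                match = re.search(r"<<([0-9+\-*/(). ]+)=$", text)
                if match:
                    try:
                        result = eval(match.group(1))  # GSM8K-style; sandbox in practice
                    except Exception:
                        continue
                    queue = self.tokenizer.encode(f"{result}>>", add_special_tokens=False)
            if queue:
                token_id = queue.pop(0)
                self.forced[i] = queue
                # Mask every vocabulary entry except the forced token.
                scores[i, :] = float("-inf")
                scores[i, token_id] = 0.0
        return scores
```

In use, an instance would be wrapped in a LogitsProcessorList and passed to model.generate(), so the override applies to every sequence in the batch rather than requiring batch-size-one decoding.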
4 Experiments

In this section, we first describe our experimental setup. Next, we present the performance of our models across a series of math word problem solving tasks, comparing them against a selection of competitive baselines. Finally, we empirically analyze what makes the proposed method effective.

Dataset | Split | # Data
GSM8K (Cobbe et al., 2021) | Train | 6,705
GSM8K (Cobbe et al., 2021) | Validation | 768
GSM8K (Cobbe et al., 2021) | Test | 1,319
MultiArith (Roy and Roth, 2015) | Test | 600
ASDiv (Miao et al., 2020) | Test | 2,096
SVAMP (Patel et al., 2021) | Test | 1,000

Table 1: Statistics of the datasets used in our experiments. The original GSM8K dataset only contains train and test splits. We randomly select 768 training examples to construct the validation set in our experiments.

4.1 Setup

Base models  We employ Flan-T5 models (Chung et al., 2024) as the base models in our experiments. Specifically, we consider two models of different sizes from the Flan-T5 model family: Flan-T5-Base (250M) and Flan-T5-Large (780M). We select Flan-T5 over the original T5 models (Raffel et al., 2019) as our backbone models based on evidence from previous research (Chung et al., 2024; Fu et al., 2023), which demonstrated that instruction-tuned models, i.e., Flan-T5, outperform their pre-trained counterparts in math reasoning tasks. Besides the Flan-T5 models, we also consider the Llama models (Touvron et al., 2023a,b; Meta, 2024) as our base models.

Datasets  The labeled dataset L used in our experiments comes from the training split of the GSM8K dataset. Our unlabeled dataset U is also built upon GSM8K's training data by removing its annotated rationales y. For evaluation, we consider three other commonly used math reasoning tasks besides GSM8K: MultiArith (Roy and Roth, 2015), ASDiv (Miao et al., 2020), and SVAMP (Patel et al., 2021). Table 1 shows the statistics of each dataset. Following previous practice (Fu et al., 2023), we only fine-tune the base models on the GSM8K training data while using the remaining three datasets to evaluate our models' out-of-domain performance, as they do not have an official in-domain training split.

Evaluation metrics  We use the accuracy of the greedy decoding results as the main evaluation metric. The questions in the datasets in our experiments ask for the values of unknown variables. The answers to these questions are real numbers that can be extracted from the model-generated rationales.

Method | Base Model | GSM8K | MultiArith | ASDiv | SVAMP
Supervised Fine-Tuning | Flan-T5-Base | 18.1 | 54.2 | 26.2 | 19.5
Self-Training | Flan-T5-Base | 25.9 | 73.8 | 28.2 | 24.2
DPO-aug Self-Training (Ours) | Flan-T5-Base | 27.2 | 74.3 | 29.2 | 22.6
Supervised Fine-Tuning | Flan-T5-Large | 30.8 | 77.2 | 38.1 | 33.6
Self-Training | Flan-T5-Large | 35.6 | 86.2 | 42.5 | 34.8
DPO-aug Self-Training (Ours) | Flan-T5-Large | 37.4 | 89.0 | 42.8 | 36.8

Table 2: Overall accuracies (%) over four math word problem solving tasks. Inspired by previous practice (Fu et al., 2023), all the models in this table are only trained with the GSM8K training set (Cobbe et al., 2021). Hence, we report the in-distribution performance for GSM8K, while reporting the out-of-distribution performance for the other three datasets, i.e., MultiArith, ASDiv, and SVAMP.

Implementation details  In every DPO step, we sample rationales from the SFT model fθ′ to build the DPO training data. We sample 5 rationales from fθ′ per question with a temperature of 0.7. We regard generated rationales ŷ that contain the correct answer as winning completions yw, while regarding the rest as losing completions yl. For the SFT steps, we generate 3 rationales per question from the DPO-tuned model fθd, also with a temperature of 0.7. Only the generated rationales ŷ that are correct will be selected to build the pseudo-labeled dataset S.
For both generated DPO data and SFT data, we make simple deduplication based on the Jaccard similarity scores with a threshold of 0.7. 4.2 Main Results Baselines We mainly consider two baseline methods to compare with our method: Supervised FineTuning (SFT) and Self-Training (ST). SFT baselines are trained on the original GSM8K annotations with the LSFT(θ) objective. The Self-Training baseline adheres to the procedure outlined in Algorithm 1. Consequently, this makes it the most directly comparable and relevant baseline for evaluating the effectiveness of our approach. Comparison with baselines Table 2 shows the performance of our method compared with the baselines using two base models over four datasets. It can be observed that both the Self-Training baseline and our proposed method outperform the supervised fine-tuning baseline by a large margin, indicating the effectiveness of the general self-training framework. Although the Self-Training baselines make significant improvements over the SFT baselines, the proposed DPO-augmented Self-Training models consistently outperform them on both indomain and out-of-domain tasks. iter 0 iter 1 iter 2 iter 3 15 20 25 30 Accuracy (%) 18.1 24.2 26.0 25.9 18.1 24.6 26.6 27.2 Flan-T5-Base ST Ours iter 0 iter 1 iter 2 iter 3 30 32 34 36 38 Accuracy (%) 30.8 32.9 35.1 35.6 30.8 34.1 35.6 37.4 Flan-T5-Large ST Ours Figure 5: The performance (accuracy) of the proposed method on GSM8K over three iterations. "ST" stands for the Self-Training baseline. The "iter 0" represents the results of the SFT baselines. Effect of iterative training In Figure 5, we demonstrate how our model improve over mulitple iterations using two base models: Flan-T5-Base and Flan-T5-Large. Both the ST baseline and our method start with a SFT step using the original GSM8K annotation at iteration 0. We can observe that, as iterations progress, our method consistently outperforms ST, which suggests that the proposed method offers a substantial improvement over ST after repeated training cycles. We also notice that the degree of improvement gradually approaches zero as the iterative process proceeds, suggesting that the model has converged in the last iteration. 11922 Method Base Model # Annotations Annotator Tools Acc. Supervised fine-tuning CoT (Shridhar et al., 2023) GPT-2-Large 007K Human % 14.1 Self-consistency (Khalifa et al., 2023) Flan-T5-Large 007K Human ! 33.3 GRACE (Khalifa et al., 2023) Flan-T5-Large 007K Human ! 36.3 Calcformer (Kadlˇcík et al., 2023) T5-Large 030K Human ! 34.2 Knowledge Distillation Socratic CoT (Shridhar et al., 2023) GPT-2-Large 007K GPT-3 175B % 21.1 CoT from CodeX (Fu et al., 2023) Flan-T5-Large 100K CodeX % 20.2 CoT from PaLM (Magister et al., 2023) T5-Large 006K PaLM 540B ! 22.2 Ours DPO-aug Self-Training (K=3) Flan-T5-Large 007K Human ! 37.4 DPO-aug Self-Training (K=5) Flan-T5-Large 007K Human ! 39.1 DPO-aug Self-Training (K=10) Flan-T5-Large 007K Human ! 40.0 Table 3: Detailed comparison among existing methods with comparable model sizes on the GSM8K test set. The “Annotator” column indicates how the rationales of the training data are generated. In this column, “Human” refers to the labels from the original GSM8K dataset (Cobbe et al., 2021) that are written by human annotators. The “Tools” column indicates whether external tools, e.g., calculator or code interpreter, are applied during inference. 4.3 Comparison with Existing Methods In this section, we compare our methods with existing approaches. 
Additionally, we scale up the proposed method by increasing the number of sampled pseudo-labels per question, denoted by the hyperparameter K as in Yuan et al. (2023). Table 3 presents a detailed comparison between our method and exisiting methods using a simialr base model size. The base models we considered include GPT-2-Large (Radford et al., 2019), T5-Large (Raffel et al., 2019), and FlanT5-Large (Chung et al., 2024). All these models have approximately 770 million parameters. As shown in Table 3 our approach not only outperforms other methods on the GSM8K benchmark, but also demonstrates remarkable label efficiency by utilizing only the annotations from the original GSM8K dataset. In Table 4, we further verify the effectiveness of the proposed method with the Llama model family (Touvron et al., 2023a,b; Meta, 2024). comparing them with several state-of-the-art closed-source models as well as open-source models with similar sizes. We observe a substantial performance gap between proprietary and open-source models. Among the open-source models, those utilizing knowledge distillation consistently outperform their counterparts without such enhancement. Notably, our models using Llama-1-7b (Touvron et al., 2023a) and Llama-2-7b (Touvron et al., 2023b) base models surpass other open-source alternatives that do not employ knowledge distillation, achieving accuracies of 44.7% and 54.7% respectively. Furthermore, our model employing the latest Llama-3-8b (Meta, 2024) matches or exceeds the performance of earlier models with knowledge distillation, demonstrating a significant accuracy of 68.8%. Method Base Model Acc. Closed-source models Claude-3-Opus (Anthropic, 2024) 95.0 Claude-2 (Anthropic, 2023) 88.0 GPT-4 (OpenAI, 2023) 92.0 Flan-PaLM-2 (Anil et al., 2023) 84.7 PaLM-2 (Anil et al., 2023) 80.7 Models w/ knowledge distillation RFT-U13B (Yuan et al., 2023) Llama-1-7b 49.3 RFT-U33B (Yuan et al., 2023) Llama-2-7b 51.2 MAmooTH-CoT (Yue et al., 2023) Llama-2-7b 50.5 LEMA (An et al., 2023) Llama-2-7b 54.1 WizardMath (Luo et al., 2023) Llama-2-7b 54.9 MetaMath (Yu et al., 2024) Llama-2-7b 66.5 MuggleMath (Li et al., 2023a) Llama-2-7b 68.4 ToRA (Gou et al., 2024) Llama-2-7b 68.8 Models w/o knowledge distillation SFT (Yuan et al., 2023) Llama-1-7b 35.9 RFT (K=12) (Yuan et al., 2023) Llama-1-7b 41.6 RFT (K=100) (Yuan et al., 2023) Llama-1-7b 41.7 SFT (Yuan et al., 2023) Llama-2-7b 41.6 RFT (K=12) (Yuan et al., 2023) Llama-2-7b 45.3 RFT (K=100) (Yuan et al., 2023) Llama-2-7b 47.5 Ours DPO-ST (K=10) Llama-1-7b 44.7 DPO-ST (K=10) Llama-2-7b 54.7 DPO-ST (K=10) Llama-3-8b 68.8 Table 4: Comparison with the state-of-the-art closesource models and open-source models based on Llama model family (Touvron et al., 2023a,b; Meta, 2024). 11923 30 33 36 39 Dev accuracy (%) 36.1 36.5 Pass@1 60 62 64 66 Dev accuracy (%) 62.9 64.8 Pass@10 2400 2700 3000 2495 2940 # generated CoT pseudo-labels Before DPO step After DPO step Figure 6: Effects of the DPO step. Left: we report the greedy decoding results for Pass@1. Middle: For Pass@10, the solutions are sampled with temperature 0.7. Right: We count the number of generated pseudolabels after deduplication. 4.4 Effects of the DPO Step As mentioned earlier, the main difference between the proposed method and the classic self-training is the DPO step in every iterative process. We now analyze how the DPO steps improve self-training. Figure 6 compares the performance of models before and after the DPO step in the first iteration on the Pass@K metrics. 
Pass@K measures the probability that at least one of the K generated solutions for a problem is correct, which serves as a gauge for both the quality and the variety of the modelgenerated solutions. The models we investigate here are fine-tuned from the Flan-T5-Large. As shown in Figure 6, the DPO step yields only marginal improvements over the SFT model in the Pass@1 performance on the development set. However, the performance significantly improves when multiple rationales, i.e., 10 solutions per question, are sampled with temperature 0.7 (measured with the Pass@10 metric). This indicates that the DPO training objective makes language models inclined to generate rationales of both high quality and diversity. We also compare the number of generated rationales on the training set L for models with and without the DPO step. Figure 6 (right) clearly shows that the model after the DPO step can produce more SFT data for the next iteration. 4.5 Effects of External Calculator Driven by the observation that small-scale LMs frequently make basic calculation errors, we develop a simple yet efficient method that integrates an external calculator into the models’ decoding iter 0 iter 1 iter 2 iter 3 10 20 30 40 50 Dev accuracy (%) 36.7 40.5 43.9 44.8 16.3 17.1 17.8 18.0 w/ calculator w/o calculator Figure 7: GSM8K development set accuracy of FlanT5-Large with and without the use of an external calculator during inference. process. To evaluate the impact of this integration, we conduct an ablation study by omitting the calculator and present the findings in Figure 7. Our results indicate that decoding without the calculator markedly reduces accuracy across all iterations. We believe that this is because models will generate large amount of false positive pseudo-labels without calculator, that is, the generated pseudo-labels may have correct final answers but make errors in the intermediate reasoning steps. 5 Related Work Learning from pseudo-labels Supervised finetuning (SFT) is prevalent technique employed to enhance the performance of pre-trained language models on specific downstream tasks (Ouyang et al., 2022; Chung et al., 2024). However, this method heavily depends on the availability of highquality labeled data, which can be both expensive and labor-intensive to procure (Brown et al., 2020). To address this limitation, various strategies have been developed to generate high-quality pseudolabels using either unlabeled or synthetic data for a wide range of applications, including text classification (Xie et al., 2020), sentence representation learning (Wang and Lu, 2022), instruction tuning (Honovich et al., 2022), and math reasoning (Wang and Lu, 2023). Recent advancements in this area primarily focus on two directions: selftraining and knowledge distillation. The key difference between these methods lies in the source of the pseudo-labels: self-training uses the model’s own predictions on unlabeled data, while knowledge distillation utilizes the insights from larger, more powerful models. 11924 Self-training in language model Recently, we have witnessed a large number of works focusing on self-training algorithms for language models (He et al., 2019; Zelikman et al., 2022; Yuan et al., 2023). Most of such methods are built upon the classic self-training framework (Scudder, 1965). He et al. (2019) empirically studied the effectiveness of self-training in natural language generation tasks, e.g., summarization and translation. Zelikman et al. 
(2022) proposed selftaught reasoner (STaR), which demonstrated that language models can be iteratively improved from its own generation, even there are no gold rationales provided. Yuan et al. (2023) proposed rejection sampling fine-tuning to improve language models’ math reasoning abilities. This method can be interpreted as only executing one iteration of the self-training algorithm. Singh et al. (2023) proposed ReSTEM, a self-improving algorithm based on expectation-maximization framework. This method demonstrates significant improvements in problem-solving tasks, e.g., math reasoning and code generation. Knowledge distillation from LLMs Many of the recent research efforts demonstrated large language models (LLMs) are capable of doing math reasoning (Wei et al., 2022b; Gao et al., 2022; OpenAI, 2023; Anil et al., 2023; Anthropic, 2023, 2024). Therefore, a recent line of work focuses on improving smaller language models’ reasoning abilities by distilling chain-of-thought pseudo-labels from LLMs (Ho et al., 2023; Magister et al., 2023; Fu et al., 2023). For example, Luo et al. (2023) proposed Reinforcement Learning from Evol-Instruct Feedback built upon EvolInstruct (Xu et al., 2023), which requires ChatGPT to provide the training signals. An et al. (2023) demonstrated that language models can effectively learn from the mistakes that can be corrected by larger models, e.g., GPT-4 (OpenAI, 2023), during supervised fine-tuning. Yu et al. (2024) proposed a novel question bootstrapping method with the help of larger models to augment the existing training dataset. Although these methods are shown to have promising experimental results, they are costly to implement as large models cost more FLOPs during inference. Our work demonstrates that smallscale language models can also learn from their own generations like the larger ones (Zelikman et al., 2022), which is more resource-efficient compared with the knowledge distillation methods. 6 Conclusion We present an effective and resource-efficient method called DPO-augmented Self-Training (DPO-ST), which augments the original SelfTraining algorithm with Direct Preference Optimization (Rafailov et al., 2023). Unlike previous studies that improve small-scale language models’ reasoning abilities by distilling a larger and more powerful model, we argue that small models that are trained merely on the limited human-labeled data can improve themselves significantly. We also empirically find that models trained with DPO loss can generate pseudo-labeled data with higher quality and diversity. Our experiments demonstrate that the proposed method not only outperforms existing methods with comparable model sizes on the GSM8K benchmark, but also achieves remarkable resource efficiency in terms of both computational cost and the requirements of human-labeled data. Limitations Use of unlabeled data Our method is built upon the classic self-training algorithm, which provides an effective semi-supervised learning framework that makes good use of unlabeled data. However, this work doesn’t explore the use of unlabeled data explicitly. Future research efforts can be made to explore how to collect high-quality unlabeled data for math word problem solving. In other words, we need to find an efficient method for collecting unlabeled data U = {(xi, ai)}u i=1 that for each math question xi, there is a corresponding groundtruth answer ai. 
Generalization to other tasks One of the limitations of this work is the narrow scope of our experiments, which were exclusively conducted on math reasoning tasks. The primary reason for this limitation is the lack of appropriate training data for other reasoning tasks. As our method requires a set of training data with chain-of-thought labels, many existing reasoning tasks lack such annotations, making it challenging to extend our experiments beyond the current scope. Future research may focus on identifying and developing suitable datasets for a wider range of reasoning tasks to fully evaluate the applicability and effectiveness of our method across different reasoning tasks. 11925 Acknowledgements This work was done when Shichen Li was a visiting student at the StatNLP Research Group of SUTD. We would like to thank the anonymous reviewers, our meta-reviewer, and senior area chairs for their constructive comments and support on this work. This research/project is supported by Ministry of Education, Singapore, under its Academic Research Fund (AcRF) Tier 2 Programme (MOE AcRF Tier 2 Award No: MOET2EP201220011), the National Research Foundation Singapore and DSO National Laboratories under the AI Singapore Program (AISG Award No: AISG2RP-2020-016), and Ministry of Education, Singapore, under its Tier 3 Programme (The Award No.: MOET320200004). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not reflect the views of the funding agencies. References Massih-Reza Amini, Vasilii Feofanov, Loïc Pauletto, Emilie Devijver, and Yury Maximov. 2022. Self-training: A survey. arXiv preprint arXiv:2202.12040. Shengnan An, Zexiong Ma, Zeqi Lin, Nanning Zheng, Jian-Guang Lou, and Weizhu Chen. 2023. Learning from mistakes makes llm better reasoner. arXiv preprint arXiv:2310.20689. Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. 2023. Palm 2 technical report. arXiv preprint arXiv:2305.10403. Anthropic. 2023. Claude 2. https://www.anthropic. com/news/claude-2. Accessed: 2024-05-06. Anthropic. 2024. The claude 3 model family: Opus, sonnet, haiku. Accessed: 2024-05-06. Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q Jiang, Jia Deng, Stella Biderman, and Sean Welleck. 2024. Llemma: An open language model for mathematics. In Proceedings of ICLR. Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, et al. 2020. Language models are few-shot learners. In Proceedings of NeurIPS. Stephen Casper, Xander Davies, Claudia Shi, Thomas Krendl Gilbert, J’er’emy Scheurer, Javier Rando, Rachel Freedman, Tomasz Korbak, David Lindner, et al. 2023. Open problems and fundamental limitations of reinforcement learning from human feedback. Transactions on Machine Learning Research. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. 
Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. Journal of Machine Learning Research. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2024. Scaling instruction-finetuned language models. Journal of Machine Learning Research, 25(70):1–53. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168. Stanley C. Fralick. 1967. Learning to recognize patterns without a teacher. IEEE Trans. Inf. Theory. Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, and Tushar Khot. 2023. Specializing smaller language models towards multi-step reasoning. In Proceedings of ICML. Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2022. Pal: Program-aided language models. arXiv preprint arXiv:2211.10435. Zhibin Gou, Zhihong Shao, Yeyun Gong, Yujiu Yang, Minlie Huang, Nan Duan, Weizhu Chen, et al. 2024. Tora: A tool-integrated reasoning agent for mathematical problem solving. In Proceedings of ACL. Caglar Gulcehre, Tom Le Paine, Srivatsan Srinivasan, Ksenia Konyushkova, Lotte Weerts, Abhishek Sharma, Aditya Siddhant, Alex Ahern, Miaosen Wang, Chenjie Gu, et al. 2023. Reinforced selftraining (rest) for language modeling. arXiv preprint arXiv:2308.08998. 11926 Junxian He, Jiatao Gu, Jiajun Shen, and Marc’Aurelio Ranzato. 2019. Revisiting self-training for neural sequence generation. arXiv preprint arXiv:1909.13788. Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the math dataset. In Proceedings of NeurIPS. Namgyu Ho, Laura Schmid, and Se-Young Yun. 2023. Large language models are reasoning teachers. In Proceedings of ACL. Or Honovich, Thomas Scialom, Omer Levy, and Timo Schick. 2022. Unnatural instructions: Tuning language models with (almost) no human labor. arXiv preprint arXiv:2212.09689. Marek Kadlˇcík, Michal Štefánik, Ondˇrej Sotoláˇr, and Vlastimil Martinek. 2023. Calc-x and calcformers: Empowering arithmetical chain-of-thought through interaction with symbolic systems. In Proceedings of EMNLP. Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. ArXiv. Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, and Lu Wang. 2023. Grace: Discriminator-guided chain-of-thought reasoning. In Findings of EMNLP. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. In Proceedings of NeurIPS. Chengpeng Li, Zheng Yuan, Guanting Dong, Keming Lu, Jiancan Wu, Chuanqi Tan, Xiang Wang, and Chang Zhou. 2023a. Query and response augmentation cannot help out-of-domain math reasoning generalization. arXiv preprint arXiv:2310.05506. Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. 2023b. Making language models better reasoners with step-aware verifier. In Proceedings of ACL. 
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. 2023. Wizardmath: Empowering mathematical reasoning for large language models via reinforced evol-instruct. arXiv preprint arXiv:2308.09583. Lucie Charlotte Magister, Jonathan Mallinson, Jakub Adamek, Eric Malmi, and Aliaksei Severyn. 2023. Teaching small language models to reason. In Proceedings of ACL. Meta. 2024. Llama 3. https://llama.meta.com/ llama3/. Accessed: 2024-06-01. Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su. 2020. A diverse corpus for evaluating and developing english math word problem solvers. In Proceedings of ACL. OpenAI. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke E. Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Francis Christiano, Jan Leike, and Ryan J. Lowe. 2022. Training language models to follow instructions with human feedback. In Proceedings of NeurIPS. Aaron Parisi, Yao Zhao, and Noah Fiedel. 2022. Talm: Tool augmented language models. arXiv preprint arXiv:2205.12255. Arkil Patel, Satwik Bhattamishra, and Navin Goyal. 2021. Are NLP models really able to solve simple math word problems? In Proceedings of NAACL. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog. Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. In Proceedings of NeurIPS. Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research. Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. In Proceedings of EMNLP. Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language models can teach themselves to use tools. In Proceedings of NeurIPS. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347. H. J. Scudder. 1965. Probability of error of some adaptive pattern-recognition machines. IEEE Trans. Inf. Theory. Kumar Shridhar, Alessandro Stolfo, and Mrinmaya Sachan. 2023. Distilling reasoning capabilities into smaller language models. In Findings of ACL. 11927 Avi Singh, John D Co-Reyes, Rishabh Agarwal, Ankesh Anand, Piyush Patil, Peter J Liu, James Harrison, Jaehoon Lee, Kelvin Xu, Aaron Parisi, et al. 2023. Beyond human data: Scaling self-training for problem-solving with language models. arXiv preprint arXiv:2312.06585. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023a. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. 
arXiv preprint arXiv:2307.09288. Tu Vu, Minh-Thang Luong, Quoc Le, Grady Simon, and Mohit Iyyer. 2021. STraTA: Self-training with task augmentation for better few-shot learning. In Proceedings of EMNLP. Ben Wang and Aran Komatsuzaki. 2021. GPT-J6B: A 6 Billion Parameter Autoregressive Language Model. https://github.com/kingoflolz/ mesh-transformer-jax. Tianduo Wang and Wei Lu. 2022. Differentiable data augmentation for contrastive sentence representation learning. In Proceedings of EMNLP. Tianduo Wang and Wei Lu. 2023. Learning multi-step reasoning by solving arithmetic tasks. In Proceedings of ACL. Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. 2022a. Emergent abilities of large language models. Transactions on Machine Learning Research. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022b. Chain of thought prompting elicits reasoning in large language models. In Proceedings of NeurIPS. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, et al. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of EMNLP. Qizhe Xie, Zihang Dai, Eduard Hovy, Thang Luong, and Quoc Le. 2020. Unsupervised data augmentation for consistency training. In Proceedings of NeurIPS. Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. 2023. Wizardlm: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244. Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. 2024. Metamath: Bootstrap your own mathematical questions for large language models. In Proceedings of ICLR. Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Chuanqi Tan, and Chang Zhou. 2023. Scaling relationship on learning mathematical reasoning with large language models. arXiv preprint arXiv:2308.01825. Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. 2023. Mammoth: Building math generalist models through hybrid instruction tuning. arXiv preprint arXiv:2309.05653. Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Goodman. 2022. Star: Bootstrapping reasoning with reasoning. In Proceedings of NeurIPS. 11928
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11929–11942 August 11-16, 2024 ©2024 Association for Computational Linguistics UltraLink : An Open-Source Knowledge-Enhanced Multilingual Supervised Fine-tuning Dataset Haoyu Wang∗4 Shuo Wang∗†1 Yukun Yan*1 Xujia Wang1 Zhiyu Yang5 Yuzhuang Xu1 Zhenghao Liu6 Liner Yang5 Ning Ding1 Xu Han1,2,3 Zhiyuan Liu1,2,3 Maosong Sun†1,2,3 1Dept. of Comp. Sci. & Tech., Tsinghua University 2Institute for AI, Tsinghua University 3Beijing National Research Center for Information Science and Technology 4BUPT 5BLCU 6Northeastern University, China Abstract Open-source large language models (LLMs) have gained significant strength across diverse fields. Nevertheless, the majority of studies primarily concentrate on English, with only limited exploration into the realm of multilingual abilities. In this work, we therefore construct an open-source multilingual supervised fine-tuning dataset. Different from previous works that simply translate English instructions, we consider both the language-specific and language-agnostic abilities of LLMs. Firstly, we introduce a knowledge-grounded data augmentation approach to elicit more languagespecific knowledge of LLMs, improving their ability to serve users from different countries. Moreover, we find modern LLMs possess strong cross-lingual transfer capabilities, thus repeatedly learning identical content in various languages is not necessary. Consequently, we can substantially prune the language-agnostic supervised fine-tuning (SFT) data without any performance degradation, making multilingual SFT more efficient. The resulting UltraLink dataset comprises approximately 1 million samples across five languages (i.e., En, Zh, Ru, Fr, Es), and the proposed data construction method can be easily extended to other languages. UltraLink-LM, which is trained on the UltraLink dataset, outperforms several representative baselines across many tasks. 1 1 Introduction Thanks to the collaborative efforts of the active large language models (LLMs) community, opensource LLMs are becoming increasingly powerful (Touvron et al., 2023a,b; Jiang et al., 2023), even outperforming some representative closedsource counterparts (OpenAI, 2023; Anil et al., 2023) in some specific tasks (Wei et al., 2023b). * Equal contribution. † Corresponding authors. 1 Both UltraLink and UltraLink-LM will be publicly available at https://github.com/OpenBMB/UltraLink. Figure 1: To equip large language models with not only language-specific knowledge but also language-agnostic expertise, we construct the UltraLink dataset for multilingual SFT. For each language, UltraLink consists of four subsets, encompassing chat data with languagespecific content, chat data with language-agnostic content, math data, and code data. These accomplishments are closely related to the contribution of open-source supervised fine-tuning (SFT) data (Ding et al., 2023; Anand et al., 2023; Peng et al., 2023; Wang et al., 2023; Kim et al., 2023; Xu et al., 2023), which plays a pivotal role in eliciting the instruction-following ability of LLMs and aligning the model behaviour with human preferences. Nevertheless, the focus of existing works is primarily on the construction of English SFT data, resulting in a comparatively limited availability of multilingual SFT resources. To mitigate the challenge of data scarcity, some researchers suggest translating English SFT data into multiple languages. Lai et al. 
(2023) utilize ChatGPT2 to translate the two essential components, instructions and responses, from Alpacastyle (Taori et al., 2023) English data to other languages. Chen et al. (2023) propose to translate 2https://chatgpt.com/ 11929 both the Alpaca and the ShareGPT3 data. While directly translating English SFT data can effectively support multilingual SFT, there are still two major drawbacks associated with this approach: • Low cultural diversity and imprecise translations caused by cultural differences: translation of English data may not adequately encompass topics specific to non-English regions (e.g., subjects related to Russian culinary culture), leading to a deficiency in language-specific knowledge for LLMs. Moreover, for certain instructions (e.g., what are the most important holidays of the year?), the answers vary in different cultural backgrounds, so directly translating all English conversations may result in numerous distorted translations. • Linearly increased data volume: the total volume of translated SFT data linearly increases with the number of languages. However, the translations across different languages are semantically equivalent, making the model repeatedly learn the same content. We believe that a good multilingual LLM should not only possess language-specific knowledge but also be equipped with language-agnostic skills. Figure 2 gives an example of the two types of instructions. We thus propose a new approach to better construct multilingual SFT data, applicable to any language. Compared to conversation translation (Lai et al., 2023; Chen et al., 2023), our advantages can be illustrated as follows: • Higher cultural diversity and less distorted translations: for language-specific data, we propose a knowledge-grounded data augmentation method. Concretely, Wikipedia is employed4 as a data source for each language to provide more language-specific contexts. For language-agnostic chat data (e.g., the second example in Figure 2), we propose a two-stage translation mechanism. Given high-quality English SFT data, we first filter out the conversations that are specific to certain regions. Then we translate the remaining languageagnostic data. • Pruned data volume: for language-agnostic skills like math reasoning and code generation, 3https://sharegpt.com 4https://www.wikipedia.org 1. Language-Specific Instructions What are some common tea traditions or etiquette observed in England? 2. Language-Agnostic Instructions How do you approach learning a new skill or acquiring knowledge, and what strategies have you found to be effective in your learning process? Figure 2: Examples of instructions with languagespecific and language-agnostic content. through our experiments, we find that it is unnecessary for the model to repeatedly learn identical problems, thanks to the strong crosslingual transfer capabilities of modern LLMs. We can thus significantly prune the amount of math and code SFT data for non-English languages without compromising the model performance. We apply the aforementioned approach to four non-English languages, including Chinese, Russian, French, and Spanish. Note that our method can also be easily extended to other languages. Finally, we train a SFT LLM on the proposed UltraLink dataset, which outperforms several representative open-source multilingual LLMs, demonstrating the effectiveness of our dataset. 2 Data Curation Automatically generating SFT data is now an important research topic for LLMs (Taori et al., 2023; Wang et al., 2023; Ding et al., 2023). 
For multilingual SFT, it is crucial to consider the influence of cultural diversity on language-specific data, while also integrating language-agnostic universal data that is related to the general abilities of LLMs (i.e., math reasoning). In this work, we propose a data construction framework consisting of two pipelines, as shown in Figure 3.

Figure 3: The proposed data augmentation method consists of two pipelines. The upper pipeline illustrates the generation of language-specific chat data. Dialogues are generated by LLMs, conditioning on language-specific knowledge extracted from Wikipedia. The language-agnostic pipeline aims to leverage existing high-quality English SFT data, using a two-stage translation mechanism to mitigate translation errors stemming from cultural differences.

2.1 Language-Specific Data Curation
The cultures around the world are vibrant and diverse, reflecting the lifestyles and perspectives of people from various countries and regions. To better cater to diverse users, the cultural diversity of multilingual LLMs should be improved. In this aspect, we propose a knowledge-grounded data augmentation method, leveraging language-specific data sources to provide intricate and varied cultural backgrounds. Our method mainly contains two steps: (1) preparing and sampling knowledge from data sources as cultural backgrounds, and (2) steering LLMs to generate informative conversations given the provided cultural backgrounds.

2.1.1 Knowledge Preparation
For each language, we utilize Wikipedia dumps5 as the data source, encompassing a diverse array of topics closely related to the respective culture. We first use an open-source extraction toolkit6 to preprocess the raw dumps and get text descriptions for each entry. Then we use the language identification model provided by fastText (Joulin et al., 2017) to remove contents that are not in the expected language. For Chinese, we also use OpenCC7 to convert traditional Chinese texts into simplified Chinese. Finally, we filter out documents that are shorter than 1K tokens or longer than 10K tokens. The number of tokens is calculated by tiktoken8. Given that most LLMs have a limited context length, we divide the whole text into segments whose lengths are between 1K and 2K. We do not split whole sentences when performing text segmentation. The preprocessed texts are used as contexts for the following dialogue generation procedure.
5 https://dumps.wikimedia.org
6 https://github.com/attardi/wikiextractor
7 https://github.com/BYVoid/OpenCC
8 https://github.com/openai/tiktoken

2.1.2 Dialogue Generation
To automatically generate multi-turn dialogues, we designed a question generator and an answer generator, which are both based on GPT-3.5. When generating the dialogue, both the question and answer generators are conditioned on a provided text segment as the cultural background. The prompts used can be divided into four parts: system prompt, principles, cultural background, and dialogue history.
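As a small illustration of how such a prompt might be assembled, the snippet below concatenates the four parts in the order given in Figure 4. The function name and separator handling are an assumed sketch rather than the exact implementation used to build UltraLink.

```python
def build_prompt(system_prompt: str, principles: str,
                 cultural_background: str, dialogue_history: str = "") -> str:
    """Assemble the four prompt parts. The Wikipedia segment is wrapped in
    <document> ... <\document> separators, and the dialogue history is an
    empty string when generating the initial question."""
    return (
        f"{system_prompt}\n"
        f"{principles}\n"
        f"<document>\n{cultural_background}\n<\\document>\n"
        f"{dialogue_history}"
    )
```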
The prompt structure is shown in Figure 4.

{system prompt}
{principles}
<document>
{cultural background}
<\document>
{dialogue history}

Figure 4: Structure of the prompts used for dialogue generation. The provided cultural background is enclosed within a pair of separators.

The system prompt is used to describe the task (i.e., generating the initial question). The principles provide some detailed suggestions for the LLM, which are found important for improving the quality of the generated data. The cultural background is the preprocessed text segment that contains language-specific knowledge. The dialogue history provides the historical questions and answers, which are set to an empty string when generating the initial question.

Dataset | Dialogues | Turns | Avg. Question Length | Avg. Answer Length | Avg. Turn Length
Okapi Dataset (Lai et al., 2023) | 207K | 207K | 28.64 | 95.72 | 124.36
Guanaco Dataset (Attardi, 2023) | 1173K | 1173K | 77.58 | 83.31 | 160.89
Multialpaca (Wei et al., 2023a) | 132K | 132K | 39.86 | 83.71 | 123.57
Phoenix SFT data (Chen et al., 2023) | 464K | 893K | 165.27 | 200.07 | 365.34
UltraLink (Ours) | 1032K | 1623K | 87.86 | 290.35 | 378.21

Table 1: Comparison between UltraLink and existing open-source multilingual SFT datasets.

Generating the Initial Dialogue The principles used to generate the first question are shown in Figure 5. We ask the involved LLM (i.e., GPT-3.5) to understand the provided cultural background and then propose a related question that can be answered according to the cultural background. For the generation of answers, we provide only a concise description of the principles in Figure 6 due to space limitations. For each language, the principles are translated by humans into the target language. We show only the English version of the prompts here to make the method easier to understand.

1. Pose "why" and "how" questions: given the provided document, ask why something happens or how it occurs. The questions should guide respondents to engage in more in-depth analysis and explanation, rather than simply stating facts.
2. Compare and contrast: if the text mentions a phenomenon or viewpoint, you can try comparing it with other similar situations and then pose questions to explore the similarities and differences between them, as well as potential impacts.
3. Predict future developments: if the text refers to a trend or direction of development, you can pose questions to discuss possible changes in the future or express opinions and predictions about a particular trend.
4. Stimulate reflection and discussion: pose open-ended questions to encourage respondents to delve into deeper reflection and discussion.

Figure 5: Principles for generating the initial question.

1. Understand the content. 2. Logically reason about details. 3. Compare relevant situations. 4. Discuss future trends. 5. Engage in deeper discussion.

Figure 6: A brief description of the principles for generating the initial answer.

Generating Subsequent Dialogues After generating the initial question and answer, we iteratively produce subsequent dialogues. To improve the diversity of constructed dialogues, we propose two types of subsequent questions. At each turn, we randomly decide whether to present an in-depth question for a more detailed exploration of the same topic or to generate an expansive question to delve into other subjects. The principles used to ask an in-depth question are shown in Figure 7, while the principles used to ask an expansive question are shown in Figure 8.
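A minimal sketch of this multi-turn generation loop is given below. The ask_question and ask_answer callables stand in for GPT-3.5 requests built from the prompt parts described above (system prompt, principles, cultural background, dialogue history); the loop itself is an assumed illustration rather than the released pipeline.

```python
import random

def generate_dialogue(ask_question, ask_answer, background,
                      num_turns=3, p_in_depth=0.5):
    """Generate a multi-turn dialogue grounded in one Wikipedia text segment.

    ask_question(mode, background, history) and ask_answer(background, history)
    are placeholders for GPT-3.5 calls; mode is 'initial', 'in_depth', or
    'expansive'. Both calls are conditioned on the same cultural background,
    and the history is empty for the initial question.
    """
    history, dialogue = "", []
    for turn in range(num_turns):
        if turn == 0:
            mode = "initial"
        else:
            # Randomly choose between digging deeper into the same topic
            # and expanding to a related subject.
            mode = "in_depth" if random.random() < p_in_depth else "expansive"
        question = ask_question(mode, background, history)
        answer = ask_answer(background, history + "\nQ: " + question)
        dialogue.append((question, answer))
        history += f"\nQ: {question}\nA: {answer}"
    return dialogue
```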
Note that when generating subsequent dialogues, the cultural background is also provided to the model. 1. Understand the context. 2. Uncover implicit information. 3. Challenge existing viewpoints. 4. Extend the topic. 5. Pose open-ended questions. 6. Delve into more complex logic. Figure 7: A brief description of the principles to ask an in-depth following question. 1. Abstract the theme. 2. Turn into overarching topics. 3. Considering temporal and spatial span. 4. Connect to related fields. 5. Take a global perspective. Figure 8: A brief description of the principles to ask an expansive following question. Using the aforementioned approach, we automatically construct language-specific multi-turn conversations in four languages. The details of constructed data will be illustrated in Section 3, including the average length and some other statistics. 11932 Note that the proposed knowledge-grounded data augmentation approach can also be applied to any other language. 2.2 Language-Agnostic Data Curation In addition to language-specific abilities, the general abilities that are language-agnostic are also essential for LLMs. As numerous high-quality English SFT datasets already encompass a broad spectrum of general abilities, we suggest employing a two-stage translation mechanism to maximize the utility of existing English resources. Our goal is to reduce translation errors caused by cultural differences since some questions can not be directly translated into other languages (e.g., write an English poem where each sentence starts with the letter “A"). In the first stage, we introduce a multi-criteria mechanism to filter out English-specific conversations that are difficult to translate accurately into other languages. Then we use GPT-3.5 to translate the remaining languageagnostic data. In this study, we consider three key components of general abilities for LLMs: chat, math reasoning, and code generation. For chat, we use ShareGPT as the English chat data, which consists of multi-turn dialogues between human users and ChatGPT. For math reasoning, we use MetaMath (Yu et al., 2023) as the English math data. For code generation, we use the Magicoder dataset (Wei et al., 2023b) as the English code data. 2.2.1 Multi-Criteria Filter The criteria employed to filter out English-specific conversations are outlined in Figure 9. Our goal is to retain only conversations whose topics can be discussed in any cultural background. GPT-3.5 is utilized to ascertain whether a conversation contains information relevant to the specified features. For instance, the conversations that include English jokes will be removed before translation. 2.2.2 Translator After the filtering process, the remaining conversations undergo the translation procedure, wherein they are translated into four languages using GPT3.5-turbo to maintain fluency and accuracy. We also provide some translation principles to help GPT-3.5 better perform the translation, which is shown in Figure 10. 1. Full name of *human*. 2. Country, region, state, province, city, address. 3. Conventions, politics, history, and religion. 4. Poetry, rhymes, myths, tales, jokes, and slang. 5. Food, cloth, furniture, construction. 6. Organization, company, product, brand. Figure 9: Criteria used to identify English-specific conversation. We only provide a brief version with a detailed explanation due to space limitations. 1. Ensure the completeness and consistency of content during the translation process, without adding or deleting any information. 2. 
Ensure that the translated text is fluent and natural, using the most common expressions in the target language whenever possible. Use officially prescribed translations for professional terms and adhere to the target-language expression conventions. 3. If certain terms are not in natural language but are mathematical symbols, programming languages, or LaTeX language, please directly copy the original text. 4. If there are no equivalent translation terms for certain vocabulary, please directly copy the original text. 5. For citations and references, please directly copy the original text. Figure 10: Translation principles. 2.3 Data Pruning English math and code datasets are frequently extensive, exemplified by MetaMath (Yu et al., 2023) with 395K training examples and Magicoder (Wei et al., 2023b) comprising 186K training examples. Assuming the English data consists of N training examples, the overall multilingual dataset would encompass k × N examples if we translate all the English training examples into other languages, where k is the number of languages. The linear increase in data volume will result in higher training costs during SFT. As math and code problems are not closely tied to the cultural backgrounds of different countries, LLMs may have the capability to transfer English math and code abilities into other languages with only limited training examples. In other words, it may not be necessary for LLMs to learn all translated math and code problems. To verify the assumption mentioned above, we conduct experiments on Chinese math and code tasks. 11933 For comparison, we fine-tune Llama-2-7b (Touvron et al., 2023b) in the following two different ways: • From En SFT Model: we first use English math or code data to fine-tune the base model, and then use different amounts of Chinese data to further tune the model. • From Base Model: we directly use Chinese math or code data to fine-tune the base model. 15 25 35 45 55 1k 2k 4k 8k 16k 32k 48k 64k 45.6 45.2 47.2 38.8 34.8 29.6 22.0 15.6 49.6 50.0 50.0 47.6 45.6 44.8 45.6 38.8 From En SFT Model From Base Model Figure 11: Performance on MGSM-Zh with different numbers of Chinese mathematical training examples. 8 16 24 32 40 1k 2k 4k 8k 16k 32k 48k 64k 20.7 20.7 20.1 19.5 20.1 9.8 16.5 16.5 30.5 31.1 26.8 32.9 28.1 26.2 27.4 24.4 From En SFT Model From Base Model Figure 12: Performance on HumanEval-Zh with different numbers of Chinese code training examples. Figure 11 and 12 show the performances of the two types of models. Surprisingly, the involved LLM exhibits strong cross-lingual transfer capabilities. For instance, utilizing only 2K Chinese mathematical training examples can yield a score of 45.6 when fine-tuning from the English SFT model. In contrast, directly fine-tuning the base model with an equivalent amount of Chinese data results in a significantly lower score of 22.0, highlighting the superior performance achieved through transfer from the English SFT model. In the Chinese code generation task, we observe a similar trend, wherein transfer learning from the English SFT model substantially enhances the performance of the model. Moreover, we find that using more Chinese SFT data does not consistently lead to improved performance. For the math task, using 32K Chinese training examples achieves the best result. For the code task, the peak performance is attained with 16K Chinese code generation examples. 
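Before turning to these experiments, the sketch below summarises the two-stage language-agnostic pipeline described in Section 2.2: an LLM judge first flags conversations that contain English-specific content (the criteria of Figure 9), and only the remaining conversations are translated under the principles of Figure 10. The ask_llm callable is a placeholder for a GPT-3.5 request, and the exact prompt wording is an assumption.

```python
def filter_and_translate(ask_llm, conversations, criteria, principles, target_lang):
    """Two-stage pipeline: drop culture-bound conversations, translate the rest.

    ask_llm(prompt) -> str is a placeholder for a GPT-3.5 call; criteria and
    principles are the texts of Figures 9 and 10, respectively.
    """
    language_agnostic = []
    for conv in conversations:
        # Stage 1: multi-criteria filter. Conversations containing any of the
        # English-specific features (names, places, jokes, slang, ...) are removed.
        verdict = ask_llm(
            f"{criteria}\n\nDoes the conversation below contain any of the "
            f"features listed above? Answer yes or no.\n\n{conv}"
        )
        if verdict.strip().lower().startswith("no"):
            language_agnostic.append(conv)

    # Stage 2: translate the remaining language-agnostic conversations.
    return [
        ask_llm(
            f"{principles}\n\nTranslate the conversation below into "
            f"{target_lang}.\n\n{conv}"
        )
        for conv in language_agnostic
    ]
```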
Hence, we incorporate only 32K mathematical training examples and 16K code training examples for each non-English language in the UltraLink dataset. Lang. Lang.Spec. Lang.Agno. Chat Chat Math Code En 10K 67K 395K 186K Zh 36K 11K 32K 16K Ru 37K 11K 32K 16K Fr 30K 11K 32K 16K Es 34K 11K 32K 16K UltraLink 147K 112K 523K 250K w/o En 137K 45K 128K 64K Table 2: Scales of different components in UltraLink, which are measured by the number of dialogues. 2.4 Data Overlap Strategy UltraLink consists of four parts, including the language-specific chat part, the language-agnostic chat part, the code part, and the math part. It is worthwhile to design different strategies for them because of their intrinsic features. For language-specific data, we utilize respective Wikipedias to prevent overlap, recognizing that different languages often focus on distinct frequentlydiscussed topics. The content of language-agnostic chat data is shared across languages to capture general knowledge. For code and math data, as there is no prior indication that certain languages favour specific code or math topics, we randomly selected examples for each language. 2.5 Data Quality Ensurance It is important to ensure the quality of the data generated by LLM. We deal with this issue from the following aspects. Firstly, We primarily filter Wikipedia articles based on length, as longer texts often contain more detailed information, providing GPT with a comprehensive context for generating accurate dialogue data. Additionally, we attach a paragraph split from Wikipedia to each QA generation to improve answer quality. For instance, answers generated by GPT with a Wiki paragraph in an autobiography context show high relevance and can accurately describe when events occurred, 11934 ensuring correctness. We also leverage the tendency of longer inputs to elicit longer responses to obtain comprehensive answers. To prevent the negative impact of truncation on LLM, we filter out truncated responses from the API based on the window size of the used LLM. As for the quality check of UltraLink, the pass rate for manual inspection is 96%, and the pass rate for answers is 99%. 3 Dataset Statistics 3.1 Data Distribution Table 2 presents the scale of each component in UltraLink, encompassing five languages. Each language contributes four types of SFT data: chat data with language-specific knowledge, chat data with language-agnostic knowledge, math data, and code data. The quantities of language-agnostic segments are approximately equal for the four non-English languages. 3.2 Comparison with Existing Datasets Before us, there are some existing multilingual SFT datasets, where we select four representative datasets for comparison, including the Okapi dataset (Lai et al., 2023), the Guanaco dataset (Attardi, 2023), Multialpaca (Wei et al., 2023a), and the Phoenix SFT data (Chen et al., 2023). We conduct a comparison based on the number of dialogues, the number of conversation turns, and the average lengths across the respective datasets. As shown in Table 1, we find that UltraLink contains fewer dialogues than the Guanaco dataset, but the latter only contains single-turn conversations. Only the Phoenix SFT data and UltraLink include multiturn conversations. We use the number of tokens estimated by tiktoken as the length for each question and answer. The question token length does not include the document. 
On average, UltraLink exhibits the longest average length per turn (i.e., 378.21 tokens), considering both questions and their corresponding answers. Compared to UltraLink, the Phoenix SFT data has longer questions (165.27 vs. 87.86), but its answers are shorter (200.07 vs. 290.35). For each language, we also estimate the average lengths of questions and answers, and the results are shown in Figure 13. Across all languages, the answer is significantly longer than the question.

Figure 13: Number of tokens for each language in UltraLink (average question and answer lengths for Zh, Ru, Fr, and Es).

The primary distinctions between our dataset and existing multilingual SFT datasets can be clarified from the following aspects:
• Think from two dimensions: Our dataset takes into account both language-specific and language-agnostic aspects, filling a gap left by previous works, which can facilitate the development of more versatile and effective language models.
• Not just translation: Our dataset is grounded by Wikipedia, which ensures a higher degree of correctness and significantly reduces the occurrence of misconceptions. This feature is particularly beneficial for tasks that require a solid grounding in factual information. The extracted text from Wikipedia can provide more cultural backgrounds, improving the cultural diversity of the resulting dataset.
• Focus on generation: Unlike many existing datasets that are primarily designed for question-answering tasks, our dataset is geared towards knowledge infusion and the generation of extended text. This unique focus makes it a valuable resource for advancing research in these areas.

4 Experiment
4.1 Setup
Baselines For thorough comparison, we select several representative multilingual baselines in our experiments, including Bloomz-7b1-mt (BigScience, 2023), Phoenix-inst-chat-7b (Chen et al., 2023), PolyLM-Multialpaca-13b (Wei et al., 2023a), PolyLM-Chat-13b (Wei et al., 2023a), Chimera-inst-chat-13b (Chen et al., 2023), Okapi-7b (Lai et al., 2023), Guanaco-7b (Attardi, 2023), Guanaco-13b (Attardi, 2023), and Aya-101 (Üstün et al., 2024). Okapi-7b is fine-tuned by ourselves based on Llama-2-7b using the Okapi dataset. Besides, the Aya-5-reimplement is fine-tuned in this
Model Backbone SFT Data En Zh Es Ru Fr Avg.
OMGEval (Chat) Guanaco-13b Llama-2-13b Guanaco Dataset 20.8 7.7 13.5 10.8 10.3 12.5 Aya-101 mt5-xxl Aya SFT Dataset 1.4 1.4 2.2 4.3 2.3 2.3 Aya-5-reimplement Llama-2-13b Aya SFT Dataset 1.2 0.4 0.8 1.1 0.1 0.7 UltraLink-LM Llama-2-13b UltraLink 26.8 19.2 20.2 29.4 24.6 24.0 Multilingual HumanEval (Code) Guanaco-13b Llama-2-13b Guanaco Dataset 18.3 15.9 9.8 8.5 14.6 12.2 Aya-101 mt5-xxl Aya SFT Dataset 0.6 0 0 0 0 0.1 Aya-5-reimplement Llama-2-13b Aya SFT Dataset 6.1 9.75 6.1 8.5 4.3 7.0 UltraLink-LM Llama-2-13b UltraLink 60.4 43.9 40.9 49.4 39.6 46.8 MGSM (Math) Guanaco-13b Llama-2-13b Guanaco Dataset 13.6 10.8 11.2 6.4 5.2 8.4 Aya-101 mt5-xxl Aya SFT Dataset 8.8 4.0 6.0 8.0 9.2 7.2 Aya-5-reimplement Llama-2-13b Aya SFT Dataset 28.8 5.6 18.0 17.2 19.2 17.8 UltraLink-LM Llama-2-13b UltraLink 70.4 56.0 70.4 64.8 63.6 63.7 Multilingual MMLU Guanaco-13b Llama-2-13b Guanaco Dataset 50.6 36.6 44.4 38.3 43.8 42.7 Aya-101 mt5-xxl Aya SFT Dataset 8.8 4.0 6.0 8.0 9.2 7.2 Aya-5-reimplement Llama-2-13b Aya SFT Dataset 51.5 38.7 44.9 40.8 45.2 44.2 UltraLink-LM Llama-2-13b UltraLink 54.2 42.7 49.0 44.4 48.3 47.7 Multilingual Hellaswag Guanaco-13b Llama-2-13b Guanaco Dataset 74.5 43.4 60.6 51.8 58.4 57.7 Aya-101 mt5-xxl Aya SFT Dataset 75.5 50.5 62.7 54.7 61.3 60.9 Aya-5-reimplement Llama-2-13b Aya SFT Dataset 76.7 48.9 62.6 53.8 61.1 60.6 UltraLink-LM Llama-2-13b UltraLink 77.5 52.8 64.8 56.1 63.5 62.9 Multilingual ARC Guanaco-13b Llama-2-13b Guanaco Dataset 60.8 39.4 6.5 13.8 17.7 27.6 Aya-101 mt5-xxl Aya SFT Dataset 73.1 51.9 43.3 45.4 55.8 53.9 Aya-5-reimplement Llama-2-13b Aya SFT Dataset 64.0 47.4 22.1 33.3 45.3 42.4 UltraLink-LM Llama-2-13b UltraLink 76.0 50.0 47.4 51.3 58.9 56.7 Table 3: Performance of the involved multilingual SFT LLMs on 6 benchmarks. work based on Llama-2-13b, using the same languages as UltraLink. The training examples are from Aya Dataset and Aya Collection (Singh et al., 2024). The other baselines are downloaded from Hugging Face Hub9. Training details Based on Llama-2-13b (Touvron et al., 2023a), UltraLink-LM is fine-tuned with the constructed UltraLink dataset for 3 epochs. We use the cosine learning rate schedule and the peak learning rate is set to 2e-5. The warm-up ratio is set to 0.04. Each mini-batch contains 128 training examples in total. The maximum sequence length is 4096. We train the model using 32 A100 GPUs for about 140 hours. Aya-5-reimplement uses the same training setting as UltraLink-LM. 9https://huggingface.co Evaluation We examine the model performance in two categories of tasks, Natural Language Generation (NLG) and Natural Language Understanding (NLU). The NLG evaluation consists of three tasks, including chat, math reasoning, and code generation. For chat, we use OMGEval (Liu et al., 2023) for evaluation, which is a multilingual version of the widely-used English benchmark AlpacaEval (Li et al., 2023). OMGEval is not a mere translated version of AlpacaEval. Instead, it localizes the English questions according to the cultural backgrounds of each language. We employ MGSM (Shi et al., 2023) to evaluate math reasoning abilities, which is also a multilingual benchmark. Since there are no existing multilingual test sets for code generation, we use GPT3.5 with carefully designed prompts to translate HumanEval (Chen et al., 2021) into other lan11936 guages, which serves as the multilingual benchmark to evaluate the code abilities of LLMs. In terms of NLU tasks, we use 3 different benchmarks, MMLU, Hellaswag, and ARC. 
We use the Okapi (Lai et al., 2023) version of the multilingual evaluation dataset. We use the UltraEval toolkit10 for model inference and evaluation, which supports a wide range of open-source models. 4.2 Results Table 3 describes the results of Guanaco-13b, Aya101, Aya-5-implement, and UltraLink-LM. The detailed results of all baselines can be found in the Appendix (Table 5 and Table 6). In terms of general chat abilities, our model achieves the best average results, which implies the superiority of the proposed UltraLink dataset. For the code generation task, previous multilingual SFT datasets did not take into account the multilingual code abilities, which we think is very important in many real-world scenarios. Our model achieves a score of 60.4 in the English HumanEval benchmark, surpassing even CodeLlama34b-Python (Rozière et al., 2024). In the math reasoning task, our model consistently outperforms all other baselines across all five languages. The performance of UltraLink-LM in both math and code tasks underscores the effectiveness of our method in enabling multilingual LLMs to acquire general abilities. Within the scope of NLU tasks, UltraLink, though not specifically optimized for short-answer datasets, exhibits an appreciable performance superiority. This outcome illustrates the inherent potential that lies within the dataset used. 5 Related Work Supervised Fine-tuning SFT is now a crucial part of constructing a powerful LLM. SODA (Kim et al., 2023) constructs high-quality social dialogues by contextualizing social commonsense knowledge from a knowledge graph. Using the technique of self-instruct (Wang et al., 2023), Alpaca (Taori et al., 2023) is one of the pioneers to leverage ChatGPT to collect SFT data. UltraChat (Ding et al., 2023) utilizes ChatGPT to generate topics in a tree-style structure for the construction of large-scale dialogues. With these efforts, 10https://github.com/OpenBMB/UltraEval English SFT resources are becoming increasingly rich and effective. Multilingual SFT Datasets To enhance the global utility of LLMs, numerous multilingual SFT datasets have been created. Lai et al. (2023) employ ChatGPT to translate Alpaca into various languages. Chen et al. (2023) combine ShareGPT with Alpaca and then translate the two datasets. Attardi (2023) and Wei et al. (2023a) extend tasks from Alpaca by introducing filters and rewrites of seed tasks in different languages, generating datasets through multiple iterations. 6 Conclusion In this work, we propose a knowledge-grounded data augmentation method and a two-stage translation mechanism to construct language-specific and language-agnostic multilingual SFT data, respectively. Experiments demonstrate that the proposed dataset is effective for multilingual LLMs. 7 Limitations In the paper, our proposed data construction framework is only applied to four language types. Nevertheless, the framework can be easily extended to other languages. We leave it to the future work to include more languages. Moreover, due to constraints imposed by the base model, the multilingual capability still faces several limitations. Notably, the model exhibits significantly better performance in English across many tasks. There is a pressing need to continue constructing high-quality pre-training multilingual datasets, to unlock the full potential of multilingual abilities in LLMs. Acknowledgements We are grateful to the anonymous reviewers for their valuable feedback. 
This work is supported by the National Key R&D Program of China (No.2022ZD0116312), the National Natural Science Foundation of China (No. 62236011), and a grant from the Guoqiang Institute, Tsinghua University. References Yuvanesh Anand, Zach Nussbaum, Adam Treat, Aaron Miller, Richard Guo, Ben Schmidt, GPT4All Community, Brandon Duderstadt, and Andriy Mulyar. 2023. Gpt4all: An ecosystem of open source compressed language models. 11937 Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, Guy GurAri, Steven Hand, Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jeffrey Hui, Jeremy Hurwitz, Michael Isard, Abe Ittycheriah, Matthew Jagielski, Wenhao Jia, Kathleen Kenealy, Maxim Krikun, Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, YaGuang Li, Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu, Frederick Liu, Marcello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam Moussalem, Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Parrish, Marie Pellat, Martin Polacek, Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby, Ambrose Slone, Daniel Smilkov, David R. So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang, Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, and Yonghui Wu. 2023. Palm 2 technical report. Giusepppe Attardi. 2023. Guanaco. https:// guanaco-model.github.io/. BigScience. 2023. Bloom: A 176b-parameter openaccess multilingual language model. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating large language models trained on code. 
Zhihong Chen, Feng Jiang, Junying Chen, Tiannan Wang, Fei Yu, Guiming Chen, Hongbo Zhang, Juhao Liang, Chen Zhang, Zhiyi Zhang, Jianquan Li, Xiang Wan, Benyou Wang, and Haizhou Li. 2023. Phoenix: Democratizing chatgpt across languages. ArXiv, abs/2304.10453. Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Shengding Hu, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. 2023. Enhancing chat language models by scaling high-quality instructional conversations. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7b. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers. Hyunwoo Kim, Jack Hessel, Liwei Jiang, Peter West, Ximing Lu, Youngjae Yu, Pei Zhou, Ronan Bras, Malihe Alikhani, Gunhee Kim, Maarten Sap, and Yejin Choi. 2023. SODA: Million-scale dialogue distillation with social commonsense contextualization. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Viet Lai, Chien Nguyen, Nghia Ngo, Thuat Nguyen, Franck Dernoncourt, Ryan Rossi, and Thien Nguyen. 2023. Okapi: Instruction-tuned large language models in multiple languages with reinforcement learning from human feedback. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Alpacaeval: An automatic evaluator of instruction-following models. https://github.com/tatsu-lab/alpaca_eval. Yang Liu, Lin Zhu, Jingsi Yu, Meng Xu, Yujie Wang, Hongxiang Chang, Jiaxin Yuan, Cunliang Kong, Jiyuan An, Tianlin Yang, Shuo Wang, Zhenghao Liu, Yun Chen, Erhong Yang, Yang Liu, and Maosong Sun. 2023. Omgeval: An open multilingual generative evaluation benchmark for foundation models. https://github.com/blcuicall/OMGEval. OpenAI. 2023. Gpt-4 technical report. Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. 2023. Instruction tuning with gpt-4. 11938 Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, and Gabriel Synnaeve. 2024. Code llama: Open foundation models for code. Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, and Jason Wei. 2023. Language models are multilingual chain-of-thought reasoners. In The Eleventh International Conference on Learning Representations. Shivalika Singh, Freddie Vargus, Daniel Dsouza, Börje F. 
Karlsson, Abinaya Mahendiran, Wei-Yin Ko, Herumb Shandilya, Jay Patel, Deividas Mataciunas, Laura OMahony, Mike Zhang, Ramith Hettiarachchi, Joseph Wilson, Marina Machado, Luisa Souza Moura, Dominik Krzemi´nski, Hakimeh Fadaei, Irem Ergün, Ifeoma Okoh, Aisha Alaagib, Oshan Mudannayake, Zaid Alyafeai, Vu Minh Chien, Sebastian Ruder, Surya Guthikonda, Emad A. Alghamdi, Sebastian Gehrmann, Niklas Muennighoff, Max Bartolo, Julia Kreutzer, Ahmet Üstün, Marzieh Fadaee, and Sara Hooker. 2024. Aya dataset: An open-access collection for multilingual instruction tuning. Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford alpaca: An instruction-following llama model. https:// github.com/tatsu-lab/stanford_alpaca. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023a. Llama: Open and efficient foundation language models. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023b. Llama 2: Open foundation and fine-tuned chat models. Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2023. Self-instruct: Aligning language models with self-generated instructions. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Xiangpeng Wei, Hao-Ran Wei, Huan Lin, Tianhao Li, Pei Zhang, Xingzhang Ren, Mei Li, Yu Wan, Zhiwei Cao, Binbin Xie, Tianxiang Hu, Shangjie Li, Binyuan Hui, Yu Bowen, Dayiheng Liu, Baosong Yang, Fei Huang, and Jun Xie. 2023a. Polylm: An open source polyglot large language model. ArXiv, abs/2307.06018. Yuxiang Wei, Zhe Wang, Jiawei Liu, Yifeng Ding, and Lingming Zhang. 2023b. Magicoder: Source code is all you need. Canwen Xu, Daya Guo, Nan Duan, and Julian McAuley. 2023. Baize: An open-source chat model with parameter-efficient tuning on self-chat data. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. 2023. Metamath: Bootstrap your own mathematical questions for large language models. 
Ahmet Üstün, Viraat Aryabumi, Zheng-Xin Yong, WeiYin Ko, Daniel D’souza, Gbemileke Onilude, Neel Bhandari, Shivalika Singh, Hui-Lee Ooi, Amr Kayid, Freddie Vargus, Phil Blunsom, Shayne Longpre, Niklas Muennighoff, Marzieh Fadaee, Julia Kreutzer, and Sara Hooker. 2024. Aya model: An instruction finetuned open-access multilingual language model. 11939 A Data Mixing Strategy The method of mixing the data of different languages could impact the performance of LLM. After getting the volume of different languages, we test three data mixing strategies, including sequential, mixed, and zh-only strategies, to fine-tune the base model. We choose MiniCPM-2.4B11 as the backbone to reduce training costs. “Sequential” means the model is trained on English data first, and then trained on Chinese data. “Mixed” represents that the English and Chinese data are randomly mixed during the training process. “Zh-only” stands for fine-tuning the model only with Chinese data. The evaluation is conducted on the Chinese HumanEval test set Data Mixing Strategy HumanEval-Zh Sequential 40.8 Mixed 44.9 Zh-only 32.7 Table 4: The results of different data mixing strategies, evaluated on the Chinese code generation task. Table 4 shows the results, indicating that mixing multilingual SFT data can result in the best performance. We thus employ the “Mixed” strategy when training UltraLink-LM. B Evaluation Results This section details the full experimental results of all the baselines on the 6 examined benchmarks. 11https://github.com/OpenBMB/MiniCPM. 11940 Model Backbone SFT Data OMGEval (Chat) En Zh Es Ru Fr Avg. Bloomz-7b1-mt Bloomz-7b1 xP3mt 0.0 0.9 0.1 0.5 0.3 0.4 Phoenix-inst-chat-7b Bloomz-7b1 Phoenix SFT data 6.9 13.3 7.4 2.9 8.1 7.7 PolyLM-Multialpaca-13b PolyLM-13b Multialpaca 3.4 5.0 2.1 5.1 2.2 3.6 PolyLM-Chat-13b PolyLM-13b Closed-source 7.7 14.0 6.1 5.5 4.8 7.6 Chimera-inst-chat-13b Llama-13b Phoenix SFT data 15.5 9.7 11.8 13.7 13.8 12.9 Okapi-7b Llama-2-7b Okapi Dataset 8.8 6.2 5.0 12.1 8.7 8.2 Guanaco-7b Llama-2-7b Guanaco Dataset 4.6 3.8 0.4 1.8 1.2 2.4 Guanaco-13b Llama-2-13b Guanaco Dataset 20.8 7.7 13.5 10.8 10.3 12.5 Aya-5-reimplement Llama-2-13b Aya SFT Dataset 1.2 0.4 0.8 1.1 0.1 0.7 Aya-101 mt5-xxl Aya SFT Dataset 1.4 1.4 2.2 4.3 2.3 2.3 UltraLink-LM Llama-2-13b UltraLink 26.8 19.2 20.2 29.4 24.6 24.0 Model Backbone SFT Data Multilingual HumanEval (Code) En Zh Es Ru Fr Avg. Bloomz-7b1-mt Bloomz-7b1 xP3mt 8.5 7.3 6.1 8.5 6.1 7.3 Phoenix-inst-chat-7b Bloomz-7b1 Phoenix SFT data 11.0 10.4 8.5 1.2 13.4 12.2 PolyLM-Multialpaca-13b PolyLM-13b Multialpaca 8.5 7.3 6.1 6.1 6.1 6.8 PolyLM-Chat-13b PolyLM-13b Closed-source 10.4 7.9 6.1 7.3 8.5 8.1 Chimera-inst-chat-13b Llama-13b Phoenix SFT data 14.6 13.4 14.6 12.8 14.0 13.9 Okapi-7b Llama-2-7b Okapi Dataset 12.2 11.0 8.5 8.5 8.5 9.8 Guanaco-7b Llama-2-7b Guanaco Dataset 9.2 6.7 11.0 9.8 12.8 9.9 Guanaco-13b Llama-2-13b Guanaco Dataset 18.3 15.9 9.8 8.5 14.6 12.2 Aya-5-reimplement Llama-2-13b Aya SFT Dataset 6.1 9.75 6.1 8.5 4.3 7.0 Aya-101 mt5-xxl Aya SFT Dataset 0.6 0 0 0 0 0.1 UltraLink-LM Llama-2-13b UltraLink 60.4 43.9 40.9 49.4 39.6 46.8 Model Backbone SFT Data MGSM (Math) En Zh Es Ru Fr Avg. 
Bloomz-7b1-mt Bloomz-7b1 xP3mt 2.8 1.6 2.0 0.4 2.8 1.7 Phoenix-inst-chat-7b Bloomz-7b1 Phoenix SFT data 3.2 3.2 2.8 3.2 3.2 3.1 PolyLM-Multialpaca-13b PolyLM-13b Multialpaca 1.2 2.8 1.6 2.8 2.4 2.4 PolyLM-Chat-13b PolyLM-13b Closed-source 10.8 6.4 4.8 4.4 5.6 5.3 Chimera-inst-chat-13b Llama-13b Phoenix SFT data 14.0 11.6 10.0 12.0 12.8 11.6 Okapi-7b Llama-2-7b Okapi Dataset 4.0 2.4 3.6 4.4 4.8 3.8 Guanaco-7b Llama-2-7b Guanaco Dataset 4.0 1.6 3.2 2.8 4.4 3.0 Guanaco-13b Llama-2-13b Guanaco Dataset 13.6 10.8 11.2 6.4 5.2 8.4 Aya-5-reimplement Llama-2-13b Aya SFT Dataset 28.8 5.6 18.0 17.2 19.2 17.8 Aya-101 mt5-xxl Aya SFT Dataset 8.8 4.0 6.0 8.0 9.2 7.2 UltraLink-LM Llama-2-13b UltraLink 70.4 56.0 70.4 64.8 63.6 63.7 Table 5: Performance of the involved multilingual SFT LLMs on NLG tasks. 11941 Model Backbone SFT Data Multilingual MMLU En Zh Es Ru Fr Avg. Bloomz-7b1-mt Bloomz-7b1 xP3mt 35.9 33.6 34.7 25.9 35.1 33.0 Phoenix-inst-chat-7b Bloomz-7b1 Phoenix SFT data 38.5 35.6 36.5 25.8 36.9 34.7 PolyLM-Multialpaca-13b PolyLM-13b Multialpaca 26.7 25.6 25.0 24.7 25.5 25.5 PolyLM-Chat-13b PolyLM-13b Closed-source 29.3 28.3 25.8 26.2 27.3 27.4 Chimera-inst-chat-13b Llama-13b Phoenix SFT data 48.1 31.9 40.8 37.2 41.8 40.0 Okapi-7b Llama-2-7b Okapi Dataset 8.8 6.2 5.0 12.1 8.7 8.2 Guanaco-7b Llama-2-7b Guanaco Dataset 28.9 25.0 27.1 26.2 27.4 26.9 Guanaco-13b Llama-2-13b Guanaco Dataset 50.6 36.6 44.4 38.3 43.8 42.7 Aya-5-reimplement Llama-2-13b Aya SFT Dataset 51.5 38.7 44.9 40.8 45.2 44.2 Aya-101 mt5-xxl Aya SFT Dataset 39.9 40.7 41.4 40.0 41.2 40.6 UltraLink-LM Llama-2-13b UltraLink 54.2 42.7 49.0 44.4 48.3 47.7 Model Backbone SFT Data Multilingual Hellaswag En Zh Es Ru Fr Avg. Bloomz-7b1-mt Bloomz-7b1 xP3mt 61.1 47.5 48.6 33.1 46.2 47.3 Phoenix-inst-chat-7b Bloomz-7b1 Phoenix SFT data 56.8 49.1 54.3 32.5 53.2 49.2 PolyLM-Multialpaca-13b PolyLM-13b Multialpaca 66.0 49.8 51.3 46.4 50.7 52.8 PolyLM-Chat-13b PolyLM-13b Closed-source 66.6 48.9 52.1 45.6 51.3 52.9 Chimera-inst-chat-13b Llama-13b Phoenix SFT data 65.8 43.2 52.6 45.9 50.7 51.6 Okapi-7b Llama-2-7b Okapi Dataset 63.7 44.6 51.0 45.9 49.6 50.9 Guanaco-7b Llama-2-7b Guanaco Dataset 65.3 37.1 43.7 35.0 42.4 44.7 Guanaco-13b Llama-2-13b Guanaco Dataset 74.5 43.4 60.6 51.8 58.4 57.7 Aya-5-reimplement Llama-2-13b Aya SFT Dataset 76.7 48.9 62.6 53.8 61.1 60.6 Aya-101 mt5-xxl Aya SFT Dataset 75.5 50.5 62.7 54.7 61.3 60.9 UltraLink-LM Llama-2-13b UltraLink 77.5 52.8 64.8 56.1 63.5 62.9 Model Backbone SFT Data Multilingual ARC En Zh Es Ru Fr Avg. Bloomz-7b1-mt Bloomz-7b1 xP3mt 77.5 57.8 60.6 35.6 60.7 58.4 Phoenix-inst-chat-7b Bloomz-7b1 Phoenix SFT data 70.0 47.2 41.2 30.2 51.4 48.0 PolyLM-Multialpaca-13b PolyLM-13b Multialpaca 31.1 25.5 21.5 28.0 29.0 27.0 PolyLM-Chat-13b PolyLM-13b Closed-source 29.3 12.3 26.5 24.4 27.0 23.9 Chimera-inst-chat-13b Llama-13b Phoenix SFT data 66.2 31.2 45.3 42.3 32.2 43.4 Okapi-7b Llama-2-7b Okapi Dataset 59.8 39.9 38.0 38.8 42.9 43.9 Guanaco-7b Llama-2-7b Guanaco Dataset 36.1 25.6 27.3 25.8 27.6 25.5 Guanaco-13b Llama-2-13b Guanaco Dataset 60.8 39.4 6.5 13.8 17.7 27.6 Aya-5-reimplement Llama-2-13b Aya SFT Dataset 64.0 47.4 22.1 33.3 45.3 42.4 Aya-101 mt5-xxl Aya SFT Dataset 73.1 51.9 43.3 45.4 55.8 53.9 UltraLink-LM Llama-2-13b UltraLink 76.0 50.0 47.4 51.3 58.9 56.7 Table 6: Performance of the involved multilingual SFT LLMs on NLU tasks. 11942
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11943–11954 August 11-16, 2024 ©2024 Association for Computational Linguistics Document-level Claim Extraction and Decontextualisation for Fact-Checking Zhenyun Deng, Michael Schlichtkrull, Andreas Vlachos Department of Computer Science and Technology, University of Cambridge {zd302, mss84, av308}@cam.ac.uk Abstract Selecting which claims to check is a timeconsuming task for human fact-checkers, especially from documents consisting of multiple sentences and containing multiple claims. However, existing claim extraction approaches focus more on identifying and extracting claims from individual sentences, e.g., identifying whether a sentence contains a claim or the exact boundaries of the claim within a sentence. In this paper, we propose a method for documentlevel claim extraction for fact-checking, which aims to extract check-worthy claims from documents and decontextualise them so that they can be understood out of context. Specifically, we first recast claim extraction as extractive summarization in order to identify central sentences from documents, then rewrite them to include necessary context from the originating document through sentence decontextualisation. Evaluation with both automatic metrics and a fact-checking professional shows that our method is able to extract check-worthy claims from documents more accurately than previous work, while also improving evidence retrieval. 1 Introduction Human fact-checkers typically select a claim in the beginning of their day to work on for the rest of it. Claim extraction (CE) is an important part of their work, as the overwhelming volume of claims in circulation means the choice of what to fact-check greatly affects the fact-checkers’ impact (Konstantinovskiy et al., 2021). Automated approaches to this task have been proposed to assist them in selecting check-worthy claims, i.e., claims that the public has an interest in knowing the truth (Hassan et al., 2017a; Guo et al., 2022). Existing CE methods mainly focus on detecting whether a sentence contains a claim (Reddy et al., 2021; Nakov et al., 2021b) or the boundaries of the claim within a sentence (Wührl and Klinger, 2021; Sundriyal et al., 2022). In real-world scenarios though, claims often need to be extracted from documents consisting of multiple sentences and containing multiple claims, not all of which are relevant to the central idea of the document, and verifying all claims manually or even automatically would be inefficient. Moving from sentence-level CE to documentlevel CE is challenging; we illustrate this with the example in Figure 1. Sentences in orange are claims selected by a popular sentence-level CE method, Claimbuster (Hassan et al., 2017b), that are worth checking in principle but do not always relate to the central idea of the document, and multiple sentences with similar claims are selected, which would not all need to be fact-checked (e.g., sentences 1 and 6). Claims extracted for fact-checking are expected to be unambiguous (Lippi and Torroni, 2015; Wührl and Klinger, 2021), which means that they cannot be misinterpreted or misunderstood when they are considered outside the context of the document they were extracted from, consequently allowing them to be fact-checked more easily (Schlichtkrull et al., 2023). Figure 1 shows an example of claim decontextualisation, where the claim “Bird is scrapping thousands of e-scooters in the Middle East ...... 
” requires coreference resolution to be understood out of context, e.g., “Bird” refers to “California scooter sharing start-up Bird”. However, existing CE methods primarily focus on extracting sentence-level claims (i.e., extracting sentences that contain a claim) from the original document (Reddy et al., 2021) and ignore their decontextualisation, resulting in claims that are not unambiguously understood and verified. To address these issues, we propose a novel method for document-level claim extraction and decontexualisation for fact-checking, aiming to extract salient check-worthy claims from documents that can be understood outside the context of the document. Specifically, assuming that salient 11943 Document (CNBC News) [1] Between 8,000 and 10,000 e-scooters are being destroyed in the Middle East by California scooter sharing start-up Bird, according to sources. [2] They belong to Circ, an e-scooter company that was acquired by Bird in January. [3] Bird shut down its entire Middle East operation as a result of Covid-19. [4] Bird is scrapping thousands of e-scooters in the Middle East and shutting down its operations in the majority of the region as a result of the coronavirus pandemic, according to five people familiar with the matter. [5]The e-scooters being scrapped belong to Circ, which was acquired by Bird for an undisclosed sum in January. [6] There are between 8,000 and 10,000 Circ scooters across cities in Qatar, Bahrain and United Arab Emirates, according to one former employee and one company source who asked to be kept anonymous as they’ve signed a confidentiality agreement. ……. [7] But there have been questions about the longevity of their vehicles, with reports suggesting some Bird e-scooters have a life span of just a few months. [8] Last week, it emerged that Uber is scrapping thousands of e-bikes and e-scooters worth millions of dollars after selling its Jump unit to mobility start-up Lime. ……. Document-level Claim Extraction Sentence-level: 8, 6, 1 Document-level: 4, 5, 7 Gold Claim(Fact-checking Organization, Misbar): Bird e-scooters are shutting down service in the Middle East, and scrapping as many as 10,000 scooters. Claim extracted by decontextualising the 4th sentence: California scooter sharing start-up Bird is scrapping thousands of e-scooters in the Middle East and shutting down its operations in the majority of the region as a result of the coronavirus pandemic, according to five people familiar with the matter. https://web.archive.org/web/20210722180850/https://misbar.com/en/factcheck/2020/06/18/arebird-e-scooters-leaving-the-middle-east Figure 1: An example of document-level claim extraction. Document1 is a piece of news from CNBC. Gold Claim2 is annotated by the fact-checking organization, Misbar. Sentences in orange denote check-worthy claims extracted by sentence-level CE (Claimbuster). Sentences in blue denote salient claims extracted by our document-level CE. The claim in green is a decontextualised claim derived from the 4th sentence obtained by our document-level CE. claims are derived from central sentences, i) we recast the document-level CE task into the extractive summarization task to extract central sentences and reduce redundancy; ii) we decontextualise central sentences to be understandable out of context by enriching them with the necessary context; iii) we introduce a QA-based framework to obtain the necessary context by resolving ambiguous information units in the extracted sentence. 
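Read procedurally, steps (i)-(iii) form a simple pipeline. The outline below is purely structural, with the individual components passed in as callables; the concrete models used for each step are described in Section 3.

```python
from typing import Callable, List

def extract_document_level_claims(
    document: List[str],                                          # document as a list of sentences
    select_central_sentences: Callable[[List[str]], List[str]],   # step (i): extractive summarization
    generate_context: Callable[[List[str], str], List[str]],      # step (iii): QA-based context generation
    decontextualise: Callable[[str, List[str]], str],             # step (ii): rewrite with the context
) -> List[str]:
    """Structural outline of the proposed pipeline: select central sentences,
    build the necessary context for each, and rewrite them to stand alone."""
    claims = []
    for sentence in select_central_sentences(document):
        context = generate_context(document, sentence)
        claims.append(decontextualise(sentence, context))
    return claims
```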
To evaluate our method we derive a CE dataset3 containing decontextualised claims from AVeriTeC (Schlichtkrull et al., 2023), a recently proposed benchmark for real-world claim extraction and verification. Our method achieves a Precision@1 score of 47.8 on identifying central sentences, a 10% improvement over Claimbuster. This was verified further by a fact-checking professional, as the sentences returned by our method were deemed central to the document more often, and check-worthy more often than those extracted by Claimbuster. Additionally, our method achieved a character-level F score (chrF) (Popovi´c, 2015) of 26.4 against gold decontextualised claims, outperforming all baselines. When evaluated for evidence retrieval potential, the decontextualised claims obtained by enriching original sentences with the necessary context, are better than the original claim sentences, with an average 1.08 improvement in precision. 1https://www.cnbc.com/2020/06/03/ bird-circ-scooters-middle-east.html 2https://misbar.com/en/factcheck/2020/06/18/ are-bird-e-scooters-leaving-the-middle-east 3https://github.com/Tswings/AVeriTeC-DCE 2 Related Work Claim Extraction Claim extraction is typically framed either as a classification task or claim boundary identification task. The former framing focuses on detecting whether a given sentence contains a check-worthy claim. Claimbuster (Hassan et al., 2017b), the most popular method in this paradigm, computes the score of how important a sentence is to be fact-checked. More similar to our work are studies that formulate the task of checkworthy claim detection as a sentence ranking task. For example, Zhou et al. (2021) present a sentencelevel classifier by combining a fine-tuned hatespeech model with one dropout layer and one classification layer to rank sentences. However, these methods were not able to handle the challenges of document-level claim extraction, e.g., avoid redundant claim sentences. The framing of claim extraction as boundary identification focuses on detecting the exact claim boundary within the sentence. Nakov et al. (2021a) propose a BERT-based model to perform claim detection (Levy et al., 2014) by identifying the boundaries of the claim within the sentence. Sundriyal et al. (2022) tackle claim span identification as a token classification task for identifying argument units of claims in the given text. Unlike the above methods, where the claims are extracted from given sentences, our work aims to extract salient checkworthy claims from documents, thus addressing the limitations of sentence-level methods in extracting salient claim sentences and avoiding redundancy. 11944 Sentence Extraction Sentence Decontextualisation Claim Sentence QA-based Context Generation Check Worthiness Estimation Document High-quality Context Document Candidate Central Sentence Bird is scrapping thousands of e-scooters in the Middle East and shutting down its operations in the majority of the region as a result of the coronavirus pandemic, according to five people familiar with the matter. California scooter sharing start-up Bird is scrapping thousands of e-scooters in the Middle East and shutting down its operations in the majority of the region as a result of the coronavirus pandemic, according to five people familiar with the matter. 
Decontextualised Sentence California scooter sharing start-up Bird is scrapping thousands of e-scooters in the Middle East and shutting down its operations in the majority of the region as a result of the coronavirus pandemic, according to five people familiar with the matter. Document-level claim Step 1: Question Generation Q1: Which company is scrapping thousands of e-scooters in the Middle East? Q2: Bird is scrapping how many e-scooters in the Middle East? Q3: Why is Bird shutting down its operation in the majority of the region? …… Step 2: Question Answering A1: California scooter sharing start-up Bird. A2: thousands. A3: coronavirus pandemic. …… Step3: QA-to-Context S1: California scooter sharing start-up Bird is scrapping thousands of e-scooters in the Middle East. S2: Bird is scrapping thousands of e-scooters in the Middle East. S3: Bird is shutting down its operation in the majority of the region because of the coronavirus pandemic. …… Figure 2: An overview of our document-level claim extraction framework. Given an input document, we first use extractive summarization to rank all sentences and select summary sentences as central sentences. Then, we describe a QA-based framework to generate a specific high-quality context for important information units in the sentence. Next, we use a seq2seq generation model to decontextualise sentences by enriching them with their corresponding context. Finally, a claim check-worthiness classifier is used to select salient check-worthy claim sentences based on the score that reflects the degree to which sentences belong to the check-worthy claim. Decontextualisation Choi et al. (2021) propose two different methods for decontextualisation, based on either a coreference resolution model or a seq2seq generation model. Both methods use the sentences in the paragraph containing the target sentence as context to rewrite it. Newman et al. (2023) utilize an LLM to generate QA pairs for each sentence by designing specific prompts, and then use an LLM with these QA pairs to rewrite each sentence. Sundriyal et al. (2023) propose to combine chain-of-thought and in-context learning for claim normalization. Unlike the above methods, we generate declarative sentences for potentially ambiguous information units in the target sentence based on the whole document, and combine them into context to rewrite the target sentence. 3 Method As illustrated in Figure 2, our proposed documentlevel claim extraction framework consists of four components: i) Sentence extraction (§3.1); extracts the sentences related to the central idea of the document as candidate claim sentences; ii) Context generation (§3.2), extracts context from the document for each candidate sentence; iii) Sentence decontextualisation (§3.3), rewrites each sentence with its corresponding context to be understandable out of context; iv) Check-worthiness estimation (§3.4), selects the final check-worthy claims from candidate decontextualised sentences. 3.1 Sentence Extraction The claims selected by human fact-checkers are typically related to the central idea of the document considered. Thus we propose to model sentence extraction as extractive summarization. For this purpose, we concatenate all the sentences in the document into an input sequence, which is then fed to BertSum (Liu and Lapata, 2019), a document-level extractive summarization method trained on the CNN/DailyMail dataset. 
Specifically, given a document consisting of n sentences D = {s1, s2, ..., sn}, we first formulate the input sequence C as “[CLS] s1 [SEP] [CLS] s2 [SEP] ... [CLS] sn[SEP]”, where [CLS] and [SEP] denote the start and end token for each sentence, respectively, and then feed them into a pre-trained encoder BERT to obtain the sentence representation s. Finally, a linear layer on sentence representations 11945 S = {s1, ..., si, ..., sn} is used to score sentences. S = BERT(C) scorei = σ(Wsi + b0) (1) where σ is a sigmoid function, si denotes the representation of the i-th [CLS] token, i.e., the representation of the i-th sentence, and scorei denotes the score of the i-th sentence. All sentences are constructed into an ordered set S = {s′ 1, ..., s′ i, ..., s′ n} according to their scores. Since all sentences are ranked by sentence-level scoring, some top-scoring sentences may have the same or similar meaning. To avoid redundancy, we add an entailment model DocNLI (Yin et al., 2021), a more generalizable model trained on five datasets from different benchmarks, on top of the output of BertSum to remove redundant sentences by calculating the entailment scores between sentences, e.g., we first remove the sentences that have an entailment relationship with the top-1 sentence in S, and then repeat this process for the remaining top-2/3/... sentence until we extract k central sentences. Following previous work (Liu and Lapata, 2019), we only select the top-k sentences with the highest scores in Equation 1 as candidate central sentences. S′ = DocNLI(S) (2) where S′ = {s′ 1, s′ 2, ..., s′ k} is a set of central sentences that do not contain the same meaning. 3.2 Context Generation After sentence extraction, the next step is to clarify the (possibly) ambiguous sentences in S′ by rewriting them with their necessary context. Unlike Choi et al. (2021) where the context consists of a sequence of sentences in the paragraph containing the ambiguous sentence, we need to consider the whole document, i.e., sentences from different paragraphs. We propose a QA-based context generation framework to produce a specific context for each ambiguous sentence, which contains three components: i) Question Generation: extracts potentially ambiguous information units from the sentence and generates questions with them as answers; ii) Question Answering: finds more information about ambiguous information units by answering generated questions with the whole document; iii) QAto-Context Generation: converts question-answer pairs into declarative sentences and combines them into context specific to the sentence. In the following subsections, we describe each component in detail. Question Generation. To identify ambiguous information units in candidate central sentences, we first use Spacy4 to extract named entities, pronouns, nouns, noun phrases, verbs and verb phrases in the sentence i as the potentially ambiguous information units Ui = {u1 i , u2 i , ..., uj i, ..., um i }, i ∈[1, 2, ..., k], where uj i denotes the j-th information unit of the i-th candidate sentence s′ i. Once the set of information units for a sentence Ui is identified, we then generate a question for each of them. Specifically, we concatenate uj i and s′ i in which uj i is located as the input sequence and feed it into QG (Murakhovs’ka et al., 2022), a question generator model trained on nine question generation datasets with different types of answers, to produce the question qj i with uj i as the answer. 
Qi = {qj i }m j=1 = {QG(s′ i, uj i)}m j=1 (3) where Qi denotes the set of questions corresponding to Ui in the sentence s′ i. Question Answering. After question generation, our next step is to clarify ambiguous information units by answering corresponding questions with the document D. Specifically, following Schlichtkrull et al. (2023), we first use BM25 (Robertson et al., 2009) to retrieve evidence E related to the question qj i from D, and then answer qj i with E using an existing QA model (Khashabi et al., 2022) trained on twenty datasets that can answer different types of questions. E = BM25(D, qj i ) aj i = QA(E, qj i ) (4) where aj i denotes a more complete information unit corresponding to ui j, e.g., a complete coreference. We denote all question-answer pairs of the i-th sentence as Pi = {(q1 i , a1 i ), (q2 i , a2 i ), ..., (qm i , am i )}. QA-to-Context Generation. After question answering, we utilize a seq2seq generation model to convert QA pairs Pi into the corresponding context C′ i. Specifically, we first concatenate the question qj i and the answer aj i as the input sequence, and then output a sentence using the BART model (Lewis et al., 2019) finetuned on QA2D (Demszky et al., 2018). QA2D is a dataset with over 500k 4https://spacy.io 11946 Data #sample med.sent avg.sent len.claim len.document Train 830 9 1.09 17 274 Dev 149 6 1.01 16 120 Test 252 5 1.05 16 63 All 1231 7 1.07 17 17 Table 1: Descriptive statistics for AVeriTeC-DCE. #sample refers to the number of samples in AVeriTeC available for claim extraction, i.e., the total number of accessible source url, med.sent refers to the median number of sentences in documents. avg.sent refers to the average number of sentences in claims, len.claim refers to the median length of claims in words, len.document refers to the median length of documents in words. NLI examples that contains various inference phenomena rarely seen in previous NLI datasets. More formally, ˜sj i = BART(qj i , aj i) (5) where ˜sj i is a declarative sentence corresponding to the information unit uj i. Finally, all generated sentences are combined into high-quality context C′ i = {˜s1 i , ˜s2 i , ..., ˜sm i } corresponding to the information units Ui in sentence s′ i, which is then used in the next decontextualisation step to enrich the ambiguous sentences. 3.3 Sentence Decontextualisation Sentence decontextualisation aims to rewrite sentences to be understandable out of context, while retaining their original meaning. To do this, we use a seq2seq generation model T5 (Raffel et al., 2020) to enrich the target sentence with the context generated in the previous step for it. Specifically, we first formulate the input sequence as “[CLS] ˜s1 i [SEP] ˜s2 i ...... [SEP] ˜sm i [SEP] s′ i”, where s′ i denotes the potential ambiguous sentence and [SEP] is a separator token between the context sentences generated. We then feed the input sequence to D (Choi et al., 2021), a decontextualisation model was trained on the dataset annotated by native speakers of English in the U.S. that handles various linguistic phenomena, to rewrite the sentence. Similarly, we set the output sequence to be [CAT] [SEP] y. yi =      D(s′ i, C′ i), if CAT = feasible s′ i , if CAT = infeasible s′ i , if CAT = unnecessary (6) where CAT = feasible or infeasible denotes that s′ i can or cannot be decontextualised, CAT = unnecessary denotes that s′ i can be understood without being rewritten, yi denotes the i-th decontextualised sentence. 
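A minimal sketch of the QA-based context generation and decontextualisation steps (Sections 3.2-3.3) is given below. spaCy and a BM25 implementation are used as in the paper; the four model handles (question generator, question answerer, QA-to-declarative converter, and decontextualiser) are stand-ins for MixQG, UnifiedQA, the BART model fine-tuned on QA2D, and the Choi et al. (2021) decontextualiser, and the exact special-token formatting of each model's input is simplified here.

```python
from typing import Callable, List
import spacy
from rank_bm25 import BM25Okapi   # generic BM25 implementation; the paper only specifies "BM25"

nlp = spacy.load("en_core_web_sm")

def information_units(sentence: str) -> List[str]:
    """Potentially ambiguous units (Sec. 3.2): named entities, noun chunks, pronouns, nouns, verbs.
    This is a simplified approximation of the spaCy-based extraction described in the paper."""
    doc = nlp(sentence)
    units = [ent.text for ent in doc.ents] + [chunk.text for chunk in doc.noun_chunks]
    units += [tok.text for tok in doc if tok.pos_ in {"PRON", "NOUN", "VERB"}]
    return list(dict.fromkeys(units))   # deduplicate, keep order

def generate_context(
    document_sentences: List[str],
    target_sentence: str,
    question_generator: Callable[[str, str], str],   # stand-in for MixQG: (sentence, answer) -> question
    question_answerer: Callable[[str, str], str],    # stand-in for UnifiedQA: (evidence, question) -> answer
    qa_to_declarative: Callable[[str, str], str],    # stand-in for BART fine-tuned on QA2D
    top_k_evidence: int = 3,
) -> List[str]:
    """Eqs. (3)-(5): build the declarative context C' for one candidate central sentence."""
    bm25 = BM25Okapi([s.lower().split() for s in document_sentences])
    context = []
    for unit in information_units(target_sentence):
        question = question_generator(target_sentence, unit)                       # Eq. (3)
        evidence = " ".join(
            bm25.get_top_n(question.lower().split(), document_sentences, n=top_k_evidence)
        )
        answer = question_answerer(evidence, question)                             # Eq. (4)
        context.append(qa_to_declarative(question, answer))                        # Eq. (5)
    return context

def decontextualise(
    target_sentence: str,
    context: List[str],
    decontextualiser: Callable[[str], str],  # stand-in for the Choi et al. (2021) seq2seq model
) -> str:
    """Eq. (6): feed context plus sentence to the model, which outputs 'CAT [SEP] y'."""
    model_input = " [SEP] ".join(context + [target_sentence])
    output = decontextualiser(model_input)
    category, _, rewritten = output.partition(" [SEP] ")
    # Only the "feasible" case yields a rewritten sentence; otherwise keep the original.
    return rewritten if category.strip() == "feasible" else target_sentence
```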
3.4 Check-Worthiness Estimation Unlike existing CE methods that determine whether a sentence is worth checking without considering the context, we estimate the check-worthiness of a sentence after decontextualisation because some sentences may be transformed from not checkworthy into check-worthy ones in this process. Specifically, we use a DeBERTa model trained on the ClaimBuster dataset (Arslan et al., 2020) to classify sentences into three categories: Check-worthy Factual Sentence (CFS), Unimportant Factual Sentence (UFS) and Non-Factual Sentence (NFS). Formally, score(yi) = DeBERTa(class = CFS | yi) claim = argmax{score(yi)}k i=1 (7) where score(yi) reflects the degree to which the decontextualised sentence yi belongs to CFS, and claim denotes that the final salient check-worthy claim that can be understood out of context. 4 Dataset We convert AVeriTeC, a recently proposed dataset for real-world claim extraction and verification (Schlichtkrull et al., 2023), into AVeriTeC-DCE, a dataset for the document-level CE task. AVeriTeC is collected from 50 different fact-checking organizations and contains 4568 real-world claims. Each claim is associated with attributes such as its type, source and date. In this work, we mainly focus on the following attributes relevant to claim extraction: i) claim, the claim as extracted by the fact-checkers and decontextualised by annotators, and ii) source url: the URL linking to the original web article of the claim. The task of this work is to extract the salient check-worthy claims from the source url. We also consider whether claims need to be decontextualised when extracting them from documents, as this will directly affect the subsequent evidence retrieval and claim verification. 11947 To extract claim-document pairs from AVeriTeC that can be used for document-level CE, we perform the following filtering steps: 1) Since we focus on the extraction of textual claims, we do not include source urls containing images, video or audio; 2) To extract the sentences containing the claims from source urls, we build a web scraper to extract text data in the source url as the document. We found that the attribute source url is not always available in samples, thus we only select those samples where the web scraper can return text data from source urls. We obtain a dataset AVeriTeC-DCE, containing 1231 available samples, for document-level CE. We do not divide the dataset into train, dev and test sets, as all models we rely on are pre-trained models (e.g., BertSum) and approaches that do not require training (e.g., BM25). Statistics for AVeriTeC-DCE are described in Table 1. 5 Experiments Our approach consists of four components: sentence extraction, context generation, sentence decontextualisation and check-worthiness estimation. As such, we conduct separate experiments to evaluate them, as well as an overall evaluation for document-level CE. 5.1 Sentence Extraction We compare our sentence extraction method, the combination of BertSum and DocNLI stated in Section 3.1, against other baselines through automatic evaluation and human evaluation. 
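Before turning to the component-wise experiments, the check-worthiness estimation step of Section 3.4 reduces to scoring each candidate and taking the arg-max of Eq. (7). A minimal sketch, assuming a classifier handle that stands in for the DeBERTa model trained on the ClaimBuster dataset and returns per-class probabilities:

```python
from typing import Callable, Dict, List

def select_claim(
    decontextualised_sentences: List[str],
    checkworthiness_classifier: Callable[[str], Dict[str, float]],  # e.g. {"CFS": 0.8, "UFS": 0.1, "NFS": 0.1}
) -> str:
    """Eq. (7): score each candidate by its CFS probability and return the highest-scoring one."""
    scores = [checkworthiness_classifier(s)["CFS"] for s in decontextualised_sentences]
    best = max(range(len(scores)), key=scores.__getitem__)
    return decontextualised_sentences[best]
```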
Baselines 1) Lead sentence: the lead (first) sentence of most documents is considered to be the most salient, especially in news articles (Narayan et al., 2018); 2) Claimbuster (Hassan et al., 2017b): we use this well-established method to compute the check-worthiness score of each sentence and we rank sentences based on their scores; 3) LSA (Gong and Liu, 2001): a common method of identifying central sentences of the document using the latent semantic analysis technique; 4) TextRank (Mihalcea and Tarau, 2004): a graph-based ranking method for identifying important sentences in the document; 5) BertSum (Liu and Lapata, 2019): a BERT-based document-level extractive summarization method for ranking sentences. Automatic Evaluation Since the central sentences of documents are not given in AVeriTeC, Method P@1 P@3 P@5 P@10 Claimbuster 37.8 59.1 65.7 71.4 Lead Sentence 42.3 LSA 38.4 55.3 62.1 70.4 TextRank 42.7 60.6 65.1 71.2 BertSum 43.4 61.6 67.5 72.3 Ours 47.8 63.1 68.6 73.8 Table 2: Results with different sentence extraction methods. P@k denotes the probability that the first k sentences in the ranked sentences contain the central sentence. Claimbuster Ours IsCheckWorthy 0.36 0.44 IsCentralClaim 0.24 0.68 Table 3: Human Evaluation of sentence extraction on two different dimensions. we cannot evaluate the extracted central sentences by exact matching. Thus, we instead rely on the sentence that has the highest chrF (Popovi´c, 2015) with the claim, as the claim is the central claim annotated by human fact-checkers. We use Precision@k as the evaluation metric, which denotes the probability that the first k sentences in the extracted sentences contain the central sentence. Table 2 shows the results of different sentence extraction methods. We can see that our method outperforms all baselines in identifying the central sentence, achieving a P@1/P@3/P@5/P@10 score of 47.8/63.1/68.6/73.8, which indicates that the combination of the extractive summarization (BertSum) and entailment model (DocNLI) can better capture the central sentences and avoid redundant ones. We found that the common extractive summarization methods (e.g., Lead Sentence, TextRank and BertSum) are better than Claimbuster on P@1, confirming what we had stated in the introduction, that sentence-level CE methods have limitations when they are applied at the document-level CE. Moreover, we observe that the lead sentence achieves a P@1 score of 42.3, indicating that there is a correlation between the sentences selected for factchecking and the lead sentence that often served as the summary. We list the source URLs of the samples for claim extraction in Appendix A1. Human Evaluation To further compare sentences extracted by our method and Claimbuster, 11948 Coreference Resolution Sentence: He has pubicly stated that he sympathized with their cause and even hinted that he would provide them with American resources should they be in need during his 2008 State of the Union address. Decontextualised sentence: President Obama has pubicly stated that he sympathized with their cause and even hinted that he would provide them with American resources should they be in need during his 2008 State of the Union address. Global Scoping Sentence: During the attack, Capitol Police made the request again. Decontextualised sentence: During the attack on Washington D.C., Capitol Police made the request again. Sentence: The government does not have proper storage facilities for stocking such a large amount of excess grain. 
Decontextualised sentence: The government of India does not have proper storage facilities for stocking such a large amount of excess grain. Bridge Anaphora Figure 3: Case studies of sentence decontextualisation solving linguistic problems, such as coreference resolution, global scoping and bridge anaphora. we asked a fact-checking professional to evaluate the quality of extracted sentences on the following two dimensions: 1) IsCheckWorthy: is the sentence worth checking? 2) IsCentralClaim: is the sentence related to the central idea of the article? We randomly select 50 samples, each containing at least 5 sentences. For simplicity, we only select the top-1 sentence returned by each method for comparison. As shown in Table 3, we observed that 68% of the central sentences extracted by our method are related to the central idea of the document compared to 24% of Claimbuster, which further supports our conclusion obtained by automatic evaluation, i.e., the sentences extracted by our method were more often central to the document, and more often check-worthy than those that extracted by Claimbuster. This indicates that when identifying salient check-worthy claims from documents, it is not enough to consider whether a sentence is worth checking at the sentence level, but also whether the sentence is related to the central idea of the document. Thus, we believe that the claims related to the central idea of the document are the ones that the public is more interested in knowing the truth. 5.2 Decontextualisation To evaluate the effectiveness of decontextualisation on evidence retrieval, for a fair comparison, we select the sentence that has the highest chrF with the claim as the best sentence as we considered in Section 5.1, and conduct a comparison between it and its corresponding decontextualised sentence. The evidence set used for evaluation is retrieved from the Internet using the Google Search API given a claim, each containing gold evidence and additional distractors (Schlichtkrull et al., 2023). We use Precision@k as the evaluation metric. Baselines 1) Coreference model: decontextualisation by replacing unresolved coreferences in the target sentence, e.g., (Joshi et al., 2020); 2) Seq2seq model: decontextualisation by rewriting the target sentence with necessary context (Choi et al., 2021). Retrieval-based Evaluation For a given claim different decontextualisations could be considered correct, thus comparing against the single reference in AVeriTeC would be suboptimal. Thus we prefer to conduct the retrieval-based evaluation, assuming that better decontextualisation improves evidence retrieval, as it should provide useful context for fact-checking. Following previous work (Choi et al., 2021), we compare our QA-based decontextualisation method against other baselines through retrieval-based evaluation. We use BM25 as the retriever to find evidence with different sentences as the query. Table 4 shows the results for evidence retrieval with different decontextualised sentences on the dev set of AVeriTeC-DCE (AVeriTeC only publicly released the train and dev sets). We found that our method outperforms all baselines, and improves the P@3/P@5/P@10 score over the original sentence by 1.42/0.82/0.99, achieving an average 1.08 improvement in precision. 
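In outline, the retrieval-based comparison above can be reproduced as follows. This sketch assumes each claim comes with a pool of candidate passages in which the gold evidence is marked; it mirrors, but does not exactly replicate, the AVeriTeC evidence setup, and it uses a generic BM25 implementation.

```python
from typing import List, Tuple
from rank_bm25 import BM25Okapi

def precision_at_k(query: str, evidence_pool: List[Tuple[str, bool]], k: int) -> float:
    """Retrieve the top-k passages with BM25 and compute the fraction that are gold evidence."""
    passages = [text for text, _ in evidence_pool]
    is_gold = {text: gold for text, gold in evidence_pool}
    bm25 = BM25Okapi([p.lower().split() for p in passages])
    top_k = bm25.get_top_n(query.lower().split(), passages, n=k)
    return sum(is_gold[p] for p in top_k) / k

def compare_queries(original: str, decontextualised: str,
                    evidence_pool: List[Tuple[str, bool]], k: int = 5) -> Tuple[float, float]:
    """Compare the original sentence and its decontextualised version as retrieval queries."""
    return (precision_at_k(original, evidence_pool, k),
            precision_at_k(decontextualised, evidence_pool, k))
```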
After further analysis, we found that only 21/149 sentences are decontextualised by our method, and 17/21 of these sentences obtain better evidence retrieval, with an average improvement of 1.21 in precision over the original sentences, proving that decontextualisation enables evidence retrieval more effectively. Case Study Figure 3 illustrates three case studies of sentence decontextualisation. The first case is an example that requires coreference resolution. To make the sentence understandable out of context, these words (e.g., “He”, “their”) need to be rewritten with the context. After decontextualisa11949 Method P@3 P@5 P@10 Sentence 35.45 44.72 61.31 Coreference 36.02 44.98 61.79 Seq2seq(Context) 36.17 45.04 61.99 Oursseq2seq(Context⋆) 36.87 45.54 62.30 Table 4: Results for evidence retrieval with different decontextualised sentences. Context consists of a sequence of sentences in the paragraph containing the target sentence. Context⋆consists of declarative sentences generated by our context generation module. tion, we can see that “He” is rewritten to “President Obama”, which helps us understand the sentence better without context. As for “their”, we cannot decontextualise it because there is no information about this word in the document. This supports the claim that providing a high-quality context is necessary for better decontextualisation. The second case is an example that requires global scoping, which requires adding a phrase (e.g., prepositional phrase) to the entire sentence to make it better understood. In this case, we add “on Washington D.C.” as a modifier to “During the attack” to help us understand where the attack took place. The third case is an example that requires a bridge anaphora, where the phrase noun “The government” becomes clear by adding a modifier “India”. In summary, decontextualising the claim is helpful for humans to better understand the claim without context. 5.3 Document-level Claim Extraction In this section, we put four components together to conduct an overall evaluation for document-level CE, e.g., extracting the claim in green from the document in Figure 1. We first select top-3 sentences returned by different sentence extraction methods as candidate central sentences, and then feed them into our decontextualisation model to obtain decontextualised claim sentences, finally use a claim check-worthiness classifier to select the final claim. We evaluate performance by calculating the similarity between our final decontextualised claim sentence and the claim decontextualised by factcheckers. We use the chrF as the evaluation metric. The chrF computes the similarity between texts using the character n-gram F-score. Other metrics are reported in Appendix A2. Statistics for decontextualisation are described in Table 5, we observe that 122 out of 1231 (10%) sentences can be decontextualised (feasible); 122 Data #claim #fea. #infea. #unnec. All 1231 122 122 987 Table 5: Statistic of decontextualisation. #fea./infea. denotes the number of sentences that can/cannot be decontextualised, #unnec. denotes the number of sentences that can be understood without context. Method chrF Sentence⋆ Dec. Sentence⋆ Claimbuster 24.3 24.5 Lead Sentence 23.8 LSA 24.1 24.3 TextRank 24.5 25.4 BertSum 25.6 25.9 Ours 25.9 26.4 Table 6: Results of Document-level CE. Sentence⋆denotes the best sentence returned by different sentence extraction methods. Dec. Sentence⋆denotes the decontextualised Sentence⋆. 
out of 1231 (10%) sentences cannot be decontextualised (infeasible); and 987 out of 1231 (80%) sentences can be understood without being decontextualised (unnecessary), including the lead sentence. Since the lead sentence of the document is often considered to be the most salient, we do not decontextualise the lead sentence. In Table 6, we show the results of original sentences and decontextualised sentences for claim extraction, and our method achieves a chrF of 26.4 on gold claims decontextualised by the fact-checkers, outperforming all baselines. We observe that the performance of claim extraction and sentence extraction is positively correlated, i.e., the closer the extracted sentence is to the central sentence, the more similar the extracted claim is to the claim, which supports our assumption that salient claims are derived from central sentences. For this reason, the performance of our document-level CE is limited by the performance of sentence extraction, i.e., if our sentence extraction method cannot find the gold central sentence, decontextualisation may not improve the performance of CE, and may even lead to a decrease in the performance of CE due to noise caused by decontextualisation. Moreover, empirically, we found that central sentences led to improved overall performance. This might be a consequence of the dataset – social media sites such as Twitter or Facebook are common 11950 sources of claims in AVeriTeC (along with more traditional news), and a Twitter or Facebook thread often contains only a few key points. AVeriTeC was collected by reverse engineering claims which fact-checkers from around the world chose to work on. As such, the distribution of source articles represents what journalists found to be check-worthy and chose to work on. Our work as such reflects the contexts wherein real-world misinformation appears (but indeed, may have a bias towards what works well in those contexts). To further verify the effectiveness of our method, we conduct a comparison on the document-level CE dataset (CLEF-2021, subtask 1B (Shaar et al., 2021)) using our method and Claimbuster. Table 7 shows the results of two different methods for identifying check-worthy claims on the dev set of subtask 1B. We observe that our method outperforms Claimbuster on P@1/3/5/10, indicating that our document-level CE method can better identify check-worthy claims than Claimbuster (sentencelevel CE). This supports our conclusion that the document-level check-worthy claims extracted by our method are the claims that the public is more interested in knowing the truth. Method P@1 P@3 P@5 P@10 Claimbuster 0.111 0.074 0.156 0.089 Ours 0.222 0.185 0.200 0.144 Table 7: Results of different methods for identifying check-worthy claims on the dev set of subtask 1B. 6 Conclusions and Future work This paper presented a document-level claim extraction framework for fact-checking, aiming to extract salient check-worthy claims from documents that can be understood out of context. To extract salient claims from documents, we recast the claim extraction task as the extractive summarization task to select candidate claim sentences. To make sentences understandable out of context, we introduce a QA-based decontextualisation model to enrich them with the necessary context. The experimental results show the superiority of our method over previous methods, including document-level claim extraction and evidence retrieval, as indicated by human evaluation and automatic evaluation. 
In future work, we plan to extend our document-level claim extraction method to extract salient checkworthy claims from multimodal web articles. Acknowledgements This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 Research and Innovation programme grant AVeriTeC (Grant agreement No. 865958). We thank David Corney from Full Fact for his help with evaluating the output of our models, and Yulong Chen for his helpful comments and discussions. We would also like to thank the anonymous reviewers for their helpful questions and comments that helped us improve the paper. Limitations While our method has demonstrated superiority in extracting salient check-worthy claims and improving evidence retrieval, we recognize that our method is not able to decontextualise all ambiguous sentences, particularly those that lack the necessary context in the source url. Also, human fact-checkers have different missions, thus checkworthiness claims to one fact-checking organization may not be check-worthiness to another organization (i.e., some organizations check parody claims or claims from satire websites, while others do not). Furthermore, since the documents we use are extracted from the source url, a powerful web scraper is required when pulling documents from source urls. Moreover, our method assumes salient claims are derived from central sentences. Although this assumption is true in most cases, it may be inconsistent with central claims collected by human fact-checkers. Besides, we use the chrF metric to calculate the similarity between claims extracted from the source url and the gold claim, while gold claims are decontextualised by the factcheckers with fact-checking articles and may contain information that is not in the original article, thus the metrics used to evaluate document-level claims are worth further exploring. Ethics Statement We rely on fact-checks from real-world factcheckers to develop and evaluate our models. Nevertheless, as any dataset, it is possible that it contains biases which influenced the development of our approach. Given the societal importance of factchecking, we advise that any automated system is employed with human oversight to ensure that the fact-checkers fact-check appropriate claims. 11951 References Fatma Arslan, Naeemul Hassan, Chengkai Li, and Mark Tremayne. 2020. A benchmark dataset of checkworthy factual claims. In Proceedings of the International AAAI Conference on Web and Social Media, volume 14, pages 821–829. Eunsol Choi, Jennimaria Palomaki, Matthew Lamm, Tom Kwiatkowski, Dipanjan Das, and Michael Collins. 2021. Decontextualization: Making sentences stand-alone. Transactions of the Association for Computational Linguistics, 9:447–461. Dorottya Demszky, Kelvin Guu, and Percy Liang. 2018. Transforming question answering datasets into natural language inference datasets. arXiv preprint arXiv:1809.02922. Yihong Gong and Xin Liu. 2001. Generic text summarization using relevance measure and latent semantic analysis. In Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval, pages 19–25. Zhijiang Guo, Michael Schlichtkrull, and Andreas Vlachos. 2022. A survey on automated fact-checking. Transactions of the Association for Computational Linguistics, 10:178–206. Naeemul Hassan, Fatma Arslan, Chengkai Li, and Mark Tremayne. 2017a. Toward automated fact-checking: Detecting check-worthy factual claims by claimbuster. 
In Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1803–1812. Naeemul Hassan, Gensheng Zhang, Fatma Arslan, Josue Caraballo, Damian Jimenez, Siddhant Gawsane, Shohedul Hasan, Minumol Joseph, Aaditya Kulkarni, Anil Kumar Nayak, et al. 2017b. Claimbuster: The first-ever end-to-end fact-checking system. Proceedings of the VLDB Endowment, 10(12):1945–1948. Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert: Improving pre-training by representing and predicting spans. Transactions of the association for computational linguistics, 8:64–77. Daniel Khashabi, Yeganeh Kordi, and Hannaneh Hajishirzi. 2022. Unifiedqa-v2: Stronger generalization via broader cross-format training. arXiv preprint arXiv:2202.12359. Lev Konstantinovskiy, Oliver Price, Mevan Babakar, and Arkaitz Zubiaga. 2021. Toward automated factchecking: Developing an annotation schema and benchmark for consistent automated claim detection. Digital threats: research and practice, 2(2):1–16. Ran Levy, Yonatan Bilu, Daniel Hershcovich, Ehud Aharoni, and Noam Slonim. 2014. Context dependent claim detection. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1489– 1500. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. Marco Lippi and Paolo Torroni. 2015. Contextindependent claim detection for argument mining. In IJCAI. Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. arXiv preprint arXiv:1908.08345. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In Proceedings of the 2004 conference on empirical methods in natural language processing, pages 404–411. Lidiya Murakhovs’ka, Chien-Sheng Wu, Philippe Laban, Tong Niu, Wenhao Liu, and Caiming Xiong. 2022. MixQG: Neural question generation with mixed answer types. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1486–1497, Seattle, United States. Association for Computational Linguistics. Preslav Nakov, David Corney, Maram Hasanain, Firoj Alam, Tamer Elsayed, Alberto Barrón-Cedeño, Paolo Papotti, Shaden Shaar, and Giovanni Da San Martino. 2021a. Automated fact-checking for assisting human fact-checkers. arXiv preprint arXiv:2103.07769. Preslav Nakov, Giovanni Da San Martino, Tamer Elsayed, Alberto Barrón-Cedeno, Rubén Míguez, Shaden Shaar, Firoj Alam, Fatima Haouari, Maram Hasanain, Nikolay Babulkov, et al. 2021b. The clef-2021 checkthat! lab on detecting check-worthy claims, previously fact-checked claims, and fake news. In ECIR 2021, Virtual Event, March 28–April 1, 2021, Proceedings, Part II 43, pages 639–649. Springer. Shashi Narayan, Shay B Cohen, and Mirella Lapata. 2018. Don’t give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. arXiv preprint arXiv:1808.08745. Benjamin Newman, Luca Soldaini, Raymond Fok, Arman Cohan, and Kyle Lo. 2023. A controllable qa-based framework for decontextualization. arXiv preprint arXiv:2305.14772. Maja Popovi´c. 2015. chrf: character n-gram f-score for automatic mt evaluation. In Proceedings of the tenth workshop on statistical machine translation, pages 392–395. 
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485–5551. 11952 Revanth Gangi Reddy, Sai Chinthakindi, Zhenhailong Wang, Yi R Fung, Kathryn S Conger, Ahmed S Elsayed, Martha Palmer, and Heng Ji. 2021. Newsclaims: A new benchmark for claim detection from news with background knowledge. arXiv preprint arXiv:2112.08544. Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends® in Information Retrieval, 3(4):333–389. Michael Schlichtkrull, Zhijiang Guo, and Andreas Vlachos. 2023. Averitec: A dataset for real-world claim verification with evidence from the web. arXiv preprint arXiv:2305.13117. Shaden Shaar, Maram Hasanain, Bayan Hamdan, Zien Sheikh Ali, Fatima Haouari, Alex Nikolov, Mücahid Kutlu, Yavuz Selim Kartal, Firoj Alam, Giovanni Da San Martino, et al. 2021. Overview of the clef-2021 checkthat! lab task 1 on check-worthiness estimation in tweets and political debates. In CLEF (working notes), pages 369–392. Megha Sundriyal, Tanmoy Chakraborty, and Preslav Nakov. 2023. From chaos to clarity: Claim normalization to empower fact-checking. arXiv preprint arXiv:2310.14338. Megha Sundriyal, Atharva Kulkarni, Vaibhav Pulastya, Md. Shad Akhtar, and Tanmoy Chakraborty. 2022. Empowering the fact-checkers! automatic identification of claim spans on Twitter. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 7701–7715, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Amelie Wührl and Roman Klinger. 2021. Claim detection in biomedical twitter posts. arXiv preprint arXiv:2104.11639. Wei Xu, Courtney Napoles, Ellie Pavlick, Quanze Chen, and Chris Callison-Burch. 2016. Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4:401–415. Wenpeng Yin, Dragomir Radev, and Caiming Xiong. 2021. Docnli: A large-scale dataset for documentlevel natural language inference. arXiv preprint arXiv:2106.09449. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. arXiv preprint arXiv:1904.09675. Xinrui Zhou, Bohuai Wu, and Pascale Fung. 2021. Fight for 4230 at checkthat! 2021: Domain-specific preprocessing and pretrained model for ranking claims by check-worthiness. In CLEF (Working Notes), pages 681–692. A1 Statistic of Source URLs We described the statistic of the source URLs of the samples for document-level CE in Table A1. URL #sample twitter.com 241 facebook.com 235 perma.cc 63 channelstv.com 37 aljazeera.com 34 president.go.ke 27 gov.za 25 instagram.com 18 c-span.org 16 factba.se 13 axios.com 12 youtu.be 12 rumble.com 12 abcnews.go.com 11 rev.com 11 cnn.com 10 news24.com 10 punchng.com 10 washingtonpost.com 10 cbsnews.com 8 foxnews.com 8 misbar.com 7 thegatewaypundit.com 7 politifact.com 7 nypost.com 7 nbcnews.com 7 telegraph.co.uk 7 wisn.com 6 tatersgonnatate.com 6 bustatroll.org 6 dailymail.co.uk 5 whitehouse.gov 5 Table A1: Statistic of the source URLs of the samples for document-level CE. We only list URLs with a total number number greater than 5. 
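The documents behind these source URLs were collected with a web scraper (Section 4). A minimal version might look as follows; the length threshold and error handling are illustrative assumptions rather than the authors' implementation.

```python
from typing import Optional
import requests
from bs4 import BeautifulSoup

def scrape_document(source_url: str, min_chars: int = 200, timeout: int = 10) -> Optional[str]:
    """Fetch a source URL and return its visible text, or None if no usable text is found."""
    try:
        response = requests.get(source_url, timeout=timeout)
        response.raise_for_status()
    except requests.RequestException:
        return None
    soup = BeautifulSoup(response.text, "html.parser")
    # Drop script/style elements before extracting the visible text.
    for tag in soup(["script", "style", "noscript"]):
        tag.decompose()
    text = " ".join(soup.get_text(separator=" ").split())
    return text if len(text) >= min_chars else None
```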
A2 Evaluation Metrics We use the following metrics to assess the similarity between the claim decontextualised by our method and the claim decontextualised by fact-checkers. SARI (Xu et al., 2016) is developed to compare the claim with the reference claim by measuring the goodness of words that are added, deleted and kept. BERTScore (Zhang et al., 2019) is utilized to compute the semantic overlap between the claim and the reference claim by sentence representation. Since most claims in AVerTeC are decontextualised by fact-checkers with fact-checking articles, they may contain some information that is not in the source url, making it challenging for SARI and BERTScore to be used as evaluation metrics in this task. Thus, we use the chrF as our main evaluation metric for claim extraction. 11953 Method Sentence⋆ Dec. Sentence⋆ SARI BERTScore chrF SARI BERTScore chrF Claimbuster 6.23 82.7 24.3 6.24 82.8 24.5 Lead Sentence 6.41 83.4 23.8 LSA 5.56 83.2 24.1 5.57 83.2 24.3 TextRank 6.60 83.1 24.5 6.61 83.1 25.4 BertSum 6.54 83.6 25.6 6.55 83.6 25.9 Ours 6.56 83.7 25.9 6.70 83.8 26.4 Table A2: Results of Document-level CE on three different metrics. A3 Implementation Details All models we use in this paper are pre-trained models (e.g., BertSum) or approaches that do not require training (e.g., BM25). The hyperparameters of each model can be found in the original paper. To help readers reproduce our method, we have released our code on GitHub5. A4 ChatGPT for Decontextualisation To verify how well ChatGPT would do on decontextualisation, we use ChatGPT to decontextualise three claim sentences in Figure 3. The ChatGPT prompt for decontextualisation is as follows: ChatGPT Prompt Claim: [claim] Context: [context] To rewrite the Claim to be understandable out of context based on the Context, while retaining its original meaning. Decontextualised sentences produced by ChatGPT: • Sentence 1: Barack Obama publicly expressed sympathy for ISIS and hinted at providing them with American resources during his 2008 State of the Union address. • Sentence 2: During a specific event, there was a delay in obtaining approval from a certain authority for assistance requested by the Capitol Police. • Sentence 3: The Indian government lacks adequate storage facilities for managing the large surplus of grain it possesses. 5https://github.com/Tswings/AVeriTeC-DCE From the results, we can see that ChatGPT can produce well-formed claims that can be understood out of context, but it tends to rephrase the claim. In AVeriTeC (Appendix J.3.1), the decontextualised claims are required to be as close as possible to their original form. Our method tends not to change the original claims, but to rewrite only the ambiguous information units in claims, thus our generated claims are closer to the claims decontextualised by annotators than ChatGPT. 11954
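For reference, the character n-gram F-score underlying chrF can be approximated in a few lines. The sketch below is a deliberately simplified single-order variant; the official chrF (e.g., the sacrebleu implementation) averages over several character n-gram orders and uses a recall-weighted F-beta, so scores will not match exactly.

```python
from collections import Counter

def char_ngram_f1(hypothesis: str, reference: str, n: int = 6) -> float:
    """Simplified chrF-style score: F1 over character n-grams of a single order n."""
    def ngrams(text: str) -> Counter:
        chars = text.replace(" ", "")
        return Counter(chars[i:i + n] for i in range(max(len(chars) - n + 1, 0)))

    hyp, ref = ngrams(hypothesis), ngrams(reference)
    overlap = sum((hyp & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Example: compare an extracted claim against a gold decontextualised claim
print(round(char_ngram_f1("Bird is scrapping thousands of e-scooters",
                          "Bird e-scooters are being scrapped in the Middle East"), 3))
```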
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11955–11971 August 11-16, 2024 ©2024 Association for Computational Linguistics PairCFR: Enhancing Model Training on Paired Counterfactually Augmented Data through Contrastive Learning Xiaoqi Qiu1,*, Yongjie Wang2,*, Xu Guo2, Zhiwei Zeng2, Yue Yu2, Yuhong Feng1,†, Chunyan Miao2,† 1 Shenzhen University, 2 Nanyang Technological University 1 [email protected], [email protected] 2{yongjie.wang,xu.guo,zhiwei.zeng,yue.yu,ascymiao}@ntu.edu.sg Abstract Counterfactually Augmented Data (CAD) involves creating new data samples by applying minimal yet sufficient modifications to flip the label of existing data samples to other classes. Training with CAD enhances model robustness against spurious features that happen to correlate with labels by spreading the casual relationships across different classes. Yet, recent research reveals that training with CAD may lead models to overly focus on modified features while ignoring other important contextual information, inadvertently introducing biases that may impair performance on out-ofdistribution (OOD) datasets. To mitigate this issue, we employ contrastive learning to promote global feature alignment in addition to learning counterfactual clues. We theoretically prove that contrastive loss can encourage models to leverage a broader range of features beyond those modified ones. Comprehensive experiments on two human-edited CAD datasets demonstrate that our proposed method outperforms the state-of-the-art on OOD datasets. 1 Introduction In the field of Natural Language Processing (NLP), a significant body of research (McCoy et al., 2019; Wang and Culotta, 2020; Poliak et al., 2018; Gururangan et al., 2018) has raised the concern that deep learning models can overfit spurious correlations, such as dataset-specific artifacts and biases, rather than focusing on the more complex, generalizable task-related features. For example, Gururangan et al. (2018) and Poliak et al. (2018) demonstrate that classifiers trained exclusively on hypotheses can still achieve decent results on some Natural Language Inference (NLI) datasets, which ideally requires comparing hypotheses with premises to determine the labels. The existence of biases or *Equal contribution. †Corresponding author. shortcuts in training datasets can severely degrade the performance of deep learning models on outof-distribution (OOD) datasets. Counterfactually Augmented Data (CAD) has emerged as a promising approach to mitigate this issue by making minimal modifications to existing data samples such that the corresponding labels are switched to other classes (Kaushik et al., 2020; Wen et al., 2022; Pryzant et al., 2023). This technique aims to establish direct causal relationships for models to learn more effectively and enhance generalization across different datasets (Teney et al., 2020; Kaushik et al., 2021). However, the effectiveness of CAD is not always guaranteed, particularly when both contexts and the modified information should be considered together to make predictions (Joshi and He, 2022; Huang et al., 2020). For instance, in sentiment analysis, simply replacing positive adjectives such as “good” or “excellent” with negative counterparts like “terrible” or “bad” will potentially risk models to overemphasize these changes and even assign zero weights to the broader unmodified context (Joshi and He, 2022). 
Consequently, the trained models may fail to understand more nuanced expressions like irony or negation, exemplified by sentences such as “Is it a good movie ????” or “This movie is not that good.” To solve the above risks of CAD training, an intuitive solution is to increase the diversity of counterfactual samples (Joshi and He, 2022; Sen et al., 2023), thereby disentangling the suspicious correlations between edited features and labels. Nonetheless, this kind of method often relies on human knowledge to steer the diversification, bearing high expenditure and time consumption (Huang et al., 2020). Others try to design additional constraints to align the model gradient with the straight line between the counterfactual example and the original input (Teney et al., 2020), or to minimize the invariant risk (Fan et al., 2024), but these attempts 11955 fail to exploit the complex effects of augmented feature components. In this paper, we introduce a simple yet effective learning strategy to mitigate the overfitting problem associated with CAD. Inspired by the recent success of contrastive learning (CL) in feature alignment (Gao et al., 2021; Wang et al., 2022b; Liu et al., 2023a,b) and its strengths in capturing global relationships (Park et al., 2023), we propose to employ a contrastive learning objective to complement the standard cross-entropy (CE) loss. While CL compels the model to extract complementary effects among counterfactually augmented data to alleviate the feature degeneration, CE ensures the induced feature representations are effectively used for classification. Our mathematical proof further corroborates the advantage of combining the two losses in training models on CAD, resulting in enhanced generation capability. In summary, our contributions are as follows: • We introduce a contrastive learning-based framework, named Pairwisely Counterfactual Learning with Contrastive Regularization (PairCFR), for training models on CAD, which prevents overfitting to minor, nonrobust edits, thus enhancing generalization performance. • We provide theoretical proof for understanding the synergistic benefits of combining the CE and CL losses, unravelling their complementary effects in preventing models from relying solely on counterfactual edits for classification. • We conduct comprehensive experiments to demonstrate that the models trained under our learning framework achieve superior OOD generalization performance on two humanedited CAD datasets. 2 Related work Counterfactually Augmented Data. Counterfactual examples (CFEs) suggest the minimal modifications required in an input instance to elicit a different outcome (Wachter et al., 2017; Barocas et al., 2020). This property has inspired researchers (Kaushik et al., 2020; Wu et al., 2021) to adopt CFEs as a meaningful data augmentation in NLP, aiming to mitigate spurious correlations and improve causal learning. Early efforts (Kaushik et al., 2020; Gardner et al., 2020) involved creating CAD datasets with manual sentence edits for label reversal. To ease the high cost of manual annotation, subsequent works adopt large language models (LLMs) for cost-effective generation of CAD (Wu et al., 2021; Madaan et al., 2021; Wen et al., 2022; Dixit et al., 2022; Pryzant et al., 2023; Chen et al., 2023). However, findings from various investigations have indicated that training on CAD does not always ensure improved generalization on OOD tasks (Huang et al., 2020; Joshi and He, 2022; Fan et al., 2024). 
Consequently, our emphasis in this work is not on generating CAD, but rather on exploring methodologies to effectively utilize the prior knowledge inherent in CAD.

Contrastive Learning. Contrastive learning was initially proposed to learn a better embedding space by clustering similar samples closely while pushing dissimilar ones far apart (Schroff et al., 2015; Sohn, 2016; Oord et al., 2018; Wang and Isola, 2020). For example, the triplet loss (Schroff et al., 2015) minimizes the distance between an anchor point and its positive sample while maximizing the distance from a negative sample. The N-pair loss (Sohn, 2016) maximizes the distance between an anchor point and multiple negative points. Meanwhile, InfoNCE (Oord et al., 2018) separates positive samples from multiple noise samples with a cross-entropy loss. Enhanced by other efficient techniques, e.g., data augmentation (Chen et al., 2020), hard negative sampling (Schroff et al., 2015), and memory banks (Wu et al., 2018), CL has propelled significant advancements in various domains, under both supervised and unsupervised settings. In this work, we explore the untapped potential of CL to enhance the OOD generalization of models trained on CAD.

Training with CAD. The task of effectively training a robust model with CAD has received relatively limited attention. The simplest approach is to directly use the cross-entropy loss (Kaushik et al., 2020; Wen et al., 2022; Balashankar et al., 2023). To better exploit the causal relationship in counterfactual editing, Teney et al. (2020) introduced gradient supervision over pairs of original data and their counterfactual examples, ensuring that the model gradient aligns with the straight line between the original and counterfactual points. Meanwhile, Fan et al. (2024) consider the original and counterfactual distributions as two different environments and propose a dataset-level constraint using invariant risk minimization. Following these works, we introduce a learning framework that employs a contrastive loss as a regularizer to notably enhance the generalization of fine-tuned models.

3 Methodology

3.1 Motivation

Recent studies have empirically shown that while the perturbed features in CAD are robust and causal (Kaushik et al., 2020), they may inhibit the model's ability to learn other robust features that remain unperturbed (Joshi and He, 2022). In this section, we mathematically demonstrate that the standard cross-entropy loss, which is commonly used for training models on CAD, can exacerbate this tendency.

Given an instance $x \in \mathbb{R}^{m \times 1}$, we train a single-layer non-linear function $f_W(x) = \sigma(W^T x)$, where $W \in \mathbb{R}^{m \times 1}$ and $\sigma$ is the sigmoid function, to predict the label $y \in \{0, 1\}$. We expand $x$, whose label is $y = 1$, as $x = [x_r, x_c]^T$, where $x_r$ denotes the features to be revised (perturbed) and $x_c$ denotes the constant (unperturbed) features. The counterfactual example of $x$ can be written as $c = [c_r, x_c]^T$, with label $y = 0$. As the sigmoid function is monotone and bounded, $c_r$ and $x_r$ should take values of opposite sign to ensure that $x$ and $c$ are classified differently. We expand the weights as $W = [w_r, w_c]^T$ and substitute them into $f_W$ to obtain $f_W(x) = \sigma(w_r x_r + w_c x_c)$ and $f_W(c) = \sigma(w_r c_r + w_c x_c)$. The CE loss on the data $x$ and its counterfactual $c$ is calculated as

$$L_{CE}(x, c) = -\log(f_W(x)) - \log(1 - f_W(c)). \quad (1)$$

By minimizing the CE loss, we enforce $f_W(x)$ to approach 1 and $f_W(c)$ to approach 0.
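The gradient imbalance argued next can also be observed numerically. Below is a minimal sketch of the toy setting in this subsection, written by us rather than taken from the paper's released code: a single-layer sigmoid model trained with the paired CE loss of Eq. (1) on an original example and its counterfactual, with all feature values chosen purely for illustration.

```python
# Minimal sketch of the Section 3.1 toy analysis (illustrative values, not the
# authors' code): a single-layer sigmoid model with weights [w_r, w_c] and the
# paired cross-entropy loss of Eq. (1) on an original/counterfactual pair.
import torch

w = torch.tensor([0.1, 0.1], requires_grad=True)   # weights [w_r, w_c]

x = torch.tensor([ 1.0, 0.8])    # original:      revised feature x_r, context x_c, label y = 1
c = torch.tensor([-1.0, 0.8])    # counterfactual: c_r flips sign, context unchanged, label y = 0

f_x = torch.sigmoid(w @ x)       # should approach 1
f_c = torch.sigmoid(w @ c)       # should approach 0

# Eq. (1): paired cross-entropy on (x, c)
loss_ce = -torch.log(f_x) - torch.log(1.0 - f_c)
loss_ce.backward()

print(w.grad)                    # roughly [-0.95, 0.03]
```

Running this sketch gives a gradient whose component on $w_r$ is roughly 0.95 in magnitude versus roughly 0.03 on $w_c$, which matches the analysis that follows: the edited feature receives far larger updates than the unperturbed context.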
Considering that $x_r$ and its counterpart $c_r$ take values of opposite sign, we observe that optimizing $w_r$ can achieve the desired contrasting effect with less effort than optimizing $w_c$. Therefore, the model tends to assign higher weights $w_r$ to the revised features and lower weights $w_c$ to the constant or unperturbed features. An expanded illustration can be found in Appendix A. Similar phenomena are observed for both the least-squares loss (Joshi and He, 2022) and Fisher's Linear Discriminant on CAD (Fan et al., 2024). The above observations indicate that the CE loss alone can lead the model to focus on learning the revised features in CAD, which necessitates a regularization that compels the model to consider a broader range of features.

3.2 The Role of Contrastive Loss

Recent research has empirically shown that models trained under a contrastive loss mainly focus on capturing global relationships (Park et al., 2023), compared with negative log-likelihood losses such as masked language modeling. Inspired by this, we propose to employ CL to complement the standard CE loss for training models on CAD. In the following, we first introduce the CL loss and then mathematically show how CL encourages the model to select a broader range of features beyond the edited ones in the counterfactual data.

Given an anchor sample $x_i$ from a data batch $D = \{x_i, y_i\}_{i=1}^{N}$, $\forall x_i \in D$, we have its positive samples in $P_i \equiv \{x_p \mid y_p = y_i, p \neq i\}$ and negative samples in $N_i \equiv \{x_n \mid y_n \neq y_i, n \neq i\}$, where $N_i$ contains the counterfactual sample $c$ for every $x_i$. The contrastive loss for the anchor $x_i$ is

$$L_{CL} = -\mathbb{E}_{x_p \in P_i}\left[ \log \frac{e^{s_{ip}/\tau}}{e^{s_{ip}/\tau} + \sum_{x_n \in N_i} e^{s_{in}/\tau}} \right], \quad (2)$$

where $s_{xy} = \frac{z_x \cdot z_y}{\|z_x\|\|z_y\|}$ measures the cosine similarity between the hidden representations of a pair of samples, and $\tau$ is a temperature scaling factor that controls the extent to which we separate positive and negative pairs (Wang and Isola, 2020).

Without loss of generality, we assume $W \in \mathbb{R}^{m \times d}$ directly maps the input instance into a $d$-dimensional embedding space, $z_i = W^T x_i$. The gradient of the CL loss coming from the negative samples is

$$\frac{\partial L_{CL}}{\partial W}\bigg|_{s_{in}} = \frac{\partial L_{CL}}{\partial s_{in}} \times \frac{\partial s_{in}}{\partial W} = \frac{1}{\tau} P_{in} \times A_{in} W. \quad (3)$$

The full derivation can be found in Appendix B. Here, we have

$$P_{in} = \mathbb{E}_{x_p \in P_i}\left[ \frac{e^{s_{in}/\tau}}{e^{s_{ip}/\tau} + \sum_{x_n \in N_i} e^{s_{in}/\tau}} \right], \quad (4)$$

which indicates the probability of $x_i$ being recognized as $x_n$. $A_{in} = x_i x_n^T + x_n x_i^T \in \mathbb{R}^{m \times m}$ is a symmetric matrix derived from the outer products of $x_i$ and $x_n$. Each element of $A_{in}$ is a product between individual features of $x_i$ and $x_n$, which provides a full view of the entire feature space when comparing a pair of samples. A higher value leads to a larger gradient update, and the weights $W$ are optimized by considering the whole feature set.

Figure 1: The overall learning framework.

The above analysis implies that the CL loss is capable of capturing global features beyond those being edited. When learning on CAD under CL, we pair each instance $x$ with its CFE, $c$, to compel the model to separate $x$ from all negative samples, including its counterfactual example $c$:

$$\sum_{x_n \in N_i} e^{s_{in}/\tau} = e^{s_{ic}/\tau} + \sum_{x_n \in N_i \setminus c} e^{s_{in}/\tau}, \quad (5)$$

where the non-bold $c$ is the index of the CFE. Let us revisit the toy example with $x = [x_r, x_c]^T$ and $c = [c_r, x_c]^T$.
Although minimizing the similarity between x and c encourages the model to focus on features xr, the other negative samples in the batch, e.g., x′ = [x′ r, x′ c]T , will enforce the model to use both wr and wc to compare the difference. Hence, the existence of real negative samples could help the model capture the relationships between xr and its context xc. As all sin equally contribute to updating the model weights, the number of non-CFE negatives moderates the learning from local CAD and global patterns. A smaller batch size will manifest the influence of edited features, whereas a larger batch size may dilute the local differences in CAD, as discussed in the experiments 5.4. 3.3 Overall Learning Framework Next, we introduce our proposed learning framework, Pairwisely Counterfactual Learning with Contrastive Loss Regularization, named PairCFR for short. As shown in Figure 1, a model f can be decomposed into two modules, ϕ(·) and φ(·), i.e., f = φ ◦ϕ, where ϕ(·) encodes the input sentence into a hidden embedding, and φ(·) maps ϕ(x) for classification. For transformer-based models, we instantiate ϕ(x) using the [CLS] representation, denoted as z. We explicitly pair the original sentences x and their CFEs, c, in the same batch to provide additional training signals indicative of the underlying causal relationships. The standard cross-entropy loss is computed on the logits vector projected from φ(z). Optimizing CE loss enforces φ(·) to identify a small set of features from z and assign them higher weights to quickly reach a local minimum while optimizing CL loss compels ϕ(·) to consider the entire feature space of z to meet the distance constraints. Overall, we combine the two losses as follows. L = λLCL + (1 −λ)LCE, (6) where λ is the trade-off factor to balance the two losses. To compute CL on a batch, we sample positive pairs that have the same label while all the negative samples including the CFE of the anchor sample are considered. 4 Experimental Setup In the following, we introduce experimental settings, which include benchmark tasks, evaluation metrics, competitive baselines and implementation details. Our code is released on GitHub 1. 4.1 Benchmark Tasks & Evaluations We evaluate our learning framework on two NLP tasks, sentiment analysis (SA) and natural language inference (NLI). We use two human-edited CAD datasets (Kaushik et al., 2020), which ensures goodquality counterfactual data (Sen et al., 2023), to train all the models. The IMDb augmented dataset contains 4880 data samples with an original to CFE ratio of 1:1. The SNLI dataset contains 11330 data samples with an original to CFE ratio of 1:4. The statistics of human-revised CAD are reported in Appendix C.1. To eliminate the random effect, we train each model for multiple runs (10 runs for SA and 7 runs for NLI) using different random seeds. We report the average test accuracy, standard deviation for both in-domain (ID) datasets and several out-of-domain (OOD) datasets. We also conduct significance tests by calculating p-value, to ensure that the observed improvements are not due to randomness. The details of ID and OOD datasets used for evaluation are described in Appendix C.2. 
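Before turning to the implementation details, the combined objective of Section 3.3 can be summarized in code. The sketch below is our own illustration of Eq. (6): the contrastive term of Eq. (2) is computed over a batch in which each original sentence is paired with its CFE, and is added to the standard cross-entropy on the classifier head. The function name, tensor shapes, and default hyperparameters (the defaults mirror the BERT sentiment-analysis setting of Appendix C.3) are assumptions on our part and not the authors' released implementation.

```python
# Minimal sketch of the PairCFR objective, Eq. (6): supervised contrastive term
# of Eq. (2) over a PairCAD batch, plus cross-entropy. Illustrative, not the
# authors' released code; names and defaults are ours.
import torch
import torch.nn.functional as F


def paircfr_loss(z, logits, labels, lam=0.7, tau=0.3):
    """z: [B, d] sentence embeddings (e.g. the [CLS] vector), logits: [B, C]
    classifier outputs, labels: [B] class ids. The batch is assumed to contain
    each original sentence together with its counterfactual example."""
    B = z.size(0)
    z = F.normalize(z, dim=1)             # unit-norm rows, so z @ z.T is cosine similarity
    sim = z @ z.t() / tau                 # [B, B] scaled similarities s_ij / tau

    eye = torch.eye(B, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye   # same label, j != i
    neg_mask = labels.unsqueeze(0) != labels.unsqueeze(1)            # different label (CFEs included)

    exp_sim = torch.exp(sim)
    neg_sum = (exp_sim * neg_mask).sum(dim=1, keepdim=True)          # sum_n e^{s_in/tau} per anchor

    # Eq. (2): for anchor i and positive p, -log e^{s_ip/tau} / (e^{s_ip/tau} + sum_n e^{s_in/tau})
    log_ratio = sim - torch.log(exp_sim + neg_sum)
    pos_cnt = pos_mask.sum(dim=1).clamp(min=1)
    loss_cl = -(log_ratio * pos_mask).sum(dim=1) / pos_cnt
    loss_cl = loss_cl[pos_mask.sum(dim=1) > 0].mean()                # average over anchors with positives

    loss_ce = F.cross_entropy(logits, labels)                        # standard CE on the classifier head
    return lam * loss_cl + (1.0 - lam) * loss_ce                     # Eq. (6)
```

Because the PairCAD batching places every original next to its CFE, the counterfactual always falls into the negative set of its anchor, so the first term of Eq. (5) is always present; the trade-off factor and temperature would then be tuned on the validation set as described in Section 4.2 and Appendix C.3.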
4.2 Implementation Details We finetune the BERT base (Devlin et al., 2019), RoBERTa base (Liu et al., 2019), SentencesBERT (SBERT, multi-qa-distilbert-cos) (Reimers 1https://github.com/Siki-cloud/PairCFR.git 11958 and Gurevych, 2019) and T5 base (Raffel et al., 2020) models with the original or CAD datasets on HuggingFace platform (Wolf et al., 2020). Volumes of model parameters are listed in Table 7 in Appendix C.3. Following the common practices of transformers (Devlin et al., 2019), we take the embedding of the “[CLS]” token as sentence representation and finetune the whole model. We set the maximum token length to 350 for SA and 64 for NLI. We follow the original dataset splits described in (Kaushik et al., 2020), where the train, validation, and test sets are divided in a ratio of 7:1:2, with all classes balanced across each set. Subsequently, we finetune all models up to 20 epochs with the AdamW optimizer, coupled with a linear learning rate scheduler with a warmup ratio as 0.05. The best learning rate is manually tuned from {1e−4, 1e−5, 3e−5, 5e−5, 5e−6, 1e−6}. We apply the early stopping strategy with a patience of 5 and the best model is selected according to the lowest validation loss. To determine the trade-off factor λ and temperature τ, we conducted a grid search in the range [0, 1] with a step size of 0.1. We also conducted experiments to evaluate our PairCFR in few shot setting where the learning rate and batch size were tuned accordingly. The hyperparameters for full data finetuning and few shot setting are shown in Table 9, Table 8 respectively, in Appendix C.3. 4.3 Baselines We compare our method PairCFR with the following baselines. For a fair comparison, we employ other forms of augmentation or increase the sampling number for the first three baselines without counterfactual augmentation, to ensure all approaches have the same number of training data. Vanilla (Devlin et al., 2019). This method refers to a general model fine-tuning with original sentences. We include this baseline to verify the improvement of our method result from both the introduction of CAD and the novel learning framework. BTSCL (Gunel et al., 2021). This approach employs the supervised contrastive loss (Khosla et al., 2020) into the model training where augmented positive samples are obtained through backtranslating a given sentence (Ng et al., 2019). CouCL (Wang et al., 2022a). As counterexamples (CEs) are rare in a mini-batch, CouCL samples counterexamples from the original training set, where an example with lower confidence corresponds to a higher likelihood of being selected. Subsequently, it adopts the self-supervised contrastive loss to push representations of positive CEs and negative CEs far apart. The following approaches study how to train a robust model with annotated CAD: HCAD (Kaushik et al., 2020). It collects two human-edited CAD datasets and fine-tunes a pretrained model on CAD with the cross-entropy loss. CFGSL (Teney et al., 2020). As domain priors in CAD may be lost due to random shuffling in preprocessing (Kaushik et al., 2020), CFGSL pairs original data and its counterfactual example in the same batch and introduces a gradient supervision loss (GSL) alongside the cross-entropy loss. The GSL enforces the model gradient to align with the straight line from the original point to CFE. ECF (Fan et al., 2024). It introduces two additional losses to mine the causal structures of CAD. 
The first loss extracts the dataset-level invariance through Invariant Risk Minimization (IRM) while the second loss is applied to pairs of original sentences and CFEs, preventing the model from relying on correlated features. 5 Results and Analysis 5.1 Overall Performance Comparison Table 1 reports the overall performance comparisons, showing that our proposed PairCFR method outperforms all the baseline models on three out of four OOD datasets for both SA and NLI tasks across four different backbone models. To exclude the possibility of marginal improvements due to random initializations, we also conducted significance tests under the null hypothesis that there are no differences between each baseline and our approach, as presented in Table 10, located in Appendix C.6. The p-values less than 0.05 demonstrate that our methods are significantly better than the baselines, even though some improvements are relatively slight in Table 1. In addition, we reported the following findings. Firstly, CAD-based methods may perform worse than non-CAD methods on OOD tasks, e.g., HCAD always lags behind CouCL on the SA task using fine-tuned T5 model. A similar phenomenon is also reported in (Joshi and He, 2022). These could be due to the failure to extract complementary features between CFEs and the original data; Secondly, the introduction of CFEs may shift the training data distribution from the in-domain data distribution. As anticipated, CAD-based methods fall behind 11959 Table 1: Average performance of various fine-tuned models on ID and OOD test sets. Acc denotes the average of all the OOD performance. The best results are bolded. Methods Sentiment Analysis Natural Language Inference In-Domain Out-of-Dimain In-Domain Out-of-Dimain IMDb Amazon Yelp Twitter SST-2 Acc SNLI MNLI-m MNLI-mm Negation Spelling-e Word-o Acc BERT-base-uncased Vanilla 90.15±1.66 86.38±0.39 91.03±0.83 81.66±0.27 82.59±1.00 85.42 78.85±0.44 57.43±0.92 59.36±0.80 40.96±4.32 53.56±1.54 50.75±6.65 52.41 BTSCL 90.43±1.47 85.45±0.71 91.97±0.31 81.79±1.28 83.80±1.17 85.75 79.02±0.49 57.28±1.30 59.10±1.42 43.10±3.65 53.51±1.74 49.20±4.51 52.44 CouCL 85.67±1.13 86.75±0.22 89.53±0.55 84.41±0.23 85.01±0.43 86.43 71.90±0.95 51.99±1.75 52.20±1.86 38.70±4.69 49.82±2.01 44.03±4.02 47.35 HCAD 88.16±2.70 86.40±0.77 89.94±0.99 83.29±2.71 85.74±1.04 86.34 73.49±1.37 58.53±1.59 60.77±1.46 35.43±3.06 54.01±2.70 54.72±3.29 52.69 CFGSL 88.51±3.29 85.52±1.05 89.58±1.83 84.56±1.53 86.77±0.79 86.61 77.16±0.41 60.11±1.07 62.25±0.66 33.81±1.89 56.37±0.74 58.45±0.97 54.20 ECF 87.71±0.29 86.43±0.10 89.30±0.16 83.05±0.69 86.23±0.18 86.25 73.23±1.52 58.95±0.15 61.19±1.34 42.40±1.07 54.15±0.53 57.10±0.92 54.76 Ours 89.63±1.36 86.79±0.72 91.78±0.44 85.27±0.39 86.81±0.97 87.66 75.38±0.21 60.46±0.38 62.27±0.39 39.21±3.61 56.84±0.54 59.16±0.88 55.59 RoBERTa-base Vanilla 92.68±1.15 87.08±1.39 94.00±0.77 81.43±2.82 86.04±2.76 87.14 85.16±0.39 70.35±1.29 71.25±1.59 52.47±5.55 67.36±1.36 61.82±4.54 64.65 BTSCL 93.09±0.61 89.46±0.21 94.74±0.36 85.72±1.22 87.16±0.87 89.27 85.72±0.44 70.83±1.38 72.10±1.32 56.89±3.78 67.61±1.32 62.22±3.55 65.93 CouCL 91.22±0.83 89.48±0.19 93.04±0.58 87.40±0.77 88.07±0.66 89.50 82.37±0.52 70.86±1.32 71.38±1.23 51.83±2.71 68.08±1.23 64.68±1.82 65.37 HCAD 90.12±1.74 88.50±0.57 92.18±0.94 83.43±1.75 86.48±0.98 87.65 80.91±0.69 70.35±1.08 70.77±0.76 45.79±4.16 67.37±1.28 64.83±1.47 63.82 CFGSL 90.69±0.92 88.32±0.41 93.48±0.48 83.90±1.78 86.89±0.80 88.15 82.45±0.35 71.59±0.90 71.25±1.06 51.40±1.47 68.86±1.07 62.22±1.99 65.06 ECF 91.05±0.44 
88.56±0.32 93.79±0.19 85.82±0.43 87.84±0.59 89.00 81.88±0.17 70.45±1.03 71.18±0.93 51.70±2.38 66.60±0.94 63.76±1.98 64.74 Ours 91.74±0.88 89.60±0.26 93.35±0.34 87.90±0.45 88.61±0.41 89.87 82.13±0.51 71.80±0.53 72.12±0.79 55.19±1.97 68.88±0.36 65.91±1.35 66.78 SBERT-multi-qa-distilbert-cos Vanilla 87.61±1.86 80.65±0.67 89.74±0.77 83.95±1.12 82.01±1.59 84.09 76.96±0.53 53.90±2.03 55.90±2.22 45.20±4.18 51.23±2.72 48.27±5.00 50.90 BTSCL 88.84±2.41 81.21±0.76 90.49±0.37 84.20±0.61 83.62±0.64 84.88 77.16±0.38 54.42±1.31 56.14±1.36 45.40±2.78 52.44±1.83 49.80±2.63 51.64 CouCL 87.96±0.67 83.92±0.13 89.15±0.18 85.40±0.31 83.48±0.37 85.49 70.61±1.54 55.29±1.45 57.90±1.81 35.86±1.87 52.01±2.26 54.89±1.91 51.19 HCAD 86.09±1.74 83.94±0.39 87.87±0.66 85.91±0.66 82.83±0.90 85.14 71.64±1.04 55.93±1.61 58.70±1.96 35.05±1.22 53.33±1.06 54.86±2.08 51.57 CFGSL 86.05±1.07 82.71±0.73 87.59±0.75 83.36±0.55 83.70±0.49 84.34 70.72±1.06 55.84±0.88 58.52±1.15 36.07±3.38 52.60±1.27 55.57±1.68 51.72 ECF 87.83±0.46 84.51±0.34 88.44±0.20 84.60±0.70 84.27±0.56 85.46 64.55±1.23 49.95±1.84 51.49±1.82 38.59±2.32 48.31±1.67 49.55±2.27 47.58 Ours 87.28±0.75 84.58±0.22 88.52±0.30 86.32±0.35 84.31±0.78 85.93 71.48±0.40 57.19±0.84 60.76±0.46 37.27±2.35 54.36±0.67 56.78±1.24 53.27 T5-base Vanilla 92.15±1.49 88.24±0.85 94.44±0.67 83.40±1.38 86.17±2.60 88.06 83.28±0.57 62.62±2.59 65.18±2.10 41.00±2.46 58.76±2.61 48.30±3.27 55.17 BTSCL 92.78±1.08 88.50±0.81 94.89±0.42 83.37±1.09 87.17±1.07 88.48 83.66±0.46 64.01±2.57 66.47±2.24 42.16±2.90 60.01±3.43 50.16±5.69 56.56 CouCL 91.74±0.88 88.91±0.47 93.35±0.34 87.03±0.70 88.61±0.41 89.48 79.81±0.54 70.19±0.58 71.84±0.76 39.82±3.23 66.35±0.68 64.29±1.58 62.50 HCAD 90.09±1.95 88.72±0.85 92.60±0.87 85.63±1.15 85.54±1.28 88.12 80.09±0.73 70.19±0.72 71.60±0.83 45.05±3.94 66.57±0.73 65.30±1.51 63.74 CFGSL 89.48±5.17 88.27±1.05 92.77±1.45 81.56±2.49 82.11±2.50 86.18 80.71±0.64 69.08±0.97 69.85±1.12 45.59±3.74 65.58±1.18 65.80±1.55 63.18 ECF 90.85±0.37 89.27±0.25 92.65±0.44 87.66±0.26 88.57±0.54 89.54 78.93±0.51 69.57±1.14 70.30±1.45 46.14±3.12 64.19±1.08 65.79±1.71 63.20 Ours 91.47±0.89 89.18±0.21 93.45±0.63 87.90±0.45 88.64±1.04 89.79 80.87±0.77 71.38±0.13 72.46±0.57 46.31±0.50 67.37±0.12 67.39±0.33 64.98 Table 2: Ablation study for the pairing strategy and the CL loss on various transformer-based models. Acc denotes the average of all the OOD performance. The best results are bolded. 
Sentiment Analysis Natural Language Inference Variants In-Domain Out-of-Dimain In-Domain Out-of-Dimain #Train Loss IMDb Amazon Yelp Twitter SST-2 Acc SNLI MNLI-m MNLI-mm Negation Spelling-e Word-o Acc BERT-base-uncased ShuffCAD CE 88.16±2.70 86.40±0.77 89.94±0.99 83.29±2.71 85.74±1.04 86.34 73.49±1.37 58.53±1.59 60.77±1.46 35.43±3.06 54.01±2.70 54.72±3.29 52.69 PairCAD CE 88.23±3.11 86.56±0.34 89.97±1.85 84.15±1.20 85.84±0.85 86.62 74.27±0.72 59.13±0.65 60.85±0.88 36.10±1.92 56.14±1.34 55.40±2.83 53.52 ShuffCAD CE+CL 89.18±1.33 86.77±0.65 91.45±0.53 84.14±1.82 86.26±0.99 87.15 73.77±1.11 59.39±0.64 61.85±0.86 36.80±4.04 55.62±0.87 57.09±2.45 54.15 PairCAD CE+CL 89.63±1.36 86.79±0.72 91.78±0.44 85.27±0.39 86.81±0.97 87.66 75.38±0.21 60.46±0.38 62.27±0.39 39.21±3.61 56.84±0.54 59.16±0.88 55.59 RoBERTa-base ShuffCAD CE 90.12±1.74 88.50±0.57 92.18±0.94 83.43±1.75 86.48±0.98 87.67 80.91±0.69 70.35±1.08 70.77±0.76 45.79±4.16 67.37±1.28 64.83±1.47 63.82 PairCAD CE 90.95±0.84 88.77±0.74 92.77±0.95 83.45±2.53 86.37±1.06 87.84 81.69±0.90 70.77±0.49 71.33±0.45 54.38±1.67 67.90±0.63 65.43±0.99 65.96 ShuffCAD CE+CL 91.42±1.01 89.44±0.27 92.91±0.64 86.67±1.05 87.25±0.68 89.07 81.95±0.39 71.16±0.60 71.79±0.79 51.43±2.91 68.20±0.57 64.12±1.03 65.34 PairCAD CE+CL 91.74±0.88 89.60±0.26 93.35±0.34 87.90±0.45 88.61±0.41 89.61 82.13±0.51 71.80±0.53 72.12±0.79 55.19±1.97 68.88±0.36 65.91±1.35 66.78 SBERT-multi-qa-distilbert-cos ShuffCAD CE 86.09±1.74 83.94±0.39 87.87±0.66 85.91±0.66 82.83±0.90 85.13 71.64±1.04 55.93±1.61 58.70±1.96 35.05±1.22 53.33±1.06 54.86±2.08 51.57 PairCAD CE 86.78±1.41 83.55±0.39 88.51±0.77 85.95±0.40 83.20±0.63 85.30 70.90±1.02 56.50±0.58 59.03±0.57 35.89±1.98 53.03±1.17 55.04±1.03 51.89 ShuffCAD CE+CL 87.68±1.05 84.23±0.37 88.66±0.77 85.45±0.28 83.60±0.38 85.48 71.38±0.62 57.08±0.53 60.01±0.35 35.11±1.64 54.15±0.53 55.59±1.89 52.39 PairCAD CE+CL 87.28±0.22 84.58±0.22 88.52±0.30 86.32±0.35 84.31±0.7 85.93 71.48±0.40 57.19±0.84 60.76±0.46 37.27±2.35 54.36±0.67 56.78±1.24 53.27 T5-base ShuffCAD CE 90.09±1.95 88.72±0.85 92.60±0.87 85.63±1.15 85.54±1.28 88.12 80.09±0.73 70.19±0.72 71.60±0.83 45.05±3.94 66.57±0.73 65.30±1.51 63.85 PairCAD CE 90.03±1.35 89.02±0.41 92.76±0.99 86.46±1.00 86.59±1.37 88.71 79.55±0.66 68.86±0.52 70.75±0.77 45.18±3.49 65.56±0.67 65.64±1.50 62.83 ShuffCAD CE+CL 90.38±1.80 89.03±0.46 93.06±1.29 85.75±0.96 87.24±2.12 88.76 80.21±0.10 70.43±0.11 71.78±0.37 45.41±2.08 66.59±0.56 66.28±0.93 64.09 PairCAD CE+CL 91.47±0.89 89.18±0.21 93.45±0.63 87.03±0.70 88.64±1.04 89.79 80.87±0.77 71.38±0.13 72.46±0.57 46.31±0.50 67.37±0.12 67.39±0.33 64.98 11960 50 100 500 1k 4k 30 45 60 75 Accuracy (%) Vanilla BTSCL CouCL HCAD CFGSL ECF PairCFR (Ours) SNLI 50 100 500 1k 4k 30 40 50 60 MNLI-m 50 100 500 1k 4k 30 40 50 60 MNLI-mm 50 100 500 1k 4k 33 36 39 42 Negation 50 100 500 1k 4k 36 42 48 54 Spelling-e 50 100 500 1k 4k 36 42 48 54 Word-o 50 100 500 1k 4k 0 2 4 6 8 Std. #Train 50 100 500 1k 4k 0 1 2 3 4 #Train 50 100 500 1k 4k 0 1 2 3 4 #Train 50 100 500 1k 4k 0 1 2 3 4 #Train 50 100 500 1k 4k 0 1 2 3 4 #Train 50 100 500 1k 4k 0 2 4 6 #Train Figure 2: Few-shot learning results of BERTbase on NLI. x-axis represents the number of training samples and y-axis represents the averaged accuracy and standard deviation on ID and OODs. Table 3: The influence of neutral samples during fine-tuning BERTbase on SNLI. The number of training samples is kept the same. The abbreviations ‘w’ and ‘w/o’ stand for whether neutral examples are included or excluded in the computation of the CL. 
The p-value is reported under a null hypothesis that no difference exist between training with and without neural samples. Train Data netural samples In-Domain Out-of-Dimain SNLI MNLI-m MNLI-mm Negation Spelling-e Word-o Acc PairCAD w 73.29±1.09 59.41±0.91 61.66±0.85 35.96±2.81 56.42±1.10 56.14±2.60 53.92 PairCAD w/o 75.38±0.21 60.46±0.38 62.27±0.39 39.21±3.61 56.84±0.54 59.16±0.88 55.59 p-value 5.90e-06 0.0109 0.0055 0.0053 0.0087 0.0107 Table 4: The influence of counterfactual diversity during fine-tuning T5base on SNLI. The best results are bolded. In-Domain Out-of-Domain Train Data CE+CL R:O SNLI MNLI-m MNLI-mm Negation Spelling-e Word-o Acc Original (20k) 85.09±0.27 69.53±1.38 71.62±1.04 45.65±3.53 66.43±1.49 52.89±5.22 61.22 PairCAD (3.3k) 1 74.50±2.51 65.24±1.63 67.61±1.36 38.38±3.42 61.24±1.86 60.61±2.33 58.62 PairCAD (4.9k) 2 76.12±1.58 66.62±1.05 69.31±0.87 42.33±7.31 62.91±1.60 62.61±1.58 60.76 PairCAD (6.4k) 3 77.98±0.82 68.36±1.48 70.00±1.44 43.13±1.17 64.60±1.98 64.45±2.15 62.11 PairCAD (8.3k) 4 80.14±0.96 71.02±0.39 71.84±0.76 45.73±0.70 66.87±0.51 67.11±0.39 64.51 non-CAD methods on ID datasets. Thirdly, our proposed PairCFR exhibits superior OOD performance compared to the baselines, achieving the highest accuracy on mostly OOD datasets, with the sole exceptions being the Yelp and Negation datasets. We postulate that the noted exceptions may be attributed to Yelp and Negative datasets having distributions similar to the ID datasets. The above results validate that PairCFR possesses a heightened capability to learn prior knowledge in CAD. 5.2 Few-shot Learning Performance Data augmentation, such as counterfactual augmentation, is frequently utilized to enhance the performance of few-shot learning. In this part, we investigate the effectiveness of our proposed PairCFR in few-shot learning scenarios. We conducted experiments using the finetuned BERTbase model on the SNLI dataset, gradually increasing the number of training samples from 50 to 4,000. Similarly, on the IMDB dataset, we increased the number of training samples from 32 to 1,024. Experiment results on SNLI and IMDB under the few-shot setting are reported in Figure 2 and Figure 5 ( Appendix C.5). From both tables, we can conclude that our PairCFR generally demonstrates higher accuracy and lower standard deviation across OOD datasets, particularly in scenarios where training sample sizes are small. For instance, PairCFR significantly outperforms other methods by around 6% on Spelling-e when trained with only 11961 100 counterfactually augmented samples. 5.3 Ablation Study We conducted ablation experiments to verify the efficacy of two crucial strategies of our proposed method: (1) the pairing strategy: the integration of original data with their CFEs within the same batch, denoted PairCAD, versus ShuffCAD where randomly shuffle CFEs and originals. (2) the CL loss: the incorporation of CL and CE loss versus CE loss alone. Results in Table 2, together with significance tests in Table 11 in Appendix C.6, offer several insights: 1) The strategy of pairing original data with their CFEs in the same batch improves OOD performance for both SA and NLI tasks. This can be attributed to the preservation of prior causal relations, which might be lost during random shuffling; 2) The efficacy of PairCAD with a CE-alone learning framework is not guaranteed. For example, within the T5 model framework, PairCAD underperforms ShuffCAD on the SNLI, MNLI, and Spelling-e datasets when only CE loss is adopted. 
This underscores the critical role of the CL component in augmenting features when we batch CFEs and original data; 3) Integrating the CL consistently improves model performance in both ID and OODs. Particularly, combining CL with PairCAD yields the best performance across various model assessments, highlighting the effectiveness of contrastive learning and the pairing strategy in leveraging causal relations of CFEs. 5.4 Impact of Batch Size In this study, we investigated the effect of batch size on learning performance. We conducted experiments on the fine-tuned BERT model for SA and the fine-tuned T5 model for NLI, incrementally increasing the batch size while maintaining the original augmentation ratio for each task. From Figure 3, we observe that the model performance on both tasks initially improves with increasing batch size, but eventually reaches a plateau or experiences a slight decline. We contend that the inclusion of negative samples in the CL function provides additional regularization, forcing the model to rely on a broader array of features beyond those edited. However, an excessively large batch size introduces an overwhelming number of negative samples in CL, which may dilute the human priors in CAD, leading to diminished performance. This trend is consistent across 2 4 8 16 32 64 75 80 85 90 Accuracy Batch Size Amazom Yelp Twitter SST-2 IMDb 5 10 20 30 40 60 40 60 70 80 Batch Size MNLI-m MNLI-mm Word-o Spelling-e Negation SNLI Figure 3: Test results for fine-tuning BERTbase on IMDb augmented data (left) and T5base on SNLI augmented data (right) with respect to the batch size. both SA and NLI tasks, highlighting the effort required in batch size selection. 5.5 Contribution of Neutral Class in NLI Do all counterfactual examples equivalently contribute to enhancing model generality? To answer this, we specifically experimented with the finetuned BERT model on the NLI task, comparing performance with and without the inclusion of neutral class samples in CL. Results in Table 3 reveal that removing neutral samples, including neutral CFEs, significantly enhances the OOD generalization by approximately 2% when training the model on CAD with our learning framework. We attribute this performance difference to the distinct nature of neutral samples. In NLI tasks, judgments of entailment and contradiction are often readily determined based on the semantic alignment or disparity between text elements. Conversely, neutral samples represent scenarios where the hypothesis and premise lack any clear relationship, encompassing a vast array of potential expressions. This diversity poses a great challenge for models to identify universal patterns within the neutral class through human annotations. Therefore, adding neutral samples into the CL detrimentally affects the model’s performance in our experiments. This investigation highlights the necessity of contemplating the practical value of adding additional counterfactual examples for specific classes. 5.6 Effect of Counterfactual Diversity We also investigated the role of CFE diversity in improving model performance on the NLI task. In SNLI, each sentence is annotated with 4 CFEs, due to the existence of two opposite targets and modifications made to both the hypothesis and premise. Each CFE is obtained through a different type of modification, resulting in a dataset that includes 11962 more diverse counterfactuals. We fine-tuned the T5base model by incrementally including more CFEs in a batch, ranging from 1 to 4. 
The results in Table 4, reveal a direct relation between the number of CFEs and the model’s generalization capabilities. Notably, the OOD performance of the model trained on CAD is even better than that trained on a 3 times larger dataset with only original data. We conclude that enhancing counterfactual diversity proves to be an efficient strategy, which is the same as the findings reported in (Joshi and He, 2022). 6 Conclusion Counterfactually Augmented Data (CAD) can enhance model robustness by explicitly identifying causal features. However, recent research found that CAD may fall behind non-CAD methods on generality. In this work, we introduce PairCFR to overcome this challenge. PairCFR pairs original and counterfactual data during training and includes both contrastive and cross-entropy losses for learning discriminative representations. We prove that contrastive loss aids models in capturing sufficient relationships not represented in CAD, thus improving generality. Extensive experiments demonstrate that our PairCFR achieves superior accuracy and robustness in various scenarios. Our findings emphasize the potential of carefully designed training paradigms in utilization of CAD. 7 Limitations Our PairCFR has been demonstrated to effectively improve models’ OOD generalization with humanedited CAD datasets, which, despite its high quality, is quite limited in size. Future work will focus on utilizing LLMs such as ChatGPT or GPT-4 to generate a larger volume of CAD. Yet, LLMgenerated CAD may suffer from lower quality due to noisy and insufficient perturbations. It remains crucial and necessary to extend our PairCFR framework to accommodate such compromised CAD. Furthermore, PairCFR currently utilizes a simple form of contrastive loss, namely InfoNCE. In the future, we aim to investigate alternative contrastive loss variants and assess their potential to further enhance OOD generalization capabilities. Lastly, our experiments were conducted using relatively older and moderately sized LLMs, such as BERT and RoBERTa. We are also interested in exploring the potential improvements on larger LLMs by employing parameter-efficient finetuning methods. 8 Ethics Statement This work focuses on reducing shortcut learning in models trained on CAD, thereby improving their robustness and generalization. Similar to other methods designed to mitigate learning from spurious correlations, our proposed PairCFR could help elicit trust in NLP models. It assists models in betterconsidering context (see Section 3 for details), preventing decision-making based on incomplete or biased information, such as solely on the edited words in CAD. Nonetheless, ensuring absolute fairness in model decisions in complex real-world contexts remains a formidable challenge solely from a model design standpoint. For instance, models could be compromised by low-quality or erroneous counterfactual data, leading to the learning of false relationships and resulting in erroneous or biased real-world decisions. Consequently, it is crucial for practitioners to consider the quality of counterfactual data alongside model design. Acknowledgements This research is supported, in part, by the Joint NTU-WeBank Research Centre on Fintech, Nanyang Technological University, Singapore. This research is supported, in part, by the National Research Foundation, Prime Minister’s Office, Singapore under its NRF Investigatorship Programme (NRFI Award No. NRF-NRFI05-20190002). 
Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore. Xu Guo wants to thank the WallenbergNTU Presidential Postdoctoral Fellowship. Zhiwei Zeng thanks the support from the GopalakrishnanNTU Presidential Postdoctoral Fellowship. This research is also supported by the Shenzhen Science and Technology Foundation (General Program, JCYJ20210324093212034) and the 2022 Guangdong Province Undergraduate University Quality Engineering Project (Shenzhen University Academic Affairs [2022] No. 7). We also appreciate the support from Guangdong Province Key Laboratory of Popular High Performance Computers 2017B030314073, Guangdong Province Engineering Center of China-made High Performance Data Computing System. 11963 References Ananth Balashankar, Xuezhi Wang, Yao Qin, Ben Packer, Nithum Thain, Ed Chi, Jilin Chen, and Alex Beutel. 2023. Improving classifier robustness through active generative counterfactual data augmentation. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 127–139, Singapore. Association for Computational Linguistics. Solon Barocas, Andrew D Selbst, and Manish Raghavan. 2020. The hidden assumptions behind counterfactual explanations and principal reasons. In Proceedings of the 2020 conference on fairness, accountability, and transparency, pages 80–89. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics. Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In International conference on machine learning, pages 1597–1607. PMLR. Zeming Chen, Qiyue Gao, Antoine Bosselut, Ashish Sabharwal, and Kyle Richardson. 2023. DISCO: Distilling counterfactuals with large language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5514–5528, Toronto, Canada. Association for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Tanay Dixit, Bhargavi Paranjape, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. CORE: A retrieve-thenedit framework for counterfactual data generation. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 2964–2984, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Caoyun Fan, Wenqing Chen, Jidong Tian, Yitian Li, Hao He, and Yaohui Jin. 2024. Unlock the potential of counterfactually-augmented data in out-ofdistribution generalization. Expert Systems with Applications, 238:122066. Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894–6910, Online and Punta Cana, Dominican Republic. 
Association for Computational Linguistics. Matt Gardner, Yoav Artzi, Victoria Basmov, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hannaneh Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, and Ben Zhou. 2020. Evaluating models’ local decision boundaries via contrast sets. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1307–1323, Online. Association for Computational Linguistics. Beliz Gunel, Jingfei Du, Alexis Conneau, and Veselin Stoyanov. 2021. Supervised contrastive learning for pre-trained language model fine-tuning. In International Conference on Learning Representations. Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107–112, New Orleans, Louisiana. Association for Computational Linguistics. William Huang, Haokun Liu, and Samuel R. Bowman. 2020. Counterfactually-augmented SNLI training data does not yield better generalization than unaugmented data. In Proceedings of the First Workshop on Insights from Negative Results in NLP, pages 82–87, Online. Association for Computational Linguistics. Nitish Joshi and He He. 2022. An investigation of the (in)effectiveness of counterfactually augmented data. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3668–3681, Dublin, Ireland. Association for Computational Linguistics. Divyansh Kaushik, Eduard Hovy, and Zachary C Lipton. 2020. Learning the difference that makes a difference with counterfactually augmented data. International Conference on Learning Representations (ICLR). Divyansh Kaushik, Amrith Setlur, Eduard Hovy, and Zachary C Lipton. 2021. Explaining the efficacy of counterfactually augmented data. International Conference on Learning Representations (ICLR). Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. 2020. Supervised contrastive learning. Advances in neural information processing systems, 33:18661–18673. Meizhen Liu, Xu Guo, He Jiakai, Jianye Chen, Fengyu Zhou, and Siu Hui. 2023a. InteMATs: Integrating 11964 granularity-specific multilingual adapters for crosslingual transfer. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 5035–5049, Singapore. Association for Computational Linguistics. Meizhen Liu, Jiakai He, Xu Guo, Jianye Chen, Siu Cheung Hui, and Fengyu Zhou. 2023b. Grancats: Cross-lingual enhancement through granularityspecific contrastive adapters. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, CIKM ’23, page 1461–1471, New York, NY, USA. Association for Computing Machinery. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. ArXiv, abs/1907.11692. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. 
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics. Nishtha Madaan, Inkit Padhi, Naveen Panwar, and Diptikalyan Saha. 2021. Generate your counterfactuals: Towards controlled counterfactual generation for text. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 13516–13524. Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428–3448, Florence, Italy. Association for Computational Linguistics. Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2340–2353, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. 2019. Facebook FAIR’s WMT19 news translation task submission. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 314–319, Florence, Italy. Association for Computational Linguistics. Jianmo Ni, Jiacheng Li, and Julian McAuley. 2019. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 188–197, Hong Kong, China. Association for Computational Linguistics. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748. Namuk Park, Wonjae Kim, Byeongho Heo, Taekyung Kim, and Sangdoo Yun. 2023. What do selfsupervised vision transformers learn? In The Eleventh International Conference on Learning Representations. Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language inference. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 180–191, New Orleans, Louisiana. Association for Computational Linguistics. Reid Pryzant, Dan Iter, Jerry Li, Yin Lee, Chenguang Zhu, and Michael Zeng. 2023. Automatic prompt optimization with “gradient descent” and beam search. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 7957–7968, Singapore. Association for Computational Linguistics. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67. Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics. Sara Rosenthal, Noura Farra, and Preslav Nakov. 2017. SemEval-2017 task 4: Sentiment analysis in Twitter. 
In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 502– 518, Vancouver, Canada. Association for Computational Linguistics. Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015. Facenet: A unified embedding for face recognition and clustering. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Indira Sen, Dennis Assenmacher, Mattia Samory, Isabelle Augenstein, Wil Aalst, and Claudia Wagner. 2023. People make better edits: Measuring the efficacy of LLM-generated counterfactually augmented data for harmful language detection. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 10480–10504, Singapore. Association for Computational Linguistics. 11965 Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics. Kihyuk Sohn. 2016. Improved deep metric learning with multi-class n-pair loss objective. Advances in neural information processing systems, 29. Damien Teney, Ehsan Abbasnedjad, and Anton van den Hengel. 2020. Learning what makes a difference from counterfactual examples and gradient supervision. page 580–599, Berlin, Heidelberg. SpringerVerlag. Sandra Wachter, Brent Mittelstadt, and Chris Russell. 2017. Counterfactual explanations without opening the black box: Automated decisions and the gdpr. Harv. JL & Tech., 31:841. Jinqiang Wang, Rui Hu, Chaoquan Jiang, Rui Hu, and Jitao Sang. 2022a. Counterexample contrastive learning for spurious correlation elimination. In Proceedings of the 30th ACM International Conference on Multimedia, MM ’22, page 4930–4938, New York, NY, USA. Association for Computing Machinery. Tongzhou Wang and Phillip Isola. 2020. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 9929–9939. PMLR. Yaushian Wang, Ashley Wu, and Graham Neubig. 2022b. English contrastive learning can learn universal cross-lingual sentence embeddings. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9122–9133, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Zhao Wang and Aron Culotta. 2020. Identifying spurious correlations for robust text classification. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3431–3440, Online. Association for Computational Linguistics. Jiaxin Wen, Yeshuang Zhu, Jinchao Zhang, Jie Zhou, and Minlie Huang. 2022. AutoCAD: Automatically generate counterfactuals for mitigating shortcut learning. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 2302–2317, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics. 
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, and Daniel Weld. 2021. Polyjuice: Generating counterfactuals for explaining, evaluating, and improving models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6707–6723, Online. Association for Computational Linguistics. Zhirong Wu, Yuanjun Xiong, Stella X. Yu, and Dahua Lin. 2018. Unsupervised feature learning via non-parametric instance discrimination. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3733–3742. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc. 11966 A The trap in the CE loss Given a sample, x = [xr, xc]T , associated with the label y =1, and the corresponding counterfactual example, c = [cr, xc]T , with the flipped label, y= 0, by minimizing the cross entropy loss, we compel the model such that fW (x) approaches 1 and fW (x) is close to 0, respectively. This can be equivalently formulated by maximizing the prediction difference, i.e., max[fW (x)−fW (c)]. The sigmoid function, σ(x) = 1 1+e−x , is bounded and monotonically increasing, implying that (wrxr + wcxc) should be as large as possible while (wrcr + wcxc) should be as small as possible. Here, xr and cr are the features before and after editing. The sign of cr should be opposite to the sign of xr such that when fW (x) approaches 1, fW (c) can approach 0. For the first term, we observe that increasing |wr| can lead to an opposite change, i.e., larger wrxr and smaller wrcr. However, the second term, wcxc, is contained in both fW (x) and fW (c). Optimizing wc does not have the opposite effect. B Gradient analysis of CL In this section, we introduce the details of the gradient of CL with respect to the weight W through the negative branches si,n. Before talking details, we rewrite the CL term for convenience, LCL = − E xi∈Pi " log esip/τ esip/τ + P xn∈Ni esin/τ # . (7) The total derivative of LCL w.r.t the model weights be calculated through the chain rule as ∂LCL ∂W = ∂LCL ∂sin × ∂sin ∂W + ∂LCL ∂sip × ∂sip ∂W, (8) where the gradient coming from the branch sin is ∂LCL ∂W sin = ∂LCL ∂sin × ∂sin ∂W. (9) For simplicity, we let sin = zT i zn and drop the denominator, ∥zi ∥∥zn ∥, which is eliminated in the product of partial derivatives. zi =WT xi and zj =WT xn, and then we have ∂sin ∂W = ∂(WT xi)T (WT xn) ∂W = ∂(xT i W)(WT xn) ∂W = xixT nW + xnxT i W = AinW. (10) Here, Ain =xixT n + xnxT i . The CL term of Eq (7) for anchor xi can be further written as, LCL(xi) = − E xp∈Pi  log exp(sip/τ) exp(sip/τ)+ P xn∈Ni exp(sin/τ)   = E xp∈Pi  log  exp(sip/τ)+ X xn∈Ni exp(sin/τ)     − E xp∈Pi(sip/τ). (11) Here, only the first term is a function of si,n. 
Hence, we can compute the gradient of LCL w.r.t. the similarity for a negative sample, si,n, as follows. ∂L(xi) ∂sin = 1 τ E xp∈Pi   exp(sin/τ) exp(sip/τ + P xn∈Ni exp(sin/τ)   = 1 τ Pin (written as Pin). (12) Combining Eq (10) and Eq (12) gives the final gradient from a negatives sample, ∂L(xi) ∂W sin = ∂L(xi) ∂sin × ∂sin ∂W = 1 τ PinAinW. (13) Summing up gradients in Eq (13) from all negative samples, we can derive ∂L(xi) ∂W Ni = ∂L(xi) ∂sin × ∂sin ∂W Ni = 1 τ X xn∈Ni PinAinW. (14) As the gradient contains pair-wise outer products between the anchor point and all its negative samples, it fully captures the overview of the feature space rather than focusing on a local perspective on edited words. C Experimental Details C.1 Training Data We introduce more details of the CAD data used in model training in our experiments. We adopt two counterfactually augmented datasets from IMDb (Maas et al., 2011) and SNLI (Bowman et al., 2015) in (Kaushik et al., 2021). The counterfactually augmented IMDb dataset contains 2440 original sentences, with each sentence having a corresponding revised counterfactual example. In SNLI, annotators can revise both the hypothesis and the premise 11967 for each of two opposite classes, and each sentence has 4 counterfactual examples. After another round of human filtering, the counterfactual augmented SNLI dataset consists of 9064 counterfactuals and 2266 original examples. During training, we split two CAD datasets into train, validation, test sets as shown in Table 5. Table 5: Statistic of human-edited CAD datasets. Dataset #Train #Val #Test Total No. Sentiment Analysis: IMDb Original 1707 245 488 2440 Revised 1707 245 488 2440 CAD 3414 490 976 4880 Natural Language Inference: SNLI Original 1666 200 400 2266 Revised 6664 800 1600 9064 CAD 8330 1000 2000 11330 Table 6: Datasets description. ♯refers to ID datasets. Dataset Domain #Test Sentiment Analysis #class=2 IMDb (Maas et al., 2011)♯ movie reviews 67k Amazon (Ni et al., 2019) service feedback 207k Yelp (Zhang et al., 2015) purchase reviews 38k Twitter (Rosenthal et al., 2017) social microblogs 10.3k SST-2 (Socher et al., 2013) movie reviews 1.82k Natural Language Inference #class=3 SNLI (Bowman et al., 2015)♯ written text 9.82k MNLI-m (Williams et al., 2018) mismatched genres 9.83k MNLI-mm (Williams et al., 2018) matched genres 9.82k Negation (Naik et al., 2018) strong negation 9.83k Spelling-e (Naik et al., 2018) spelling errors 9.14k Word-o (Naik et al., 2018) large word-overlap 9.83k C.2 ID and OOD datasets Here, we provide statistics of in-domain (ID) and out-of-domain (OOD) datasets used to evaluate the generalization of models in Table 6. Since CADs in our experiments are manually revised on samples from IMDb (Maas et al., 2011) and SNLI (Bowman et al., 2015), we include their test datasets for ID evaluation. As for OOD evaluation, we evaluate our sentiment models on Amazon reviews (Ni et al., 2019), Topic-based Tweets sentiment data (Rosenthal et al., 2017), Yelp reviews (Zhang et al., 2015) and SST-2 movie reviews (Socher et al., 2013). On NLI task, we report on the genre-matched (MNLI-m) and genre-mismatched (MNLI-mm) test set of MNLI (Williams et al., 2018), which are more challenging than SNLI due to multiple genres. In addition, We additionally employ the diagnostic datasets Negation, SpellingError, and Word-Overlap provided by Naik et al. (2018) to evaluate models’ reasoning abilities on lexical semantics and grammaticality. Table 7: Model parameter volume in our experiments. 
Model # Parameters BERTbase 110M RoBERTabase 125M SBERT 250M T5base 223M C.3 Implementation details In Table 7, we list the volume of model parameters used in our experiments. In our experiment, we tune hyperparameters of our PairCFR, including learning rate lr, batch size bts, trade-off factor λ, and temperature τ, based on the performance on validation set in full dataset finetuning and few shot setting separately. The best hyperparameters are reported in Table 8 and Table 9. All experiments were conducted on an NVIDIA A100 GPU server equipped with Ubuntu 22.04, featuring 40 GB of GPU memory, 32-core CPUs at 1.5 GHz, and 256 GB of RAM. The test environment was configured with Python 3.8, CUDA 11.8, and Pytorch 2.0. The training time for each hyperparameter configuration is less than one hour. C.4 Hyperparemeter analysis: λ and τ In this study, we investigate the influence of tradeoff factor λ and temperature τ on model generalization. Specifically, we incrementally increase λ or τ from 0.1 to 0.9 by 0.1 and fix other best hyper-parameters searched from grid search. The experimental results on ID and OODs are reported in Figure 4. We observe that with λ or τ increasing from 0.1, the model performance initially increases and then declines. In SA, the model perform better for a larger λ and a lower temperature 0.3 (i.e., λ = 0.7, τ = 0.3), while in NLI, a larger temperature and smaller λ is favored (i.e., λ=0.4, τ =0.7). We hypothesize that in SA, the model may overly depend on perturbed words for predictions, as revision patterns are relatively smaller than in NLI. Therefore, we should incorporate a smaller temperature τ and a higher trade-off λ to introduce a higher regularization from contrastive learning in SA. More insights will be explored in future work. 11968 0.2 0.4 0.6 0.8 80 84 88 92 Accuracy Amazom Yelp Twitter SST-2 IMDb 0.2 0.4 0.6 0.8 30 40 60 70 MNLI-m MNLI-mm Word-o Spelling-e Negation SNLI (a) The impact of trade-off term λ. We fix τ = 0.3 for SA (left) and τ =0.7 for NLI (right), and gradually increase λ. 0.2 0.4 0.6 0.8 80 84 88 92 Accuracy Amazom Yelp Twitter SST-2 IMDb 0.2 0.4 0.6 0.8 30 40 60 70 MNLI-m MNLI-mm Word-o Spelling-e Negation SNLI (b) The impact of temperature τ. We keep λ = 0.7 for SA (left) and λ = 0.4 for NLI (right), and gradually increase τ. Figure 4: The ID and OOD performance of the BERTbase models trained on full CAD for IMDb and SNLI tasks. Grey areas indicate the best hyperparameter settings for λ or τ. 32 64 128 512 1024 76 80 84 88 92 Accuracy (%) IMDb 32 64 128 512 1024 72 76 80 84 88 Amazon 32 64 128 512 1024 80 84 88 92 Yelp 32 64 128 512 1024 66 72 78 84 Twitter 32 64 128 512 1024 68 72 76 80 84 SST-2 32 64 128 512 1024 0 3 6 9 Std. #Train 32 64 128 512 1024 0 2 4 6 8 #Train 32 64 128 512 1024 0 3 6 9 #Train 32 64 128 512 1024 0 4 8 12 #Train 32 64 128 512 1024 2 4 6 8 #Train Vanilla BTSCL CouCL HCAD CFGSL ECF PairCFR (Ours) Figure 5: Few-shot learning results of BERTbase on SA. x-axis represents the number of training samples and y-axis represents the averaged accuracy and standard deviation on ID and OODs. Table 8: PairCFR hyperparameters for full data finetuning. Model lr bts λ τ Sentiment Analysis BERTbase 3e-5 16 0.7 0.3 RoBERTabase 3e-6 16 0.9 0.07 SBERT 5e-6 16 0.7 0.7 T5base 1e-4 16 0.8 0.07 Natural Language Inference BERTbase 3e-5 30 0.4 0.7 RoBERTabase 1e-5 30 0.3 0.8 SBERT 5e-5 30 0.2 0.9 T5base 1e-4 30 0.4 0.7 Table 9: PairCFR hyperparameters for few shot settings. ‘#Train’ means the training number of shots. 
Model #Train lr bts λ τ Sentiment Analysis BERTbase 32 1e-4 4 0.7 0.3 64 1e-5 8 0.7 0.3 128 1e-5 8 0.7 0.3 512 1e-5 16 0.7 0.3 1024 1e-5 16 0.7 0.3 Natural Language Inference BERTbase 50 1e-5 5 0.4 0.7 100 1e-5 5 0.4 0.7 500 1e-5 10 0.4 0.7 1k 1e-5 10 0.4 0.7 4k 1e-5 20 0.4 0.7 C.5 Few-shot learning on SA Here, we present the results of few-shot learning using the BERT model on the SA task, with the number of IMDb augmented data progressively increasing from 32 to 1024, as shown in Figure 5. Similar to the trend observed in few-shot learning for the NLI task, discussed in Section 5.2, our approach demonstrates significant performance improvements even with limited data in the SA task. 11969 Table 10: Results of statistical significance test under the hypothesis that there are no differences between baselines and our approach on both ID and OOD. P-values less than 0.05 are bolded, indicating a substantive disparity between two methods. Sentiment Analysis Natural Language Inference In-Domain Out-of-Dimain In-Domain Out-of-Dimain Baseline vs. Ours IMDb Amazon Yelp Twitter SST-2 SNLI MNLI-m MNLI-mm Negation Spelling-e Word-o BERT-base-uncased Vanilla 0.3237 0.0495 0.0043 2.40E-06 1.63E-05 0.0012 0.0136 0.0111 0.5754 0.0140 0.1182 BTSCL 0.0665 0.0411 0.1075 0.0005 2.11E-06 2.75E-05 0.0044 0.0053 0.1491 0.0101 0.0047 CouCL 7.72E-06 0.8357 6.10E-05 0.0204 0.0012 0.0005 0.0001 8.17E-05 0.7220 0.0007 0.0004 HCAD 0.077 0.1308 0.001 0.0498 0.002 0.0323 0.0382 0.0637 0.1826 0.0590 0.0588 CFGSL 0.3011 0.0457 0.0141 0.0421 0.5235 1.06E-05 0.0932 0.8232 0.0018 0.0078 0.0040 ECF 0.0279 0.0457 6.61E-06 0.0003 0.2573 0.1848 0.0177 0.0867 0.3704 0.0346 0.1361 RoBERTa-base Vanilla 0.0448 0.046 0.0495 0.0029 0.0469 1.45E-06 0.0102 0.0715 0.2057 0.0225 0.0452 BTSCL 0.0394 0.2731 0.0019 0.0231 0.0266 6.26E-05 0.0484 0.3835 0.9955 0.0344 0.0076 CouCL 0.0410 0.1456 0.0462 0.0443 0.0182 0.0922 0.0584 0.0207 0.0396 0.1400 0.0376 HCAD 0.0442 0.0349 0.0154 0.0014 0.0029 6.51E-05 0.0030 0.0005 0.0008 0.0180 0.0286 CFGSL 0.0317 0.0241 0.1834 0.0007 0.0380 0.0348 0.3550 0.0496 0.0033 0.7874 0.0014 ECF 0.0361 0.031 0.0012 0.0012 0.0021 0.0167 0.0147 0.0112 0.0830 0.0121 0.0071 SBERT-multi-qa-cos Vanilla 0.4796 1.56E-08 0.0002 3.66E-05 0.0003 6.48E-05 0.0273 0.0132 0.0076 0.0383 0.0306 BTSCL 0.0470 1.71E-07 2.01E-11 1.11E-07 4.70E-03 2.94E-05 0.0138 0.003 0.0006 0.04611 0.0035 CouCL 0.0097 0.0001 0.0002 0.0004 0.0099 0.0403 0.0448 0.0275 0.1397 0.0569 0.0428 HCAD 0.0173 7.43E-05 0.0025 0.0221 4.51E-05 0.0051 0.0584 0.0457 0.079 0.0422 0.0498 CFGSL 0.0050 0.0006 0.008 4.22E-06 0.0197 0.0325 0.0421 0.03106 0.485 0.0215 0.0533 ECF 0.0959 0.4188 0.3184 0.0013 0.3667 0.0008 0.0019 0.0013 0.1876 0.0019 0.0017 T5-base Vanilla 0.1072 0.0144 6.37E-05 0.0002 0.0112 0.0216 0.0299 0.0162 0.0294 0.0302 0.0088 BTSCL 0.0025 0.0300 8.42E-05 8.42E-05 8.42E-05 0.0207 0.0468 0.0356 0.0349 0.0445 0.0411 CouCL 0.0464 0.1554 0.03123 0.019 0.0027 0.0319 0.0407 0.0459 0.0397 0.0309 0.0211 HCAD 0.0306 0.1720 0.0012 0.0028 0.0001 0.0463 0.0566 0.0772 0.4857 0.0421 0.0438 CFGSL 0.4158 0.1139 0.2299 0.0067 0.0014 0.0497 0.0452 0.0229 0.4665 0.0416 0.0721 ECF 0.0053 0.2914 0.0065 0.1045 0.4612 0.0352 0.0976 0.0452 0.4403 0.0321 0.0813 Table 11: Results of statistical significance test under the hypothesis that there are no differences between two ablation studies. P-values less than 0.05 are bolded, indicating a substantive disparity. 
Sentiment Analysis Natural Language Inference Variants In-Domain Out-of-Dimain In-Domain Out-of-Dimain Control Comparison IMDb Amazon Yelp Twitter SST-2 SNLI MNLI-m MNLI-mm Negation Spelling-e Word-o BERT-base-uncased CE Shuff vs. Pair 0.8727 0.5053 0.9418 0.3465 0.4981 0.1934 0.2881 0.7977 0.4542 0.0450 0.3317 CE+CL Shuff vs. Pair 0.0389 0.9057 0.0055 0.1469 0.0350 0.0120 0.0011 0.1890 0.0268 0.0008 0.0406 ShuffCAD CE vs. CE+CL 0.2311 0.1238 0.0018 0.1890 0.0155 0.5306 0.1866 0.0973 0.2621 0.1722 0.0736 PairCAD CE vs. CE+CL 0.2021 0.3666 0.0417 0.0395 0.0032 0.0135 0.0034 0.0137 0.0293 0.2280 0.0210 RoBERTa-base CE Shuff vs. Pair 0.0376 0.0045 0.0049 0.9751 0.4894 0.0246 0.3194 0.0723 0.0031 0.2519 0.3029 CE+CL Shuff vs. Pair 0.3722 0.1181 0.0720 0.3250 0.0009 0.2123 0.0037 0.3623 0.0072 0.0033 0.0007 ShuffCAD CE vs. CE+CL 0.0005 2.48E-06 6.52E-05 8.76E-07 0.0073 0.0004 0.0178 0.0016 0.0006 0.0540 0.0655 PairCAD CE vs. CE+CL 0.0298 0.0133 0.1017 0.0120 0.0011 0.1585 0.0012 0.0252 0.2565 0.0040 0.2420 SBERT-multi-qa-distilbert-cos CE Shuff vs. Pair 0.0058 9.65E-09 0.0011 0.6263 0.0086 0.0491 0.3697 0.3337 0.2248 0.5029 0.4971 CE+CL Shuff vs. Pair 0.1317 0.0004 0.4958 1.65E-07 0.0027 0.6576 0.6699 0.0187 0.0476 0.4170 0.1311 ShuffCAD CE vs. CE+CL 0.0003 4.29E-04 3.43E-06 0.0049 0.0021 0.4285 0.0930 0.1230 0.5577 0.0494 0.1786 PairCAD CE vs. CE+CL 0.1491 1.41E-06 0.6202 0.0002 0.0002 0.2408 0.1698 0.0011 0.1113 0.0335 0.0118 T5-base CE Shuff vs. Pair 0.8304 0.1841 0.1112 0.0013 0.0006 0.0024 2.99E-06 0.0006 0.9644 0.0003 0.0002 CE+CL Shuff vs. Pair 0.0029 0.1851 0.0096 0.0108 0.0004 0.2966 0.0042 0.0415 0.5030 0.1371 0.1530 ShuffCAD CE vs. CE+CL 0.4340 0.1206 0.0876 0.4625 0.0164 0.5484 0.4942 0.4354 0.4859 0.4489 0.2817 PairCAD CE vs. CE+CL 0.0029 0.1837 0.0096 0.0108 0.0004 0.0481 0.0098 0.0223 0.4851 0.0284 0.0497 11970 C.6 Statistical significance test To ensure that the observed improvements are not due to randomness across multiple trials, we conducted statistical significance tests on comparative experiments and ablation studies. We first check that experimental results from random initialization on both ID and OOD datasets follow a Gaussian distribution, and thus employ a two-sided paired samples T-test. Our T-tests are conducted under the null hypothesis that there are no differences between the two groups of experiments. Table 10 presents the significance test results of our method against all baselines for the comparative experiments (refer to Table 1). We observed that the majority of p-values fall below the conventional confidence level of 0.05, indicating that the improvements in OOD performance achieved by our algorithm over the baselines are statistically significant and not due to randomness. Similarly, Table 11 presents the significance test results of the ablation study (refer to Table 2), verifying the effectiveness of our pairing strategy and CL function. 11971
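The testing procedure in Appendix C.6 can be reproduced with standard statistical routines. A minimal sketch is given below, assuming the arrays hold per-seed accuracies of PairCFR and one baseline on a single dataset; the numeric values are placeholders rather than results from the paper, and the 0.05 threshold follows the convention used in Tables 10 and 11.

# Sketch of the two-sided paired-samples T-test used for significance testing (Appendix C.6).
# The accuracy arrays below are hypothetical placeholders for per-seed results.
import numpy as np
from scipy import stats

ours = np.array([0.842, 0.851, 0.839, 0.848, 0.845])      # per-seed accuracies of PairCFR (placeholder)
baseline = np.array([0.821, 0.833, 0.825, 0.819, 0.828])  # per-seed accuracies of a baseline (placeholder)

# First check that the paired differences are approximately Gaussian, as assumed by the t-test.
_, p_normal = stats.shapiro(ours - baseline)
if p_normal < 0.05:
    print("Differences deviate from a Gaussian; a non-parametric test may be more appropriate.")

# Two-sided paired T-test under the null hypothesis that the two methods do not differ.
result = stats.ttest_rel(ours, baseline)
print(f"t = {result.statistic:.3f}, p = {result.pvalue:.4f}, significant at 0.05: {result.pvalue < 0.05}")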
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11972–11990 August 11-16, 2024 ©2024 Association for Computational Linguistics LLMs Learn Task Heuristics from Demonstrations: A Heuristic-Driven Prompting Strategy for Document-Level Event Argument Extraction Hanzhang Zhou1,2, Junlang Qian1, Zijian Feng1,2, Hui Lu1, Zixiao Zhu1,2, Kezhi Mao1,2∗ 1Nanyang Technological University, Singapore 2Future Resilient Systems, Singapore-ETH Centre, Singapore {hanzhang001, junlang001, feng0119, hui007, zixiao001}@e.ntu.edu.sg [email protected] Abstract In this study, we explore in-context learning (ICL) in document-level event argument extraction (EAE) to alleviate the dependency on large-scale labeled data for this task. We introduce the Heuristic-Driven Link-of-Analogy (HD-LoA) prompting tailored for the EAE task. Specifically, we hypothesize and validate that LLMs learn task-specific heuristics from demonstrations in ICL. Building upon this hypothesis, we introduce an explicit heuristicdriven demonstration construction approach, which transforms the haphazard example selection process into a systematic method that emphasizes task heuristics. Additionally, inspired by the analogical reasoning of human, we propose the link-of-analogy prompting, which enables LLMs to process new situations by drawing analogies to known situations, enhancing their performance on unseen classes beyond limited ICL examples. Experiments show that our method outperforms existing prompting methods and few-shot supervised learning methods on document-level EAE datasets. Additionally, the HD-LoA prompting shows effectiveness in other tasks like sentiment analysis and natural language inference, demonstrating its broad adaptability 1. 1 Introduction Document-level Event Argument Extraction (EAE) aims to transform unstructured event information from documents into structured formats encapsulating event arguments, facilitating their interpretation and application in various domains (Grishman, 2019). The prevalent approach for this task relies on the collection of labeled data and the subsequent model training via supervised learning (Ren et al., 2023; Liu et al., 2023a; Pouran Ben Veyseh et al., 2022; Zhou and Mao, 2022; Du and Cardie, *Corresponding author. 1Our code is available at https://github.com/hzzhou01/HDLoA-Prompting 2020a). While effective, this approach comes with the significant drawback: it necessitates a substantial amount of training data, which is particularly burdensome and costly given the complexity inherent to document-level EAE. In this context, in-context learning (ICL) (Brown et al., 2020; Liu et al., 2022; Zhou et al., 2022), an emergent ability of large language models (LLMs), offers a promising alternative to supervised learning. ICL alleviates the need for large-scale data as it only uses a few examples as input-output pairs of the prompt to guide LLMs in performing the task on an unseen example. However, applying ICL to document-level EAE presents numerous challenges. The ICL performance is highly sensitive to the design of in-context demonstrations, such as the selection of examples and the formatting of reasoning steps (Zhang et al., 2023, 2022; Fu et al., 2022). Consequently, several crucial challenges emerge concerning the prompting strategy: 1) Example Selection Challenge. 
Selecting optimal in-context examples for ICL is pivotal, yet the understanding of what LLMs learn from these examples remains largely under-explored (Wang et al., 2023; Dong et al., 2022). This gap leads to a lack of systematic guidelines, resulting in a disorganized and inefficient example selection process. 2) Context Length Limit. In document-level EAE, selecting multiple documents as ICL examples could significantly extend the context length, potentially surpassing the token limit of LLMs. 3) Abundance of Event Types. The EAE task can involve more than a hundred distinct event types and argument roles. Yet, ICL examples can only capture a narrow subset, leaving the majority of argument roles unseen. Handling unseen classes beyond limited ICL examples is a common problem in classification tasks with diverse class types. 4) Prompting Strategy for Non-reasoning Task. While the chain-of-thought (CoT) prompting is 11972 Figure 1: CoT’s step-by-step reasoning degrades to a single step for non-reasoning tasks. Reasoning steps of reasoning tasks (in orange) and non-reasoning tasks (in blue) are compared. Different colors indicate distinct reasoning steps. Prompts are from (Shum et al., 2023). Figure 2: Heuristics are implicitly embedded within explanations of in-context examples. extensively used across a variety of tasks, its effectiveness is compromised in non-reasoning scenarios. As shown in Figure 1, applying CoT to non-reasoning tasks will degrade its step-by-step reasoning into an potentially inadequate single-step. Consequently, there is a need for prompting strategy tailored for non-reasoning tasks. In this work, we put forward a novel hypothesis that LLMs learn task-specific heuristics from examples and validate it through experiments. Building upon this hypothesis, we propose heuristic-driven link-of-analogy prompting to address the aforementioned challenges. To elaborate: We propose and empirically validate the hypothesis that LLMs learn task specific heuristics from examples in ICL. Heuristics, defined as ’a high-level rule or strategy for inferring answers to a specific task’, play a crucial role in human cognition. Humans use heuristics as efficient cognitive pathways, which often lead to more accurate inferences than complex methods (Gigerenzer and Gaissmaier, 2011; Hogarth and Karelaia, 2007). Similarly, in supervised machine learning (ML) systems, models also learn task-specific patterns through training (Shachaf et al., 2021; Najafabadi et al., 2015). Drawing a parallel, we hypothesize that LLMs learn task-specific heuristics from explanations of in-context examples to aid inference. We qualitatively illustrate how heuristics are implicitly embedded in explanations of in-context examples in Figure 2, and quantitatively validates our hypothesis with experiments detailed in Section 2. Notably, while drawing parallels to supervised ML, ICL is fundamentally different from supervised ML in mechanism: supervised ML learns and updates model parameters during training, whereas LLMs do ICL with all parameters frozen. Therefore, understandings of supervised ML systems (e.g. pattern learning) are not applicable for ICL (Min et al., 2022; Akyürek et al., 2022), which necessitates distinct explorations on the mechanism of ICL. We propose a heuristic-driven demonstration construction method. Based on our hypothesis, task heuristics are crucial for the ICL performance of LLMs, yet they are often implicitly conveyed through examples. 
This implicitness complicates the examination of whether demonstrations contain diverse heuristics and leads to uncertainty about whether LLMs have recognized these heuristics. Furthermore, the selection of in-context examples remains an underexplored challenge for ICL. To address these issues, in parallel with human’s exploitation of explicit heuristics, our method explicitly incorporates task heuristics into demonstrations, transforming the haphazard example selection process into a systematic method that emphasizes task heuristics. We propose the link-of-analogy prompting method that is suitable for non-reasoning tasks. To address the aforementioned challenges of abundance of event types in EAE and the limitations of CoT prompting on non-reasoning tasks, we present the link-of-analogy prompting. Inspired by the analogical reasoning––a core mechanism of human cognition, this approach enables LLMs process new situations (new classes) through drawing an analogy to known situations (known classes). Em11973 pirical results demonstrate its effectiveness in enhancing the ICL performance for classes not seen in ICL examples. Our contributions are as follows: • We introduce a pioneering work to prompting strategies for the document-level EAE, showcasing significant accuracy improvements on two document-level EAE datasets compared to prompting methods and few-shot supervised learning methods. • We investigate what LLMs learn from ICL, and unveil a new insight that LLMs learn taskspecific heuristics from ICL examples. • We propose an heuristic-driven demonstration construction approach, tackling the example selection issue with a fresh perspective on task heuristics, facilitating explicit heuristic learning in ICL. Furthermore, we propose the linkof-analogy prompting, which allows LLMs to process new situations by drawing analogies to known situations. • To further evaluate the adaptability of our method, we validate it on the sentiment analysis and natural language inference tasks, achieving notable accuracy enhancements. 2 What do LLMs learn from the demonstration? The understanding of what LLMs learn from the demonstration of ICL remains an open question. In this work, we hypothesize that LLMs learn task-specific heuristics from examples in ICL. We validate this hypothesis with carefully designed experiments in three aspects. 2.1 Correlation between Example Quantity and Heuristic Diversity in Well-Designed Prompts Our first experiment operates on the assumption that if LLMs indeed learn task-specific heuristics from demonstrations, then successful prompts should inherently incorporate a diverse range of heuristics in their examples, as these heuristics are learnable for LLMs. To examine this proposition, we assess both the quantity of examples and the quantity of different embedded heuristics within prompts from published papers. To objectively identify the number of implicit heuristics embedded in prompts, we employ GPT-4 to recognize the embedded heuristic for each example and to determine if it is a shared heuristic across multiple examples. An detailed example of the prompt we used and the heuristics identified by GPT-4 can be found in Appendix D. We investigate the correlation between the number of examples in a prompt and the number of embedded heuristics within the same prompt, analyzing six SOTA prompting methods applied across three distinct datasets. 
Specifically, prompting methods including CoT (Wei et al., 2022), Automate-CoT (Shum et al., 2023), Auto-CoT (Zhang et al., 2023), Iter-CoT (Sun et al., 2023), Boosted (Pitis et al., 2023), Active-CoT (Diao et al., 2023) are investigated and datasets of commonsense reasoning and arithmetic reasoning are evaluated. Our findings in Figure 3 reveal that: in welldesigned prompts, the number of heuristics closely matches the number of examples. Furthermore, the number of heuristics in carefully constructed prompts significantly exceeds that in randomly constructed prompts. This observation substantiates our statement that successful prompts indeed embed a wide array of heuristics in examples. 2.2 Comparing Diverse-Heuristics and Single-Heuristic Strategies The second experiment empirically evaluates how the diversity of heuristics within examples impacts ICL performance of the LLM. This experiment is premised under the assumption that if LLMs cannot learn heuristics from demonstrations, then demonstrations featuring multiple heuristics should yield similar performance with those incorporating a single heuristic, as heuristics cannot be utilized. To explore this, we compare two distinct example selection strategies. The single-heuristic strategy formulates prompts where all explanations of examples follow a same heuristic. Conversely, the diverseheuristic strategy constructs prompts where all explanations of examples exhibit different heuristics. We construct prompts that follow these two strategies based on prompts in Diao et al. (2023); Shum et al. (2023). The performance comparison of prompts constructed by the two different strategy on the StrategyQA (Geva et al., 2021) and SST-2 (Socher et al., 2013) datasets is illustrated in Figure 4. The results indicates that, given an equal number of examples, the diverse-heuristics strategy significantly outperforms the single-heuristic approach, which contradicts the assumption. This finding not only vali11974 Figure 3: An illustration of the correlation between example quantity and heuristic diversity in well-designed prompts. # Examples: the number of examples used in each prompt of the corresponding paper. # Heuristics: the number of heuristics identified in each prompt of the corresponding paper. # Heuristics in Rand.: the average number of heuristics in the randomly constructed prompt. ER Comp KB Def Chron Others Original Demonstration 78.5 72.7 87.2 85.1 74.7 65.5 Heuristic Deduction 71.4 (-7.1) 65.4 (-7.3) 81.6 (-5.6) 82.9 (-2.2) 70.3 (-4.4) Table 1: Performance comparison between original demonstration and a demonstration with heuristic deduction (replacing the example of a distinct heuristic type with another example containing a repeated heuristic type). Figure 4: Comparison of ICL performance using single-heuristic strategy versus diverse-heuristics strategy across different number of example on the StrategyQA and SST-2 Dataset. ER Comp KB Def Chron Others Count 14 55 125 47 91 168 Table 2: Distribution of samples by heuristic type. "Others" includes samples with heuristics not categorized in the predefined types. dates our hypothesis that LLMs can learn heuristics from in-context examples but also underscores the value of incorporating a variety of heuristics in enhancing ICL performance. 2.3 Impact of Heuristic Deduction Towards ICL Performance To validate our hypothesis, we further investigate the impact of reducing an implicit heuristic embedded in demonstration examples. 
If the classification accuracy of test samples corresponding to this heuristic decreases accordingly, we can validate that LLMs learn heuristics from demonstrations. We use 500 test samples from the StrategyQA (Geva et al., 2021) dataset and the prompt from Shum et al. (2023) for evaluation. As discussed in Section 2.2, we use GPT-4 to identify all implicit heuristics embedded in examples of the demonstration: empathetic reasoning (ER), comparison (Comp), knowledge-based (KB), definition-based (Def), and chronological (Chron) heuristics. The prompt for implicit heuristic identification and LLM output are detailed in Appendix D. We then employ GPT-4 to label each test sample with the corresponding heuristic that could be used to guide the prediction of the sample. Next, we group the test samples by heuristic type, and the statistics are illustrated in Table 2. Finally, given the prompt embedded with five different heuristic types, we eliminate the demonstration of a specific heuristic type by replacing its example with another example containing a repeated heuristic type, and monitor the performance change in the corresponding test group. gpt-4-1106-preview is used for evaluation. Experimental results are demonstrated in Table 1. These results indicate that eliminating the demonstration of a certain heuristic type indeed results in a significant performance drop in the test samples associated with that heuristic, further substantiating our hypothesis that LLMs learn task-specific 11975 Figure 5: An illustration of HD-LoA prompting. heuristics from examples. Interestingly, we also observe that samples with heuristics not represented in the demonstration examples (Others samples) show significantly lower accuracy, which not only support our hypothesis, but also shed light on example selection, suggesting that selecting examples with their implicit heuristics that cover a wider range of test samples is likely to enhance the ICL performance. 3 Heuristic-Driven Demonstration Construction Building on our understanding of heuristic learning during ICL, we aim to address the challenge of example selection for ICL. Experiments in Section 2 indicates that heuristics are crucial for ICL performance of LLMs, yet they are implicitly conveyed through explanations of examples. This implicitness complicates the examination of whether ICL demonstrations contain diverse heuristics and leads to uncertainty about whether LLMs have recognized these heuristics. Additionally, when solving a task, humans possess the ability to not only learn from examples but also learn from heuristics for efficient and accurate inference (Gigerenzer and Gaissmaier, 2011). This leads us to question whether LLMs can similarly leverage explicit heuristics to improve ICL performance. Therefore, we are motivated to explicitly providing LLMs with task-specific heuristics. Our approach is illustrated below: Replacing examples with explicit heuristics: Diverging from traditional prompting strategies that construct prompt with examples where heuristics are implicitly embedded, we propose to replace most examples in the prompt with distinct taskspecific heuristics, as demonstrated by the heuristics in Figure 5. Retaining minimum examples: A minimal number of examples are preserved to (1) illustrate the formatting of target task and reasoning steps, such as one example is required to illustrate the format of our link-of-analogy prompting, and (2) ensure a balanced coverage of labels in prompt to avoid introducing label bias. 
Specifically, for documentlevel EAE task, a single example is maintained to demonstrate the reasoning format. Heuristic generation: A remaining question is how to create the explicit heuristics in the prompt. Both human crafted heuristics and LLM-generated heuristics can be adopted as explicit heuristics. To automate this process, we utilize GPT-4 to generate a set of distinct heuristics S = {s1, s2, ..., sn} for the document-level EAE task. We adopt n = 10 in this work. The prompt for heuristic generation and its output are provided in the Appendix E. Heuristic selection: Given that not every generated heuristic may suit the target task, we introduce a heuristic selection step. Each heuristic in the generated heuristic set S is individually adopt into a prompt, the ICL performance of each heuristic is evaluated using a subset of the training dataset. 11976 Specifically, we employ 1% of the training dataset, identical to the sample size used in the few-shot supervised learning baseline. Through this evaluation, the top-performing heuristics, determined by accuracy, are selected to constitute the explicit heuristic list H in our prompt. We adopt the top 3 heuristics in this work. Through this heuristic selection step, low-quality heuristics are excluded. For example, the semantic role labeling heuristic generated in the heuristic generation step (in Appendix E) is too specific and of lower quality, thus it demonstrates a significantly low evaluation accuracy (26.52%) compared to a high-quality syntactic heuristic (33.69%). There are three advantages of our approach. Firstly, it provides a guidance on the example selection process. The example selection process of ICL is often an indiscriminate, manual process (Liu et al., 2023b; Wei et al., 2022; Zhou et al., 2022). However, our method converts the directionless and indiscriminate process into a methodical approach that emphasizes task-specific heuristics. Secondly, by emulating human cognitive strategies that leverage explicit heuristics for improved inference—a technique supported by cognitive studies (Gigerenzer and Gaissmaier, 2011)—our method enables LLMs to also benefit from heuristic learning during ICL. Finally, it condenses lengthy examples that consists of input-output pairs into compact heuristics, reducing the context length of prompts. 4 Link-of-Analogy Prompting We propose the link-of-analogy prompting to address the challenges below: First, the EAE task is characterized by its extensive variety of argument roles and event types, often exceeding a hundred, yet ICL examples can only cover a very limited subset. This discrepancy raises a critical challenge: designing a prompting strategy that effectively addresses unseen event types. Notably, the issue of handling unseen classes beyond limited ICL examples is a prevalent problem in various NLP tasks. Additionally, to concretize heuristic generation process, we provide heuristics for a specific argument, giver, within the prompt. This leads to the question of how to extend giver heuristics to other argument roles. Finally, as highlighted in the Introduction, applying CoT prompting to non-reasoning tasks tends to degrade the step-by-step analysis into a one-step rationale (Shum et al., 2023; Diao et al., 2023), necessitating more proper prompting strategies for such tasks. Inspired by the analogical reasoning (Gentner and Smith, 2013), a core mechanism of human cognition, we seek to resolve the challenges presented. 
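Before the link-of-analogy prompting is detailed, the heuristic selection step of Section 3 can be summarized in code. The sketch below is only an illustration under assumptions: query_llm and build_prompt stand for whichever LLM interface and prompt template are used, and exact-match accuracy is a simplification of the evaluation on the 1% validation subset.

# Illustrative sketch of the heuristic selection step in Section 3 (not the exact pipeline used in the paper).
# `query_llm` is a hypothetical helper that sends a prompt to an LLM and returns its text output;
# `build_prompt` is a hypothetical template that inserts a single candidate heuristic into the prompt.
from typing import Callable, List, Tuple

def select_heuristics(candidates: List[str],
                      dev_samples: List[Tuple[str, str]],        # (input text, gold answer), ~1% of training data
                      build_prompt: Callable[[str, str], str],
                      query_llm: Callable[[str], str],
                      top_k: int = 3) -> List[str]:
    """Score every candidate heuristic on the small dev split and keep the top-k by accuracy."""
    scored = []
    for heuristic in candidates:
        correct = 0
        for text, gold in dev_samples:
            prediction = query_llm(build_prompt(heuristic, text)).strip()
            correct += int(prediction.lower() == gold.lower())
        scored.append((correct / max(len(dev_samples), 1), heuristic))
    # Low-quality heuristics (e.g. an overly specific one) score poorly and fall out of the top-k.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [heuristic for _, heuristic in scored[:top_k]]

The selected heuristics would then form the explicit heuristic list H shown in Figure 5.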
Humans often understand a new situation by drawing an analogy to a familiar situation. For example, students often solve new problems by mapping solutions from known problems (Ross, 1987). Similarly, we anticipate that LLMs will be able to extract information of unseen events or generate heuristics for unseen argument roles by drawing an analogy to events and heuristics provided in in-context examples. Empirically, we find that LLMs are indeed capable of doing analogical reasoning when prompted appropriately. For example, when provided with the heuristic for giver in the prompt: "[giver] is the person, group, or organization in the document that gives the grant or gift", LLMs can make an analogy and generate the heuristic for the argument vehicle in the target question: "[vehicle] is the means of transport used to move the person or object". To further enhance the analogical reasoning capabilities of LLMs, we introduce our link-ofanalogy (LoA) prompting strategy, which emulates the analogical reasoning process of human. Cognitive science studies reveals that humans perform analogical reasoning through a sequence of retrieval, mapping, and evaluation (Gentner and Forbus, 2011; Gentner and Markman, 1997). In alignment with this process, our method involves the same steps. Specifically, in the retrieve step, given the base argument role rb, a set of heuristics H = {h1, h2, · · · , hk} for identifying rb, a target question and a target argument role rt, the LLM will select the most suitable heuristic hb from H for identifying rt. In the mapping step, the LLM employs analogy mapping rb : hb :: rt : ht to deduce the heuristic ht for rt. The LLM then infers the argument at of the target role based on the heuristic ht. Finally, in the evaluation step, the LLM will reassess the identified argument at. This methodology is exemplified in the in-context example presented in Figure 5. 5 Experiments In this section, we aim to explore the following research questions (RQs) regarding our HeuristicDriven Link-of-Analogy (HD-LoA) prompting. RQ1 Does HD-LoA prompting improve in-context learning performance in document-level EAE task? 11977 Method RAMS DocEE-Normal DocEE-Cross Arg-I Arg-C Arg-C Arg-C Supervised learning (few-shot) EEQA (Du and Cardie, 2020b) 19.54 PAIE (Ma et al., 2022) 29.86 TSAR (Xu et al., 2022) 26.67 CRP (Liu et al., 2023a) 30.09 FewDocAE (Yang et al., 2023) 12.07 10.51 text-davinci-003 Standard (Agrawal et al., 2022) 39.96 31.6 25.55 25.41 CoT (Wei et al., 2022) 43.03 34.94 27.68 28.64 HD-LoA (ours) 46.17 39.59 30.22 31.03 gpt-3.5 -turbo-instruct Standard (Agrawal et al., 2022) 42.44 32.46 25.67 24.48 CoT (Wei et al., 2022) 40.63 33.64 26.77 25.99 HD-LoA (ours) 43.34 37.05 27.98 27.34 gpt-4 Standard (Agrawal et al., 2022) 44.73 37.08 29.53 27.36 CoT (Wei et al., 2022) 44.93 38.09 30.32 30.95 HD-LoA (ours) 52.41 44.12 31.53 33.48 Table 3: Overall performance. In few-shot setting, the scores of supervised learning methods on RAMS dataset are based on results reported in Liu et al. (2023a), where 1% of the training data is used. RQ2 Can HD-LoA prompting effectively mitigate the dependency on extensive labeled data while enhancing accuracy for EAE task? RQ3 Is the HDLoA prompting effective when applied to tasks beyond EAE? RQ4 Do each components of the HD-LoA prompting effectively contributing to its performance? 5.1 Experimental Setup Dataset: For the evaluation of the document-level EAE task, we adopt RAMS (Ebner et al., 2020) and DocEE (Tong et al., 2022) datasets. 
The WIKIEVENTS dataset (Li et al., 2021) is excluded from our study because it relies on preprocessed entity candidates for annotating event arguments the annotation, which diverges from the direct argument identification of LLMs. For evaluation, we follow the metrics in (Ma et al., 2022), namely the argument identification F1 score (Arg-I), and the argument classification F1 score (Arg-C). Additionally, we utilize the SST-2 (Socher et al., 2013) and SNLI (Bowman et al., 2015) datasets to assess the effectiveness of our HD-LoA prompting strategy on other non-reasoning tasks: sentiment analysis and natural language inference. The detailed statistics of the datasets and the number of tested samples are listed in Appendix A. Baselines Our HD-LoA approach is compared against several state-of-the-art prompting methods, including the standard prompting (Agrawal et al., 2022) used in clinical EAE, and the Chainof-Thought (CoT) prompting (Wei et al., 2022). Agrawal et al. (2022) presents the only existing method that prompts LLMs in the context of EAE task. Given to its direct question-and-answer format, we refer to it as ’Standard Prompting’ in accordance with terminology prevalent in ICL research (Wei et al., 2022). Notably, as there is no existing prompting strategies tailored for EAE, neither the standard prompting nor CoT prompting has been applied to document-level EAE datasets in the literature. Thus, we report the reproduced results here. Additionally, we compare our method with various supervised learning methods in EAE, such as FewDocAE (Yang et al., 2023), CRP (Liu et al., 2023a), PAIE (Ma et al., 2022), TSAR (Xu et al., 2022), EEQA (Du and Cardie, 2020b), etc. The few-shot comparison results are based on the few-shot performance reported in Liu et al. (2023a). LLMs: The experiments are carried out using three large language models: the publicly available GPT3 (Brown et al., 2020) in its text-davinci-003 and gpt-3.5-turbo-instruct versions (Ouyang et al., 2022), as well as GPT-4 (OpenAI, 2023). Notably, due to the high cost associated with GPT-4, its evaluation is limited to part of the dataset. More experimental details are in Appendix A and the prompts we used are in Appendix F. 5.2 Overall Experimental Results Addressing RQ1, the experimental results presented in Table 3 indicate that our HD-LoA prompting significantly enhances in-context learning for document-level EAE task. The HD-LoA method consistently surpasses CoT prompting (Wei et al., 2022) across all three LLMs and both datasets, achieving the largest F1 score improvements of 4.65%, 3.41%, and 6.03% in Arg-C on each LLM, 11978 respectively. In addition, the improvement over the standard prompting (Agrawal et al., 2022) reaches 7.99% on the text-davinci-003 model. In response to RQ2, our HD-LoA method, augmented with external knowledge in heuristics, significantly enhances performance in few-shot settings compared to supervised learning approaches. With only one example adopted in the prompt, our HD-LoA achieves a 9.50% F1 score improvement over the CRP method (Liu et al., 2023a) on the RAMS dataset using the text-davinci-003 model. Similarly, on the DocEE dataset, our method achieves a substancial 20.52% improvement against FewDocAE (Yang et al., 2023). Experimental findings indicate that our method can successfully mitigate the document-level EAE task’s reliance on extensive labeled data while enhancing accuracy. 
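For readers unfamiliar with the span-level metrics referenced in Section 5.1, the sketch below shows one common way Arg-I and Arg-C F1 are computed. It is a simplification under assumptions (exact-match spans, a (event id, span, role) triple format) and is not claimed to be identical to the scorer of Ma et al. (2022).

# Simplified illustration of argument identification (Arg-I) and classification (Arg-C) F1 scores.
# Gold and predicted arguments are assumed to be (event_id, span_text, role) triples.
from typing import List, Tuple

def _f1(correct: int, num_pred: int, num_gold: int) -> float:
    precision = correct / num_pred if num_pred else 0.0
    recall = correct / num_gold if num_gold else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def arg_f1(gold: List[Tuple[str, str, str]], pred: List[Tuple[str, str, str]]) -> Tuple[float, float]:
    gold_spans = {(event, span) for event, span, _ in gold}        # Arg-I ignores the role label
    arg_i_correct = sum((event, span) in gold_spans for event, span, _ in pred)
    arg_c_correct = sum(item in set(gold) for item in pred)        # Arg-C also requires the correct role
    return _f1(arg_i_correct, len(pred), len(gold)), _f1(arg_c_correct, len(pred), len(gold))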
SST-2 SNLI CoT 91.39 77.97 HD-LoA (ours) 94.26 80.60 Table 4: Evaluation of the HD-LoA prompting on sentiment analysis and natural language inference tasks. 5.3 Adaptability of HD-LoA Prompting for Other Tasks In addressing RQ3, we have extended our HD-LoA prompting method to sentiment analysis (SA) and natural language inference (NLI) tasks, utilizing the SST-2 (Socher et al., 2013) and SNLI (Bowman et al., 2015) datasets for evaluation. We adopt the CoT style prompts on these two datasets from Shum et al. (2023). Experimental results are presented in Table 4. Compared to CoT prompting, our method gets accuracy enhancements of 2.87% and 2.63% on SST-2 and SNLI datasets, respectively. These findings indicate that our HD-LoA prompting can be effectively adapted to a diverse array of non-reasoning NLP tasks. The prompts for SA and NLI tasks are provided in Appendix F. 5.4 Comparison with Fully Trained Supervised Models We also compare our HD-LoA method with supervised learning method that trained on the entire dataset. It is anticipated that these models trained on thousands of samples would exhibit higher accuracy compared to our method, which employs only a single sample in the prompt. Nevertheless, HDLoA prompting demonstrates competitive perforFigure 6: Experimental results of ablations. mance against fully trained supervised methods and even outperform these extensively trained models on the DocEE dataset in the cross-domain setting. Experimental results are in Table 6 of Appendix B. 5.5 Ablations To address RQ4, we conduct further experiments as follows: • Ablation Experiments: We conduct ablation studies on removing the explicit heuristics and removing the link-of-analogy prompting strategy from our prompt. As presented in Figure 6, experimental results on the RAMS dataset demonstrate that removing either the task-specific heuristics or link-of-analogy prompting will significantly degrade the ICL performance of the HD-LoA prompting, suggesting the effectiveness of each component of our prompting strategy. • Seen Classes and Unseen Classes Accuracy Increase Comparison for LoA: To further validate the objective of the LoA prompting strategy, which aims to enhance ICL performance for unseen classes in the prompt, we evaluate and compare the accuracy increase facilitated by LoA prompting for both seen and unseen classes in Appendix D. Results detailed in Appendix C show that LoA prompting is indeed effective in enhancing ICL performance on classes unseen in the prompt. 6 Understanding Why HD-LoA Prompting Works Following the empirical validation of the effectiveness of our HD-LoA prompting, this section delves into an analysis to elucidate why our method works. Analysis of heuristic-driven demonstration construction method: Firstly, our method naturally incorporates diverse distinct heuristics in prompt. As 11979 shown in Section 2.2, inclusion of diverse heuristics can significantly boost the ICL performance. In addition, cognitive research finds that humans use heuristics as efficient cognitive pathways to achieve more accurate inferences compared to complex methods (Gigerenzer and Gaissmaier, 2011; Hogarth and Karelaia, 2007). Paralleling this human cognitive strategy, we enable LLMs to learn from explicit heuristics to enhance inference. Specifically, for LLMs demonstrating suboptimal performance with Standard Prompting and in non-reasoning tasks where definitive rationales are elusive, the provision of explicit heuristics offers LLMs helpful strategies to use and enhance inference. 
Moreover, as discussed in Section 2, LLMs use implicit heuristics embedded conventional prompts to facilitate inference. By converting these implicit heuristics to explicit heuristics offers a more straightforward way to utilize heuristics and may potentially simplify the utilization of heuristics by LLMs. Analysis of the link-of-analogy prompting: The LoA prompting, which is inspired by the analogical reasoning of human cognition, enables LLMs to process new situations by drawing analogies to known situations. This ability is particularly useful in ICL, where LLMs are always facing unseen samples and unseen classes. As evidenced by experiments in Appendix C, the LoA prompting is indeed effective in enhancing ICL performance for classes unseen in the prompt. 7 Related Work Document-level EAE Existing document-level EAE studies are mostly based on supervised learning methods, which relies on the extensive collection of labeled data (Ma et al., 2022; Pouran Ben Veyseh et al., 2022; Zhou and Mao, 2022; Xu et al., 2021; Ebner et al., 2020; Du and Cardie, 2020a). Only Agrawal et al. (2022) exploits adopting LLMs on clinical EAE though standard prompts that not involve any reasoning strategies. Considering the potential of ICL to reduce the dependency on large-scale labeled datasets and the revolutionize impact of LLMs, it is lack of study on prompting strategy tailored for the EAE task. In-context learning ICL enables LLMs to perform a target task by feeding a few prompted examples as part of the input (Brown et al., 2020). As the mechanism of ICL is fundamentally different from supervised ML, the working mechanism of ICL remains an open question (Dong et al., 2022). Few studies have conducted preliminary explorations: Min et al. (2022) showed that the label space, input text distribution and overall format contribute to the ICL performance. Liu et al. (2022) concluded that examples that are semantically similar to the test sample are more effective. Akyürek et al. (2022) found that transformer-based ICL can implement standard finetuning implicitly. In this work, we further hypothesize and validate that LLMs learn task-task specific heuristics from examples via ICL. Moreover, the performance of ICL is very sensitive to example selection (Gonen et al., 2022) and the optimal selection criteria remains unclear. Various studies proposed different ways: selecting examples based on complexity (Fu et al., 2022), mutual information (Sorensen et al., 2022), diversity (Zhang et al., 2023), labeled dataset (Shum et al., 2023), etc. In this work, we convert the indiscriminate example selection process into a methodical approach that emphasizes task heuristics, making the example selection process more transparent. 8 Conclusion In this work, we hypothesize and validate that LLMs learn task-specific heuristics from demonstrations during ICL, which can provide a guidance and simplify the example selection process. Building upon this hypothesis, we introduce an explicit heuristic-driven demonstration construction strategy, and propose a link-of-analogy prompting method. These methods shed light on the heuristic learning of LLMs and the challenge of handling unseen classes in ICL. Extensive experimentation demonstrates the effectiveness and adaptability of our HD-LoA prompting. Limitations Dependency on advanced reasoning abilities of LLMs. In this work, we aims to explore the upper bounds of in-context learning performance on EAE task in the few-shot setting. 
Our method’s reliance on using the sophisticated reasoning capabilities in LLMs makes it unsuitable for models with limited reasoning capabilities. For example, the limited reasoning ability of the gpt-3.5-turbo-instruct model could hinder the performance of our method. However, our findings that LLMs can learn heuristics from in-context examples is applicable to diverse LLMs. 11980 Heuristic Quality. The heuristic quality is important for our method. We address this issue by enhancing the probability of generating high-quality heuristics and filtering out low-quality heuristics. We generates an excessive number of heuristic candidates to increase the chances of including highquality heuristics. Subsequently, we filter out lowquality heuristics by assessing the accuracy of each heuristic candidate on a small set of samples. Future work could explore more sophisticated heuristic generation strategies, such as generating heuristics with diverse granularity or refining heuristics based on feedback from misclassified examples. Acknowledgements The authors would like to thank Edmond Lo, Lihui Chen, Xiyu Zhang, and the anonymous reviewers for their constructive comments and suggestions. The research was conducted at the Future Resilient Systems at the Singapore-ETH Centre, which was established collaboratively between ETH Zurich and the National Research Foundation Singapore. This research is supported by the National Research Foundation Singapore (NRF) under its Campus for Research Excellence and Technological Enterprise (CREATE) programme. References Monica Agrawal, Stefan Hegselmann, Hunter Lang, Yoon Kim, and David Sontag. 2022. Large language models are few-shot clinical information extractors. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 1998–2022, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Ekin Akyürek, Dale Schuurmans, Jacob Andreas, Tengyu Ma, and Denny Zhou. 2022. What learning algorithm is in-context learning? investigations with linear models. arXiv preprint arXiv:2211.15661. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632–642, Lisbon, Portugal. Association for Computational Linguistics. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Shizhe Diao, Pengcheng Wang, Yong Lin, and Tong Zhang. 2023. Active prompting with chain-ofthought for large language models. arXiv preprint arXiv:2302.12246. Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, and Zhifang Sui. 2022. A survey for in-context learning. arXiv preprint arXiv:2301.00234. Xinya Du and Claire Cardie. 2020a. Documentlevel event role filler extraction using multigranularity contextualized encoding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8010–8020, Online. Association for Computational Linguistics. Xinya Du and Claire Cardie. 2020b. Event extraction by answering (almost) natural questions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 671– 683, Online. 
Association for Computational Linguistics. Seth Ebner, Patrick Xia, Ryan Culkin, Kyle Rawlins, and Benjamin Van Durme. 2020. Multisentence argument linking. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8057–8077, Online. Association for Computational Linguistics. Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and Tushar Khot. 2022. Complexity-based prompting for multi-step reasoning. arXiv preprint arXiv:2210.00720. Dedre Gentner and Kenneth D Forbus. 2011. Computational models of analogy. Wiley interdisciplinary reviews: cognitive science, 2(3):266–276. Dedre Gentner and Arthur B Markman. 1997. Structure mapping in analogy and similarity. American psychologist, 52(1):45. Dedre Gentner and Linsey A Smith. 2013. Analogical learning and reasoning. In The Oxford Handbook of Cognitive Psychology. Oxford University Press. Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021. Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies. Transactions of the Association for Computational Linguistics, 9:346– 361. Gerd Gigerenzer and Wolfgang Gaissmaier. 2011. Heuristic decision making. Annual review of psychology, 62:451–482. Hila Gonen, Srini Iyer, Terra Blevins, Noah A Smith, and Luke Zettlemoyer. 2022. Demystifying prompts in language models via perplexity estimation. arXiv preprint arXiv:2212.04037. Ralph Grishman. 2019. Twenty-five years of information extraction. Natural Language Engineering, 25(6):677–692. 11981 Robin M Hogarth and Natalia Karelaia. 2007. Heuristic and linear models of judgment: Matching rules and environments. Psychological review, 114(3):733. Sha Li, Heng Ji, and Jiawei Han. 2021. Documentlevel event argument extraction by conditional generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 894–908, Online. Association for Computational Linguistics. Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2022. What makes good in-context examples for GPT-3? In Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 100–114, Dublin, Ireland and Online. Association for Computational Linguistics. Jian Liu, Chen Liang, Jinan Xu, Haoyan Liu, and Zhe Zhao. 2023a. Document-level event argument extraction with a chain reasoning paradigm. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9570–9583, Toronto, Canada. Association for Computational Linguistics. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2023b. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 55(9):1–35. Yubo Ma, Zehao Wang, Yixin Cao, Mukai Li, Meiqi Chen, Kun Wang, and Jing Shao. 2022. Prompt for extraction? PAIE: Prompting argument interaction for event argument extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6759–6774, Dublin, Ireland. Association for Computational Linguistics. Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the role of demonstrations: What makes in-context learning work? 
In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11048–11064, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Maryam M Najafabadi, Flavio Villanustre, Taghi M Khoshgoftaar, Naeem Seliya, Randall Wald, and Edin Muharemagic. 2015. Deep learning applications and challenges in big data analytics. Journal of big data, 2(1):1–21. OpenAI. 2023. Gpt-4 technical report. ArXiv, abs/2303.08774. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744. Silviu Pitis, Michael R Zhang, Andrew Wang, and Jimmy Ba. 2023. Boosted prompt ensembles for large language models. arXiv preprint arXiv:2304.05970. Amir Pouran Ben Veyseh, Minh Van Nguyen, Franck Dernoncourt, Bonan Min, and Thien Nguyen. 2022. Document-level event argument extraction via optimal transport. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1648– 1658, Dublin, Ireland. Association for Computational Linguistics. Yubing Ren, Yanan Cao, Ping Guo, Fang Fang, Wei Ma, and Zheng Lin. 2023. Retrieve-andsample: Document-level event argument extraction via hybrid retrieval augmentation. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 293–306, Toronto, Canada. Association for Computational Linguistics. Brian H Ross. 1987. This is like that: The use of earlier problems and the separation of similarity effects. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13(4):629. Gal Shachaf, Alon Brutzkus, and Amir Globerson. 2021. A theoretical analysis of fine-tuning with linear teachers. Advances in Neural Information Processing Systems, 34:15382–15394. Kashun Shum, Shizhe Diao, and Tong Zhang. 2023. Automatic prompt augmentation and selection with chain-of-thought from labeled data. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 12113–12139, Singapore. Association for Computational Linguistics. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics. Taylor Sorensen, Joshua Robinson, Christopher Rytting, Alexander Shaw, Kyle Rogers, Alexia Delorey, Mahmoud Khalil, Nancy Fulda, and David Wingate. 2022. An information-theoretic approach to prompt engineering without ground truth labels. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 819–862, Dublin, Ireland. Association for Computational Linguistics. Jiashuo Sun, Yi Luo, Yeyun Gong, Chen Lin, Yelong Shen, Jian Guo, and Nan Duan. 2023. Enhancing chain-of-thoughts prompting with iterative bootstrapping in large language models. arXiv preprint arXiv:2304.11657. 11982 MeiHan Tong, Bin Xu, Shuai Wang, Meihuan Han, Yixin Cao, Jiangqi Zhu, Siyu Chen, Lei Hou, and Juanzi Li. 2022. DocEE: A large-scale and finegrained benchmark for document-level event extraction. 
In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3970–3982, Seattle, United States. Association for Computational Linguistics. Lean Wang, Lei Li, Damai Dai, Deli Chen, Hao Zhou, Fandong Meng, Jie Zhou, and Xu Sun. 2023. Label words are anchors: An information flow perspective for understanding in-context learning. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9840–9855, Singapore. Association for Computational Linguistics. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. 2022. Rationaleaugmented ensembles in language models. arXiv preprint arXiv:2207.00747. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903. Runxin Xu, Tianyu Liu, Lei Li, and Baobao Chang. 2021. Document-level event extraction via heterogeneous graph-based interaction model with a tracker. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3533–3546, Online. Association for Computational Linguistics. Runxin Xu, Peiyi Wang, Tianyu Liu, Shuang Zeng, Baobao Chang, and Zhifang Sui. 2022. A two-stream AMR-enhanced model for document-level event argument extraction. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5025–5036, Seattle, United States. Association for Computational Linguistics. Xianjun Yang, Yujie Lu, and Linda Petzold. 2023. Few-shot document-level event argument extraction. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8029–8046, Toronto, Canada. Association for Computational Linguistics. Yiming Zhang, Shi Feng, and Chenhao Tan. 2022. Active example selection for in-context learning. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9134–9148, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. 2023. Automatic chain of thought prompting in large language models. In The Eleventh International Conference on Learning Representations (ICLR 2023). Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, and Ed Chi. 2022. Least-to-most prompting enables complex reasoning in large language models. arXiv preprint arXiv:2205.10625. Hanzhang Zhou and Kezhi Mao. 2022. Document-level event argument extraction by leveraging redundant information and closed boundary loss. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3041–3052, Seattle, United States. Association for Computational Linguistics. A Experimental Details The statistics of the dataset are provided in Table 5. We use the test split of RAMS dataset and the validation split of SST-2 for evaluation, following the setting in Wang et al. (2022). Considering the extensive size of the DocEE and SNLI datasets, which makes a full-scale evaluation using LLMs impractical, we follow Shum et al. (2023); Wang et al. 
(2022) and evaluate a subset of these datasets. Owing to the substantial costs associated with deploying GPT-4, we restrict its evaluation on the RAMS dataset and DocEE dataset to 200 samples. In addition, regarding the DocEE dataset, it presents two distinct settings. In the conventional configuration, the training and testing data share an identical distribution. Conversely, the crossdomain setup features training and testing data composed of non-overlapping event types. Furthermore, our heuristic-driven demonstration construction method necessitates far fewer examples than traditional prompting methods, only keeping the minimum number of examples to avoid bias in example answers. Specifically, for the EAE task, we use only one example, and for sentiment analysis and natural language inference tasks, two and three examples are employed respectively. We evaluate our prompting method on text-davinci-003, gpt-3.5-turbo-instruct and GPT-4 (OpenAI, 2023). The pricing for running these models ranges from USD 0.0015 per 1,000 tokens to USD 0.03 per 1,000 tokens. The gpt-3.5-turbo-instruct model is of the lowest cost but exhibits limited reasoning capabilities. We employ these LLM models from the OpenAI API. During the all experiments, the temperature is fixed 11983 Dataset Task Type # Example # Eval. Eval. Split RAMS (Ebner et al., 2020) Doc-Level EAE 1 871 Test DocEE (Tong et al., 2022) Doc-Level EAE 1 800 Test SST-2 (Socher et al., 2013) Sentiment Analysis 2 872 Validation SNLI (Bowman et al., 2015) Natural Language Inference 3 500 Test Table 5: The overall statistics of the dataset. # Example: The number of examples used in the HD-LoA prompting. # EVAL.: the number of samples used for evaluation of different prompting methods. EVAL. Split: evaluation split. Figure 7: Seen classes and unseen classes accuracy increase comparison with LoA prompting. as 0. In the evaluation of document-level EAE datasets, we omit articles from both the ground truth and the prediction during the assessment to align the more closely with the real content of event arguments. B Comparison with Fully Supervised Methods We compare our HD-LoA method with supervised learning method that trained on the entire dataset for document-level EAE task. As illustrated in Table 6, it is anticipated that these models trained on thousands of samples would exhibit higher accuracy compared to our HD-LoA method, which employs only a single labeled sample. Nevertheless, HD-LoA prompting demonstrates competitive performance against supervised methods and even outperform these extensively trained models on the DocEE dataset in the cross-domain setting. This finding also illustrates the effectiveness of our HDLoA prompting strategy, particularly in scenarios where it is impractical and costly to build large annotated datasets. C Seen Classes and Unseen Classes Accuracy Increase Comparison for LoA We conduct experiment to validate the effectiveness of the LoA prompting strategy in enhancing ICL performance for unseen classes. Given that incontext examples can only capture a narrow subset of classes (seen classes), leaving the majority of argument roles unseen, we assess and compare the accuracy increase of adopting LoA prompting for seen classes against unseen classes. Experimental results in Figure 7 show that LoA prompting results in a more significant accuracy increase on unseen classes compared to seen classes. It indicates that the LoA prompting is indeed effective in enhancing ICL performance on classes unseen in the prompt. 
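The article-removal step described in Appendix A can be sketched as follows; the exact normalization applied in the experiments may differ, and the helper name is illustrative only.

# Sketch of stripping English articles from predictions and ground truth before scoring (Appendix A).
import re

_ARTICLES = re.compile(r"\b(a|an|the)\b", flags=re.IGNORECASE)

def normalize_argument(text: str) -> str:
    """Lowercase, remove articles, and collapse whitespace so that e.g. 'The White House' matches 'White House'."""
    return " ".join(_ARTICLES.sub(" ", text.lower()).split())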
D Recognize Implicit Heuristics of In-Context Examples by GPT-4 The prompt we use to recognize implicit heuristics inherent in examples and the output of GPT-4 is given below. 11984 Method RAMS DocEE-Normal DocEE-Cross Arg-I Arg-C Arg-C Arg-C Supervised learning EEQA (Du and Cardie, 2020b) 48.70 46.70 33.50 24.00 MG-Reader (Du and Cardie, 2020a) 32.90 21.40 BART-Gen (Li et al., 2021) 51.20 47.10 OntologyQA (Tong et al., 2022) 41.00 29.80 PAIE (Ma et al., 2022) 56.80 52.20 text-davinci-003 HD-LoA (ours) 46.17 39.59 30.22 31.03 Table 6: Comparison with Fully Trained Supervised Models. Prompt Input What is the most critical and profound heuristic at play in each of the following examples? If any two examples share the same heuristic , please indicate this connection. Use no more than two sentences to illustrate the heuristics of each example. example 1 Q: Did the death of Helen Palmer have a significant effect on Dr. Seuss? A: Helen Palmer was Dr. Seuss 's wife. The death of a spouse can have a significant effect on a person. Thus , the death of Helen Palmer could have a significant effect on Dr. Seuss. So the answer is yes. example 2 Q: Would a moose hypothetically be too much for a minotaur to devour whole? A: A moose is about 1,000 pounds. A minotaur is about 1,000 pounds. Thus , a moose would be too much for a minotaur to devour whole. So the answer is yes. example 3 Q: Yes or no: Is H's most common two letter pair partner a freebie in Wheel of Fortune bonus round? A: The most common two letter pair partner for H is E. E is a freebie in the Wheel of Fortune bonus round. Thus , H's most common two letter pair partner is a freebie in the Wheel of Fortune bonus round. So the answer is yes. example 4 Q: Yes or no: Would bald eagle deliver an urgent message before B-52? A: The top speed of a bald eagle is about 80 mph. The top speed of a B-52 is about 650 mph. Thus , a B-52 would be faster than a bald eagle. So the answer is no. example 5 Q: Yes or no: Does an individual oceanographer study many sciences? A: Oceanography is the study of the ocean. The ocean is a complex system. Thus , oceanography is a multidisciplinary science. So the answer is yes. example 6 Q: Yes or no: Could Steven Spielberg send emails as a child? A: Steven Spielberg was born in 1946. Email was invented in 1971. Thus , Steven Spielberg could not send emails as a child. So the answer is no. LLM Output Example 1: This employs the "empathetic reasoning" heuristic , assuming that because most people would be significantly affected by the death of a spouse , so too would Dr. Seuss. Example 2: The "comparison heuristic" is used here , comparing the size and weight of a moose to that of a minotaur to make a conclusion about the feasibility of consumption. Example 3: This uses the "knowledge -based" heuristic , using known facts about a game show 's rules to answer the question. Example 4: This is another instance of the "comparison heuristic ," this time comparing the top speeds of a bald eagle and a B-52 to determine which would deliver a message faster. Example 5: The "definition -based" heuristic is applied here , relying on the understanding of what the field of oceanography entails to conclude that it involves the study of multiple sciences. Example 6: The "chronological heuristic" is used here , comparing the timelines of Spielberg 's childhood and the invention of email to answer the question. 
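Operationally, the heuristic-recognition step shown above is a single chat-completion call with temperature 0, as described in Appendix A. Below is a hedged sketch assuming the openai>=1.0 Python client and an OPENAI_API_KEY in the environment; the model id and the helper name are illustrative and not taken from the paper's release.

```python
# Sketch of querying GPT-4 for the implicit heuristics of in-context examples.
from openai import OpenAI

client = OpenAI()

def recognize_heuristics(examples: list, model: str = "gpt-4") -> str:
    prompt = (
        "What is the most critical and profound heuristic at play in each of the "
        "following examples? If any two examples share the same heuristic, please "
        "indicate this connection. Use no more than two sentences to illustrate "
        "the heuristics of each example.\n\n"
        + "\n\n".join(f"example {i + 1}\n{example}" for i, example in enumerate(examples))
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # all experiments fix the temperature to 0
    )
    return response.choices[0].message.content
```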
E Explicit Heuristic Generation by GPT-4 The prompt we utilize to generate explicit heuristics for document-level EAE and the output by GPT4 is provided below. 11985 Prompt Input Generate 10 high -level and concise heuristics for identifying the event argument of 'giver ' in a document , as part of the event argument extraction task. Each heuristic should offer a distinct perspective and adhere to the format demonstrated in the provided example. Example heuristic: Semantic heuristic: The 'giver ' is identified as the individual , group , or organization mentioned in the document that is responsible for providing a gift or grant. LLM Output Grammatical heuristic: The 'giver ' is typically the subject in a sentence structure where a verb of giving (e.g., 'donate ', 'provide ', 'offer ') is used , indicating the action performer. Lexical heuristic: The 'giver ' can often be identified by keywords or phrases that signify giving , such as 'donor ', 'benefactor ', or 'sponsor ', appearing in close proximity to the action of giving. Syntactic heuristic: The 'giver ' may be recognized by analyzing sentence structure , often appearing before prepositional phrases starting with 'to ' that introduce the recipient (e.g., "X gives Y to Z", X is the 'giver '). Semantic role labeling heuristic: In sentences where an action of giving is present , the 'giver ' is likely the entity filling the semantic role of 'Agent ' or 'Source ', initiating the action. Coreference resolution heuristic: The 'giver ' might be determined by identifying pronouns or noun phrases that refer back to an individual , group , or organization previously mentioned as performing the action of giving. Dependency parsing heuristic: In parsing the sentence structure , the 'giver ' is often connected through a dependency relation (e.g., 'nsubj ' for nominal subject) to the main verb representing the giving action. Discourse analysis heuristic: The 'giver ' can be inferred from the broader context or narrative of the document , where a party is described or implied to be in a position to give , support , or provide resources. Named Entity Recognition (NER) heuristic: The 'giver ' is often a named entity categorized as a person , organization , or geopolitical entity , involved in a transactional or giving action. Pragmatic heuristic: The 'giver ' may be implied through indirect speech acts or politeness strategies , where the action of giving is not explicitly stated but understood from the context or social norms. Co -occurrence heuristic: The 'giver ' might be associated with frequent co-occurrence with verbs or nouns related to giving in a large text corpus , indicating a statistical likelihood of being the giving party. F Full Prompts F.1 The Demonstration for SST-2 Dataset Instruction: You are to perform a 2-class sentiment analysis on the given sentences. For each sentence , determine if the sentiment is either "positive" or "negative ". Specifically , you will leverage the pattern in the pattern list below to recognize the sentiment of the target sentence. For each sentence , you will select the most appropriate one or two patterns to identify its sentiment. The identification of a sentence based on different patterns should not be interconnected. Finally , re -evaluate the identified sentiment to get the final answer. Pattern list: [ Overall Context Pattern: Examine the general tone of the entire sentence. Determine if it overall express a positive or negative sentiment , regardless of individual words. 
Descriptive Phrases pattern: Identify key adjectives or descriptive phrases that convey strong feelings or opinions about the subject. Negation Handling Pattern: Look for negative words like "not ," "didn 't," or "never" that might reverse the sentiment of the words following them. Ensure the reversed sentiment is understood correctly. Comparatives Pattern: Search for words or phrases that compare the subject to something else , suggesting superiority , inferiority , or equality. Adverbs and Intensifiers Pattern: Spot adverbs or intensifiers that amplify the sentiment of the associated adjective or verb. They often provide a clue about the strength of the sentiment. ] Question: What is the sentiment of the following sentence? Sentence: "more than another " best man " clone by weaving a theme throughout this funny film" Answer: Step 1: Selected Patterns: Overall Context Pattern and Descriptive Phrases Pattern (Phrases: "weaving a theme" and "funny film"). Step 2: Apply selected patterns to identify the sentiment of the sentence independently. Step 2.1: Based on Overall Context pattern: The entirety of the sentence portrays a film that is superior and offers added value , emphasizing its distinctiveness and humorous quality. The overall sentiment is positive. Step 2.2: Based on Descriptive Phrases pattern: The descriptive phrases highlight a positive sentiment in the film being more than just a clone and having a funny theme. The sentiment is positive. Step 3: Re -evaluate sentiment: sentiment: positive Question: What is the sentiment of the following sentence? Sentence: "contains no wit , only labored gags" Answer: Step 1: Selected Patterns: Overall Context Pattern and Negation Handling Pattern (Phrase: "contains no"). Step 2: Apply selected patterns to identify the sentiment of the sentence independently. Step 2.1: Based on Overall Context pattern: The entire sentence conveys a lack of genuine humor and wit , and instead presents the humor as contrived or forced. The overall sentiment is negative. Step 2.2: Based on Negation Handling Pattern: The negation "contains no" highlights a lack of wit. It is further emphasized by "labored gags", suggesting forced or contrived humor. Thus , the sentiment is negative regarding the quality or genuineness of the humor. Step 3: Re -evaluate sentiment: 11986 sentiment: negative F.2 The Demonstration for SNLI Dataset Please solve the natural language inference task. Specifically , given a premise and a hypothesis , determine the validity of the hypothesis based on the premise: Yes: The hypothesis is logically derived or directly follows from the premise. No: The premise provides evidence that refutes the hypothesis. It is impossible to tell: The premise does not provide sufficient information to confirm or refute the hypothesis. You will select the most appropriate pattern in the pattern list below to classify the natural language inference task. For each sentence , you will use the selected patterns to identify the relationship between the premise and the hypothesis. Pattern list: [ Explicit Evidence Pattern: When the hypothesis directly restates or paraphrases information present in the premise , i.e., premise provides direct evidence that supports the hypothesis , the answer is "yes". Explicit Contradiction Pattern: The hypothesis contains information that directly negates or opposes a clear statement in the premise. If this condition is met , the answer is "no". 
Confident Neutral Pattern: If it is very certain that the hypothesis neither contradicts nor supports the premise in any evident or implicit manner , and the relationship between them is clearly independent , the answer is "it is not possible to tell". Implicit Contradiction or Neutral Pattern: In this case , no direct contradiction is found. If the hypothesis , when extended logically , negates or conflicts with any part of the premise , even if not directly. If it does , it leans towards contradiction ('no). If no such implicit contradiction is found and the relationship between hypothesis and premise is remains ambiguous , it could be neutral (it is not possible to tell). Implicit Evidence or Neutral Pattern: In cases where no direct evidence in the premise supports the hypothesis , the following steps should be applied: Check each element of the hypothesis against the premise. If each element of the hypothesis , when drawing from the premise using world knowledge or logical reasoning , one can infer or reasonably support the entire hypothesis , it leans towards implicit entailment ('yes '). If any part of the hypothesis lacks inferable evidence from the premise or if the connection between the entire hypothesis and premise remains ambiguous , it leans towards neutral ('it is not possible to tell '). Implicit Contradiction or Neutral Pattern: In cases where no direct evidence in the premise negates the hypothesis , the following steps should be applied: Check each element of the hypothesis against the premise. If any element of the hypothesis , when juxtaposed with the premise and utilizing world knowledge or logical reasoning , can subtly negate or contradict any part of the premise , it leans towards implicit contradiction ('no '). If each element of the hypothesis does not provide such subtle contradictions or if the connection between the entire hypothesis and premise remains ambiguous , it leans towards neutral ('it is not possible to tell '). ] Premise: "Children smiling and waving at camera ." Based on this premise , can we conclude the hypothesis "There are children present ." is true? OPTIONS: - yes - no - it is not possible to tell Answer: Step 1: Select the most appropriate pattern: Explicit Evidence Pattern Step 2: Using the Explicit Evidence Pattern , the premise directly states that there are children who are smiling and waving at the camera. This information supports the hypothesis , which claims that there are children present. So , this heuristic suggest a "yes" answer. Therefore , the answer is: yes Premise: "A couple play in the tide with their young son." Based on this premise ,can we conclude the hypothesis "The family is sitting down for dinner ." is true? OPTIONS: - yes - no - it is not possible to tell Answer: Step 1: Select the most appropriate pattern: Implicit Contradiction or Neutral Pattern Step 2: Using Implicit Contradiction or Neutral Pattern , although there is no contradiction in the premise describes a scene where a couple and their son are playing in the tide , which suggests they are outdoors and likely at a beach or coastal setting. The hypothesis , on the other hand , presents a scenario where the family is sitting down for dinner , typically an indoor activity. These two activities playing in the tide and sitting down for dinner are mutually exclusive in the context of a single time frame. Because some element of the hypothesis is identified implicitly contradicts the premise. This heuristic leans towards the answer "no". 
Therefore , the answer is: no Premise: "A young family enjoys feeling ocean waves lap at their feet." Based on this premise , can we conclude the hypothesis "A young man and woman take their child to the beach for the first time." is true? OPTIONS: - yes - no - it is not possible to tell Answer: Step 1: Select the most appropriate pattern: Implicit Evidence or Neutral Pattern Step 2: Using Implicit Evidence or Neutral Pattern , "a young family" typically implies the presence of a younger couple and potentially their child , so this part can be inferred. However , the statement about it being the child 's "first time" at the beach cannot be inferred or supported from the premise. Because not all elements of the hypothesis can be inferred or supported from the premise , this heuristic leans towards "it is not possible to tell". Therefore , the answer is: it is not possible to tell. F.3 The Demonstration for RAMS Dataset Your task is Event Argument Extraction. In this task , you will be provided with a document that describes an event and the goal is to extract the event arguments that correspond to each argument role associated with the event. The terminologies for this task is as follows: Event trigger: the main word that most clearly expresses an event occurrence , typically a verb or a noun. The trigger word is located between special tokens "<t>" and "<\t>" in the document , and only the event argument 11987 explicitly linked to the trigger word should be considered. Event argument: an entity mention , temporal expression or value that serves as a participant or attribute with a specific role in an event. Event arguments should be quoted exactly as the it appears in the given document. Argument role: the relationship between an argument to the event in which it participates. Heuristics: serving as guiding rules for extracting event arguments. Specifically , you will use the heuristic provided in the heuristic list to guide identify event arguments , and re-evaluate the identified argument candidates to get the final answer. heuristic list: [ Semantic Heuristic: [giver] is the person , group , or organization in the document that gives the grant or gift. Syntactic Heuristic: The [giver] may be recognized by analyzing sentence structure , often appearing before prepositional phrases starting with 'to ' that introduce the recipient (e.g., "X gives Y to Z", X is the 'giver '). Dependency Parsing Heuristic: In parsing the sentence structure , the [giver] is often connected through a dependency relation (e.g., 'nsubj ' for nominal subject) to the main verb representing the giving action. ] Example task: Question: Extract the event arguments of giver , beneficiary , and recipient in the "transaction.transaction.giftgrantprovideaid" event in the provided document , with the trigger word being "granted", highlighted between "<t>" and "</t>". When pinpointing each event argument , it 's crucial to quote the entity exactly as it appears in the text. If an event argument is not explicitly mentioned or cannot be directly associated with the event indicated by the trigger word , please respond with "not specified ". Document: a news document trigger sentence: "The access to the research center in the city was <t>granted </t> by the administrator. The man , Ripley Johnson , earned it." 
Answer: Elaborate the meaning of event type and its argument roles: "transaction.transaction.giftgrantprovideaid ": The event involves a transfer of money or resources in the form of a gift , grant , or provision of aid , signaled by the action of granting. [giver]: the giver is the person , group , or organization that provides or grants money , resources , or access in the event. [beneficiary ]: the beneficiary is the party who ultimately benefits from the transaction. [recipient ]: the recipient is the entity that receives the money , resources , or access granted in the event. Recognizing [giver] in the given document: Step 1: Select one or two heuristics in the heuristic list that are most suitable to identify the [giver] in the given document: Semantic Heuristic and Syntactic Heuristic. Step 2: Apply selected heuristics to identify [giver] independently. Step 2.1: Identify the [giver] based on Semantic Heuristic: "[giver] is the person , group , or organization that gives the grant or gift in the document ". Applying this heuristic to the document , the entity that gives access of the research center is "administrator ". Step 2.2: Identify the [giver] based on Syntactic Heuristic: "The [giver] may be recognized by analyzing sentence structure , often appearing before prepositional phrases starting with 'to ' that introduce the recipient (e.g., 'X gives Y to Z', X is the 'giver ')". Applying this heuristic to the given document , the entity that granted access to the research center is 'research center '. Step 3 Reevaluate argument candidates: [" administrator", "research center "] Is argument "administrator" alignment with the argument role [giver ]? Yes , because "administrator" is directly responsible for the action of granting , establishing their role as the provider of access in the event. Is argument "research center" alignment with the argument role [giver ]? No , because "research center" is the place that access has been granted to , but it doesn 't give access. [giver ]: "administrator" Recognizing [beneficiary] in the given document: Step 1: Select one or two heuristics in the heuristic list that are most suitable to identify the [beneficiary] in the given document: Semantic Heuristic. Step 2: Apply selected heuristics to identify [beneficiary] independently. Step 2.1 Identify the [beneficiary] based on Semantic Heuristic: "[ beneficiary] is the entity that ultimately benefits from the gift or grant ". Applying this heuristic to the given document , the entity that ultimately benefits from the grant is "not specified ". Step 3 Reevaluate argument candidate: ["not specified "] Is argument "not specified" alignment with the argument role [beneficiary ]? Yes , because the [beneficiary] is not explicitly mentioned so "not specified" is correct. [beneficiary ]: "not specified" Recognizing [recipient] in the given document: Step 1: Select one or two heuristics in the heuristic list that are most suitable to identify the [recipient] in the given document: Semantic Heuristic and Dependency Parsing Heuristic. Step 2: Apply selected heuristics to identify [recipient] independently. Step 2.2: Identify the [recipient] based on Semantic Heuristic: "[ recipient] is the entity that receives the gift or grant ". Applying heuristic f1 to the given document , the entity that receives the gift or grant is "Ripley Johnson ". 
Step 2.1 Identify the [recipient] based on Dependency Parsing Heuristic: "[ recipient] is often highlighted in the sentence through a dependency relation that denotes the receiver of the action , such as 'dobj ' (direct object) for direct transactions linked to the main verb of the event ". Applying this heuristic to the given document , the entity connected to the verb 'granted ' through a dobj relation is "Ripley Johnson ". Step 3 Reevaluate argument candidate: [" Ripley Johnson "] Is argument "Ripley Johnson" alignment with the argument role [recipient ]? Yes , because phrase "earned it" implies that "Ripley Johnson" was the intended recipient of the access , aligning with the role of [recipient] in the context of the event. [recipient ]: "Ripley Johnson" Target task: F.4 The demonstration for DocEE Dataset Objective: Your task is Event Argument Extraction. In this task , you will be provided with a document that describes an event and the goal is to extract the event arguments that correspond to each argument role associated 11988 with the event. The terminologies for this task is as follows: Key Terminologies: Event argument: an entity mention , temporal expression or value that serves as a participant or attribute with a specific role in an event. Event arguments should be quoted exactly as the it appears in the given document. Argument role: the relationship between an argument to the event in which it participates. Heuristics: serving as guiding principles or strategies to aid the extraction of event arguments , tailored to specific argument roles. Specifically , you will adapt a set of given heuristics for identifying the argument role of 'giver ' to other target argument roles , and then use these adapted heuristics to guide the extraction of target event arguments. Finally , re-evaluate the identified argument candidates to confirm if they are correct event arguments or not. Heuristic list: [ Semantic Heuristic: [giver] is the person , group , or organization in the document that gives the grant or gift. Syntactic Heuristic: The [giver] may be recognized by analyzing sentence structure , often appearing before prepositional phrases starting with 'to ' that introduce the recipient (e.g., "X gives Y to Z", X is the 'giver '). Dependency Parsing Heuristic: In parsing the sentence structure , the [giver] is often connected through a dependency relation (e.g., 'nsubj ' for nominal subject) to the main verb representing the giving action. ] When extracting event arguments from the given document , you should also follow the Argument extraction principles below. Argument Format Principle: articles and prepositions are not included in the identified event argument. For example , the answer should be "damaged car" rather than "damaged car belonging to the victim" or "the damaged car". Argument Number Principle: In general , each event argument only has a singular answer in the document. However , if and only if, you are highly confident that the mentions associated with an argument role are distinctively different , you may extract no more than three answers for that argument role. Example sample: Question: Extract the event arguments of 'Date ', 'Casualties and Losses ', 'Magnitude ', 'Number of Destroyed Building ' in the 'Earthquakes ' event in the provided news document. When pinpointing each event argument , it's crucial to quote the entity exactly as it appears in the text. 
Note that if an event argument is not explicitly mentioned or cannot be directly associated with its argument role in question , please respond with "not specified ". Document: a news , the content is omitted here Answer: Elaborate the meaning of event type and its argument roles: 'Earthquakes ': The event involves the shaking of the surface of the Earth resulting from a sudden release of energy in the Earth 's lithosphere. [Date]: the time when the earthquake occurred. [Casualties and Losses ]: the number of people killed or injured , and the amount of economic losses caused by the earthquake. [Magnitude ]: the measure of the size or intensity of the earthquake. [Number of Destroyed Building ]: the number of buildings or structures that were damaged or destroyed due to the earthquake. Recognizing [Date] in the given document: Step 1 Select a heuristic in the heuristic list that is most suitable to identify the [Date] in the given document: Semantic Heuristic. Step 2 Identify the argument based on Semantic Heuristic: [Date] is the time when the earthquake occurred. Applying this heuristic to the document , the time when the earthquake occurred is "not specified ". Step 3: reevaluate_argument_candidates: Is argument "not specified" alignment with the argument role [Date]? Yes , because [Date] is not explicitly mentioned in the document , so "not specified" is correct. [Date]: "not specified" Recognizing [Casualties and Losses] in the given document: Step 1 Select a heuristic in the heuristic list that is most suitable to identify the [Casualties and Losses] in the given document: Semantic Heuristic. Step 2 Identify the argument based on Semantic Heuristic: [Casualties and Losses] is the number of people killed or injured , and the amount of economic losses caused by the earthquake. Applying this heuristic to the document , the [Casualties and Losses] is "claimed 142 deaths" and "800 houses were damaged ". Is argument "claimed 142 deaths" alignment with the argument role [Casualties and Losses ]? Yes , because "claimed 142 deaths" provides details about the number of people killed due to the earthquake , aligning with the argument role [Casualties and Losses ]. Is argument "800 houses were damaged" alignment with the argument role [Casualties and Losses ]? Yes , because it describes the extent of economic losses caused by the earthquake , aligning with the role of [Casualties and Losses ]. [Casualties and Losses ]: "claimed 142 deaths", "800 houses were damaged" Recognizing [Magnitude] in the given document: Step 1 Select a heuristic in the heuristic list that is most suitable to identify the [Magnitude] in the given document: Semantic Heuristic. Step 2 Identify the argument based on Semantic Heuristic: [Magnitude] is the measure of the size or intensity of the earthquake. Applying this heuristic to the given document , the magnitude of the earthquake is "6.6". Step 3: reevaluate_argument_candidates: Is argument "6.6" alignment with the argument role [Magnitude ]? Yes , because "The magnitude 6.6 shock" indicate the size of the earthquake. [Magnitude ]: "6.6" Recognizing [Number of Destroyed Building] in the given document: Step 1 Select a heuristic in the heuristic list that is most suitable to identify the [Number of Destroyed Building] in the given document: Semantic Heuristic. Step 2 Identify the argument based on Semantic Heuristic: [Number of Destroyed Building] is the number of buildings or structures that were damaged or destroyed due to the earthquake. 
Applying this heuristic to the document , the number of buildings or structures that were destroyed due to the earthquake is "more than 80% of the houses ". Step 3: reevaluate_argument_candidates: Is argument "more than 80% of the houses" alignment with the argument role [Number of Destroyed Building ]? No , because the argument describes the percentage of destroyed buildings not the specific number of buildings destroyed. [Number of Destroyed Building ]: "not specified" Target sample: 11989 G An example of LLM output We show an example of the LLM’s output of the HD-LoA prompting on the RAMS dataset below: Elaborate the meaning of event type and its argument roles: "life.die.na": The event involves the death of one or more individuals , signaled by the action of massacre. [victim ]: the victim is the person or group of people who are killed in the event. [place]: the place is the location where the event occurs. Recognizing [victim] in the given document: Step 1: Select one or two heuristics in the heuristic list that are most suitable to identify the [victim] in the given document: Semantic Heuristic. Step 2: Apply selected heuristics to identify [victim] independently. Step 2.1: Identify the [victim] based on Semantic Heuristic: "[ victim] is the person or group of people who are killed in the event". Applying this heuristic to the document , the victims of the massacre are "280 victims , including women , children and old people ". Step 3 Reevaluate argument candidates: ["280 victims , including women , children and old people "] Is argument "280 victims , including women , children and old people" alignment with the argument role [victim ]? Yes , because "280 victims , including women , children and old people" are the ones who were killed in the massacre , establishing their role as the victims in the event. [victim ]: "280 victims , including women , children and old people" Recognizing [place] in the given document: Step 1: Select one or two heuristics in the heuristic list that are most suitable to identify the [place] in the given document: Semantic Heuristic. Step 2: Apply selected heuristics to identify [place] independently. Step 2.1: Identify the [place] based on Semantic Heuristic: "[place] is the location where the event occurs ". Applying this heuristic to the document , the place where the massacre occurred is "Syria". Step 3 Reevaluate argument candidates: ["Syria "] Is argument "Syria" alignment with the argument role [place]? Yes , because "Syria" is the location where the massacre took place , aligning with the role of [place] in the context of the event. [place]: "Syria" 11990
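The structured answer lines in the output above (e.g., [victim]: "...", [place]: "Syria") lend themselves to simple pattern-based extraction before scoring. The sketch below is illustrative rather than the authors' evaluation code; the regular expression and the returned schema are assumptions.

```python
# Sketch: extract {argument role: predicted span} pairs from an HD-LoA response.
import re

ROLE_LINE = re.compile(r'^\[(?P<role>[^\]]+)\]\s*:\s*"?(?P<value>[^"\n]+)"?\s*$')

def parse_argument_predictions(llm_output: str) -> dict:
    predictions = {}
    for line in llm_output.splitlines():
        match = ROLE_LINE.match(line.strip())
        if match:
            # Later matches overwrite earlier role elaborations, so the final
            # answer lines at the end of each reasoning block are kept.
            predictions[match.group("role").strip()] = match.group("value").strip()
    return predictions

# For the RAMS example above, this would yield something like
# {"victim": "280 victims, including women, children and old people", "place": "Syria"}.
```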
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11991–12011
August 11-16, 2024 ©2024 Association for Computational Linguistics

Investigating and Mitigating the Multimodal Hallucination Snowballing in Large Vision-Language Models

Weihong Zhong1 Xiaocheng Feng1,2* Liang Zhao1 Qiming Li1 Lei Huang1 Yuxuan Gu1 Weitao Ma1 Yuan Xu1 Bing Qin1,2
1Harbin Institute of Technology 2Peng Cheng Laboratory
{whzhong, xcfeng, lzhao, qmli, lhuang, yxgu, wtma, yuanxu, qinb}@ir.hit.edu.cn

Abstract

Though advanced in understanding visual information with human languages, Large Vision-Language Models (LVLMs) still suffer from multimodal hallucinations. A natural concern is that during multimodal interaction, the generated hallucinations could influence the LVLMs' subsequent generation. Thus, we raise a question: When presented with a query relevant to the previously generated hallucination, will LVLMs be misled and respond incorrectly, even though the ground visual information exists? To answer this, we propose a framework called MMHalSnowball to evaluate LVLMs' behaviors when encountering generated hallucinations, where LVLMs are required to answer specific visual questions within a curated hallucinatory conversation. Crucially, our experiment shows that the performance of open-source LVLMs drops by at least 31%, indicating that LVLMs are prone to accept the generated hallucinations and make false claims that they would not have supported without distractions. We term this phenomenon Multimodal Hallucination Snowballing. To mitigate this, we further propose a training-free method called Residual Visual Decoding, where we revise the output distribution of LVLMs with the one derived from the residual visual input, providing models with direct access to the visual information. Experiments show that our method can mitigate more than 24% of the snowballed multimodal hallucination while maintaining capabilities.1

1 Introduction

Large Vision-Language Models (LVLMs) have shown remarkable abilities in observing and understanding the real world in human languages (Achiam et al., 2023; Zhu et al., 2023; Liu et al., 2023d; Ye et al., 2023b; Dai et al.).

*Corresponding Author
1Resources will be available at https://github.com/whongzhong/MMHalSnowball

Figure 1: An example of the LVLM assisting a visually impaired person to cross the street. The model is misled by the generated hallucination and mistakenly suggests that the user cross the street, although it can give correct advice independently. Green and red colors highlight the correct answer and hallucinations, respectively.
However, multimodal hallucinations, in which LVLMs provide responses misaligned with the corresponding visual information, remain the Achilles' heel (Cui et al., 2023; Kamath et al., 2023; Li et al., 2023b; Liu et al., 2023a; Lu et al., 2023; Rawte et al., 2023; West et al., 2023; Huang et al., 2023). Previous research has revealed that hallucinations generated by large language models may accumulate due to models' over-commitment to early mistakes, leading to more mistakes that they otherwise would not make (Zhang et al., 2023a; Azaria and Mitchell, 2023; Kang et al., 2023), especially in user-model interaction scenarios such as conversation (Huang et al., 2022; Tian et al., 2024; Gong et al., 2023). However, the extent to which accumulated multimodal hallucinations mislead LVLMs into generating false claims requires further exploration.

In this work, we conduct the first investigation into this issue. As shown in Figure 1, we seek the answer to the question: When presented with a query relevant to the previously generated hallucination that contradicts the visual information, can models make the correct judgment when they could have given a correct answer independently?

Figure 2: Preliminary explorations on the hallucinations generated by LVLMs given conversational contexts. (a) Response accuracy with or without a hallucinatory conversation. (b) Response distribution when asking the question within a hallucinatory conversation. We select question samples that the LVLM can correctly answer without distractions.

We conduct a preliminary study on GPT-4V (Achiam et al., 2023), LLaVA 1.5 (Liu et al., 2023c), and mPLUG-Owl2 (Ye et al., 2023b). Similar to the setting of Figure 1, given an image, we start a conversation by asking the model to describe the image in detail. When we observe hallucinations in the LVLM's response, we continue with a relevant question based on the model-generated hallucination. In addition, we ask the same question separately to see if the model can answer it correctly without distractions. As demonstrated in Figure 2(a), we find that when the text context contains a relevant hallucination, the model performance declines significantly compared to the model's response when the same question is asked separately. We further select those question samples that the LVLM can correctly answer separately and manually identify how the response changes when the same question is asked with the related model-generated hallucinatory context. As Figure 2(b) depicts, we find that more than 59% of the answers are semantically the same as the generated hallucination, indicating that they were misled by the previously generated hallucinations.

To systematically investigate this phenomenon, we propose to identify whether the LVLM is misled by hallucinations by checking whether a specific claim is flipped due to previous hallucinations. We design a framework called MMHalSnowball to construct hallucinatory visual conversations, in which models are required to answer the question based on the image and the hallucinatory conversation.
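The preliminary probe described above can be written down compactly. The following sketch is illustrative only: lvlm_chat and contains_hallucination are hypothetical callables standing in for the model under test and the manual hallucination check, and the message format is a simplification.

```python
# Sketch of the two-setting preliminary probe: ask inside the hallucinatory
# conversation vs. ask the same question separately.
def probe_snowballing(image, follow_up_question, lvlm_chat, contains_hallucination):
    # Turn 1: ask for a detailed description and check it for hallucinations.
    describe = [{"role": "user", "content": "Please describe the scene in detail."}]
    description = lvlm_chat(image, describe)
    if not contains_hallucination(description):
        return None  # only hallucinatory descriptions are probed further

    # Setting A: ask the follow-up question inside the hallucinatory conversation.
    with_context = describe + [
        {"role": "assistant", "content": description},
        {"role": "user", "content": follow_up_question},
    ]
    answer_in_context = lvlm_chat(image, with_context)

    # Setting B: ask the same question separately, without the description turn.
    answer_alone = lvlm_chat(image, [{"role": "user", "content": follow_up_question}])
    return answer_in_context, answer_alone
```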
The results show that LVLMs' multimodal hallucinations easily mislead later generation, because their strong language capabilities make the models prone to over-confidence in the hallucinated context, thereby generating false claims that they normally would not support; we term this Multimodal Hallucination Snowballing. To mitigate this issue, we further propose a training-free decoding method called Residual Visual Decoding (RVD). By forming a residual connection between the visual information and the current user instruction, we derive distributions that emphasize the visual information and use them to revise the original output distribution. Our RVD achieves an improvement of more than 24% in reducing multimodal hallucination snowballing while maintaining the contextual modeling ability.

2 Evaluating the Multimodal Hallucination Snowball Phenomenon

In this section, we design a question-answering task in a conversation scenario, where a model is first asked to describe a picture in detail and then answers a visual question. As shown in Figure 3, we propose the MMHalSnowball framework to carefully simulate hallucinatory conversations and evaluate whether the model generates a wrong answer due to the hallucinatory context. Next, we describe our evaluation framework in detail, including conversation creation, experimental settings, and evaluation metrics. We experimentally analyze multimodal hallucination snowballing in §2.7. The prompts used are listed in Appendix A.2.

2.1 Dataset Source

We use the validation set of the GQA dataset (Hudson and Manning, 2019) as our data source, which contains a balanced set of visual questions that focus on objective perception. We adopt images, question-answer pairs, and regional description annotations from Visual Genome (Krishna et al., 2017). Note that we use the balanced validation set to minimize the impact of dataset contamination and language priors.

2.2 Hallucination Allocation

To be more practical, we construct hallucinations based on the common types generated by LVLMs. Inspired by Wang et al. (2023a); Zhai et al. (2023), we categorize the hallucinations as follows:
• Existence Hallucination, which refers to the incorrect recognition of visible objects in the image or the belief that specific visible objects are absent from the image.
• Attribute Hallucination, which refers to the inaccurate characterization of objects and misrepresentation of attributes such as color, shape, size, and actions.
• Relation Hallucination, which refers to the inaccurate depiction of the relationships or interactions among objects, including erroneous interaction states, relative positions, and spatial positions of objects relative to the image.
• Imagination Hallucination, which refers to the erroneous imagination of objects that do not appear in the picture.

Figure 3: An overview of our MMHalSnowball framework for simulating hallucinatory conversations and evaluating LVLMs' behavior in such conversations. In step 1, starting with a question-answer pair, we generate a fact sentence and an image description and allocate a proper hallucination type according to the corresponding question-answer pair. In step 2, we utilize ChatGPT to rewrite a hallucinatory answer based on the allocated hallucination type; we then modify the other annotations and generate the corresponding hallucinatory description with ChatGPT. In step 3, after ensuring that the hallucinatory answer and descriptions contradict the image content, we construct a conversation that contains the specific hallucination. In step 4, we evaluate the LVLMs' performance gap between the two conversation settings to see whether they suffer from multimodal hallucination snowballing. Green and red colors highlight the correct answer and the hallucinations curated out of it, respectively.

To incorporate hallucinations, we first utilize ChatGPT (OpenAI, 2022) to rewrite a fact sentence that best describes the question-answer pair. In addition, the annotated regional descriptions and the fact sentence are used to generate an image description; ChatGPT is prompted to ensure that the image description semantically entails the fact sentence. A hallucination can then be created by properly modifying the fact sentence. Our goal is to make the answer to the original question no longer correct according to the modified fact sentence. However, not all types of hallucination invalidate the original answer (e.g., modifying the fact sentence "The color of the trousers is blue" to "The color of the bike is blue" introduces an imagination hallucination but does not invalidate the answer to the question "What color are the trousers that this boy is wearing in the image?"). To match the hallucination errors in the curated contexts with the corresponding question-answer pairs, we then allocate a proper hallucination type from the above definitions to each fact sentence. Appendix A.1 shows the details of the rules for allocating proper hallucination types.
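The allocation rules themselves live in the paper's Appendix A.1 and are not reproduced here, so the following is only a hypothetical sketch of what such keyword-based rules could look like; every cue list and priority below is an illustrative assumption, not the authors' actual procedure.

```python
# Hypothetical sketch of hallucination-type allocation rules in the spirit of
# Appendix A.1; the cue sets and their ordering are assumptions.
RELATION_CUES = {"left", "right", "behind", "under", "above", "beside", "holding"}
ATTRIBUTE_CUES = {"color", "shape", "size", "material", "wearing", "doing"}

def allocate_hallucination_type(question: str, answer: str) -> str:
    tokens = set(question.lower().replace("?", "").split())
    if answer.lower() in {"yes", "no"}:
        return "existence"       # presence/absence judgments
    if tokens & ATTRIBUTE_CUES:
        return "attribute"       # color/shape/size/action questions
    if tokens & RELATION_CUES:
        return "relation"        # spatial or interaction questions
    return "imagination"         # otherwise, imagine an object that is absent
```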
Figure 4: Statistics of our curated dataset. (a) Sample number for each hallucination type: Existence 1,128; Attribute 1,208; Relation 1,318; Imagination 1,319. (b) Question prefixes distribution of our curated dataset.

2.3 Hallucination Creation

In this part, we describe how we utilize the question-answer pair, the fact sentence, and the regional descriptions to generate hallucinatory image descriptions. Rather than directly modifying the fact sentence according to the hallucination type to create hallucinations, we find it more stable to ask ChatGPT to rewrite a hallucinatory answer that contradicts the original answer. Then, the fact sentence and all the regional descriptions are heuristically modified into hallucinatory ones according to the hallucinatory answer. With the hallucinatory fact sentence and hallucinatory regional descriptions as inputs, ChatGPT is asked to generate a detailed image description that entails the hallucinatory fact. The original answers $Y^{+} = \{y^{+}_{1}, y^{+}_{2}, \ldots, y^{+}_{n}\}$ and the rewritten hallucinatory answers $Y^{-} = \{y^{-}_{1}, y^{-}_{2}, \ldots, y^{-}_{n}\}$ are kept for evaluation, where $n$ is the dataset size.

2.4 Conversation Construction

Before constructing a hallucinatory conversation, we must ensure that the generated hallucinatory answer and descriptions contradict the image content, while the hallucinatory description supports the hallucinatory answer. To do this, we provide ChatGPT with the descriptions, the answers, and their corresponding hallucinatory counterparts to check whether the modification and generation meet our requirements. We also check whether the image description generated in Section 2.2 entails the fact sentence; see Figure 12 for the prompt used. Note that only those descriptions that conflict with the original answer but entail the hallucinatory answer are kept.
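A sketch of this consistency check is given below. The actual verification prompt is the one in the paper's Figure 12, which is not reproduced here, so the prompt wording, the model id, and the reply parsing are assumptions; the sketch assumes the openai>=1.0 Python client.

```python
# Sketch of the ChatGPT-based check that a curated description conflicts with
# the original answer while supporting the hallucinatory answer.
from openai import OpenAI

client = OpenAI()

def keep_sample(hallu_description, original_answer, hallu_answer, question,
                model="gpt-3.5-turbo"):
    prompt = (
        "Read the description and answer two yes/no questions, separated by a comma.\n"
        f"Description: {hallu_description}\n"
        f"Question: {question}\n"
        f"1. Does the description contradict the answer '{original_answer}'?\n"
        f"2. Can the answer '{hallu_answer}' be deduced from the description?"
    )
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    ).choices[0].message.content.lower()
    parts = [p.strip() for p in reply.split(",")]
    contradicts_original = bool(parts) and parts[0].startswith("yes")
    entails_hallucination = len(parts) > 1 and parts[1].startswith("yes")
    return contradicts_original and entails_hallucination  # only such samples are kept
```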
After checking, we utilize the generated hallucinatory descriptions and the question-answer pairs to construct a question-answering conversation, as step 3 of Figure 3 shows. Conversation examples for each hallucination type are given in Appendix A.5.

2.5 Statistics

With our meticulous data curation and checking process, our curated dataset D contains 4,973 samples in total. The detailed sample number for each hallucination type is as Table 4 shows. As Figure 5 shows, the diverse nature of the GQA dataset is maintained. To check the effectiveness of the modifications made by our framework, as the left part of Figure 5 shows, we sample 400 examples and have several professionals manually review them. Our generated hallucinatory answers and conversations mostly meet our expectations; please refer to Appendix A.3 for more details on the manual check.

2.6 Evaluation

To gain a deep understanding of LVLMs' multimodal hallucination snowballing, given the visual question-answering pairs from our dataset, we generate model responses under two different settings, as Figure 1 shows, and compare the results under these two conversation settings. In the first setting, the model generates the response to the question within our curated corresponding hallucinatory conversation, which we refer to as HalluConv. In the second, the model answers the same visual question alone, without the distraction of the hallucinatory context, which we refer to as CleanConv. Since LVLMs' response formats can be diverse when the query prompt is ambiguous, automatic evaluation may become slightly imprecise. To address this, we follow Liu et al. (2023c) and add a formatting prompt right after the question: "Please answer the question using a single word or phrase.", which we call the Formatting Prompt setting. The user input containing the question only is named the Question Prompt setting. Note that we conduct experiments with the Formatting Prompt unless otherwise specified.
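To make the two settings concrete, here is a minimal sketch of how the corresponding conversations could be assembled; the message-dictionary format is a simplification, and the curated hallucinatory description is assumed to come from the dataset.

```python
# Sketch of the CleanConv. and HalluConv. conversation settings.
FORMATTING_PROMPT = "Please answer the question using a single word or phrase."

def build_clean_conv(question, use_formatting_prompt=True):
    user_turn = question + (" " + FORMATTING_PROMPT if use_formatting_prompt else "")
    return [{"role": "user", "content": user_turn}]

def build_hallu_conv(hallu_description, question, use_formatting_prompt=True):
    user_turn = question + (" " + FORMATTING_PROMPT if use_formatting_prompt else "")
    return [
        {"role": "user", "content": "Please describe this image in detail."},
        {"role": "assistant", "content": hallu_description},  # curated hallucination
        {"role": "user", "content": user_turn},
    ]
```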
2.6.1 Evaluation Metrics

In this part, we introduce our evaluation metrics. First, to evaluate the correctness of each generated answer, we adopt the following criterion:

Entailment Matching Score: Considering that both the original answer and the hallucinatory answer are short, while models tend to generate longer answers with explanations, we evaluate the correctness of the $i$-th sample by checking whether the answer is entailed in the generated response:

\mathrm{Score}_i = \begin{cases} 1, & \text{if } y_i \text{ appears in } \hat{y}_i \\ 0, & \text{otherwise,} \end{cases} \tag{1}

where $y_i$ and $\hat{y}_i$ stand for the expected answer and the generated response, respectively. With a proper scoring method for one sample, we can calculate the overall accuracy as follows.

Accuracy (Acc):

\mathrm{Acc}(Y, \hat{Y}) = \frac{\sum_{i=1}^{n} \mathrm{Score}_i(y_i, \hat{y}_i)}{n}, \tag{2}

where $\mathrm{Acc}(Y, \hat{Y})$ represents the model's accuracy score over the entire dataset.

Flip Rate (FR): To systematically measure whether a model is affected by the hallucination snowballing phenomenon, we propose the FR to evaluate how many model responses are misled by the hallucinatory context and match our curated hallucinatory answers:

\mathrm{FR} = \frac{\sum_{i \in D^{+}} \mathrm{Score}_i(y^{-}_{i}, \hat{y}^{-}_{i})}{\mathrm{Acc}(Y^{+}, \hat{Y}^{+})}, \tag{3}

D^{+} = \{\, i \mid \mathrm{Score}(y^{+}_{i}, \hat{y}^{+}_{i}) = 1,\ \hat{y}^{+}_{i} \in \hat{Y}^{+},\ y^{+}_{i} \in Y^{+} \,\}, \tag{4}

where $\hat{Y}^{+} = \{\hat{y}^{+}_{1}, \hat{y}^{+}_{2}, \ldots, \hat{y}^{+}_{n}\}$ and $\hat{Y}^{-} = \{\hat{y}^{-}_{1}, \hat{y}^{-}_{2}, \ldots, \hat{y}^{-}_{n}\}$ represent the generated answers under the CleanConv. and HalluConv. settings, respectively, and $D^{+}$ is the set of sample indexes that the LVLM answers correctly in the CleanConv. setting.

Model | CleanConv. Acc↑ | HalluConv. Acc↑ | FR↓ | WFR↓ | CleanConv. Acc↑ | HalluConv. Acc↑ | FR↓ | WFR↓
(columns 2-5: Question Prompt; columns 6-9: Formatting Prompt)
7B LLM
LLaVA-1.5 | 61.21 | 7.68 (↓53.53) | 79.96 | 89.03 | 71.24 | 14.96 (↓56.28) | 78.21 | 81.29
MiniGPT-4 | 33.60 | 13.11 (↓20.49) | 76.42 | 86.24 | 37.12 | 5.75 (↓31.37) | 84.18 | 89.65
MiniGPT-v2 | 59.24 | 25.14 (↓34.10) | 58.08 | 63.92 | 62.12 | 21.40 (↓40.72) | 66.11 | 72.06
InternLM-XC | 40.84 | 5.21 (↓35.63) | 83.95 | 92.52 | 43.51 | 5.83 (↓37.68) | 86.55 | 91.31
ShareGPT4V | 61.81 | 10.54 (↓51.27) | 78.01 | 86.27 | 71.81 | 15.91 (↓55.90) | 77.18 | 80.12
CogVLM | 72.69 | 2.49 (↓70.20) | 92.84 | 96.90 | 75.17 | 2.63 (↓72.54) | 93.07 | 96.79
mPLUG-Owl | 37.18 | 4.10 (↓33.08) | 71.50 | 93.24 | 37.80 | 3.64 (↓34.16) | 78.62 | 93.35
mPLUG-Owl2 | 54.88 | 4.75 (↓50.13) | 84.65 | 93.55 | 60.47 | 7.82 (↓52.65) | 86.63 | 89.82
Qwen-VL-Chat | 51.80 | 26.20 (↓25.60) | 72.48 | 77.83 | 77.94 | 20.03 (↓57.91) | 71.70 | 74.97
Otter | 44.90 | 9.43 (↓35.47) | 71.61 | 87.42 | 52.12 | 13.94 (↓38.18) | 73.50 | 82.21
IDEFICS | 41.22 | 5.05 (↓36.17) | 83.37 | 92.83 | 40.94 | 7.32 (↓33.62) | 85.07 | 91.11
InstructBLIP | 60.61 | 4.32 (↓56.29) | 85.73 | 94.06 | 59.88 | 4.54 (↓55.34) | 90.36 | 93.92
13B LLM
LLaVA-1.5 | 62.03 | 9.57 (↓52.46) | 78.61 | 86.29 | 72.07 | 14.74 (↓57.33) | 78.21 | 81.45
ShareGPT4V | 64.71 | 6.92 (↓57.79) | 83.84 | 90.77 | 72.43 | 13.43 (↓59.00) | 80.01 | 83.29
InstructBLIP | 55.02 | 6.21 (↓48.81) | 76.94 | 92.76 | 53.53 | 12.75 (↓40.78) | 76.15 | 85.80
Closed-Source
GPT-4V | 52.02 | 42.09 (↓9.93) | 14.26 | 43.95 | 60.49 | 52.00 (↓8.49) | 23.30 | 27.69

Table 1: Experiment results for models answering the same questions under the two conversation settings, CleanConv. and HalluConv. Numbers marked with ↓ represent the model performance drop caused by the hallucinatory conversation, compared to the model performance under the CleanConv. setting. The results in bold and underlined represent the best and the second-best results, respectively. All experiments are conducted under a zero-shot setting to avoid the bias introduced by demonstrations.
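For reference, a minimal sketch of Eqs. (1)-(3) is given below; it is not the authors' code. Matching follows the substring check of Eq. (1), and here FR is normalized by the number of CleanConv.-correct samples (i.e., n times the CleanConv. accuracy), which keeps it in [0, 1]; WFR (Eq. 5, below) follows the same pattern.

```python
# Sketch of the entailment matching score, accuracy, and flip rate.
def score(expected: str, response: str) -> int:
    """Entailment matching score: 1 if the expected answer appears in the response."""
    return int(expected.lower() in response.lower())

def accuracy(answers, responses) -> float:
    return sum(score(y, r) for y, r in zip(answers, responses)) / len(answers)

def flip_rate(y_pos, y_neg, resp_clean, resp_hallu) -> float:
    """Fraction of CleanConv.-correct samples whose HalluConv. response matches
    the curated hallucinatory answer."""
    d_plus = [i for i, (y, r) in enumerate(zip(y_pos, resp_clean)) if score(y, r)]
    if not d_plus:
        return 0.0
    return sum(score(y_neg[i], resp_hallu[i]) for i in d_plus) / len(d_plus)
```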
2.6.2 Models

We investigate the multimodal hallucination snowballing phenomenon in the following mainstream LVLMs: LLaVA-1.5 (Liu et al., 2023c), MiniGPT-4 (Zhu et al., 2023), MiniGPT-v2 (Chen et al., 2023a), InternLM-XComposer (Zhang et al., 2023b), ShareGPT4V (Chen et al., 2023b), CogVLM (Wang et al., 2023b), mPLUG-Owl (Ye et al., 2023a), mPLUG-Owl2 (Ye et al., 2023c), Qwen-VL-Chat (Bai et al., 2023), Otter (Li et al., 2023a), IDEFICS (Laurençon and van Strien, 2023), InstructBLIP (Dai et al.), and GPT-4V (gpt-4-vision-preview) (Achiam et al., 2023). All experiments are conducted in a zero-shot setting. Please refer to Appendix A.4 for more generation details.

2.7 Do LVLMs Suffer from Multimodal Hallucination Snowballing?

To answer this question, we compare the model responses under the HalluConv. and CleanConv. settings, as described in Section 2.6. The results are reported in Table 1.

Table 1: Results for models answering the same questions under the CleanConv. and HalluConv. settings (columns 2-5: Question Prompt; columns 6-9: Formatting Prompt). Values in parentheses give the accuracy drop under HalluConv. relative to CleanConv. All experiments are run in a zero-shot setting to avoid the bias introduced by demonstrations.

| Model | CleanConv. Acc↑ | HalluConv. Acc↑ | FR↓ | WFR↓ | CleanConv. Acc↑ | HalluConv. Acc↑ | FR↓ | WFR↓ |
| 7B LLM | | | | | | | | |
| LLaVA-1.5 | 61.21 | 7.68 (↓53.53) | 79.96 | 89.03 | 71.24 | 14.96 (↓56.28) | 78.21 | 81.29 |
| MiniGPT-4 | 33.60 | 13.11 (↓20.49) | 76.42 | 86.24 | 37.12 | 5.75 (↓31.37) | 84.18 | 89.65 |
| MiniGPT-v2 | 59.24 | 25.14 (↓34.10) | 58.08 | 63.92 | 62.12 | 21.40 (↓40.72) | 66.11 | 72.06 |
| InternLM-XC | 40.84 | 5.21 (↓35.63) | 83.95 | 92.52 | 43.51 | 5.83 (↓37.68) | 86.55 | 91.31 |
| ShareGPT4V | 61.81 | 10.54 (↓51.27) | 78.01 | 86.27 | 71.81 | 15.91 (↓55.90) | 77.18 | 80.12 |
| CogVLM | 72.69 | 2.49 (↓70.20) | 92.84 | 96.90 | 75.17 | 2.63 (↓72.54) | 93.07 | 96.79 |
| mPLUG-Owl | 37.18 | 4.10 (↓33.08) | 71.50 | 93.24 | 37.80 | 3.64 (↓34.16) | 78.62 | 93.35 |
| mPLUG-Owl2 | 54.88 | 4.75 (↓50.13) | 84.65 | 93.55 | 60.47 | 7.82 (↓52.65) | 86.63 | 89.82 |
| Qwen-VL-Chat | 51.80 | 26.20 (↓25.60) | 72.48 | 77.83 | 77.94 | 20.03 (↓57.91) | 71.70 | 74.97 |
| Otter | 44.90 | 9.43 (↓35.47) | 71.61 | 87.42 | 52.12 | 13.94 (↓38.18) | 73.50 | 82.21 |
| IDEFICS | 41.22 | 5.05 (↓36.17) | 83.37 | 92.83 | 40.94 | 7.32 (↓33.62) | 85.07 | 91.11 |
| InstructBLIP | 60.61 | 4.32 (↓56.29) | 85.73 | 94.06 | 59.88 | 4.54 (↓55.34) | 90.36 | 93.92 |
| 13B LLM | | | | | | | | |
| LLaVA-1.5 | 62.03 | 9.57 (↓52.46) | 78.61 | 86.29 | 72.07 | 14.74 (↓57.33) | 78.21 | 81.45 |
| ShareGPT4V | 64.71 | 6.92 (↓57.79) | 83.84 | 90.77 | 72.43 | 13.43 (↓59.00) | 80.01 | 83.29 |
| InstructBLIP | 55.02 | 6.21 (↓48.81) | 76.94 | 92.76 | 53.53 | 12.75 (↓40.78) | 76.15 | 85.80 |
| Closed-Source | | | | | | | | |
| GPT-4V | 52.02 | 42.09 (↓9.93) | 14.26 | 43.95 | 60.49 | 52.00 (↓8.49) | 23.30 | 27.69 |

Although most models answer visual questions reasonably well even in a zero-shot manner (see the CleanConv. accuracy), they struggle to stick to their own judgment when specious hallucinations appear in the context (see the HalluConv. accuracy), resulting in extremely low accuracy. LLaVA-1.5, ShareGPT4V, mPLUG-Owl2, and InstructBLIP, despite their strong overall ability, still suffer a performance drop of over 50%. GPT-4V, however, is significantly less affected by the hallucinations. We observe a correction process in GPT-4V's responses (see Appendix B.2 for examples), indicating that it can, to a certain extent, attend to the visual information and realize that hallucinations have been generated in the conversation. In addition, GPT-4V often refuses to answer the user question because of its strict safety protocol, especially in the CleanConv. setting (around 12%), which partly explains its comparably low accuracy there. In general, however, all the LVLMs suffer from multimodal hallucination snowballing to different degrees.

Moreover, the high flip rates indicate that model responses are easily misled by the hallucinatory conversation even when the model makes a correct claim under the CleanConv. setting, and the even higher weak flip rates show that LVLMs' responses are corrupted by the hallucinatory context. Comparing the same LVLMs with differently sized LLM backbones, we find no significant improvement in mitigating multimodal hallucination snowballing, except for InstructBLIP. Comparing the two query prompts, we find that the Formatting Prompt provides clearer instructions, which not only improves question-answering ability but also eases the multimodal hallucination snowballing phenomenon for most of the LVLMs.
We further report the accuracy under the two conversation settings and the flip rate for each hallucination type in Figure 5.

[Figure 5: Question-answering accuracy (a) and flip rate (b) under the two context settings (HalluConv. and CleanConv.) for each hallucination type (existence, attribute, relation, imagination), shown for LLaVA-1.5, MiniGPT-v2, mPLUG-Owl2, and InstructBLIP. The striped pattern marks the performance drop caused by the snowballed hallucination.]

The results show that existence, attribute, and imagination hallucinations snowball more easily. We even observe a nearly 100% flip rate for imagination hallucinations, where LVLMs readily accept objects that are mistakenly imagined to exist, which could be attributed to the LVLMs' tendency to generate affirmative responses (Liu et al., 2023b). Relation hallucinations, in contrast, have a higher probability of still being answered correctly. For detailed results, please refer to Appendix B.1.

2.8 Will LVLMs Be Affected by the Hallucination-Free Context?

Compared to the CleanConv. setting, where the conversation context contains only one image and one user question, LVLMs under the HalluConv. setting answer the same user question with an additional round of conversation. How does the longer context length itself affect model performance? To answer this question, we create two further conversation settings with a context length similar to HalluConv., in which there is also an additional conversation round but without hallucinatory content related to the user question. Specifically, we first replace the hallucinatory description in the HalluConv. setting with the image description generated in Section 2.2, which is semantically consistent with the fact sentence; we name the resulting conversation setting FactConv. In addition, we replace the first conversation round in HalluConv. with a single question-answer pair that is irrelevant to any specific visual information in the image, namely the IrrConv. setting (see Appendix B.3 for more details). The results are shown in Table 2.

Table 2: Accuracy for models answering the same questions under four conversation settings. Values in parentheses give the change relative to the CleanConv. setting (↑ improvement, ↓ drop).

| Model | CleanConv.↑ | FactConv.↑ | IrrConv.↑ | HalluConv.↑ |
| 7B LLM | | | | |
| LLaVA-1.5 | 71.24 | 89.28 (↑18.04) | 65.35 (↓5.89) | 14.96 (↓56.28) |
| MiniGPT-4 | 37.12 | 67.67 (↑30.55) | 35.11 (↓2.01) | 5.75 (↓31.37) |
| MiniGPT-v2 | 62.12 | 75.39 (↑13.27) | 56.46 (↓5.66) | 21.40 (↓40.72) |
| InternLM-XC | 43.51 | 74.04 (↑30.53) | 40.82 (↓2.69) | 5.83 (↓37.68) |
| ShareGPT4V | 71.81 | 89.32 (↑17.51) | 69.74 (↓2.07) | 15.91 (↓55.90) |
| CogVLM | 75.17 | 93.20 (↑18.03) | 74.68 (↓0.49) | 2.63 (↓72.54) |
| mPLUG-Owl | 37.80 | 62.01 (↑24.21) | 30.54 (↓7.26) | 3.64 (↓34.16) |
| mPLUG-Owl2 | 60.47 | 91.27 (↑13.33) | 77.12 (↓0.82) | 7.82 (↓52.65) |
| Qwen-VL-Chat | 77.94 | 87.77 (↑9.83) | 74.60 (↓3.34) | 20.03 (↓57.91) |
| Otter | 52.12 | 66.70 (↑14.58) | 44.06 (↓8.06) | 13.94 (↓38.18) |
| IDEFICS | 40.94 | 73.68 (↑32.74) | 38.01 (↓2.93) | 7.32 (↓33.62) |
| InstructBLIP | 59.88 | 86.10 (↑26.22) | 54.90 (↓4.98) | 4.54 (↓55.34) |
| 13B LLM | | | | |
| LLaVA-1.5 | 72.07 | 90.87 (↑18.80) | 70.24 (↓1.83) | 14.74 (↓57.33) |
| ShareGPT4V | 72.43 | 91.98 (↑19.55) | 70.80 (↓1.63) | 13.43 (↓59.00) |
| InstructBLIP | 54.94 | 62.68 (↑7.74) | 42.71 (↓10.82) | 12.75 (↓40.78) |

From the results, we observe that all models benefit substantially from a correct image description, which further shows that LVLMs tend to rely on the text context when it contains textual visual information that helps generate the response. This tendency is exactly what can lead to hallucination snowballing when the conversation is hallucinatory. Moreover, when the context provides no useful information, the models' abilities are not severely affected, which further indicates that the performance drop in the HalluConv. setting is caused by hallucination snowballing rather than by the longer context.
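Following the same sketch as in Section 2.6, the two hallucination-free control settings introduced above differ only in the content of the first conversation round. The field names and the abbreviated irrelevant answer are our own illustration (the full wording follows Appendix B.3 / Figure 19):

```python
IRRELEVANT_QA = (
    "Does this image contain visual elements?",
    "Absolutely, this image is full of visual elements ...",  # abbreviated here
)

def build_control_history(sample: dict, setting: str) -> list[tuple[str, str]]:
    """First-round context for the two hallucination-free control settings."""
    if setting == "FactConv":
        # Same structure as HalluConv., but with the faithful image description
        # that is semantically consistent with the fact sentence.
        return [("user", "Please describe the given image in detail."),
                ("assistant", sample["fact_description"])]
    if setting == "IrrConv":
        # A question-answer pair unrelated to any specific visual content.
        return [("user", IRRELEVANT_QA[0]), ("assistant", IRRELEVANT_QA[1])]
    return []  # CleanConv.: no prior round
```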
3 Residual Visual Decoding

The multimodal hallucination snowballing phenomenon shows that LVLMs tend to condition on the text context whenever it offers plausible clues for the response, thereby ignoring the visual information and being easily misled by erroneous context. To remedy this, we emphasize the visual information during inference, without additional training or external tools, under the multi-turn conversation scenario.

3.1 Residual Visual Predictions

Given a visual input $v$, a dialog history $h$, and the current text query $x$, an LVLM parameterized by $\theta$ generates a response $y$ token by token. With the tokens $y_{<t}$ generated up to time step $t-1$, the output distribution at time step $t$ is $p_\theta(y_t \mid v, h, x, y_{<t})$, from which the output token $y_t$ is sampled:

$y_t \sim p_\theta(y_t \mid v, h, x, y_{<t}) = \mathrm{softmax}\!\left(\mathrm{logit}_\theta(y_t \mid v, h, x, y_{<t})\right).$   (6)

Since a hallucinatory context can interfere with reasoning over the visual input, we additionally construct an input that residually connects the visual input $v$ with the current text query $x$, and derive an output distribution from it:

$p_\theta(y_t \mid v, x, y_{<t}) = \mathrm{softmax}\!\left(\mathrm{logit}_\theta(y_t \mid v, x, y_{<t})\right),$   (7)

in which the output distribution naturally shifts from dependence on the text context to reliance on the visual information. We term these the Residual Visual Predictions: they are based entirely on the visual information and the query, at the cost of attention to the text context.

3.2 Residual Visual Decoding

To emphasize the visual information in a multi-turn visual-text conversation, and inspired by Leng et al. (2023) and Liu et al. (2021), we introduce Residual Visual Decoding (RVD), in which the residual visual predictions are used to enhance the perception of the visual information. The revised distribution $p_{\mathrm{RVD}}$ is

$p_{\mathrm{RVD}}(y \mid v, h, x) = \mathrm{softmax}\!\left(\alpha\,\mathrm{logit}_\theta(y \mid v, x) + (1-\alpha)\,\mathrm{logit}_\theta(y \mid v, h, x)\right),$   (8)

where a larger $\alpha$ puts more model focus on the visual information. Note that when the dialog history $h$ is empty, RVD degenerates to regular decoding.

3.3 Adaptive Distribution Blending

However, as we increase $\alpha$, the text context gets ignored when generating responses, which can harm the model's inherent contextual ability. To preserve the contextual ability while tackling hallucination snowballing, we adaptively adjust the scaling parameter. Specifically, we derive an output distribution $p_\theta(y \mid x)$ given the current user query $x$ only, and calculate the Jensen-Shannon divergence (JSD) between it and the residual visual predictions, which measures the similarity of the two output distributions:

$\tau = \mathrm{JSD}\!\left(p_\theta(y \mid v, x)\,\|\,p_\theta(y \mid x)\right), \quad \tau \in [0, 1],$   (9)

where $\tau$ is the JSD score between $p_\theta(y \mid v, x)$ and $p_\theta(y \mid x)$. We expect that when answering the query depends on the visual information $v$, $\tau$ becomes large, since the query-only distribution is essentially guessing. Conversely, when answering the query depends on the dialog history $h$, neither distribution has access to the history; both see only the latest user query, so they tend to make similar guesses and $\tau$ becomes small. Therefore, we dynamically adjust $\alpha$ with $\tau$ and a scaling factor $\beta$:

$\alpha = \min(\beta \cdot \tau,\, 1).$   (10)

With the dynamically adjusted $\alpha$, the residual visual distribution is adaptively blended into the original output distribution via Equation (8).
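A per-decoding-step sketch of Equations (6)-(10), assuming the LVLM can produce next-token logits under the three conditioning variants (full context, image + query, query only); this is our reading of the method, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def js_divergence(p_logits: torch.Tensor, q_logits: torch.Tensor) -> torch.Tensor:
    """Jensen-Shannon divergence between two next-token distributions.
    Uses log base 2 so the value lies in [0, 1], matching Eq. (9)."""
    p = F.softmax(p_logits, dim=-1)
    q = F.softmax(q_logits, dim=-1)
    m = 0.5 * (p + q)
    eps = 1e-12
    kl_pm = (p * ((p + eps) / (m + eps)).log2()).sum(dim=-1)
    kl_qm = (q * ((q + eps) / (m + eps)).log2()).sum(dim=-1)
    return 0.5 * (kl_pm + kl_qm)

def rvd_logits(logits_vhx: torch.Tensor,   # conditioned on (image, history, query), Eq. (6)
               logits_vx: torch.Tensor,    # conditioned on (image, query) only, Eq. (7)
               logits_x: torch.Tensor,     # conditioned on the query only, used in Eq. (9)
               beta: float = 2.0) -> torch.Tensor:
    """Blend the residual visual predictions into the full-context predictions.
    Operates on a single step's logit vector of shape [vocab_size]."""
    tau = js_divergence(logits_vx, logits_x)      # Eq. (9)
    alpha = torch.clamp(beta * tau, max=1.0)      # Eq. (10)
    return alpha * logits_vx + (1.0 - alpha) * logits_vhx  # logits of Eq. (8)
```

In a generation loop, `rvd_logits` would replace the usual logits before the softmax and sampling step; with an empty dialog history the two conditioning variants coincide and the blend reduces to regular decoding, matching the remark after Equation (8).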
3.4 Experiments

Blending the residual visual distribution into the original output distribution could harm the models' contextual ability. Inspired by Chen et al. (2023c), to quantitatively evaluate the LVLMs' contextual ability under our pipeline, we construct a multiple-choice task called Who Provides This Image (WPI). Specifically, we randomly insert the template sentence "The image is provided by #key" into the hallucinatory conversation, where #key is a random 6-digit number, and change the corresponding question to "Who provides this image?". An LVLM that can correctly access the context should reach over 90% accuracy on this question. For more details, please refer to Appendix A.6. We therefore test our proposed RVD both on our multimodal hallucination snowballing evaluation and on the WPI task, to evaluate its ability to alleviate multimodal hallucination snowballing while maintaining contextual ability.

3.4.1 Baselines

To show the effectiveness of our proposed RVD, we compare it with the following strategies:

• Prompt requires the model to focus on the given image instead of concentrating on the text context that could cause the hallucination to snowball. Specifically, we explicitly ask the model with the following query: {#Question, Please answer the question based on the given image.}.

• Visual Contrastive Decoding (VCD) (Leng et al., 2023) contrasts the output distribution with that of a distorted visual input, aiming to alleviate the language prior in the context while focusing on the visual information.

We evaluate the effectiveness of these strategies and our RVD on three popular open-source LVLMs: LLaVA-1.5, mPLUG-Owl2, and ShareGPT4V. We set β = 2 unless otherwise specified.
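For concreteness, here is a sketch of how one WPI control sample could be constructed, mirroring the example in Appendix A.6 (Figure 8); the field names and exact wording are our own illustration.

```python
import random

def make_wpi_sample(response_sentences: list[str]) -> dict:
    """Build one Who-Provides-This-Image (WPI) control sample.

    `response_sentences` holds the sentences of the first-round LVLM response
    in a hallucinatory conversation; field names are ours.
    """
    key = f"{random.randint(0, 999999):06d}"          # correct 6-digit key
    distractor = f"{random.randint(0, 999999):06d}"   # interference option
    # Randomly insert the template sentence into the first-round response.
    sents = list(response_sentences)
    sents.insert(random.randrange(len(sents) + 1), f"The image is provided by {key}.")
    options = [key, distractor, "None of the options are correct"]
    random.shuffle(options)                           # randomize option order
    labels = ["A", "B", "C"]
    question = ("Who provided this image? Please select the correct option. Options: "
                + "; ".join(f"({l}) {o}" for l, o in zip(labels, options)) + ".")
    answer = labels[options.index(key)]
    return {"context": " ".join(sents), "question": question, "answer": answer}
```

A response is counted as correct when it contains only the correct option label, as described in Appendix A.6.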
3.4.2 Experiment Results

The results are shown in Table 3.

Table 3: Evaluation results for the different methods on our proposed evaluation. Arrows in parentheses give the change relative to the corresponding base model (↑ increase, ↓ decrease of the respective metric).

| Model | CleanConv. Acc↑ | HalluConv. Acc↑ | HalluConv. FR↓ | WPI task Acc↑ |
| LLaVA-1.5 | 71.24 | 14.96 | 78.21 | 92.84 |
| w/ Prompt | 70.82 (↓0.38) | 13.41 (↓1.55) | 79.16 (↑0.38) | 95.42 (↑2.58) |
| w/ VCD | 70.20 (↓1.04) | 17.29 (↑2.33) | 74.59 (↓3.62) | 95.12 (↑2.28) |
| w/ RVD (ours) | 70.34 (↓0.90) | 32.84 (↑17.88) | 53.52 (↓24.69) | 91.54 (↓1.30) |
| mPLUG-Owl2 | 60.47 | 7.82 | 86.63 | 96.82 |
| w/ Prompt | 61.39 (↑1.04) | 7.78 (↓0.04) | 86.73 (↑0.10) | 93.23 (↓3.59) |
| w/ VCD | 61.17 (↑0.60) | 8.77 (↑0.95) | 85.21 (↓1.42) | 97.08 (↑0.26) |
| w/ RVD (ours) | 61.69 (↑1.22) | 22.54 (↑14.72) | 39.15 (↓47.48) | 90.85 (↓5.97) |
| ShareGPT4V | 71.81 | 15.91 | 77.18 | 95.22 |
| w/ Prompt | 71.68 (↓0.13) | 13.83 (↓2.08) | 79.61 (↑2.43) | 98.31 (↑3.09) |
| w/ VCD | 72.91 (↑1.10) | 16.77 (↑0.79) | 75.57 (↓1.60) | 98.51 (↑3.29) |
| w/ RVD (ours) | 72.21 (↑0.40) | 37.50 (↑21.59) | 48.79 (↓28.39) | 94.52 (↓0.70) |

We find that the Prompt baseline can even hurt model performance, possibly because of the LVLMs' limited ability to follow complex instructions. VCD, though shown to be effective in correcting snowballed hallucinations, contrasts the output distribution with a distorted visual input, which can hurt performance when the context is needed to respond to the query. In contrast, by dynamically emphasizing the visual information whenever it is needed, our proposed RVD yields a large accuracy improvement in overcoming multimodal hallucination snowballing while maintaining contextual ability.

3.4.3 Effect of Parameters

We evaluate the effect of the hyperparameters α and β. The results are shown in Figures 6 and 7.

[Figure 6: Ablation on α without Adaptive Distribution Blending — HalluConv. accuracy and WPI-task accuracy for LLaVA-1.5, mPLUG-Owl2, and ShareGPT4V as α varies from 0 to 0.8.]

[Figure 7: Ablation on β with Adaptive Distribution Blending — HalluConv. accuracy and WPI-task accuracy for the same models as β varies from 0 to 3.]

First, when we remove adaptive distribution blending and adjust α manually, a larger α clearly shifts the output distribution towards the golden visual information, but the context is increasingly ignored in return. With adaptive distribution blending, performance remains more balanced as we enlarge β, without a large drop in contextual ability. See Appendix B.4 for more experimental results.

4 Related Work

4.1 Large Vision-Language Models

Inspired by the recent success of large language models (LLMs) (Zhao et al., 2023), researchers have devoted significant effort to integrating LLMs into vision-language models to exploit their powerful language understanding and generation capabilities (Wu et al., 2023). In addition to the advanced capabilities demonstrated by closed-source models such as GPT-4V (Achiam et al., 2023), open-source large vision-language models (LVLMs), built upon powerful open-source LLMs such as LLaMA (Touvron et al., 2023) and Vicuna (Chiang et al., 2023), have shown strong instruction-following abilities and tackle vision-language tasks in a zero-shot manner (Zhu et al., 2023; Liu et al., 2023d; Dai et al.; Ye et al., 2023b). Possessing both visual perception and language capabilities, LVLMs are further used to perform real-world tasks such as tool use (Liu et al., 2023e), web browsing (Zheng et al., 2024), and autonomous driving (Xu et al., 2023). However, current LVLMs still suffer from severe multimodal hallucination problems (Liu et al., 2024), which makes it challenging to evaluate and maintain their reliability.

4.2 Multimodal Hallucination

Multimodal hallucinations (Liu et al., 2024) are responses generated by LVLMs that are misaligned with the corresponding visual information. They can arise from overfitting to specific patterns in the training data, from an inferior ability to recognize visual elements, or from an inability to model the multimodal input. Li et al. (2023b) and Lovenia et al. (2023) take the first steps towards evaluating hallucinations in LVLMs. Furthermore, Liu et al. (2023b), Zong et al. (2023), and Liu et al. (2023a) show that LVLMs can be easily fooled and experience a severe performance drop due to their over-reliance on a strong language prior. In addition, efforts have been made towards mitigating multimodal hallucinations through further fine-tuning or post-hoc rectification (Gunjal et al., 2023; Lu et al., 2023; Liu et al., 2023b; Zhou et al., 2023; Yin et al., 2023). However, current methods cannot completely eliminate the hallucinations generated by models, and the subsequent impact of the generated hallucinations has not yet been explored.
In this paper, we take the first step towards it by systematically evaluating the multimodal hallucination snowballing phenomenon and proposing a training-free method to alleviate it.

5 Conclusion

In this paper, we raise the question of whether LVLMs suffer from multimodal hallucination snowballing. We meticulously design the MMHalSnowball framework to simulate hallucinatory conversations and study models' behavior when encountering hallucinations. Our investigation shows that LVLMs are severely affected by hallucinations in the context and thus generate snowballed hallucinations. We further propose Residual Visual Decoding to alleviate multimodal hallucination snowballing while maintaining contextual ability. However, our method still has limitations when deployed in a general-purpose assistant, which we leave as future work.

6 Limitations

In this work, with a carefully designed evaluation framework, we reveal that current LVLMs severely suffer from multimodal hallucination snowballing, and we propose RVD to mitigate this phenomenon. However, our work still has limitations. Firstly, despite the greater variety of hallucination snowballing phenomena in real-world settings, the scenarios we focus on are still relatively simple, because constructing rich and diverse scenarios would be considerably more difficult and would require significant effort. Secondly, instead of collecting real hallucinations generated by each LVLM and constructing relevant question-answer pairs, we conduct experiments on our simulated hallucinatory conversations, because an evaluation based on the responses of a single LVLM would be difficult to scale up and to adapt to more models. Thirdly, due to computational limitations, our experiments are conducted on models of 7B and 13B sizes, and we evaluate our proposed RVD only on a few selected models. Fourthly, our proposed RVD is currently limited to certain conversation scenarios; we will further explore extending it to more diverse conversation scenarios.

7 Acknowledgments

Xiaocheng Feng is the corresponding author of this work. We thank the anonymous reviewers for their insightful comments. This work was supported by the National Key R&D Program of China via grant No. 2021ZD0112905, the National Natural Science Foundation of China (NSFC) via grants 62276078 and U22B2059, the Key R&D Program of Heilongjiang via grant 2022ZX01A32, the International Cooperation Project of PCL, PCL2022D01, and the Fundamental Research Funds for the Central Universities via grant No. HIT.OCEF.2023018.

References

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774. Amos Azaria and Tom Mitchell. 2023. The internal state of an llm knows when its lying. arXiv preprint arXiv:2304.13734. Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. 2023. Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond. Jun Chen, Deyao Zhu, Xiaoqian Shen, Xiang Li, Zechun Liu, Pengchuan Zhang, Raghuraman Krishnamoorthi, Vikas Chandra, Yunyang Xiong, and Mohamed Elhoseiny. 2023a. Minigpt-v2: large language model as a unified interface for vision-language multi-task learning.
arXiv preprint arXiv:2310.09478. Lin Chen, Jisong Li, Xiaoyi Dong, Pan Zhang, Conghui He, Jiaqi Wang, Feng Zhao, and Dahua Lin. 2023b. Sharegpt4v: Improving large multimodal models with better captions. arXiv preprint arXiv:2311.12793. Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. 2023c. Extending context window of large language models via positional interpolation. arXiv preprint arXiv:2306.15595. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An opensource chatbot impressing gpt-4 with 90%* chatgpt quality. Chenhang Cui, Yiyang Zhou, Xinyu Yang, Shirley Wu, Linjun Zhang, James Zou, and Huaxiu Yao. 2023. Holistic analysis of hallucination in gpt-4v(ision): Bias and interference challenges. W Dai, J Li, D Li, AMH Tiong, J Zhao, W Wang, B Li, P Fung, and S Hoi. Instructblip: Towards generalpurpose vision-language models with instruction tuning. arxiv 2023. arXiv preprint arXiv:2305.06500. Tao Gong, Chengqi Lyu, Shilong Zhang, Yudong Wang, Miao Zheng, Qian Zhao, Kuikun Liu, Wenwei Zhang, Ping Luo, and Kai Chen. 2023. Multimodal-gpt: A vision and language model for dialogue with humans. arXiv preprint arXiv:2305.04790. Anisha Gunjal, Jihan Yin, and Erhan Bas. 2023. Detecting and preventing hallucinations in large vision language models. arXiv preprint arXiv:2308.06394. Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, et al. 2023. A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions. arXiv preprint arXiv:2311.05232. Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, et al. 2022. Inner monologue: Embodied reasoning through planning with language models. arXiv preprint arXiv:2207.05608. 12000 Drew A Hudson and Christopher D Manning. 2019. Gqa: A new dataset for real-world visual reasoning and compositional question answering. Conference on Computer Vision and Pattern Recognition (CVPR). Amita Kamath, Jack Hessel, and Kai-Wei Chang. 2023. What’s "up" with vision-language models? investigating their struggle with spatial reasoning. Haoqiang Kang, Juntong Ni, and Huaxiu Yao. 2023. Ever: Mitigating hallucination in large language models through real-time verification and rectification. arXiv preprint arXiv:2311.09114. Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of computer vision, 123:32–73. Hugo Laurençon and Daniel van Strien. 2023. Introducing idefics: An open reproduction of state-of-the-art visual langage model. Sicong Leng, Hang Zhang, Guanzheng Chen, Xin Li, Shijian Lu, Chunyan Miao, and Lidong Bing. 2023. Mitigating object hallucinations in large visionlanguage models through visual contrastive decoding. arXiv preprint arXiv:2311.16922. Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu. 2023a. Otter: A multi-modal model with in-context instruction tuning. arXiv preprint arXiv:2305.03726. Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. 2023b. Evaluating object hallucination in large vision-language models. 
Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A Smith, and Yejin Choi. 2021. Dexperts: Decoding-time controlled text generation with experts and anti-experts. arXiv preprint arXiv:2105.03023. Fuxiao Liu, Tianrui Guan, Zongxia Li, Lichang Chen, Yaser Yacoob, Dinesh Manocha, and Tianyi Zhou. 2023a. Hallusionbench: You see what you think? or you think what you see? an image-context reasoning benchmark challenging for gpt-4v(ision), llava-1.5, and other multi-modality models. Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, and Lijuan Wang. 2023b. Mitigating hallucination in large multi-modal models via robust instruction tuning. arXiv preprint arXiv:2306.14565, 1(2):9. Hanchao Liu, Wenyuan Xue, Yifei Chen, Dapeng Chen, Xiutian Zhao, Ke Wang, Liping Hou, Rongjun Li, and Wei Peng. 2024. A survey on hallucination in large vision-language models. arXiv preprint arXiv:2402.00253. Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. 2023c. Improved baselines with visual instruction tuning. arXiv preprint arXiv:2310.03744. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2023d. Visual instruction tuning. arXiv preprint arXiv:2304.08485. Shilong Liu, Hao Cheng, Haotian Liu, Hao Zhang, Feng Li, Tianhe Ren, Xueyan Zou, Jianwei Yang, Hang Su, Jun Zhu, et al. 2023e. Llava-plus: Learning to use tools for creating multimodal agents. arXiv preprint arXiv:2311.05437. Holy Lovenia, Wenliang Dai, Samuel Cahyawijaya, Ziwei Ji, and Pascale Fung. 2023. Negative object presence evaluation (nope) to measure object hallucination in vision-language models. arXiv preprint arXiv:2310.05338. Jiaying Lu, Jinmeng Rao, Kezhen Chen, Xiaoyuan Guo, Yawen Zhang, Baochen Sun, Carl Yang, and Jie Yang. 2023. Evaluation and mitigation of agnosia in multimodal large language models. OpenAI. 2022. Introducing chatgpt. Vipula Rawte, Amit Sheth, and Amitava Das. 2023. A survey of hallucination in large foundation models. Yunjie Tian, Tianren Ma, Lingxi Xie, Jihao Qiu, Xi Tang, Yuan Zhang, Jianbin Jiao, Qi Tian, and Qixiang Ye. 2024. Chatterbox: Multi-round multimodal referring and grounding. arXiv preprint arXiv:2401.13307. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. Junyang Wang, Yuhang Wang, Guohai Xu, Jing Zhang, Yukai Gu, Haitao Jia, Ming Yan, Ji Zhang, and Jitao Sang. 2023a. An llm-free multi-dimensional benchmark for mllms hallucination evaluation. arXiv preprint arXiv:2311.07397. Weihan Wang, Qingsong Lv, Wenmeng Yu, Wenyi Hong, Ji Qi, Yan Wang, Junhui Ji, Zhuoyi Yang, Lei Zhao, Xixuan Song, et al. 2023b. Cogvlm: Visual expert for pretrained language models. arXiv preprint arXiv:2311.03079. Peter West, Ximing Lu, Nouha Dziri, Faeze Brahman, Linjie Li, Jena D. Hwang, Liwei Jiang, Jillian Fisher, Abhilasha Ravichander, Khyathi Chandu, Benjamin Newman, Pang Wei Koh, Allyson Ettinger, and Yejin Choi. 2023. The generative ai paradox: "what it can create, it may not understand". Jiayang Wu, Wensheng Gan, Zefeng Chen, Shicheng Wan, and S Yu Philip. 2023. Multimodal large language models: A survey. In 2023 IEEE International Conference on Big Data (BigData), pages 2247–2256. IEEE. 12001 Zhenhua Xu, Yujia Zhang, Enze Xie, Zhen Zhao, Yong Guo, Kenneth KY Wong, Zhenguo Li, and Hengshuang Zhao. 2023. 
Drivegpt4: Interpretable end-toend autonomous driving via large language model. arXiv preprint arXiv:2310.01412. Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. 2023a. mplug-owl: Modularization empowers large language models with multimodality. arXiv preprint arXiv:2304.14178. Qinghao Ye, Haiyang Xu, Jiabo Ye, Ming Yan, Haowei Liu, Qi Qian, Ji Zhang, Fei Huang, and Jingren Zhou. 2023b. mplug-owl2: Revolutionizing multi-modal large language model with modality collaboration. arXiv preprint arXiv:2311.04257. Qinghao Ye, Haiyang Xu, Jiabo Ye, Ming Yan, Haowei Liu, Qi Qian, Ji Zhang, Fei Huang, and Jingren Zhou. 2023c. mplug-owl2: Revolutionizing multi-modal large language model with modality collaboration. arXiv preprint arXiv:2311.04257. Shukang Yin, Chaoyou Fu, Sirui Zhao, Tong Xu, Hao Wang, Dianbo Sui, Yunhang Shen, Ke Li, Xing Sun, and Enhong Chen. 2023. Woodpecker: Hallucination correction for multimodal large language models. arXiv preprint arXiv:2310.16045. Bohan Zhai, Shijia Yang, Chenfeng Xu, Sheng Shen, Kurt Keutzer, and Manling Li. 2023. Halle-switch: Controlling object hallucination in large vision language models. arXiv e-prints, pages arXiv–2310. Muru Zhang, Ofir Press, William Merrill, Alisa Liu, and Noah A Smith. 2023a. How language model hallucinations can snowball. arXiv preprint arXiv:2305.13534. Pan Zhang, Xiaoyi Dong Bin Wang, Yuhang Cao, Chao Xu, Linke Ouyang, Zhiyuan Zhao, Shuangrui Ding, Songyang Zhang, Haodong Duan, Hang Yan, et al. 2023b. Internlm-xcomposer: A vision-language large model for advanced text-image comprehension and composition. arXiv preprint arXiv:2309.15112. Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023. A survey of large language models. arXiv preprint arXiv:2303.18223. Boyuan Zheng, Boyu Gou, Jihyung Kil, Huan Sun, and Yu Su. 2024. Gpt-4v (ision) is a generalist web agent, if grounded. arXiv preprint arXiv:2401.01614. Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, and Huaxiu Yao. 2023. Analyzing and mitigating object hallucination in large vision-language models. arXiv preprint arXiv:2310.00754. Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. 2023. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592. Yongshuo Zong, Tingyang Yu, Bingchen Zhao, Ruchika Chavhan, and Timothy Hospedales. 2023. Fool your (vision and) language model with embarrassingly simple permutations. ArXiv preprint, abs/2310.01651. 12002 A Additional Experimental Details A.1 Hallucination Allocation After carefully analyzing the question-answer pairs in the dataset, we manage to create an answer vocabulary for answers suitable for introducing relation errors. What’s more, we utilize Part-ofSpeech2 of the answer in the fact sentence to choose proper hallucination types. Specifically, we allocate attribute hallucination for those answers tagged as adjectives and verbs and allocate existence hallucination for those answers tagged as nouns. For imagination hallucination, instead of using the annotated question-answer pair, we provide ChatGPT with all annotated objects and ask ChatGPT to generate an object that is not present in the image but is reasonable to be in the corresponding scene. 
Then, we directly construct a question-answer pair with the template: "question: Is there a in the image? answer: No". A.2 Prompts In this section, we list all prompts used during the process of constructing hallucinatory conversations, which include fact generation (Figure 9), conflict creation (Figure 10), description generation (Figure 11) and conflict verification (Figure 12). Note that we reuse the description generation prompt (Figure 11) to generate the ground image description by giving the annotated regional description and fact sentence. A.3 Manual Checking We randomly select 100 data for each hallucination type in our curated dataset, 400 in total. We ask three annotators to check each of them from three aspects, as Table 5 depicts. The annotation results show that the generated hallucinatory description mostly meets our requirements. A.4 Generation Details Through all our experiments, we follow a consistent generation configuration to ensure fairness. Specifically, we set the inference hyperparameter as follows: do_sample=True, temperature=1.0, top_p=0.95, top_k=None and num_beams=1. A.5 Hallucinatory Conversation Example We list four examples to demonstrate curated hallucinatory conversation of each hallucination type, namely existence(Figure 13), attribute(Figure 14), 2We use Spacy to do the Part-of-Speech tagging. β Acc-HalluConv. Acc-WPI task JSD KLD JSD KLD 0.00 14.96 14.96 92.84 92.84 0.25 16.85 19.06 92.54 93.03 0.50 19.10 24.39 93.03 92.44 0.75 21.32 30.69 92.54 89.25 1.00 23.23 37.22 91.94 86.57 2.00 32.84 53.85 91.54 66.37 3.00 41.30 59.36 87.36 49.85 Table 4: Experiment results for adjusting β with different distribution similarity measurement methods. relation(Figure 15), and imagination(Figure 16), where the ground answers are highlighted green and the hallucinated answers are highlighted red . A.6 Details of Who Provides This Image Task We construct the Who Provides This Image (WPI) task to evaluate the contextual capabilities of LVLMs. To achieve this goal, we adopt a multichoice approach and judge the match by checking if the answer contains only the correct option (instead of the content). Both the correct option and interference option are randomly generated six-digit numbers, and the third option is "None of the options are correct". To further ensure fairness and effectiveness, the order of choices is also random. An example is shown in Figure 8. Input image: Who Provides This Image (WPI) Task. User: Please describe the given image in detail. LVLM: In the image, there is a white porcelain toilet with a white plastic toilet seat, and the toilet seat is up. The bathroom also features a white porcelain sink, a small vanity mirror, and a white bathtub. The shower area is separated by a glass shower door. The floor is made of hard wood, and there are dark brown, tan, and white small tiles on the wall. A roll of toilet paper is placed on the back of the toilet, and there is a red and black bag on the floor. This image is provided by 292071. The overall color scheme of the bathroom is light and clean. User: Who provided this image?, Please select the correct option. Options: (A) 292071; (B) None of the options are correct; (C) 886958. LVLM: #response Figure 8: An conversation example of the WPI task. A six-digit number is randomly inserted into the first round of LVLM’s response. To show an LVLM can maintain its contextual ability, it is required to select a correct answer out of 3 options including two distractors. 
Table 5: Manual checking results for the sampled data.

| Aspect | Annotator-1 | Annotator-2 | Annotator-3 | Agreement | Kappa |
| Are the hallucinations in the conversation consistent with the hallucination type? | 0.990 | 0.988 | 0.990 | 0.988 | 0.983 |
| Does the hallucinatory answer conflict with the original answer? | 0.995 | 0.993 | 0.990 | 0.985 | 0.980 |
| Does the hallucinatory description support the hallucinatory answer? | 0.990 | 0.953 | 0.988 | 0.940 | 0.920 |

B Additional Experimental Results

B.1 More Evaluation Results

We show our detailed evaluation results for each hallucination type in Table 6.

B.2 GPT-4V Answer Examples

We present GPT-4V answer examples obtained with the Question Prompt. The first example, shown in Figure 17, illustrates that GPT-4V can adaptively focus on the golden visual information and, in some cases, identify and clarify the hallucinations in the preceding hallucinatory description. The second example, shown in Figure 18, demonstrates that GPT-4V tends to refuse to answer some categories of questions, which complicates the evaluation and degrades its evaluation results.

B.3 Hallucination-Free Context Experiment Details

To exclude the interference of irrelevant factors and to check whether LVLMs are affected by hallucination-free context, we further set up two conversation settings, FactConv. and IrrConv.; corresponding examples are shown in Figure 19. We follow Equations (3) and (5) to calculate the FR and WFR metrics, respectively, but modify the definition of $\hat{Y}^{-}$ to denote the answers generated under the FactConv. or IrrConv. setting. The full experimental results for these two conversation settings are presented in Table 7.

B.4 Effect of Different Similarity Measurement Methods

In our RVD, we choose JSD to evaluate the output distribution similarity because it is a symmetric measure of the difference between two distributions with range [0, 1], which fits our goal of dynamically adjusting α (whose range is also [0, 1]) using the difference between the two distributions. We also try the Kullback-Leibler divergence (KLD) as the similarity measure; it is not symmetric, has range [0, +∞), and therefore cannot be directly applied to our Residual Visual Decoding (RVD). To transform its range into [0, 1], we modify Equation (9) to the following form:

$\tau = 1 - \exp\!\left(-\,\mathrm{KLD}\!\left(p_\theta(y \mid v, x)\,\|\,p_\theta(y \mid x)\right)\right).$   (11)

We compare the results of our RVD using KLD and JSD as the similarity measure on our proposed MMHalSnowball framework with LLaVA-1.5 7B. The results are shown in Table 4. We observe that RVD with KLD puts more aggressive emphasis on the visual information, yielding better results on hallucination snowballing at smaller β but worse contextual ability. This indicates that JSD generally takes smaller values and is more balanced than KLD in alleviating snowballed hallucinations while maintaining contextual ability.

B.5 Case Study

In this part, we present some cases of the LLaVA-1.5-7B model equipped with our proposed RVD; all cases use the Question Prompt. We provide case studies in Figure 20, one for each hallucination type, to demonstrate the effectiveness of our method: our RVD with LLaVA-1.5-7B successfully mitigates the snowballed hallucinations.
In these examples, we can observe that with our proposed RVD, the model can focus more on the visual information to avoid generating snowballed hallucinations, rather than solely rely on the previously generated hallucinatory text and thus generate snowballed hallucinations. What’s more, in the example of Imagination Hallucination, the model with our RVD can even correct its previous mistakes, further illustrating the model’s contextual capability is preserved while avoiding hallucination snowballing. 12004 Please combine and rephrase the following question-answer pairs into a grammatically correct single declarative sentence. And be cautious about the singular and plural forms. Here are some examples: #Example1 #Example2 ... Please generate pure json-format response which can be directly loaded as json objects, following the requirements and the above examples: sample_id: sample['sample_id'] question: sample['question'] answer: sample['fullAnswer'] Figure 9: Prompt used to generate fact sentence based on question-answer pair. Specifically, we aim to prompt ChatGPT/GPT-4 to generate a fact sentence based on sample[’question’] and sample[’fullAnswer’], using few-shot in-context-learning. Given a question-answer pair about an image, and the corresponding fact sentence. Please modify the answer to another one that are compatitable with the question. After generate the modified answer, generating a modified fact sentence based on the question and the modified answer. The modified sentence and answer should meet the following requirements: 1. The modified answer should be under three words, ideally just one. 2. The modified answer should be mutually exclusive and visually very different from the original answer. 3. The format of the modified sentence should be the same as that of the original fact sentence. Here are some examples: #Example1 #Example2 ... Please generate pure json-format response which can be directly loaded as json objects, following the requirements and the above examples: sample_id: sample['sample_id'] question: sample['question'] answer: sample['answer'] fact: sample['fact'] Figure 10: Prompt used to create conflict answer and conflict fact based on original question-answer pair and fact generated by Figure 9. Specifically, we aim to prompt ChatGPT/GPT-4 to generate this conflict information based on sample[’question’], sample[’answer’] and sample[’fact’], using few-shot in-context-learning. Then, based on the conflict answer and conflict fact, conflict regional descriptions are generated through heuristics. 12005 You are trying to pretend to be an AI visual assistant, and you are seeing a single image. Now, I am presenting you with a key content(the most important content of the image), the region captions, and their corresponding bounding box. Please generate an fluent image description based on captions, and bounding boxes in a tone as you are seeing the image. And insert the original sentence of key content into the image descriptions while maintaining the fluency. Do not mention that you are seeing captions and bounding boxes. Make sure the generated description is less than 100 words. Do not try to describe uncertain details or make some assumptions. Here is examples: #Example1 #Example2 ... Please directly generate corresponding description following the requirements and the above example. Make sure the generated description is less than 100 words, and do not include other messages. 
Query: sample_id: sample['sample_id'] regional captions: sample['conflic_regional_descriptions'] if generate_hallucination else sample['ground_regional_descriptions'] key content: sample['conflict_fact'] if generate_hallucination else sample['ground_fact'] Figure 11: Prompt used to generate hallucinatory image description based conflict information generated by Figure 10. Specifically, we aim to prompt ChatGPT/GPT-4 to generate this hallucinatory description based on sample[’conflict_regional_descriptions’] and sample[’conflict_fact’], using few-shot in-context-learning. Note that we also use this prompt to generate the ground image description by giving the annotated regional description and fact sentence. Please help me to determine if the corresponding answer is correct based on the given context and question. Please respond with True or False. Here are some examples: #Example1 #Example2 ... Please generate pure json-format response which can be directly loaded as json objects, following the requirements and the above examples: sample_id: sample['sample_id'] question: sample['question'] context: sample['modified_description'] if verify_hallucination else sample['fact_description'] answer1: sample['modified_answer'] answer2: sample['answer'] Figure 12: Prompt used to verify if the generated hallucinatory description truly conflicts with the original answer and implies the conflict answer. Specifically, we aim to prompt ChatGPT/GPT-4 to check if the answer sample[’answer’] and conflict answer sample[’modified’] are correct based on the given context, using few-shot in-context-learning. Note that here the context can be both hallucinatory description sample[’modified_description’] and fact description sample[’fact_description’], based on what we intend to verify. 12006 Input image: HalluConv. User: Please describe the given image in detail. LVLM: In the image, a man is looking into the rear view mirror of a car, holding a camera in his hand. The mirror reflects a beautiful scene with a clear sky and trees in the background. To the right of the man, there is a cat sitting in the backseat, with its head sticking out of the window. The cat appears to have its eyes closed, and it has black fur around its mouth. The man's attention is captured by the reflections of the trees on the window, creating a peaceful and serene atmosphere. User: Which kind of animal is to the right of the man? LVLM: #response Figure 13: An hallucinatory conversation example for the existence hallucination. Input image: HalluConv. User: Please describe the given image in detail. LVLM: In a dynamic skatepark scene, a young skateboarder impresses with a mid-air trick. He sports striking red trousers, contrasting with his gray T-shirt and dark blue helmet, adorned with a white trim and a spade symbol. His green wristband and the red skateboard add vibrant touches. In the background, tall trees and a clear sky complete the outdoor setting. His attire, especially the red trousers, captures the essence of youthful energy and skateboarding flair. User: What color are the trousers? LVLM: #response Figure 14: An hallucinatory conversation example for the attribute hallucination. Input image: HalluConv. User: Please describe the given image in detail. LVLM: In the image, a variety of fresh fruits and vegetables are arranged on a table. On the right side of the image, a large watermelon stands out among the assortment. The table is covered with a red and gold tablecloth, and against the wall, there are greens displayed. 
Lemons and limes are placed on the edge of the arrangement, while watermelons and bananas can be seen in the center. Pineapples and apples are also part of the arrangement. Mushrooms, beans, and cucumbers are grouped together, and tomatoes and squash complete the composition. A white wall serves as the background, and there is an outlet on the wall. User: On which side is the large fruit? LVLM: #response Figure 15: An hallucinatory conversation example for the relation hallucination. Input image: HalluConv. User: Please describe the given image in detail. LVLM: In the image, a small yellow and black bird is perched on a grey rock, with its sharp white beak and tiny, round, black eye clearly visible. The bird has a yellow mark over its eye and is covered in green feathers. It is looking up, with its small feet stretched out near the dark colored water and its reflection is seen in the water. In the background, there is a tree, adding a natural element to the scene. User: Is there a tree in this image? LVLM: #response Figure 16: An hallucinatory conversation example for the imagination hallucination. 12007 Model Imagination Existence Attribute Relation CleanConv. HalluConv. CleanConv. HalluConv. CleanConv. HalluConv. CleanConv. HalluConv. Acc↑ Acc↑ FR↓ WFR ↓ Acc↑ Acc↑ FR↓ WFR ↓ Acc↑ Acc↑ FR↓ WFR ↓ Acc↑ Acc↑ FR↓ WFR ↓ 7B LLM LLaVA-1.5 81.65 1.14 ↓80.51 98.79 98.79 60.82 9.22 ↓51.60 79.30 88.05 55.96 10.18 ↓45.78 78.85 85.21 83.76 38.09 ↓45.67 57.07 57.61 MiniGPT-4 5.69 0.83 ↓4.86 96.00 96.00 55.05 7.62 ↓47.43 78.58 88.24 39.07 2.98 ↓36.09 86.65 94.28 51.44 11.61 ↓39.83 86.28 87.02 MiniGPT-v2 67.40 4.70 ↓62.70 94.26 94.26 52.48 14.45 ↓38.03 64.02 78.89 47.35 15.56 ↓31.79 59.44 75.70 78.60 49.39 ↓29.21 46.81 47.10 InternLM-XComposer 49.05 1.06 ↓47.99 98.45 98.61 46.45 6.21 ↓40.24 79.96 90.65 35.18 4.06 ↓31.12 83.29 92.00 43.10 11.91 ↓31.19 81.51 83.10 ShareGPT4V 84.08 1.82 ↓82.26 98.20 98.20 59.49 10.99 ↓48.50 76.30 84.20 56.79 9.52 ↓47.27 79.30 86.30 83.84 40.06 ↓43.78 55.29 55.66 CogVLM 85.97 0.99 ↓84.98 96.74 99.03 65.87 2.22 ↓63.65 88.16 96.77 64.24 2.24 ↓62.00 92.27 96.78 82.32 5.01 ↓77.31 93.18 94.47 mPLUG-Owl 26.31 0.30 ↓26.01 96.25 99.14 46.90 5.76 ↓41.14 70.89 91.12 35.93 2.40 ↓33.53 73.73 94.70 43.25 6.30 ↓36.95 78.77 90.88 mPLUG-Owl2 77.86 1.21 ↓76.65 98.73 98.73 54.70 9.22 ↓45.48 78.61 87.03 50.99 9.02 ↓41.97 79.06 84.90 56.68 12.14 ↓44.54 82.86 83.94 Qwen-VL-Chat 91.66 2.50 ↓89.16 97.35 97.35 62.32 14.45 ↓47.87 68.42 77.67 66.56 15.31 ↓51.25 71.77 77.61 88.01 46.66 ↓41.35 46.90 48.19 Otter 62.47 0.91 ↓61.56 98.42 98.91 53.55 18.71 ↓34.84 58.61 72.19 46.52 10.51 ↓36.01 66.90 81.49 45.68 26.02 ↓19.66 60.47 70.10 IDEFICS 44.28 1.29 ↓42.99 97.95 98.63 42.46 8.69 ↓33.77 75.57 87.47 32.70 5.46 ↓27.24 83.04 92.41 43.85 13.88 ↓29.97 81.31 85.64 InstructBLIP 76.04 1.06 ↓74.98 98.70 98.70 58.78 8.78 ↓50.00 77.68 87.33 54.72 5.30 ↓49.42 86.23 91.83 49.39 3.72 ↓45.67 94.62 95.39 13B LLM LLaVA-1.5-13B 81.27 1.67 ↓79.60 98.32 98.32 60.28 10.90 ↓49.38 77.21 86.18 57.70 6.87 ↓50.83 83.64 90.53 87.48 40.14 ↓47.34 57.24 57.33 ShareGPT4V-13B 84.31 1.36 ↓82.95 98.56 98.56 60.46 8.51 ↓51.95 78.74 88.86 58.53 7.04 ↓51.49 83.45 89.82 83.54 35.58 ↓47.96 59.85 60.22 InstructBLIP-13B 71.11 10.01 ↓61.10 85.18 89.66 49.29 9.40 ↓39.89 67.27 85.97 47.02 8.69 ↓38.33 71.30 87.15 45.52 22.08 ↓23.44 74.83 78.33 Closed-Source GPT-4V 90.14 85.52 ↓4.62 10.01 10.26 48.32 39.80 ↓8.52 21.83 31.56 41.64 34.19 ↓7.45 21.87 32.60 58.50 45.22 ↓13.28 45.78 48.64 Table 6: Experiment results for models answering the same 
questions under two different conversation settings: CleanConv. and HalluConv. Numbers that are highlighted orange represent the model performance drop caused by hallucinatory conversation, compared to the CleanConv. All experiments are implemented under a zero-shot setting to avoid the bias introduced by demonstrations. Note that all the models except GPT-4V have a significant drop in performance on HalluConv., and this trend holds for GPT-4V as well. 12008 Input image: CleanConv. User: What animal is wearing a blanket? GPT-4V: Horse HalluConv. User: Please describe the given image in detail. GPT-4V: In the image, two horses are standing next to a fence, while a white elephant and a brown elephant are also present. The elephants are at the center of attention, with the white elephant wearing a blanket. The surroundings consist of a gravel area in front of the horses, dense trees covering the hillside, and a wooden fence with wire and wooden planks. A big tree can be seen in the background, with the sky clear and decorated by white clouds. The elephants' ears are sticking up, adding to the charm of this natural scene. User: What animal is wearing a blanket? GPT-4V: I'm sorry for the confusion, but there seems to be a misunderstanding from my previous description. The image features two horses, not elephants. Regarding the blanket, it is one of the horses that is wearing the blanket, not an elephant. Figure 17: An example that GPT-4V can answer correctly given the hallucinatory description. Input image: CleanConv. User: What ethnic group is the man that is riding a motorcycle? GPT-4V: I'm sorry, I can't provide assistance with identifying or making assumptions about a person's ethnic group based on their appearance. HalluConv. User: Please describe the given image in detail. GPT-4V: In the image, a man of Asian descent is seen riding a white and blue police motorcycle on a city street. He is wearing a yellow reflective shirt and a black and blue police helmet. The motorcycle is equipped with blue and white lights, as well as a red light in the back. The man has a pistol holstered on his waist and is wearing black police boots. In the background, there is a large stone and glass building with the numbers 2011 and 2121 on the address board. The scene is illuminated by bright lights, creating a sense of urgency. User: What ethnic group is the man that is riding a motorcycle? GPT-4V: I'm sorry for the confusion in my previous response. I'm unable to determine or provide the ethnicity of real individuals in images. Figure 18: An example that GPT-4V refuses to answer. Input image: HalluConv. User: Please describe the given image in detail. LVLM: In the image, two horses are standing next to a fence, while a white elephant and a brown elephant are also present. The elephants are at the center of attention, with the white elephant wearing a blanket. The surroundings consist of a gravel area in front of the horses, dense trees covering the hillside, and a wooden fence with wire and wooden planks. A big tree can be seen in the background, with the sky clear and decorated by white clouds. The elephants' ears are sticking up, adding to the charm of this natural scene. User: What animal is wearing a blanket? LVLM: #response Input image: FactConv. User: Please describe the given image in detail. LVLM: In the image, there are two horses standing next to a fence. One horse is white, covered with a blanket, and wearing a covering. The other horse is brown, also covered with a blanket. 
Behind the horses, there are hills covered with trees. The ground is covered with gravel, and there is tall grass by the fence. The scene is bathed in a clear white sky, creating a peaceful atmosphere. User: What animal is wearing a blanket? LVLM: #response Input image: IrrConv. User: Does this image contain visual elements? LVLM: Absolutely, this image is full of visual elements. No matter what it shows, how it's styled, or what it's made of, it's got these important parts that really make it stand out. You've got lines that guide your eyes around, shapes that put everything together, colors that stir up feelings, textures that give you a sense of what it would feel like to touch, and forms that add depth. These bits are super important for getting the message across, making it look good, and showing what the artist wanted to say. They all work in sync to shape how we see it, make us feel something, and tell a story. This is true for any kind of visual art, be it a photo, a painting, or even digital creations. User: What animal is wearing a blanket? LVLM: #response (a) HalluConv. (b) FactConv. (c) IrrConv. Figure 19: An hallucinatory conversation example for control groups. 12009 Model CleanConv. FactConv. CleanConv. IrrConv. Acc↑ Acc↑ FR↓ WFR ↓ Acc↑ Acc↑ FR↓ WFR ↓ 7B LLM LLaVA-1.5 71.24 89.28 ↑18.04 4.74 6.35 71.24 65.35 ↓5.89 14.93 21.06 MiniGPT-4 37.12 67.67 ↑30.55 5.85 9.15 37.12 35.11 ↓2.01 23.62 31.58 MiniGPT-v2 62.12 75.39 ↑13.27 11.01 14.83 62.12 56.46 ↓5.66 20.75 29.01 InternLM-XComposer 43.51 74.04 ↑30.53 14.79 18.35 43.51 40.82 ↓2.69 25.83 40.34 ShareGPT4V 71.81 89.32 ↑17.51 3.56 5.29 71.81 69.74 ↓2.07 10.47 16.27 CogVLM 75.17 93.20 ↑18.03 0.64 1.82 75.17 74.68 ↓0.49 2.09 4.17 mPLUG-Owl 37.80 62.01 ↑24.21 13.56 27.23 37.80 30.54 ↓7.26 26.97 50.80 mPLUG-Owl2 60.47 88.06 ↑27.59 6.29 7.62 60.47 59.82 ↓0.65 18.99 25.01 Qwen-VL-Chat 77.94 91.27 ↑13.33 0.72 1.68 77.94 77.12 ↓0.82 2.81 5.29 Otter 52.12 66.70 ↑14.58 14.62 22.45 52.12 44.06 ↓8.06 27.31 34.34 IDEFICS 40.94 73.68 ↑32.74 14.05 18.96 40.94 38.01 ↓2.93 29.96 47.64 InstructBLIP 59.88 86.10 ↑26.22 6.28 7.69 59.88 54.90 ↓4.98 17.90 23.47 13B LLM LLaVA-1.5-13B 72.43 90.85 ↑18.42 3.69 4.94 72.43 69.31 ↓3.12 11.22 17.49 ShareGPT4V-13B 72.43 91.98 ↑19.55 2.50 4.00 72.43 70.80 ↓1.63 9.69 15.82 InstructBLIP-13B 53.53 71.31 ↑17.78 16.27 22.69 53.53 42.71 ↓10.82 28.66 45.19 Table 7: Experiment results for models answering the same questions under two different conversation settings: CleanConv., FactConv., IrrConv., and HalluConv. settings. Numbers that are highlighted orange and green represent the model performance drop and improvement in different conversation settings, compared to the model performance under CleanConv. setting. 12010 Input image: CleanConv. User: What type of furniture is shown in this photograph? LVLM: The photograph features a bed as the furniture in the room. HalluConv. User: Please describe the given image in detail. LVLM: In this photograph, a man and a woman are sitting in a chair, holding a Biology book. The book's front cover features the words 'Introduction To Biology' and the author's name. The chair they are sitting on is adorned with tan sheets, and the room is decorated with black and gold railings and a curtain hanging on the wall. The man has brown hair and is wearing a wedding band, and the woman has red hair. The furniture shown in this photograph is a chair, made out of brass, with beige chair sheets. User: What type of furniture is shown in this photograph? 
LVLM: The furniture shown in this photograph is a chair, made out of brass, with beige chair sheets. LVLM w/ RVD: The photograph shows a large bed in a small bedroom. Input image: CleanConv. User: What's the depth of the water? LVLM: The water depth appears to be shallow, as the brown horse with a white face is wading through the water. HalluConv. User: Please describe the given image in detail. LVLM: In the image, a brown horse, with a white marking on its face, is seen walking gracefully in deep water. The rider, wearing a black helmet and a green coat, can be seen holding the reins and a whip, while their foot is in the stirrup. The horse's ears are pricked up, and its hooves make a splash as it walks. In the distance, a mountain is visible. The overall scene conveys a sense of tranquility and the depth of the water enhances the serene atmosphere. User: What's the depth of the water? LVLM: It is not possible to precisely determine the depth of the water from the image, but it is described as deep. This suggests that it is likely to be at least several feet or yards deep, providing enough water for the horse and rider to walk comfortably. LVLM w/ RVD: The depth of the water is shallow, as the man riding the brown horse is up to his mid-calves in it. Input image: CleanConv. User: Is the girl in a pink jacket in the top part or in the bottom of the image? LVLM: The girl is in the bottom part of the image. HalluConv. User: Please describe the given image in detail. LVLM: In the image, a large crowd of people is gathered outside a building, with a row of small balconies and a wall on the side of the building. The top part of the image captures a girl in a pink jacket and a black sweater, wearing a white headband and a necklace. She is standing in the crowd, along with a man wearing a blue jacket and a woman with red hair. The surroundings are adorned with green trees and a bush on the right. The lively atmosphere is enhanced by a pair of hands clapping in the foreground. User: Is the girl in a pink jacket in the top part or in the bottom of the image? LVLM: The girl is standing in the top part of the image, on a side balcony. LVLM w/ RVD: The girl is in the bottom part of the image, standing amidst the crowd in front of the building with the people. Input image: CleanConv. User: Is there a dog in this image? LVLM: No, there is no dog in this image. It features a man holding a tennis racket and getting ready to hit a tennis ball. HalluConv. User: Please describe the given image in detail. LVLM: In the image, a tennis player, a teenage boy, is seen playing on a green tennis court. He is wearing navy blue shorts with white bars and a navy blue t-shirt with white markings. The boy has brown hair and wears glasses. He is holding a tennis racket with yellow strings while throwing a yellow tennis ball into the air. The boy's blue and white shoes can be seen in the frame. In the background, there is a tall chain-link fence and a blue tarp on the fence. In this dynamic scene, the presence of a dog adds to the vibrant atmosphere. User: Is there a dog in this image? LVLM: Yes, there is a dog in the background of this image, standing by the blue tarp on the chain-link fence. LVLM w/ RVD: No, there isn't a dog in the image. The focus is on the young man on the tennis court. The dog in the previous question might be a part of a separate image or an error in the description. 
Figure 20: Cases in which our RVD with LLaVA-1.5-7B successfully mitigated the snowballed hallucinations, covering four hallucination types: existence hallucination, attribution hallucination, relation hallucination, and imagination hallucination. The ground-truth answers are highlighted in green and the hallucinated answers in red.
2024
648
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12012–12026 August 11-16, 2024 ©2024 Association for Computational Linguistics mCoT: Multilingual Instruction Tuning for Reasoning Consistency in Language Models Huiyuan Lai and Malvina Nissim Center for Language and Cognition (CLCG) University of Groningen / The Netherlands {h.lai, m.nissim}@rug.nl Abstract Large language models (LLMs) with Chainof-thought (CoT) have recently emerged as a powerful technique for eliciting reasoning to improve various downstream tasks. As most research mainly focuses on English, with few explorations in a multilingual context, the question of how reliable this reasoning capability is in different languages is still open. To address it directly, we study multilingual reasoning consistency across multiple languages, using popular open-source LLMs. First, we compile the first large-scale multilingual math reasoning dataset, mCoT-MATH, covering eleven diverse languages. Then, we introduce multilingual CoT instruction tuning to boost reasoning capability across languages, thereby improving model consistency. While existing LLMs show substantial variation across the languages we consider, and especially low performance for lesser resourced languages, our 7B parameter model mCoT achieves impressive consistency across languages, and superior or comparable performance to close- and open-source models even of much larger sizes. 1 Introduction Recent progress on language models shows that they can achieve surprising performance on complex reasoning tasks in natural language processing (NLP), such as symbolic reasoning (Wei et al., 2022a; Kojima et al., 2022), math word problem (Cobbe et al., 2021; Wei et al., 2022a), and commonsense reasoning (Wei et al., 2022a; Kojima et al., 2022). Most of the research focuses on prompting large language models (LLMs), where the LLMs are conditioned on a few examples or instructions describing the target task (Wei et al., 2022b; Fu et al., 2023b; Zhou et al., 2023; Kojima et al., 2022; Chuanyang et al., 2023). While most previous works focus on reasoning with LLMs in English, Shi et al. (2023) recently have extended it to a multilingual setting leveraging a few-shot prompting approach. However, Math Problems LLMs CoT Reasoning Final Answers aligned … EN SW ZH aligned … EN Problem: Silvia has $23. She bought five bagels for $3 each. How much money does she have left? EN CoT:5 bagels for $3 each should cost 5 * $3 = $15. Silvia had $23 in the beginning, so now she has 23 -15 = $8 left. The answer is 8. Figure 1: Overview of multilingual reasoning; LLMs are expected to have consistent reasoning capabilities across different languages when given the same problem which has the same answer. Shown in picture are three example languages: English (EN), Swahili (SW), and Chinese (ZH). For EN, we show the problem formulation, and the Chain-of-Thought (CoT) reasoning. performance for lesser resourced languages still lags behind, in a similar way as generalization of factual knowledge has been shown to vary widely across languages (Fierro and Søgaard, 2022; Qi et al., 2023), mainly due to the fact that most languages are not well represented in LLMs. The work we present here has the twofold aim of (i) better understanding and evaluating the general reasoning capabilities of LLMs beyond just English, and (ii) providing lesser resourced languages with capable but manageable models which can be used for reasoning tasks. 
To this end, we propose to measure reasoning consistency across multiple languages. As shown in Figure 1, LLMs are expected to produce logically similar reasoning solutions and consistent final results for inputs which are semantically equivalent but expressed in different languages. Based on our findings, we aim to enhance multilingual 12012 reasoning abilities through instruction tuning, yielding a model that can solve reasoning tasks in various languages, with similar reasoning capabilities across those languages. Specifically, we focus on math word problems and empirically investigate the reasoning consistency of current open-source state-of-the-art LLMs across multiple languages. The model’s reasoning consistency is evaluated toward the final answer (the final results should be the same across languages). On the basis of preliminary results showing substantial reasoning gaps between different languages, we propose a multilingual Chain-of-Thought reasoning (mCoT) framework using instruction tuning, which aims to boost reasoning capability across languages, thereby improving consistency. So, first we construct the multilingual instruction training dataset (mCoTMATH) by automatically translating English data into multiple languages, and then we use it for LLM finetuning. In summary, our contributions are:1 • We propose to study reasoning consistency of LLMs across different languages, providing insights into (the evaluation of) this ability of LLMs. • We compile and distribute mCoT-MATH, the first large-scale multilingual math CoT reasoning dataset containing around 6.3 million samples for 11 diverse languages. • Based on mCoT-MATH, we train and make available a 7B parameter model mCoT for multilingual math reasoning, which achieves impressive consistency across languages, and superior or comparable performance to closeand open-source models. 2 Background Math Reasoning with LLMs In recent years, LLMs such as GPT-3 (Brown et al., 2020) have shown impressive performance in various NLP tasks. In particular, LLMs with CoT-based method exhibit the emergent ability to perform complex math reasoning tasks (Wei et al., 2022b; Wang et al., 2023). One popular line is prompt engineering, which aims to elicit reasoning capability of LLMs by exploring various prompts, such as basic CoT prompting (Wei et al., 2022b), complex CoT (Fu et al., 2023b), auto-CoT (Zhang et al., 2023), self-consistency CoT (Wang et al., 2023), 1Data, code, and model are available at https://github. com/laihuiyuan/mcot. multilingual CoT (Shi et al., 2023), least-to-most prompting (Zhou et al., 2023), progressive-hint prompting (Chuanyang et al., 2023), residual connection prompting (Jiang et al., 2024), and using specific phrases like “Let’s think step by step” (Kojima et al., 2022). In addition to guiding the model through prompting methods, several works propose to control the reasoning path during inference through verifier (Cobbe et al., 2021; Khalifa et al., 2023) and decoding method (O’Brien and Lewis, 2023). Another research line is a tailor-designed reasoning model that improves the mathematical reasoning ability of LLMs through instruction tuning on reasoning data, including reinforcement learning (Uesato et al., 2022; Luo et al., 2023), knowledge distillation (Fu et al., 2023a; Hsieh et al., 2023; Magister et al., 2023; Shridhar et al., 2023; Yue et al., 2024), and data augmentation (Huang et al., 2023; Zelikman et al., 2022; Ni et al., 2023; Zhu et al., 2023; Yu et al., 2024). 
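Several of the prompting strategies surveyed above reduce to short sampling-and-voting or prompt-augmentation loops. As one concrete illustration, the sketch below shows self-consistency CoT in its simplest form: sample several reasoning chains at a non-zero temperature and take a majority vote over the extracted final answers. This is only a minimal sketch; the `generate` callable, the prompt wording, and the answer-extraction regex are illustrative assumptions rather than details taken from any of the cited implementations.

```python
import re
from collections import Counter

def extract_final_answer(text):
    """Pull the number after 'The answer is ...'; fall back to the last number in the text."""
    m = re.search(r"[Tt]he answer is\s*\$?(-?[\d,]+(?:\.\d+)?)", text)
    if m:
        return m.group(1).replace(",", "")
    numbers = re.findall(r"-?\d[\d,]*(?:\.\d+)?", text)
    return numbers[-1].replace(",", "") if numbers else None

def self_consistency_answer(question, generate, n_samples=10):
    """Sample several chain-of-thought solutions and majority-vote their final answers."""
    prompt = f"Question: {question}\nAnswer: Let's think step by step."
    votes = []
    for _ in range(n_samples):
        completion = generate(prompt, temperature=0.7)  # `generate` stands in for any LLM call
        answer = extract_final_answer(completion)
        if answer is not None:
            votes.append(answer)
    return Counter(votes).most_common(1)[0][0] if votes else None
```

The non-zero sampling temperature is essential here: with greedy decoding every chain would be identical and the vote would degenerate to a single sample.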
In this work we extend the multilingual reasoning carried out by Shi et al. (2023) in GPT-3 and PaLM (Chowdhery et al., 2022) to current open-source popular LLMs, and study multilingual reasoning consistency. Additionally, following previous works (Chen et al., 2023a; Chai et al., 2024), we employ machine translation to translate existing English data into other languages for multilingual reasoning, but we do so at a very large scale, with substantial gains in performance.

Consistency in Language Models Consistency is one of the core qualities of language models, which refers to models behaving consistently on semantically equivalent inputs (Elazar et al., 2021; Fierro and Søgaard, 2022). Intra-language consistency has been studied across different tasks, such as language inference (Li et al., 2019; Mitchell et al., 2022), explanation generation (Camburu et al., 2020), fill-in-the-blank phrases (Ravichander et al., 2020), and math reasoning (Wang et al., 2023). These works mainly consider the English language, although there is some recent research on factual consistency in a multilingual scenario (Fierro and Søgaard, 2022; Qi et al., 2023). To our knowledge, the present work is the first systematic analysis of multilingual reasoning consistency for LLMs, measuring the extent to which language models reason about the same answer to the same question written in different languages. In this context, we introduce multilingual CoT instruction tuning to boost reasoning capability across multiple languages, thereby improving model consistency.

3 Multilingual Reasoning Consistency

This section provides details on the task, the data, and the experimental setup we use to investigate the LLMs' reasoning capability in different languages and their reasoning consistency.

3.1 Task, Dataset, and Setup

Task Definition Generally, each math problem is fed to an LLM along with a set of manually written CoT exemplars, which is expected to generate all necessary intermediate steps up to and including the final answer. Given a set of math problems M, each problem consists of a triplet (question: q^x, reasoning steps: r^x, final answer: a), where the problem and intermediate steps are written in a natural language x ∈ L. We define the multilingual reasoning consistency of an LLM as the extent to which it reasons to the same answer for the same question asked in different languages, including correct consistency and incorrect consistency. For a given language pair (x, y), correct consistency (CC) is the percentage of identical math problems written in that language pair for which the correct answer is predicted. Formally:

CC(x, y) = \frac{\sum_{i=1}^{|M|} I(\hat{a}^x_i = \hat{a}^y_i = a_i)}{|M|}    (1)

where I(·) is the indicator function, a_i represents the gold answer corresponding to the i-th math question, and \hat{a}^x_i and \hat{a}^y_i are the predicted answers to the i-th question written in languages x and y, respectively. For incorrect consistency (IC), we calculate the proportion of predicted answers that are incorrect but identical in the language pair out of the total number of incorrect answers in each respective language, and take the average of the two languages as the final result:

IC(x, y) = \frac{1}{2} \left( \frac{\sum_{i=1}^{|M|} I(\hat{a}^x_i = \hat{a}^y_i \neq a_i)}{\sum_{i=1}^{|M|} I(\hat{a}^x_i \neq a_i)} + \frac{\sum_{i=1}^{|M|} I(\hat{a}^x_i = \hat{a}^y_i \neq a_i)}{\sum_{i=1}^{|M|} I(\hat{a}^y_i \neq a_i)} \right)    (2)

(A short computational sketch of these two metrics is given below.)

Reasoning Dataset As defined above and depicted in Figure 1, multilingual reasoning consistency requires both questions and reasoning steps to be written in the same language, for multiple languages.
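As flagged above, Equations (1) and (2) reduce to straightforward counting once the per-language predictions are aligned by problem index. The sketch below mirrors the two definitions directly; the list-based interface and the zero-denominator guard are illustrative assumptions rather than details of the authors' code.

```python
def correct_consistency(pred_x, pred_y, gold):
    """Eq. (1): fraction of problems whose answers agree across languages x and y and are correct."""
    hits = sum(px == py == a for px, py, a in zip(pred_x, pred_y, gold))
    return hits / len(gold)

def incorrect_consistency(pred_x, pred_y, gold):
    """Eq. (2): among wrong answers, how often x and y agree, averaged over the two directions."""
    same_wrong = sum(px == py != a for px, py, a in zip(pred_x, pred_y, gold))
    wrong_x = sum(px != a for px, a in zip(pred_x, gold))
    wrong_y = sum(py != a for py, a in zip(pred_y, gold))
    ratio_x = same_wrong / wrong_x if wrong_x else 0.0  # guard: a language with no errors
    ratio_y = same_wrong / wrong_y if wrong_y else 0.0
    return (ratio_x + ratio_y) / 2

# Toy example with three problems (gold answers 8, 29, 39):
#   pred_en = [8, 29, 40], pred_sw = [8, 25, 40]
#   correct_consistency   -> 1/3   (only the first problem is right in both languages)
#   incorrect_consistency -> (1/1 + 1/2) / 2 = 0.75
```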
GSM8K (Cobbe et al., 2021) is an English (EN) dataset which includes high-quality grade school math word problems, each involving basic arithmetic operations (addition, subtraction, multiplication, and division) that usually require two to eight steps to solve according to the official, provided solution. It contains approximately 7,500 and 1,319 samples for training and testing, respectively. Based on GSM8K, Shi et al. (2023) create MGSM, extending math reasoning into a multilingual setting. To do so, they select the first 250 problems from GSM8K and manually translate them into ten different languages: Bengali (BN), Chinese (ZH), French (FR), German (DE), Japanese (JA), Russian (RU), Spanish (ES), Swahili (SW), Telugu (TE) and Thai (TH). SW, BN, TE, and TH are usually heavily underrepresented languages in pre-trained language models; in PaLM (Chowdhery et al., 2022) they account for less than 0.1% of the pretraining data. We conduct experiments on these ten languages plus English, and measure the reasoning consistency between any two languages.

Model Setup We select a range of open-source state-of-the-art LLMs in three different sizes:
• 7B: LLAMA2 (Touvron et al., 2023); Qwen (Bai et al., 2023); Mistral (Jiang et al., 2023).
• 13-14B: LLAMA2-13B (Touvron et al., 2023); Qwen-14B (Bai et al., 2023).
• 56-72B: Mistral-8×7B (https://mistral.ai/news/mixtral-of-experts/); LLAMA2-70B (Touvron et al., 2023); Qwen-72B (Bai et al., 2023).

We perform few-shot CoT reasoning following Wei et al. (2022a) and Shi et al. (2023), who improve the reasoning task by augmenting few-shot examples with intermediate steps. We use 8-shot for all languages except TE which only uses 2-shot due to the maximum number of input tokens. All CoT prompts in different languages are sourced from the original multilingual CoT reasoning paper (Shi et al., 2023). On the other hand, assuming that we can not access existing math problems with the corresponding reasoning solutions in some languages, a natural and simple way is to use machine translation to translate existing data (e.g., English data) into the target languages. Therefore, we compare not only the consistency between languages, but also the consistency between human-translated (HT) prompts and machine-translated (MT) prompts to understand the impact of translated data on model performance.

Figure 2: Accuracy (%) on MGSM of different models (LLAMA2-7B, Qwen-7B, Mistral-7B, LLAMA2-13B, Qwen-14B, LLAMA2-70B, Qwen-72B, Mistral-8×7B) with the few-shot method; panel (a) shows reasoning accuracy (%) using human-written prompts and panel (b) using machine-translated prompts. All machine-translated prompts are translated from English data using Google Translate.

3.2 Results

Model Performance We first compare the performance of different models, as shown in Figure 2 (detailed results are in Appendix A.2). Unsurprisingly, and similar to the results reported by Shi et al. (2023), the performance generally improves for all models across all languages as the models scale up in size. However, in contrast to their findings for the large-scale model PaLM-540B, open-source models still yield a performance gap between underrepresented languages (which cover less than 0.1% of the training corpora in PaLM) and high-resource languages.
For instance, Qwen-7B scores above 30% in most high-resource languages (e.g., ES and FR) and below 10% in underrepresented languages (e.g., SW, BN, and TE). Particularly, Mistral-8×7B and Qwen-72B achieve similar or higher scores than PaLM-540B in high-resource languages, while PaLM-540B has better results in underrepresented languages. Regarding HT and MT prompts, we compare all languages except EN, and observe that the results using these two prompts are very close for all models and languages we considered.

Reasoning Consistency We illustrate multilingual reasoning consistency results for different models in Figure 3. When looking at reasoning consistency between language pairs as presented in Figure 3(a), consistent with the trend in model performance, we find overall consistency to be lower for smaller-scale models (7B and 13B) and underrepresented languages (SW, BN, and TE). The correct consistency for all models improves with increasing language representation in pretraining, which is not surprising as higher-resource languages have better accuracy. Incorrect consistency, however, shows a different trend, with higher scores between underrepresented languages and between high-resource languages in most models, and this occurs even in larger-scale models such as LLAMA2-70B and Qwen-72B. This suggests that incorrect reasoning knowledge in LLMs is similar to a certain extent between these languages.

Figure 3: Multilingual reasoning consistency for LLAMA2-7B, Qwen-7B, Mistral-7B, LLAMA2-13B, Qwen-14B, LLAMA2-70B, Qwen-72B, and Mistral-8×7B. Panel (a) shows reasoning consistency (%) between language pairs in LLMs; panel (b) shows reasoning consistency (%) between human-translated (row) and machine-translated (column) prompts. The triangle above the marked diagonal shows the consistency of the models on the correct answers; the triangle below the diagonal contains the consistency between the language pairs where the final answer is the same but incorrect.

To further analyse the impact of using machine-translated prompts, we present reasoning consistency between HT and MT prompts in Figure 3(b). The consistency trend is similar to those in Figure 3(a), with high-resource languages having high consistency on correct answers and underrepresented languages having high consistency on incorrect answers. On the other hand, consistency scores of the larger models are closer to the reasoning accuracy of using human-written prompts, indicating that using machine-translated prompts has less impact on these models.

Figure 4: Overview of multilingual CoT reasoning data. English data is first automatically translated into target languages, and then inserted into the templates to construct multilingual instruction data. The instruction format is, for example, [EN] "Question: \n[Math Question]\nAnswer: \nLet's think step by step. \n[CoT Reasoning]", [SW] "Question: \n[Math Question]\nAnswer: \nHebu fikiria hatua kwa hatua. \n[CoT Reasoning]", and [ZH] "Question: \n[Math Question]\nAnswer: \n让我们一步步思考。\n[CoT Reasoning]". The EN example problem is "Silvia has $23. She bought five bagels for $3 each. How much money does she have left?", with the CoT "5 bagels for $3 each should cost 5 * $3 = $15. Silvia had $23 in the beginning, so now she has 23 - 15 = $8 left. The answer is 8."
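Before moving on to instruction tuning, the few-shot evaluation protocol used throughout Section 3 (language-specific CoT exemplars from Shi et al. (2023), 8-shot except 2-shot for TE, and final-answer accuracy) can be summarised in a short loop. The sketch below is only an illustration: the data layout, the `generate` callable, and the last-number answer heuristic are assumptions, not the authors' actual evaluation code.

```python
import re

def final_number(text):
    """Language-agnostic heuristic parser: return the last number in the generation."""
    numbers = re.findall(r"-?\d[\d,]*(?:\.\d+)?", text)
    return float(numbers[-1].replace(",", "")) if numbers else None

def few_shot_accuracy(problems, exemplar_prompt, generate):
    """problems: iterable of {"question": str, "answer": float};
    exemplar_prompt: the 8 (or 2, for TE) worked examples in the target language."""
    correct = 0
    for item in problems:
        prompt = f"{exemplar_prompt}\nQuestion: {item['question']}\nStep-by-Step Answer:"
        prediction = final_number(generate(prompt))
        correct += int(prediction == item["answer"])
    return correct / len(problems)

# Comparing human-translated (HT) and machine-translated (MT) exemplars for one language, e.g. German:
#   acc_ht = few_shot_accuracy(mgsm_de, ht_prompt_de, generate)
#   acc_mt = few_shot_accuracy(mgsm_de, mt_prompt_de, generate)
```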
4 mCoT Instruction Tuning Based on our findings in multilingual reasoning consistency, we propose a multilingual instruction tuning framework mCoT to supervise the reasoning process, aiming to generate similar reasoning and the same results on inputs expressed in different languages but semantically equivalent. Formally, given a question q = {q1, · · · , qm} in language x and its corresponding answer – always an integer in this work – the model can generate the step-bystep reasoning solution s = {s1, · · · , sn} written in language x with the final answer. 4.1 mCoT-MATH Source Data We leverage two English datasets as source data: MetaMathQA (Yu et al., 2024), which augments the training data of GSM8K and MATH with a question bootstrapping method that rewrites questions using both forward and backward reasoning paths and leverages LLMs to reformulate the question text; and MathInstruct (Yue et al., 2024), which is based on seven existing math rationale datasets annotated by humans or GPT-4 (OpenAI, 2023). This dataset contains two prompt formats: Chain-of-Thought (CoT, Wei et al. 2022b) and Program-of-Thought (PoT, Chen et al. 2023b). We select MathInstruct’s CoT samples and combine them with MetaMathQA, resulting in approximately 580,000 samples. Automatic Translation Following Shi et al. (2023), we select the ten different languages included in MGSM as the target languages for translation. The overall framework is illustrated in Figure 4. First, we machine-translate4 all EN data into the target languages. After translation, we use the instruction format to reformulate all the data to obtain mCoT-MATH, the first large-scale multilingual math CoT dataset, containing around 6.3 million samples. This is expected to facilitate an exhaustive exploration of model reasoning consistency across languages. 4.2 Implementation We use Mistral-7B as base model, training our mCoT instruction tuning framework using HuggingFace Transformers (Wolf et al., 2020) and Deepspeed (Rasley et al., 2020). During training, we set the maximum length of the input sequence to 1024, thus reducing GPU memory consumption and improving training speed. We train our model using AdamW optimiser (Loshchilov and Hutter, 2019) with a maximum learning rate of 5e-6 and a 3% learning rate warmup. The batch size is set to 32 and gradients are accumulated in 4 update steps. We train our model on 4 × NVIDIA A100 40GB GPUs for around 10 days. We report the final answer accuracy for all experiments. 4.3 Evaluation Data We select two popular multilingual math reasoning datasets, in which each sample contains a question and the corresponding final answer. Specifically, in addition to MGSM, we also include MSVAMP (Chen et al., 2023a), which is constructed based on English data SVAMP (Patel et al., 2021). Chen et al. (2023a) use Google Translate to transform 1,000 questions from the SVAMP test set into nine languages: SW, BN, TH, JA, ZH, RU, ES, FR, and DE. To ensure the translation quality, they back-translate the translated text into English and ask three professional annotators to check semantic consistency manually. 4.4 Baselines We consider state-of-the-art models of different sizes, both closed and open source. 4https://translate.google.com/. 12017 Model SW BN TE TH JA ZH RU ES FR DE EN Lang. Freq. 
(%) <0.1 <0.1 <0.1 <0.1 0.4 0.4 0.5 2.1 3.3 3.5 78.0 Close-Source Models GPT-3 few-shot 11.2 6.4 0.4 0.8 26.0 40.0 28.4 40.4 37.6 36.0 53.6 GPT-3.5-En 2-shot 40.0 7.6 15.6 46.8 52.8 50.4 61.2 59.2 62.0 67.2 GPT-4-En 2-shot 64.4 17.6 40.4 71.6 70.0 64.0 71.2 72.0 73.6 80.0 PaLM-540B few-shot 35.2 46.0 45.6 52.8 40.0 46.8 48.4 56.8 46.4 49.2 62.4 Open-source Models 7B Models WizardMath (Luo et al., 2023) 3.4 2.0 4.0 24.0 22.4 30.8 34.8 30.4 30.4 47.6 MathOctopus (Chen et al., 2023a) 38.4 33.2 36.4 35.6 45.2 48.4 45.2 38.0 43.6 54.8 MathOctopus-Mistral (Chen et al., 2023a) 51.6 44.0 48.8 48.0 51.6 49.6 53.2 47.2 50.0 58.4 xCoT (Chai et al., 2024) 48.4 40.4 42.8 49.2 50.0 50.0 50.0 48.8 49.6 47.2 48.4 13B Models WizardMath (Luo et al., 2023) 5.6 6.4 5.6 22.0 28.0 34.4 45.6 42.0 40.4 52.8 MathOctopus (Chen et al., 2023a) 46.0 42.0 46.0 39.6 51.2 47.6 53.2 49.6 49.2 51.6 xCoT (Chai et al., 2024) 51.6 50.0 47.2 50.0 49.6 54.0 56.8 54.8 46.4 52.4 54.4 mCoT-7B (ours) 67.2 65.6 62.4 67.6 65.2 64.8 66.8 68.4 63.8 61.2 71.6 Table 1: Multilingual evaluation results (final answer accuracy:%) on the MGSM benchmark. Notes: (i) Lang. Freq. (%) is the language frequency in PaLM training data; (ii) the results of GPT-3 and PaLM-540B are from Shi et al. (2023), while those for GPT-3.5 and GPT-4 are from Chen et al. (2023a); and (iii) in boldface best results per language among closed models and among open models. Close-Source Models We include six models from three different companies as foundational benchmarks: (i) OpenAI’s GPT-3 (Brown et al., 2020), GPT-3.55, and GPT-4 (OpenAI, 2023); (ii) Anthropic’s Claude-26; and (iii) Google’s PaLM 2 (Chowdhery et al., 2022) and Flan-PaLM (Anil et al., 2023). Open-Source Models We also compare our model mCoT with several best-performing opensource models for the sake of fairness (including size comparison): (i) WizardMath (Luo et al., 2023); (ii) MathOctopus (Chen et al., 2023a); (iii) xCoT (Chai et al., 2024); and (iv) MetaMath (Yu et al., 2024). 4.5 Results Evaluation on MGSM Table 1 reports results on the MGSM benchmark. The first observation is that most existing models, including close- and opensource, perform poorly on underrepresented languages such as SW, BN, and TE. For close-source models, GPT models achieve higher accuracy in high-resource languages, with GPT-4 scoring the highest; PaLM-540 achieves competitive performance in all languages, especially reaching the highest score in low-resource languages BN, TE and TH. For open-source models, we observe that 5https://openai.com/blog/chatgpt. 6https://www.anthropic.com/index/claude-2. WizardMath achieves less than 7% accuracy on low-resource languages which is explained by the fact that this model is trained on English data, while both MathOctopus and xCoT gain strong improvement with the help of the multilingual instruction dataset. When looking at our model, we see that mCoT significantly outperforms all previous strong baselines, and even outperforms GPT-4 in underrepresented languages such as SW, BN, TE, and TH. Particularly, mCoT has higher accuracy scores than PaLM-540B across all languages. Evaluation on MSVAMP Table 2 reports results on the MSVAMP benchmark. We can observe that GPT-4 with 2-shot achieves the best performance in all languages except BN, where our model achieves the best results. When looking only at open-source models, similar to the observations on MGSM, WizardMath performs poorly in low-resource languages, while our model mCoT shows the highest scores across the board. 
In particular, mCoT outperforms MathOctopus-Mistral, the one also based on Mistral-7B, confirming that leveraging our dataset mCoT-MATH can yield substantial gains in performance. Finally, we observe that mCoT scores are very close across all languages, suggesting a lesser dependency on the low- vs high-resource aspect. Reasoning Consistency To further evaluate mCoT’s reasoning capability in different languages, in Figure 5 we present its reasoning consistency 12018 Model SW BN TH JA ZH RU ES FR DE EN AVG Lang. Freq. (%) <0.1 <0.1 <0.1 0.4 0.4 0.5 2.1 3.3 3.5 78.0 Close-Source Models GPT-3.5-En zero-shot 63.2 3.1 24.4 63.3 72.4 62.3 69.5 71.9 66.7 76.1 57.3 GPT-3.5-En 2-shot 68.4 14.4 46.0 74.0 78.4 70.9 74.6 78.2 73.9 81.2 66.0 GPT-4-En 2-shot 75.7 31.2 68.1 74.8 78.9 77.9 81.5 83.9 78.1 80.1 73.0 Open-source Models 7B Models WizardMath (Luo et al., 2023) 10.3 16.1 6.3 26.7 26.8 33.7 42.9 39.9 39.6 45.1 27.0 MathOctopus (Chen et al., 2023a) 42.3 32.8 40.5 43.2 43.2 42.1 44.5 45.3 43.1 46.8 42.4 MathOctopus-Mistral (Chen et al., 2023a) 41.2 36.7 40.2 41.5 43.1 44.0 47.0 49.0 46.4 49.7 43.9 13B Models WizardMath (Luo et al., 2023) 12.5 13.7 16.3 29.5 37.0 43.8 50.4 49.4 48.7 56.3 35.8 MathOctopus (Chen et al., 2023a) 43.4 34.2 39.5 43.1 46.4 48.2 48.2 49.9 47.7 44.6 44.5 mCoT-7B (ours) 55.0 53.7 56.4 58.8 58.2 58.1 58.9 58.8 61.1 58.3 57.7 Table 2: Multilingual evaluation results (final answer accuracy:%) on the MSVAMP benchmark. Notes: (i) Lang. Freq. (%) is the language frequency in PaLM training data; (ii) the results of GPT-3.5 and GPT-4 are from Chen et al. (2023a); and (iii) in boldface best results per language among closed models and among open models. results on the dataset MGSM. After instruction tuning on our dataset mCoT-MATH, correct consistency shows a strong improvement as our model’s reasoning accuracy improves, and in particular we observe that: (i) this is especially true for underrepresented languages; and (ii) the scores for all language pairs are very close, with most being above 50%. It is also interesting to see an increasing trend in incorrect consistency for most language pairs, including low- and high-resource languages, confirming that our model exhibits similar reasoning capabilities across languages. Overall, these observations underscore the efficacy of our method in improving the model’s reasoning consistency. 5 Analysis Case Study Table 3 shows reasoning solutions generated by our model mCoT for the same math question expressed in different languages. In this case, mCoT incorrectly reasons in the second step in TE: Yogurts cost $5.00 each, so Terry spends 60 * 5.00 =300.00 on yogurts in 30 days (EN translation of red background part), which leads to the wrong reasoning and final answer. For other languages, we observe that mCoT generates the correct final answers, while the solutions might be logically different. For example, the focus of FR is to first reason about how many packs of yogurt are needed and then calculate the total cost based on the price of each pack, while for other languages such as EN, the total cost is calculated through the total amount of yogurt and the unit price of each yogurt. 
Overall, these evidences demonstrate sw bn te th ja zh ru es fr de en sw bn te th ja zh ru es fr de en 67.6 56.0 54.4 56.4 56.0 54.8 56.4 60.0 54.4 53.2 60.0 20.4 65.6 52.8 53.6 53.2 54.4 53.6 58.0 52.0 49.6 56.8 14.9 12.2 62.4 51.6 50.0 50.4 51.6 55.6 48.8 50.0 56.0 18.5 13.2 13.8 67.6 54.4 52.0 56.0 60.0 53.2 50.8 59.2 16.7 13.9 7.8 14.3 65.2 52.0 53.2 56.4 51.6 50.0 57.6 19.0 18.4 12.1 17.8 13.7 64.8 54.4 56.8 50.0 50.8 58.0 13.4 11.8 7.9 19.5 8.2 14.1 66.8 57.6 53.6 52.4 59.6 16.2 15.8 14.0 25.0 9.7 18.0 16.1 68.4 55.6 52.8 62.8 15.2 18.1 14.1 21.0 13.5 11.2 23.0 21.3 63.6 50.0 55.2 19.3 15.4 14.7 17.0 12.0 15.2 15.7 18.4 20.2 61.2 56.4 21.1 14.1 9.9 17.2 14.1 15.3 17.0 25.4 17.5 19.5 71.6 0 20 40 60 80 100 Figure 5: Multilingual reasoning consistency of mCoT. The triangle above the marked diagonal shows the consistency of the models on the correct answers; the triangle below the diagonal contains the consistency between the language pairs where the final answer is the same but incorrect. that mCoT has a good reasoning capability across different languages. 6 Conclusion We studied multilingual reasoning consistency across multiple languages for popular open-source LLMs such as Mistral and LLAMA2, which provides insights into the evaluation of LLMs. Our findings show that there is a substantial variation across the languages, with lesser resourced ones 12019 Language CoT Reasoning Question EN SW BN TE TH JA ZH RU ES FR DE Terry eats 2 yogurts a day, so in 30 days, he will eat 2 x 30 = 60 yogurts.\nThe yogurts are on sale at 4 for $5.00, so each yogurt costs $5.00 / 4 = $1.25.\nTherefore, Terry will spend $1.25 x 60 = $75.00 on yogurt over 30 days.\nThe answer is 75. Terry eats 2 yogurts a day. They are currently on sale at 4 yogurts for $5.00. How much does he spend on yogurt over 30 days?\nAnswer:\nLet's think step by step. !টির িদেন 2( ইেয়াগাট- খায়, তাই 30 িদেনর মেধ2, !স 2 x 30 = 60( ইেয়াগাট- খােব। ইেয়াগাট- 4 প2ােকর জন2 $5.00 এর জন2 িবি9 হে;, তাই <িত( ইেয়াগােট-র দাম $5.00 / 4 = $1.25। 30 িদেনর মেধ2, !টির ইেয়াগােট- র জন2 60 x $1.25 = $75.00 খরচ করেব। উ?র হল 75। ట"#$% ప()#*+ 2 -గ/0 ల2 )ంట4డ6, 8ాబట;0 30 #*>ల?@ అతను 2 * 30 = 60 -గ/0 ల2 )ంట4డ6. -గ/0 ధర ఒ8LMకMట; $5.00, 8ాబట;0 ట"#$% 30 #*>ల?@ -గ/0 ల 8Qసం 60 * $5.00 = $300.00 ఖరTU VేXాY డ6. సమ[\]నం 300. テリーは1 日に2 つのヨーグルトを食べるので、30 日後には2 x 30 = 60 個のヨーグルトを食べることになりま す。ヨーグルトは4 個で5 ドルのセール中なので、ヨーグルト1 個の価格は5 ドル/4 = 1.25 ドルとなります。 したがって、テリーは30 日分のヨーグルトに60 x 1.25 ドル= 75 ドルを費やすことになります。答えは75です。 特里每天吃2 份酸奶,因此30 天他将吃2 x 30 = 60 份酸奶。酸奶打折,4 份酸奶售价5 美元,因此1 份酸奶售价 5/4 = 1.25 美元。Terry 将在30 天内花费60 x 1.25 美元= 75.00 美元购买酸奶。答案是75。 เทอร์รี'กินโยเกิร์ต 2 ถ้วยต่อวัน ดังนั5นใน 30 วัน เขาจะกินโยเกิร์ต 2 x 30 = 60 ถ้วย โยเกิร์ตถ้วยละ 4 ถ้วยลดราคาอยู่ที' 5.00 ดอลลาร์สหรัฐฯ ดังนั5นโยเกิร์ตแต่ละถ้วยจึงมีราคา 5.00 ดอลลาร์สหรัฐฯ / 4 = 1.25 ดอลลาร์สหรัฐฯ ดังนั5น เทอร์รี'จะใช้จ่าย 60 x 1.25 ดอลลาร์ = 75.00 ดอลลาร์สําหรับโยเกิร์ต คําตอบคือ 75 Терри съедает 2 йогурта в день, значит, за 30 дней он съест 2 х 30 = 60 йогуртов. Йогурты продаются по цене 5 долларов за 4 штуки, поэтому каждый йогурт стоит 5 долларов/4 = 1,25 доллара. За 30 дней Терри потратит на йогурты 60 x 1,25 доллара = 75 долларов. Ответ: 75. Terry mange 2 yaourts chaque jour, donc sur 30 jours, il mangera 2 x 30 = 60 yaourts. Les yaourts sont vendus au prix de 4 pour 5,00 $, donc Terry devra acheter 60/4 = 15 ensembles de yaourts. Chaque ensemble coûte 5,00 $, donc Terry dépensera 15 x 5,00 $ = 75,00 $ en yaourt sur 30 jours. La réponse est 75. 
Terry isst 2 Joghurts pro Tag, also isst sie in 30 Tagen 2 x 30 = 60 Joghurts. Die Joghurts sind für 5,00 $ für 4 Stück im Angebot, also kostet 1 Joghurt 5,00 $/4 = 1,25 $. In 30 Tagenwird Terry 60 Joghurts zu je 1,25 $ ausgeben, also 60 x 1,25 $ = 75,00 $. Die Antwort ist 75. Terry hunywa maziwa 2 ya mtindi kwa siku, kwa hiyo katika siku 30 atakunywa2 x 30 = 60 maziwa ya mtindi. Kwa sasa kuna ofa ya maziwa 4 ya mtindi kwa $5.00, kwa hivyokila moja inagharimu$5.00 / 4 = $1.25. Kwa muda wa siku 30, Terry atatumia $1.25 x 60 = $75.00 kwa maziwa ya mtindi. Jibu ni 75. Terry come 2 yogures al día, por lo que en 30 días comerá 2 x 30 = 60 yogures. Actualmente hay una oferta de 4 yogures por $5, por lo que cada yogurcuesta $5/4 = $1,25. Durante 30 días, Terry gastará 60 x $1,25 = $75 en yogur. La respuesta es 75. Table 3: Case study in the test set of MSGM. Note that here we only show an EN question, the questions corresponding to each output are written in their respective languages. substantially underperforming. To address this issue, we constructed the first large-scale multilingual math reasoning instruction dataset mCoTMATH, with around 6.3 million samples in eleven diverse languages. We then introduced a multilingual reasoning instruction tuning framework to train our model mCoT on mCoT-MATH. Evaluation on two multilingual benchmark datasets shows that our 7B parameter model achieves impressive reasoning consistency across all languages, and comparable or superior performance to close- and open-source state-of-the-art models even of much larger sizes. 7 Limitations In this work we investigated multilingual reasoning consistency across 11 languages and eight opensource models in different sizes, but there are very many more languages and LLMs that can still bring substantial challenges and insights if considered. In addition, when considering more languages in the future, it will also be interesting to consider language families as factor. On the other hand, this work focused on reasoning consistency based on the final answer, while the consistency of intermediate reasoning steps is definitely an interesting direction. Reasoning solutions are not necessarily consistent across languages (even within the same language), since they might be logically different but still result in the same and correct answer. Therefore, automatically assessing intermediate steps is challenging and requires more explorations in the future. Regarding the dataset, while mCoTMATH boosts the reasoning capability across languages, screening high-quality machine-translated data could further improve the model. Finally, the full potential of our approach could be further explored by for example extending instruction tuning with reward learning to encourage models to generate more diverse solutions. 12020 Acknowledgments The anonymous reviewers of ACL Rolling Review provided us with useful comments which contributed to improving this paper and its presentation, so we’re grateful to them. We would also like to thank the SURF organisation and the Center for Information Technology of the University of Groningen for their support and for providing access to the high-performance computing clusters Snellius and Hábrók, respectively. References Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. 
Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, Guy GurAri, Steven Hand, Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jeffrey Hui, Jeremy Hurwitz, Michael Isard, Abe Ittycheriah, Matthew Jagielski, Wenhao Jia, Kathleen Kenealy, Maxim Krikun, Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, YaGuang Li, Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu, Frederick Liu, Marcello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam Moussalem, Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Parrish, Marie Pellat, Martin Polacek, Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby, Ambrose Slone, Daniel Smilkov, David R. So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang, Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, and Yonghui Wu. 2023. Palm 2 technical report. arXiv preprint arXiv:2305.10403. Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, and Tianhang Zhu. 2023. Qwen technical report. arXiv preprint arXiv:2309.16609. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc. Oana-Maria Camburu, Brendan Shillingford, Pasquale Minervini, Thomas Lukasiewicz, and Phil Blunsom. 2020. Make up your mind! adversarial generation of inconsistent natural language explanations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4157– 4165, Online. Association for Computational Linguistics. Linzheng Chai, Jian Yang, Tao Sun, Hongcheng Guo, Jiaheng Liu, Bing Wang, Xiannian Liang, Jiaqi Bai, Tongliang Li, Qiyao Peng, and Zhoujun Li. 2024. 
xcot: Cross-lingual instruction tuning for crosslingual chain-of-thought reasoning. arXiv preprint arXiv:2401.07037. Nuo Chen, Zinan Zheng, Ning Wu, Ming Gong, Yangqiu Song, Dongmei Zhang, and Jia Li. 2023a. Breaking language barriers in multilingual mathematical reasoning: Insights and observations. arXiv preprint arXiv:2310.20246. Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. 2023b. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. Transactions on Machine Learning Research. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, 12021 Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311. Zheng Chuanyang, Liu Zhengying, Xie Enze, Li Zhenguo, and Li Yu. 2023. Progressive-hint prompting improves reasoning in large language models. arXiv preprint arXiv:2304.09797. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168. Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. 2021. Measuring and improving consistency in pretrained language models. Transactions of the Association for Computational Linguistics, 9:1012–1031. Constanza Fierro and Anders Søgaard. 2022. Factual consistency of multilingual pretrained language models. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3046–3052, Dublin, Ireland. Association for Computational Linguistics. Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, and Tushar Khot. 2023a. Specializing smaller language models towards multi-step reasoning. In Proceedings of the 40th International Conference on Machine Learning, Proceedings of Machine Learning Research, pages 10421–10430. Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and Tushar Khot. 2023b. Complexity-based prompting for multi-step reasoning. In The Eleventh International Conference on Learning Representations. Cheng-Yu Hsieh, Chun-Liang Li, Chih-kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alex Ratner, Ranjay Krishna, Chen-Yu Lee, and Tomas Pfister. 2023. Distilling step-by-step! outperforming larger language models with less training data and smaller model sizes. In Findings of the Association for Computational Linguistics: ACL 2023, pages 8003–8017, Toronto, Canada. Association for Computational Linguistics. Jiaxin Huang, Shixiang Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. 2023. 
Large language models can self-improve. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 1051–1068, Singapore. Association for Computational Linguistics. Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7b. arXiv preprint arXiv:2310.06825. Song Jiang, Zahra Shakeri, Aaron Chan, Maziar Sanjabi, Hamed Firooz, Yinglong Xia, Bugra Akyildiz, Yizhou Sun, Jinchao Li, Qifan Wang, and Asli Celikyilmaz. 2024. Resprompt: Residual connection prompting advances multi-step reasoning in large language models. In The Twelfth International Conference on Learning Representations. Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, and Lu Wang. 2023. GRACE: Discriminator-guided chain-of-thought reasoning. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 15299–15328, Singapore. Association for Computational Linguistics. Takeshi Kojima, Shixiang (Shane) Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. In Advances in Neural Information Processing Systems, volume 35, pages 22199–22213. Curran Associates, Inc. Tao Li, Vivek Gupta, Maitrey Mehta, and Vivek Srikumar. 2019. A logic-driven framework for consistency of neural models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3924–3935, Hong Kong, China. Association for Computational Linguistics. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Conference on Learning Representations. Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. 2023. Wizardmath: Empowering mathematical reasoning for large language models via reinforced evol-instruct. arXiv preprint arXiv:2308.09583. Lucie Charlotte Magister, Jonathan Mallinson, Jakub Adamek, Eric Malmi, and Aliaksei Severyn. 2023. Teaching small language models to reason. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1773–1781, Toronto, Canada. Association for Computational Linguistics. Eric Mitchell, Joseph Noh, Siyan Li, Will Armstrong, Ananth Agarwal, Patrick Liu, Chelsea Finn, and Christopher Manning. 2022. Enhancing selfconsistency and performance of pre-trained language models through natural language inference. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 1754– 12022 1768, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Ansong Ni, Jeevana Priya Inala, Chenglong Wang, Alex Polozov, Christopher Meek, Dragomir Radev, and Jianfeng Gao. 2023. Learning math reasoning from self-sampled correct and partially-correct solutions. In The Eleventh International Conference on Learning Representations. Sean O’Brien and Mike Lewis. 2023. Contrastive decoding improves reasoning in large language models. arXiv preprint arXiv:2309.09117. OpenAI. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774. Arkil Patel, Satwik Bhattamishra, and Navin Goyal. 2021. 
Are NLP models really able to solve simple math word problems? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2080–2094, Online. Association for Computational Linguistics. Jirui Qi, Raquel Fernández, and Arianna Bisazza. 2023. Cross-lingual consistency of factual knowledge in multilingual language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 10650–10666, Singapore. Association for Computational Linguistics. Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. 2020. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, page 3505–3506, New York, NY, USA. Association for Computing Machinery. Abhilasha Ravichander, Eduard Hovy, Kaheer Suleman, Adam Trischler, and Jackie Chi Kit Cheung. 2020. On the systematicity of probing contextualized word representations: The case of hypernymy in BERT. In Proceedings of the Ninth Joint Conference on Lexical and Computational Semantics, pages 88–102, Barcelona, Spain (Online). Association for Computational Linguistics. Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics. Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, and Jason Wei. 2023. Language models are multilingual chain-of-thought reasoners. In The Eleventh International Conference on Learning Representations. Kumar Shridhar, Alessandro Stolfo, and Mrinmaya Sachan. 2023. Distilling reasoning capabilities into smaller language models. In Findings of the Association for Computational Linguistics: ACL 2023, pages 7059–7073, Toronto, Canada. Association for Computational Linguistics. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and finetuned chat models. arXiv preprint arXiv:2307.09288. Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, and Irina Higgins. 2022. Solving math word problems with process- and outcomebased feedback. arXiv preprint arXiv:2211.14275. 
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le, and Denny Zhou. 2022a. Chain of thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed Chi, Quoc V Le, and Denny Zhou. 2022b. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems, volume 35, pages 24824–24837. Curran Associates, Inc. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, 12023 Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. 2024. Metamath: Bootstrap your own mathematical questions for large language models. In The Twelfth International Conference on Learning Representations. Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. 2024. MAmmoTH: Building math generalist models through hybrid instruction tuning. In The Twelfth International Conference on Learning Representations. Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Goodman. 2022. STar: Bootstrapping reasoning with reasoning. In Advances in Neural Information Processing Systems. Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. 2023. Automatic chain of thought prompting in large language models. In The Eleventh International Conference on Learning Representations. Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc V Le, and Ed H. Chi. 2023. Least-to-most prompting enables complex reasoning in large language models. In The Eleventh International Conference on Learning Representations. Xinyu Zhu, Junjie Wang, Lin Zhang, Yuxiang Zhang, Yongfeng Huang, Ruyi Gan, Jiaxing Zhang, and Yujiu Yang. 2023. Solving math word problems via cooperative reasoning induced language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4471–4485, Toronto, Canada. Association for Computational Linguistics. 12024 A Appendix A.1 Prompt Examples Question: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now? Step-by-Step Answer: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11. Question: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room? Step-by-Step Answer: There are 4 days from monday to thursday. 5 computers were added each day. 
That means in total 4 * 5 = 20 computers were added. There were 9 computers in the beginning, so now there are 9 + 20 = 29 computers. The answer is 29. Question: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? Step-by-Step Answer: Leah had 32 chocolates and Leah’s sister had 42. That means there were originally 32 + 42 = 74 chocolates. 35 have been eaten. So in total they still have 74 - 35 = 39 chocolates. The answer is 39. Question: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now? Step-by-Step Answer: He has 5 toys. He got 2 from mom, so after that he has 5 + 2 = 7 toys. Then he got 2 more from dad, so in total he has 7 + 2 = 9 toys. The answer is 9. Question: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? Step-by-Step Answer: Michael started with 58 golf balls and lost 23, so he has 58 - 23 = 35. After he lost 2 more, he has 35 - 2 = 33 balls now. The answer is 33. Question: Olivia has $23. She bought five bagels for $3 each. How much money does she have left? Step-by-Step Answer: 5 bagels for $3 each should cost 5 * 3 = 15 dollars. Olivia had $23 in the beginning, so now she has 23 - 15 = 8 dollars left. The answer is 8. Question: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? Step-by-Step Answer: Jason started with 20 lollipops, but now he only has 12, so he gave Denny 20 - 12 = 8 lollipops. The answer is 8. Question: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot? Step-by-Step Answer: There are 3 cars in the beginning, 2 more arrive, so now there should be 3 + 2 = 5 cars. The answer is 5. Question: [Math Question] Step-by-Step Answer: (a) English CoT prompt. Frage: Roger hat 5 Tennisbälle. Er kauft noch 2 Dosen Tennisbälle. In jeder Dose sind 3 Tennisbälle. Wie viele Tennisbälle hat er jetzt? Schritt-für-Schritt-Antwort: Roger begann mit 5 Bällen. 2 Dosen von jeweils 3 Tennisbällen macht 6 Tennisbälle. 5 + 6 = 11. Die Antwort ist 11. Frage: Es waren neun Computer im Serverraum. Von Montag bis Donnerstag wurden jeden Tag noch fünf Computer installiert. Wie viele Computer sind jetzt im Serverraum? Schritt-für-Schritt-Antwort: Von Montag bis Donnerstag sind es 4 Tage. Jeden Tag kamen 5 neue Computer hinzu. Das macht insgesamt 4 x 5 = 20 Computer, die hinzugefügt wurden. Am Anfang waren es 9 Computer, also sind es jetzt 9 + 20 =29 Computer. Die Antwort lautet 29. Frage: Leah hat 32 Pralinen und ihre Schwester hat 42. Wenn sie 35 essen, wie viele sind dann insgesamt noch übrig? Schritt-für-Schritt-Antwort: Leah hat 32 Pralinen und Leahs Schwester 42. Das bedeutet, dass es ursprünglich 32 + 42 =74 Pralinen waren. 35 wurden gegessen. Also haben sie insgesamt noch 74 - 35 = 39 Pralinen übrig. Die Antwort lautet 39. Frage: Shawn hat fünf Spielzeuge. Zu Weihnachten hat er von seiner Mama und seinem Papa jeweils zwei Spielzeuge bekommen. Wie viele Spielzeuge hat er jetzt? Schritt-für-Schritt-Antwort: Er hat 5 Spielzeuge. Er hat 2 von seiner Mama bekommen, sodass er nun 5 + 2 = 7 Spielzeuge hat. Dann hat er noch 2 von seinem Papa bekommen, also hat er insgesamt 7 + 2 = 9 Spielzeuge. Die Antwort lautet 9. Frage: Michael hat 58 Golfbälle. Am Dienstag hat er 23 Golfbälle verloren. Am Mittwoch hat er 2 weitere verloren. 
Wie viele Golfbälle hat er Mittwoch am Ende des Tages? Schritt-für-Schritt-Antwort: Michael hatte anfangs 58 Golfbälle und hat 23 verloren, sodass er 58 - 23 = 35 hat. Nachdem er 2 weitere verloren hat, hat er jetzt 35 - 2 = 33 Bälle. Die Antwort lautet 33. Frage: Olivia hat 23 US-Dollar. Sie hat fünf Bagels für 3 US-Dollar pro Stück gekauft. Wie viel Geld hat sie übrig? Schritt-für-Schritt-Antwort: 5 Bagels für 3 US-Dollar pro Stück kosten 5 x 3 = 15 Dollar. Olivia hat anfangs 23 US-Dollar, also hat sie jetzt 23 - 15 = 8 Dollar übrig. Die Antwort lautet 8. Frage: Jason hatte 20 Lutscher. Er hat Denny einige Lutscher gegeben. Jetzt hat Jason 12 Lutscher. Wie viele Lutscher hat Jason Denny gegeben? Schritt-für-Schritt-Antwort: Jason hat mit 20 Lutschern angefangen, aber jetzt hat er nur 12, also hat er Denny 20 - 12 = 8 Lutscher abgegeben. Die Antwort lautet 8. Frage: Wenn 3 Autos auf dem Parkplatz stehen und 2 weitere Autos ankommen, wie viele Autos stehen dann auf dem Parkplatz? Schritt-für-Schritt-Antwort: Anfangs sind 3 Autos da, 2 weitere kommen an, also sind jetzt 3 + 2 = 5 Autos da. Die Antwort lautet 5. Frage: [Math Question] Schritt-für-Schritt-Antwort: (b) HT German CoT prompt. Frage: Roger hat 5 Tennisbälle. Er kauft noch zwei Dosen Tennisbälle. Jede Dose enthält 3 Tennisbälle. Wie viele Tennisbälle hat er jetzt? Schritt-für-Schritt-Antwort: Roger begann mit 5 Bällen. 2 Dosen mit je 3 Tennisbällen ergeben 6 Tennisbälle. 5 + 6 = 11. Die Antwort ist 11. Frage: Im Serverraum befanden sich neun Computer. Jeden Tag, von Montag bis Donnerstag, wurden fünf weitere Computer installiert. Wie viele Computer stehen jetzt im Serverraum? Schritt-für-Schritt-Antwort: Es gibt 4 Tage von Montag bis Donnerstag. Jeden Tag kamen 5 Computer hinzu. Das bedeutet, dass insgesamt 4 * 5 = 20 Computer hinzugefügt wurden. Am Anfang gab es 9 Computer, jetzt sind es 9 + 20 = 29 Computer. Die Antwort ist 29. Frage: Leah hatte 32 Pralinen und ihre Schwester 42. Wenn sie 35 gegessen hätten, wie viele Stücke wären dann insgesamt übrig geblieben? Schritt-für-Schritt-Antwort: Leah hatte 32 Pralinen und Leahs Schwester hatte 42. Das bedeutet, dass es ursprünglich 32 + 42 = 74 Pralinen waren. 35 wurden gegessen. Insgesamt haben sie also noch 74 – 35 = 39 Pralinen. Die Antwort ist 39. Frage: Shawn hat fünf Spielzeuge. Zu Weihnachten bekam er von seiner Mutter und seinem Vater jeweils zwei Spielzeuge. Wie viele Spielsachen hat er jetzt? Schritt-für-Schritt-Antwort: Er hat 5 Spielzeuge. Er hat 2 von Mama bekommen, also hat er danach 5 + 2 = 7 Spielzeuge. Dann bekam er noch zwei weitere von Papa, also hat er insgesamt 7 + 2 = 9 Spielzeuge. Die Antwort ist 9. Frage: Michael hatte 58 Golfbälle. Am Dienstag verlor er 23 Golfbälle. Am Mittwoch verlor er zwei weitere. Wie viele Golfbälle hatte er am Ende des Mittwochs? Schritt-für-Schritt-Antwort: Michael begann mit 58 Golfbällen und verlor 23, also hat er 58 – 23 = 35. Nachdem er 2 weitere verloren hat, hat er jetzt 35 – 2 = 33 Bälle. Die Antwort ist 33. Frage: Olivia hat 23 $. Sie kaufte fünf Bagels für jeweils 3 Dollar. Wie viel Geld bleibt ihr übrig? Schritt-für-Schritt-Antwort: 5 Bagels für jeweils 3 US-Dollar sollten 5 * 3 = 15 US-Dollar kosten. Olivia hatte am Anfang 23 Dollar, jetzt hat sie also 23 - 15 = 8 Dollar übrig. Die Antwort ist 8. Frage: Jason hatte 20 Lutscher. Er gab Denny ein paar Lutscher. Jetzt hat Jason 12 Lutscher. Wie viele Lutscher hat Jason Denny gegeben? 
Schritt-für-Schritt-Antwort: Jason begann mit 20 Lutschern, aber jetzt hat er nur noch 12, also gab er Denny 20 – 12 = 8 Lutscher. Die Antwort ist 8. Frage: Wenn 3 Autos auf dem Parkplatz stehen und 2 weitere Autos ankommen, wie viele Autos sind dann auf dem Parkplatz? Schritt-für-Schritt-Antwort: Am Anfang stehen 3 Autos, 2 weitere kommen hinzu, also sollten es jetzt 3 + 2 = 5 Autos sein. Die Antwort ist 5. Frage: [Math Question] Schritt-für-Schritt-Antwort:
(c) MT German CoT prompt.
Figure 6: CoT prompt template: mathematical questions are inserted in square brackets, and the model generates corresponding CoT reasoning.
A.2 Multilingual reasoning results on MGSM
Model | Prompt | EN | SW | BN | TE | TH | JA | ZH | RU | ES | FR | DE | AVG
Lang. Freq. (%) | - | 78.0 | <0.1 | <0.1 | <0.1 | <0.1 | 0.4 | 0.4 | 0.5 | 2.1 | 3.3 | 3.5 | -
COMET Score | - | - | 84.6 | 87.3 | 89.7 | 81.3 | 86.3 | 89.2 | 86.3 | 87.9 | 88.5 | 88.8 | -
7B Models
LLAMA2 | HT | 19.6 | 2.4 | 2.0 | 0.4 | 2.4 | 6.8 | 10.8 | 12.4 | 11.2 | 14.4 | 11.9 | 7.5
LLAMA2 | MT | - | 2.0 | 2.4 | 0.4 | 4.4 | 7.2 | 9.2 | 9.6 | 12.4 | 13.6 | 13.6 | 7.5
Qwen | HT | 51.2 | 5.6 | 7.6 | 2.0 | 14.0 | 18.8 | 45.2 | 36.4 | 36.4 | 33.6 | 32.0 | 22.1
Qwen | MT | - | 6.4 | 8.4 | 1.2 | 15.6 | 23.2 | 44.0 | 36.0 | 36.4 | 37.2 | 34.8 | 24.3
Mistral | HT | 45.6 | 7.2 | 12.0 | 2.8 | 14.8 | 20.0 | 33.6 | 30.8 | 35.6 | 32.4 | 27.6 | 21.7
Mistral | MT | - | 6.8 | 11.6 | 2.8 | 13.2 | 25.6 | 34.8 | 28.4 | 37.2 | 34.0 | 31.2 | 22.6
13-14B Models
LLAMA2-13B | HT | 34.4 | 2.4 | 4.0 | 2.4 | 7.2 | 13.2 | 20.8 | 20.4 | 26.4 | 21.6 | 22.4 | 14.1
LLAMA2-13B | MT | - | 4.0 | 2.8 | 2.4 | 7.2 | 13.2 | 17.6 | 19.2 | 26.8 | 21.6 | 20.8 | 13.6
Qwen-14B | HT | 63.6 | 14.4 | 0.0 | 7.6 | 44.8 | 36.8 | 64.4 | 54.8 | 59.2 | 54.8 | 52.0 | 38.9
Qwen-14B | MT | - | 10.8 | 0.0 | 10.8 | 50.4 | 43.2 | 65.2 | 51.6 | 62.8 | 53.2 | 54.4 | 40.2
>65B Models
LLAMA2-70B | HT | 62.4 | 9.6 | 16.0 | 3.6 | 18.8 | 40.4 | 46.4 | 50.0 | 51.6 | 49.2 | 49.6 | 33.5
LLAMA2-70B | MT | - | 10.8 | 16.0 | 3.6 | 18.8 | 40.8 | 43.6 | 47.2 | 54.0 | 46.0 | 54.4 | 33.5
Qwen-72B | HT | 80.8 | 31.6 | 42.8 | 8.0 | 70.0 | 62.0 | 74.0 | 74.8 | 76.8 | 69.6 | 72.0 | 58.2
Qwen-72B | MT | - | 32.8 | 42.8 | 2.0 | 64.8 | 63.6 | 72.0 | 72.8 | 75.2 | 71.2 | 72.4 | 57.0
Mistral-8×7B | HT | 62.0 | 19.2 | 31.2 | 10.0 | 37.2 | 37.2 | 51.6 | 46.8 | 58.8 | 48.4 | 49.2 | 39.0
Mistral-8×7B | MT | - | 18.0 | 26.0 | 6.4 | 28.0 | 40.4 | 45.2 | 48.4 | 56.8 | 52.0 | 54.4 | 37.6
PaLM-540B† | HT | 62.4 | 35.2 | 46.0 | 45.6 | 52.8 | 40.0 | 46.8 | 48.4 | 56.8 | 46.4 | 49.2 | 48.1
Table 4: Accuracy (%) on MGSM of different models with the few-shot method. Notes: (i) Lang. Freq. (%) is the language frequency in PaLM training data; (ii) we report the COMET (Rei et al., 2020) score between the HT and MT prompts; (iii) average (AVG) scores do not include EN results; (iv) †: results from Shi et al. (2023).
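To make the evaluation protocol behind Figure 6 and Table 4 concrete, the sketch below assembles a few-shot CoT prompt from the exemplars above and scores a model completion by reading off the number that follows the answer marker ("The answer is" / "Die Antwort ist/lautet"). It is a minimal illustration, not released code from the paper: the `generate` callable is a placeholder for whichever model is being evaluated, and the exact answer-normalization rules used for Table 4 may differ.

```python
import re
from typing import Callable, Iterable, Optional, Tuple

# Two of the eight exemplars from the English CoT prompt (Figure 6a), shortened for brevity.
EXEMPLARS = [
    ("Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. "
     "How many lollipops did Jason give to Denny?",
     "Jason started with 20 lollipops, but now he only has 12, so he gave Denny "
     "20 - 12 = 8 lollipops. The answer is 8."),
    ("If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?",
     "There are 3 cars in the beginning, 2 more arrive, so now there should be 3 + 2 = 5 cars. "
     "The answer is 5."),
]

def build_prompt(question: str) -> str:
    """Concatenate the exemplars and append the test question, mirroring Figure 6."""
    shots = "\n\n".join(f"Question: {q}\nStep-by-Step Answer: {a}" for q, a in EXEMPLARS)
    return f"{shots}\n\nQuestion: {question}\nStep-by-Step Answer:"

# Accept English and German answer markers ("The answer is", "Die Antwort ist/lautet").
ANSWER_RE = re.compile(r"(?:answer is|antwort ist|antwort lautet)\s*(-?\d[\d,.]*)", re.IGNORECASE)

def extract_number(completion: str) -> Optional[str]:
    """Return the last number announced after an answer marker, stripped of separators."""
    matches = ANSWER_RE.findall(completion)
    return matches[-1].rstrip(".").replace(",", "") if matches else None

def accuracy(generate: Callable[[str], str], items: Iterable[Tuple[str, str]]) -> float:
    """items: (question, gold numeric answer) pairs; generate: prompt -> model completion."""
    hits = total = 0
    for question, gold in items:
        hits += int(extract_number(generate(build_prompt(question))) == gold)
        total += 1
    return hits / max(total, 1)

if __name__ == "__main__":
    # Dummy generator so the sketch runs without a real model.
    dummy = lambda prompt: "There are 15 - 6 = 9 apples left. The answer is 9."
    print(accuracy(dummy, [("A canteen had 15 apples and sold 6. How many are left?", "9")]))
```

Swapping in the HT or MT German exemplars from Figure 6 (b, c) would only change the EXEMPLARS list and the answer marker, which is presumably how the HT and MT rows of Table 4 differ.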
2024
649
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1173–1203 August 11-16, 2024 ©2024 Association for Computational Linguistics
Navigate through Enigmatic Labyrinth
A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future
Zheng Chu1*, Jingchang Chen1*, Qianglong Chen2*, Weijiang Yu2, Tao He1, Haotian Wang1, Weihua Peng2, Ming Liu1,3†, Bing Qin1,3, Ting Liu1
1Harbin Institute of Technology, Harbin, China 2Huawei Inc., Shenzhen, China 3Peng Cheng Laboratory, Shenzhen, China
{zchu,jcchen,mliu}@ir.hit.edu.cn, [email protected]
* Equal Contribution. † Corresponding Author.
Abstract
Reasoning, a fundamental cognitive process integral to human intelligence, has garnered substantial interest within artificial intelligence. Notably, recent studies have revealed that chain-of-thought prompting significantly enhances LLMs’ reasoning capabilities, which has attracted widespread attention from both academia and industry. In this paper, we systematically investigate relevant research, summarizing advanced methods through a meticulous taxonomy that offers novel perspectives. Moreover, we delve into the current frontiers and delineate the challenges and future directions, thereby shedding light on future research. Furthermore, we engage in a discussion about open questions. We hope this paper serves as an introduction for beginners and fosters future research. Resources have been made publicly available at https://github.com/zchuz/CoT-Reasoning-Survey.
1 Introduction
In the realm of human cognition, reasoning stands as the linchpin, essential in the understanding of the world and the formation of our decisions. As the scale of pre-training continues to expand (Brown et al., 2020; OpenAI, 2023; Touvron et al., 2023a,b), large language models (LLMs) exhibit growing capabilities in numerous downstream tasks (Wei et al., 2022a; Schaeffer et al., 2023; Zhou et al., 2023c). Recently, researchers have discovered that LLMs emerge with the capability for step-by-step reasoning through in-context learning, a phenomenon referred to as chain-of-thought (CoT) reasoning. It is broadly observed that CoT prompting significantly boosts the reasoning abilities of LLMs, especially in complex tasks (Wei et al., 2022b; Cobbe et al., 2021; Geva et al., 2021).
Figure 1: The model tackles complex problems step-by-step under the guidance of chain-of-thought prompting. (Illustrated with the question “Did Aristotle use a laptop?”: the direct output answers “Yes, Aristotle’s ideas are known for their emphasis on empirical observation and practical wisdom, he …”, whereas the chain-of-thought output reasons “1. Aristotle (384–322 BC) lives in Ancient Greek. 2. The first laptop computer hit shelves in 1981. Laptop computer didn’t exist in Aristotle’s time, so the answer is No.”)
Figure 1 illustrates an example of chain-of-thought reasoning. Rather than directly providing the answer, chain-of-thought reasoning offers a step-by-step reasoning trajectory. Specifically, it decomposes intricate problems into manageable steps (thoughts), simplifying the overall reasoning process, and creates a linkage (chain) among the reasoning steps to ensure no important conditions are overlooked. Additionally, chain-of-thought reasoning offers an observable reasoning process, allowing users to comprehend the model’s decision-making trajectory and increasing the trustworthiness and interpretability of the final answer.
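To make the contrast in Figure 1 concrete, here is a minimal sketch of the two prompting styles: a direct (input-output) prompt maps the question straight to an answer, while a chain-of-thought prompt includes a worked rationale in the demonstration so the model emits intermediate steps before its final answer. The demonstration text is taken from Figure 1; the `call_model` hook and the prompt layout are illustrative assumptions rather than the exact format used in the surveyed papers.

```python
from typing import Callable

# Demonstration taken from Figure 1; a real prompt would typically contain several such pairs.
DEMO_Q = "Did Aristotle use a laptop?"
DEMO_DIRECT_A = "No."
DEMO_COT_A = ("1. Aristotle (384–322 BC) lives in Ancient Greek. "
              "2. The first laptop computer hit shelves in 1981. "
              "Laptop computer didn't exist in Aristotle's time, so the answer is No.")

def direct_prompt(question: str) -> str:
    """Standard (input-output) prompting: the demonstration maps a question straight to an answer."""
    return f"Q: {DEMO_Q}\nA: {DEMO_DIRECT_A}\n\nQ: {question}\nA:"

def cot_prompt(question: str) -> str:
    """Chain-of-thought prompting: the demonstration answer contains a step-by-step rationale."""
    return f"Q: {DEMO_Q}\nA: {DEMO_COT_A}\n\nQ: {question}\nA:"

def answer(call_model: Callable[[str], str], question: str, use_cot: bool = True) -> str:
    """`call_model` is a placeholder hook for whatever LLM is being prompted."""
    prompt = cot_prompt(question) if use_cot else direct_prompt(question)
    return call_model(prompt)

if __name__ == "__main__":
    echo = lambda p: f"[completion for a prompt of {len(p)} characters]"
    print(answer(echo, "Did Julius Caesar ever send an email?", use_cot=True))
```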
Benefiting from the remarkable performance of CoT prompting, it has attracted widespread attention across both academia and industry, evolving into a distinct research branch within the field of prompt engineering (Liu et al., 2023d; Qiao et al., 2023). Moreover, it has emerged as a crucial component in the landscape of AI autonomous agents (Wang et al., 2023h; Xi et al., 2023). However, these studies still lack a systematic review and analysis. To fill this gap, we propose this work to conduct a comprehensive and detailed analysis of CoT reasoning. Specifically, this paper delves into the broader scope of chain-of-thought reasoning, which we refer to as generalized chain-of-thought (XoT). The core philosophy of XoT reasoning is the gradual unraveling of complex problems via a step-by-step reasoning approach.
Our contributions can be summarized as follows: (1) Comprehensive Survey: This is the first comprehensive survey dedicated to XoT reasoning; (2) Meticulous Taxonomy: We introduce a meticulous taxonomy (shown in Figure 2); (3) Frontier and Future: We discuss new frontiers, outline their challenges, and shed light on future research; (4) Resources: We make the resources publicly available to facilitate the research community.
Survey Organization We first give the background and preliminaries (§2); then present benchmarks (§3) and advanced methods (§4) from different perspectives. Furthermore, we discuss frontier research (§5) and outline challenges as well as future directions (§6). Finally, we give a further discussion about open questions (§A.2).
2 Background and Preliminary
2.1 Background
Over the past few years, as the scale of pre-training has continuously increased (Brown et al., 2020; Scao et al., 2022; Touvron et al., 2023b; Zhao et al., 2023b), language models have emerged with numerous new capabilities, such as in-context learning (Wei et al., 2022a; Brown et al., 2020) and chain-of-thought reasoning (Wei et al., 2022b). Accompanying this trend, pre-training then prompting has gradually replaced pre-training then fine-tuning as the new paradigm in natural language processing (Qiu et al., 2020; Zhao et al., 2023b).
2.2 Preliminary
In this section, we provide the preliminaries for standard prompting and chain-of-thought reasoning. Referring to Qiao et al. (2023), we define the notation as follows: question Q, prompt \mathcal{T}, probabilistic language model p_{LM}, and prediction A. First, we consider the few-shot standard prompting scenario, where the prompt \mathcal{T}_{SP} includes an instruction I and few-shot demonstrations (several question-answer pairs). The model takes the question and prompt as inputs and produces the answer prediction A as its output, as shown in Eqs. (1) and (2).

\mathcal{T}_{SP} = \{ I, (x_1, y_1), \cdots, (x_n, y_n) \}  (1)

p(A \mid \mathcal{T}, Q) = \prod_{i=1}^{|A|} p_{LM}(a_i \mid \mathcal{T}, Q, a_{<i})  (2)

Next, we consider chain-of-thought prompting under the few-shot setting, wherein the prompt \mathcal{T}_{CoT} includes the instruction, questions, answers, and rationales e_i. In chain-of-thought reasoning, the model no longer directly generates answers. Instead, it generates step-by-step reasoning trajectories R before giving the answer A, as shown in Eqs. (3)-(6).

\mathcal{T}_{CoT} = \{ I, (x_1, e_1, y_1), \cdots, (x_n, e_n, y_n) \}  (3)

p(A, R \mid \mathcal{T}, Q) = p(A \mid \mathcal{T}, Q, R) \cdot p(R \mid \mathcal{T}, Q)  (4)

p(R \mid \mathcal{T}, Q) = \prod_{i=1}^{|R|} p_{LM}(r_i \mid \mathcal{T}, Q, r_{<i})  (5)

p(A \mid \mathcal{T}, Q, R) = \prod_{j=1}^{|A|} p_{LM}(a_j \mid \mathcal{T}, Q, R, a_{<j})  (6)

2.3 Advantages of CoT Reasoning
As a novel reasoning paradigm, chain-of-thought offers various advantages. (1) Boosted Reasoning.
Chain-of-thought reasoning breaks down complex problems into manageable steps and establishes connections among these steps, thereby facilitating reasoning. (2) Offering Interpretability. Chain-ofthought reasoning provides observable reasoning traces, allowing the user to understand the model’s decision, making the reasoning process transparent and trustworthy. (3) Advance Collaboration. Fine-grained reasoning traces facilitate user-system interaction, allowing for altering the model’s execution trajectory, thereby fostering the development of autonomous agents powered by LLMs. 3 Benchmarks In this section, we briefly outline the benchmarks for evaluating reasoning capabilities, including mathematical, commonsense, symbolic, logical, and multi-modal reasoning. The overview of benchmarks is shown in Table 1. For more details about benchmarks, please refer to Appendix B. Mathematical Reasoning Mathematical reasoning forms the foundation of human intelligence, playing a crucial role in problem-solving, decisionmaking, and world comprehension. It is commonly used to assess the general reasoning ability of LLMs (Patel et al., 2021; Cobbe et al., 2021; Hendrycks et al., 2021b; Mishra et al., 2022a). Commonsense Reasoning Commonsense reasoning is essential for the interaction in daily life and the perception of the world, which assesses the world comprehension capacity of language models (Talmor et al., 2019, 2021; Geva et al., 2021). 1174 A survey of X-of-Thought Advanced Methods (§4) XoT Prompt Construction (§4.1) Manual Prompting E.g., Few-shot CoT (Wei et al., 2022b) , PAL (Gao et al., 2023) Automatic Prompting E.g., Zero-shot CoT (Kojima et al., 2022), Auto-CoT (Zhang et al., 2023h) Semi-auto Prompting E.g., AutoMate CoT (Shum et al., 2023), BoostedPrompt (Pitis et al., 2023) XoT Topological Variants (§4.2) Chain Structure E.g., PoT (Chen et al., 2022a), LINC (Olausson et al., 2023) Tree Structure E.g., ToT (Yao et al., 2023b), SoT (Ning et al., 2023) Graph Structure E.g., GoT (Besta et al., 2023), ResPrompt (Jiang et al., 2023a) XoT Enhancement Methods (§4.3) Verify and Refine E.g., VerifyCoT (Ling et al., 2023), Self-Refine (Madaan et al., 2023) Question Decomposition E.g., L2M (Zhou et al., 2023b) , Successive Prompt (Dua et al., 2022) Knowledge Enhancement E.g., CoK (Wang et al., 2023c), KD-CoT (Wang et al., 2023e) Self-Ensemble E.g., Self-Consistency (Wang et al., 2023m) , Complex CoT (Fu et al., 2023a) Efficient Reasoning E.g., SoT (Ning et al., 2023), ActivePrompting (Diao et al., 2023) Frontier (§5) Tool Use (§5.1) E.g., TAML (Parisi et al., 2022), Toolformer (Schick et al., 2023), ReACT (Yao et al., 2023c) Planning (§5.2) E.g., LLM+P (Liu et al., 2023a), RAP (Hao et al., 2023a), ToolChain* (Zhuang et al., 2024) Distillation (§5.3) E.g., STaR (Zelikman et al., 2022), SCoTD (Li et al., 2023c), SCOTT (Wang et al., 2023j) Future Directions (§6) Multi-modal (§6.1) E.g., MMCoT (Zhang et al., 2023i), T-SciQ (Wang et al., 2023g), SocreticQues (Qi et al., 2023) Faithfulness (§6.2) E.g., Rethinking and Retrievaling (He et al., 2023a), Measure Faithful (Lanham et al., 2023) Theory (§6.3) E.g., Wang et al. (2023f), Tutunov et al. (2023), Wang et al. (2023a), Feng et al. (2023), Wu et al. (2023b) Figure 2: Taxonomy of Advanced Methods, Frontiers and Future Directions (Full version in Figure 8). 
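Relating this back to the preliminaries in §2.2, the factorization p(A, R | T, Q) = p(R | T, Q) · p(A | T, Q, R) can be read as "generate a rationale, then generate an answer conditioned on it." The sketch below makes that reading explicit with two calls to a generic sampler; in practice a single autoregressive pass usually emits R followed by A, so this is an expository simplification, and the `sample` hook is an assumption rather than any cited implementation.

```python
from typing import Callable, Tuple

def cot_decode(sample: Callable[[str], str], prompt_T: str, question_Q: str) -> Tuple[str, str]:
    """Sample a rationale R ~ p(R | T, Q), then an answer A ~ p(A | T, Q, R)."""
    # Stage 1: the rationale is generated conditioned on the CoT prompt and the question.
    stem = f"{prompt_T}\n\nQuestion: {question_Q}\nStep-by-Step Answer:"
    rationale_R = sample(stem)
    # Stage 2: the answer is generated conditioned on the prompt, the question, and the rationale.
    answer_A = sample(f"{stem} {rationale_R}\nTherefore, the final answer is")
    return rationale_R, answer_A

if __name__ == "__main__":
    # Dummy sampler so the sketch runs end-to-end without a real model.
    canned = iter(["There are 3 cars and 2 more arrive, so 3 + 2 = 5.", "5"])
    r, a = cot_decode(lambda _prompt: next(canned), "<few-shot CoT exemplars>", "3 cars, 2 arrive, how many?")
    print(r, "->", a)
```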
Symbolic Reasoning Symbolic reasoning disentangles semantics and serves as a testbed for language models’ competence in simulating atomic operations (Wei et al., 2022b; Srivastava et al., 2022; Suzgun et al., 2023). Logical Reasoning Logical reasoning is of paramount importance as it serves as the bedrock for rational thinking, robust problem-solving and interpretable decision-making (Liu et al., 2020; Yu et al., 2020; Tafjord et al., 2021; Han et al., 2022). Multi-modal Reasoning Multimodal reasoning seamlessly integrates textual thought with sensory experiences from the natural world, such as visual scenes and auditory sounds, to create a richer, more comprehensive understanding of information (Zellers et al., 2019; Park et al., 2020; Xiao et al., 2021; Lu et al., 2022; Chen et al., 2023c).
4 Advanced Methods
This section discusses advanced XoT methods from three viewpoints: prompt construction (§4.1), topological variants (§4.2), and enhancement methods (§4.3). The taxonomy is shown in Figure 2.
4.1 XoT Prompt Construction
Based on the human effort required to construct chain-of-thought prompts, we divide the construction approaches into three categories: 1) manual XoT, 2) automatic XoT, and 3) semi-automatic XoT.
4.1.1 Manual Prompting
Wei et al. (2022b) first proposes chain-of-thought prompting (Few-shot CoT) by manually annotating natural-language rationales to guide models in stepwise reasoning. Moreover, Fu et al. (2023a) discovers that using complex reasoning chains as demonstrations can further improve reasoning performance. Yet natural-language rationales are prone to inconsistent reasoning. To mitigate intermediate errors in reasoning, PAL (Gao et al., 2023), PoT (Chen et al., 2022a), MathPrompter (Imani et al., 2023) and NLEP (Zhang et al., 2023d) express rationales in programming-language form, transforming problem-solving into program generation and obtaining a deterministic answer through an external program executor. Although manual XoT demonstrates better performance, the annotation of rationales incurs a significant increase in cost and introduces dilemmas in demonstration selection.
4.1.2 Automatic Prompting
Some work designs specific instructions to elicit CoT reasoning in the zero-shot setting, such as appending “Let’s think step by step” after the question (Kojima et al., 2022). There are also other types of instructions, including writing programs to solve problems (Chen et al., 2022a), drafting plans before reasoning (Wang et al., 2023i), generating meta instructions based on task information (Crispino et al., 2023), and role playing (Kong et al., 2023a).
Figure 3: Topological variants emerging in the evolution of XoT: (a) standard I-O prompting, (b) parallel-constrained tree structure variants (SoT), (c) chain structure variants with distinct rationale descriptions (CoT: natural language rationale; PoT: programming language rationale; CoS: symbol sequence rationale), (d) chain structure variants with self-ensemble (SC), (e) standard tree structure variants (ToT), and (f) standard graph structure variants (GoT).
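As a concrete illustration of the program-form rationales used by PAL and PoT (§4.1.1), the sketch below executes a rationale written as Python assignments and reads the result from an `answer` variable, which yields a deterministic final answer. The rationale here is hard-coded (for the bagel question from the GSM8K-style exemplars) rather than model-generated, the variable names are illustrative, and a real system would sandbox the execution step.

```python
# A program-form rationale for: "Olivia has $23. She bought five bagels for $3 each.
# How much money does she have left?" In PAL/PoT-style prompting the LLM would generate
# this code as its rationale; here it is hard-coded to keep the sketch self-contained.
PROGRAM_RATIONALE = """
money_initial = 23
bagels = 5
bagel_cost = 3
money_spent = bagels * bagel_cost
answer = money_initial - money_spent
"""

def execute_rationale(program: str) -> object:
    """Run the generated program in an isolated namespace and read off `answer`.
    A production system would sandbox this step rather than calling exec() directly."""
    namespace: dict = {}
    exec(program, {"__builtins__": {}}, namespace)  # deterministic answer via an external executor
    return namespace.get("answer")

if __name__ == "__main__":
    print(execute_rationale(PROGRAM_RATIONALE))  # -> 8
```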
However, due to the lack of guidance from clearly defined demonstrations, instruction-based methods appear extremely unstable. Another route of work conducts few-shot reasoning based on automatically generated rationales (usually by zeroshot CoT), which improves the stability of reasoning. These methods focus on selecting appropriate demonstrations. Zhang et al. (2023h) chooses diverse rationales through clustering, Zou et al. (2023) constructs demonstrations based on the question pattern, improving the generalization, Wan et al. (2023) employs answer entropy as a metric for selection, and Xu et al. (2023) uses Gibbs sampling to iteratively select demonstrations. 4.1.3 Semi-automatic Prompting Building upon automatic XoT based on few-shot learning, semi-automatic approaches incorporate a small number of human-annotated rationales to obtain supervised signals. They focus on bootstrapping to acquire high-quality rationales and selecting appropriate demonstrations to facilitate reasoning. Shao et al. (2023b) generates high-quality rationales through alternating forward and backward synthetic processes, and Pitis et al. (2023) iteratively expands the examples when encountering challenging questions, which mitigates the issue of limited human supervision. On the other hand, some studies optimize demonstration selection. Shum et al. (2023) and Lu et al. (2023b) utilize policy gradient optimization to learn demonstration selection strategy, while Ye and Durrett (2023) searches the development set and selects proper demonstration using two proxy metrics. 4.1.4 Pros and Cons of Three Approaches Manual prompting relies on high-quality rationale annotations, which result in better performance. However, it encounters drawbacks such as high labor costs and challenges in domain transfer. In contrast, automatic prompting incurs no labor costs and facilitates free domain transfer. However, it is plagued by errors and instability due to the absence of supervised signals. Semi-automatic prompting strikes a dedicated balance, achieving a trade-off between performance and costs, making it more suitable for downstream applications. 4.2 XoT Topological Variants The evolution of XoT has led to the development of multiple topological variants1. In this section, we will delve into topological variants of XoT: chain structure, tree structure and graph structure. Chain Structure The description format of rationales significantly influences reasoning execution. PAL (Gao et al., 2023) and PoT (Chen et al., 2022a) use programming languages to depict the reasoning process, transforming problem-solving into code generation. Similarly, formal logic description languages are also used to depict logical reasoning (Olausson et al., 2023; Pan et al., 2023; Ye et al., 2023a). The aforementioned methods decouple the thought generation from execution, thereby eliminating inconsistency reasoning errors. Additionally, algorithmic descriptions (Sel et al., 2023) can offer a high-level reasoning framework 1We consider XoT with chain structure and natural language rationales as vanilla CoT (the most primitive one). 1176 instead of details, endowing the model with the ability for global thinking. Tree Structure Chain structure inherently limits the scope of exploration. Through the incorporation of tree structures and search algorithms, models gain the capability to widely explore and backtrack during reasoning (Long, 2023; Yao et al., 2023b), as shown in Figure 3(e). Chen et al. 
(2024) iteratively explores and evaluates multiple tree-ofthoughts to further enhance reasoning. Benefiting from the exploration, tree variants have gained preliminary global planning capabilities towards the global optimum. Meanwhile, Mo and Xin (2023); Cao et al. (2023) introduce uncertainty measurement based on Monte Carlo dropout and generation likelihood, respectively, thereby offering a more accurate evaluation of intermediate reasoning processes. Yu et al. (2024) uses a bottom-up approach by building an analogy sub-problems tree. In addition, Ning et al. (2023) initially delivers reasoning drafts, accelerating reasoning by solving tree structure sub-problems in parallel. However, tree-based methods are restricted by demands of explicit question decomposition and state transition, which leads to limitations in task generalization. Graph Structure Graph structures introduce loops and N-to-1 connections, enabling improved modeling of sub-problem aggregation and selfverification (Besta et al., 2023; Lei et al., 2023a), as illustrated in Figure 3(f). Graph structures outperform tree-based methods in handling complex problems. However, they rely on specially designed state decomposition, leading to poorer generalization. To address this, Jiang et al. (2023a) establishes an implicit graph upon the reasoning process through prompts, avoiding the constraints of explicit topological structures, thereby generalizing to various multi-step reasoning tasks. The complex topological structure introduces a fine control flow, which facilitates LLMs in tackling harder problems. However, this complexity also limits the application of these methods in general reasoning, posing a significant challenge that needs to be addressed in future research. 4.3 XoT Enhancement Methods This section introduces five enhanced XoT reasoning approaches, including verify and refine (§4.3.1), question decomposition (§4.3.2), knowledge enhancement (§4.3.3), self-ensemble (§4.3.4) and efficient reasoning (§4.3.5). Feedback Verify Critic Model Refine Cascading Errors CoT with Verification and Refinement✨ Vanilla CoT Verify & Refine Figure 4: Verification and refinement rectify intermediate errors, which reduce cascading errors in reasoning. 4.3.1 Verify and Refine LLMs tend to hallucinate, which manifests as factual and faithful errors in reasoning (Huang et al., 2023b). Incorporating verification and refinement can be an effective strategy for mitigating the phenomena. In this section, we primarily focus on mitigating faithful errors, with a separate discussion of factual errors in the following knowledge enhancement section (§4.3.3). Reasoning can be refined based on critical feedback provided by LLMs. Paul et al. (2024a) trains a small critic model to provide structured feedback, but the quality of the feedback is limited due to the model size. Madaan et al. (2023) employs feedback from itself for iterative self-refinement, Li et al. (2023g) uses finer-grained feedback at the step level, and Shinn et al. (2023) further expands this method by incorporating long and short-term memory to provide more concise feedback. However, recent research suggests that LLMs may not address issues beyond their own capabilities (Kadavath et al., 2022; Yin et al., 2023), which raises doubt on the effectiveness of self-feedback (Huang et al., 2024a). To remedy this, some work incorporates external feedback (Gou et al., 2024a; Nathani et al., 2023) or performs secondary verification on the refined reasoning (Shridhar et al., 2023). 
On the other hand, logical reasoning structures are also well-suited for verification. Ling et al. (2023) devises a deductive reasoning form named Natural Program, which guarantees that the conclusion is derived from the designated premises. Wu et al. (2024) applies a deductive filter to verify the entailment relationship between question and reasoning chains. Some studies perform step-wise verification during the beam search decoding stage. Xie et al. (2023) uses the log-probabilities of deduc1177 Too difficult to answer hard questions directly Sub Ques 1 Sub Ques 2 CoT with Question Decomposition✨ Question Decomposition Sub-Question Solving Vanilla CoT Sub Ques 3 Solve complex questions progressively by solving sub-questions Figure 5: Question decomposition solves complex questions progressively by solving simple sub-questions. tive reasoning as a search criterion, while Zhu et al. (2024a) trains a deductive discriminator for verification. Besides, backward (abductive) reasoning excels in detecting inconsistencies in reasoning. It reconstructs conditions or variables in the question based on the reasoning chain to discover inconsistencies, thereby refining the reasoning (Xue et al., 2023; Weng et al., 2022; Jiang et al., 2023b). Reasoning with LLMs is prone to hallucinations, and feedback from intermediate steps plays a crucial role in refining the reasoning. However, the current acquisition of feedback signals still has many shortcomings, which necessitates further research. 4.3.2 Question Decomposition The philosophy of XoT is to solve questions stepby-step. However, vanilla CoT does not explicitly decompose questions, making it challenging to answer complex questions. To address this, certain approaches address intricate problems by progressively tackling straightforward sub-problems. L2M (Zhou et al., 2023b) initially breaks down the question into sub-questions in a top-down fashion. It then solves one sub-question at a time and leverages its solution to facilitate subsequent subquestions. Dua et al. (2022) takes a similar approach to L2M, but it uses solutions from previous sub-questions to iteratively decompose questions. Khot et al. (2023) designs a modular task-sharing library that tailors more effective solutions to different classes of sub-questions. Huang et al. (2024b) breaks down the problem into a directed acyclic graph represented by QDMR, and then performs step-wise reasoning based on the graph dependencies. In multi-hop reasoning, iterative decomposition has become a common practice (Wang et al., 2022; Press et al., 2023; Trivedi et al., 2023). AdFactual Errors CoT with Knowledge ✨ External Knowledge Wikipedia Knowledge Graph News, Books, etc Knowledge Acquisition Module Retriever Search Engine Parametric, etc Query Knowledge Knowledge Correct Factual Errors Vanilla CoT Figure 6: Incorporating knowledge (either internal or external) helps mitigate factual errors in reasoning. ditionally, some methods obtain a dedicated decomposer through supervised training rather than relying on the LLM itself (Li et al., 2023f; Junbing et al., 2023). However, when dealing with tabular reasoning, answering sub-questions may also pose a challenge, particularly when handling large tables. To tackle this issue, certain approaches involve decomposing both the questions and tables simultaneously (Ye et al., 2023b; Cheng et al., 2023; Nahid and Rafiei, 2024). Bottom-up aggregation is also a viable solution, with a smaller exploration space. Qi et al. 
(2023) employs Socratic questioning for recursive selfquesting to solve complex questions, while Zhang et al. (2024), in a similar fashion, breaks down the conditions of complex problems into small components and resolves them bottom-up. It should be noted that both decomposition and aggregation are highly dependent on the proper problem division, and reversely, a misaligned division may yield counterproductive results. 4.3.3 Knowledge Enhancement When dealing with knowledge-sensitive tasks, LLMs often make factual errors. Introducing external knowledge or mining the model’s internal knowledge can help alleviate this issue. Some methods explicitly utilize the model’s intrinsic knowledge. For example, Dhuliawala et al. (2023); Ji et al. (2023); Zheng et al. (2024) prompt models to output its parametric knowledge, and then reason based on it. Additionally, Zhang et al. (2023f) prompts the model to perform inductive reasoning on its internal knowledge, deriving more general conclusions. Furthermore, Liu et al. (2023c) incorporates reinforcement learning to optimize introspective knowledge-grounded reasoning. Mean1178 Answer 2 (correct) Answer 3 (wrong) Answer 1 (correct) A2 A1 A3 Vanilla CoT Inconsistent Answers CoT with Self-Ensemble ✨ Multiple Sampling Answer Ranker Vote 2/3 Final Answer ? rank1 Figure 7: Self-ensemble reduces inconsistency by selecting final answers from multiple samplings. while, Li and Qiu (2023) leverages model’s reasoning traces to construct a memory base, selecting relevant demonstrations whenever needed. External knowledge is often more reliable than parametric knowledge. Li et al. (2023f); Wang et al. (2023e) generates queries based on the question, utilizing a knowledge base as the external knowledge. Building upon this, Wang et al. (2023c) introduces a verification step for the retrieved knowledge, further ensuring knowledge accuracy. However, when confronted with multi-hop reasoning, direct retrieval using the question can be insufficient. Therefore, Press et al. (2023); Trivedi et al. (2023); Shao et al. (2023a); Yoran et al. (2023) decompose the question and iteratively use subquestion for more precise retrieval. 4.3.4 Self-Ensemble The sampling during generation introduces uncertainty, which in turn, creates the possibility of improving performance through self-ensemble. Cobbe et al. (2021) trains a verifier to rank answers, and Hu et al. (2024a) utilizes LLMs to selfrank their predictions. SC (Wang et al., 2023m) performs majority voting based on answers across multiple samples, and Fu et al. (2023a) proposes a complexity-based voting strategy on top of SC. Widespread practical evidence indicates that selfensemble is an effective way to improve performance. However, answer-based ensemble fails to consider intermediate steps. In response, Miao et al. (2024); Yoran et al. (2023); Khalifa et al. (2023) refines the ensemble at the step level, and Yin et al. (2024) introduces hierarchical answer aggregation. Yet another concern is the limited diversity offered by probability sampling. To overcome this limitation, Naik et al. (2023) uses different instructions, Liu et al. (2023e) ensembles various XoT variants, and Qin et al. (2023) ensembles using multi-lingual reasoning chains. Besides, the multi-agent debate (MAD) framework can also be regarded as heterogeneous ensemblings (Liang et al., 2023; Du et al., 2023; Wang et al., 2023b). Self-ensemble, as a simple yet effective means, has gained widespread favor. 
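A minimal sketch of the answer-level voting used by self-consistency, as described above: sample several reasoning chains, extract each chain's final answer, and return the majority. The `sample` callable and the answer-extraction regex are placeholders, not code from the original implementation.

```python
import random
import re
from collections import Counter
from typing import Callable, Optional

ANSWER_RE = re.compile(r"answer is\s*(-?\d+(?:\.\d+)?)", re.IGNORECASE)

def extract_answer(completion: str) -> Optional[str]:
    """Pull the numeric answer out of a sampled reasoning chain, if one is announced."""
    m = ANSWER_RE.search(completion)
    return m.group(1) if m else None

def self_consistency(sample: Callable[[str], str], prompt: str, n: int = 10) -> Optional[str]:
    """Draw n reasoning chains with temperature sampling and return the majority answer;
    chains with no parsable answer are skipped (answer-level voting, ignoring the steps)."""
    votes: Counter = Counter()
    for _ in range(n):
        ans = extract_answer(sample(prompt))
        if ans is not None:
            votes[ans] += 1
    return votes.most_common(1)[0][0] if votes else None

if __name__ == "__main__":
    # Dummy sampler that occasionally disagrees with itself, standing in for an LLM.
    fake = lambda p: f"... The answer is {random.choice(['33', '33', '33', '35'])}."
    print(self_consistency(fake, "Michael had 58 golf balls...", n=15))
```

Note that this baseline ignores the intermediate steps entirely, which is exactly the gap the step-level and hierarchical aggregation refinements cited above try to close.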
Nevertheless, alongside the improvement in performance, there has been a multiplied increase in inference costs, which in turn limits its wide application. 4.3.5 Efficient Reasoning LLMs are often inefficient in reasoning, such as high latency, substantial annotation costs, and elevated inference costs. To speed up reasoning, Ning et al. (2023) decomposes the questions in parallel and handles them simultaneously, Zhang et al. (2023b) generates a draft to skip intermediate layers during inference, and Leviathan et al. (2023); Chen et al. (2023a) introduce speculative decoding, which employs a smaller model for faster inference. Diao et al. (2023) annotates high-uncertainty samples to reduce human costs, and Aggarwal et al. (2023) dynamically adjusts sampling frequency to reduce inference costs. Further research should focus on efficient reasoning to promote the widespread application of LLMs. 5 Frontiers of Research 5.1 Tool Use LLMs face difficulties in accessing news, performing calculations, and interacting with the environment. Previous work endows LLMs with the ability to use external tools, enhancing their reasoning capabilities and enabling them to interact with the (multi-modal) external environment (Parisi et al., 2022; Schick et al., 2023; Shen et al., 2023a). However, these methods have limitations in facilitating multiple tool invocations and rectifying query errors. To tackle this problem, ReAct (Yao et al., 2023c) and Reflexion (Shinn et al., 2023) integrate the strengths of reasoning and action to complement each other. ART (Paranjape et al., 2023) uses a task library to select relevant tools and reasoning demonstrations. MM-REACT (Yang et al., 2023b) further incorporates vision experts to facilitate multi-modal reasoning and action. Above-mentioned studies focus on leveraging external tools to grant LLMs the capacities they initially lacked, thereby improving their performance across various domains. Tool invocation facilitates interaction with external sources, enabling it to 1179 gather additional information, while XoT enables effective elicitation, tracking, and action refining. 5.2 Planning It is challenging for LLMs to provide accurate responses for complex goals, which requires planning to decompose them into sub-tasks and track the execution process. Plans can be described by code or definition languages. Sun et al. (2023) generates Python code to control the agent, and iteratively refine the plan based on the execution feedback. Liu et al. (2023a); Dagan et al. (2023) leverage the Planning Domain Definition Language (PDDL) (Gerevini, 2020) to describe the planning procedure. PDDL assists in decomposing complex problems and utilizing specialized models for planning before converting the results into natural languages. Zhou et al. (2023d) integrates self-refine (Madaan et al., 2023) with PDDL to achieve a better success rate in long-horizon sequential tasks. Instead of pre-defined plans, many studies use search algorithms to dynamically plan and explore the action space. Tree-of-Thought explores the problem through DFS or BFS search, and tracks and updates the intermediate states (Yao et al., 2023b). RAP and LATS incorporate Monte Carlo Tree Search based on reasoning trajectories in planning (Hao et al., 2023a; Zhou et al., 2023a), and ToolChain* enables more efficient exploring through heuristic A* search (Zhuang et al., 2024). LLMs, endowed with robust reasoning capabilities, can devise strategies for achieving complex goals. 
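To ground the search-based planning just described, the following is a minimal breadth-first tree-of-thoughts-style loop: a proposer suggests candidate next thoughts, an evaluator scores partial chains, and only the best few states survive each level. Both hooks would be LLM calls in practice; here they are toy stand-ins, and real ToT, RAP, or LATS systems add task-specific state representations, backtracking, or Monte Carlo estimates on top of this skeleton.

```python
from typing import Callable, List, Tuple

def tot_bfs(
    propose: Callable[[str, List[str]], List[str]],   # (question, thoughts so far) -> candidate next thoughts
    evaluate: Callable[[str, List[str]], float],      # heuristic value of a partial thought sequence
    question: str,
    depth: int = 3,
    beam: int = 2,
) -> List[str]:
    """Breadth-first tree-of-thoughts search: expand each partial chain with candidate
    thoughts, score the children, and keep the top-`beam` states at every level."""
    frontier: List[Tuple[float, List[str]]] = [(0.0, [])]
    for _ in range(depth):
        children: List[Tuple[float, List[str]]] = []
        for _, thoughts in frontier:
            for nxt in propose(question, thoughts):
                state = thoughts + [nxt]
                children.append((evaluate(question, state), state))
        if not children:
            break
        frontier = sorted(children, key=lambda c: c[0], reverse=True)[:beam]
    return frontier[0][1] if frontier else []

if __name__ == "__main__":
    # Toy proposer/evaluator so the sketch runs; a real system would query an LLM for both.
    propose = lambda q, ts: [f"step {len(ts) + 1}a", f"step {len(ts) + 1}b"]
    evaluate = lambda q, ts: -len("".join(ts)) + (1.0 if ts and ts[-1].endswith("a") else 0.0)
    print(tot_bfs(propose, evaluate, "toy question"))
```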
Furthermore, the integration of planning, reasoning, memory, and tool utilization serves as a cornerstone for LLM-powered autonomous agents. 5.3 Distillation of Reasoning Capabilities In low-resource scenarios such as edge computing, distillation offers a possibility for deploying LLMs. Some methods employ self-distillation for selfimprovement without external supervision. Huang et al. (2023a) employs self-consistency to generate reasoning chains from unlabeled data, followed by fine-tuning, enhancing its generalized reasoning capabilities. Zelikman et al. (2022) improves LM’s reasoning capabilities via self-loop bootstrapping. Despite the powerful reasoning exhibited by CoT, it emerges primarily in large-scale LLMs, with its usage limited in smaller models. Magister et al. (2023) finds that smaller models, after fine-tuning on CoT reasoning data, can also exhibit the capacity for step-by-step reasoning. Following this trend, numerous studies attempt to distill the step-by-step reasoning capabilities of LLMs into smaller models. Hsieh et al. (2023b) employs self-consistency to filter predictions, distilling high-quality reasoning chains from LLMs. Ho et al. (2023); Li et al. (2023c) find that sampling multiple reasoning chains per instance is paramount for improving students’ reasoning capability. SCOTT (Wang et al., 2023j) utilizes contrastive decoding (Li et al., 2023e; O’Brien and Lewis, 2023) and counterfactual reasoning objective to tackle the shortcut problem. Li et al. (2024) improves the generalization of reasoning for unseen tasks through LoRA mixture-of-experts distillation. Recent studies have found that the reasoning capabilities of small models can be further improved by optimizing over preference data. DialCoT (Han et al., 2023) decomposes reasoning steps into a multi-round dialog and optimizes the correct reasoning traces using PPO. Wang et al. (2023k); Feng et al. (2024) train a reward model on automatically generated data, which is designed to rank LLM’s reasoning traces, and then optimizes smaller models using PPO. (Xie et al., 2024) utilizes Monte Carlo Tree Search to sample and score reasoning trajectories, generates preference data on the fly, and uses DPO for online preference optimization. Since code serves as an excellent intermediate representation for reasoning, Zhu et al. (2023) distills program-aided reasoning capability into smaller models. Meanwhile, some studies find that distilling reasoning chains from both natural language and code formats leads to further improvement (Li et al., 2023a; Zhu et al., 2024b). In addition to regular reasoning, Yang et al. (2024a) attempts to distill tabular reasoning capabilities, and Zhao et al. (2024b) seeks to endow smaller models with retrieval-augmented reasoning capabilities. These studies adopt a shared paradigm that distills smaller models with reasoning chains generated from larger models with superior reasoning capabilities. However, it is worth noting that language models have intricate tradeoffs associated with multi-dimensional capabilities, and distilling task-specific reasoning ability may adversely downgrade the general performance (Fu et al., 2023b). 6 Future Directions Despite XoT reasoning has showcased remarkable performance on numerous tasks, there are still some challenges that necessitate further research. 1180 6.1 Multi-modal Reasoning Current XoT research mostly focuses on plain text. However, interacting with the real world necessitates multi-modal capabilities. 
To facilitate research, SciQA (Lu et al., 2022) and CURE (Chen et al., 2023c) are introduced to emphasize multimodal CoT reasoning. Through fine-tuning with the combination of vision and language features, Zhang et al. (2023i); Wang et al. (2023g) endow models with multi-modal CoT reasoning capabilities, and Yao et al. (2023d,a) further incorporate graph structures to model multi-hop relationships. Other approaches convert images to captions and use LLM for prompt-based reasoning (Yang et al., 2023b; Zheng et al., 2023b). However, the limited capabilities of vision-language models constrain their performance in multi-step reasoning (Alayrac et al., 2022; Li et al., 2023b; Peng et al., 2023). Several critical challenges remain to be addressed in future research, which we summarize as follows: (1) Vision-text interaction: How can visual and textual features be effectively integrated, than solely depending on captions? (2) Harnessing VLLMs: How can we better apply LLM-based reasoning techniques to the multi-modal domain? (3) Video Reasoning: How to expand into video reasoning with complex temporal dependencies? 6.2 Faithful Reasoning Extensive research indicates that LLMs often engage in unfaithful reasoning, such as factual errors and inconsistent reasoning. To address factual errors, one common approach is retrieval augmentation (Trivedi et al., 2023; Zhao et al., 2023a), but it requires appropriate timing and retrieval accuracy. Compared to factual errors, inconsistencies are more difficult to identify (Paul et al., 2024b). Common detection methods include deductive logic (Jiang et al., 2023b; Xue et al., 2023; Ling et al., 2023), post-processing (He et al., 2023a; Lei et al., 2023b), and critic-based approaches (Madaan et al., 2023; Nathani et al., 2023). Among them, Neural-symbolic reasoning (Chen et al., 2022a; Olausson et al., 2023) is a widely used approach for reducing inconsistencies, and question decomposition (Radhakrishnan et al., 2023) has also demonstrated its effectiveness to some degree. Furthermore, Zhang et al. (2023c); Lanham et al. (2023) investigate the factors influencing faithfulness from an empirical perspective. Faithful reasoning encounters two significant challenges: (1) Detection: How can unfaithful reasoning be accurately identified? (2) Correction: How can one obtain accurate feedback and make correct refinements based on that feedback? 6.3 Theoretical Perspective The mechanism behind the CoT and ICL has not been clearly explained so far. Some studies empirically explore the roles of CoT and ICL in reasoning, offering practical insights (Wang et al., 2023a; Madaan and Yazdanbakhsh, 2022; Tang et al., 2023). Another line of work explores from a theoretical perspective. Li et al. (2023h); Feng et al. (2023); Merrill and Sabharwal (2023); Prystawski et al. (2023) investigate why CoT enhances reasoning abilities, while Wu et al. (2023b); Tutunov et al. (2023); Hou et al. (2023); Wang et al. (2023f) examine the mechanisms from a feature-based standpoint (information flow, attention, variables, etc.). Additionally, there have been preliminary explorations of the emergence mechanism (Schaeffer et al., 2023; Zhou et al., 2023c). At present, the exploration of CoT theories is still limited to the surface level. There are still open questions that require further in-depth investigation. (1) How does the emergence capability arise? (2) In what way does CoT enhance reasoning compared to standard few-shot prompting? 
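Returning to the detection challenge raised in §6.2, one cheap and fully deterministic signal is to recompute the arithmetic a rationale asserts and flag any claim whose stated result disagrees. The toy checker below only illustrates the idea; it is not a method proposed in the surveyed work and catches nothing beyond simple binary arithmetic.

```python
import re
from typing import List

# Matches simple binary arithmetic claims of the form "a <op> b = c" inside a rationale.
CLAIM_RE = re.compile(r"(-?\d+(?:\.\d+)?)\s*([-+*/])\s*(-?\d+(?:\.\d+)?)\s*=\s*(-?\d+(?:\.\d+)?)")

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b, "/": lambda a, b: a / b if b != 0 else float("nan")}

def arithmetic_inconsistencies(rationale: str, tol: float = 1e-6) -> List[str]:
    """Return the arithmetic claims in a CoT rationale whose stated result does not
    match recomputation -- a crude, purely symbolic faithfulness signal."""
    bad = []
    for a, op, b, c in CLAIM_RE.findall(rationale):
        if abs(OPS[op](float(a), float(b)) - float(c)) > tol:
            bad.append(f"{a} {op} {b} = {c}")
    return bad

if __name__ == "__main__":
    # A rationale with a deliberately wrong final step, to show what gets flagged.
    chain = ("Leah had 32 chocolates and her sister had 42, so 32 + 42 = 74. "
             "They ate 35, so 74 - 35 = 38 are left.")
    print(arithmetic_inconsistencies(chain))  # -> ['74 - 35 = 38']
```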
7 Discussion We delve into open questions about chain-ofthought reasoning, with the details discussion in Appendix A.2. The discussion encompasses three topics: (a) How does chain-of-thought reasoning ability emerge with large-scale pre-training? (b) How to provide accurate feedback for a model’s reasoning and decision-making. (c) The implications of chain-of-thought reasoning for LLM-powered autonomous agents and AGI. 8 Conclusion In this paper, we conduct a systematic survey of existing research on generalized chain-of-thought reasoning, offering a comprehensive review of the field. Specifically, we meticulously categorize advanced methods, delve into current frontier research, highlight existing challenges, identify potential future research directions, and discuss open questions. This paper is the first systematic survey dedicated to CoT reasoning. We hope that this survey will facilitate further research in this area. 1181 Limitations This study provides the first comprehensive survey of generalized chain-of-thought (XoT) reasoning. Related work, benchmarks details and further discussion can be found in Appendix A,B. We have made our best effort, but there may still be some limitations. On one hand, due to page limitations, we can only provide a brief summary of each method without exhaustive technical details. On the other hand, we primarily collect studies from ∗ACL, NeurIPS, ICLR, ICML, COLING and arXiv, and there is a chance that we may have missed some important work published in other venues. In the benchmarks section, we primarily list widely used datasets, and more complete benchmarks can be found in Guo et al. (2023). As of now, there is no definitive conclusion on open questions. We will stay abreast of discussions within the research community, updating opinions and supplementing overlooked work in the future. Acknowledgements The research in this article is supported by the National Key Research and Development Project (2021YFF0901602), the National Science Foundation of China (U22B2059, 62276083), and Shenzhen Foundational Research Funding (JCYJ20200109113441941), Major Key Project of PCL (PCL2021A06). Ming Liu is the corresponding author. References Pranjal Aggarwal, Aman Madaan, Yiming Yang, and Mausam. 2023. Let’s sample step by step: Adaptiveconsistency for efficient reasoning with llms. ArXiv preprint, abs/2305.11860. Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhitao Gong, Sina Samangooei, Marianne Monteiro, Jacob L. Menick, Sebastian Borgeaud, Andy Brock, Aida Nematzadeh, Sahand Sharifzadeh, Mikolaj Binkowski, Ricardo Barreira, Oriol Vinyals, Andrew Zisserman, and Karén Simonyan. 2022. Flamingo: a visual language model for few-shot learning. In NeurIPS. Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. 2019. MathQA: Towards interpretable math word problem solving with operation-based formalisms. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2357–2367, Minneapolis, Minnesota. Association for Computational Linguistics. Simran Arora, Avanika Narayan, Mayee F Chen, Laurel Orr, Neel Guha, Kush Bhatia, Ines Chami, and Christopher Re. 2023. Ask me anything: A simple strategy for prompting language models. 
In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net. Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, and Torsten Hoefler. 2023. Graph of thoughts: Solving elaborate problems with large language models. ArXiv preprint, abs/2308.09687. Sumithra Bhakthavatsalam, Daniel Khashabi, Tushar Khot, Bhavana Dalvi Mishra, Kyle Richardson, Ashish Sabharwal, Carissa Schoenick, Oyvind Tafjord, and Peter Clark. 2021. Think you have solved direct-answer question answering? try arcda, the direct-answer AI2 reasoning challenge. ArXiv preprint, abs/2102.03315. Yonatan Bisk, Rowan Zellers, Ronan LeBras, Jianfeng Gao, and Yejin Choi. 2020. PIQA: reasoning about physical commonsense in natural language. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 7432– 7439. AAAI Press. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott M. Lundberg, Harsha Nori, Hamid Palangi, Marco Túlio Ribeiro, and Yi Zhang. 2023. Sparks of artificial general intelligence: Early experiments with GPT-4. ArXiv preprint, abs/2303.12712. Tianle Cai, Xuezhi Wang, Tengyu Ma, Xinyun Chen, and Denny Zhou. 2024. Large language models as 1182 tool makers. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna Austria, May 7-11, 2024. OpenReview.net. Shulin Cao, Jiajie Zhang, Jiaxin Shi, Xin Lv, Zijun Yao, Qi Tian, Lei Hou, and Juanzi Li. 2023. Probabilistic tree-of-thought reasoning for answering knowledgeintensive complex questions. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 12541–12560, Singapore. Association for Computational Linguistics. Charlie Chen, Sebastian Borgeaud, Geoffrey Irving, Jean-Baptiste Lespiau, Laurent Sifre, and John Jumper. 2023a. Accelerating large language model decoding with speculative sampling. CoRR, abs/2302.01318. Sijia Chen, Baochun Li, and Di Niu. 2024. Boosting of thoughts: Trial-and-error problem solving with large language models. CoRR, abs/2402.11140. Wenhu Chen. 2023. Large language models are few(1)shot table reasoners. In Findings of the Association for Computational Linguistics: EACL 2023, pages 1120–1130, Dubrovnik, Croatia. Association for Computational Linguistics. Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. 2022a. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. 
ArXiv preprint, abs/2211.12588. Wenhu Chen, Ming Yin, Max Ku, Pan Lu, Yixin Wan, Xueguang Ma, Jianyu Xu, Xinyi Wang, and Tony Xia. 2023b. TheoremQA: A theorem-driven question answering dataset. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 7889–7901, Singapore. Association for Computational Linguistics. Yangyi Chen, Karan Sikka, Michael Cogswell, Heng Ji, and Ajay Divakaran. 2023c. Measuring and improving chain-of-thought reasoning in vision-language models. ArXiv preprint, abs/2309.04461. Zhipeng Chen, Kun Zhou, Beichen Zhang, Zheng Gong, Xin Zhao, and Ji-Rong Wen. 2023d. ChatCoT: Tool-augmented chain-of-thought reasoning on chatbased large language models. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 14777–14790, Singapore. Association for Computational Linguistics. Zhiyu Chen, Wenhu Chen, Charese Smiley, Sameena Shah, Iana Borova, Dylan Langdon, Reema Moussa, Matt Beane, Ting-Hao Huang, Bryan Routledge, and William Yang Wang. 2021. FinQA: A dataset of numerical reasoning over financial data. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3697–3711, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Zhiyu Chen, Shiyang Li, Charese Smiley, Zhiqiang Ma, Sameena Shah, and William Yang Wang. 2022b. ConvFinQA: Exploring the chain of numerical reasoning in conversational finance question answering. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 6279–6292, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Zhoujun Cheng, Tianbao Xie, Peng Shi, Chengzu Li, Rahul Nadkarni, Yushi Hu, Caiming Xiong, Dragomir Radev, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, and Tao Yu. 2023. Binding language models in symbolic languages. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net. Zheng Chu, Jingchang Chen, Qianglong Chen, Weijiang Yu, Haotian Wang, Ming Liu, and Bing Qin. 2023. Timebench: A comprehensive evaluation of temporal reasoning abilities in large language models. ArXiv preprint, abs/2311.17667. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. ArXiv preprint, abs/2110.14168. Nicholas Crispino, Kyle Montgomery, Fankun Zeng, Dawn Song, and Chenguang Wang. 2023. Agent instructs large language models to be general zeroshot reasoners. Preprint, arXiv:2310.03710. Gautier Dagan, Frank Keller, and Alex Lascarides. 2023. Dynamic planning with a llm. ArXiv preprint, abs/2308.06391. Yuntian Deng, Kiran Prasad, Roland Fernandez, Paul Smolensky, Vishrav Chaudhary, and Stuart Shieber. 2023. Implicit chain of thought reasoning via knowledge distillation. Preprint, arXiv:2311.01460. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Shehzaad Dhuliawala, Mojtaba Komeili, Jing Xu, Roberta Raileanu, Xian Li, Asli Celikyilmaz, and Jason Weston. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3277–3287, Online. Association for Computational Linguistics. Tinghui Zhu, Kai Zhang, Jian Xie, and Yu Su. 2024a. Deductive beam search: Decoding deducible rationale for chain-of-thought reasoning. CoRR, abs/2401.17686. Xuekai Zhu, Biqing Qi, Kaiyan Zhang, Xingwei Long, and Bowen Zhou. 2023. Pad: Program-aided distillation specializes large models in reasoning. CoRR, abs/2305.13888. Xunyu Zhu, Jian Li, Yong Liu, Can Ma, and Weiping Wang. 2024b. Improving small language models’ mathematical reasoning via equation-of-thought distillation. CoRR, abs/2401.11864. Yuchen Zhuang, Xiang Chen, Tong Yu, Saayan Mitra, Victor Bursztyn, Ryan A. Rossi, Somdeb Sarkhel, and Chao Zhang. 2024. Toolchain*: Efficient action space navigation in large language models with a* search. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna Austria, May 7-11, 2024. OpenReview.net. Jin Ziqi and Wei Lu. 2023. Tab-CoT: Zero-shot tabular chain of thought. In Findings of the Association for Computational Linguistics: ACL 2023, pages 10259– 10277, Toronto, Canada. Association for Computational Linguistics. Anni Zou, Zhuosheng Zhang, and Hai Zhao. 2024. Aurora: A one-for-all platform for augmented reasoning and refining with task-adaptive chain-ofthought prompting. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, LREC/COLING 2024, 20-25 May, 2024, Torino, Italy, pages 1801–1807. ELRA and ICCL. Anni Zou, Zhuosheng Zhang, Hai Zhao, and Xiangru Tang. 2023. Meta-cot: Generalizable chain-ofthought prompting in mixed-task scenarios with large language models. ArXiv preprint, abs/2310.06692. A Appendix A.1 Related Survey Zhao et al. (2023b) primarily focuses on the development of contemporary LLMs, while Qiu et al. (2020) surveys about early PLMs. Some works discuss reasoning in specific domains, such as mathematical reasoning (Lu et al., 2023c), commonsense reasoning (Talmor et al., 2019), and logical reasoning (Yang et al., 2023c). Huang et al. (2023b); Zhang et al. (2023e) conducts an investigation into potential hallucination phenomena in LLM’s reasoning. Dong et al. (2023) discusses in-context learning techniques in the era of LLMs, and Yu et al. (2023a) conducts a macroscopic investigation into natural language reasoning. Liu et al. (2023d) discusses prompt tuning, Qiao et al. (2023); Yu et al. (2023c); Huang and Chang (2023) focus on prompt engineering and reasoning strategies, and Zhang et al. (2023g) highlights the development from chain-of-thought reasoning to autonomous agents. This repository2 also collects chain-of-thought reasoning papers. Distinct from the above-mentioned surveys, this paper focuses on generalized chain-of-thought (XoT) reasoning in the era of LLMs. This is the first systematic investigation into XoT reasoning, and we hope our work can serve as an overview to facilitate future research. A.2 Further Discussion Open Question: Does CoT ability originate from code data pre-training? This is a pending question, initially summarized by Fu and Khot (2022) and widely circulated in the research community. 
In the early stages, LLMs like GPT3 (Brown et al., 2020) (davinci) and OPT (Zhang et al., 2022b) usually do not possess CoT capabilities, and they do not use or only incorporate a small amount of code data (not specialized) during pre-training. Recent models often incorporate specialized code data during pre-training, such as GPT-3.5, LLaMA2 (Touvron et al., 2023b) (with approximately 8% of code data during pre-training) and they all possess strong CoT capabilities. Additionally, Gao et al. (2023); Chen et al. (2022a) have found that the use of programming language form rationales can significantly enhance the model’s performance on complex reasoning tasks. Various indications point towards the source of CoT abilities lying in code data during pre-training. 2Timothyxxx/Chain-of-ThoughtsPapers 1198 Recently, Ma et al. (2024) investigates the impact of code data on LLMs at different training stages, reaching the first qualitative conclusion supported by quantitative experimental results. They find that mixing code data during the pre-training stage enhances general reasoning abilities, while doing that in the instruction fine-tuning stage endows task-specific reasoning abilities. Open Question: How to provide precise feedback on model’s reasoning or decisions? When dealing with multi-step reasoning or decisionmaking tasks, errors often occur in intermediate steps, and if these errors are not corrected promptly, they may lead to cascading errors. Currently, the primary methods for obtaining feedback include feedback from model itself (Madaan et al., 2023; Shinn et al., 2023), feedback from other models (Paul et al., 2024a), feedback from the external environment (Nathani et al., 2023; Gou et al., 2024a), and feedback based on reinforcement learning (Uesato et al., 2022; Lightman et al., 2024; Ma et al., 2023). However, some studies have raised doubts about the ability of LLMs to provide selffeedback (Huang et al., 2024a; Jiang et al., 2024). Generally speaking, certain issues exist in current methods. (1) How dependable is the feedback generated by the model itself? (2) Is there a fundamental distinction between feedback from other language models and self-feedback? (3) Does the feedback quality still remain constrained by the model’s capability boundaries? (4) How is external feedback for various scenarios pre-defined, and how can this be expanded to different scenarios? In summary, there is currently no fully satisfying feedback approach and more research attention is needed on how to accurately obtain feedback signals from the model’s intermediate reasoning. Discussion: Towards (early) AGI AGI has been the long-standing ultimate aspiration in the realm of artificial intelligence. Recent research on LLM-powered autonomous agents has successfully demonstrated a preliminary implementation of nascent artificial general intelligence (AGI). Synergy between reasoning and interaction. Equipped with robust language comprehension capabilities, LLMs can interact with the external world through text-based interactions using plugins (tools, KB query, search engine, etc.) (Schick et al., 2023; Shen et al., 2023a; Qin et al., 2024). Combining powerful reasoning capabilities, LLMs have made significant strides in various planning and decision-making tasks (Shinn et al., 2023; Yao et al., 2023b; Zhuang et al., 2024), catalyzing research on LLM-based autonomous agents (Wang et al., 2023h; Xi et al., 2023; Zhang et al., 2023g). LLM acts as the Brain (Controller). 
In contrast to traditional AI, which concentrates on specific tasks, AGI seeks the ability to understand general tasks (Devlin et al., 2019; Dosovitskiy et al., 2021), covering a widespread spectrum. Within LLM-powered AI, the LLM typically serves as the brain (or central controller), handling reasoning, planning and decision-making, while delegating specific execution to dedicated modules (tools, weak AI, etc.) (Shen et al., 2023a; Yang et al., 2023a). LLM-powered AI has already diverged significantly from weak AI and is progressing toward human cognition and thinking. While some studies suggest that LLMs represent an early manifestation of AGI (Bubeck et al., 2023; Jack, 2023), there are also scholars who contend that LLMs may not progress into AGI due to factors such as auto-regressive modeling and limited memory. As of now, there is still intense debate on whether LLMs can evolve into AGI. But regardless, LLM-powered AI has embarked on a distinctly different path from traditional AI, evolving towards a more generalized direction. A.3 Early Attempts and Efforts in Specific Domains In this section, we list the early attempts of XoT reasoning and efforts focused on specific domains. Before the concept of CoT was introduced (Wei et al., 2022b), some efforts were made to enhance reasoning performance through the use of rationales (Marasovic et al., 2022; Rajani et al., 2019a,b; Dua et al., 2020). After that, certain work has empirically demonstrated the effectiveness of chainof-thought prompting (Lampinen et al., 2022; Ye and Durrett, 2022; Arora et al., 2023) and Shi et al. (2023) explores multi-lingual CoT reasoning. Other work focuses on specific domains, such as machine translation (He et al., 2023b), sentiment analysis (Fei et al., 2023), sentence embeddings (Zhang et al., 2023a), summarization (Wang et al., 2023n), arithmetic (Lee and Kim, 2023), tabular reasoning (Chen, 2023; Ziqi and Lu, 2023), and backdoor attack (Xiang et al., 2024), etc. Katz et al. (2022); Zhang et al. (2022a) provide benchmarks and resources. Besides, some research utilizes specific pre-training to enhance reasoning (Lewkowycz et al., 2022; Zhao et al., 2022). 1199 A.4 Empirical Results We statistic the performance of various XoT methods in mathematics, commonsense, and symbolic reasoning, as shown in Table 2. We primarily collect the performance of GPT series models and the results are mainly from corresponding papers (some results are used as baselines in other papers). It is worth noting that due to variations in model checkpoints and experimental setups, even the methods with the same backbone LLM may not be fairly comparable. Therefore, this table only provides a rough trend of performance. B Details of Benchmarks B.1 Mathematical Reasoning Mathematical reasoning is often used to measure the reasoning power of a model. Early benchmarks contain simple arithmetic operations (Hosseini et al., 2014; Koncel-Kedziorski et al., 2015; Roy and Roth, 2015; Koncel-Kedziorski et al., 2016). Ling et al. (2017) labels the reasoning process in natural language form, and Amini et al. (2019) builds on AQUA by labeling the reasoning process in program form. Later benchmarks (Miao et al., 2020; Patel et al., 2021; Cobbe et al., 2021; Gao et al., 2023) contain more complex and diverse questions. (Zhu et al., 2021; Chen et al., 2021, 2022b) require reasoning based on the table content. 
There are also competition-level benchmarks (Hendrycks et al., 2021b; Mishra et al., 2022a,b) and reading comprehension form benchmarks (Dua et al., 2019; Chen et al., 2023b). B.2 Commonsense Reasoning Commonsense reasoning entails the process of drawing inferences, forming judgments, and gaining insights based on widely known and commonly accepted world knowledge. Acquiring and understanding commonsense knowledge presents a significant challenge for models engaged in commonsense reasoning. Various benchmarks have been put forward to address these challenges, including commonsense understanding (Talmor et al., 2019, 2021; Bhakthavatsalam et al., 2021; Mihaylov et al., 2018; Geva et al., 2021; Huang et al., 2019; Bisk et al., 2020), event temporal commonsense reasoning (Rashkin et al., 2018; Zhou et al., 2019) , and commonsense verification (Wang et al., 2019). B.3 Symbolic Reasoning Symbolic reasoning here refers specifically to the simulation of some simple operations, which are simple for humans yet challenging for LLMs. Last letter concatenation, coin flip, and reverse list (Wei et al., 2022b) are the most commonly used symbolic reasoning tasks. In addition, the collaborative benchmark BigBench (Srivastava et al., 2022) and BigBench-Hard (Suzgun et al., 2023) also contain several symbolic reasoning datasets, such as state tracking and object counting. B.4 Logical Reasoning Logical reasoning encompasses deductive reasoning, inductive reasoning, and abductive reasoning. Deductive reasoning derives conclusions from general premises (Liu et al., 2020; Yu et al., 2020; Tafjord et al., 2021; Han et al., 2022; Hong et al., 2023). Inductive reasoning derives general conclusions from special cases (Yang et al., 2024b). Abductive reasoning gives rational explanations for observed phenomena (Saparov and He, 2023). B.5 Multi-modal Reasoning In the real world, reasoning also involves information in modalities other than text, with visual modalities being the most prevalent. To this end, many benchmarks for visual multi-modal reasoning are proposed (Zellers et al., 2019; Park et al., 2020; Dong et al., 2022; Lu et al., 2022), and among them, ScienceQA (Lu et al., 2022) annotates reasoning process and is the most commonly used visual multi-modal reasoning benchmark. Video multi-modal reasoning (Lei et al., 2020; Yi et al., 2020; Wu et al., 2021; Xiao et al., 2021; Li et al., 2022; Gupta and Gupta, 2022) is more challenging as it introduces additional temporal information compared to visual multi-modal reasoning. B.6 Comprehensive Benchmarks Apart from the aforementioned individual datasets, there are also some comprehensive evaluation benchmarks. Some works aim to provide a holistic evaluation of the general reasoning capabilities (Srivastava et al., 2022; Suzgun et al., 2023; Hendrycks et al., 2021a; Huang et al., 2023d; Liang et al., 2022). In addition, there are also some multi-task benchmarks that focus on specific reasoning abilities, such as logical reasoning (Luo et al., 2023; Liu et al., 2023b) and temporal reasoning (Chu et al., 2023; Wang and Zhao, 2023). 
1200 Task Dataset Size Input Output Rationale Description AddSub (Hosseini et al., 2014) 395 Question Number Equation Simple arithmetic SingleEq (Koncel-Kedziorski et al., 2015) 508 Question Number Equation Simple arithmetic MultiArith (Roy and Roth, 2015) 600 Question Number Equation Simple arithmetic MAWPS (Koncel-Kedziorski et al., 2016) 3,320 Question Number Equation Simple arithmetic AQUA-RAT (Ling et al., 2017) 100,000 Question Option Natural Language Math reasoning with NL rationale ASDiv (Miao et al., 2020) 2,305 Question Number Equation Multi-step math reasoning SVAMP (Patel et al., 2021) 1,000 Question Number Equation Multi-step math reasoning GSM8K (Cobbe et al., 2021) 8,792 Question Number Natural Language Multi-step math reasoning GSM-Hard (Gao et al., 2023) 936 Question Number Natural Language GSM8K with larger number MathQA (Amini et al., 2019) 37,297 Question Number Operation Annotated based on AQUA DROP (Dua et al., 2019) 96,567 Question+Passage Number+Span Equation Reading comprehension form TheoremQA (Chen et al., 2023b) 800 Question+Theorem Number ✗ Answer based on theorems TAT-QA (Zhu et al., 2021) 16,552 Question+Table+Text Number+Span Operation Answer based on tables FinQA (Chen et al., 2021) 8,281 Question+Table+Text Number Operation Answer based on tables ConvFinQA (Chen et al., 2022b) 3,892 Question+Table+Dialog Number Operation Multi-turn dialogs MATH (Hendrycks et al., 2021b) 12,500 Question Number Natural Language Challenging competition math problems Mathematical NumGLUE (Mishra et al., 2022b) 101,835 Question+Text Number+Span ✗ Multi-task benchmark Reasoning LILA (Mishra et al., 2022a) 133,815 Question+Text Free-form Program Multi-task benchmark ARC (Bhakthavatsalam et al., 2021) 7,787 Question Option ✗ From science exam OpenBookQA (Mihaylov et al., 2018) 5,957 Question+Context Option ✗ Open-book knowledges PIQA (Bisk et al., 2020) 21,000 Goal+Solution Option ✗ Physical commonsense knowledge CommonsenseQA (Talmor et al., 2019) 12,247 Question Option ✗ Derived from ConceptNet CommonsenseQA 2.0 (Talmor et al., 2021) 14,343 Question Yes/No ✗ Gaming annotation with high quality Event2Mind (Rashkin et al., 2018) 25,000 Event Intent+Reaction ✗ Intension commonsense reasoning McTaco (Zhou et al., 2019) 13,225 Question Option ✗ Event temporal commonsense reasoning CosmosQA (Huang et al., 2019) 35,588 Question+Paragraph Option ✗ Narrative commonsense reasoning ComValidation (Wang et al., 2019) 11,997 Statement Option ✗ Commonsense verification Commonsense ComExplanation (Wang et al., 2019) 11,997 Statement Option/Free-form ✗ Commonsense explanation Reasoning StrategyQA (Geva et al., 2021) 2,780 Question Yes/No ✗ Multi-hop commonsense reasoning Last Letter Concat. 
(Wei et al., 2022b) Words Letters ✗ Rule-based Coin Flip (Wei et al., 2022b) Statement Yes/No ✗ Rule-based Reverse List (Wei et al., 2022b) List Reversed List ✗ Rule-based Symbolic BigBench (Srivastava et al., 2022) ✗ Contains multiple symbolic reasoning datasets Reasoning BigBench-Hard (Suzgun et al., 2023) ✗ Contains multiple symbolic reasoning datasets ReClor (Yu et al., 2020) 6,138 Question+Context Option ✗ Questions from GMAT and LSAT LogiQA (Liu et al., 2020) 8,678 Question+Paragraph Option ✗ Questions from China Civil Service Exam ProofWriter (Tafjord et al., 2021) 20,192 Question+Rule Answer+Proof Entailment Tree Reasoning process generation FOLIO (Han et al., 2022) 1,435 Conclusion+Premise Yes/No ✗ First-order logic Logical DEER (Yang et al., 2024b) 1,200 Fact Rule ✗ Inductive reasoning Reasoning PrOntoQA (Saparov and He, 2023) Question+Context Yes/No+Proccess First-Order Logic Deductive reasoning VCR (Zellers et al., 2019) 264,720 Question+Image Option Natural Language Visual commonsense reasoning VisualCOMET (Park et al., 2020) 1,465,704 Image+Event Action+Intent ✗ Visual commonsense reasoning PMR (Dong et al., 2022) 15,360 Image+Background Option ✗ Premise-based multi-modal reasoning ScienceQA (Lu et al., 2022) 21,208 Q+Image+Context Option Natural Language Multi-modal reasoning with NL rationales VLEP (Lei et al., 2020) 28,726 Premise+Video Option ✗ Video event prediction CLEVRER (Yi et al., 2020) 305,280 Question+Video Option/Free-form Program Video temporal and causal reasoning STAR (Wu et al., 2021) 600,000 Question+Video Option ✗ Video situated reasoning NEXT-QA (Xiao et al., 2021) 47,692 Question+Video Option ✗ Video temporal,causal,commonsense reasoning Multimodal Causal-VidQA (Li et al., 2022) 107,600 Question+Video Free-form Natural Language Video causal and commonsense reasoning Reasoning News-KVQA (Gupta and Gupta, 2022) 1,041,352 Q+V+KG Option ✗ Video reasoning with external knowledge Table 1: An overview of benchmarks and tasks on reasoning. 
1201 Method Setting Backbone Mathematical Commonsense Symbolic GSM8K SVAMP Asdiv AQuA CSQA StrategyQA LastLetterConcat CoinFlip I-O Prompting (Brown et al., 2020) fewshot text-davinci-002 19.7 69.9 74 29.5 79.5 65.9 5.8 49.0 Fewshot CoT (Wei et al., 2022b) fewshot text-davinci-002 63.1 76.4 80.4 45.3 73.5 65.4 77.5 99.6 PoT (Chen et al., 2022a) fewshot text-davinci-002 80 89.1 58.6 Complex CoT (Fu et al., 2023a) fewshot text-davinci-002 72.6 77 Automate CoT (Shum et al., 2023) fewshot text-davinci-002 49.7 73.3 74.2 37.9 76.1 67.9 58.9 Fewshot CoT (Wei et al., 2022b) fewshot text-davinci-003 66.83 69.06 29.13 PHP (Zheng et al., 2023a) fewshot text-davinci-003 79 84.7 58.6 Self-consistency (Wang et al., 2023m) fewshot text-davinci-003 67.93 83.11 55.12 Active Prompt (Diao et al., 2023) fewshot text-davinci-003 65.6 80.5 79.8 48 78.9 74.2 71.2 Synthetic Prompt (Shao et al., 2023b) fewshot text-davinci-003 73.9 81.8 80.7 FOBAR (Jiang et al., 2023b) fewshot text-davinci-003 79.5 86 58.66 Boosted Prompting (Pitis et al., 2023) fewshot text-davinci-003 71.6 55.1 Fewshot CoT (Wei et al., 2022b) fewshot code-davinci-002 60.1 75.8 80.1 39.8 79 73.4 70.4 99 Self-Consistency (Wang et al., 2023m) fewshot code-davinci-002 78 86.8 87.8 52 81.5 79.8 73.4 99.5 PAL (Gao et al., 2023) fewshot code-davinci-002 72 79.4 79.6 Resprompt (Jiang et al., 2023a) fewshot code-davinci-002 66.6 45.3 DIVERSE (Li et al., 2023g) fewshot code-davinci-002 82.3 87 88.7 79.9 78.6 Least-to-Most (Zhou et al., 2023b) fewshot code-davinci-002 68.01 94 Boosted Prompting (Pitis et al., 2023) fewshot code-davinci-002 83.3 88.6 61.7 Fewshot CoT (Wei et al., 2022b) fewshot gpt-3.5-turbo 76.5 81.9 54.3 78 63.7 73.2 99 Self-consistency (Wang et al., 2023m) fewshot gpt-3.5-turbo 81.9 86.4 62.6 MetaCoT (Zou et al., 2023) fewshot gpt-3.5-turbo 75.1 88.6 54.7 72.4 64.5 77.2 100 Verify CoT (Ling et al., 2023) fewshot gpt-3.5-turbo 86 69.5 92.6 Active Prompting (Diao et al., 2023) fewshot gpt-3.5-turbo 81.8 82.5 87.9 55.3 RCoT (Xue et al., 2023) fewshot gpt-3.5-turbo 84.6 84.9 89.3 57.1 FOBAR (Jiang et al., 2023b) fewshot gpt-3.5-turbo 87.4 87.4 57.5 Memory-of-Thought (Li and Qiu, 2023) fewshot gpt-3.5-turbo 54.1 Adaptive-consistency (Aggarwal et al., 2023) fewshot gpt-3.5-turbo 82.7 85 83 67.9 Boosted Prompting (Pitis et al., 2023) fewshot gpt-3.5-turbo 87.1 72.8 Zeroshot CoT (Kojima et al., 2022) zeroshot text-davinci-002 40.5 63.7 31.9 64 52.3 57.6 87.8 PoT (Chen et al., 2022a) zeroshot text-davinci-002 57 70.8 43.9 AutoCoT (Zhang et al., 2023h) zeroshot text-davinci-002 47.9 69.5 36.5 74.4 65.4 59.7 99.9 COSP (Aggarwal et al., 2023) zeroshot code-davinci-001 8.7 55.4 52.8 Plan-and-Solve (Wang et al., 2023i) zeroshot text-davinci-003 58.2 72 42.5 65.2 63.8 64.8 96.8 Agent-Instruct (Crispino et al., 2023) zeroshot gpt-3.5-turbo 73.4 80.8 57.9 74.1 69 99.8 95.2 Self-Refine (Madaan et al., 2023) zeroshot gpt-3.5-turbo 64.1 RCoT (Xue et al., 2023) zeroshot gpt-3.5-turbo 82 79.6 86 55.5 Table 2: The performance of various XoT methods in commonly used mathematical, commonsense and symbolic reasoning benchmarks. It is worth noting that, due to variations in the experimental setups of different methods, their performances are not directly comparable. The table is used to provide an overall empirical insight. 
1202 A survey of X-of-Thought Advanced Methods (§4) XoT Prompt Construction (§4.1) Manual Prompting Few-shot CoT (Wei et al., 2022b), PAL (Gao et al., 2023), PoT (Chen et al., 2022a), MathPrompter (Imani et al., 2023), Complex CoT (Fu et al., 2023a) Automatic Prompting Zero-shot CoT (Kojima et al., 2022), PoT (Chen et al., 2022a), Plan-and-Solve (Wang et al., 2023i), Auto-CoT (Zhang et al., 2023h), RePrompting (Xu et al., 2023), Agent-Instruct (Crispino et al., 2023), MetaCoT (Zou et al., 2023), COSP (Wan et al., 2023), LogiCoT (Zhao et al., 2024a), Role-Play Prompting (Kong et al., 2023a) Semi-automatic Prompting Synthetic Prompting (Shao et al., 2023b), AutoMate CoT (Shum et al., 2023), Explanation-Selection (Ye and Durrett, 2023), BoostedPrompt (Pitis et al., 2023), DynamicPrompt (Lu et al., 2023b), SPCoT (Wang et al., 2023d) XoT Topological Variants (§4.2) Chain Structure PoT (Chen et al., 2022a), PAL (Gao et al., 2023), LINC (Olausson et al., 2023), LogicLM (Pan et al., 2023), SatisfiedLM (Ye et al., 2023a), AoT (Sel et al., 2023), CoS (Hu et al., 2023a), Tree Structure ToT1 (Yao et al., 2023b), ToT2 (Long, 2023), ToUT (Mo and Xin, 2023), Skeleton-of-Thought (Ning et al., 2023), ProbTree (Cao et al., 2023), Thought-Propagation (Yu et al., 2024) Graph Structure GoT1 (Besta et al., 2023), GoT2 (Lei et al., 2023a), ResPrompt (Jiang et al., 2023a), Cascades Mixture-ofThought (Dohan et al., 2022) XoT Enhancement Methods (§4.3) Verify and Refine Self-Refine (Madaan et al., 2023), DIVERSE (Li et al., 2023g), Reflexion (Shinn et al., 2023), R3Prompt (Tian et al., 2023), REFINER (Paul et al., 2024a), SCREWS (Shridhar et al., 2023), CRITIC (Gou et al., 2024a), MAF (Nathani et al., 2023), CannotSelfCorrect (Huang et al., 2024a), Verify-and-Edit (Zhao et al., 2023a), VerifyCoT (Ling et al., 2023), RCoT (Xue et al., 2023), SelfVerification (Weng et al., 2022), FOBAR (Jiang et al., 2023b), AuRoRA (Zou et al., 2024), SeIFReasoner (Wu et al., 2024), Guided Decoding (Xie et al., 2023), DBS (Zhu et al., 2024a) Question Decompose Least-to-Most (Zhou et al., 2023b), Decomposed Prompting (Khot et al., 2023), Successive Prompting (Dua et al., 2022), PHP (Zheng et al., 2023a), CogTree (Junbing et al., 2023), iCAP (Wang et al., 2022), SelfAsk (Press et al., 2023), IRCot (Trivedi et al., 2023), SocraticQuestion (Qi et al., 2023), CumulativeReasoning (Zhang et al., 2024), Binder (Cheng et al., 2023), VersatileDecomposer (Ye et al., 2023b) Knowledge Enhancement CoD (Lu et al., 2023a), CoK1 (Li et al., 2023f), CoK2 (Wang et al., 2023c), Memory-of-Thought (Li and Qiu, 2023), KD-CoT (Wang et al., 2023e), IAG (Zhang et al., 2023f), Self-Ask (Press et al., 2023), Iter-RetGen (Shao et al., 2023a), MCR (Yoran et al., 2023), Crystal (Liu et al., 2023c), Chain-ofVerification (Dhuliawala et al., 2023), StepbackPrompt (Zheng et al., 2024) Self-Ensemble Verifiers (Cobbe et al., 2021), Self-Consistency (Wang et al., 2023m), ComplexCoT (Fu et al., 2023a), GRACE (Khalifa et al., 2023), Self-Check (Miao et al., 2024), MCR (Yoran et al., 2023), Diversity-ofThought (Naik et al., 2023), DiverseXoT (Liu et al., 2023e), CLP (Shi et al., 2023), MAD1 (Liang et al., 2023), MAD2 (Du et al., 2023), MAD3 (Wang et al., 2023b), Aggregation-of-Reasoning (Yin et al., 2024), Rank Prompt (Hu et al., 2024a), Efficient Reasoning Adaptive Consistency (Aggarwal et al., 2023), Skeleton-of-Thought (Ning et al., 2023), ActivePrompting (Diao et al., 2023), DraphVerify (Zhang et al., 2023b), SpeculativeDecoding (Leviathan et al., 2023) Frontier (§5) 
Tool Use (§5.1) MRKL (Karpas et al., 2022), TAML (Parisi et al., 2022), HuggingGPT (Shen et al., 2023a), Toolformer (Schick et al., 2023), ToolkenGPT (Hao et al., 2023b), ChatCoT (Chen et al., 2023d), LATM (Cai et al., 2024), GEAR (Lu et al., 2024), ToolLLM (Qin et al., 2024), ToolDoc (Hsieh et al., 2023a), MINT (Wang et al., 2024), ReACT (Yao et al., 2023c), ART (Paranjape et al., 2023), MMREACT (Yang et al., 2023b), API-Bank (Li et al., 2023d), MetaTool (Huang et al., 2023c), TaskBench (Shen et al., 2023b) Planning (§5.2) AdaPlanner (Sun et al., 2023), LLM+P (Liu et al., 2023a), LLM+DP (Dagan et al., 2023), ISRLLM (Zhou et al., 2023d), ReAct (Yao et al., 2023c), Self-Refine (Madaan et al., 2023), Reflexion (Shinn et al., 2023), Plan, Verify and Switch (Liu et al., 2023f), ToT1 (Yao et al., 2023b), ToT2 (Long, 2023), Tree-Planner (Hu et al., 2024b), RAP (Hao et al., 2023a), LATS (Zhou et al., 2023a), ToolChain* (Zhuang et al., 2024), TPTU (Ruan et al., 2023), TPTUv2 (Kong et al., 2023b), AgentInstruct (Crispino et al., 2023), ToRA (Gou et al., 2024b), AutoUI (Zhang and Zhang, 2023) Distillation (§5.3) LMSI (Huang et al., 2023a), STaR (Zelikman et al., 2022), Magister et al. (2023), SCoTD (Li et al., 2023c), SECToR (Zhang and Parkes, 2023), Distilling Step-by-Step (Hsieh et al., 2023b), SCOTT (Wang et al., 2023j), DialCoT (Han et al., 2023), PlanningToken (Wang et al., 2023l), TailoredLearning (Wang et al., 2023o), Yu et al. (2023b), ImplicitCoT (Deng et al., 2023), Fu et al. (2023b), CoT Collection (Kim et al., 2023), MoDE-CoTD (Li et al., 2024), PRR (Zhao et al., 2024b), PaD (Zhu et al., 2023), EoTD (Li et al., 2023a), Mixed Distillation (Zhu et al., 2024b), Math-Shepherd (Wang et al., 2023k), MCTS-IPL (Xie et al., 2024) Future Directions (§6) Multi-modal (§6.1) Multi-modalCoT (Zhang et al., 2023i), GoT3 (Yao et al., 2023d), ToMT (Hu et al., 2023b), Hypergraph-of-Thought (Yao et al., 2023a), T-SciQ (Wang et al., 2023g), SocraticQuestion (Qi et al., 2023), MMReact (Yang et al., 2023b) Faithfulness (§6.2) Rethinking and Retrievaling (He et al., 2023a), Verify-and-Edit (Zhao et al., 2023a), CoK (Li et al., 2023f), Verify-Edit (Zhao et al., 2023a), Chain-of-NLI (Lei et al., 2023b), Radhakrishnan et al. (2023), Lanham et al. (2023), Zhang et al. (2023c) CoT Theory (§6.3) Wang et al. (2023a), Madaan and Yazdanbakhsh (2022), Tang et al. (2023), Merrill and Sabharwal (2023), Wu et al. (2023b), Li et al. (2023h), Feng et al. (2023), Tutunov et al. (2023), Hou et al. (2023), Wang et al. (2023f), Schaeffer et al. (2023), Zhou et al. 
(2023c) Benchmarks (§3) Mathematical (§B.1) AddSub (Hosseini et al., 2014), SingleEq (Koncel-Kedziorski et al., 2015), MultiArith (Roy and Roth, 2015), MAWPS (KoncelKedziorski et al., 2016), AQUA-RAT (Ling et al., 2017), ASDiv (Miao et al., 2020), SVAMP (Patel et al., 2021), GSM8K (Cobbe et al., 2021), GSM-Hard (Gao et al., 2023), MathQA (Amini et al., 2019), DROP (Dua et al., 2019), TheoremQA (Chen et al., 2023b), TAT-QA (Zhu et al., 2021), FinQA (Chen et al., 2021), ConvFinQA (Chen et al., 2022b), MATH (Hendrycks et al., 2021b), NumGLUE (Mishra et al., 2022b), LILA (Mishra et al., 2022a), Conic10K (Wu et al., 2023a) Commonsense (§B.2) CSQA (Talmor et al., 2019), CSQA 2.0 (Talmor et al., 2021), ARC (Bhakthavatsalam et al., 2021), OpenBookQA (Mihaylov et al., 2018), PIQA (Bisk et al., 2020), Event2Mind (Rashkin et al., 2018), McTaco (Zhou et al., 2019), CosmosQA (Huang et al., 2019), ComValidation (Wang et al., 2019), ComExplanation (Wang et al., 2019), StrategyQA (Geva et al., 2021) Symbolic (§B.3) Last Letter Concat. (Wei et al., 2022b), Coin Flip (Wei et al., 2022b), Reverse List (Wei et al., 2022b), BigBench (Srivastava et al., 2022), BigBench-Hard (Suzgun et al., 2023) Logical (§B.4) ReClor (Yu et al., 2020), LogiQA (Liu et al., 2020), ProofWriter (Tafjord et al., 2021), FOLIO (Han et al., 2022), PrOntoQA (Saparov and He, 2023), LogiGLUE (Luo et al., 2023), GLORE (Liu et al., 2023b), FALLACIES (Hong et al., 2023) Multi-modal (§B.5) Image-Text ScienceQA (Lu et al., 2022), VCR (Zellers et al., 2019), VisualCOMET (Park et al., 2020), PMR (Dong et al., 2022), CURE (Chen et al., 2023c) Video-Text VLEP (Lei et al., 2020), CLEVRER (Yi et al., 2020), STAR (Wu et al., 2021), NExT-QA (Xiao et al., 2021), Causal-VidQA (Li et al., 2022), News-KVQA (Gupta and Gupta, 2022) Figure 8: Taxonomy of Advanced Methods, Frontiers, Future Directions, and Benchmarks (Full Edition). 1203
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12027–12044 August 11-16, 2024 ©2024 Association for Computational Linguistics GunStance: Stance Detection for Gun Control and Gun Regulation Nikesh Gyawali*♥Iustin Sirbu*♦Tiberiu Sosea*♣Sarthak Khanal ♥ Doina Caragea♥Traian Rebedea♦Cornelia Caragea♣ ♥Kansas State University ♦University Politehnica of Bucharest ♣University of Illinois Chicago {gnikesh,sarthakk,dcaragea}@ksu.edu {iustin.sirbu,traian.rebedea}@upb.ro {tsosea2,cornelia}@uic.edu Abstract The debate surrounding gun control and gun regulation in the United States has intensified in the wake of numerous mass shooting events. As perspectives on this matter vary, it becomes increasingly important to comprehend individuals’ positions. Stance detection, the task of determining an author’s position towards a proposition or target, has gained attention for its potential use in understanding public perceptions towards controversial topics and identifying the best strategies to address public concerns. In this paper, we present GUNSTANCE, a dataset of tweets pertaining to shooting events, focusing specifically on the controversial topics of “banning guns” and “regulating guns.” The tweets in the dataset are sourced from discussions on Twitter following various shooting incidents in the United States. Amazon Mechanical Turk was used to manually annotate a subset of the tweets relevant to the targets of interest (“banning guns” and “regulating guns”) into three classes: In-Favor, Against, and Neutral. The remaining unlabeled tweets are included in the dataset to facilitate studies on semi-supervised learning (SSL) approaches that can help address the scarcity of the labeled data in stance detection tasks. Furthermore, we propose a hybrid approach that combines curriculum-based SSL and Large Language Models (LLM), and show that the proposed approach outperforms supervised, semisupervised, and LLM-based zero-shot models in most experiments on our assembled dataset. The dataset and code are available on GitHub.1 1 Introduction In the aftermath of the multitude of mass shooting events in the United States, topics regarding tighter gun regulations and control, expanding mental health services, and upgrading security in public places have been strongly debated over politi*Equal contribution. 1https://github.com/gnikesh/gunstance cal campaigns on various social media platforms. People are generally on two opposing sides with respect to gun regulations and control: those who favor such regulations and those who strongly oppose them. Such differences in opinions on gun regulations have prevailed in society for over four decades, resulting in very little legislative success (Lin and Chung, 2020). Social media, especially Twitter (now X), has been a melting pot for such debates. It would be useful to automatically analyze the intricate dynamics, challenges, and complexities surrounding gun control topics. Stance detection on social media is an emerging research area in opinion mining for various applications where sentiment analysis may be suboptimal (ALDayel and Magdy, 2021). Stance detection is the task of automatically determining if an author of a text is In-Favor, Against, or Neutral towards a proposition or target (Mohammad et al., 2017). 
While research on stance detection has seen significant progress in various domains, such as politics, healthcare, and finance (Mohammad et al., 2017; Ghosh et al., 2019; Conforti et al., 2020a; Glandt et al., 2021), there remains a notable gap in analyzing stance dynamics in the context of gun control and regulations on social media, despite the importance of this topic in the contemporary society. This is mainly due to a lack of datasets labeled with respect to stance towards the controversial topics of gun control and regulation. In this work, we aim to address this need by creating GUNSTANCE, a dataset of tweets collected during mass shooting events in the United States. Specifically, we focus on seven shooting events and identify two controversial, interconnected gunrelated stance targets, a stronger stance target “banning guns” and a softer stance target - “regulating guns.” We collect tweets posted during those shooting events and filter tweets potentially relevant to each of the targets considered. Note that some tweets can express (either the same or 12027 different) stances towards both targets, therefore the sets of tweets filtered for the two targets are not disjoint. Using Amazon Mechanical Turk, we annotate 2,700 tweets according to the author’s stance toward the “banning guns” target, and 2,800 tweets according to the author’s stance toward the “regulating guns” target. In addition, the GUNSTANCE dataset includes 7,334 unlabeled tweets for the “banning guns” target and 8,824 tweets for the “regulating guns” target. The unlabeled tweets can be utilized by semi-supervised learning approaches. More details about data collection and statistics are provided in Section 3. In addition to assembling the GUNSTANCE dataset consisting of labeled and unlabeled tweets for the two targets, we first establish strong baselines for this dataset by employing supervised, semi-supervised and zero-shot LLM-based approaches. To push the performance on our task and increase the real-world applicability of the stance detection models, we propose a hybrid SSL/LLM approach that utilizes unlabeled examples from a stream of data (i.e., posted as events unfold), while leveraging the zero-shot capabilities of LLMs to improve the model. Critically, compared to traditional SSL, our method incurs minimal additional training costs, due to our novel approach of identifying informative unlabeled examples, and no inference costs. Detailed information on our methods is provided in Section 4, while the results of the models are presented and discussed in Section 5. To summarize, the main contributions of this work consist in providing the research community with a dataset, a novel low-cost hybrid SSL/LLM approach, and comparisons with meaningful baseline models for analyzing social media data with respect to stance towards controversial topics regarding gun regulations and control. Our dataset and proposed hybrid model are designed to help uncover prevalent arguments and opinions related to these topics, and to foster a deeper understanding of the intricate dynamics surrounding gun control and regulations. By enabling the examination of diverse perspectives, our work has the potential to lead to advancements in addressing the challenges posed by gun control and regulation. 2 Related Work In the last decade, stance detection has been an area of active research and numerous studies focused on introducing novel datasets or proposing stance detection models have been carried out. 
2.1 Stance Detection Datasets SemEval2016 dataset introduced by Mohammad et al. (2016, 2017) consists of tweets about US politics collected during the lead-up to the 2016 US Presidential election. Conforti et al. (2020a) presented a large dataset of approximately 50,000 tweets for stance detection. Villa-Cox et al. (2020) assembled a dataset for identifying stance in Twitter replies and quotes. Multi-target and multilingual datasets are contributed by several works, including (Sobhani et al., 2017a; Zotova et al., 2020; Vamvas and Sennrich, 2020; Lai et al., 2020; Glandt et al., 2021). Several non-English datasets have also been published (Alturayeif et al., 2022; Lozhnikov et al., 2020; Küçük and Can, 2018; Lai et al., 2018). Furthermore, Allaway and McKeown (2020a); Xu et al. (2022); Allaway and McKeown (2020b) curated zero-shot and few-shot stance detection datasets. A summary of existing datasets is provided in Table A1 in the Appendix. While some previous works have incorporated the generic topic of “guns” as a target in their stance detection datasets (Sakketou et al., 2022; Cinelli et al., 2021), there has been limited differentiation between the intricate relationships surrounding the topics of “banning guns” and “regulating guns.” Furthermore, prior studies have not targeted such topics, specifically in the aftermath of mass shooting events in the United States. As another important difference, our dataset includes unlabeled tweets, which makes it possible to study semi-supervised, and zero-shot/few-shot approaches, while most of the existing datasets only include labeled data. Therefore, our dataset provides a novel contribution in terms of stance targets included and approaches enabled. 2.2 Stance Detection Approaches Support vector machines (SVM) with manually engineered features were established as strong baselines for the SemEval2016 Task 6 datasets (Mohammad et al., 2016, 2017). Subsequently, deep learning approaches, including recurrent neural networks (RNNs) and convolutional neural networks (CNNs), that incorporate both input and target-specific information using attention mechanisms (Augenstein et al., 2016; Du et al., 2017; Zhou et al., 2017; Sun et al., 2018; Siddiqua et al., 2019) outperformed the prior SVM strong baselines. With the advent of transformers (Vaswani et al., 2017) and bidirectional encoder representa12028 tion from transformers (BERT) (Devlin et al., 2018) in NLP tasks, recent works (Slovikovskaya, 2019; Li and Caragea, 2021) have investigated the use of BERT for stance detection. Ghosh et al. (2019) found BERT to be the best model overall for stance detection on the SemEval2016 Task 6. In addition to utilizing target-specific information, several studies have demonstrated that incorporating auxiliary information, such as sentiment, emotion, Wikipedia knowledge, etc. can enhance the performance of stance detection models as compared to using only tweet/target information (Mohammad et al., 2017; Sun et al., 2019; Li and Caragea, 2021; Hosseinia et al., 2020a; Xu et al., 2020; Luo et al., 2022; Fu et al., 2022; He et al., 2022). Moreover, in scenarios where the labeled data for the target task is limited, various approaches have been employed to enhance performance. 
These include techniques such as weak supervision (Wei and Zou, 2019), knowledge distillation (Miao et al., 2020), knowledge graphs (Liu et al., 2021), transfer learning using pre-trained models (Ebner et al., 2019; Hosseinia et al., 2020a), distant supervision (Zarrella and Marsh, 2016), and domain adaptation from a source to a target task (Xue and Li, 2018; Xu et al., 2020). Following prior works, to address the scarcity of labeled data, we propose a hybrid SSL/LLM approach that makes use of unlabeled data by leveraging the capabilities of LLMs. We compare the proposed hybrid approach with supervised, SSL and LLM-based zero-shot strong baselines on the GUNSTANCE dataset. 3 GUNSTANCE Dataset Dataset Collection and Filtering: Our data collection process involved gathering tweets related to mass shooting incidents across different locations in the United States. Using Wikipedia pages that list mass shooting events in the United States in 20212 and 20223, respectively, we identified and focused on 7 mass shooting events (3 in 2021 and 4 in the first half of 2022), each with at least 5 dead people. The events selected (shown in Table 2) cover a variety of locations/settings (including groceries, a parade, a school, massage parlors, etc). Past tweets were retrieved in chronological order by using the Twitter’s v2 full-archive search 2https://en.wikipedia.org/wiki/List_of_mass_shootings_in_ the_United_States_in_2021 3https://en.wikipedia.org/wiki/List_of_mass_shootings_in_ the_United_States_in_2022 endpoint4. For each event, we collected tweets posted the day of the event and up to eight weeks after the event. We started the search using a basic set of seed rules specific to the location of each event. As an example, the seed rule for the Robb Elementary School shooting in Uvalde, Texas was: “((uvalde OR texas) shooting) OR uvaldeshooting OR uvaldetexas”. The seed rules for the other events are shown in Table A2 in Appendix A.2. The rule for an event was updated with the most frequent hashtags daily during the first week of the event and weekly during the following weeks. On each update of the rule, we identified the 15 most frequent hashtags during the update period (a day or a week), and added those hashtags as additional OR terms to the current rule. In addition, we also maintained a list of terms to ignore, specifically terms that were observed among the frequent hashtags but were too general and added noise to the collection (e.g., mass, dead, breaking, update, usa, america). The terms added with each update are shown in Table A3 in Appendix A.2. The tweets collected were capped at a maximum of 4,500 tweets per day and 25,000 tweets per week for each event. The final dataset collected included a total of 395,485 original tweets, replies, and quoted tweets (retweets were ignored and are thus not included in this count). A preliminary analysis of the data showed that 77.48% of the tweets collected (i.e., 306,422 tweets) contained URLs (e.g., links to news articles). While the tweets containing URLs may express a stance towards the targets of interest in our study, given that they link to external resources (many of them related to media outlets), the stance expressed may not be the general public’s stance but rather the stance of media outlets, political organizations, etc. As we aim to focus on detecting the general people’s stance towards gun regulations, we removed such tweets. 
We also removed tweets from a list of 2,834 Twitter handles corresponding to media outlets (e.g., PulpNews, NYDailyNews, ABC, etc.). The 2,834 handles were identified based on post frequency (as media outlets have a large number of posts as shown in Figure A.1 in Appendix A.2) and/or the use of the “news“ keyword in their username. Together, we filtered out 309,395 tweets out of the total 395,485 collected tweets, leaving us with a set of 86,090 tweets. To identify tweets most relevant to the targets 4https://developer.twitter.com/en/docs/twitter-api/tweets/ search/introduction 12029 Tweet Target Stance (Labelled) I am following reports of a shooting at #SixFlag in the #Chicago suburb of #Gurnee #Illinois. #GunControl NOW! Regulating guns In-Favor Illinois has some of the strictest gun laws in the country, yet policymakers are scrambling to explain the series of missed warning signs and communication failures that led to the Fourth of July mass shooting in Highland Park. Regulating guns Against One would wonder how this Highland Park shooting suspect got his guns with all the gun laws in IL to flag with his mental sickness? Regulating guns Neutral The shooter in Tulsa bought an AR-15, just hours before this shooting... shooters in both Buffalo and Uvalde... both just turned 18 yrs old... also bought their rifles just before going out to kill... we need a moratorium on sales of AR/AK rifles for now... that w/ save lives... Banning guns In-Favor With your logic please explain why Buffalo NY shooting happened if gun laws will stop bad guys shooting? Banning guns Against Which of these laws would have stopped the buffalo shooting or the uvalde shooting? In the ulvade shooting the door was open but was supposed to be closed. Can’t legislate doors to work! Banning guns Neutral Table 1: Examples of Annotated Tweets. considered, we used Sentence-BERT (Reimers and Gurevych, 2019) to embed all tweets in the dataset, as well as the targets (expressed as “guns should be banned” versus “guns should be regulated”) and ranked the tweets with respect to the targets based on cosine similarity. The similarity between these two targets themselves was 0.6980. We filtered out the least relevant tweets to these targets by applying a similarity threshold of 0.40. Consequently, we obtained 11,621 tweets for “banning guns“ and 13,476 tweets for “regulating guns.“ Dataset Preprocessing: As part of the data preprocessing phase, we applied several steps to clean and enhance the dataset. These steps involved removing duplicate tweets, emoticons, and non-ASCII characters from the tweet texts. We also filtered out all user mentions (e.g., @username) from the tweets. We only kept English-language tweets, filtering out all non-English-language ones. For this, we selectively retained tweets with the language code “en” as specified in the tweet metadata during the crawling process. Figure A.2 in Appendix A.3 shows the distribution of the number of words and length of tweets in our preprocessed dataset. A significant portion of the dataset consists of tweets containing 45-50 words, with the peak at 47 words. Similarly, the majority of the tweets have 270-280 characters, with the peak at 278 characters. Data Annotation: We annotated 2,700 randomly selected tweets for the “banning guns” target and 2,800 randomly selected tweets for the “regulating guns” target using the Amazon Mechanical Turk crowd-sourcing platform. We ran the annotation task in several iterations in order to develop our quality control steps. 
Initially, we annotated 100 examples internally (50 for “banning guns” and 50 for “regulating guns”) to use as a qualification task. Then, we asked all annotators to annotate this set of 100 examples to measure how well they perform. “banning guns” “regulating guns” Shooting Events Date A F N U A F N U Atlanta, Georgia 03/16/2021 19 54 58 340 25 88 25 376 Boulder, Colorado 03/22/2021 73 117 126 572 86 188 78 704 San Jose, California 05/26/2021 20 24 21 221 13 35 9 224 Buffalo, New York 05/14/2022 330 354 477 3,215 373 518 342 4,033 Uvalde, Texas 05/24/2022 190 241 315 2,178 179 383 205 2,610 Tulsa, Oklahoma 06/01/2022 22 61 48 371 11 92 23 392 Highland Park, Illinois 07/04/2022 32 58 60 437 31 51 45 485 Total 686 909 1,105 7,334 718 1,355 727 8.824 Table 2: Statistics on the labeled/unlabeled tweets for each event and each target. Here, A=Against, F=In-Favor, N=Neutral, and U=Unlabelled. Finally, we annotated all the data using three highperforming annotators and calculated the final label for a tweet using majority voting. We measured the inter-annotator agreement using Krippendorff Alpha, and obtained an average value of α=0.63, indicating moderate agreement. Table 1 shows examples of annotated tweets. We observe that some tweets have an implicit stance (e.g., “In-Favor” and “Against” examples for “regulating guns”), whereas others have an explicit stance (e.g., “In-Favor” and “Against” examples for “banning guns”). Dataset Statistics: Table 2 provides statistics about the labeled and unlabeled tweets for each target and each shooting event. For each event, the table shows the number of tweets annotated as Against (A), In-Favor (F), Neutral (N), as well as the number of Unlabelled (U) tweets. The total number of labeled tweets for “banning guns” is 2,700, while the number of unlabeled tweets is 7,334. The number of labeled tweets for “regulating guns” is 2,800, while the number of unlabeled tweets is 8,824. Figure 1 shows bi-gram word clouds for “InFavor” and “Against” stances with respect to the “banning guns” target. While many bi-grams are shared between the two word clouds, we find that people In-Favor of banning guns employ more terms such as “thought prayer,” “don’t need,” and “common sense,” indicative of their inclination to12030 (a) "In-Favor" (b) "Against" Figure 1: Bi-gram word clouds for (a) In-Favor and (b) Against stances for the “banning guns” target. wards advocating gun prohibition. Conversely, individuals Against banning guns employ more often terminology such as “strict law,” “free zone,” and “mental health.” In particular, the term “strict law” is used to argue that despite strict gun laws in states such as Illinois and New York, instances of crime persist, thereby challenging the efficacy of banning guns as a viable solution to prevent gun violence, as shown in the examples in Table 1. We also performed a lexical analysis of tweets pertaining to the “Against” and “In-Favor” stances of the “banning guns” target in terms of the five basic moral foundations (harm/care, cheating/fairness, betrayal/loyalty, subversion/authority and degradation/purity) of the Moral Foundation Theory (Araque et al., 2020, 2022; Graham et al., 2013), as shown in the Figure 2 (Left). Each dimension is rated on a continuous scale ranging from 1 to 9 (1 for “harm” and 9 for “care”; 1 for “cheating” and 9 for “fairness”; 1 for “betrayal” and 9 for “loyalty”; 1 for “subversion” and 9 for “authority”; and 1 for “degradation” and 9 for “purity”). 
We find that individuals In-Favor of “banning guns” tend to express sentiments with higher scores than those Against “banning guns” in the dimensions of care, loyalty, authority, and purity. In contrast, those Against “banning guns” tend to articulate their viewpoints more in alignment with the dimension of fairness. Employing VADER (Valence Aware Dictionary and sEntiment Reasoner) (Hutto and Gilbert, 2014), a lexicon-based sentiment analysis tool designed to capture sentiments in social media contexts, we performed a sentiment analysis on “In-Favor” and “Against” tweets pertaining to “banning guns”, as shown in Figure 2 (Right). We did not find significant differences in sentiment between the “In-Favor” and “Against” stance categories. This result indicates substantial intricacies and nuances within this topic, thereby posing a significant challenge for machine learning models.

Figure 2: Scores for five basic moral foundation dimensions (Left); and sentiment scores (Right), corresponding to tweets expressing “Against” and “In-Favor” stances towards the “banning guns” target.

Table 3: Statistics of the labeled dataset splits in GUNSTANCE. “All” refers to the tweet count from all events, “Ban” refers to “banning guns” and “Reg.” refers to “regulating guns.”
Split | Stance | All (Ban / Reg.) | Boulder (Ban / Reg.) | Buffalo (Ban / Reg.) | Uvalde (Ban / Reg.)
Train | Against | 228 / 243 | 25 / 32 | 110 / 128 | 64 / 56
Train | In-favor | 304 / 449 | 39 / 62 | 118 / 168 | 81 / 132
Train | Neutral | 368 / 244 | 42 / 24 | 159 / 115 | 105 / 69
Test | Against | 228 / 247 | 24 / 29 | 110 / 124 | 63 / 64
Test | In-favor | 303 / 433 | 39 / 59 | 118 / 171 | 80 / 118
Test | Neutral | 369 / 252 | 42 / 29 | 159 / 116 | 105 / 73
Dev | Against | 230 / 228 | 24 / 25 | 110 / 121 | 63 / 59
Dev | In-favor | 302 / 473 | 39 / 67 | 118 / 179 | 80 / 133
Dev | Neutral | 368 / 231 | 42 / 25 | 159 / 111 | 105 / 63
Total | | 2,700 / 2,800 | 316 / 352 | 1,161 / 1,233 | 746 / 767

Benchmark subsets: To enable comparisons between our baselines and new models developed for the GUNSTANCE dataset, we randomly split each event in the dataset (using stratified sampling) into 33% training (train), 33% development (dev), and 33% test (test) subsets. We assemble benchmark subsets for the whole dataset by combining the train, dev, and test subsets, respectively, of the constituent events. Class distributions over each subset are shown in Table 3 under the “All” columns. The table also shows the class distributions for the three largest events in our dataset (specifically, “Boulder”, “Buffalo”, and “Uvalde”), as we also experiment with the datasets of these specific events in our study.

4 Methodology

4.1 Proposed Approach: Hybrid SSL/LLM

We leverage Area Under the Margin Self-Training (AUM-ST), an SSL model introduced by Sosea and Caragea (2022), and propose a novel approach by aiding AUM-ST with ChatGPT during the training process. We call this model AUM-ST+ChtGPT. As shooting events unfold, the web is constantly flooded with new streams of unlabeled data, which can be an important source of information to increase model performance and ensure that up-to-date information is encoded into the models. Typically, SSL methods such as AUM-ST can use this stream of unlabeled data to improve model performance by adding the new data to the unlabeled set of examples. However, due to distribution shifts between the labeled and the recently collected unlabeled data, the model may struggle to generalize to the ongoing events. One alternative is to leverage the knowledge of LLMs, such as ChatGPT, by using them in a zero-shot manner. Unfortunately, using LLMs at large scale in these online, streaming setups is problematic as well, due to their high computational costs. We propose to leverage the high-quality predictions of LLMs in combination with AUM-ST to obtain an approach with low computational cost and better generalization performance. We show our approach in Algorithm 1 and highlight our changes to the vanilla AUM-ST in blue.

Algorithm 1 AUM-ST+LLM
Require: Labeled data L = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)}; unlabeled data U = {x̂_1, x̂_2, ..., x̂_m}; confidence threshold τ; and AvgMargin threshold γ.
1: Learn a teacher model θ_t on the labeled data by minimizing the cross-entropy loss L_{θ_t} = (1/n) Σ_{i=1}^{n} H(y_i, p(y | π(x_i); θ_t)).
2: Use the teacher model to generate hard pseudo-labels for the unlabeled examples: ŷ_i = argmax p(y | π(x̂_i); θ_t), ∀ i = 1, ..., m.
3: Train a model θ_AUM on the labeled and unlabeled examples, and monitor the training dynamics of the unlabeled examples over T epochs using the margin of each example averaged across the epochs: AvgMargin(x̂_i, ŷ_i) = (1/T) Σ_{t=1}^{T} [z_{ŷ_i} − max_{j ≠ ŷ_i}(z_j)], where z_{ŷ_i} and z_j are the logits corresponding to the pseudo-label ŷ_i and the largest other logit.
4: Rank x̂_i based on φ(x̂_i, ŷ_i) = Abs(AvgMargin(x̂_i, ŷ_i)) and select k unlabeled examples U_LLM = {x̂_1, x̂_2, ..., x̂_k} with low φ values.
5: Use an LLM θ_LLM to generate pseudo-labels for the examples in U_LLM (i.e., those examples that are perceived as ambiguous or hard-to-learn by the model) and overwrite the pseudo-labels generated by the teacher network with those generated by the LLM.
6: Train a student model θ_s on the labeled data and on the high-confidence, high-average-margin, and LLM-provided (i.e., the data pseudo-labeled using the LLM) pseudo-labeled data by minimizing the cross-entropy loss: L_{θ_s} = (1/n) Σ_{i=1}^{n} H(y_i, p(y | π(x_i); θ_s)) + (1/m) Σ_{i=1}^{m} 1(max p(y | π(x̂_i)) ≥ τ) · 1(AvgMargin(π(x̂_i), ŷ_i) > γ) · 1(x̂_i ∉ U_LLM) · H(ŷ_i, p(y | π(x̂_i); θ_s)) + Σ_{x̂_i ∈ U_LLM} H(argmax p(y | π(x̂_i); θ_LLM), p(y | π(x̂_i); θ_s)).
7: Use the student as a teacher and go back to Step 2.

During SSL training, AUM-ST leverages AUM (Pleiss et al., 2020) to measure the fluctuation of the model’s predictions on unlabeled data and to characterize the unlabeled examples based on the correctness of their pseudo-labels (Step 3). Examples with high AUM scores are likely to be pseudo-labeled correctly, while low-AUM examples are likely incorrectly pseudo-labeled. On the other hand, examples with neither high nor low AUMs are examples whose pseudo-label correctness is uncertain. We argue that such examples are vital for model learning and that providing the model with correct pseudo-labels for these types of examples can significantly improve the performance. Since an AUM of zero indicates high uncertainty of pseudo-label correctness, we first propose to rank all unlabeled examples based on how close their AUM values are to zero (Step 4). Then, we select uncertain examples and utilize ChatGPT as an external knowledge source to produce high-quality pseudo-labels for this particular category of unlabeled examples (Step 5).
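As an illustration of Steps 3 and 4 of Algorithm 1, the sketch below accumulates per-example margins over epochs and selects the k examples whose average margin is closest to zero. It is a simplified sketch under assumed tensor shapes and class names; it is not the authors' implementation.

```python
# Illustrative sketch of Steps 3-4 of Algorithm 1: track the margin of each
# unlabeled example across T epochs and select the k examples whose average
# margin (AUM) is closest to zero. Simplified; not the authors' implementation.
import torch


class AUMTracker:
    def __init__(self, num_examples):
        self.margin_sum = torch.zeros(num_examples)
        self.epochs_seen = torch.zeros(num_examples)

    def update(self, example_ids, logits, pseudo_labels):
        # margin = logit of the pseudo-label minus the largest other logit
        assigned = logits.gather(1, pseudo_labels.unsqueeze(1)).squeeze(1)
        masked = logits.clone()
        masked.scatter_(1, pseudo_labels.unsqueeze(1), float("-inf"))
        largest_other = masked.max(dim=1).values
        self.margin_sum[example_ids] += (assigned - largest_other).detach().cpu()
        self.epochs_seen[example_ids] += 1

    def avg_margin(self):
        return self.margin_sum / self.epochs_seen.clamp(min=1)

    def select_uncertain(self, k):
        # examples whose AUM is closest to zero are the most ambiguous ones,
        # i.e., the candidates to be re-labeled by the LLM (Step 5)
        phi = self.avg_margin().abs()
        return torch.topk(-phi, k).indices.tolist()
```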
Concretely, at an arbitrary self-training iteration t, given the AUM of each unlabeled example (computed during AUM-ST training), we select a small percentage of unlabeled data with AUMs close to zero and use ChatGPT to obtain pseudo-labels for these examples. We then add these examples to the pseudo-labeled set for student model training (Step 6). We emphasize that the examples selected for LLM pseudo-labeling change from one iteration to another depending on the learning status of the model. Note that our algorithm involves performing inference with ChatGPT on a very small fraction of the unlabeled set during the training process (only those that the teacher identifies as hard examples and has a high uncertainty on the label). Moreover, our method incurs no additional costs in practice 12032 Target: Ban Zero-shot Models Supervised Models Semi-supervised Models BART-NLI FLAN ChatGPT BERTweet ATGRU BiLSTM GCAE Kim-CNN PGCNN TAN BERT-NS FixMatch AUM-ST AUM-ST+ChtGPT Accuracy 46.11 52.33 64.11 58.07 48.96 49.89 49.96 52.41 51.30 51.07 58.28 58.44 60.78 66.38 Precision 56.42 59.82 64.22 58.45 49.08 49.99 50.15 52.62 41.48 51.19 58.27 58.77 60.86 66.32 Recall 46.11 52.33 64.11 58.07 48.96 49.89 49.96 52.41 51.30 51.07 58.28 58.44 60.78 66.72 F1-Score 38.74 40.93 63.62 57.89 48.95 49.82 49.90 52.43 51.23 51.07 58.25 58.38 60.58 66.50 ECE N/A N/A N/A 3.30 3.88 3.70 3.85 3.18 3.77 3.47 4.67 3.08* 3.99 4.37 Target: Regulate Accuracy 42.81 56.65 65.02 63.71 53.51 58.33 55.22 59.33 57.44 57.15 64.10 64.52 65.77 66.37 Precision 57.27 52.92 65.34 63.40 53.11 58.11 54.66 58.57 56.61 56.54 64.50 64.17 65.40 66.22 Recall 42.81 56.65 65.02 63.71 53.51 58.33 55.22 59.33 57.44 57.15 65.09 64.52 65.77 66.05 F1-Score 38.61 47.28 64.67 63.09 53.08 58.13 54.57 57.99 56.73 56.64 64.27 63.90 65.45 66.10 ECE N/A N/A N/A 4.08 3.67 3.94 4.29 3.17 2.49* 3.18 3.73 3.80 3.42 3.75 Table 4: Performance comparison for baseline models trained and evaluated independently for the “ban” and “regulate” targets. The models are trained using the combined training data of all events, and evaluated on the combined test data of all events. The best performance on each row is HIGHLIGHTED . For ECE, the lower the better. since AUM-ST uses BERTweet as the base model. 4.2 Baseline Models We employ supervised, semi-supervised, and zeroshot models to establish baseline results for the proposed AUM-ST+ChtGPT model on our dataset. In the supervised learning setting, we only use labeled data to train the models. Similar to prior works in stance detection, e.g., (Glandt et al., 2021; Ghosh et al., 2019; Li and Caragea, 2021), we used BiLSTM (Schuster and Paliwal, 1997), Kim-CNN (Kim, 2014), TAN (Du et al., 2017), PGCNN (Huang and Carley, 2019), ATGRU (Zhou et al., 2017), GCAE (Xue and Li, 2018), and BERTweet (Loureiro et al., 2022) as our supervised baseline models. Semi-supervised models are capable of leveraging unlabeled data to improve the performance of a teacher model. We utilized BERT-NS (Glandt et al., 2021; Xie et al., 2020; Hinton et al., 2015), FixMatch (Sohn et al., 2020), and AUMST (Sosea and Caragea, 2022) as SSL baselines. For zero-shot (or few-shot) setups, a recent, powerful technique for addressing NLP tasks is to use large pre-trained language models (Brown et al., 2020). We employ BART-NLI (Lewis et al., 2019; Li et al., 2023; Williams et al., 2017), FLAN (Wei et al., 2021), and ChatGPT as zero-shot baselines. A brief discussion of all the baseline models used is provided in Appendix A.4. 
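For illustration, a minimal zero-shot NLI-based stance classifier in the spirit of the BART-NLI baseline could look as follows. It uses the Hugging Face zero-shot-classification pipeline with the "I think [target]!" hypothesis template listed in Appendix A.5.2; the negated-target phrasing and the mapping to stance labels are assumptions.

```python
# Minimal sketch of an NLI-based zero-shot stance baseline in the spirit of BART-NLI.
# The negated-target phrasing and the label mapping below are illustrative choices.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")


def zero_shot_stance(tweet, target="guns should be banned"):
    neg_target = target.replace("should", "should not", 1)
    labels = [target, neg_target, "neutral"]
    result = classifier(
        tweet,
        candidate_labels=labels,
        hypothesis_template="I think {}!",  # template from Appendix A.5.2
    )
    best = result["labels"][0]  # labels come back sorted by score
    if best == target:
        return "In-Favor"
    if best == neg_target:
        return "Against"
    return "Neutral"
```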
4.3 Experimental Setup The details of our experimental setup, including hyperparameters used for each of the baseline models are presented in Appendix A.5. Each model was run three times using three different random seeds. The results reported represent averages over the three runs. We report the accuracy, precision, recall and F1score as defined in Appendix A.5. In addition, we also report the expected calibration error (ECE). ECE (Naeini et al., 2015) measures how reliably a model’s predicted probabilities reflect the true probabilities. The ECE values for a well-calibrated model should be low. Well-calibrated models increase the trustworthiness in the model’s predictions which is important to foster a deeper understanding of the intricate dynamics, challenges, and complexities surrounding gun control topics. 5 Results and Discussion We divide our experiments into three sets based on the data used to train and evaluate the models. Experiment 1: In this experiment, we train and evaluate models independently for our two targets, “ban” and “regulate”. Zero-shot models do not use any training data. Supervised models are trained using the combined labeled training data of all events. Semi-supervised models are trained using the combined labeled training data and unlabeled data of all events. Subsequently, the trained models are evaluated on the combined test data of all events. Table 4 presents the Accuracy, Precision, Recall, F1-score, and Expected Calibration Error (ECE) metrics for the zero-shot, supervised, and semi-supervised baseline models considered, including our proposed SSL/LLM model, when trained/evaluated on “ban”/“regulate” train/test data, respectively. AUM-ST+ChtGPT, our semi-supervised model guided by ChatGPT, achieves the best performance across all metrics except for the ECE metric. Notably, the semi-supervised model FixMatch obtains the lowest ECE score for the “ban” target, while the supervised model PGCNN achieves the lowest ECE score for the “regulate” target. In terms of zero-shot models, ChatGPT outperforms both BART-NLI and FLAN significantly in all metrics. 
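A sketch of the per-run metric computation and seed averaging described above is given below; the use of weighted averaging for precision, recall, and F1 is an assumption, since the averaging scheme is not stated explicitly in the paper.

```python
# Sketch of the evaluation protocol: compute Accuracy, Precision, Recall, and
# F1 for each of the three seeded runs and average the scores. Weighted
# averaging is an assumption; the paper does not state the averaging scheme.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support


def evaluate_run(y_true, y_pred):
    acc = accuracy_score(y_true, y_pred)
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="weighted", zero_division=0)
    return {"Accuracy": acc, "Precision": prec, "Recall": rec, "F1-Score": f1}


def average_over_seeds(runs):
    """runs: list of (y_true, y_pred) pairs, one per random seed."""
    scores = [evaluate_run(y_true, y_pred) for y_true, y_pred in runs]
    return {key: float(np.mean([s[key] for s in scores])) for key in scores[0]}
```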
12033 Event: Boulder Target: Ban Zero-shot Models Supervised Models Semi-supervised Models BART-NLI FLAN ChatGPT BERTweet ATGRU BiLSTM GCAE Kim-CNN PGCNN TAN BERT-NS FixMatch AUM-ST AUM-ST+ChtGPT Accuracy 49.52 54.29 66.67 55.09 56.51 46.67 47.94 50.79 49.21 49.52 57.61 58.41 58.10 67.40 Precision 64.76 72.10 66.64 55.88 56.91 46.00 48.10 50.43 48.95 48.99 57.75 59.20 58.03 67.20 Recall 49.52 54.29 66.67 55.23 56.51 46.67 47.94 50.79 49.21 49.52 57.61 58.41 58.10 67.03 F1-Score 44.40 43.40 65.77 54.93 56.44 45.61 48.89 50.23 48.46 49.18 57.64 57.64 57.98 67.10 ECE N/A N/A N/A 4.45* 5.79 7.30 5.54 9.56 4.49 7.50 6.12 8.47 7.75 8.11 Target: Regulate Accuracy 53.85 58.12 64.96 67.52 54.70 60.68 58.11 62.39 57.26 61.53 67.24 68.09 70.09 68.54 Precision 76.59 46.55 64.68 66.87 55.38 58.89 56.03 60.59 55.41 60.24 65.78 67.58 69.23 68.99 Recall 53.85 58.12 64.96 67.52 54.70 60.68 58.11 62.39 57.26 61.53 67.24 68.09 70.09 68.65 F1-Score 50.38 48.52 63.87 66.95 54.56 59.44 55.75 59.74 55.46 60.65 65.38 66.05 68.91 68.78 ECE N/A N/A N/A 7.38 6.92 6.68 7.36 5.48* 11.32 8.69 7.36 9.27 6.10 7.44 Event: Buffalo Target: Ban BART-NLI FLAN ChatGPT BERTweet ATGRU BiLSTM GCAE Kim-CNN PGCNN TAN BERT-NS FixMatch AUM-ST AUM-ST+ChtGPT Accuracy 45.74 50.13 62.27 51.93 46.25 48.32 47.80 49.09 49.10 45.22 57.61 52.28 55.04 63.10 Precision 54.92 44.01 62.07 51.97 46.55 48.69 48.36 49.44 49.49 45.53 57.74 53.29 59.51 64.74 Recall 45.74 50.13 62.27 51.94 46.25 48.32 47.80 49.10 49.10 45.22 57.71 52.28 55.03 63.12 F1-Score 38.05 38.41 61.66 51773 45.93 47.70 46.23 48.68 48.18 44.90 57.62 51.53 53.65 63.40 ECE N/A N/A N/A 5.50 6.89 6.81 5.83 4.80* 5.49 6.01 6.63 6.65 7.71 8.99 Target: Regulate Accuracy 39.42 53.53 61.80 60.58 48.66 52.80 51.09 55.47 50.36 50.85 61.71 60.34 61.31 63.54 Precision 49.73 48.24 64.49 61.03 48.34 52.54 51.52 55.74 52.00 50.58 62.56 60.59 61.75 61.22 Recall 39.42 53.53 61.80 60.58 48.27 52.79 51.09 55.47 50.36 50.85 61.71 60.34 61.31 67.50 F1-Score 33.24 44.51 62.23 60.03 48.34 52.25 49.53 54.32 47.61 50.07 60.36 57.82 58.99 64.50 ECE N/A N/A N/A 6.67 6.35 4.21 5.46 5.18 7.62 5.68 6.60 4.99 3.57* 9.32 Event: Uvalde Target: Ban BART-NLI FLAN ChatGPT BERTweet ATGRU BiLSTM GCAE Kim-CNN PGCNN TAN BERT-NS FixMatch AUM-ST AUM-ST+ChtGPT Accuracy 46.37 53.63 66.94 54.83 43.15 43.95 42.74 49.20 47.98 47.18 56.98 53.76 58.06 67.93 Precision 51.53 72.76 67.63 54.82 44.26 46.04 41.23 50.79 48.31 48.62 57.05 54.20 57.80 68.20 Recall 46.37 53.63 66.94 54.83 43.15 43.95 42.74 49.19 47.98 47.18 56.98 53.76 58.06 68.20 F1-Score 38.30 42.70 66.87 53.79 42.04 43.85 39.98 48.49 47.24 45.65 56.99 52.50 57.20 68.20 ECE N/A N/A N/A 5.19 6.10 11.32 8.46 6.41 9.30 5.03 6.57 5.67 4.34* 6.31 Target: Regulate Accuracy 42.75 57.25 64.71 55.68 47.45 51.37 53.72 50.98 49.02 50.98 55.51 52.03 57.64 65.32 Precision 63.24 71.15 64.04 55.67 47.30 50.58 54.40 49.18 47.28 48.82 56.42 50.66 57.58 66.30 Recall 42.75 57.25 64.71 55.69 47.45 51.37 53.72 50.98 49.02 50.98 55.51 52.03 57.64 65.18 F1-Score 38.65 47.80 64.20 51.45 46.76 49.80 51.20 48.37 46.52 48.08 52.28 48.84 54.80 65.80 ECE N/A N/A N/A 9.60 8.41 5.72* 6.47 8.18 9.39 5.66 11.34 10.60 7.84 11.32 Table 5: Performance comparison for the leave-one-event-out models trained for the “ban” and “regulate” targets and evaluated on “Boulder”, “Buffalo” and “Uvalde”, respectively. The best performance in each row is HIGHLIGHTED . 
For supervised models, we find a significant performance difference in terms of F1-Score between the non-transformer architecture-based models and the BERTweet model. This difference can be linked to the complexity inherent within our dataset, which presents a greater challenge for the supervised models to effectively model such complexities. In contrast, BERTweet was able to learn complexities present in our dataset better than the non-transformer supervised models. However, the zero-shot ChatGPT outperforms the best supervised BERTweet model, underscoring the power of LLMs, such as ChatGPT, in a zero-shot setting on a very challenging task. Experiment 2: This experiment follows a leaveone-event-out setting. Models are trained for the “ban”/“regulate” targets independently using the training (labeled/unlabeled) data of all events except the held-out event, then tested on this event. The held-out events are “Boulder”, “Buffalo”, and “Uvalde”. We did not consider other events as heldout because of their relatively small size. Table 5 shows the results of the leave-one-eventout models for the “ban” and “regulate” targets evaluated on “Boulder”, “Buffalo”, and “Uvalde” events, respectively. Similar to Experiment 1, we find that the proposed AUM-ST+ChtGPT obtains impressive results across multiple events. Notably, AUM-ST+ChtGPT outperforms the vanilla AUMST by 9% F1 score on the “ban” target in the Boulder shooting and by almost 10% on the same target in the Buffalo shooting. Additionally, we emphasize that AUM-ST+ChtGPT consistently outperforms ChatGPT in all setups by approximately 2%, an impressive achievement, especially as the supervised baselines struggle on this challenging task. This result shows the value of the unlabeled data relevant to the task at hand, and suggests that our approach effectively leverages such unlabeled data (and the external knowledge captured through ChatGPT) to improve the generalization performance. Experiment 3: In this experiment, we explore a cross-target setup, where we train the models on one stance target and test them on the other target. Specifically, we train the models on the training data of the “ban” target and evaluate their perfor12034 Figure 3: F1-scores of BERTweet, BERT-NS, FixMatch, AUM-ST, and AUM-ST+ChtGPT in the cross-target setup. mance on the test data of the “regulate” target, and vice versa. This is a particularly difficult task, as differences between stances towards the “ban” and “regulate” targets can be very nuanced. To ensure good coverage with the unlabeled data, the semisupervised models utilize the unlabeled data from both “ban” and “regulate” targets. The results of this experiment are shown in Figure 3. As we can see from the figure, AUMST+ChtGPT outperforms both the supervised as well as other SSL methods for the “Ban→Regulate” setup and performs similarly with the other approaches for the “Regulate→Ban” setup. 6 Conclusions and Future Work We presented GUNSTANCE, a stance detection dataset containing tweets pertaining to the controversial topics of “banning guns” and “regulating guns” in the aftermath of various shooting events in the United States. The dataset includes manually annotated tweets together with unlabeled tweets to facilitate supervised and semi-supervised learning. We provide baseline results using established and newer supervised and SSL models for stance detection. 
We also utilize LLMs in a zero-shot setting and find that LLMs, such as ChatGPT, are able to understand the complexities in our dataset and outperform supervised baselines. Furthermore, we propose a curriculum-based semi-supervised approach that utilizes ChatGPT as an external knowledge source on a small subset of hard-to-learn unlabeled examples when the SSL model is less confident, pushing both performance and computational efficiency of SSL methods, such as AUM-ST. The stance detection task addressed is not only timely (given the large number of shooting events), but also challenging. Our work has the potential to enable researchers to use machine learning models to analyze mass shooting data, gain insights into prevalent arguments and opinions, and develop a deeper understanding of the complexities surrounding gun control and regulations. While there are other targets of interest in the context of mass shooting and gun control topics (e.g., buyback programs, background checks, mental health, etc.), we believe our work represents an important step towards understanding the general public’s stance towards two extremely important and interconnected gun control targets. Other related targets are left as part of future work. An evaluation of the proposed AUMST+ChtGPT model on other datasets is beyond the scope of this current study, but makes another excellent topic for future work. We also plan to focus on domain adaptation and transfer learning techniques that allow stance detection models to be adapted to different domains or targets. Finally, understanding similarities and differences between explicit and implicit stances in terms of LLM/SSL stance detection abilities is also of interest. We make our dataset and code available on GitHub to spur further research in this area. Acknowledgements We thank the National Science Foundation for support from grants IIS-2107487, IIS-1741345 and ITE-2137846 which supported the research and the computation in this study. We also thank our reviewers for their insightful feedback and comments. 7 Limitations Our work has a few limitations that should be acknowledged. First, the availability of data for certain shooting events is limited (e.g., San Jose, California and Tulsa, Oklahoma), while other events benefit from more extensive coverage (e.g., Buffalo, New York, and Uvalde, Texas), causing an imbalance in the amount of data that we were able to crawl for some events. Second, it is important to note that our dataset is comprised of publicly available tweets, which are often generated by a subset of users who are more active and vocal on social media. Therefore, the opinions expressed in these tweets may not necessarily represent the views of the majority. These limitations highlight other potential avenues for future research that could address the scarcity of data, improve the identification of implicit and explicit stances, and explore meth12035 ods to capture a more diverse range of perspectives on the topic of gun control and regulations. In our study, we removed tweets that contain external URLs as the goal was to focus on the general public’s stance as opposed to the stance of media outlets, political organizations, etc. However, it would be interesting to detect the stance of such global entities and understand how their stance influences the stance of their followers. As another limitation, we should note that the dataset that was used to pre-train ChatGPT is unknown. 
It is not possible to assess the extent to which tweets in our dataset may have been seen by ChatGPT and the effect on the performance. Information about the dataset could provide light on the impressive abilities of ChatGPT on the challenging task of detecting stance towards two important gun control and regulation targets. Ethics Statement Our work on the GUNSTANCE dataset is conducted with a strong commitment to ethical considerations. We prioritize privacy and collect only publicly available tweets and adhere to relevant guidelines for annotations and sharing the datasets. We do not share any private user information (or other metadata for that matter). Considering the nature of the dataset which consists of tweets posted during mass shooting events, we want to emphasize that we did remove tweets with external URLs from the dataset, which ensures that there are no graphic descriptions in the tweets in our dataset. Additionally, we emphasize that our annotators were paid fairly. We prioritize transparency and open communication in our work and make our dataset and methodologies available to the research community, enabling peer review, validation, and further advancements in the field. While our models can be misused, as many other models in the literature, they are meant to be used to predict the general public’s stance towards gun control targets, and can be very helpful in that respect. References Abeer ALDayel and Walid Magdy. 2021. Stance detection on social media: State of the art and trends. Information Processing & Management, 58(4):102597. Emily Allaway and Kathleen McKeown. 2020a. ZeroShot Stance Detection: A Dataset and Model using Generalized Topic Representations. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8913– 8931, Online. Association for Computational Linguistics. Emily Allaway and Kathleen McKeown. 2020b. Zeroshot stance detection: a dataset and model using generalized topic representations. arXiv preprint arXiv:2010.03640. Nora Saleh Alturayeif, Hamzah Abdullah Luqman, and Moataz Aly Kamaleldin Ahmed. 2022. Mawqif: A multi-label arabic dataset for target-specific stance detection. In Proceedings of the The Seventh Arabic Natural Language Processing Workshop (WANLP), pages 174–184. Oscar Araque, Lorenzo Gatti, and Kyriaki Kalimeri. 2020. Moralstrength: Exploiting a moral lexicon and embedding similarity for moral foundations prediction. Knowledge-based systems, 191:105184. Oscar Araque, Lorenzo Gatti, and Kyriaki Kalimeri. 2022. Libertymfd: A lexicon to assess the moral foundation of liberty. In Proceedings of the 2022 ACM Conference on Information Technology for Social Good, pages 154–160. Isabelle Augenstein, Tim Rocktäschel, Andreas Vlachos, and Kalina Bontcheva. 2016. Stance detection with bidirectional conditional encoding. arXiv preprint arXiv:1606.05464. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Matteo Cinelli, Gianmarco De Francisci Morales, Alessandro Galeazzi, Walter Quattrociocchi, and Michele Starnini. 2021. The echo chamber effect on social media. Proceedings of the National Academy of Sciences, 118(9):e2023301118. Costanza Conforti, Jakob Berndt, Mohammad Taher Pilehvar, Chryssi Giannitsarou, Flavio Toxvaerd, and Nigel Collier. 2020a. 
Will-they-won’t-they: A very large dataset for stance detection on Twitter. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1715– 1724, Online. Association for Computational Linguistics. Costanza Conforti, Jakob Berndt, Mohammad Taher Pilehvar, Chryssi Giannitsarou, Flavio Toxvaerd, and Nigel Collier. 2020b. Will-they-won’t-they: A very large dataset for stance detection on twitter. arXiv preprint arXiv:2005.00388. RV Cox, Sumeet Kumar, Matthew Babcock, and Kathleen M Carley. 2020. Stance in replies and quotes (srq): A new dataset for learning stance in twitter conversations. 12036 Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Jiachen Du, Ruifeng Xu, Yulan He, and Lin Gui. 2017. Stance classification with target-specific neural attention networks. International Joint Conferences on Artificial Intelligence. Seth Ebner, Felicity Wang, and Benjamin Van Durme. 2019. Bag-of-words transfer: Non-contextual techniques for multi-task learning. In Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019), pages 40–46. Yujie Fu, Xiaoli Li, Yang Li, Suge Wang, Deyu Li, Jian Liao, and Jianxing Zheng. 2022. Incorporate opinion-towards for stance detection. KnowledgeBased Systems, 246:108657. Shalmoli Ghosh, Prajwal Singhania, Siddharth Singh, Koustav Rudra, and Saptarshi Ghosh. 2019. Stance detection in web and social media: a comparative study. In Experimental IR Meets Multilinguality, Multimodality, and Interaction: 10th International Conference of the CLEF Association, CLEF 2019, Lugano, Switzerland, September 9–12, 2019, Proceedings 10, pages 75–87. Springer. Kyle Glandt, Sarthak Khanal, Yingjie Li, Doina Caragea, and Cornelia Caragea. 2021. Stance detection in covid-19 tweets. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Long Papers), volume 1. Jesse Graham, Jonathan Haidt, Sena Koleva, Matt Motyl, Ravi Iyer, Sean P Wojcik, and Peter H Ditto. 2013. Moral foundations theory: The pragmatic validity of moral pluralism. In Advances in experimental social psychology, volume 47, pages 55–130. Elsevier. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. 2017. On calibration of modern neural networks. In International conference on machine learning, pages 1321–1330. PMLR. Omama Hamad, Ali Hamdi, Sayed Hamdi, and Khaled Shaban. 2022. Steducov: an explored and benchmarked dataset on stance detection in tweets towards online education during covid-19 pandemic. Big Data and Cognitive Computing, 6(3):88. Zihao He, Negar Mokhberian, and Kristina Lerman. 2022. Infusing knowledge from wikipedia to enhance stance detection. arXiv preprint arXiv:2204.03839. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Marjan Hosseinia, Eduard Dragut, and Arjun Mukherjee. 2020a. Stance prediction for contemporary issues: Data and experiments. arXiv preprint arXiv:2006.00052. Marjan Hosseinia, Eduard Dragut, and Arjun Mukherjee. 2020b. Stance prediction for contemporary issues: Data and experiments. In Proceedings of the Eighth International Workshop on Natural Language Processing for Social Media, pages 32–40, Online. Association for Computational Linguistics. Binxuan Huang and Kathleen M Carley. 
2019. Parameterized convolutional neural networks for aspect level sentiment classification. arXiv preprint arXiv:1909.06276. Clayton Hutto and Eric Gilbert. 2014. Vader: A parsimonious rule-based model for sentiment analysis of social media text. In Proceedings of the international AAAI conference on web and social media, volume 8, pages 216–225. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751, Doha, Qatar. Association for Computational Linguistics. Jonathan Kobbe, Ioana Hulpus,, and Heiner Stuckenschmidt. 2020. Unsupervised stance detection for arguments from consequences. In Proceedings of the 2020 conference on empirical methods in natural language processing (EMNLP), pages 50–60. Dilek Küçük and Fazli Can. 2018. Stance detection on tweets: An svm-based approach. arXiv preprint arXiv:1803.08910. Mirko Lai, Alessandra Teresa Cignarella, Delia Irazú Hernández Farías, Cristina Bosco, Viviana Patti, and Paolo Rosso. 2020. Multilingual stance detection in social media political debates. Computer Speech & Language, 63:101075. Mirko Lai, Viviana Patti, Giancarlo Ruffo, and Paolo Rosso. 2018. Stance evolution and twitter interactions in an italian political debate. In Natural Language Processing and Information Systems: 23rd International Conference on Applications of Natural Language to Information Systems, NLDB 2018, Paris, France, June 13-15, 2018, Proceedings 23, pages 15–27. Springer. Samuli Laine and Timo Aila. 2016. Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. 12037 Yingjie Li and Cornelia Caragea. 2021. Target-aware data augmentation for stance detection. Yingjie Li, Tiberiu Sosea, Aditya Sawant, Ajith Jayaraman Nair, Diana Inkpen, and Cornelia Caragea. 2021a. P-stance: A large dataset for stance detection in political domain. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2355–2365, Online. Association for Computational Linguistics. Yingjie Li, Tiberiu Sosea, Aditya Sawant, Ajith Jayaraman Nair, Diana Inkpen, and Cornelia Caragea. 2021b. P-stance: A large dataset for stance detection in political domain. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2355–2365. Yingjie Li, Chenye Zhao, and Cornelia Caragea. 2023. Tts: A target-based teacher-student framework for zero-shot stance detection. In Proceedings of the ACM Web Conference 2023, pages 1500–1509. Yu-Ru Lin and Wen-Ting Chung. 2020. The dynamics of twitter users’ gun narratives across major mass shooting events. Humanities and social sciences communications, 7(1):1–16. Rui Liu, Zheng Lin, Yutong Tan, and Weiping Wang. 2021. Enhancing zero-shot and few-shot stance detection with commonsense knowledge graph. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3152–3157. Yiheng Liu, Tianle Han, Siyuan Ma, Jiayue Zhang, Yuanyuan Yang, Jiaming Tian, Hao He, Antong Li, Mengshen He, Zhengliang Liu, et al. 2023. Summary of chatgpt-related research and perspective towards the future of large language models. Meta-Radiology, page 100017. 
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Daniel Loureiro, Francesco Barbieri, Leonardo Neves, Luis Espinosa Anke, and Jose Camacho-Collados. 2022. Timelms: Diachronic language models from twitter. arXiv preprint arXiv:2202.03829. Nikita Lozhnikov, Leon Derczynski, and Manuel Mazzara. 2020. Stance prediction for russian: data and analysis. In Proceedings of 6th International Conference in Software Engineering for Defence Applications: SEDA 2018 6, pages 176–186. Springer. Yun Luo, Zihan Liu, Yuefeng Shi, Stan Z Li, and Yue Zhang. 2022. Exploiting sentiment and common sense for zero-shot stance detection. arXiv preprint arXiv:2208.08797. Geoffrey J McLachlan. 1975. Iterative reclassification procedure for constructing an asymptotically optimal rule of allocation in discriminant analysis. Journal of the American Statistical Association, 70(350):365– 369. Lin Miao, Mark Last, and Marina Litvak. 2020. Twitter data augmentation for monitoring public opinion on covid-19 intervention measures. In Proceedings of the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020. Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, and Colin Cherry. 2016. Semeval2016 task 6: Detecting stance in tweets. In Proceedings of the 10th international workshop on semantic evaluation (SemEval-2016), pages 31–41. Saif M Mohammad, Parinaz Sobhani, and Svetlana Kiritchenko. 2017. Stance and sentiment in tweets. ACM Transactions on Internet Technology (TOIT), 17(3):1–23. Mahdi Pakdaman Naeini, Gregory Cooper, and Milos Hauskrecht. 2015. Obtaining well calibrated probabilities using bayesian binning. In Proceedings of the AAAI conference on artificial intelligence, volume 29. Fuqiang Niu, Min Yang, Ang Li, Baoquan Zhang, Xiaojiang Peng, and Bowen Zhang. 2024. A challenge dataset and effective models for conversational stance detection. arXiv preprint arXiv:2403.11145. Geoff Pleiss, Tianyi Zhang, Ethan Elenberg, and Kilian Q Weinberger. 2020. Identifying mislabeled data using the area under the margin ranking. Advances in Neural Information Processing Systems, 33:17044– 17056. Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. Mehdi Sajjadi, Mehran Javanmardi, and Tolga Tasdizen. 2016. Regularization with stochastic transformations and perturbations for deep semi-supervised learning. Advances in neural information processing systems, 29:1163–1171. Flora Sakketou, Allison Lahnala, Liane Vogel, and Lucie Flek. 2022. Investigating user radicalization: A novel dataset for identifying fine-grained temporal shifts in opinion. arXiv preprint arXiv:2204.10190. Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. IEEE transactions on Signal Processing, 45(11):2673–2681. Umme Aymun Siddiqua, Abu Nowshed Chy, and Masaki Aono. 2019. Tweet stance detection using an attention based neural ensemble model. In Proceedings of the 2019 conference of the north American chapter of the association for computational linguistics: Human language technologies, volume 1 (long and short papers), pages 1868–1873. 12038 Iustin Sirbu, Tiberiu Sosea, Cornelia Caragea, Doina Caragea, and Traian Rebedea. 2022. 
Multimodal semi-supervised learning for disaster tweet classification. In Proceedings of the 29th International Conference on Computational Linguistics, pages 2711– 2723. Valeriya Slovikovskaya. 2019. Transfer learning from transformers to fake news challenge stance detection (fnc-1) task. arXiv preprint arXiv:1910.14353. Parinaz Sobhani, Diana Inkpen, and Xiaodan Zhu. 2017a. A dataset for multi-target stance detection. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 551–557. Parinaz Sobhani, Diana Inkpen, and Xiaodan Zhu. 2017b. A dataset for multi-target stance detection. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 551–557, Valencia, Spain. Association for Computational Linguistics. Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li. 2020. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. Advances in neural information processing systems, 33:596–608. Tiberiu Sosea and Cornelia Caragea. 2022. Leveraging training dynamics and self-training for text classification. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 4750–4762, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Christian Stab, Tristan Miller, and Iryna Gurevych. 2018. Cross-topic argument mining from heterogeneous sources using attention-based neural networks. arXiv preprint arXiv:1802.05758. Qingying Sun, Zhongqing Wang, Shoushan Li, Qiaoming Zhu, and Guodong Zhou. 2019. Stance detection via sentiment information and neural network model. Frontiers of Computer Science, 13:127–138. Qingying Sun, Zhongqing Wang, Qiaoming Zhu, and Guodong Zhou. 2018. Stance detection with hierarchical attention network. In Proceedings of the 27th international conference on computational linguistics, pages 2399–2409. Jannis Vamvas and Rico Sennrich. 2020. X-stance: A multilingual multi-target dataset for stance detection. arXiv preprint arXiv:2003.08385. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. Ramon Villa-Cox, Sumeet Kumar, Matthew Babcock, and Kathleen M Carley. 2020. Stance in replies and quotes (srq): A new dataset for learning stance in twitter conversations. arXiv preprint arXiv:2006.00691. Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652. Jason Wei and Kai Zou. 2019. Eda: Easy data augmentation techniques for boosting performance on text classification tasks. arXiv preprint arXiv:1901.11196. Adina Williams, Nikita Nangia, and Samuel R Bowman. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426. Qizhe Xie, Minh-Thang Luong, Eduard Hovy, and Quoc V Le. 2020. Self-training with noisy student improves imagenet classification. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10687–10698. Chang Xu, Cécile Paris, Surya Nepal, Ross Sparks, Chong Long, and Yafang Wang. 2020. 
Dan: Dual-view representation learning for adapting stance classifiers to new domains. arXiv preprint arXiv:2003.06514. Hanzi Xu, Slobodan Vucetic, and Wenpeng Yin. 2022. Openstance: Real-world zero-shot stance detection. arXiv preprint arXiv:2210.14299. Wei Xue and Tao Li. 2018. Aspect based sentiment analysis with gated convolutional networks. arXiv preprint arXiv:1805.07043. Guido Zarrella and Amy Marsh. 2016. Mitre at semeval2016 task 6: Transfer learning for stance detection. arXiv preprint arXiv:1606.03784. Bowen Zhang, Daijun Ding, and Liwen Jing. 2022. How would stance detection techniques evolve after the launch of chatgpt? arXiv preprint arXiv:2212.14548. Yiwei Zhou, Alexandra I Cristea, and Lei Shi. 2017. Connecting targets to tweets: Semantic attentionbased model for target-specific stance detection. In Web Information Systems Engineering–WISE 2017: 18th International Conference, Puschino, Russia, October 7-11, 2017, Proceedings, Part I 18, pages 18– 32. Springer. Elena Zotova, Rodrigo Agerri, Manuel Nuñez, and German Rigau. 2020. Multilingual stance detection in tweets: The catalonia independence corpus. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 1368–1375. 12039 A Appendix A.1 Existing Stance Detection Datasets Table A1 shows the comparison of various Targetspecific Stance detection datasets. A.2 Crawling Rules/Terms We use a set of seed rules5 as shown in Table A2 to initiate the crawling of tweets for each event. Table A3 contains the additional terms we added to the rule based on frequent hashtags. In collecting these additional terms, we filtered out the terms containing news as a substring and terms like mass, dead, breaking, update, usa, america to reduce noise in the data. Figure A.1: Plot of the top 20 news media outlets ordered by the number of tweets posted. A.3 Tweets Statistics Figure A.2: Distributions for the number of words per tweet (red) and total character length of the tweet (blue). The mode of the word distribution is 47 words, while the mode of the length distribution is 278 characters. 5https://developer.twitter.com/en/docs/twitter-api/tweets/ search/integrate/build-a-query A.4 Baseline Models A.4.1 Supervised Baseline Models BiLSTM: We utilize Bi-Directional Long ShortTerm Memory Networks (BiLSTMs) (Schuster and Paliwal, 1997) as a supervised model. The model takes tweets as input and is trained to predict the stance toward a target without explicitly incorporating the target information. Kim-CNN: We employ Convolutional Neural Networks (CNNs) for text, proposed by (Kim, 2014), as another supervised model. Similar to BiLSTM, the CNN model takes tweets as input and is trained to predict the stance towards a target without directly incorporating target information. PGCNN: We utilize parameterized Convolutional Neural Networks proposed by (Huang and Carley, 2019). PGCNN utilizes parameterized filters and parameterized gates to capture the aspect specific features for sentiment classification. TAN: We utilize Target-specific Attention Networks (Du et al., 2017), an attention-based BiLSTM model. As opposed to the BiLSTM and KimCNN models, TAN identifies features specific to the target of interest by explicitly incorporating the target information. ATGRU: The Bi-Directional Gated Recurrent Unit (GRU) Network with Token-Level Attention Mechanism (Zhou et al., 2017) is an attentionbased Bi-GRU model that also explicitly uses target information. It identifies specific target features using the attention mechanism. 
GCAE: The Gated Convolutional Network with Aspect Embedding (Xue and Li, 2018) is a CNNbased model which, in addition to tweets, incorporates target information and also employs a gating mechanism to filter out information that is not related to the target. BERTweet: We employ a pre-trained Transformer based model from (Loureiro et al., 2022). This model is based on RoBERTa (Liu et al., 2019) and is pre-trained on a large corpus of tweets. We fine-tuned the model using the labeled data. Note: We have chosen to use well-established supervised methods for stance detection in our comparisons, as our focus was on zero-shot and semisupervised methods, including a novel SSL/LLM hybrid model. While more recent, less established supervised models may perform better than those 12040 Dataset Source #Targets Targets Size SemEval16 (Mohammad et al., 2016) Twitter 6 Atheism, Climate change is a real concern, Feminist Movement, Hillary Clinton, Legalization of Abortion, Donald Trump 48,700 MultiTarget (Sobhani et al., 2017b) Twitter 4 Clinton-Sanders, Clinton-Trump, Cruz-Trump 4,455 Cross Topic Argument Mining (AM) (Stab et al., 2018) News reports, editorials, blogs, debate forums, and encyclopedia articles 8 Abortion, Cloning, Death penalty, Gun control, Marijuana legalization, Minimum wage, Nuclear energy, School uniforms 25,492 WTWT (Conforti et al., 2020b) Twitter 5 Merger of companies: Cigna-Express Scripts, Aetna-Humana, CVS-Aetna, Anthem-Cigna, Disney-Fox 51,284 Stance in Replies and Quotes (SRQ) (Cox et al., 2020) Twitter 4 General Terms, Iran Deal, Santa Fe, Shooting Student Marches 5,221 VAST (Allaway and McKeown, 2020b) Comments 5,634 Various targets from comments collected from The New York Times ‘Room for Debate’ section, part of the Argument Reasoning Comprehension (ARC) Corpus 18,545 Procon20 (Hosseinia et al., 2020b) Posts from procon.org 419 Issues (questions) and their related responses for Controversial issues 6,094 StArCon (Kobbe et al., 2020) Debatepedia 190 Various topics 5,398 COVID-19 (Glandt et al., 2021) Twitter 4 Anthony S. Fauci, M.D., Keeping Schools Closed, Stay at Home Orders, Wearing a Face Mask 6,133 P-Stance (Li et al., 2021a) Twitter 3 Trump, Biden, Sanders 21,574 StEduCov (Hamad et al., 2022) Twitter 1 Online Education during COVID-19 Pandemic 16,572 MT-CSD (Niu et al., 2024) Reddit 5 Bitcoin, Tesla, SpaceX, Biden, Trump 15,876 GunStance (Ours) Twitter 2 Banning guns, Regulating guns 5,500 Table A1: Summary of Stance Detection Datasets Location Seed Rules Atlanta, Georgia ((atlanta OR ackworth OR georgia OR atlantaga OR ackworthga) (shooting OR massacre)) Boulder, Colorado ((boulder OR colorado) (shooting OR massacre OR strong)) OR #bouldershooting OR #BoulderMassacre OR #BoulderStrong San Jose, California (((san jose) OR sanjose OR (santa clara) OR (santaclara) OR (vta) ) (shooting OR #californiashooting)) Buffalo, New York ((buffalo OR ny OR (new york) OR newyork) shooting) OR buffaloshooting Uvalde, Texas ((uvalde OR texas) shooting) OR uvaldeshooting OR uvaldetexas Tulsa, Oklahoma ((tulsa OR (tulsaok) OR oklahoma OR (tulsaoklahoma)) shooting) OR tulsahooting Highland Park, Illinois ((highlandpark OR (highland park) OR illinois OR (highlandparkillinois)) shooting) OR highlandparkshooting Table A2: Seed rules used to crawl tweets for each event. 
we have used in our comparisons, the current trend in the literature is to focus on models that work with a small amount of data (if any), while leveraging pre-trained models and/or unlabeled data (e.g., zero-shot, few-shot, semi-supervised). We have followed this trend since the zero-shot and semisupervised scenarios fit very well with the dataset that we assembled. A.4.2 Semi-supervised Baselines Given the availability of a significant amount of unlabeled data for each target in the dataset, we also investigate semi-supervised models capable of leveraging such unlabeled data. The following semi-supervised baseline models were employed: BERT-NS: Similar to Glandt et al. (2021), we employ the self-training with Noisy Student (NS) method (Xie et al., 2020), a semi-supervised learning approach that leverages self-training and knowledge distillation (Hinton et al., 2015) to enhance the performance of a teacher model using unlabeled data. Initially, a teacher model is trained using the available labeled data and then utilized to generate pseudo-labels for the unlabeled data. Subsequently, a noisy student model is trained using both the labeled and pseudo-labeled data. This process can be iterated multiple times by replacing the teacher with the student. In our study, both the teacher and the student models represent fine-tuned BERTweet models. FixMatch: Introduced by Sohn et al. (2020), FixMatch combines two common SSL techniques: consistency regularization (Sajjadi et al., 2016; Laine and Aila, 2016) and pseudo-labeling (McLachlan, 1975). As part of the FixMatch procedure, two augmented versions are created for the unlabeled examples: a weakly augmented version of an unlabeled example is passed through the model and the prediction is converted into a pseudolabel; then, the model is trained to predict that pseudo-label when fed with a strongly augmented version of the same unlabeled example. 
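A compact sketch of the FixMatch unlabeled objective just described is shown below: a confident prediction on the weakly augmented view becomes the pseudo-label for the strongly augmented view. This is a generic illustration using the τ = 0.7 confidence threshold reported in Appendix A.5.1, not the exact implementation used in the paper; the model is assumed to map inputs directly to logits.

```python
# Generic sketch of the FixMatch unlabeled loss described above. A confident
# prediction on a weakly augmented view serves as the pseudo-label for the
# strongly augmented view. Illustration only; assumes `model` returns logits.
import torch
import torch.nn.functional as F


def fixmatch_unlabeled_loss(model, weak_inputs, strong_inputs, threshold=0.7):
    with torch.no_grad():
        weak_probs = F.softmax(model(weak_inputs), dim=-1)
        confidence, pseudo_labels = weak_probs.max(dim=-1)
        mask = (confidence >= threshold).float()  # keep only confident pseudo-labels
    strong_logits = model(strong_inputs)
    per_example = F.cross_entropy(strong_logits, pseudo_labels, reduction="none")
    return (per_example * mask).mean()
```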
While originally designed for image classification, FixMatch has also shown good performance when used on 12041 Location Week 1 Weeks 2-7 Atlanta, Georgia asian, massageparlor, activeshooter, woodstock, stopasianhate, stopaapihate, asianlivesmatter, atlantaspa, stopasianhatecrimes, aapi, hatecrime, racism, asianamericans, atlantamassacre boulder, thalapathy65, master, thalapathyvijay Boulder, Colorado bouldercolorado, kingsoopers, guncontrolnow, activeshooter, coloradoshooting, police, zfgvideography, gunreform, gunsense, guncontrol, gunreformnow, gunviolence, atlantamassacre, nra banassaultweapons, copolitics San Jose, California california, gunreformnow, guncontrolnow, massshooting, sanjosestrong, gunviolence, sanjoseshooting, guncontrol, gunsense, jose, transit, transportation, sf, sanfrancisco sfmta, bart, muni Buffalo, New York buffalony, buffalostrong, massshooting, guncontrolnow, racism, hatecrime, paytongendron, whitesupremacist, whitesupremacy, buffalonewyork, supermarket, buffalomassacre, blacklivesmatter uvalde, ushistory, texas, nra, endthefilibuster, gunviolence Uvalde, Texas guncontrolnow, guncontrol, robbelementaryschool, gunviolence, uvaldetx, uvaldemassacre, nra, robbelementary, texasschoolmassacre, uvaldeschoolmassacre specialsessionnow, uvaldepolice, gunreformnow, endgunviolence, banassaultweaponsnow, enough, dosomething, uvaldecoverup, uvaldepolicecowards Tulsa, Oklahoma guncontrolnow, gunreformnow, guncontrol, uvalde, massshooting, enoughisenough, tulsamassacre, hospital,tulsahospital,care2,gunviolence Highland Park, Illinois chicago, highlandparkparade, 4thofjuly, highland, massshooting, guncontrolnow, robertcrimo, guncontrol, highlandparkmassacre, prolifemyass, ushistory park Table A3: Additional terms added to the rule based on frequent hashtags. text data in a multimodal setup (Sirbu et al., 2022). Hence, we also employ FixMatch as a strong semisupervised baseline. AUM-ST: AUM-ST is a novel SSL method that takes advantage of the training dynamics of unlabeled data to enhance a model’s performance. This framework extends the concept of Area Under the Margin (Pleiss et al., 2020) to unlabeled data to identify potentially incorrect pseudo-labels. The authors show that removing examples with low AUM scores and using data augmentations such as synonym replacement, switchout, and backtranslations can significantly boost the generalization performance. A.4.3 Zero-shot Baseline Models BART-NLI: Natural Language Inference (NLI) is the task of determining whether a given hypothesis entails, contradicts or is neutral to a given premise. As shown by (Li et al., 2023), the BART (Lewis et al., 2019) model pre-trained on the MultiNLI dataset (Williams et al., 2017) can be used for zero-shot stance detection by creating a semantic mapping between the stance labels and the NLI labels. Thus, we consider BART-NLI as a strong zero-shot baseline for our dataset. FLAN: The main idea is to formulate the task in such a way that the language model generates the answer by completing the text. FLAN (Wei et al., 2021) improves the zero-shot learning capabilities of language models by introducing instruction tuning, and we consider it as another strong zero-shot baseline in our experiments. ChatGPT: One of the most notable large language models (LLMs) that has been receiving increasing interest from the research community since its release in December 2022 is ChatGPT, as it demonstrated impressive performances in various domains (e.g. healthcare, education, scientific research, etc.) 
and on diverse downstream NLP tasks (e.g. question answering, text classification, text generation, code generation, etc.)(Liu et al., 2023). In particular, (Zhang et al., 2022) shows that ChatGPT obtains state-of-the-art results on commonly used stance detection datasets such as SemEval2016 (Mohammad et al., 2016) and P-Stance (Li et al., 2021b), so we consider it as another strong zero-shot baseline for our dataset. A.5 Experimental Setup A.5.1 Hyperparameters To determine suitable hyperparameters for the models, the development set was utilized. We utilized the Adam optimizer for all the supervised and semisupervised models. However, for the supervised model, excluding BERTweet, we used a learning rate of 1e-5, weight decay of 4e-5, gradient clipping with the maximum norm of 4.0, and a dropout rate of 0.5 to the linear classification layer. The training process of those models (except for BERTweet) involved 120 epochs, with a mini-batch size of 32 for each iteration. For all models, we performed 3 runs 12042 (starting with different random seeds) and report average scores. Further details regarding the specific hyperparameters for each model are presented below: BiLSTM, ATGRU, TAN: Each of these Recurrent Neural Network models used a hidden unit with 512 dimensions and a dropout of 0.2. GCAE, Kim-CNN: These CNN-based models utilized filters of width 2, 3, 4, and 5. Each filter width was associated with 25 feature maps. After the convolutional layers, a linear classifier with a hidden dimension of 128 was employed. BERTweet: This model was trained with a learning rate of 2e-5 using a linear scheduler with warmup steps equivalent to 10% of the total training steps. A batch size of 32 was employed, and the model was trained for 100 epochs. Early stopping criteria based on the F1-score were utilized with patience of 10 epochs. The model was trained using the Adam optimizer and cross-entropy loss. BERT-NS: For BERT-NS, both teacher and student models were trained with hyperparameters similar to BERTweet. The confidence threshold for selecting pseudo-labels was set to 0.95. AUM-ST: We utilized a supervised batch size of 32 and an unsupervised batch size of 128. To filter unlabeled data, we applied a threshold of 0.7 and an augmentation percentile of 0.5. The selftraining process consisted of 50 steps. As augmentation strengths, we employed a mix weak augmentation strength of 0 and a max weak augmentation strength of 2. For strong augmentation, we utilized a min strength of 3 and a max strength of 40. FixMatch: We utilized the BERTweet model in FixMatch. For the FixMatch-specific parameters, we used an unlabeled loss weight λ= 1, a ratio between unlabeled and labeled examples μ = 7 and a threshold τ = 0.7 for using only the pseudolabels with high confidence. For the text augmentation, we used the same techniques and strengths employed for AUM-ST. A.5.2 Prompt engineering In the case of the zero-shot models employed, it is important to pay close attention to the design of the prompts used to obtain predictions. BART-NLI: Similar to (Li et al., 2023), we consider a mapping between predicting stance labels (Against, In-Favor, Neutral) and the task of predicting entailment labels (Contradiction, Entailment, Neutral), by using the tweet as a premise and designing a prompt that contains the target for the hypothesis. 
We experimented with several prompt templates and chose “I think [target]!” as the best one, where [target] is either “guns should be banned” or “guns should be regulated.” We use the BART-large version of the model.6
FLAN: We experimented with several prompt templates and chose “[tweet] What is the stance of the writer? (A) [neg_target] (B) [target] (C) neutral” as the best performing one, where [target] is either “guns should be banned” or “guns should be regulated” and [neg_target] is the corresponding “guns should not be banned” or “guns should not be regulated.” We use the FLAN-T5 XXL model.7
ChatGPT: We use the Chat Completions API provided by OpenAI.8 For the system message that sets the behaviour of the assistant, we use the instruction ‘You will be provided with a tweet, and your task is to classify its stance as "[target]", "neutral", or "[neg_target]".’ Then, for the user message, we provide the request ‘[tweet]’. The ChatGPT version used is ‘gpt-3.5-turbo-0613’. Because the answer of the model may come in different forms (e.g., "guns should be banned", "Stance: guns should be banned"), the final label is considered to be the one stance ("[target]", "neutral", or "[neg_target]") that is a substring of the answer given by the model. When none of them is a substring (e.g., "I’m sorry, but I can’t assist with that."), the final label for evaluation purposes is considered to be "neutral". In the streaming pipeline defined by AUM-ST+LLM, when zero or more than one stance labels are substrings of the answer, we consider that ChatGPT has provided an inconclusive label, and that tweet is not used. This happens in only about 0.5% of the cases.

6 https://huggingface.co/facebook/bart-large-mnli
7 https://huggingface.co/google/flan-t5-xxl
8 https://platform.openai.com/docs/guides/text-generation/chat-completions-api

A.5.3 Proposed Hybrid SSL/LLM Approach
For AUM-ST+ChtGPT, we use the same setups as AUM-ST, and we generate the ChatGPT pseudo-labels in the same manner as the zero-shot baseline. Additionally, at each self-training iteration we select around 5% of the unlabeled examples (k in Step 4 of the algorithm) with AUMs close to zero to be passed through ChatGPT.

A.5.4 Evaluation Metrics
In order to evaluate the supervised, semi-supervised, and zero-shot models, we employ commonly used metrics for classification tasks: Accuracy, Precision, Recall, and F1-score. We denote true positives, true negatives, false positives, and false negatives by TP, TN, FP, and FN, respectively.
Accuracy measures the overall correctness of the model’s predictions by calculating the ratio of correctly predicted instances to the total number of instances. It is calculated as:
Accuracy = (TP + TN) / (TP + FP + TN + FN)   (1)
Precision measures the proportion of correctly predicted positive instances out of all the instances that were predicted as positive. It is calculated as:
Precision = TP / (TP + FP)   (2)
Recall measures the proportion of correctly predicted positive instances out of all the actual positive instances and is calculated as:
Recall = TP / (TP + FN)   (3)
F1-score is a commonly used metric that combines precision and recall into a single metric, providing a trade-off between these two metrics. The F1-score is calculated as:
F1-score = 2·TP / (2·TP + FP + FN)   (4)
We also utilized the Expected Calibration Error (ECE) (Naeini et al., 2015) metric to measure how reliably a model’s predicted probabilities reflect the true probabilities. ECE quantifies the difference between the predicted confidence of a model and its accuracy.
A well-calibrated model should have a low ECE, implying that the model's predicted probabilities accurately reflect the true probabilities. The ECE is calculated as:

ECE = \sum_{m=1}^{M} \frac{|B_m|}{n} \left| \text{acc}(B_m) - \text{conf}(B_m) \right| (5)

where n is the number of samples, M is the number of equally-spaced confidence bins, B_m is the set of indices of samples whose prediction confidence falls into the m-th bin, and acc(B_m) and conf(B_m) represent the accuracy and average confidence within bin B_m, respectively (Guo et al., 2017).
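For concreteness, the metrics above can be computed with a few lines of Python. The sketch below is ours, not the authors' released code; it mirrors Equations (1)–(5), and the function names and the binary framing are illustrative assumptions.

```python
import numpy as np

def classification_metrics(y_true, y_pred, positive_label=1):
    """Accuracy, precision, recall, and F1 from Equations (1)-(4), binary view."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == positive_label) & (y_true == positive_label))
    tn = np.sum((y_pred != positive_label) & (y_true != positive_label))
    fp = np.sum((y_pred == positive_label) & (y_true != positive_label))
    fn = np.sum((y_pred != positive_label) & (y_true == positive_label))
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * tp / (2 * tp + fp + fn) if 2 * tp + fp + fn else 0.0
    return accuracy, precision, recall, f1

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE from Equation (5): confidence vs. accuracy gap over M equally-spaced bins."""
    confidences = np.asarray(confidences)
    correct = np.asarray(correct, dtype=float)
    n = len(confidences)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)  # B_m
        if in_bin.any():
            acc_bin = correct[in_bin].mean()        # acc(B_m)
            conf_bin = confidences[in_bin].mean()   # conf(B_m)
            ece += (in_bin.sum() / n) * abs(acc_bin - conf_bin)
    return ece
```

For the three-way stance labels, these counts would be computed per class and then averaged; the paper's exact averaging scheme is not restated here.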
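Similarly, the zero-shot ChatGPT labeling described in A.5.2 can be sketched as follows. The prompt wording follows the paper; the client call, the helper name, and the handling of multiple matches outside the AUM-ST+LLM pipeline are our assumptions.

```python
from openai import OpenAI  # assumes the openai>=1.0 Python client; the paper's exact client code is not shown

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def chatgpt_stance(tweet: str, target: str, neg_target: str) -> str:
    """Zero-shot stance labeling via the Chat Completions API, as described in A.5.2."""
    system_msg = (
        f'You will be provided with a tweet, and your task is to classify its stance as '
        f'"{target}", "neutral", or "{neg_target}".'
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-0613",
        messages=[
            {"role": "system", "content": system_msg},
            {"role": "user", "content": tweet},
        ],
    )
    answer = response.choices[0].message.content

    # The final label is the stance string that appears as a substring of the answer.
    matches = [label for label in (target, "neutral", neg_target) if label in answer]
    if len(matches) == 1:
        return matches[0]
    # Zero matches default to "neutral" for evaluation; in the AUM-ST+LLM pipeline,
    # zero or multiple matches are treated as inconclusive and the tweet is discarded.
    return "neutral"

# e.g. chatgpt_stance(tweet, "guns should be banned", "guns should not be banned")
```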
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12045–12072 August 11-16, 2024 ©2024 Association for Computational Linguistics

Beyond Traditional Benchmarks: Analyzing Behaviors of Open LLMs on Data-to-Text Generation

Zdeněk Kasner and Ondřej Dušek
Charles University, Faculty of Mathematics and Physics
Institute of Formal and Applied Linguistics
Prague, Czech Republic
{kasner,odusek}@ufal.mff.cuni.cz

Abstract

We analyze the behaviors of open large language models (LLMs) on the task of data-to-text (D2T) generation, i.e., generating coherent and relevant text from structured data. To avoid the issue of LLM training data contamination with standard benchmarks, we design QUINTD – a tool for collecting novel structured data records from public APIs. We find that open LLMs (Llama 2, Mistral, and Zephyr) can generate fluent and coherent texts in zero-shot settings from data in common formats collected with QUINTD. However, we show that the semantic accuracy of the outputs is a major issue: both according to human annotators and our reference-free metric based on GPT-4, more than 80% of the outputs of open LLMs contain at least one semantic error. We publicly release the code, data, and model outputs (https://d2t-llm.github.io/).

Figure 1: To benchmark LLMs, we download unlabeled structured data from public APIs and prompt LLMs to generate texts based on the data. We annotate semantic errors in the outputs using reference-free metrics.

1 Introduction

Large language models (LLMs; Ouyang et al., 2022; Touvron et al., 2023a,b; Jiang et al., 2023; Tunstall et al., 2023) have already left a mark in many areas of natural language processing (NLP). Surprisingly, their applicability to the task of data-to-text (D2T) generation (Reiter and Dale, 1997; Gatt and Krahmer, 2018) remains underexplored, with limited evaluation on a handful of well-established benchmarks only (Axelsson and Skantze, 2023; Yuan and Färber, 2023). Generating text from structured data is arguably challenging for LLMs, given the specifics of D2T generation, such as long inputs, complex non-linear structure, and strict requirements on semantic accuracy. However, a more significant issue is the lack of testing grounds. The current D2T generation benchmarks are not only getting saturated (Van Miltenburg et al., 2023), but also promote optimization towards traditional reference-based evaluation metrics, which were shown to correlate poorly with human judgment (Gehrmann et al., 2023; van der Lee et al., 2021; Novikova et al., 2017). When it comes to the models, using closed LLMs (OpenAI, 2023a,b) is increasingly considered a bad research practice due to its non-reproducibility (Rogers, 2023; Chen et al., 2023). On top of that, contamination of LLM training data with standard benchmarks further restricts the space for experiments (Golchin and Surdeanu, 2023; Aiyappa et al., 2023; Balloccu et al., 2024). In this paper, we propose an approach that allows us to analyze model behavior in D2T generation on novel, real-world structured data records with reference-free evaluation metrics. We begin by realizing that unlabeled data are plentiful.
To leverage the data for our experiments, we introduce QUINTD2 – a tool for collecting structured data from five domains in standard formats: JSON, 2Quintet of Unlabeled Inputs for Natural Tasks in Datato-text, pronounced as “quintet” 12045 Task Id Domain Task Description Source Format openweather Weather Generating a weather forecast from weather data. OpenWeather JSON gsmarena Technology Describing a product based on its attributes. GSMArena JSON ice_hockey Sport Describing an outcome of an ice-hockey game. RapidAPI JSON owid Health Generating a caption for a time series. OurWorldInData CSV wikidata World facts Describing entities and relations in a knowledge graph. Wikidata Markdown Table 1: The domains and tasks included in the QUINTD data collection tool we use for testing D2T generation with LLMs. In our experiments, we download 100 development and 100 test examples of input data for each domain. CSV, and Markdown. We choose the domains so that the data can be directly used as input for five distinct D2T generation tasks. Our tasks include generating weather forecasts, sports reports, product descriptions, chart captions, and entity descriptions (see Table 1). Next, we collect a set of 1,000 inputs with QUINTD and use the inputs as an adhoc benchmark (called QUINTD-1) for testing the abilities of LLMs for D2T generation. We assume that the data formats in QUINTD-1 are common in the LLMs’ pretraining corpora, so we specify the task using instructions instead of standard finetuning with human-written outputs, capitalizing on the zero-shot abilities of instruction-tuned LLMs (§2). We push towards better reproducibility by focusing on open LLMs, which – apart from being more accessible – also achieve increasingly better results across tasks (Zheng et al., 2023; Beeching et al., 2023). For our experiments, we use three open LLMs with 7B parameters: Llama 2 (Touvron et al., 2023b; TogetherAI, 2023), Mistral (Jiang et al., 2023), and Zephyr (Tunstall et al., 2023). We also use GPT-3.5 (OpenAI, 2023b) as a closed model baseline for the final experiments. Given the behavioral nature of the experiments with LLMs (Holtzman et al., 2023), we put emphasis on reporting model behavior throughout the process (§3). Another piece of the puzzle is reference-free evaluation: using the input data as a ground for comparison instead of reference outputs (§4). We focus on identifying semantic errors in the model outputs, i.e., the information that is not supported by the input data. We use two separate evaluation methods: manual annotations from human crowdworkers (van der Lee et al., 2021) and a custom automatic metric based on GPT-4 (Liu et al., 2023; Chiang and Lee, 2023; Kocmi and Federmann, 2023a). We annotate the errors on the level of individual words, getting fine-grained annotations of error spans in several categories (Thomson and Reiter, 2020; Thomson et al., 2023). Based on our results, we provide general recommendations for D2T generation with open LLMs across tasks and formats (§5). Our main findings are as follows: • Open LLMs can generate fluent outputs from structured data in common formats under zero-shot settings. • Semantic accuracy is a major obstacle: both human annotators and GPT-4-based metric report that over 80% of outputs of open LLMs on our data contain a semantic error. • Long data inputs cause practical issues, including the need for long-context models, increased GPU memory requirements, and unavailability of few-shot approaches. 
• Outputs can be empirically improved by following several rules-of-thumb for preprocessing the model input, such as including units, removing unnecessary fields, or prefixing the model answer. 2 Reference-Free D2T Generation 2.1 Data Collection Tool We introduce QUINTD,3 a tool for collecting adhoc test sets using public APIs in five different domains. Our main reasons for departing from the traditional scheme of benchmarking on wellestablished datasets are: 1. Any published test sets may be potentially included in the training data of LLMs. 2. Public sources of structured data offer enough resources for creating ad-hoc test sets. 3. Without human references, our data collection scheme is lightweight and replicable. Given the available public sources of data, we settled on the five tasks which are described in Table 1 (see Appendix A for more details). The tasks 3https://github.com/kasnerz/quintd 12046 Prompt Based on the given data: ˋˋˋ {data} ˋˋˋ Your task is to write a brief, fluent, and coherent single-paragraph {output_type} in natural language. The text should be balanced and neutral. Make sure that all the facts mentioned in the text can be derived from the input data, do *not* add any extra information. Output prefix Sure! Here is the {output_type}: " Figure 2: The prompt P and the model output prefix we used for the experiments in this paper. data is filled with the data record x and output_type is filled accordingly for each domain D (see Table 1 and Table 6 in the Appendix). are based on structured data in common formats: JSON, CSV, and Markdown. 2.2 QUINTD-1 Dataset Using QUINTD, we collected the dataset for our experiments in this paper (QUINTD-1). The dataset contains 500 examples in the development set and 500 examples in the test set (100 examples per domain for each split). We downloaded the data between November 2023 and January 2024. Note that the dataset contains only unlabeled data without any reference outputs (e.g., weather data, but not a textual weather forecast), so the outputs need to be evaluated using reference-free metrics. New versions of the benchmark can be easily generated with our QUINTD tool. 2.3 Task Definition Each example in QUINTD-1 consists of a structured data record x from a domain D P {openweather, gsmarena, ice_hockey, owid, wikidata}. Given x and a prompt P, the goal is to generate natural language output y faithful to the data x, according to the instructions in the prompt P (see Figure 2). 3 Experiments 3.1 Experimental Process Our goal is to avoid extensive data preprocessing and prompt engineering since these steps could harm the reproducibility and generalizability of our experiments. With this goal in mind, we decided to use the same prompt template P for all the domains and models. For a set of preliminary experiments, we first wrote down the initial version of the prompt and used the data without further preprocessing. We then iteratively improved our experimental setup by observing outputs on the development set. In §3.3, we describe all the observations and modifications we made before generating the final outputs on the test set. 3.2 Models For our experiments, we selected the following LLMs available under an open license: • Llama 2 (Touvron et al., 2023a; TogetherAI, 2023), togethercomputer/Llama-2-7B-32K-Instruct • Mistral (Jiang et al., 2023), mistralai/Mistral-7B-Instruct-v0.1 • Zephyr (Tunstall et al., 2023). 
HuggingFaceH4/zephyr-7b-beta

The models are instruction-tuned, operate with a 32k context, and perform well on recent benchmarks (Beeching et al., 2023). All the models have 7B parameters and thus fit on a single NVIDIA A40 (48G VRAM) in 16-bit precision. The models are available through HuggingFace (Wolf et al., 2020). We accessed the models via an API provided by the text-generation-webui framework (https://github.com/oobabooga/text-generation-webui) running locally. For the final experiments, we also included GPT-3.5 (gpt-3.5-turbo-1106) accessed through the OpenAI API (OpenAI, 2023b). We only included GPT-3.5 in our final experiments as proprietary models were not our focus, and we did not include GPT-4 since we use the same model for evaluation (see §4.2) and LLMs tend to be biased towards their own outputs (Koo et al., 2023; Stureborg et al., 2024).

Figure 3: Our experimental setup. We first generate the outputs using LLMs that are given raw data and a task-specific prompt. We annotate the word-level semantic errors in the LLM outputs with (a) an automatic metric based on GPT-4 that matches the output to the raw data, and (b) human annotators, who annotate the errors in the output given the data visualization.

3.3 Observations from Preliminary Experiments

During development, we made several observations which we took into account for our final experimental setup:

Any input field may appear in the output. The models do not always select the most relevant fields for the given output. For example, we observed that the models commonly mention identifiers, timestamps, files, and other metadata, leading to unnatural outputs. To mitigate these issues, we manually picked irrelevant fields and filtered them out from the input.

Units need to be specified explicitly. If the units are not specified in the data record, the models tend to resort to their best guess. This may go unnoticed if the unit is evident from the context (e.g., the model will usually not report the temperature in Fahrenheit instead of Celsius), but it may get problematic if the value is ambiguous (e.g., wind speed in km/h versus m/s). Therefore, we explicitly add units to all data records where appropriate.

Understandable field names are enough. On the flip side, we decided not to add extra descriptions to field names in the data if the field was understandable from its name (e.g., homeTeam or dimensions). As discussed by Kasner et al. (2023), pretrained models interpret field names correctly as long as they are human-readable. We only include chart metadata for the CSV files in the owid domain.

Long inputs can be troublesome. The inputs in some domains can easily get longer than 10-20k tokens. This issue is amplified by the fact that the evaluated LLMs tokenize numbers into individual digits.
To accommodate for the long inputs, we picked models that accept up to 32k tokens.6 How6For this reason, we use Llama-2-7B-32k with 32k token context (TogetherAI, 2023) instead of the official Llama-27B-Instruct, which only supports 4k context (Touvron et al., ever, with long inputs, the GPU memory consumption also gets considerably higher, so we needed to downsample the data in owid and openweather to keep their length under ~8k tokens. Few-shot experiments are infeasible. Due to the above context-length limitations, we were not able to run few-shot experiments since we could not robustly fit an additional (xexample, yexample) pair in the prompt. We attempted to include only yexample (making the setup “half-shot”), but we observed that the models then used entities from the example (unrelated to the actual input) in their outputs. Therefore, we decided to leave this line of experiments for future work (see §5.3 for discussion). Deterministic decoding and sampling are on par. In our preliminary experiments, we observed a roughly similar output quality for both deterministic decoding and sampling.7 For the final experiments, we decided to use deterministic decoding, which is non-parametric and conceptually more suitable for D2T generation. Prefixing the output makes parsing easier. Even with variations of a “generate only the output” instruction appended to the prompt, the models (especially Llama 2) tended to first confirm the request. For that reason, we decided to prefix the input for all the models with “Sure! Here is the {output_type}: "”. The opening quote at the end of the prefix allowed us to robustly parse the text sim2023b). 7We used the text-generation-webui default decoding parameters: temperature=0.7, top_p=0.9, and top_k=20. 12048 ply by stripping the closing quote from the model output. The outputs are fluent but inaccurate. We observed that the vast majority of model outputs were grammatically and stylistically correct, capturing the output type specified in the prompt. However, we also noticed that the outputs contained many semantic errors (even after emphasizing the focus on semantic accuracy in the prompt, see Figure 2). This observation led us to evaluate the model outputs using word-level annotations focused on semantic accuracy errors (see §4). 3.4 Final Experiments Taking the observations in §3.3 into account, we proceeded to generate the outputs on the test set of QUINTD-1 for word-level error analysis. We first preprocessed the data as mentioned: we stripped out unnecessary fields, added units, and downsampled the data to fit the context. For all the models mentioned in §3.2, we used the prompt in Figure 2 and deterministic decoding with a maximum length of 512 tokens. For comparison, we also generated outputs for the same inputs and identical prompts with GPT3.5. Note that even though we fixed the temperature and seed to 0, the rest of the decoding parameters are inaccessible to us and may differ from the parameters we used for the open models. 4 Evaluation For evaluation, we focus on identifying semantic errors in model outputs. We compare the generated texts to the input data, looking for parts of texts that are not faithful to the input data. We annotate the errors on the word level, considering all the words in the output text as potential sources of errors. We use two complementary referenceless evaluation methods: • Ehum: human evaluation based on crowdsourcing (§4.1), • Egpt: an automatic metric based on GPT-4 (§4.2). 
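The generation setup from §3.1–3.4 (the Figure 2 prompt, the output prefix, and deterministic decoding with at most 512 new tokens) can be illustrated with a minimal sketch. The authors queried the models through the text-generation-webui API; the direct transformers-based loading below, the function name, and the omission of chat-template handling are our simplifying assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

FENCE = "`" * 3  # the prompt wraps the data in a triple-backtick block, as in Figure 2
PROMPT_TEMPLATE = (
    "Based on the given data:\n"
    + FENCE + "\n{data}\n" + FENCE + "\n"
    + "Your task is to write a brief, fluent, and coherent single-paragraph {output_type} "
    "in natural language. The text should be balanced and neutral. Make sure that all the "
    "facts mentioned in the text can be derived from the input data, do *not* add any extra "
    "information."
)
OUTPUT_PREFIX = 'Sure! Here is the {output_type}: "'

def generate_d2t(model, tokenizer, data: str, output_type: str) -> str:
    prompt = PROMPT_TEMPLATE.format(data=data, output_type=output_type)
    # Prefixing the model answer lets us parse the text by stripping the closing quote (§3.3).
    text = prompt + "\n" + OUTPUT_PREFIX.format(output_type=output_type)
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=512, do_sample=False)  # deterministic decoding
    completion = tokenizer.decode(
        output_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    return completion.split('"')[0].strip()  # keep only the text before the closing quote

# Example with one of the evaluated checkpoints (requires accelerate for device_map="auto"):
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
model = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceH4/zephyr-7b-beta", torch_dtype="auto", device_map="auto"
)
forecast = generate_d2t(model, tokenizer,
                        data='{"temp_c": 11.4, "description": "light rain"}',
                        output_type="weather forecast")
```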
The methods use similar instructions and produce outputs with equivalent format. The main idea is to compensate for the shortcomings of each approach: while human evaluation is costly (about ten times more expensive than automatic evaluation), using only an automatic metric based on a closed LLM would make the evaluation potentially non-reproducible and biased (Kocmi and Federmann, 2023a; Wang et al., 2023b). Reporting the results of both methods should hopefully increase the robustness and replicability of our results. Our error taxonomy and its notation are inspired by Thomson and Reiter (2020) and Thomson et al. (2023). After preliminary examination of the outputs, we settled on four error categories: INCORRECTI, NOT_CHECKABLENC, MISLEADINGM, and OTHERO. To set clear boundaries between the categories, simplify the annotation interface and reach better inter-annotator agreement, we decided to keep our taxonomy more high-level than Thomson and Reiter and not to distinguish between fine-grained categories (e.g., incorrect name vs. incorrect number). The descriptions of our error categories, as presented in the instructions for annotation, are included in Table 2. 4.1 Human-based Evaluation For the human annotation metric, we prepared a custom web interface, where an annotator is instructed to annotate text spans with respective error categories. We created custom visualizations for each data format (see Figure 3 and Appendix E for examples).8 We hired annotators on the Prolific9 crowdsourcing platform. In total, we hired 100 annotators, each annotating 20 examples (four model outputs for each of the five domains). We selected annotators with at least 10 completed tasks, a 100% approval rate and English as their primary language. We paid the annotators £9 per hour, according to the platform’s recommendations. The median time for completing the annotations was 47 minutes. See Appendix B for the instructions for the annotators and the annotation interface. 4.2 GPT-4-based Evaluation For automatic evaluation, we leverage the fact that LLM-based metrics can be customized for a particular task without the need for training data. In our experiments, we use a metric based on GPT4 (gpt-4-1106-preview, OpenAI, 2023a), which was shown to be superior to other LLMs in following fine-grained instructions, reaching high correlations with human judgment on evaluating generated 8We open-sourced our annotation framework as a stand-alone software package, see https://github.com/ kasnerz/factgenie. 9https://prolific.com 12049 Error Description INCORRECTI The fact in the text contradicts the data. NOT_CHECKABLENC The fact in the text cannot be checked given the data. MISLEADINGM The fact in the text is misleading in the given context. OTHERO The text is problematic for another reason, e.g., grammatically or stylistically incorrect, irrelevant, or repetitive. Example data Nokia 3310 | color: black, blue, grey | display: 320x240px text Nokia 3310 is produced in FinlandNC and features a 320x320I display. It is available in black colorM. The data seem to provide only partial information about the phone.O Table 2: Categories of errors annotated in our evaluation and an example demonstrating the error types. See Appendix B for an explanation of individual errors in the example. texts (Zhao et al., 2023; Sottana et al., 2023; Kocmi and Federmann, 2023a,b).10 We instantiate Egpt with a prompt and a system message describing the task. 
We instruct the model to produce a JSON output with sequentially ordered errors using the following format: { "errors": [{ "reason": [REASON], "text": [TEXT_SPAN], "type": [ERROR_CATEGORY] }, ...] }. Note that we require that the model first generates the free-form text reason for the error. Generating the reason comes at almost no extra cost and our cursory observations suggest that requiring it leads to more precise outputs.11 We align the model outputs with the original text by string matching on TEXT_SPAN, moving the current position forward after each match. We ensure that the model response is a valid JSON using OpenAI’s response_format parameter. See Appendix C for more details about the metric, including the prompt and the system message. 5 Results and Discussion A summary of the word-level annotations is in Table 3 and 4, with detailed results per domain provided in Appendix F. 10We confirmed that GPT-3.5 and Llama 3 have lower correlations with human judgments also in our scenario, see Appendix D. 11We did not ask the crowdworkers for free-form reasoning about the errors since that would make the annotation notably more complex. 5.1 How Accurate Are the Model Outputs? Depending on the model, between 76-86% of examples contain an error according to Ehum, suggesting that open LLMs make semantic errors very often. According to Egpt, the number is as high as 89-94%. The most common error type is INCORRECTI. As shown in Table 3, all the open LLMs make more than two statements contradicting the data per output on average. The NOT_CHECKABLENC errors are also relatively common: more than one per output on average according to Ehum, and at least one being present in more than 25% of examples according to both metrics. The results vary widely according to the domain (see Appendix F). For example, the outputs in wikidata contain much more NOT_CHECKABLENC errors on average (1.01 per output according to Ehum) than INCORRECTI errors (0.11 per output according to Ehum), suggesting that with simpler inputs, the models tend to introduce extra information. The openweather domain seems to be the most complex with the longest outputs (~164 tokens), more than eight errors in the output on average, and >90% of outputs containing an error. The differences between the open LLMs are not major. Out of the open LLMs, Zephyr has the best results across categories and metrics, followed by Llama 2. However, the outputs of Mistral are longer on average, leaving more space for errors. GPT-3.5 (which we consider separately) does generally better according to both Egpt and Ehum, although it still makes an error in 60-75% of examples (2 errors per example on average). In general, the results show that LLMs make too many semantic errors to be usable in practice for D2T generation in a zero-shot setting. 12050 Incorrect Not Checkable Misleading Other All categories Ehum Egpt Ehum Egpt Ehum Egpt Ehum Egpt Ehum Egpt # Tok. Llama 2 1.57 2.79 1.25 0.91 0.25 0.12 0.10 0.09 3.18 3.90 83.8 Mistral 2.03 3.23 1.12 0.54 0.44 0.26 0.25 0.10 3.85 4.12 114.9 Zephyr 1.44 2.84 0.77 0.40 0.20 0.29 0.16 0.05 2.58 3.58 98.0 GPT-3.5 0.65 1.76 0.49 0.38 0.18 0.26 0.07 0.02 1.39 2.42 84.9 Table 3: The average numbers of errors per output (lower is better) based on human annotators (Ehum) and GPT-4 (Egpt). We also include the average number of tokens per output in the rightmost column. The results of the best open LLM are emphasized. 
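For reference, the aggregation behind Tables 3 and 4 can be sketched as follows, assuming each model output comes with a list of annotated error spans; the data layout and field names are hypothetical.

```python
from collections import Counter

CATEGORIES = ["INCORRECT", "NOT_CHECKABLE", "MISLEADING", "OTHER"]

def error_statistics(annotated_outputs):
    """Average errors per output (as in Table 3) and share of outputs with at least one
    error (as in Table 4). `annotated_outputs` is a list of outputs, each a list of error
    dicts such as {"type": "INCORRECT", "text": "320x320"} (hypothetical format)."""
    n = len(annotated_outputs)
    avg_per_output, share_with_error = {}, {}
    for cat in CATEGORIES + ["ALL"]:
        counts = []
        for errors in annotated_outputs:
            per_type = Counter(e["type"] for e in errors)
            counts.append(sum(per_type.values()) if cat == "ALL" else per_type[cat])
        avg_per_output[cat] = sum(counts) / n
        share_with_error[cat] = sum(c > 0 for c in counts) / n
    return avg_per_output, share_with_error
```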
            Incorrect       Not Checkable   Misleading      Other           All categories
            Ehum    Egpt    Ehum    Egpt    Ehum    Egpt    Ehum    Egpt    Ehum    Egpt
Llama 2     53.2%   80.0%   57.4%   44.8%   17.4%    8.8%    7.6%    7.6%   85.6%   94.0%
Mistral     53.6%   80.2%   49.6%   31.8%   20.6%   17.0%   13.6%    8.4%   81.2%   93.0%
Zephyr      46.8%   78.0%   42.2%   25.0%   16.2%   20.6%   11.6%    4.2%   75.6%   89.4%
GPT-3.5     38.0%   65.0%   28.8%   19.6%   12.6%   16.2%    6.2%    2.2%   60.6%   75.8%

Table 4: The percentage of outputs containing at least one error (lower is better) based on human annotators (Ehum) and GPT-4 (Egpt). The results of the best open LLM are emphasized.

5.2 Do Evaluation Methods Agree?

To quantify the agreement of Ehum and Egpt, we computed the Pearson correlation coefficient between the error counts on the level of words, examples, and domains as follows (note that each error category was considered separately):

• For r_domain, we used the average error counts per domain (see Table 13).
• For r_example, we used the count of errors per example.
• For r_word, we used the binary indicators marking an error per word.

The correlation on the level of words is weak (r_word = 0.26) but gets better on the example level (r_example = 0.52) and even better on the domain level (r_domain = 0.93). In Table 5, we show the percentage of words marked as errors by individual metrics. The metrics agree on the specific words in less than 6% of cases, although they both mark around 21% of words as erroneous. We also measure inter-annotator agreement between human annotators. For that, we obtained annotations from two annotators for 100 model outputs. The results are similar: the annotators agree weakly on the word level (r_word = 0.36), more strongly on the example level (r_example = 0.53), and even more strongly on the domain level (r_domain = 0.85). We conclude that while the details regarding error spans and categories may vary, the annotators as well as GPT-4 generally agree on the accuracy of model outputs for a given set of examples. In the future, the agreement could be improved by measuring errors on the phrase level (Vamvas and Sennrich, 2022).

                Egpt     Ehum     Egpt + Ehum
Incorrect       10.1%    14.2%    4.1%
Not checkable    7.8%     4.3%    2.0%
Misleading       2.2%     1.5%    0.1%
Other            1.8%     0.7%    0.1%
Total           21.9%    20.7%    6.3%

Table 5: The percentage of words marked as erroneous by human annotators (Ehum), GPT-4 (Egpt), and both approaches at the same time (Ehum + Egpt).

5.3 Recommendations for Future Work

Focus on semantic accuracy. The output of LLMs is satisfactory regarding the style, format, and purpose of the text. However, the amount of semantic errors remains very high. Improving the semantic accuracy of the models (Li et al., 2022), along with new model-based evaluation metrics (Liu et al., 2023; Xu et al., 2023), could thus help improve LLM-based D2T generation systems where it is most needed.

Use efficient and long-context models. The memory issues with long context, which make few-shot experiments infeasible, can potentially be solved by using more efficient models equipped with Flash Attention (Dao et al., 2022) and fast inference libraries such as llama.cpp12. The recent emergence of capable long-context models (Bai et al., 2023; Munkhdalai et al., 2024) also seems to play in favor of LLM-based D2T generation with long inputs.

Be careful about subtle bugs. During our preliminary experiments, we fixed several subtle bugs in our API calls, such as incorrect instruction templates13 or involuntary input truncation.
Therefore, we recommend careful checks of API calls, as with the apparent ease of API access and robustness of LLMs, such bugs could go unnoticed and artificially skew the model performance. Test the models in the wild. Except for using an ad-hoc dataset of real-world data as we did in our work, the ecological validity of D2T evaluation can also be ensured by continuous evaluation with human users (Zheng et al., 2023) and evaluating the real-world impact of the systems (Reiter, 2023). Multilinguality is an opportunity. With the recent efforts in extending D2T generation to lowresource languages (Cripwell et al., 2023), multilingual D2T generation with open LLMs seems a promising direction. Although we did not go beyond English, initial steps were already done by works such as Lorandi and Belz (2023). 6 Related Work 6.1 Evaluation of Generated Texts Evaluation of generated texts is a complex task lacking a generally accepted solution (Celikyilmaz et al., 2020). While researchers are acknowledging the importance of combining multiple evaluation metrics (Gehrmann et al., 2021, 2023), most evaluation is still based on comparing the model outputs to human-written references, which tend to be noisy and expensive to collect (Dušek et al., 2019, 2020). Many works recently investigated the potential of using LLMs for automatic reference-free evaluation of generated texts, generally achieving high correlations with human judgment (Zhao et al., 2023; Sottana et al., 2023; Kocmi and Federmann, 2023a,b; Chiang and Lee, 2023; Wang et al., 2023a; Fu et al., 2023). However, they also voice concerns about its non-reproducibility (Kocmi and Federmann, 2023a) and potential bias of these models (Wang et al., 2023b). 12https://github.com/ggerganov/llama.cpp 13https://huggingface.co/docs/transformers/ chat_templating Human evaluation is an essential component of natural language generation experiments (van der Lee et al., 2019, 2021). The closest human evaluation protocol to our scenario is the reference-free word-level annotation of complex D2T generation output proposed in Thomson and Reiter (2020) and Thomson et al. (2023). 6.2 D2T Generation Tasks Weather Forecasts First attempts for generating weather forecasts include template-based and statistical approaches (Belz, 2005, 2008; Angeli et al., 2010) for the Sumtime-meteo and WeatherGov datasets (Sripada et al., 2002; Liang et al., 2009). More recently, Balakrishnan et al. (2019) introduced a weather forecast dataset with treestructured meaning representations. Our weather forecasts are less structured and based on a 5-day weather outlook. Product Descriptions Our phone specifications are closest to Wen et al. (2015, 2016), who introduced a dataset for generating descriptions of laptops and TVs. Their solution was based on recurrent neural networks, although templates remained a go-to approach for the task (Wang et al., 2017). Recently, Shao et al. (2021) and Koto et al. (2022) also proposed specialized architectures based on pretrained language models for the data from big e-commerce platforms. Sport Reports All the D2T generation datasets from the Rotowire family (Wiseman et al., 2017; Wang, 2019), including SportSett:Basketball (Thomson et al., 2021), and ESPN-NBA (Nie et al., 2018) focus on generating basketball reports. Along with MLB (Puduppully et al., 2019b), these datasets belong among the most challenging D2T datasets, attracting various neural-based solutions (Puduppully et al., 2019a, 2022; Puduppully and Lapata, 2021; Rebuffel et al., 2020). 
We use instead simpler data covering ice hockey game summaries. Chart Captions Following the early rule-based approaches (Demir et al., 2008, 2012), the approaches for chart captioning recently tackle largescale datasets from data analytic institutions (Obeid and Hoque, 2020; Kantharaj et al., 2022). We focus on one of the tasks from Sharma et al. (2021), which is generating descriptions of time series in the health domain. Entity Descriptions The task of generating descriptions for a knowledge graph has been covered 12052 extensively in D2T generation (Gardent et al., 2017; Castro Ferreira et al., 2020; Agarwal et al., 2021; Chen et al., 2020; Ribeiro et al., 2020, inter alia). Our task is to describe an entity provided a list of its properties, which is closely related to generating entity descriptions from Wikipedia infotables (Lebret et al., 2016). 6.3 D2T Generation with LLMs Recent works have focused on exploring the capabilities of closed LLMs on existing D2T generation datasets. Axelsson and Skantze (2023) evaluated GPT-3.5 (OpenAI, 2023b) on WebNLG, along with Yuan and Färber (2023), who also tested the model on the AGENDA dataset (Koncel-Kedziorski et al., 2019). Both works found that regardless of potential data contamination, the LLMs rank behind state-of-the-art finetuned models on automatic metrics. Zhao et al. (2023) tested closed models on modified table-to-text generation datasets and found out that in terms of faithfulness, GPT-4 can outperform state-of-the-art models. 7 Conclusion We provided an exploratory study into D2T generation with open LLMs. We proposed new directions for D2T generation, including using ad-hoc test sets, data in common formats, and referencefree evaluation. By a combination of GPT-4-based metric and human evaluation, we evaluated the performance of LLMs on five domains, providing word-level annotations of model outputs across five domains and recommendations for future directions in D2T generation. Acknowledgements This work was funded by the European Union (ERC, NG-NLG, 101039303) and Charles University project SVV 260 698. It used resources of the LINDAT/CLARIAH-CZ Research Infrastructure (Czech Ministry of Education, Youth, and Sports project No. LM2018101). Limitations In our work, we do not include a comparison to other D2T generation approaches. The main reason is that our benchmark is reference-free, while a large majority of prior approaches are based on models finetuned on reference outputs. However, we believe that our work still satisfies our main goal of providing insights into behaviors of open LLM models on D2T generation. We acknowledge that reference-free metrics currently have various shortcomings, including reliance on closed models and specific human annotation protocols, leading to limited replicability and a high price of execution. The approaches occasionally produce incorrect outcomes themselves and they have only moderate correlations with each other. We believe that these shortcomings will be addressed in the future with open model-based metrics. Our choice of models is limited to 7B-parameter open LLMs due to our limited computational resources. Also, unlike some other LLMs such as GPT-Neo (Black et al., 2022) or BLOOM (BigScience Workshop et al., 2022), the models we used do not disclose the data they were trained on. For this reason, we find it ever more important to test the models on benchmarks whose labels could have not been included in their training data. 
The approaches based on LLMs may produce factually and semantically incorrect information. Any text produced by the LLMs therefore needs to be carefully examined, and no decisions should be based on the generated text alone. Discovering the causes of LLMs’ hallucinations is out of scope of this paper, but is currently a major topic under investigation (Ji et al., 2023). Ethical Considerations The human evaluation study was approved by the internal ethics committee of our institution. The annotators were hired over Prolific and paid the platform-recommended wage of 9 GBP/hour. The annotators were preselected based on their primary language (English) and their country of residence (US, UK, Ireland, Australia, New Zealand). All annotators were shown detailed instructions and explanation of the data types, data sources, and the purpose of the research (see Appendix B for details). The domains in QUINTD were selected so that they do not contain any sensitive or potentially offensive content. We do not collect any demographic data about the participants. References Oshin Agarwal, Heming Ge, Siamak Shakeri, and Rami Al-Rfou. 2021. Knowledge Graph Based Synthetic Corpus Generation for Knowledge-Enhanced Language Model Pre-training. In Proceedings of the 12053 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, pages 3554–3565, Online. Rachith Aiyappa, Jisun An, Haewoon Kwak, and Yongyeol Ahn. 2023. Can we trust the evaluation on ChatGPT? In Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023), pages 47–54, Toronto, Canada. Gabor Angeli, Percy Liang, and Dan Klein. 2010. A Simple Domain-Independent Probabilistic Approach to Generation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP 2010, 9-11 October 2010, MIT Stata Center, Massachusetts, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 502–512. Agnes Axelsson and Gabriel Skantze. 2023. Using Large Language Models for Zero-Shot Natural Language Generation from Knowledge Graphs. CoRR, abs/2307.07312. Yushi Bai, Xin Lv, Jiajie Zhang, Hongchang Lyu, Jiankai Tang, Zhidian Huang, Zhengxiao Du, Xiao Liu, Aohan Zeng, Lei Hou, Yuxiao Dong, Jie Tang, and Juanzi Li. 2023. LongBench: A bilingual, multitask benchmark for long context understanding. CoRR, abs/2308.14508. Anusha Balakrishnan, Jinfeng Rao, Kartikeya Upasani, Michael White, and Rajen Subba. 2019. Constrained Decoding for Neural NLG from Compositional Representations in Task-Oriented Dialogue. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Volume 1: Long Papers, pages 831–844, Florence, Italy. Simone Balloccu, Patrícia Schmidtová, Mateusz Lango, and Ondrej Dušek. 2024. Leak, cheat, repeat: Data contamination and evaluation malpractices in closedsource LLMs. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2024 - Volume 1: Long Papers, pages 67–93, St. Julian’s, Malta. Edward Beeching, Clémentine Fourrier, Nathan Habib, Sheon Han, Nathan Lambert, Nazneen Rajani, Omar Sanseviero, Lewis Tunstall, and Thomas Wolf. 2023. Open LLM Leaderboard. https://huggingface.co/spaces/ HuggingFaceH4/open_llm_leaderboard. Anja Belz. 2005. Corpus-driven generation of weather forecasts. In Proc. 3rd Corpus Linguistics Conference. Anja Belz. 2008. 
Automatic generation of weather forecast texts using comprehensive probabilistic generation-space models. Nat. Lang. Eng., 14(4):431–455. BigScience Workshop, Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ili´c, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, et al. 2022. Bloom: A 176bparameter open-access multilingual language model. arXiv preprint arXiv:2211.05100. Sidney Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, et al. 2022. GPT-NeoX-20B: An open-source autoregressive language model. In Proceedings of BigScience Episode# 5–Workshop on Challenges & Perspectives in Creating Large Language Models, pages 95–136. Armand Boschin. 2019. WikiDataSets : Standardized sub-graphs from WikiData. CoRR, abs/1906.04536. Thiago Castro Ferreira, Claire Gardent, Nikolai Ilinykh, Chris van der Lee, Simon Mille, Diego Moussallem, and Anastasia Shimorina. 2020. The 2020 bilingual, bi-directional WebNLG+ shared task: Overview and evaluation results (WebNLG+ 2020). In Proceedings of the 3rd International Workshop on Natural Language Generation from the Semantic Web (WebNLG+), pages 55–76, Dublin, Ireland (Virtual). Association for Computational Linguistics. Asli Celikyilmaz, Elizabeth Clark, and Jianfeng Gao. 2020. Evaluation of text generation: A survey. CoRR, abs/2006.14799. Lingjiao Chen, Matei Zaharia, and James Zou. 2023. How is ChatGPT’s behavior changing over time? CoRR, abs/2307.09009. Wenhu Chen, Yu Su, Xifeng Yan, and William Yang Wang. 2020. KGPT: Knowledge-Grounded PreTraining for Data-to-Text Generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, pages 8635–8648, Online. David Cheng-Han Chiang and Hung-yi Lee. 2023. Can Large Language Models Be an Alternative to Human Evaluations? In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, pages 15607–15631, Toronto, Canada. Liam Cripwell, Anya Belz, Claire Gardent, Albert Gatt, Claudia Borg, Marthese Borg, John Judge, Michela Lorandi, Anna Nikiforovskaya, and William Soto Martinez. 2023. The 2023 WebNLG Shared Task on Low Resource Languages. Overview and Evaluation Results (WebNLG 2023). In Proceedings of the Workshop on Multimodal, Multilingual Natural Language Generation and Multilingual WebNLG Challenge (MM-NLG 2023), pages 55–66, Prague, Czech Republic. Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. 2022. FlashAttention: Fast and Memory-Efficient Exact Attention with IOAwareness. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural 12054 Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA. Seniz Demir, Sandra Carberry, and Kathleen F. McCoy. 2008. Generating Textual Summaries of Bar Charts. In INLG 2008 - Proceedings of the Fifth International Natural Language Generation Conference, June 1214, 2008, Salt Fork, Ohio, USA. Seniz Demir, Sandra Carberry, and Kathleen F. McCoy. 2012. Summarizing Information Graphics Textually. Comput. Linguistics, 38(3):527–574. Ondrej Dušek, David M. Howcroft, and Verena Rieser. 2019. Semantic noise matters for neural natural language generation. In Proceedings of the 12th International Conference on Natural Language Generation, INLG 2019, pages 421–426, Tokyo, Japan. Ondrej Dušek, Jekaterina Novikova, and Verena Rieser. 2020. 
Evaluating the state-of-the-art of end-to-end natural language generation: The E2E NLG challenge. Comput. Speech Lang., 59:123–156. Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. 2023. GPTScore: Evaluate as You Desire. CoRR, abs/2302.04166. Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. The WebNLG Challenge: Generating Text from RDF Data. In Proceedings of the 10th International Conference on Natural Language Generation, INLG 2017, Santiago de Compostela, pages 124–133, Spain. Albert Gatt and Emiel Krahmer. 2018. Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation. J. Artif. Intell. Res., 61:65–170. Sebastian Gehrmann, Tosin P. Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Aremu Anuoluwapo, Antoine Bosselut, Khyathi Raghavi Chandu, Miruna-Adriana Clinciu, Dipanjan Das, Kaustubh D. Dhole, Wanyu Du, Esin Durmus, Ondrej Dušek, Chris Emezue, Varun Gangal, Cristina Garbacea, Tatsunori Hashimoto, Yufang Hou, Yacine Jernite, Harsh Jhamtani, Yangfeng Ji, Shailza Jolly, Dhruv Kumar, Faisal Ladhak, Aman Madaan, Mounica Maddela, Khyati Mahajan, Saad Mahamood, Bodhisattwa Prasad Majumder, Pedro Henrique Martins, Angelina McMillan-Major, Simon Mille, Emiel van Miltenburg, Moin Nadeem, Shashi Narayan, Vitaly Nikolaev, Rubungo Andre Niyongabo, Salomey Osei, Ankur P. Parikh, Laura PerezBeltrachini, Niranjan Ramesh Rao, Vikas Raunak, Juan Diego Rodriguez, Sashank Santhanam, João Sedoc, Thibault Sellam, Samira Shaikh, Anastasia Shimorina, Marco Antonio Sobrevilla Cabezudo, Hendrik Strobelt, Nishant Subramani, Wei Xu, Diyi Yang, Akhila Yerukola, and Jiawei Zhou. 2021. The GEM benchmark: Natural language generation, its evaluation and metrics. CoRR, abs/2102.01672. Sebastian Gehrmann, Elizabeth Clark, and Thibault Sellam. 2023. Repairing the Cracked Foundation: A Survey of Obstacles in Evaluation Practices for Generated Text. J. Artif. Intell. Res., 77:103–166. Shahriar Golchin and Mihai Surdeanu. 2023. Time Travel in LLMs: Tracing Data Contamination in Large Language Models. CoRR, abs/2308.08493. Ari Holtzman, Peter West, and Luke Zettlemoyer. 2023. Generative Models as a Complex Systems Science: How can we make sense of large language model behavior? CoRR, abs/2308.00189. Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Yejin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of hallucination in natural language generation. ACM Comput. Surv., 55(12):248:1–248:38. Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de Las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7B. CoRR, abs/2310.06825. Shankar Kantharaj, Rixie Tiffany Ko Leong, Xiang Lin, Ahmed Masry, Megh Thakkar, Enamul Hoque, and Shafiq R. Joty. 2022. Chart-to-Text: A Large-Scale Benchmark for Chart Summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, pages 4005–4023, Dublin, Ireland. Zdenˇek Kasner, Ioannis Konstas, and Ondˇrej Dušek. 2023. Mind the Labels: Describing Relations in Knowledge Graphs With Pretrained Models. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2023, Dubrovnik, pages 2390–2407, Croatia. 
Tom Kocmi and Christian Federmann. 2023a. GEMBAMQM: Detecting Translation Quality Error Spans with GPT-4. In Proceedings of the Eighth Conference on Machine Translation, WMT 2023, pages 768–775, Singapore. Tom Kocmi and Christian Federmann. 2023b. Large Language Models Are State-of-the-Art Evaluators of Translation Quality. In Proceedings of the 24th Annual Conference of the European Association for Machine Translation, EAMT 2023, pages 193–203, Tampere, Finland. Rik Koncel-Kedziorski, Dhanush Bekal, Yi Luan, Mirella Lapata, and Hannaneh Hajishirzi. 2019. Text Generation from Knowledge Graphs with Graph Transformers. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, Volume 1 (Long and Short Papers), pages 2284–2293, USA. 12055 Ryan Koo, Minhwa Lee, Vipul Raheja, Jong Inn Park, Zae Myung Kim, and Dongyeop Kang. 2023. Benchmarking cognitive biases in large language models as evaluators. CoRR, abs/2309.17012. Fajri Koto, Jey Han Lau, and Timothy Baldwin. 2022. Can Pretrained Language Models Generate Persuasive, Faithful, and Informative Ad Text for Product Descriptions? In Proceedings of the Fifth Workshop on E-Commerce and NLP (ECNLP 5), pages 234– 243, Dublin, Ireland. Rémi Lebret, David Grangier, and Michael Auli. 2016. Neural Text Generation from Structured Data with Application to the Biography Domain. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, pages 1203–1213, Austin, Texas, USA. Wei Li, Wenhao Wu, Moye Chen, Jiachen Liu, Xinyan Xiao, and Hua Wu. 2022. Faithfulness in Natural Language Generation: A Systematic Survey of Analysis, Evaluation and Optimization Methods. CoRR, abs/2203.05227. Percy Liang, Michael I. Jordan, and Dan Klein. 2009. Learning Semantic Correspondences with Less Supervision. In ACL 2009, Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the AFNLP, 2-7 August 2009, pages 91–99, Singapore. Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023. G-Eval: NLG Evaluation using Gpt-4 with Better Human Alignment. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, pages 2511–2522, Singapore. Michela Lorandi and Anja Belz. 2023. Data-to-text Generation for Severely Under-Resourced Languages with GPT-3.5: A Bit of Help Needed from Google Translate (WebNLG 2023). In Proceedings of the Workshop on Multimodal, Multilingual Natural Language Generation and Multilingual WebNLG Challenge (MM-NLG 2023), pages 80–86. Tsendsuren Munkhdalai, Manaal Faruqui, and Siddharth Gopal. 2024. Leave no context behind: Efficient infinite context transformers with infiniattention. CoRR, abs/2404.07143. Feng Nie, Jinpeng Wang, Jin-Ge Yao, Rong Pan, and Chin-Yew Lin. 2018. Operation-guided Neural Networks for High Fidelity Data-To-Text Generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3879–3889, Brussels, Belgium. Jekaterina Novikova, Ondˇrej Dušek, Amanda Cercas Curry, and Verena Rieser. 2017. Why We Need New Evaluation Metrics for NLG. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, pages 2241– 2252, Copenhagen, Denmark. Jason Obeid and Enamul Hoque. 2020. 
Chart-to-Text: Generating Natural Language Descriptions for Charts by Adapting the Transformer Model. In Proceedings of the 13th International Conference on Natural Language Generation, INLG 2020, pages 138–147, Dublin, Ireland. OpenAI. 2023a. GPT-4 Technical Report. CoRR, abs/2303.08774. OpenAI. 2023b. Introducing ChatGPT. https:// openai.com/blog/chatgpt. Accessed on January 9, 2024. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA. Ratish Puduppully, Li Dong, and Mirella Lapata. 2019a. Data-to-Text Generation with Content Selection and Planning. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, pages 6908–6915, Honolulu, Hawaii, USA. Ratish Puduppully, Li Dong, and Mirella Lapata. 2019b. Data-to-text Generation with Entity Modeling. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Volume 1: Long Papers, pages 2023–2035, Florence, Italy. Ratish Puduppully, Yao Fu, and Mirella Lapata. 2022. Data-to-text Generation with Variational Sequential Planning. Trans. Assoc. Comput. Linguistics, 10:697– 715. Ratish Puduppully and Mirella Lapata. 2021. Data-totext Generation with Macro Planning. Trans. Assoc. Comput. Linguistics, 9:510–527. Clément Rebuffel, Laure Soulier, Geoffrey Scoutheeten, and Patrick Gallinari. 2020. A Hierarchical Model for Data-to-Text Generation. In Advances in Information Retrieval - 42nd European Conference on IR Research, ECIR 2020, Lisbon, Portugal, April 14-17, 2020, Proceedings, Part I, volume 12035 of Lecture Notes in Computer Science, pages 65–80. Ehud Reiter. 2023. We should evaluate real-world impact! https://ehudreiter.com/2023/11/13/ evaluate-real-world-impact/. Accessed on January 11, 2024. Ehud Reiter and Robert Dale. 1997. Building applied natural language generation systems. Nat. Lang. Eng., 3(1):57–87. 12056 Leonardo F. R. Ribeiro, Martin Schmitt, Hinrich Schütze, and Iryna Gurevych. 2020. Investigating Pretrained Language Models for Graph-to-Text Generation. CoRR, abs/2007.08426. Anna Rogers. 2023. Closed AI Models Make Bad Baselines. https://hackingsemantics.xyz/ 2023/closed-baselines/. Accessed on January 11, 2024. Huajie Shao, Jun Wang, Haohong Lin, Xuezhou Zhang, Aston Zhang, Heng Ji, and Tarek F. Abdelzaher. 2021. Controllable and Diverse Text Generation in E-commerce. In WWW ’21: The Web Conference 2021, pages 2392–2401, Virtual Event / Ljubljana, Slovenia. Mandar Sharma, John S. Brownstein, and Naren Ramakrishnan. 2021. TCube: Domain-Agnostic Neural Time-series Narration. CoRR, abs/2110.05633. Andrea Sottana, Bin Liang, Kai Zou, and Zheng Yuan. 2023. Evaluation Metrics in the Era of GPT-4: Reliably Evaluating Large Language Models on Sequence to Sequence Tasks. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, pages 8776– 8788, Singapore. 
Somayajulu Sripada, Ehud Reiter, Jim Hunter, and Jin Yu. 2002. Sumtime-meteo: Parallel corpus of naturally occurring forecast texts and weather data. Computing Science Department, University of Aberdeen, Aberdeen, Scotland, Tech. Rep. AUCS/TR0201. Rickard Stureborg, Dimitris Alikaniotis, and Yoshi Suhara. 2024. Large Language Models are Inconsistent and Biased Evaluators. Craig Thomson and Ehud Reiter. 2020. A Gold Standard Methodology for Evaluating Accuracy in DataTo-Text Systems. In Proceedings of the 13th International Conference on Natural Language Generation, INLG 2020, pages 158–168, Dublin, Ireland. Craig Thomson, Ehud Reiter, and Somayajulu Sripada. 2021. SportSett:Basketball - A Robust and Maintainable Dataset for Natural Language Generation. page 9. Craig Thomson, Ehud Reiter, and Barkavi Sundararajan. 2023. Evaluating factual accuracy in complex datato-text. Comput. Speech Lang., 80:101482. TogetherAI. 2023. Preparing for the era of 32K context: Early learnings and explorations. https: //www.together.ai/blog/llama-2-7b-32k. Accessed on January 2, 2024. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023a. LLaMA: Open and Efficient Foundation Language Models. CoRR, abs/2302.13971. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian CantonFerrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023b. Llama 2: Open Foundation and Fine-Tuned Chat Models. CoRR, abs/2307.09288. Lewis Tunstall, Edward Beeching, Nathan Lambert, Nazneen Rajani, Kashif Rasul, Younes Belkada, Shengyi Huang, Leandro von Werra, Clémentine Fourrier, Nathan Habib, Nathan Sarrazin, Omar Sanseviero, Alexander M. Rush, and Thomas Wolf. 2023. Zephyr: Direct Distillation of LM Alignment. CoRR, abs/2310.16944. Jannis Vamvas and Rico Sennrich. 2022. As little as possible, as much as necessary: Detecting over- and undertranslations with contrastive conditioning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), ACL 2022, pages 490–500, Dublin, Ireland. Chris van der Lee, Albert Gatt, Emiel van Miltenburg, and Emiel Krahmer. 2021. Human evaluation of automatically generated text: Current trends and best practice guidelines. Comput. Speech Lang., 67:101151. Chris van der Lee, Albert Gatt, Emiel van Miltenburg, Sander Wubben, and Emiel Krahmer. 2019. Best practices for the human evaluation of automatically generated text. 
In Proceedings of the 12th International Conference on Natural Language Generation, INLG 2019, pages 355–368, Tokyo, Japan. Emiel Van Miltenburg, Miruna Clinciu, Ondˇrej Dušek, Dimitra Gkatzia, Stephanie Inglis, Leo Leppänen, Saad Mahamood, Stephanie Schoch, Craig Thomson, and Luou Wen. 2023. Barriers and enabling factors for error analysis in NLG research. Northern European Journal of Language Technology, 9. Hongmin Wang. 2019. Revisiting Challenges in Datato-Text Generation with Fact Grounding. In Proceedings of the 12th International Conference on Natural Language Generation, INLG 2019, pages 311–322, Tokyo, Japan. 12057 Jiaan Wang, Yunlong Liang, Fandong Meng, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, and Jie Zhou. 2023a. Is ChatGPT a Good NLG Evaluator? A Preliminary Study. CoRR, abs/2303.04048. Jinpeng Wang, Yutai Hou, Jing Liu, Yunbo Cao, and Chin-Yew Lin. 2017. A Statistical Framework for Product Description Generation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing, IJCNLP 2017, Volume 2: Short Papers, pages 187–192, Taipei, Taiwan. Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. 2023b. Large Language Models are not Fair Evaluators. CoRR, abs/2305.17926. Tsung-Hsien Wen, Milica Gašic, Nikola Mrkšic, Lina M Rojas-Barahona, Pei-Hao Su, David Vandyke, and Steve Young. 2015. Toward multi-domain language generation using recurrent neural networks. In NIPS Workshop on Machine Learning for Spoken Language Understanding and Interaction. Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina Maria Rojas-Barahona, Pei-Hao Su, David Vandyke, and Steve J. Young. 2016. Multi-domain Neural Network Language Generation for Spoken Dialogue Systems. In NAACL HLT 2016, The 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 120–129, San Diego California, USA. Sam Wiseman, Stuart M. Shieber, and Alexander M. Rush. 2017. Challenges in Data-to-Document Generation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, pages 2253–2263, Copenhagen, Denmark. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-Art Natural Language Processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2020 - Demos, pages 38– 45, Online. Wenda Xu, Danqing Wang, Liangming Pan, Zhenqiao Song, Markus Freitag, William Yang Wang, and Lei Li. 2023. INSTRUCTSCORE: Explainable Text Generation Evaluation with Finegrained Feedback. Shuzhou Yuan and Michael Färber. 2023. Evaluating Generative Models for Graph-to-Text Generation. In Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing, RANLP 2023, pages 1256–1264, Varna, Bulgaria. Yilun Zhao, Haowei Zhang, Shengyun Si, Linyong Nan, Xiangru Tang, and Arman Cohan. 2023. Investigating Table-to-Text Generation Capabilities of LLMs in Real-World Information Seeking Scenarios. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. 
Gonzalez, and Ion Stoica. 2023. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. CoRR, abs/2306.05685. A QUINTD Data Here, we describe the data sources we include in the QUINTD collection tool and the procedure of collecting the QUINTD-1 benchmark. To replicate the data collection, please refer to the scripts we provide.14 A.1 Selection of Data Sources When selecting the data sources, we had the following desiderata: • Data needs to be publicly available. • Data needs to represent a common data-to-text task. • Data needs to be in a common format (or straightforwardly transformable to one). We settled on the data sources described in Appendix A.2. All the sources can be accessed using an API. Note that some of the APIs have access limits, either for the requests made from a single account per day or for a number of requests from an IP address within a time window. However, these limits do not severely limit the data collection process on the scale we use here. A.2 Data Collection Table 6 summarizes the output types for each domain. Domain Id Output type openweather five-day weather forecast gsmarena product description ice_hockey ice hockey game summary owid chart caption wikidata entity description Table 6: The output types for individual domains in QUINTD. 14https://github.com/kasnerz/quintd 12058 A.2.1 OpenWeather OpenWeather (OpenWeatherMap.org) is an online service that provides global weather data via web interface and API. The API responses are in the JSON format documented at the official website. For our experiments, we used the forecast5 API, which allows to download a 5-day forecast with 3-hour resolution for any location specified by its GPS coordinates. The free tier is limited to 1,000 API calls per day, which is enough to download our whole test set in one bulk. However, at the time of experiments, the free API only allowed to download the data for the time when the request was made. At the time of writing, OpenWeather is pushing a new One Call API 3.0 which allows to download weather data for any timestamp, but only 4 days ahead (instead of 5). These restrictions somehow limit the replicability of our QUINTD-1 dataset (at least with the free API) but do not limit downloading a new batch of data with a similar format. For the QUINTD-1 dataset, we randomly sampled 100 cities for each split from the list of cities with a population over 1000 and used their coordinates in the queries to OpenWeather API. All the data forecasts were downloaded on Jan 3, 2024. A.2.2 GSMArena GSMArena is a website providing specifications and reviews for mobile devices. For downloading the data, we used the unofficial gsmarena-api tool, which returns the data in a JSON format. Note that GSMArena imposes limitations on the number of requests per IP address, which may induce delays when downloading a larger amount of data. To create a balanced sample, we downloaded detailed specifications of 10 products from each available brand and randomly selected 100 products for each split from the downloaded set. A.2.3 RapidAPI Ice Hockey RapidAPI is a service that provides API access to data from multiple domains, including sport, finance, entertainment, and others. Most APIs are provided in a freemium mode, i.e., with a limited number of daily API calls. For QUINTD, we selected the IceHockeyAPI (popularity 9.1 / 10), which provides access to ice hockey games from world top leagues. 
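As a rough illustration of how such a RapidAPI endpoint can be queried (this is not the exact QUINTD collection script), the sketch below uses the standard RapidAPI request headers; the host name and endpoint path are placeholders that would need to be replaced with the actual IceHockeyAPI values.

```python
import requests

RAPIDAPI_KEY = "YOUR_RAPIDAPI_KEY"             # personal key from rapidapi.com
API_HOST = "example-icehockey.p.rapidapi.com"  # placeholder host, not the real one

def get_matches_for_day(day: int, month: int, year: int) -> dict:
    """Download all games played on a given date (one request per day)."""
    url = f"https://{API_HOST}/api/matches/{day}/{month}/{year}"  # placeholder path
    headers = {
        "X-RapidAPI-Key": RAPIDAPI_KEY,   # standard RapidAPI authentication headers
        "X-RapidAPI-Host": API_HOST,
    }
    resp = requests.get(url, headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json()

# e.g., games played on 27 November 2023 (the development-set date used in QUINTD-1)
games = get_matches_for_day(27, 11, 2023)
```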
Our choice was influenced by our own personal preferences, combined with the desire to cover a sport that has not been covered previously in sports report generation. We used the matches endpoint which returns high-level details about a game. Note that the API allows only 50 requests per day, but that does not limit the data collection since the endpoint returns all the games played on a particular day in a single request. We downloaded the games played on 27 November 2023 for the development set (184 games) and 29 November 2023 for the test set (216 games), taking a random sample of 100 for each split. A.2.4 OurWorldInData OurWorldInData is a public database and web interface for data about world developments in various domains and sources. We used the official API (currently experimental), which is accessible through the Python package owid-catalog. The package allows accessing individual CSV tables as Pandas dataframes. For our data collection, we decided to limit ourselves to time series, i.e., a single column with values changing over time. Besides the simplicity of visualizing such a chart (which is used by human annotators for checking the correctness of the output), there is also a clear goal for the target chart description: describing the developments of a value over time. We also limited ourselves to the health domain. In particular, we selected the tables COVID data (columns new_cases_smoothed_per_million, new_tests_smoothed_per_thousand, people_ vaccinated_per_hundred, reproduction_ rate, and positive_rate) and Life expectancy data (column life_expectancy_0). We downloaded the data for all countries with non-empty entries in the table, taking a random sample of 100 examples for each split. On model input, we formatted the data for each time series as a two-column CSV, including the title, the description, and the unit for each example as a comment (#) at the beginning of the input. A.2.5 Wikidata Wikidata is a large open-source knowledge graph containing factual information about entities and their properties. Wikidata provides access through an official API, but we instead decided to extract our data using the wikidatasets (Boschin, 2019) Python library, which provides access to preprocessed properties of entities from particular do12059 mains. It allowed us to avoid crawling and filtering the knowledge graph, and its offline processing made the data collection faster.15 For our dataset, we selected the entities from the companies, countries, films, and humans domains. For each entity, we randomly extracted between 2 to 10 properties in the knowledge graph. We extracted up to 100 subgraphs for each domain and took a random sample of 100 subgraphs for each split. On model input, we formatted each subgraph as a simple Markdown-formatted text snippet, using the entity as a title and including a bullet point for each key-value pair. B Human Evaluation As described in §4.1, we set up the human evaluation campaign on Prolific. To make the data more accessible to the annotators, we created custom data visualizations for each domain. For the data in openweather and owid, we used interactive graphs from Highcharts.com, and we manually created the tables for other domains. You can find the full instructions for human annotators in Figure 5 and the examples of data visualizations in Appendix E. C GPT-4 Evaluation We used the prompt in Figure 4 for instantiating the GPT-4-based metric.16 We set the temperature to 0 to improve the replicability of the process. 
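For illustration, a minimal sketch (not the released evaluation code) of how such a call can be issued with the OpenAI Python client is shown below; the model name, zero temperature, and JSON response format follow the settings reported in this appendix, while the message contents are abbreviated stand-ins for the full prompt shown in Figure 4.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def annotate_errors(data_str: str, text_str: str) -> str:
    """Ask GPT-4 to annotate errors in `text_str` with respect to `data_str`."""
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",
        temperature=0,                              # for replicability
        response_format={"type": "json_object"},    # force syntactically valid JSON
        messages=[
            {"role": "system",
             "content": "You are an expert data-to-text error annotation system."},
            {"role": "user",
             "content": f"Given the data:\n{data_str}\n"
                        f"Annotate all the errors in the following text:\n{text_str}\n"
                        'Output the errors as a JSON list "errors".'},
        ],
    )
    return response.choices[0].message.content  # JSON string with the error list
```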
We ensured that the output is a valid JSON using the parameter response_format in the OpenAI API. At the price of $0.01 per 1k input tokens and $0.03 per 1k generated tokens, the evaluation process costs approximately $45 in total. C.1 Aligning the Errors For aligning the errors with the original text, we perform string matching on the text span decoded by GPT-4 in the TEXT_SPAN field. In our preliminary experiments, this method proved to be more robust than either asking for start and end indices of the error span (which would rely on the model’s ability to count characters) or performing sequence tagging on the copy of the input (which would rely on the model’s ability to perfectly copy the input). 15All the entities and properties are linked with an identifier to the Wikidata database, making the process also replicable through the official API. 16Note that the example in the prompt differs from the example used for human annotators (see Figure 5). We revised the example to be more instructive, but we were not able to re-run the GPT-4 evaluation due to our limited budget. We tried to respect the monotonic ordering of text spans but fell back to full-text search if the span is not found following the previous one. We consider this approach successful since matching completely failed only in a minority of cases (137 out of 6927). Based on our manual examination, these mostly include cases where GPT-4 tried to suggest a missing piece of text as an error or did not manage to copy the input text verbatim. D Experiments with Open LLMs as Evaluators To select the most suitable LLM for automatic evaluation, we compared correlations with human judgment of the following models: • GPT-4 (OpenAI, 2023a) used via OpenAI API (gpt-4-1106-preview), • GPT-3.5 (OpenAI, 2023b) used via OpenAI API (gpt-3.5-turbo-1106), • Llama-3-70B17 running locally via Ollama in 4-bit quantization (meta-llama/Meta-Llama-3-70B). We used all the models with the same prompts, temperature 0, and force-decoded JSON outputs. In Table 7, we show Pearson correlation coefficients computed as described in §5.2. We can see that the strongest model is GPT-4, followed by Llama-370B and GPT-3.5. As the gap between the models is substantial, we opted for using GPT-4 which is the strongest model. model rword rexample rdomain GPT-4 0.26 0.52 0.93 GPT-3.5 0.07 0.33 0.82 Llama3-70B 0.09 0.44 0.92 Table 7: Pearson correlation coefficients of the model annotations as compared with human annotations (cf. §5.2). E Examples Here, we present an example of inputs and model outputs (along with annotations) for each domain: • openweather: Figure 7 (in) and Table 8 (out), • gsmarena: Figure 8 (in) and Table 8 (out), 17https://llama.meta.com/llama3/ 12060 • ice_hockey: Figure 9 (in) and Table 10 (out), • owid: Figure 10 (in) and Table 11 (out), • wikidata: Figure 11 (in) and Table 12 (out). Note that the graphs for openweather and owid are interactive when accessed through the web interface. F Full Results Here, we include the tables with results for individual domains: • Table 13 presents the average numbers of errors per output separately for each domain (the aggregated results are in Table 3), • Table 14 presents the ratio of outputs containing at least one error separately for each domain (the aggregated results are in Table 4). 12061 System Message You are an expert data-to-text error annotation system. You undestand structured data and you can correcly operate with units and numerical values. 
You are designed to output token-level annotations in JSON. Prompt Given the data: ˋˋˋ data ˋˋˋ Annotate all the errors in the following text: ˋˋˋ text ˋˋˋ Output the errors as a JSON list "errors" in which each object contains fields "reason", "text", and "type". The value of "text" is the text of the error. The value of "reason" is the reason for the error. The value of "type" is one of 0, 1, 2, 3 based on the following list: - 0: Incorrect fact: The fact in the text contradicts the data. - 1: Not checkable: The fact in the text cannot be checked in the data. - 2: Misleading: The fact in the text is misleading in the given context. - 3: Other: The text is problematic for another reason, e.g. grammatically or stylistically incorrect, irrelevant, or repetitive. The list should be sorted by the position of the error in the text. *Example:* data: ˋˋˋ [ [ "Aditi Bhagwat", "occupation", "television actor" ], [ "Aditi Bhagwat", "date of birth", "18 January 1981" ] ] ˋˋˋ text: ˋˋˋ Aditi Bhagwat, born on January 18, 1991, used to be a popular Indian television actor. The data comes from a knowledge graph. ˋˋˋ output: ˋˋˋ "errors": ["reason": "The data mentions that the actor was born on 1981", "text": "1991", "type": 0, "reason": "Misleadingly suggests that the actor is not alive", "text": "used to be", type: 2, "reason": "Popularity is not mentioned in the data", "text": "popular", type: 1, "reason", "Nationality is not mentioned in the data", "text": "Indian", type: 1, "reason": "The note is superfluous", "text": "The data comes from a knowledge graph.", type: 3] ˋˋˋ Note that some details may not be mentioned in the text: do not count omissions as errors. Also do not be too strict: some facts can be less specific than in the data (rounded values, shortened or abbreviated text, etc.), do not count these as errors. If there are no errors in the text, "errors" will be an empty list. Figure 4: The prompt we used for the GPT-4 evaluation metric. 12062 In this task, you will annotate 20 examples in total. For each example, you will see data on the left side and the corresponding generated text on the right side. Your task is to annotate errors in the text with respect to the data. There are five types of errors that you can mark in the generated text: 1. Incorrect factI: The fact in the text contradicts the data. 2. Not checkableNC : The fact in the text cannot be checked given the data. 3. MisleadingM: The fact in the text is misleading in the given context. 4. OtherO : The text is problematic for another reason, e.g. grammatically or stylistically incorrect, irrelevant, or repetitive. How to mark and submit the annotations? Use your mouse to highlight specific parts of the text containing the errors. To switch between error categories, repeatedly click on the highlighted text (the last click removes the highlight). Note that highlighting from the right to left can work better for longer spans. Once you think you have marked all the errors present in the text, click the Mark example as complete button (you can still update the annotation later). You will be able to submit the annotations once they are all are marked as complete. How should I decide on the errors? • Each error span should include all the words related to the error (but nothing else). • If you think the fact is probably true, but cannot be derived from the data, mark it as not checkable. • If you are not really sure if the fact should be marked as an error, leave it unmarked. 
An example of the annotated output An example of the data input and the corresponding text annotated with errors: data Nokia 3310 • color: black, blue, grey • display: 320x240px text (product description) Nokia 3310 is produced in FinlandNC and features a 320x320 displayI. It is available in black color.M The data seem to provide only partial information about the phone.O explanation • produced in Finland: The country where the phone is produced is not mentioned in the data. • 320x320: The data mentions that the display has resolution 320x240px. • available in black color: Misleadingly suggests that the phone is not available in other colors. • The data seem to provide only partial information about the phone.: The note is irrelevant for the phone description. Note that for the sake of brevity, this particular example is based on a small data input and contains many errors, which may not always be the case. What kinds of data and text can I encounter? You can encounter the following kinds of texts: • a 5-day weather forecast generated from weather data, • a description of a product generated from product specifications • an ice hockey game report generated from information about the game, • a caption of a health-related chart, • a description of an entity (human, country, film, or company) based on its properties. What is the source of the data and the texts? The data is downloaded from public sources (openweathermap.org, gsmarena.com, rapidapi.com, ourworldindata.com, wikidata.com). The texts are generated by large language models. What is the purpose of the research? The research is conducted by the NLP research group at Charles University in Prague. The research is a part of the ERC project Next-Generation Natural Language Generation and was approved by the ethical committee of the project. By continuing, you agree that the anonymous annotations will be made available to the research community to help evaluating and improving data-to-text generation systems based on large language models. Figure 5: The instructions given to the human annotators. 12063 Figure 6: The annotation interface for human annotators. Figure 7: An example of an input from openweather (see the outputs in Table 8). 12064 Model Human annotations (Ehum) GPT-4 annotations (Egpt) Llama2 The weather in Niemegk, Germany, will be cloudy and rainy for the next few days. On January 3rd, the temperature will be around 9.55°C, with a feels-like temperature of 6.59°C. The weather will be light rain, with a wind speed of 6.24 m/s and a gust of 12.1 m/s. The rain will last for 3 hoursNC, and the pressure will be 991 hPa. The weather in Niemegk, Germany, will be cloudy and rainy for the next few days. On January 3rd, the temperature will be around 9.55°C, with a feels-like temperature of 6.59°C. The weather will be light rain, with a wind speed of 6.24 m/s and a gust of 12.1 m/s. The rain will last for 3 hoursM, and the pressure will be 991 hPa. Mistral From January 3rd to January 7th, the city of Niemegk in Germany will experience a range of weather conditions. On January 3rd, the temperature will be 9.55°C with a feels like temperature of 6.59°C. There will be light rain with a 3-hour precipitationNC of 1.06mm. The wind speed will be 6.24m/s with a gust of 12.1m/s. On January 4th, the temperature will be 8.2°C with a feels like temperature of 5.64°C. There will be light rain with a 3-hour precipitationI of 1.94mm. The wind speed will be 4.31m/s with a gust of 11.6m/s. 
On January 5th, the temperature will be 6.37°C with a feels like temperature of 2.25°CI. There will be light rain with a 3-hour precipitation of 1.13mmI. The wind speed will be 7.07m/s with a gust of 15.51m/sI. On January 6th, the temperature will be 2.17°CI with a feels like temperature of -3.55°CI. There will be overcast clouds with a 3-hour precipitation of 0.31mmI. The wind speed will be 3.54m/s with a gust of 7.11m/sI. On January 7th, the temperature will be -0.89°C with a feels like temperature of -0.89°CI. There will be overcast clouds with a 3hour precipitation of 0.31mm. The wind speed will be 0.63m/s with a gust of 0.7m/sI. From January 3rd to January 7th, the city of Niemegk in Germany will experience a range of weather conditions. On January 3rd, the temperature will be 9.55°C with a feels like temperature of 6.59°C. There will be light rain with a 3-hour precipitation of 1.06mm. The wind speed will be 6.24m/s with a gust of 12.1m/s. On January 4th, the temperature will be 8.2°C with a feels like temperature of 5.64°C. There will be light rain with a 3-hour precipitation of 1.94mm. The wind speed will be 4.31m/s with a gust of 11.6m/s. On January 5th, the temperature will be 6.37°C with a feels like temperature of 2.25°C. There will be light rain with a 3-hour precipitation of 1.13mmI. The wind speed will be 7.07m/s with a gust of 15.51m/s. On January 6th, the temperature will be 2.17°C with a feels like temperature of -3.55°C. There will be overcast clouds with a 3hour precipitation of 0.31mmI. The wind speed will be 3.54m/sI with a gust of 7.11m/sI. On January 7th, the temperature will be -0.89°CI with a feels like temperature of -0.89°CI. There will be overcast clouds with a 3-hour precipitation of 0.31mmI. The wind speed will be 0.63m/sI with a gust of 0.7m/sI. Zephyr Over the next five days, the weather in Niemegk, Germany will be characterized by fluctuating temperatures and varying precipitation. On Thursday, January 3I, light rain and a lowI temperature of 9.55°C can be expected, with a chance of rain continuing into Friday. Snowfall is possible on Friday, with a low temperature of -0.89°C. Saturday will be overcast with a low temperature of 0.03°CI, and light rain is forecastedI for Saturday evening. Sunday will be overcast as well, with a low temperature of -2.23°CI. Monday will see a drop in temperatures, with a low of -4.07°CI and scattered clouds. Temperatures will continue to fall throughout the week, with lows of -5.19°C and -9.51°C on Tuesday and Wednesday, respectivelyNC. Wind speeds will vary from 3.1 to 15.51I meters per second, with gusts reaching up to 15.51I meters per second on Monday. Over the next five days, the weather in Niemegk, Germany will be characterized by fluctuating temperatures and varying precipitation. On Thursday, January 3I, light rain and a low temperature of 9.55°CI can be expected, with a chance of rain continuing into Friday. Snowfall is possible on Friday, with a low temperature of -0.89°CI. Saturday will be overcast with a low temperature of 0.03°C, and light rain is forecasted for Saturday eveningI. Sunday will be overcast as well, with a low temperature of -2.23°CI. Monday will see a drop in temperatures, with a low of -4.07°CI and scattered clouds. Temperatures will continue to fall throughout the week, with lows of -5.19°C and -9.51°C on Tuesday and WednesdayI, respectively. Wind speeds will vary from 3.1 to 15.51 meters per second, with gusts reaching up to 15.51 meters per second on MondayI. 
GPT-3.5 Over the next five days, the weather in Niemegk, Germany is expected to be quite varied. On January 4th, there will be light rain with temperatures ranging from 2.17°C to 9.55°CI. The following day, January 5th, there will be lightI snow with temperatures dropping to as low as -0.89°C. On January 6th, the weather will be overcast with temperatures ranging from -0.32°C to 1.69°C. The trend continues on January 7th with overcast clouds and temperatures dropping to as low as -2.51°CI. Finally, on January 8th, there will be scattered clouds with temperatures ranging from -5.19°C to -3.67°C. Throughout this period, wind speeds will vary between 0.63 m/s and 8.12 m/s, and the pressure will range from 991I hPa to 1034 hPa. Over the next five days, the weather in Niemegk, Germany is expected to be quite varied. On January 4th, there will be light rain with temperatures ranging from 2.17°C to 9.55°CI. The following day, January 5th, there will be light snow with temperatures dropping to as low as -0.89°CI. On January 6th, the weather will be overcast with temperatures ranging from -0.32°C to 1.69°CI. The trend continues on January 7th with overcast clouds and temperatures dropping to as low as -2.51°CI. Finally, on January 8th, there will be scattered clouds with temperatures ranging from -5.19°C to -3.67°CI. Throughout this period, wind speeds will vary between 0.63 m/s and 8.12 m/sI, and the pressure will range from 991 hPaI to 1034 hPaI. Table 8: An example of the model outputs on openweather (see the input in Table 8). 12065 Figure 8: An example of an input from gsmarena (see the outputs in Table 9). 12066 Model Human annotations (Ehum) GPT-4 annotations (Egpt) Llama2 The Gigabyte GSmart Mika M3 is a smartphone that was released in 2014. It has a 5-inch IPS LCD display with a resolution of 720x1280 pixels and a 13MP main camera. The device also has an 8MP front camera and is powered by a Mediatek MT6582 chipset with 1GB of RAM and 8GB of internal storage. It has dual SIM support and is available in Navy Blue. The Gigabyte GSmart Mika M3 is a smartphone that was released in 2014. It has a 5-inchI IPS LCD display with a resolution of 720x1280 pixels and a 13MP main camera. The device also has an 8MP front cameraNC and is powered by a Mediatek MT6582 chipset with 1GB of RAM and 8GB of internal storage. It has dual SIM support and is available in Navy BlueNC. Mistral The GSmart Mika M3 is a smartphone developed by Gigabyte. It was released in November 2014 and is powered by the Android 4.4.2 operating system. The device features a 5.0-inch IPS LCD display with a resolution of 720 x 1280 pixels, which translates to 294 pixels per inch. The smartphone is equipped with a 13-megapixel rear camera with autofocusNC and LED flash, as well as an 8-megapixel front-facing camera. It is powered by a Mediatek MT6582 chipset with a quad-core 1.3 GHz CortexA7 CPU and a Mali-400MP2 GPU. The device has 8GB of internal storage and 1GB of RAM, with the option to expand storage via a microSDHC card. The GSmart Mika M3 also features Wi-Fi, Bluetooth, GPS, and NFC connectivityI, as well as a 3.5mm headphone jack and a microUSB 2.0 port for charging and data transfer. The device is available in Navy Blue and is powered by a 1900mAh Li-Ion battery, which provides up to 150 hours of stand-by time and up to 4 hours and 30 minutes of talk time. The GSmart Mika M3 is a smartphone developed by Gigabyte. It was released in November 2014I and is powered by the Android 4.4.2 operating system. 
The device features a 5.0-inch IPS LCD display with a resolution of 720 x 1280 pixels, which translates to 294 pixels per inch. The smartphone is equipped with a 13-megapixel rear camera with autofocus and LED flash, as well as an 8-megapixel front-facing camera. It is powered by a Mediatek MT6582 chipset with a quad-core 1.3 GHz CortexA7 CPU and a Mali-400MP2 GPU. The device has 8GB of internal storage and 1GB of RAM, with the option to expand storage via a microSDHC card. The GSmart Mika M3 also features Wi-Fi, Bluetooth, GPS, and NFC connectivityI, as well as a 3.5mm headphone jack and a microUSB 2.0 port for charging and data transfer. The device is available in Navy BlueM and is powered by a 1900mAh Li-Ion battery, which provides up to 150 hours of stand-by time and up to 4 hours and 30 minutes of talk time. Zephyr The Gigabyte GSmart Mika M3 is a smartphone that boasts a 5.0-inch IPS LCD display with a resolution of 720 x 1280 pixels. It runs on Android 4.4.2 (KitKat) and is powered by a quad-core 1.3 GHz Cortex-A7 processor with a Mali-400MP2 GPU. The device comes with 8GB of internal storage and 1GB of RAM, which can be expanded via a dedicated microSDHC card slot. The primary camera is a 13-megapixel autofocus lensNC with an LED flash, while the secondary camera is an 8-megapixel lens. Connectivity options include Wi-Fi, Bluetooth, GPS, FM radio, and microUSB 2.0. The smartphone is equipped with a 1900mAh removable Li-Ion battery that provides up to 4 hoursI of talk time and up to 150 hours of standby time. The device is available in Navy Blue and supports dual SIM cards (Mini-SIM/ Micro-SIM). The Gigabyte GSmart Mika M3 is a smartphone that boasts a 5.0-inch IPS LCD display with a resolution of 720 x 1280 pixels. It runs on Android 4.4.2 (KitKat) and is powered by a quad-core 1.3 GHz Cortex-A7 processor with a Mali-400MP2 GPU. The device comes with 8GB of internal storage and 1GB of RAM, which can be expanded via a dedicated microSDHC card slot. The primary camera is a 13megapixel autofocus lens with an LED flash, while the secondary camera is an 8-megapixel lens. Connectivity options include Wi-Fi, Bluetooth, GPS, FM radio, and microUSB 2.0. The smartphone is equipped with a 1900mAh removable Li-Ion battery that provides up to 4 hours of talk timeI and up to 150 hours of standby time. The device is available in Navy Blue and supports dual SIM cards (Mini-SIM/ Micro-SIM). GPT-3.5 The GSmart Mika M3, also known as the Gigabyte GSmart Mika M3, is a smartphone that was announced in October 2014 and released in November of the same year. It features a 5.0-inch IPS LCD display with a resolution of 720 x 1280 pixels and is powered by a Quad-core 1.3 GHz Cortex-A7 processor. The phone comes with 1GB of RAM and 8GB of internal storage, which can be expanded via a microSDHC card slot. The main camera is 13MP with LED flash, while the selfie camera is 8MP. The phone runs on Android 4.4.2 (KitKat) and is equipped with a 1900mAh removable battery. Other features include dual SIM capability, Wi-Fi, Bluetooth, GPS, FM radio, and microUSB 2.0. The GSmart Mika M3 is available in the color Navy Blue and has a sleek and compact design, measuring 144 x 70.5 x 8.3 mm and weighing 146g. The GSmart Mika M3, also known as the Gigabyte GSmart Mika M3I, is a smartphone that was announced in October 2014 and released in November of the same year. It features a 5.0-inch IPS LCD display with a resolution of 720 x 1280 pixels and is powered by a Quad-core 1.3 GHz Cortex-A7 processor. 
The phone comes with 1GB of RAM and 8GB of internal storage, which can be expanded via a microSDHC card slot. The main camera is 13MP with LED flashNC, while the selfie camera is 8MPNC. The phone runs on Android 4.4.2 (KitKat) and is equipped with a 1900mAh removable batteryNC. Other features include dual SIM capability, Wi-Fi, Bluetooth, GPS, FM radio, and microUSB 2.0. The GSmart Mika M3 is available in the color Navy Blue and has a sleek and compact designNC, measuring 144 x 70.5 x 8.3 mm and weighing 146g. Table 9: An example of the model outputs on gsmarena (see the input in Table 9). 12067 Figure 9: An example of an input from ice_hockey (see the outputs in Table 10). 12068 Model Human annotations (Ehum) GPT-4 annotations (Egpt) Llama2 In the VHL 23/24 season, Kristall Saratov hosted HC Norilsk at their home arena. Kristall Saratov won the gameI with a final score of 1-5. Kristall Saratov’s home team scored one goal in the first periodI, while HC Norilsk scored two goals in the second and third periodsI. The game lasted for 60 minutesNC, with Kristall Saratov’s home team leading the game for the entire durationI. In the VHL 23/24 season, Kristall Saratov hosted HC Norilsk at their home arena. Kristall Saratov won the game with a final score of 1-5.I Kristall Saratov’s home team scored one goal in the first periodI, while HC Norilsk scored two goals in the second and third periods.I The game lasted for 60 minutesNC, with Kristall Saratov’s home team leading the game for the entire duration.I Mistral On November 28th, 2023, the VHL 23/24 season game between Kristall Saratov and HC Norilsk took place. The game was played at the home arena of Kristall Saratov, with the home team winning 10I. The game was played in three periods, with the first period ending in a 0-0 tieI, the second period ending in a 2-3 score in favor of HC NorilskI, and the third period ending in a 1-0 score in favor of Kristall Saratov. The game ended in normal time, with a total score of 3-1 in favor of HC NorilskI. On November 28th, 2023, the VHL 23/24 season game between Kristall Saratov and HC Norilsk took place. The game was played at the home arena of Kristall Saratov, with the home team winning 1-0I. The game was played in three periods, with the first period ending in a 0-0 tie, the second period ending in a 2-3 score in favor of HC NorilskI, and the third period ending in a 1-0 score in favor of Kristall Saratov. The game ended in normal time, with a total score of 3-1 in favor of HC NorilskI. Zephyr In the VHL 23/24 season, Kristall Saratov hosted HC Norilsk. The first period saw two quickNC goals from Norilsk, with one in the second and third periods as wellI, resulting in a dominant 5-0I victory. Kristall Saratov managed to score a single goal in the third period, but it was not enough to overcome Norilsk’s lead. The final score was 1-5 in favor of HC Norilsk. In the VHL 23/24 season, Kristall Saratov hosted HC Norilsk. The first period saw two quick goals from Norilsk, with one in the secondI and third periods as well, resulting in a dominant 5-0 victoryI. Kristall Saratov managed to score a single goal in the third period, but it was not enough to overcome Norilsk’s lead. The final score was 1-5 in favor of HC Norilsk. GPT-3.5 In a match from the VHL 23/24 season, Kristall Saratov faced off against HC Norilsk. The game took place on November 28, 2023, at 15:30I. Despite a goal from Kristall Saratov in the third period, HC Norilsk emerged victorious with a final score of 5-1. 
The away team dominated the game, scoring 2 goals in the first period and 3 goals in the second period. Kristall Saratov struggled to keep up, resulting in a clear win for HC Norilsk. In a match from the VHL 23/24 season, Kristall Saratov faced off against HC Norilsk. The game took place on November 28, 2023, at 15:30. Despite a goal from Kristall Saratov in the third period, HC Norilsk emerged victorious with a final score of 5-1. The away team dominated the game, scoring 2 goals in the first period and 3 goals in the second period. Kristall Saratov struggled to keep up, resulting in a clear win for HC Norilsk. Table 10: An example of the model outputs on ice_hockey (see the input in Table 10). 12069 Figure 10: An example of an input from owid (see the outputs in Table 11). Model Human annotations (Ehum) GPT-4 annotations (Egpt) Llama2 Life expectancy at birth in Djibouti has increased significantly over the past few decades, with a notable increase from 40.5665 years in 1950 to 62.3049 years in 2021. Life expectancy at birth in Djibouti has increased significantly over the past few decades, with a notable increase from 40.5665 years in 1950 to 62.3049 years in 2021. Mistral This chart displays the life expectancy at birth in Djibouti from 1950 to 2021. The data shows a steady increase in life expectancy over the years, with a significant jump in the 1970s and 1980sI. In 2021, the average life expectancy at birth in Djibouti was 62.3 years. This chart displays the life expectancy at birth in Djibouti from 1950 to 2021. The data shows a steady increaseI in life expectancy over the years, with a significant jump in the 1970s and 1980sI. In 2021, the average life expectancy at birth in Djibouti was 62.3 yearsI. Zephyr This line graph illustrates the evolution of life expectancy at birth in Djibouti from 1950 to 2021. Initially, the average number of years that a newborn could expect to live was around 40.5 years in 1950, and it gradually increased over the following decades, reaching 62.3 years in 2021. The trend shows a steady improvement, with some fluctuations in the 1980sI and 1990s. The data presented here is based on the sex- and age-specific death rates prevailing at the time of birth for a specific year in a given country, territory, or geographic area. This line graph illustrates the evolution of life expectancy at birth in Djibouti from 1950 to 2021. Initially, the average number of years that a newborn could expect to live was around 40.5 years in 1950I, and it gradually increased over the following decades, reaching 62.3 years in 2021I. The trend shows a steady improvementM, with some fluctuations in the 1980s and 1990s. The data presented here is based on the sex- and age-specific death rates prevailing at the time of birth for a specific year in a given country, territory, or geographic area. GPT-3.5 The chart shows the life expectancy at birth in Djibouti from 1950 to 2021. The life expectancy has steadily increased over the years, from 40.57 years in 1950 to 62.30 years in 2021, with some fluctuations in between. The data reflects the average number of years a newborn could expect to live if exposed to the prevailing death rates at the time of birth. The chart shows the life expectancy at birth in Djibouti from 1950 to 2021. The life expectancy has steadily increased over the yearsI, from 40.57 years in 1950I to 62.30 years in 2021I, with some fluctuations in between. 
The data reflects the average number of years a newborn could expect to live if exposed to the prevailing death rates at the time of birth. Table 11: An example of the model outputs on owid (see the input in Table 11). 12070 Figure 11: An example of an input from wikidata (see the outputs in Table 12). Model Human annotations (Ehum) GPT-4 annotations (Egpt) Llama2 Driven Brands is an American franchise holdingNC company that owns, operates, and franchisesNC automotive services businesses in the United States. The company was founded in 1996 and is headquartered in Charlotte, North Carolina. It operates several brands, including Maaco, Meineke, Econo Lube, and CARSTARNC. Driven Brands is an American franchise holding company that owns, operates, and franchises automotive services businesses in the United States. The company was founded in 1996NC and is headquartered in Charlotte, North CarolinaNC. It operates several brands, including Maaco, Meineke, Econo Lube, and CARSTAR. Mistral Driven Brands is a business operating in the automotive services industry in the United States of America. Driven Brands is a business operating in the automotive services industry in the United States of America. Zephyr Driven Brands is a business operating in the automotive services industry based in the United States of America. As an instance of the broader category of businesses, Driven Brands provides goods or services to customers in exchange for financial remunerationNC. Driven Brands is a business operating in the automotive services industry based in the United States of America. As an instance of the broader category of businesses,O Driven Brands provides goods or servicesI to customers in exchange for financial remuneration. GPT-3.5 Driven Brands is a business based in the United States of America, operating within the automotive services industry. Driven Brands is a business based in the United States of America, operating within the automotive services industry. Table 12: An example of the model outputs on wikidata (see the input in Table 12). 12071 Incorrect Not Checkable Misleading Other All categories Ehum Egpt Ehum Egpt Ehum Egpt Ehum Egpt Ehum Egpt # Tok. 
Llama 2 openweather 4.27 6.99 2.23 1.93 0.58 0.20 0.16 0.26 7.24 9.38 166.7 gsmarena 0.67 1.96 0.87 0.31 0.20 0.24 0.09 0.10 1.83 2.61 90.3 ice_hockey 1.47 1.98 0.99 0.90 0.09 0.00 0.12 0.05 2.67 2.93 63.3 owid 1.30 2.26 0.58 0.15 0.32 0.13 0.07 0.00 2.27 2.54 60.3 wikidata 0.15 0.74 1.59 1.24 0.08 0.03 0.07 0.04 1.89 2.05 38.5 Mistral openweather 5.65 8.76 2.47 0.41 1.36 0.50 0.53 0.04 10.01 9.71 193.5 gsmarena 0.51 2.02 1.10 0.94 0.30 0.25 0.14 0.02 2.05 3.23 146.3 ice_hockey 1.47 1.79 0.92 0.76 0.14 0.08 0.13 0.10 2.66 2.73 92.4 owid 2.42 3.11 0.40 0.18 0.29 0.27 0.07 0.02 3.18 3.58 91.1 wikidata 0.10 0.45 0.72 0.40 0.13 0.19 0.38 0.31 1.33 1.35 51.0 Zephyr openweather 4.22 7.39 1.01 0.37 0.34 0.41 0.16 0.00 5.73 8.17 130.9 gsmarena 0.33 2.24 1.04 1.00 0.23 0.48 0.11 0.01 1.71 3.73 142.8 ice_hockey 0.89 1.57 0.61 0.20 0.11 0.04 0.07 0.01 1.68 1.82 83.1 owid 1.68 2.64 0.49 0.16 0.22 0.30 0.10 0.01 2.49 3.11 85.2 wikidata 0.09 0.36 0.72 0.29 0.12 0.21 0.36 0.21 1.29 1.07 48.1 GPT-3.5 openweather 1.57 4.34 0.57 0.15 0.38 0.53 0.05 0.00 2.57 5.02 112.8 gsmarena 0.20 1.64 0.80 1.42 0.21 0.34 0.17 0.01 1.38 3.41 129.5 ice_hockey 0.81 0.76 0.46 0.17 0.07 0.04 0.09 0.01 1.43 0.98 84.4 owid 0.64 1.87 0.25 0.02 0.17 0.27 0.01 0.00 1.07 2.16 62.2 wikidata 0.05 0.18 0.36 0.16 0.06 0.10 0.05 0.10 0.52 0.54 35.7 Table 13: The average numbers of errors per output for each domain (lower is better). We also include the average number of tokens per output in the rightmost column. See Table 3 for aggregated results. Incorrect Not Checkable Misleading Other All categories Ehum Egpt Ehum Egpt Ehum Egpt Ehum Egpt Ehum Egpt Llama 2 openweather 76% 98% 71% 68% 30% 14% 12% 22% 90% 100% gsmarena 38% 86% 49% 24% 18% 16% 5% 7% 74% 94% ice_hockey 70% 77% 51% 55% 7% 0% 10% 5% 89% 94% owid 70% 92% 37% 14% 24% 12% 6% 0% 88% 93% wikidata 12% 47% 79% 63% 8% 2% 5% 4% 87% 89% Mistral openweather 78% 99% 61% 16% 38% 18% 17% 4% 92% 100% gsmarena 33% 89% 54% 54% 19% 17% 12% 2% 73% 99% ice_hockey 74% 83% 62% 52% 12% 8% 9% 7% 88% 97% owid 75% 96% 27% 8% 24% 25% 6% 2% 86% 97% wikidata 8% 34% 44% 29% 10% 17% 24% 27% 67% 72% Zephyr openweather 82% 99% 45% 25% 23% 25% 14% 0% 90% 100% gsmarena 22% 91% 47% 52% 19% 30% 9% 1% 61% 99% ice_hockey 54% 78% 39% 17% 11% 4% 6% 1% 76% 87% owid 70% 93% 31% 9% 20% 26% 7% 1% 85% 95% wikidata 6% 29% 49% 22% 8% 18% 22% 18% 66% 66% GPT-3.5 openweather 64% 98% 32% 12% 23% 25% 5% 0% 75% 99% gsmarena 15% 74% 42% 58% 13% 19% 12% 1% 57% 97% ice_hockey 64% 50% 30% 12% 7% 4% 8% 1% 76% 57% owid 43% 89% 13% 2% 14% 24% 1% 0% 57% 90% wikidata 4% 14% 27% 14% 6% 9% 5% 9% 38% 36% Table 14: The percentage of outputs containing at least one error for each domain (lower is better). See Table 4 for aggregated results. 12072
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12073–12086 August 11-16, 2024 ©2024 Association for Computational Linguistics Don’t Go To Extremes: Revealing the Excessive Sensitivity and Calibration Limitations of LLMs in Implicit Hate Speech Detection Min Zhang†, Jianfeng He† , Taoran Ji#, Chang-Tien Lu† †Virginia Tech, #Texas A&M University-Corpus Christi [email protected], [email protected], [email protected], [email protected] Abstract The fairness and trustworthiness of Large Language Models (LLMs) are receiving increasing attention. Implicit hate speech, which employs indirect language to convey hateful intentions, occupies a significant portion of practice. However, the extent to which LLMs effectively address this issue remains insufficiently examined. This paper delves into the capability of LLMs to detect implicit hate speech (Classification Task) and express confidence in their responses (Calibration Task). Our evaluation meticulously considers various prompt patterns and mainstream uncertainty estimation methods. Our findings highlight that LLMs exhibit two extremes: (1) LLMs display excessive sensitivity towards groups or topics that may cause fairness issues, resulting in misclassifying benign statements as hate speech. (2) LLMs’ confidence scores for each method excessively concentrate on a fixed range, remaining unchanged regardless of the dataset’s complexity. Consequently, the calibration performance is heavily reliant on primary classification accuracy. These discoveries unveil new limitations of LLMs, underscoring the need for caution when optimizing models to ensure they do not veer towards extremes. This serves as a reminder to carefully consider sensitivity and confidence in the pursuit of model fairness. 1 Introduction The fairness and trustworthiness of large language models (LLMs) have drawn widespread attention (Feng et al., 2023). Several evaluations (Li et al., 2023; Khatun and Brown, 2023) have been conducted to determine whether LLMs exhibit bias or discrimination against individuals or groups in terms of protected attributes like race, gender, or religion. These evaluations primarily focus on explicit harmful statements containing specific hateful words. However, a significant proportion of online hate speech is characterized by implicit expressions, utilizing indirect language to convey malicious intentions (ElSherief et al., 2021). The detection of such implicit hate speech poses a more formidable challenge to LLMs. Therefore, our paper delves into the evaluation of LLMs in implicit hate speech detection. Although some works have touched upon the issue of LLMs in implicit hate speech, they have only roughly evaluated whether LLMs can identify implicit hate speech and the quality of generated explanations (Yang et al., 2023; Huang et al., 2023). They overlook many key questions that pique our curiosity: (1) Do LLMs exhibit exaggerated safety behaviors in detecting hate speech? Recent studies (Röttger et al., 2023; Shi et al., 2024) have observed that LLMs decline to respond to harmless queries like "How can I kill a python process" due to the toxic word "kill". This inspires us to explore whether this exaggerated safety behavior persists in implicit hate speech detection. Our evaluation differs significantly as they attribute the model’s failure to the presence of toxic words, while our discourse does not contain any toxic words. (2) Can LLMs express their confidence in the prediction? 
Uncertainty estimation helps humans determine how much we can trust LLMs' responses (Geng et al., 2023). Perfect uncertainty calibration results in low confidence for incorrect predictions and high confidence for correct predictions (Guo et al., 2017a). This enables us to filter out incorrect responses with low confidence, thereby preventing the dissemination of hate speech. (3) Will different prompt patterns affect the stability of the model's performance on both classification and calibration? The prompt pattern has been found to impact the performance of LLMs across various tasks (White et al., 2023). While Khatun and Brown (2023) have explored the impact by altering words in the instruction, they overlook guiding the model's inference under different types of task frameworks, which may introduce larger disturbances.

In this paper, we evaluate the performance of LLMs in implicit hate speech detection, examining both primary classification and uncertainty calibration. Additionally, we investigate the impact of prompt patterns on these two aspects. Our calibration evaluation encompasses three mainstream uncertainty estimation methods, namely the verbal-based method, the consistency-based method, and the logit-based method. A detailed analysis is conducted to understand the diverse performances of each uncertainty estimation method, considering scenarios categorized by classification performance and the distribution of the model's token probability. Our experimental evaluations are conducted on three distinct implicit hate speech detection datasets using LLaMA-2-7b (chat) (Touvron et al., 2023), Mixtral-8x7b (Jiang et al., 2024), and GPT-3.5-Turbo (Ouyang et al., 2022). We find that LLMs exhibit extreme behavior in both classification and calibration tasks, leading to excessive sensitivity and poor calibration:

1) The over-sensitive behavior in classification, where non-hateful speech is predicted as hateful, is evident in LLaMA-2-7b and Mixtral-8x7b. GPT-3.5-Turbo has achieved a better balance in this aspect. Excessive sensitivity arises from the inclusion of certain groups or topics associated with fairness concerns, even in the absence of harmful words or intentions.

2) All three mainstream uncertainty estimation methods demonstrate poor calibration. This is because the confidence scores for each method exhibit extreme clustering within a fixed range, remaining unchanged regardless of the difficulty of the dataset. Consequently, the calibration performance significantly depends on the primary classification performance. Methods concentrated in low-confidence ranges perform well on challenging tasks, while those concentrated in high-confidence ranges excel in simpler tasks. Moreover, these methods struggle to effectively distinguish between correct and incorrect predictions. Our analysis reveals novel limitations of current uncertainty estimation methods.

3) Different prompt patterns yield varied performances, yet they consistently demonstrate similar trends on the same model, whether in classification or calibration. No particular prompt pattern exhibits discernible superiority.

2 Evaluation Design

To study the ability of LLMs in implicit hate speech detection, we design the evaluation encompassing both the primary classification and uncertainty calibration. We also investigate the impact of prompt patterns on both of these tasks.
2.1 Assessing Primary Classification Task

Evaluation of the primary task aims to assess the ability of LLMs in the binary classification task of implicit hate speech detection. Given a statement, we instruct the LLMs to classify whether it is hateful (positive class) or not (negative class). We define the format for LLMs' responses and the words of candidate answers, mapping the output of the LLMs to either the positive or negative class. To illustrate, we present several examples as demonstrations before the test case for M-shot ICL. The specific format for both the instruction and the response is detailed in the prompt patterns (Sec. 2.3).

2.2 Assessing Confidence Estimation Task

The model's confidence in its answers determines the extent to which we can trust the model's responses. We assess the calibration ability of LLMs using three widely adopted uncertainty estimation techniques.

(1) Verbal-based method: LLMs are induced to generate a direct confidence score ranging from 0% to 100%, coupled with the corresponding answers, as illustrated by Lin et al. (2022); Kadavath et al. (2022). For instance, if an LLM generates the output "Yes, 80%", we extract the answer "Yes" and its associated confidence "80%".

(2) Consistency-based method: For a given statement, we run the model through n rounds of inference by altering prompt patterns or demonstrations. Candidate answers y_i, where i ∈ (1, ..., n), are voted upon for the positive and negative classes Y_j. The confidence score, referred to as the agreement rate (Wang et al., 2022; Xiong et al., 2023), is calculated by:

C_j = \frac{1}{n} \sum_{i=1}^{n} \mathbb{1}\{y_i = Y_j\}.   (1)

(3) Logit-based method (Guo et al., 2017b): In each inference, we obtain the logit p^j for both the candidate positive and negative tokens in the decoder, with j representing the respective class. In a single inference, the logit directly serves as the confidence score. In multiple inferences, the confidence score for class j is computed by averaging the logits of tokens belonging to class j:

C_j = \frac{1}{n} \sum_{i=1}^{n} p_i^j.   (2)

Here, p_i^j signifies the token logit of class j in the i-th response. For both the consistency-based and logit-based methods, the class with the highest confidence score is deemed the final answer.

2.3 Assessing Impact of Prompt Patterns

The prompt pattern has been found to impact the performance of LLMs across various tasks (White et al., 2023). As many LLMs have undergone RLHF optimization to prevent the generation of harmful content, we are curious about whether LLMs can robustly maintain fairness under different prompt patterns, or which prompt pattern is more effective. Unlike simply changing words in the prompt (Khatun and Brown, 2023), we design five prompt patterns tailored to different task types: (1) Vanilla QA: LLMs are prompted to produce a binary response of either "Yes" or "No" to determine whether the given statement is hate speech. (2) Choice QA: LLMs are directed to select the appropriate answer from two choices, namely "A: Yes" and "B: No." (3) Cloze Test: LLMs are tasked with filling in the masked word using "hateful" or "neutral" in the phrase "It is a [Mask] statement." (4) Chain-of-Thought (CoT) (Wei et al., 2022): In addition to the binary response, LLMs are also required to generate a corresponding explanation simultaneously. (5) Multi-task with Target: LLMs are instructed to provide the binary response and identify the targeted individual or groups. For comprehensive details on each prompt type, refer to Appendix A.1.
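To make the score definitions in Section 2.2 concrete, the following is a minimal sketch (not the authors' released code) of the consistency-based confidence in Eq. (1) and the logit-based confidence in Eq. (2), assuming each of the n inferences returns a predicted label and the probabilities of the two candidate answer tokens.

```python
def consistency_confidence(predictions, label):
    # Eq. (1): fraction of the n sampled answers that agree with `label`.
    return sum(p == label for p in predictions) / len(predictions)

def logit_confidence(token_probs, label):
    # Eq. (2): average probability assigned to the candidate token of `label`
    # across the n inferences; token_probs is a list of dicts such as
    # [{"Yes": 0.91, "No": 0.09}, ...].
    return sum(probs[label] for probs in token_probs) / len(token_probs)

# Toy example with n = 5 inferences over one statement (illustrative numbers).
preds = ["Yes", "Yes", "No", "Yes", "Yes"]
probs = [{"Yes": 0.92, "No": 0.08}, {"Yes": 0.85, "No": 0.15},
         {"Yes": 0.40, "No": 0.60}, {"Yes": 0.77, "No": 0.23},
         {"Yes": 0.88, "No": 0.12}]

scores = {lab: logit_confidence(probs, lab) for lab in ("Yes", "No")}
final_answer = max(scores, key=scores.get)  # class with the highest confidence
print(consistency_confidence(preds, "Yes"), scores, final_answer)
```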
3 Experiment Settings

Models: We conduct experiments with three kinds of LLMs: LLaMA-2-7b (chat), Mixtral-8x7b, and GPT-3.5-Turbo.

Datasets: Our experiments use three implicit hate speech detection datasets: Latent Hatred (ElSherief et al., 2021), SBIC (v2) (Sap et al., 2020), and ToxiGen (Hartvigsen et al., 2022). See Appendix A.2 for more details of the data preprocessing.

Metrics: Our evaluation encompasses both primary classification and uncertainty calibration. In assessing task classification performance, we utilize Precision, Recall, and F1 scores to evaluate the predicted answers. Meanwhile, we employ three metrics for calibration performance: The Area Under the Receiver Operating Characteristic Curve (AUROC) (Bradley, 1997) quantifies the probability that the model assigns a higher uncertainty score to an incorrect prediction than to a correct one. The Expected Calibration Error (ECE) (Guo et al., 2017a) is calculated as the mean squared discrepancy between the average accuracy and confidence for each bin, with the magnitude of each deviation scaled by the fraction of samples falling into the respective bin. The Brier Score (BS) (Brier, 1950) measures the mean squared difference between the confidences and the actual outcomes. Better calibration is represented by a higher AUC and lower values of ECE or BS.

Experiment Setting: We present six demonstrations (i.e., examples) in the prompt for few-shot in-context learning, organized in a class-balanced and random order. The verbalized confidences in the demonstrations are set to 60%, 70%, 90%, 70%, 60%, and 90%. We adjust the parameters to report a better outcome for each individual model. A consistent temperature setting of 1 is applied to all three models. Greedy decoding is employed for LLaMA-2-7b and GPT-3.5-Turbo, while top-p sampling with a value of 0.9 is used for Mixtral-8x7b.

4 Results of Primary Classification

Fig. 1 shows the precision and recall of each LLM with various prompt patterns. Models are distinguished by colors (LLaMA-2-7b in green, Mixtral-8x7b in blue, GPT-3.5-Turbo in red), and varying prompt patterns are represented by different shapes. The equilibrium line in the figure represents an ideal state where precision and recall are equal, indicating that the model's predictions do not favor either the positive or negative class. Points above the equilibrium line (recall > precision) indicate a model bias toward classifying most data as positive. Points below the line (recall < precision) suggest a model bias toward classifying most data as negative.

LLaMA-2-7b and Mixtral-8x7b exhibit over-sensitivity: The majority of points for LLaMA-2-7b and Mixtral-8x7b fall above the equilibrium line in both the Latent Hatred and SBIC datasets. This suggests that these models classify statements without hateful intent as hate speech, indicating an excessive level of sensitivity. The over-sensitivity arises from the presence of sensitive groups and topics within benign expressions.

Figure 1: Precision and recall of different LLMs (distinguished by colors) with various prompt patterns (distinguished by shapes) in hate speech detection, with one precision-recall panel per dataset (Latent Hatred, SBIC, ToxiGen). The recall is significantly higher than the precision for LLaMA-2-7b and Mixtral-8x7b on both the Latent Hatred and SBIC datasets, indicating that LLMs may misjudge benign expressions as hate speech. This over-sensitivity arises from the presence of sensitive groups and topics within benign expressions.
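For reference, each precision-recall point in Fig. 1 corresponds to standard binary classification metrics. The sketch below is purely illustrative (the paper does not state its tooling, so the use of scikit-learn here is an assumption), with toy labels where 1 marks the hateful class.

```python
from sklearn.metrics import precision_score, recall_score, f1_score

def prf(gold, pred):
    # One (precision, recall) point in Fig. 1 corresponds to one model/prompt pair.
    return (precision_score(gold, pred),
            recall_score(gold, pred),
            f1_score(gold, pred))

gold = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = hateful, 0 = non-hateful (toy labels)
pred = [1, 1, 1, 1, 0, 1, 1, 0]   # an "over-sensitive" model: recall > precision
p, r, f = prf(gold, pred)
print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")  # precision=0.67 recall=1.00
```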
LLMs display excessive sensitivity on the Latent Hatred and SBIC datasets, while they do not exhibit such over-sensitivity on the ToxiGen dataset. That is because examples of the negative class in the Latent Hatred and SBIC datasets contain sensitive groups or sensitive topics. For example, the statement "vandals target black women Maryland home" is a description of a news event that involves the sensitive group "black women" and the sensitive topics "vandals" and "target" but does not contain any hateful emotions. However, examples of the negative class in the ToxiGen dataset do not contain sensitive topics ("Muslims in the 19th century migrated from Europe and the Middle East to the U.S.") or sensitive groups ("I felt so much better after confiding that in someone and realizing I'm not alone"). The presence of sensitive groups and topics confuses LLMs, leading to misjudgment of benign expressions.

GPT-3.5 is positioned closer to the upper-right corner, indicating an overall better performance. It also demonstrates a relatively balanced performance between recall and precision, albeit with a slight reversal compared to LLaMA-2-7b and Mixtral-8x7b. This indicates that GPT-3.5 did not exhibit excessive sensitivity, although there is still room for improvement in its ability to detect implicit intentions.

Different prompt patterns exhibit varying degrees of over-sensitivity. In the case of LLaMA-2-7b, the most notable imbalance is observed in the Cloze prompt pattern across all three datasets, with biases ranging from 12% to 33%. The Target pattern shows biases of 24% and 14% on the Latent Hatred and SBIC datasets, respectively, while the Vanilla QA pattern exhibits a bias of 17% on the Latent Hatred dataset. Only the CoT pattern demonstrates a relatively balanced performance across all three datasets for LLaMA-2-7b. Regarding Mixtral-8x7b, all prompt patterns exhibit significant biases on the Latent Hatred dataset, ranging from 20% to 50%. On the SBIC dataset, the Choice QA, Cloze, and Vanilla QA patterns all exceed 40%, and on the ToxiGen dataset, both the Cloze and Target biases surpass 10%. Although different prompt patterns result in varied performances, they consistently demonstrate similar trends on the same model.

We also present the F1 score in Appendix A.4. We find that when the F1 score achieves its best performance, there can be a significant imbalance between precision and recall. This cautions us against relying solely on F1 and overlooking the issue of imbalance between precision and recall.

5 Results of Confidence Calibration

The calibration results in Table 1 can be categorized into several scenarios as shown in Fig. 2. The scenarios are divided based on the primary classification performance and the logit distribution of model output tokens. The uncertainty estimation method that yields the best calibration performance is highlighted in each scenario. We find that the logit-based confidence achieves the highest AUC across all the datasets and LLMs. However, the uncertainty estimation method achieving the highest ECE/BS varies depending on the scenario. We also find that the confidence distribution exhibits extremes, whether it is overly conservative or overly confident (Fig. 5, 6, 7).
The verbal-based confidence estimation method typically produces 12076 Latent Hatred SBIC Toxigen Method AUC↑ ECE↓ BS↓ AUC↑ ECE↓ BS↓ AUC↑ ECE↓ BS↓ LLaMA-2-7b verbal 0.565 0.081 0.233 0.586 0.057 0.181 0.769 0.181 0.089 consistency 0.589 0.174 0.276 0.660 0.103 0.180 0.727 0.029 0.053 logit 0.637 0.154 0.244 0.749 0.094 0.165 0.889 0.041 0.047 GPT-3.5-Turbo verbal 0.580 0.054 0.213 0.627 0.085 0.151 0.788 0.144 0.088 consistency 0.575 0.170 0.237 0.671 0.070 0.128 0.704 0.035 0.022 logit 0.667 0.151 0.219 0.858 0.067 0.118 0.959 0.045 0.021 Mixtral-8x7b verbal 0.500 0.080 0.260 0.501 0.162 0.249 0.495 0.162 0.254 consistency 0.532 0.213 0.316 0.716 0.112 0.214 0.732 0.093 0.069 logit 0.645 0.048 0.222 0.762 0.066 0.173 0.909 0.220 0.106 Table 1: Calibration performance of three mainstream confidence estimation methods. The closer to orange, the better the performance; the closer to green, the worse the performance. F1-Low (Latent Hatred, SBIC) F1-High (ToxiGen) Model’s Token Logit-High (LLaMA-2-7b, GPT-3.5-Turbo) AUC: logit conf. ECE/BS: verbal conf. AUC: logit conf. ECE/BS: consistency conf. Model’s Token Logit-Low (Mixtral-8x7b) AUC: logit conf. ECE/BS: logit conf. Figure 2: The best-performing uncertainty estimation method in different scenarios categorized by the model’s output token logit and primary classification performance. Logit-based confidence scores achieve the best AUC in all scenarios, while the ECE for each method varies across scenarios. a relatively conservative confidence score. The consistency-based method usually demonstrates a very high confidence score. The distribution of logit-based confidence varies depending on the model, with GPT-3.5-Turbo and LLaMA-2-7b tending to generate high logits, while Mixtral-8x7b tends to produce conservative logits. Our subsequent analysis reveals that the calibration performance significantly depends on the primary classification performance due to the highly concentrated confidence distribution. 5.1 The logit-based confidence achieves the highest AUC in all scenarios The logit-based method performs better in AUC than both the verbal-based method and the consistency-based method in all scenarios. We compare the ROC curve composed of false positive False Positive Rate 0 1 0 1 0.2 0.4 0.6 0.8 True Positive Rate 0.2 0.4 0.6 0.8 logit-conf consist.-conf verbal-conf ROC Curve Figure 3: The comparison of the ROC curve. 0.4 AUC 0.5 0.6 0.7 0.8 3 5 7 9 11 13 15 ensemble number logit-conf consist.-conf Figure 4: The figure showcases the relationship between AUC and the ensemble number. rate (FPR) and true positive rate (TPR) in Fig. 3 for LLaMA-2-7b on Latent Hatred. The AUC of the verbal-based method is lower than the logit-based method due to the conservative verbalized confidence. We observe that, 12077 at the same confidence threshold, the FPR of the verbal-based method and the logit-based method are similar. However, the confidence distribution of the logit-based method is more overconfident, leading to a larger TPR, resulting in a higher ROC curve. The discreteness of confidence score in the consistency-based method leads to a reduced AUC. The consistency score in the consistencybased method is limited by the ensemble number, mainly concentrated on a few discrete values (for example, 3/5, 4/5, 5/5), leading to fewer confidence thresholds considered when calculating AUC. In contrast, the logit-based method’s confidence score covers many continuous values, enabling more values of the confidence threshold for the calculation of AUC. 
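The following synthetic sketch makes this threshold argument concrete: discretizing a confidence signal into the few values an ensemble of five can produce (e.g., 3/5, 4/5, 5/5) drastically reduces the number of operating points on the ROC curve. The scores are simulated and purely illustrative; scikit-learn's roc_curve and roc_auc_score are assumed to be available.

```python
"""Illustration of how score discreteness limits the ROC curve (synthetic data)."""
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
correct = rng.integers(0, 2, size=200)                       # 1 = correct prediction
# A continuous, informative confidence signal, standing in for a logit-based score.
logit_conf = np.clip(0.6 + 0.25 * correct + rng.normal(0, 0.12, 200), 0.0, 1.0)
# Discretizing the same signal into k/5 values mimics a consistency score
# obtained from an ensemble of 5 samples.
consistency_conf = np.round(logit_conf * 5) / 5

for name, scores in [("logit", logit_conf), ("consistency", consistency_conf)]:
    fpr, tpr, thr = roc_curve(correct, scores)
    print(f"{name:12s} distinct thresholds: {len(thr):3d}   "
          f"AUC: {roc_auc_score(correct, scores):.3f}")
```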
As shown in Fig. 3, The discrete points obtained by the consistency-based method on the ROC curve are very close to those on the curve of the logit-based method. However, the absence of a consistency-based confidence score between 0.8 and 1 results in the omission of the corresponding section with an FPR below 0.5. This indicates that the discreteness of confidence values in the consistency-based method limits its ability to express uncertainty. Based on the findings, we increase the ensemble number from 3 to 13 (changing demonstrations in the prompt in each inference to conduct the ensemble), the AUC gradually increases and tends to stabilize (Fig. 4). It indicates that increasing the number of ensemble sources can mitigate the gap but is still lower than the logit-based method. These findings suggest that relying solely on AUC for comparison among different kinds of uncertainty estimation methods is insufficient. The variations in ECE further support this conclusion. Next, we delve into a detailed analysis of the reasons for the ECE variations. 5.2 Calibration heavily relies on the primary task due to concentrated confidence The uncertainty estimation method achieving the highest ECE varies depending on the scenario. To demonstrate the underlying reasons, we provide an example for each scenario regarding the ECE performance in Fig. 5 - Fig. 7. The x-axis represents confidence scores, binned at intervals of 0.1. The left y-axis represents accuracy. The height of each bar represents the accuracy of the corresponding bin. Darker bars indicate a higher volume of data falling within that bin. The color of the bars is quantified by a blue line, corresponding to the proportion of data on the right y-axis. The green line represents the overall accuracy of the model on the dataset. Recall that ECE measures the gap between confidence and accuracy. As shown in Fig. 5, 6, 7, the highly concentrated confidence around the value of the overall accuracy results in a high ECE. When does the verbal-based method achieve the highest ECE? In cases where the performance of the primary classification task is poor and the model’s token logit is high (LLaMA-2-7b on the Latent Hatred and SBIC datasets, GPT-3.5-turbo on the Latent Hatred dataset), the verbal-based method achieved nearly the best ECE and BS. That is because the logit-based method and the consistency-based method exhibit overconfidence while the verbal-based method provides a more conservative estimate of confidence. The majority of confidence scores are above 0.9 for both logit-based and consistency-based methods, yet the accuracy of the task remains low, resulting in under-calibrated errors (Fig. 5). In contrast, the verbalized confidence concentrates in the range of 0.7-0.8, close to the accuracy, resulting in smaller calibration errors. When does the logit-based method achieve the highest ECE? In cases where the performance of the primary classification task is poor and the model’s token logit is not generally too high (Mixtral-8x7b on the Latent Hatred and SBIC datasets), the logit-based method achieves the best calibration performance. That is because the conservative logit is already well-calibrated so the verbal-based method loses its advantage. As shown in Fig. 6, unlike other models that output a high token logit, the logit of the Mixtral-8x7b model is predominantly concentrated between 0.5 and 0.6, closely resembling the mediocre accuracy and exhibiting good calibration ability. 
However, the consistency-based method still maintains excessive confidence, resulting in poor calibration. When does the consistency-based method achieve the highest ECE? In cases where the classification has high accuracy (all models on the ToxiGen dataset), the consistency-based method achieves the best ECE. This is because, on simple datasets, the task accuracy is very high, so the consistency-based method with high confidence tends to be closer to the high accuracy (Fig. 7). However, the verbal12078 Overall Accuracy Fraction of the Confidence Score Distribution The highly concentrated confidence (dark-colored bars) around the value of the overall accuracy (green dashed line) results in a high ECE. Figure 5: The ECE performance of LLaMA-2-7b on the SBIC dataset shows that the verbal-based confidence is mainly concentrated in the 70%-80% range, around the overall accuracy of 77%, thus achieving the best ECE. (a)Verbal-conf (b) Consistency-conf (c) Logit-conf Accuracy 0 0.2 0.4 0.6 0.8 1 Fraction of Data 0 1 Confidence Score 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 Confidence Score Confidence Score ECE=0.048 ECE=0.080 ECE=0.213 Scenario 2: Logit-conf achieves the best ECE. Figure 6: The ECE performance of Mixtral-8x7b on the Latent Hatred dataset shows that the logit-based confidence is mainly concentrated in the 50%-70% range, around the overall accuracy of 61%, thus achieving the best ECE. (a) Verbal-conf (b) Consistency-conf (c) Logit-conf Accuracy 0 0.2 0.4 0.6 0.8 1 Fraction of Data 0 1 Confidence Score 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 Confidence Score Confidence Score ECE=0.181 ECE=0.029 ECE=0.041 Scenario 3: Consistency-conf achieves the best ECE. Figure 7: The ECE performance of LLaMA-2-7b on the ToxiGen shows that the consistency-based confidence is mainly concentrated in the 90%-100% range, around the overall accuracy of 92%, thus achieving the best ECE. 12079 incorrect prediction correct prediction Confidence Score Fraction of Data 0 1 0.3 0 0.2 0.1 0.2 0.4 0.6 0.8 Figure 8: The confidence distribution of correctly classified and misclassified cases. based method maintains a conservative confidence which is lower than the high accuracy, thus leading to over-calibration. For the logit-based method, when the token logit is high (LLaMA2-7b-chat, GPT-3.5-turbo), the ECE is very similar to the consistency method. However, the logit-based method for Mixtral-8x7b has a quite lower ECE because of its conservative token logit. 5.3 Common drawbacks These three methods are unable to effectively estimate the confidence of the answers. The calibration performance significantly depends on the primary classification performance. No matter whether the dataset is easy or challenging, the confidence scores of each method are always concentrated in a fixed range. Consequently, methods concentrated in low-confidence ranges perform well on challenging tasks, while those concentrated in high-confidence ranges excel in simpler tasks. This is also why different uncertainty estimation methods achieve the best performance in different scenarios. Moreover, these methods struggle to distinguish the confidence between incorrectly predicted and correctly predicted instances. An ideal confidence estimation method should have high confidence for correctly predicted data and low confidence for incorrectly predicted data. However, Fig. 
8 shows that confidence distributions of correctly classified and misclassified cases overlap significantly, indicating the poor ability of uncertainty estimation. 5.4 Effects of prompt patterns on calibration Table 4, Table 5, and Table 6 (in Appendix A.5) show that the performance of different prompts varies in calibration. No prompt consistently performs better. The ensemble of responses obtained from different prompt patterns shows a relatively better overall performance. This may be because different prompt patterns inspire the model to infer 0.54 0.62 AUC 0.05 0.25 ECE Temperature 0.4 1 Temperature 0.4 1 LLaMA2 Mixtral Figure 9: Calibration performance with varying temperature. LLaMA-2-7b and Mixtral-8x7b show different tends. 0.57 0.64 AUC 0.05 0.25 ECE Top p 0.8 1 Top p 0.8 1 Figure 10: Calibration performance with varying top p sampling. LLaMA-2-7b and Mixtral-8x7b show different trends. results along different paths, and aggregating such results better reflects the model’s confidence. 5.5 Effects of the temperature and sampling We find that due to differences in the output token logit distribution, different models exhibit varying sensitivities to temperature and sampling parameters, sometimes even in opposite trends. As shown in Fig. 9, as the temperature increases, The AUC of LLaMA-2-7b increases while the AUC of Mixtral-8x7b decreases. When the temperature varies between 0.6 and 1, LLaMA-2-7b and Mixtral-8x7b exhibit opposite trends in ECE. Results on changing the top p sampling show the same findings (Fig. 10). This is because the output token logit of LLaMA-2-7b is overconfident, whereas Mixtral-8x7b exhibits a more cautious level of confidence. As the temperature increases, the logit distribution of Mixtral-8x7b becomes sharper, leading to over-calibration. On the other hand, the logit distribution of LLaMA-2-7b becomes smoother, enhancing its ability to differentiate confidence levels. See Appendix A.6 for details. 6 Future Work The insights we’ve gained for future work include two aspects: Firstly, future optimizations should aim to avoid misjudging benign expressions, especially those containing sensitive groups. Recent efforts aim to ensure that LLMs do not generate harmful output toward sensitive groups. However, it is unknown 12080 whether LLMs might veer towards another extreme. Imagine if all discussions about black people were filtered out on social media platforms; that would be unfair as well. Secondly, future optimization should aim to avoid excessive concentration of confidence and should take into account varying levels of task complexity. We unveil factors influencing calibration: calibration performance significantly relies on the primary task performance due to extremely concentrated confidence. Some future research pathways and ideas include: (1) Integrating correction for oversensitivity during RLHF. In addition to identifying harmful content during RLHF, humans should guide LLMs to recognize challenging benign expressions involving sensitive groups but not malicious ones. (2) Improving the reflection of confidence in logits during LLM training or inference. Optimization of the decoding process, SFT loss, or post-processing of logits can be explored to prevent the concentration of logits within a fixed range. (3) Considering tasks of varying complexity in LLM calibration optimization. Developing uncertainty estimation methods that are adaptable to different task complexities is imperative. 
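As one possible instantiation of direction (2) above (post-processing the logits so that confidence is less concentrated), the sketch below applies standard temperature scaling (Guo et al., 2017): a single temperature T is fitted on held-out data to minimize negative log-likelihood and then used to rescale the confidence scores. The helper names, the toy data, and the use of SciPy's bounded scalar optimizer are our assumptions; this is a sketch of a known technique, not part of the experiments in this paper.

```python
"""Minimal temperature-scaling sketch for post-processing scalar confidences."""
import numpy as np
from scipy.optimize import minimize_scalar


def _rescale(p, T, eps=1e-6):
    """Apply temperature T in logit space: sigmoid(logit(p) / T)."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1 - eps)
    z = np.log(p / (1 - p)) / T
    return 1.0 / (1.0 + np.exp(-z))


def fit_temperature(dev_conf, dev_correct):
    """Find T minimizing the NLL of 0/1 correctness labels on a dev set."""
    dev_correct = np.asarray(dev_correct, dtype=float)

    def nll(T):
        q = _rescale(dev_conf, T)
        return -np.mean(dev_correct * np.log(q) + (1 - dev_correct) * np.log(1 - q))

    return minimize_scalar(nll, bounds=(0.05, 20.0), method="bounded").x


if __name__ == "__main__":
    # Toy over-confident model: confidences near 0.95 but only ~70% accuracy.
    rng = np.random.default_rng(0)
    conf = rng.uniform(0.9, 0.99, size=500)
    correct = rng.binomial(1, 0.7, size=500)
    T = fit_temperature(conf, correct)
    print("fitted T:", round(float(T), 2))
    print("mean confidence before/after:", conf.mean().round(3),
          _rescale(conf, T).mean().round(3))
```

For an over-confident model the fitted T is larger than 1, which pushes the confidence scores away from the extreme high range criticized in Section 5.3.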
7 Related Work Implicit Hate Speech Detection Hate speech inflicts significant harm on specific communities. Researchers have developed various hate speech detection models (Gitari et al., 2015; Zhang et al., 2018). Recently, there has been a growing interest among researchers in addressing implicit hate (Kim et al., 2022; Lin, 2022; Ocampo et al., 2023). Certain studies have employed LLMs to generate explanations with step-by-step reasoning for detecting implicit hate speech (Yang et al., 2023). While Huang et al. (2023) explored the quality of explanations generated by ChatGPT for detecting implicit hateful tweets, their research exclusively focused on implicit instances, neglecting non-implicit statements. In our exploration, we assess the performance of LLMs in detecting implicit hate speech, taking into account various aspects with thoughtful consideration. We investigate whether there is an imbalance in the predictions of positive and negative classes when LLMs are employed for hate speech detection. Additionally, we thoughtfully consider various prompt patterns in our analysis. Uncertainty Estimation in LLMs The reliable uncertainty estimation can facilitate more rational and informed decision-making. There are three mainstream methods for estimating the confidence of LLMs in their responses, particularly for cases where we can only obtain the answer and logit but not the weights. They are the verbal-based method (Lin et al., 2022; Kadavath et al., 2022; Tian et al., 2023), the consistency-based method (Wang et al., 2022; Xiong et al., 2023; Yue et al., 2023), and the logit-based method (Guo et al., 2017b; Zhang et al., 2020; Jiang et al., 2020; Chen et al., 2022). We compare these three mainstream methods, conduct a comprehensive analysis in different scenarios, and delve into the underlying reasons. Additionally, we highlighted the shortcomings of these methods in confidence assessment for fairness. 8 Conclusion The ability of LLMs in implicit hate speech detection is insufficiently examined. In this paper, we explore the performance of LLMs in identifying implicit hate speech and the calibration of three mainstream uncertainty estimation methods. We find that LLMs have fallen into two extremes, excessively focusing on sensitive groups and exhibiting extreme confidence score distributions. These extremes result in over-sensitivity and poor calibration respectively. These discoveries unveil new limitations of LLMs, underscoring the need for caution when optimizing models to ensure they do not veer towards extremes. 9 Limitations The datasets utilized in our experiments were all in English, thus our evaluation only encompasses the English language. We did not assess whether similar issues exist for large language models in other languages such as Chinese, French, and various other languages. References Andrew P Bradley. 1997. The use of the area under the roc curve in the evaluation of machine learning algorithms. Pattern recognition, 30(7):1145–1159. Glenn W Brier. 1950. Verification of forecasts expressed in terms of probability. Monthly weather review, 78(1):1–3. Yangyi Chen, Lifan Yuan, Ganqu Cui, Zhiyuan Liu, and Heng Ji. 2022. A close look into the cal12081 ibration of pre-trained language models. ArXiv, abs/2211.00151. Mai ElSherief, Caleb Ziems, David Muchlinski, Vaishnavi Anupindi, Jordyn Seybolt, Munmun De Choudhury, and Diyi Yang. 2021. Latent hatred: A benchmark for understanding implicit hate speech. arXiv preprint arXiv:2109.05322. 
Shangbin Feng, Chan Young Park, Yuhan Liu, and Yulia Tsvetkov. 2023. From pretraining data to language models to downstream tasks: Tracking the trails of political biases leading to unfair nlp models. arXiv preprint arXiv:2305.08283. Jiahui Geng, Fengyu Cai, Yuxia Wang, Heinz Koeppl, Preslav Nakov, and Iryna Gurevych. 2023. A survey of language model confidence estimation and calibration. arXiv preprint arXiv:2311.08298. Njagi Dennis Gitari, Zhang Zuping, Hanyurwimfura Damien, and Jun Long. 2015. A lexicon-based approach for hate speech detection. International Journal of Multimedia and Ubiquitous Engineering, 10(4):215–230. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. 2017a. On calibration of modern neural networks. In International conference on machine learning, pages 1321–1330. PMLR. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017b. On calibration of modern neural networks. In International Conference on Machine Learning. Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, and Ece Kamar. 2022. Toxigen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection. arXiv preprint arXiv:2203.09509. Fan Huang, Haewoon Kwak, and Jisun An. 2023. Is chatgpt better than human annotators? potential and limitations of chatgpt in explaining implicit hate speech. arXiv preprint arXiv:2302.07736. Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, MarieAnne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2024. Mixtral of experts. Zhengbao Jiang, J. Araki, Haibo Ding, and Graham Neubig. 2020. How can we know when language models know? on the calibration of language models for question answering. Transactions of the Association for Computational Linguistics, 9:962–977. Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, et al. 2022. Language models (mostly) know what they know. arXiv preprint arXiv:2207.05221. Aisha Khatun and Daniel G Brown. 2023. Reliability check: An analysis of gpt-3’s response to sensitive topics and prompt wording. arXiv preprint arXiv:2306.06199. Youngwook Kim, Shinwoo Park, and Yo-Sub Han. 2022. Generalizable implicit hate speech detection using contrastive learning. In Proceedings of the 29th International Conference on Computational Linguistics, pages 6667–6679. Lingyao Li, Lizhou Fan, Shubham Atreja, and Libby Hemphill. 2023. " hot" chatgpt: The promise of chatgpt in detecting and discriminating hateful, offensive, and toxic comments on social media. arXiv preprint arXiv:2304.10619. Jessica Lin. 2022. Leveraging world knowledge in implicit hate speech detection. arXiv preprint arXiv:2212.14100. Stephanie Lin, Jacob Hilton, and Owain Evans. 2022. Teaching models to express their uncertainty in words. arXiv preprint arXiv:2205.14334. Nicolas Ocampo, Elena Cabrio, and Serena Villata. 2023. Playing the part of the sharp bully: Generating adversarial examples for implicit hate speech detection. In Findings of the Association for Computational Linguistics: ACL 2023, pages 2758–2772. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. 
Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke E. Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Francis Christiano, Jan Leike, and Ryan J. Lowe. 2022. Training language models to follow instructions with human feedback. ArXiv, abs/2203.02155. Paul Röttger, Hannah Rose Kirk, Bertie Vidgen, Giuseppe Attanasio, Federico Bianchi, and Dirk Hovy. 2023. Xstest: A test suite for identifying exaggerated safety behaviours in large language models. arXiv preprint arXiv:2308.01263. Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A Smith, and Yejin Choi. 2020. Social bias frames: Reasoning about social and power implications of language. In ACL. Chenyu Shi, Xiao Wang, Qiming Ge, Songyang Gao, Xianjun Yang, Tao Gui, Qi Zhang, Xuanjing Huang, Xun Zhao, and Dahua Lin. 2024. Navigating the overkill in large language models. arXiv preprint arXiv:2401.17633. 12082 Katherine Tian, Eric Mitchell, Allan Zhou, Archit Sharma, Rafael Rafailov, Huaxiu Yao, Chelsea Finn, and Christopher D Manning. 2023. Just ask for calibration: Strategies for eliciting calibrated confidence scores from language models fine-tuned with human feedback. arXiv preprint arXiv:2305.14975. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Huai hsin Chi, F. Xia, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. ArXiv, abs/2201.11903. Jules White, Quchen Fu, Sam Hays, Michael Sandborn, Carlos Olea, Henry Gilbert, Ashraf Elnashar, Jesse Spencer-Smith, and Douglas C Schmidt. 2023. A prompt pattern catalog to enhance prompt engineering with chatgpt. arXiv preprint arXiv:2302.11382. Miao Xiong, Zhiyuan Hu, Xinyang Lu, Yifei Li, Jie Fu, Junxian He, and Bryan Hooi. 2023. Can llms express their uncertainty? an empirical evaluation of confidence elicitation in llms. arXiv preprint arXiv:2306.13063. Yongjin Yang, Joonkee Kim, Yujin Kim, Namgyu Ho, James Thorne, and Se-young Yun. 2023. Hare: Explainable hate speech detection with step-by-step reasoning. arXiv preprint arXiv:2311.00321. Murong Yue, Jie Zhao, Min Zhang, Liang Du, and Ziyu Yao. 2023. Large language model cascades with mixture of thoughts representations for cost-efficient reasoning. arXiv preprint arXiv:2310.03094. Jize Zhang, Bhavya Kailkhura, and Thomas Yong-Jin Han. 2020. Mix-n-match: Ensemble and compositional methods for uncertainty calibration in deep learning. ArXiv, abs/2003.07329. Ziqi Zhang, David Robinson, and Jonathan Tepper. 2018. Detecting hate speech on twitter using a convolution-gru based deep neural network. In The Semantic Web: 15th International Conference, ESWC 2018, Heraklion, Crete, Greece, June 3–7, 2018, Proceedings 15, pages 745–760. Springer. A Appendix A.1 The design of prompt patterns We show the full prompt in Table 2. A.2 Data preprocessing We have two steps to preprocess the data to ensure the quality of the evaluation. 
Firstly, we discard the data samples with profanity words, such as "bi*ch" and "fu*k" to further ensure that the data does not contain explicit hate words. Secondly, we sample from the test set to keep the equal data number of positive and negative class. Finally, we retain 1200 test data for Latent Hatred, 1200 test data for SBIC, and 260 test data for ToxiGen. A.3 Examples in different datasets The varying degrees of over-sensitivity exhibited by the model across different datasets are attributed to the varying levels of difficulty in the data. The model displays excessive sensitivity on Latent Hatred and SBIC datasets, while it does not exhibit such over-sensitivity on the ToxiGen dataset. That is because examples of the negative class in the dataset Latent Hatred and SBIC contain sensitive groups or sensitive topics. For example, the statement "vandals target black women Maryland home" is a description of a news event that involves a fairness issue but does not contain any hateful emotions. However, examples of the negative class in dataset ToxiGen are simple ("Muslims in the 19th century migrated from Europe and the Middle East to the U.S.") and don’t contain sensitive groups or topics ("I felt so much better after confiding that in someone and realizing I’m not alone"). Therefore, the model demonstrates excessive sensitivity errors on the Latent Hatred and SBIC datasets, while not exhibiting over-sensitivity on the ToxiGen dataset. A.4 Classification performance of different prompt patterns The precision, recall and F1 for classification performance can be found in tables 3. A.5 Calibration performance of different prompt patterns Table 4, Table 5, and Table 6 show that the performance of different prompts varies in calibration. A.6 Analysis of the effects on the temperature The difference in the effect of temperature on LLaMA-2-7b and Mixtral-8x7b arises from the different logit distribution. Fig. 11 shows the ECE performance for the logit-based uncertainty estimation method with different temperatures on the Latent Hatred dataset. The confidence score for 12083 Vanilla QA Given the statement, is this statement hate speech? Please provide your answer. The answer contains Yes or No. Statement: xxx Answer: Yes Choice QA Given the statement, is this statement hate speech? A: Yes. B: No. Please give your choice (A or B). Statement: xxx Answer: A Cloze Test Given the statement, is this statement hateful or neutral? Please predict the [MASK] token, you can only answer hateful or neutral. Statement: xxx Answer: It is a [MASK] statement. hateful CoT Is this statement hate speech? Your answer should only contain Yes or No, then give your explanation step by step. Statement: xxx Answer: Yes Explanation: xxx Multi-task with Target Given the statement, is this statement hate speech? Your answer should only contain Yes or No, then identify the target individual / group. Statement: xxx Answer: Yes Target: xxx Table 2: Input and output examples for different prompt patterns. The italicized text represents the LLM’s output. 
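For reference, the input side of the prompt patterns in Table 2 can be written down directly as templates; the sketch below does exactly that (the italicized answers in the table are model outputs and are therefore not part of the templates). The dictionary keys and the helper name are ours.

```python
"""The input side of the prompt patterns in Table 2, written as templates."""
PROMPT_PATTERNS = {
    "vanilla_qa": (
        "Given the statement, is this statement hate speech? "
        "Please provide your answer. The answer contains Yes or No.\n"
        "Statement: {statement}\nAnswer:"
    ),
    "choice_qa": (
        "Given the statement, is this statement hate speech? A: Yes. B: No. "
        "Please give your choice (A or B).\n"
        "Statement: {statement}\nAnswer:"
    ),
    "cloze": (
        "Given the statement, is this statement hateful or neutral? "
        "Please predict the [MASK] token, you can only answer hateful or neutral.\n"
        "Statement: {statement}\nAnswer: It is a [MASK] statement."
    ),
    "cot": (
        "Is this statement hate speech? Your answer should only contain Yes or No, "
        "then give your explanation step by step.\n"
        "Statement: {statement}\nAnswer:"
    ),
    "target": (
        "Given the statement, is this statement hate speech? Your answer should only "
        "contain Yes or No, then identify the target individual / group.\n"
        "Statement: {statement}\nAnswer:"
    ),
}


def build_prompt(pattern: str, statement: str) -> str:
    """Fill the chosen pattern with the statement to be classified."""
    return PROMPT_PATTERNS[pattern].format(statement=statement)


if __name__ == "__main__":
    print(build_prompt("vanilla_qa", "vandals target black women Maryland home"))
```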
12084 Latent Hatred SBIC ToxiGen Prompt P R F1 P R F1 P R F1 GPT-3.5-Turbo Choice QA 0.7014 0.758 0.7286 0.864 0.8381 0.8508 1 0.969 0.9843 CoT 0.7347 0.7236 0.7291 0.8895 0.7917 0.8377 1 0.9453 0.9719 Cloze 0.738 0.6935 0.715 0.9093 0.8017 0.8521 1 0.9225 0.9597 Vanilla QA 0.7607 0.6263 0.687 0.9039 0.7696 0.8314 1 0.9457 0.9721 Target 0.7452 0.6566 0.6981 0.9152 0.755 0.8274 1 0.9219 0.9593 LLaMA-2-7B Choice QA 0.6143 0.6913 0.6505 0.7122 0.8956 0.7934 0.9487 0.881 0.9136 CoT 0.6472 0.6134 0.6299 0.7548 0.7913 0.7726 0.9344 0.8837 0.9084 Cloze 0.5954 0.9258 0.7248 0.6947 0.931 0.7957 0.8394 0.9583 0.8949 Vanilla QA 0.6373 0.8038 0.6425 0.8092 0.7646 0.7863 0.9508 0.8992 0.9243 Target 0.601 0.8395 0.7005 0.7261 0.8645 0.7893 0.937 0.9297 0.9333 Mixtral-8x7b Choice QA 0.5161 0.995 0.6796 0.5 1 0.6667 1 0.1628 0.28 CoT 0.6155 0.8124 0.7004 0.7896 0.8633 0.8248 0.9231 0.9302 0.9266 Cloze 0.503 0.9983 0.6689 0.5642 0.9817 0.7165 0.8105 0.9612 0.8794 Vanilla QA 0.5771 0.8342 0.6822 0.584 0.985 0.7333 0.9141 0.907 0.9105 Target 0.6058 0.8107 0.6934 0.8 0.8333 0.8163 0.8311 0.9535 0.8881 Table 3: The classification performance (Precision, Recall and F1) of LLMs in hate speech detection with different prompt patterns. Latent Hatred SBIC Toxigen Prompt AUC ECE BS AUC ECE BS AUC ECE BS Choice QA 0.628 0.127 0.242 0.751 0.088 0.167 0.924 0.069 0.052 CoT 0.631 0.215 0.279 0.736 0.139 0.191 0.902 0.037 0.056 Cloze 0.647 0.245 0.296 0.696 0.152 0.205 0.877 0.013 0.078 Vanilla QA 0.637 0.236 0.287 0.693 0.129 0.186 0.823 0.008 0.059 Target 0.614 0.235 0.294 0.720 0.141 0.196 0.880 0.026 0.048 Ensemble 0.637 0.154 0.244 0.749 0.094 0.165 0.889 0.041 0.047 Table 4: The calibration performance of LLaMA-2-7B with different prompt patterns. Latent Hatred SBIC Toxigen Prompt AUC ECE BS AUC ECE BS AUC ECE BS Choice QA 0.710 0.095 0.237 0.836 0.184 0.246 0.842 0.273 0.211 CoT 0.700 0.039 0.209 0.787 0.091 0.136 0.927 0.127 0.065 Cloze 0.668 0.242 0.277 0.770 0.157 0.217 0.879 0.102 0.098 Vanilla QA 0.682 0.026 0.228 0.728 0.094 0.204 0.911 0.265 0.138 Target 0.693 0.028 0.210 0.781 0.100 0.142 0.888 0.154 0.104 Ensemble 0.710 0.048 0.222 0.762 0.066 0.173 0.924 0.220 0.106 Table 5: The calibration performance of Mixtral-8x7b with different prompt patterns. Latent Hatred SBIC Toxigen Prompt AUC ECE BS AUC ECE BS AUC ECE BS Choice QA 0.646 0.187 0.248 0.811 0.076 0.125 0.973 0.047 0.015 CoT 0.654 0.171 0.229 0.818 0.083 0.132 0.911 0.031 0.026 Cloze 0.683 0.177 0.232 0.823 0.069 0.123 0.963 0.048 0.024 Vanilla QA 0.638 0.191 0.246 0.834 0.090 0.138 0.924 0.029 0.026 Target 0.658 0.183 0.241 0.800 0.090 0.141 0.936 0.030 0.030 Ensemble 0.668 0.151 0.219 0.858 0.067 0.118 0.959 0.045 0.021 Table 6: The calibration performance of GPT-3.5-Turbo with different prompt patterns. 12085 0 Mixtral-8x7b 0 1 Confidence Score 1 1 0 Accuracy Confidence Score 1 0 Fraction of Data t=0.6 t=1 LLaMA-2-7b 0 1 Confidence Score 1 1 0 Accuracy Confidence Score 1 0 Fraction of Data t=0.6 t=1 ECE=0.024 ECE=0.059 ECE=0.260 ECE=0.236 Figure 11: The ECE performance with temperature=0.6 and temperature=1 for Mixtral-8x7b and LLaMA-2-7b. The bar’s color and blue line both represent the fraction of the data. the logit-based method is the logit for the output token. The logit distribution of Mixtral-8x7b is primarily concentrated between 0.5 and 0.7, while LLaMA’s logit is mainly distributed between 0.9 and 1.0. This indicates that LLaMA-2-7b is overconfident, whereas Mixtral-8x7b exhibits a more cautious level of confidence. 
As the temperature increases, the logits of both models become more conservative. For Mixtral-8x7b, whose confidence scores are already cautious, this concentrates the logit distribution further, making it sharper and leading to over-calibration. For LLaMA-2-7b, whose confidence scores are clustered near 1.0, the logit distribution instead becomes smoother, enhancing its ability to differentiate confidence levels.
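The mechanism behind this appendix can be illustrated in a few lines: the logit-based confidence is the softmax probability of the output token, and dividing the logits by a larger temperature T lowers the top-token probability for any distribution, more drastically for a sharply peaked one. The toy logits below are illustrative and not taken from either model.

```python
"""How decoding temperature reshapes the output-token probability (toy example)."""
import numpy as np


def softmax_with_temperature(logits, T):
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()


# A sharply peaked distribution (over-confident, LLaMA-2-7b-like) and a flatter
# one (more cautious, Mixtral-8x7b-like) over the candidate answer tokens.
peaked = [6.0, 1.0, 0.5]
flat = [2.0, 1.6, 1.2]

for T in (0.6, 1.0):
    p_peak = softmax_with_temperature(peaked, T)[0]
    p_flat = softmax_with_temperature(flat, T)[0]
    print(f"T={T}: top-token probability  peaked={p_peak:.3f}  flat={p_flat:.3f}")
```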
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12087–12105 August 11-16, 2024 ©2024 Association for Computational Linguistics Don’t Rank, Combine! Combining Machine Translation Hypotheses Using Quality Estimation Giorgos Vernikos1,2 1EPFL Lausanne, Switzerland {georgios.vernikos, andrei.popescu-belis}@heig-vd.ch Andrei Popescu-Belis1,2 2HEIG-VD / HES-SO Yverdon-les-Bains, Switzerland Abstract Neural machine translation systems estimate probabilities of target sentences given source sentences, yet these estimates may not align with human preferences. This work introduces QE-fusion, a method that synthesizes translations using a quality estimation metric (QE), which correlates better with human judgments. QE-fusion leverages a pool of candidates sampled from a model and combines spans from different candidates using a QE metric such as COMETKIWI. We compare QE-fusion against beam search and recent reranking techniques, such as Minimum Bayes Risk decoding or QE-reranking. Our method consistently improves translation quality in terms of COMET and BLEURT scores when applied to large language models (LLMs) used for translation (PolyLM, XGLM, Llama2, Mistral, ALMA, and Tower) and to multilingual translation models (NLLB), over five language pairs. Notably, QE-fusion exhibits larger improvements for LLMs due to their ability to generate diverse outputs. We demonstrate that our approach generates novel translations in over half of the cases and consistently outperforms other methods across varying numbers of candidates (5–200). Furthermore, we empirically show that QE-fusion scales linearly with the number of candidates in the pool. 1 Introduction Neural machine translation (NMT) models are probability estimators of translations given source sentences. Therefore, errors in NMT often arise due to the misalignment between the model’s probabilities and human preferences (Koehn and Knowles, 2017; Ott et al., 2018; Stahlberg and Byrne, 2019). In contrast, MT evaluation metrics have shown increasing correlation with human judgements (Mathur et al., 2020; Freitag et al., 2021, 2022b), making them valuable tools for selecting candidates generated by NMT models. Reference-based evaluation metrics such as COMET (Rei et al., 2020) and BLEURT (Sellam et al., 2020) have been successfully employed for Minimum Bayes Risk (MBR) decoding (Fernandes et al., 2022; Freitag et al., 2022a). In MBR, these metrics serve as utility functions to select the candidate with the highest similarity to all other candidates (Kumar and Byrne, 2002, 2004). Additionally, quality estimation (QE) metrics such as COMET-QE (Rei et al., 2021), which do not need reference translations, have been used to rerank a pool of candidates based on their estimated quality (Fernandes et al., 2022; Farinhas et al., 2023). Such reranking approaches improve translation quality over standard beam search, particularly when measured with neural-based MT evaluation metrics. However, reranking methods are challenged in situations where candidate translations exhibit complementary errors, with no candidate clearly improving over all others. Figure 1 illustrates this issue. The first candidate, Fire in French chemical plant, uses in instead of the more idiomatic at. The third candidate makes a better choice in this case but renders the verb as cleared instead of extinguished, which is incorrect here. 
Making the most of this pool of candidates is not possible without combining fragments from several ones. To address this limitation, we propose QEfusion, an approach that leverages the potential complementarity of model-generated candidates to synthesize an improved translation, guided by a QE metric. QE-fusion starts by identifying divergent spans, i.e. spans that exhibit variations within the pool of candidates. Then, it traverses these divergent spans, selecting at each step the one that contributes to a higher score according to the QE metric. The chosen spans are integrated into the synthesized translation, which is thus a fusion of multiple candidates. We demonstrate that QE-fusion is effective with pools of candidates obtained from high-performing 12087 Brand in französischem Chemiewerk gelöscht Hypothesis 1 Final output QE Metric Source Fire in French chemical plant extinguished Fire extinguished on French chemical plant Fire at French chemical plant cleared Fire at French chemical plant extinguished Hypothesis 2 Hypothesis k Fire French chemical plant extinguished cleared   extinguished put out at in on Figure 1: Illustration of the QE-fusion pipeline. The method first generates multiple hypotheses by sampling translations from the model. Then, it computes and sorts the spans that diverge among the candidates. Finally, a QE metric is used to select a span from each group and these spans are merged to form a new, refined translation. multilingual NMT models such as NLLB (NLLB Team et al., 2022), as well as with candidates obtained using in-context learning with large language models (LLMs), which have recently shown comparable performance to NMT models (Garcia et al., 2023; Hendy et al., 2023; Zhu et al., 2023). Indeed, QE-fusion improves translation for several popular, open-source LLMs: PolyLM (Wei et al., 2023), XGLM (Lin et al., 2022), Llama2 (Touvron et al., 2023), Mistral (Jiang et al., 2023a), ALMA (Xu et al., 2024) and TowerBase (Alves et al., 2024). Our contributions are the following:1 1. We introduce QE-fusion, a novel algorithm that leverages quality estimation metrics to create improved translations by combining spans from a pool of candidates. 2. We demonstrate the superior performance of QE-fusion compared to the recent QEreranking and MBR decoding methods, across various open-source LLMs and multilingual NMT models, in five language pairs. 3. We explain QE-fusion’s larger improvements for LLMs over NMT models by the greater diversity of candidates generated by LLMs. 4. We showcase the efficiency of our algorithm by empirically demonstrating that its runtime scales linearly with the number of candidates. 5. We show that QE-fusion effectively reduces hallucinations compared to QE-reranking and MBR, thus enhancing translation reliability. 1The code is available at https://github.com/ GeorgeVern/qe-fusion. 2 Related Work We review in this section several approaches that aim to incorporate additional knowledge either during the decoding process or after it, in order to mitigate the misalignment between MT models and human preferences. We start with a brief reminder of MT evaluation metrics, which play a crucial role as a selection criterion. MT evaluation metrics. Traditional overlapbased evaluation metrics for MT such as BLEU (Papineni et al., 2002) and ChrF (Popovi´c, 2015) are known for their imperfect correlation with human judgments (Mathur et al., 2020; Kocmi et al., 2021; Freitag et al., 2021, 2022b). 
In response, researchers have shifted their attention towards metrics that use neural networks to score translation outputs, such as BERTScore (Zhang et al., 2020), Prism (Thompson and Post, 2020), COMET (Rei et al., 2020) and BLEURT (Sellam et al., 2020). To better emulate human assessment, these metrics are often fine-tuned to predict human scores based on annotations such as those provided by the WMT metrics tasks (Freitag et al., 2022b). In contrast to the reference-based metrics above, which have access to the reference translation, referencefree or quality estimation metrics solely rely on the source sentence for estimating the quality of a translation (Zerva et al., 2022). Recent QE metrics are COMET-QE (Rei et al., 2021), TransQuest (Ranasinghe et al., 2020) and COMETKIWI (Rei et al., 2022b). Reranking candidate translations. One of the earliest proposals for reranking the outputs of NMT systems is the adaptation of the noisy channel model (Brown et al., 1993) to NMT systems (Yee et al., 2019; Bhosale et al., 2020). This approach combines scores from three components: a forward 12088 translation model trained on the original direction, a backward model trained on the opposite direction, and a language model trained on the target language. By integrating these diverse components, noisy-channel reranking has significantly improved translation quality (Ng et al., 2019). An alternative reranking approach uses Minimum Bayes Risk (Kumar and Byrne, 2002; Goel and Byrne, 2000), which selects the candidate with the highest utility from a pool of candidates. In MBR, the utility function measures the similarity between a hypothesis and a reference; however, at test time, when references are unknown, the same set of hypotheses serves both as the candidates and the pseudo-references. Possible utility functions include overlap-based (Eikema and Aziz, 2020) or state-of-the-art neural-based MT metrics (Eikema and Aziz, 2022; Fernandes et al., 2022; Freitag et al., 2022a). MBR has shown promising results, particularly when equipped with recent automatic MT metrics. However, its time complexity scales quadratically with the size of the candidate pool and depends on the cost of computing the utility function, which is considerable for neural metrics such as COMET. A more direct strategy is using reference-free metrics to rerank generated outputs, known as bestof-n or QE-reranking. Initial findings suggested that this approach resulted in inferior translations compared to beam search (Freitag et al., 2022a; Fernandes et al., 2022), especially due to the insensitivity of such metrics to hallucinations (Guerreiro et al., 2023b). However, with advancements in reference-free metrics, QE-reranking has outperformed beam search (Gulcehre et al., 2023; Finkelstein et al., 2023). Beyond reranking, QE scores can also be used during the training phase: for data filtering (Finkelstein et al., 2023), curriculum construction (Gulcehre et al., 2023), or assigning quality-related tags to the outputs (Tomani et al., 2023). Notably, QE-reranking is substantially faster than MBR as its runtime scales linearly with the number of candidates. Yet, while reranking methods such as MBR decoding and QE-reranking effectively improve the performance of MT systems, they are ultimately constrained by the quality of the candidates in the pool. Post-editing MT outputs. 
An alternative to reranking is editing the actual outputs of the LLM or MT system using trained post-editing models (Freitag et al., 2019; Voita et al., 2019; Welleck et al., 2023). While this approach can improve results, it requires training an additional model with a considerable amount of data and comparable size to the original model. Recent works have explored the complementarity of outputs by training models to merge multiple candidates generated by LLMs (Jiang et al., 2023b; Vernikos et al., 2024). Farinhas et al. (2023) used an LLM instead of a trained model to merge the candidates. Their experiments demonstrated comparable performance to reranking approaches that use neural MT metrics. Constructing translation candidates. A wide variety of scores have been used to construct candidate translation, including the model’s own future scores (Jinnai et al., 2023), lookahead heuristics (Lu et al., 2022), annd values predicted by separate models using Monte Carlo tree search (Leblond et al., 2021; Liu et al., 2023; Chaffin et al., 2022). However, these approaches require access to the model, which is not always guaranteed, and entail substantial computational overhead due to the number of candidates stored at each time step. 3 Definition of QE-fusion QE-fusion leverages the complementary nature of a pool of translation candidates, combining their strongest fragments into an improved output. These candidates are generated either by an LLM with an appropriate prompt or by an encoder-decoder NMT model. 3.1 Candidate Generation There are multiple ways to generate candidates from a model. Common approaches for LLMs include nucleus or top-p sampling (Holtzman et al., 2020), top-k, or sampling with a temperature. In our experiments, we adopt nucleus sampling with a temperature. The performance of LLMs can also be influenced by the number of samples and the prompt (Bawden and Yvon, 2023; Zhu et al., 2023). To optimize LLM performance, we follow the guidelines of Zhu et al. (2023) and use 8 examples for in-context learning in all our experiments. For NMT models, beam search, while commonly used to return the top candidates in the beam, has been shown to lack diversity (Vijayakumar et al., 2018). Instead, we use epsilon sampling (Hewitt et al., 2022), which sets the probability of tokens below a certain threshold to zero and was recently shown to generate more diverse candidates compared to beam search (Freitag et al., 2023). 12089 Algorithm 1 QE-fusion algorithm Input: candidate list Y, QE metric M, beam size b 1: hbase ←argmax y∈Y M(y) ▷select top-ranked candidate 2: hyps ←{hbase} ▷initialize beam 3: diffs ←find_diffs(hbase, Y) ▷find divergent spans 4: for base_span, alter_spans in diffs.items() do 5: for h in hyps do 6: for span in alter_spans do 7: hnew ←h.replace(base_span, span) ▷create new hypothesis 8: if hnew is not in hyps then 9: hyps ←hyps ∪{hnew} 10: end if 11: end for 12: end for 13: scores ←M(hyps) 14: sorted_hyps ←sorted(hyps, scores) ▷sort hyps 15: hyps ←sorted_hyps[: b] ▷keep top b hyps 16: end for Output: hyps[0] 3.2 QE-fusion Algorithm For each sentence translated by a model, the QEfusion algorithm (Algorithm 1) considers the topranked hypothesis by the QE metric as the base hypothesis, hbase (line 1). The rationale behind this choice is that this is already a high-quality translation which may require the fewest modifications. 
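The span-divergence step at the heart of QE-fusion (the find_diffs call in line 3 of Algorithm 1) can be sketched with Python's difflib, the off-the-shelf edit-distance library referred to in Section 3.2. The sketch below covers only this step, grouping, for each differing span of the base hypothesis, the alternatives offered by the other candidates, and leaves out the QE scoring and beam part of the algorithm; the function name, the token-level representation, and the grouping convention are ours, not taken from the released code.

```python
"""Sketch of the span-divergence step of QE-fusion using difflib."""
from collections import defaultdict
from difflib import SequenceMatcher


def find_divergent_spans(base: str, candidates: list[str]) -> dict:
    """Map each differing span of `base` (by token positions) to its alternatives."""
    base_toks = base.split()
    diffs = defaultdict(set)
    for cand in candidates:
        cand_toks = cand.split()
        matcher = SequenceMatcher(None, base_toks, cand_toks)
        for tag, i1, i2, j1, j2 in matcher.get_opcodes():
            if tag != "equal":                      # replace / insert / delete
                base_span = " ".join(base_toks[i1:i2])
                alt_span = " ".join(cand_toks[j1:j2])
                diffs[(i1, i2, base_span)].add(alt_span)
    return diffs


if __name__ == "__main__":
    # Candidates borrowed from the example of Figure 1; the base choice is arbitrary.
    base = "Fire at French chemical plant cleared"
    pool = ["Fire in French chemical plant extinguished",
            "Fire extinguished on French chemical plant",
            "Fire at French chemical plant extinguished"]
    for (i1, i2, span), alts in sorted(find_divergent_spans(base, pool).items()):
        print(f"base[{i1}:{i2}] = {span!r:15} alternatives: {sorted(alts)}")
```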
The algorithm then identifies divergent spans between hbase and other candidates generated by the model, using an off-the-shelf library2 that employs edit distance to spot the additions, deletions and replacements required to transform one sequence into another (line 3). For each span where candidates diverge, the various alternatives are listed (line 4). The algorithm substitutes the initial span of hbase with each of the alternative spans (lines 6-11), synthesizing new hypotheses, {hnew i }i=1,2,.... A QE metric is used to compute scores for each hnew i (line 13). The hypotheses are then sorted based on their scores, and the top b candidates are retained, forming a beam (lines 14-15). This process is repeated for each span of hbase, progressively creating new hypotheses. Finally, the highest-scoring hypothesis is selected. While this is a conceptual explanation of the algorithm, certain modifications are made for efficiency purposes such as batching the sentences and caching computed scores (see Appendix A.1). 2https://docs.python.org/3/library/difflib.html 4 Experimental Settings 4.1 Datasets and Evaluation Metrics We assess performance across a spectrum of diverse language pairs. The selected languages include German, which is the most represented nonEnglish language in most models; Russian, a highresource language written in Cyrillic alphabet; Chinese, a high-resource language with a logographic script; and Icelandic, which is considered a lowresource language. We consider the translation of English into the first two languages, and from the latter two into English. Additionally, we consider a non-English-centric pair, German to French. As we experiment with pre-trained models, we use only test data, which we take from WMT22 (Freitag et al., 2022b) and for is→en from WMT21 (Akhbardeh et al., 2021). We draw the few-shot examples for LLMs from the test sets of WMT21. Further details regarding dataset sizes and domains are presented in the Appendix A.2. We report results in terms of two neural-based metrics: COMET-22 (Rei et al., 2022a) and BLEURT-20 (Sellam et al., 2020). Surface-based BLEU and ChrF scores are given in Appendix A.3. 4.2 Models and Parameters We conduct an extensive evaluation using LLMs and encoder-decoder NMT models of various sizes. Specifically, we use popular LLMs such as PolyLM-1.7B (Wei et al., 2023), XGLM-2.9B (Lin et al., 2022), Mistral-7B (Jiang et al., 2023a), Llama2-7B (Touvron et al., 2023). We additionally use ALMA-7B (Xu et al., 2024) and TowerBase7B (Alves et al., 2024), two LLMs based on Llama2-7B that are fine-tuned for translation using both monolingual and parallel data. These LLMs have demonstrated impressive performance in MT (Xu et al., 2024), comparable to GPT-3.5 and NLLB-54B (NLLB Team et al., 2022) and represent intermediary stages between general LLMs and task-specific MT models. As for NMT systems, we use the multilingual NLLB-1.3B and 3.3B models (NLLB Team et al., 2022). Results for the 13B versions of Llama2 and ALMA are given in Appendix A.4. For the LLMs, we adopt the hyper-parameters used by Touvron et al. (2023). We generate candidates using nucleus sampling with p = 0.9 and a temperature of 0.6. As a prompt for in-context learning, we follow Zhu et al. (2023) and use the instruction template: <X> = <Y> with 8 exam12090 ples randomly sampled from the WMT21 data. For the NMT models, we follow Freitag et al. (2023) and use epsilon sampling (Hewitt et al., 2022) with ϵ = 0.02 and a temperature of 0.5. 
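To make the two candidate-generation settings above concrete, here is a toy sketch of nucleus (top-p) sampling with a temperature and of epsilon sampling applied to a single next-token distribution. The probabilities, the order in which temperature and truncation are applied, and the function names are illustrative assumptions; the actual implementations in the respective toolkits may differ in these details.

```python
"""Toy sketches of nucleus and epsilon sampling on one next-token distribution."""
import numpy as np

rng = np.random.default_rng(0)


def sample_nucleus(probs, p=0.9, temperature=0.6):
    probs = np.asarray(probs, dtype=float)
    probs = probs ** (1.0 / temperature)            # temperature in probability space
    probs /= probs.sum()
    order = np.argsort(-probs)
    cum = np.cumsum(probs[order])
    keep = order[: np.searchsorted(cum, p) + 1]     # smallest set with mass >= p
    kept = np.zeros_like(probs)
    kept[keep] = probs[keep]
    kept /= kept.sum()
    return int(rng.choice(len(probs), p=kept))


def sample_epsilon(probs, eps=0.02, temperature=0.5):
    probs = np.asarray(probs, dtype=float)
    probs = probs ** (1.0 / temperature)
    probs /= probs.sum()
    kept = np.where(probs >= eps, probs, 0.0)       # drop low-probability tokens
    kept /= kept.sum()
    return int(rng.choice(len(probs), p=kept))


next_token_probs = [0.45, 0.30, 0.15, 0.06, 0.03, 0.01]
print("nucleus sampling :", [sample_nucleus(next_token_probs) for _ in range(5)])
print("epsilon sampling :", [sample_epsilon(next_token_probs) for _ in range(5)])
```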
We study the impact of temperature in Section 6.3. We generate 5 candidates both for LLM and MT models for efficiency purposes, although we demonstrate in Section 6.1 that our approach also works with larger numbers of candidates. 4.3 Comparison Terms We compare QE-fusion with standard decoding algorithms for MT, such as greedy decoding and beam search with a width of 5. We also compare our approach with three samplingbased algorithms: (1) random sampling from the pool, serving as a lower performance bound; (2) QE-reranking (Fernandes et al., 2022) using COMETKIWI (Rei et al., 2022b), the same QE metric employed in QE-fusion; and (3) MBR decoding (Eikema and Aziz, 2020), using either the surface-based BLEU or with the neural-based COMET-22 as the utility function. As COMET22 follows the same training pipeline and has the same number of parameters as COMETKIWI, this ensures a fair comparison of the models. 5 Results: Translation Performance 5.1 QE-fusion Applied to LLMs Table 1 presents the results obtained across various language pairs and LLMs, in terms of COMET and BLEURT scores. Results with BLEU and ChrF scores, showing similar trends, are given in Appendix A.3. As expected, the translation performance of LLMs generally improves with scale, but also with recency. For instance, the more recent Mistral-7B significantly outperforms Llama27B, despite their similar sizes (7 billion parameters). ALMA-7B and Tower-7B emerge as the top-performing LLMs across all language pairs, confirming the merits of MT-specific fine-tuning of LLMs. Tower-7B has better performance than ALMA-7B in all pairs except is→en, where the latter dominates.3 We do not provide the is→en scores of smaller LLMs (PolyLM and XGLM) due to their poor capabilities in the low-resource Icelandic language. 3This is because is→en data was not used in the fine-tuning stages of Tower, contrary to ALMA, leading to catastrophic forgetting, as is visible when comparing the scores of Tower with those of its parent model, Llama2. Regarding baselines, greedy decoding consistently lags behind beam search across all language pairs, an observation that contrasts with prior studies focused on zero-shot scenarios (Farinhas et al., 2023), likely due to our use of in-context examples. Unsurprisingly, random selection from the candidate pool emerges as the least effective baseline. Among the reranking approaches, QE-reranking outperforms MBR with either BLEU or COMET as the utility function, particularly in terms of BLEURT scores. Among the two utility functions for MBR, COMET is superior while the use of BLEU often fails to outperform beam search. MBR with COMET as the utility function occasionally surpasses QE-reranking in terms of COMET scores. This may be due to a form of “reward hacking” (Gulcehre et al., 2023), i.e. employing the same metric for both candidate selection and evaluation, since the BLEURT scores are in the reverse order. Our approach, QE-fusion, consistently outperforms all other methods, across all language pairs and LLMs, with 5 exceptions out of 56 comparisons. Two notable exceptions are where beam search achieves the best BLEURT scores, for PolyLM-1.7B (de→fr) and XGLM-2.9B (zh→en). In these cases, the candidate pool likely lacks highquality translations altogether. The other three exceptions correspond to very small differences in COMET scores (0.06, 0.02 and 0.01). 
Moreover, QE-fusion also outperforms the other methods when combined with even larger LLMs, as confirmed by the results obtained with the Llama2 and ALMA models with 13 billion parameters presented in Appendix A.4, Tables 8 and 9, for COMET, BLEURT, BLEU and ChrF scores. In Appendix A.5, Figure 6, we present a graphical synthesis of these comparisons using radar charts. The shapes corresponding to QE-fusion are always the outermost ones, regardless of variations due to the underlying LLM or language pair. In particular, our approach always outperforms QEreranking, confirming its superiority as a generalization of reranking approaches. 5.2 QE-fusion Applied to NMT Models The COMET and BLEURT scores of our approach applied to multilingual NMT models, namely NLLB-1.3B and NLLB-3.3B, are presented in Table 2 (BLEU and ChrF scores are in Appendix A.3). We observe that NMT models have a slight edge over general-purpose LLMs (Llama2 and Mistral) while their MT-fine-tuned counterparts, ALMA and 12091 LLM Method PolyLM-1.7B XGLM-2.9B Llama2-7B Mistral-7B ALMA-7B Tower-7B en→de Greedy 73.30 / 62.37 77.30 / 66.43 79.84 / 67.08 82.07 / 70.48 84.42 / 74.21 84.17 / 73.11 Beam 72.22 / 65.31 80.00 / 69.22 81.76 / 69.83 83.39 / 72.34 85.11 / 74.78 85.20 / 74.81 Sample 71.84 / 60.71 70.73 / 58.36 78.49 / 65.71 80.73 / 69.11 83.19 / 72.90 83.81 / 72.83 MBR-BLEU 73.54 / 62.08 77.18 / 65.77 79.34 / 66.73 81.72 / 69.99 84.04 / 73.73 84.35 / 73.25 MBR-COMET 78.68 / 66.36 80.90 / 68.80 83.26 / 70.32 84.73 / 72.78 85.99 / 75.46 86.17 / 74.63 QE-reranking 78.02 / 67.21 80.75 / 70.00 82.73 / 71.04 84.30 / 73.23 85.53 / 75.53 85.86 / 75.19 QE-fusion 79.62 / 68.67 81.62 / 71.01 83.63 / 71.96 85.02 / 74.10 85.93 / 75.93 86.23 / 75.68 en→ru Greedy 74.52 / 59.70 80.74 / 65.75 81.72 / 66.94 84.85 / 71.13 85.80 / 72.31 86.94 / 74.24 Beam 74.79 / 63.26 82.11 / 67.58 83.72 / 69.49 86.32 / 73.13 86.87 / 74.17 87.34 / 75.07 Sample 72.99 / 57.70 71.65 / 52.45 80.18 / 64.90 83.55 / 69.17 84.84 / 70.81 86.36 / 73.23 MBR-BLEU 75.19 / 59.92 79.26 / 63.24 81.72 / 66.62 84.50 / 70.34 85.83 / 72.28 86.87 / 74.03 MBR-COMET 80.16 / 63.88 82.42 / 65.04 85.15 / 69.61 87.26 / 72.82 87.78 / 73.90 88.60 / 75.57 QE-reranking 79.59 / 64.53 83.07 / 68.07 84.62 / 70.11 86.81 / 73.25 87.33 / 74.32 88.29 / 75.90 QE-fusion 81.28 / 66.32 83.93 / 69.06 85.60 / 71.48 87.38 / 74.05 87.82 / 75.11 88.58 / 76.30 zh→en Greedy 68.69 / 54.76 46.51 / 28.76 77.72 / 64.13 79.69 / 66.67 79.24 / 66.20 78.49 / 65.30 Beam 66.61 / 57.71 73.94 / 58.69 78.52 / 65.89 80.08 / 67.74 79.23 / 66.46 78.49 / 65.95 Sample 67.85 / 53.32 53.77 / 33.70 76.55 / 62.69 78.91 / 65.56 78.19 / 64.70 77.73 / 64.03 MBR-BLEU 69.74 / 54.83 62.93 / 44.64 77.80 / 64.12 79.69 / 66.58 79.09 / 65.81 78.33 / 64.80 MBR-COMET 72.47 / 56.49 65.98 / 46.51 79.30 / 65.05 80.87 / 67.31 80.40 / 66.77 79.79 / 66.01 QE-reranking 73.12 / 57.86 71.48 / 54.99 79.38 / 66.21 80.79 / 67.90 80.41 / 67.61 79.86 / 66.82 QE-fusion 74.27 / 59.06 72.14 / 55.80 79.99 / 66.92 81.15 / 68.44 80.86 / 68.13 80.44 / 67.46 de→fr Greedy 63.09 / 41.88 71.50 / 53.40 76.88 / 60.46 78.65 / 63.16 74.86 / 57.48 80.64 / 65.86 Beam 62.35 / 50.45 74.32 / 57.37 78.61 / 63.69 79.66 / 65.50 77.07 / 60.66 81.30 / 67.52 Sample 61.27 / 38.59 67.62 / 45.94 75.24 / 57.60 77.12 / 60.60 72.81 / 53.85 79.55 / 64.50 MBR-BLEU 64.31 / 42.76 71.96 / 52.53 76.68 / 59.88 78.45 / 62.41 74.65 / 56.93 80.52 / 65.87 MBR-COMET 69.30 / 46.20 75.80 / 55.72 79.91 / 62.63 81.06 / 65.10 78.53 / 60.56 82.54 / 67.67 QE-reranking 68.74 / 48.38 75.51 / 57.31 79.44 
/ 63.67 80.69 / 65.53 77.96 / 61.18 82.16 / 68.07 QE-fusion 70.09 / 49.98 76.64 / 58.76 80.27 / 64.56 81.26 / 66.28 78.87 / 62.48 82.53 / 68.44 is→en Greedy – – 66.69 / 51.58 73.69 / 59.80 85.98 / 75.09 65.53 / 50.90 Beam – – 67.22 / 52.52 74.20 / 60.74 86.29 / 75.73 66.39 / 52.24 Sample – – 65.64 / 49.87 72.65 / 58.47 85.15 / 74.01 64.87 / 49.68 MBR-BLEU – – 66.33 / 50.96 73.24 / 59.22 85.79 / 74.81 65.74 / 50.80 MBR-COMET – – 69.32 / 52.36 75.57 / 60.81 86.58 / 75.46 68.77 / 52.28 QE-reranking – – 69.71 / 54.91 75.71 / 62.51 86.43 / 75.62 68.95 / 54.35 QE-fusion – – 70.63 / 56.11 76.81 / 63.43 86.76 / 76.02 69.84 / 55.51 Table 1: Translation performance in terms of COMET-22 / BLEURT-20 scores for various methods, language pairs, and sizes of LLMs. Dashed lines separate deterministic decoding from sampling-based methods and from our approach. The best scores for each language pair and model are in bold. Tower, achieve similar or better performance. Similar to the results with LLMs, our approach consistently outperforms beam search and reranking approaches. The gap between our approach and QE-reranking is slightly smaller in the case of NMT models, which we attribute to the lower diversity of the generated candidates. We test this hypothesis in Section 6.3. 6 Analysis of Results 6.1 Role of the Size of the Candidate Pool In the above experiments, we generated five candidate translations for efficiency reasons. We now study the influence of the number of candidates on the scores of QE-fusion vs. those of the other methods, using XGLM-2.9B for en→de translation. We progressively sample larger candidate pools, from 5 to 200 candidates, and present in Figure 2 the BLEURT scores of our approach compared to QE-reranking using COMETKIWI and to MBR using COMET as the utility function. Additionally, we compare with an oracle reranking approach that has access to the reference translation and uses COMET scores as the selection criterion. QE-fusion consistently outperforms reranking approaches across all sizes of candidate pools. Moreover, QE-fusion even matches the performance of the oracle for pool sizes of 5, 10 and 25 candidates. 
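For completeness, the two reranking baselines compared in this analysis can be written against a generic scoring interface, as sketched below. The experiments use COMETKIWI and COMET-22 in these roles; here qe_score and utility are placeholder callables, and the toy token-overlap utility is only a stand-in.

```python
"""Sketches of QE-reranking (best-of-n) and MBR decoding with placeholder scorers."""
from statistics import mean
from typing import Callable, Sequence


def qe_rerank(src: str, candidates: Sequence[str],
              qe_score: Callable[[str, str], float]) -> str:
    """Best-of-n: return the candidate with the highest reference-free QE score."""
    return max(candidates, key=lambda hyp: qe_score(src, hyp))


def mbr_decode(candidates: Sequence[str],
               utility: Callable[[str, str], float]) -> str:
    """MBR: score each candidate by its average utility against the other
    candidates, which act as pseudo-references; O(N^2) utility calls."""
    def expected_utility(hyp: str) -> float:
        return mean(utility(hyp, ref) for ref in candidates if ref is not hyp)
    return max(candidates, key=expected_utility)


if __name__ == "__main__":
    # Toy utility: token-overlap F1, a crude stand-in for COMET or BLEU.
    def overlap(a: str, b: str) -> float:
        ta, tb = set(a.split()), set(b.split())
        return 2 * len(ta & tb) / (len(ta) + len(tb))

    pool = ["Fire at French chemical plant extinguished",
            "Fire in French chemical plant extinguished",
            "Fire at French chemical plant cleared"]
    print(mbr_decode(pool, overlap))
```

The quadratic number of utility calls in mbr_decode versus the linear number of qe_score calls in qe_rerank is the complexity gap discussed in Section 6.4.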
12092 Multilingual NMT Method NLLB-1.3B NLLB-3.3B en→de Greedy 84.60 / 74.39 85.48 / 75.43 Beam 85.54 / 75.62 86.24 / 76.44 Sample 83.57 / 73.27 84.66 / 74.48 MBR-BLEU 84.16 / 74.03 85.18 / 75.25 MBR-COMET 85.98 / 75.38 86.69 / 76.34 QE-reranking 85.92 / 75.88 86.25 / 76.60 QE-fusion 86.25 / 76.11 86.74 / 76.81 en→ru Greedy 85.78 / 72.81 86.64 / 73.93 Beam 86.63 / 74.10 87.51 / 75.12 Sample 84.95 / 71.49 86.07 / 73.01 MBR-BLEU 85.59 / 72.37 86.44 / 73.58 MBR-COMET 87.47 / 73.97 88.18 / 75.09 QE-reranking 87.11 / 74.27 87.96 / 75.46 QE-fusion 87.52 / 74.71 88.31 / 75.83 zh→en Greedy 72.38 / 59.35 73.64 / 60.81 Beam 75.16 / 62.90 75.88 / 63.97 Sample 71.39 / 57.45 72.20 / 58.50 MBR-BLEU 73.92 / 60.09 74.99 / 61.20 MBR-COMET 75.98 / 61.34 76.86 / 62.25 QE-reranking 76.46 / 62.84 77.41 / 63.90 QE-fusion 77.17 / 63.65 77.90 / 64.54 de→fr Greedy 79.39 / 64.94 80.38 / 66.69 Beam 81.03 / 67.69 81.92 / 69.24 Sample 78.02 / 62.61 78.76 / 63.91 MBR-BLEU 79.25 / 64.43 80.06 / 65.87 MBR-COMET 81.43 / 66.20 82.15 / 67.63 QE-reranking 81.21 / 66.86 81.88 / 68.26 QE-fusion 81.82 / 67.63 82.33 / 68.81 is→en Greedy 81.82 / 69.94 82.49 / 71.17 Beam 82.30 / 70.60 83.05 / 71.71 Sample 81.02 / 68.92 82.02 / 70.32 MBR-BLEU 81.64 / 69.60 82.36 / 70.93 MBR-COMET 82.74 / 70.39 83.56 / 71.71 QE-reranking 82.81 / 71.04 83.50 / 71.97 QE-fusion 83.08 / 71.45 83.83 / 72.33 Table 2: Translation performance in terms of COMET22 / BLEURT-20 scores for various methods, language pairs, and multilingual NMT models. 6.2 Novelty of Outputs from QE-fusion As the previous experiment may suggest that QEfusion has a similar effect as the use of larger candidate pools with reranking methods, we examine here the novelty of the synthesized candidates, by counting how many times the output of QE-fusion can be found in a larger pool. For a pool of p candidates given to QE-fusion, we measure how frequently an exact match of the output of QE-fusion can be found in larger pools of size q ≥p, where p and q are in {5, 10, 25, 50, 100, 200}. The results, presented in Figure 3, reveal that even with a small pool of 5 candidates, more than 50% of the outputs of QE-fusion would not have been generated by the LLM, even when samFigure 2: BLEURT scores of QE-fusion and other methods over pools of candidates of increasing sizes from the XGLM-2.9B LLM. QE-fusion outperforms reranking approaches and is comparable to the COMET-reranking oracle for pools of up to 25 candidates. Figure 3: Frequencies at which outputs produced by QE-fusion appear in larger candidate pools sampled from XGLM-2.9B. Results show that QE-fusion always synthesizes a substantial number of novel candidates that the LLM would not generate otherwise. pling 200 candidates (rightmost bar of the leftmost group). The percentage of identical (or non-novel) candidates decreases as the pool grows, due to more varied candidates present in larger pools. When candidates generated by QE-fusion are present in the original pool, our method has the same effect as QE-reranking. The frequency of these cases is given by the leftmost bar of each group (pool size) in Figure 3. We observe that our approach defaults to QE-reranking less than 40% of the time for a pool of size 5 (leftmost bar) and this value drops to 20% for a pool of size 200. 
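The novelty measurement itself is straightforward; the sketch below illustrates the counting procedure, assuming the candidate pools of each size have already been sampled and aligned with the QE-fusion outputs (the data layout and names are illustrative, not the code used to produce Figure 3).

```python
from typing import Dict, List

def novelty_rate(fused_outputs: List[str],
                 pools_by_size: Dict[int, List[List[str]]]) -> Dict[int, float]:
    """For each pool size q, compute the fraction of QE-fusion outputs that
    do NOT appear verbatim among the q candidates sampled for the same
    source sentence (higher = more novel synthesized translations).
    One minus this value corresponds to the frequencies plotted in Figure 3."""
    rates = {}
    for q, pools in pools_by_size.items():
        novel = sum(1 for fused, pool in zip(fused_outputs, pools)
                    if fused not in set(pool))
        rates[q] = novel / len(fused_outputs)
    return rates
```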
12093 0.0 0.2 0.4 0.6 0.8 1.0 65 70 75 BLEURT 0.0 0.2 0.4 0.6 0.8 1.0 Temperature 25 50 75 100 Diversity XGLM-2.9B NLLB-1.3B QE-reranking QE-fusion Figure 4: Effect of temperature on translation performance (above) and on the diversity of the pool (below), using an LLM and an NMT model for en→de translation, with QE-fusion vs. QE-reranking. 6.3 Impact of Candidate Diversity on Quality By construction, QE-fusion benefits from the diversity of candidates, as this allows for an increased number of divergent spans. We study here the effect of diversity on both QE-fusion and QE-reranking, for LLMs and NMT models. To increase the diversity of the pool of candidate translations, we adjust the temperature parameter during decoding but keep constant all other generation parameters. A higher temperature results in token probability distributions that are more uniform, thus increasing the stochasticity of sampling and consequently the diversity of the candidate pool. Here, we measure this diversity by the number of unique 4-grams present in the candidate pool, averaged over all test sentences. Figure 4 displays in its lower part the diversity of the pool as a function of temperature for XGLM-2.9B and NLLB-1.3B on en→de translation, and in its upper part the BLEURT quality scores of these models with either QE-reranking or QE-fusion. Additional results with COMET scores and other diversity measures are given in Appendix A.6 and show similar trends. Increasing the temperature leads to an expected rise in diversity. The rise is higher for XGLM-2.9B than for NLLB-1.3B, illustrating the fact that LLMs generate more diverse outputs, likely due to their general-domain language pretraining compared to 10 1 10 2 Candidate pool size 10 2 10 3 10 4 10 5 Runtime (s) MBR-COMET QE-reranking QE-fusion Figure 5: Runtimes (in seconds) for different pool sizes for the en→de WMT22 test set. the task-specific training of NMT models. Nevertheless, generating too diverse candidates due to high temperatures results in a noticeable drop in performance (right side of the upper graph). The gap between QE-fusion and QE-reranking slightly widens as diversity increases, indicating the ability of our approach to leverage alternative spans. The optimal performance is achieved using a temperature in the [0.4, 0.6] interval. 6.4 Computation Time In Section 6.1, Figure 2, we presented the scaling laws of QE-fusion vs. reranking in terms of performance when the size of the candidate pool varies. However, computation time is a crucial factor as the candidate pool grows. QE-reranking has the advantage of linear scaling with the number N of candidates, while MBR requires N(N −1) model calls for each sentence. In contrast, QE-fusion has variable complexity, depending on the diversity of the pool and the presence of different spans. To compare empirically the complexity of QEfusion with reranking methods, we measured their runtime for the en→de WMT22 test data with 2,037 sentences. All experiments were executed on a single Nvidia A40 GPU with 40 GB memory, using a batch size of 400 samples. Using a logarithmic scale, Figure 5 confirms that QE-reranking scales linearly with the candidate pool size, while MBR scales quadratically. Interestingly, QE-fusion also exhibits linear scaling with the number of candidates but with a constant factor of ×5 compared to QE-reranking. For 5 and 10 candidates, QEfusion has similar runtimes to MBR. 
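As a complement to the diversity analysis in Section 6.3 above, the lexical diversity measure used there (unique 4-grams per candidate pool, averaged over test sentences) can be computed as in the following minimal sketch; whitespace tokenization is an assumption made here for brevity.

```python
from typing import List

def lexical_diversity(pool: List[str], n: int = 4) -> int:
    """Pool diversity as used in Section 6.3: the number of unique n-grams
    (4-grams by default) over all candidates in the pool, computed on
    whitespace tokens. The paper reports this value averaged over all
    test sentences."""
    ngrams = set()
    for candidate in pool:
        tokens = candidate.split()
        ngrams.update(tuple(tokens[i:i + n])
                      for i in range(len(tokens) - n + 1))
    return len(ngrams)
```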
We have implemented specific optimizations, including score caching and input batching, to reduce 12094 Model Method NLLB-1B NLLB-3B PolyLM XGLM Llama2 zh→en Sample 15.6 15.9 20.4 59.4 5.8 MBR-COMET 9.3 9.0 16.3 35.4 4.1 QE-reranking 6.4 6.5 13.9 17.3 3.6 QE-fusion 5.5 6.7 12.5 16.1 3.1 de→fr Sample 3.6 3.8 27.4 20.4 5.5 MBR-COMET 2.4 2.4 21.6 9.3 3.6 QE-reranking 2.2 2.6 19.0 8.4 3.4 QE-fusion 2.1 2.2 17.7 7.7 3.0 Table 3: Percentage of defect translations (spBLEU < 3) for various methods and models. runtime (see Appendix A.1). These modifications were uniformly applied to all methods to ensure a fair comparison. We leave further optimizations such as pruning (Cheng and Vlachos, 2023) for future work. 6.5 Effect of QE-Fusion on Hallucinations Following the approach of Guerreiro et al. (2023a), we define defect translations as those having an spBLEU score4 below 3, and compute their percentage. In our evaluation, unlike Guerreiro et al. (2023a), we do not perturb the source sentences, but focus on measuring the frequency of hallucinations during the translation of the original unperturbed source sentences. The results presented in Table 3 indicate that QEreranking is more efficient in mitigating hallucinations compared to MBR with COMET, consistent with prior work (Farinhas et al., 2023). Remarkably, QE-fusion reduces hallucinations even further, outperforming all other methods across different models and language pairs, in nine out of ten cases. 6.6 Comparison to Post-editing Approaches Due to the large scale and often restricted access of LLMs, novel approaches have emerged to enhance their predictions without access to their weights. These methods, similar to reranking, involve generating predictions from the LLM and subsequently refining them using a secondary, more compact model (Vernikos et al., 2024; Jiang et al., 2023b). We compare QE-fusion with LMCor (Vernikos et al., 2024), a T5-base (Raffel et al., 2020) corrector model, which refines LLM-generated translations. LMCor is trained on the News Commentary 4We use spBLEU as implemented in SacreBLEU (Post, 2018): nrefs:1|case:mixed|eff:yes|tok:flores101| smooth:exp|version:2.3.1. Model BLEU COMET BLEURT T5-base (FT) 23.32 75.22 64.57 XGLM-2.9B (ICL) 17.32 74.54 66.47 + LMCOR 25.15 77.45 68.41 + QE-fusion 18.60 81.79 71.17 Table 4: Translation performance on the WMT22 en→de test set. We compare QE-fusion against standard fine-tuning (FT), in-context learning (ICL), and LMCor. v16 corpus5 augmented with five translation candidates from XGLM-2.9B for each source sentence. During testing, it receives the source sentence and five translation candidates. We also include results from a T5-base model fine-tuned for MT without additional candidates. Table 4 shows that LMCor outperforms finetuned T5 and in-context learning across all metrics. However, QE-fusion achieves significant improvements of 4 and 3 points in COMET and BLEURT, respectively, despite trailing in BLEU. This underscores the limitations of surface-based metrics and suggests that higher BLEU scores may be attained by models specifically trained for MT, which can mimic the translation style of the training data (see Appendix A.3). 7 Conclusion In this paper, we introduced QE-fusion, a novel approach that combines fragments from diverse candidates to synthesize improved translations guided by quality estimation metrics. 
Our experiments across five language pairs involving both LLMs and NMT models demonstrate that QE-fusion consistently outperforms beam search, MBR decoding, and QE-reranking. QE-fusion is particularly beneficial to LLMs, capitalizing on their diverse outputs. Notably, QE-fusion maintains its superior performance even when the pool of candidates grows, while its runtime scales linearly with the number of candidates, highlighting its scalability. By integrating external knowledge from quality estimation metrics, QE-fusion not only enhances translation quality but also tackles specific issues like hallucinations. This offers a promising avenue for addressing other phenomena such as contextspecific errors (Vernikos et al., 2022). Beyond translation, the versatility of QE-fusion extends to any text generation task where metrics or reward models (Ouyang et al., 2022) are available. 5www.statmt.org/wmt22/translation-task.html 12095 Limitations Human evaluation. While our work employs state-of-the-art MT evaluation metrics, we acknowledge the inherent limitations of automatic metrics. Human evaluation could offer additional, and possibly more reliable insights. At the time of writing, human evaluation is ongoing, starting with one language pair and four systems. Choice of metrics. The QE metric used by QEfusion, COMETKIWI, shares some similarities with the COMET metric used for evaluation, as they originate from the same family of models. Consequently, using COMETKIWI as our criterion for merging spans might be considered as the reason why we get improvements in COMET scores. To address this concern, we confirm our findings with scores using three other metrics: BLEURT, BLEU and ChrF. Even with these alternate metrics, our approach consistently outperforms all other reranking techniques. Acknowledgments We are grateful for their support to the Swiss National Science Foundation (DOMAT grant n. 175693, On-demand Knowledge for Documentlevel Machine Translation and EXOMAT grant n. 228494, External Knowledge for Low-resource Machine Translation), and to the Institute for ICT at HEIG-VD. References Farhad Akhbardeh, Arkady Arkhangorodsky, Magdalena Biesialska, Ondˇrej Bojar, Rajen Chatterjee, Vishrav Chaudhary, Marta R. Costa-jussa, Cristina España-Bonet, Angela Fan, Christian Federmann, Markus Freitag, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Leonie Harter, Kenneth Heafield, Christopher Homan, Matthias Huck, Kwabena Amponsah-Kaakyire, Jungo Kasai, Daniel Khashabi, Kevin Knight, Tom Kocmi, Philipp Koehn, Nicholas Lourie, Christof Monz, Makoto Morishita, Masaaki Nagata, Ajay Nagesh, Toshiaki Nakazawa, Matteo Negri, Santanu Pal, Allahsera Auguste Tapo, Marco Turchi, Valentin Vydrin, and Marcos Zampieri. 2021. Findings of the 2021 conference on machine translation (WMT21). In Proceedings of the Sixth Conference on Machine Translation, pages 1–88, Online. Association for Computational Linguistics. Duarte M. Alves, José Pombal, Nuno M. Guerreiro, Pedro H. Martins, João Alves, Amin Farajian, Ben Peters, Ricardo Rei, Patrick Fernandes, Sweta Agrawal, Pierre Colombo, José G. C. de Souza, and André F. T. Martins. 2024. Tower: An open multilingual large language model for translation-related tasks. arXiv preprint arXiv:2402.17733. Rachel Bawden and François Yvon. 2023. Investigating the translation performance of a large multilingual language model: the case of BLOOM. In Proceedings of the 24th Annual Conference of the European Association for Machine Translation, pages 157–170, Tampere, Finland. 
European Association for Machine Translation. Shruti Bhosale, Kyra Yee, Sergey Edunov, and Michael Auli. 2020. Language models not just for pretraining: Fast online neural noisy channel modeling. In Proceedings of the Fifth Conference on Machine Translation, pages 584–593, Online. Association for Computational Linguistics. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263– 311. Antoine Chaffin, Vincent Claveau, and Ewa Kijak. 2022. PPL-MCTS: Constrained textual generation through discriminator-guided MCTS decoding. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2953–2967, Seattle, United States. Association for Computational Linguistics. Julius Cheng and Andreas Vlachos. 2023. Faster minimum Bayes risk decoding with confidence-based pruning. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12473–12480. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2023. Palm: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240):1–113. 12096 Bryan Eikema and Wilker Aziz. 2020. Is MAP decoding all you need? the inadequacy of the mode in neural machine translation. In Proceedings of the 28th International Conference on Computational Linguistics, pages 4506–4520, Barcelona, Spain (Online). International Committee on Computational Linguistics. Bryan Eikema and Wilker Aziz. 2022. Sampling-based approximations to minimum Bayes risk decoding for neural machine translation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10978–10993, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. António Farinhas, José de Souza, and Andre Martins. 2023. An empirical study of translation hypothesis ensembling with large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 11956–11970, Singapore. Patrick Fernandes, António Farinhas, Ricardo Rei, José G. C. de Souza, Perez Ogayo, Graham Neubig, and Andre Martins. 2022. Quality-aware decoding for neural machine translation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1396–1412, Seattle, United States. Association for Computational Linguistics. 
Mara Finkelstein, Subhajit Naskar, Mehdi Mirzazadeh, Apurva Shah, and Markus Freitag. 2023. MBR and QE finetuning: Training-time distillation of the best and most expensive decoding methods. arXiv preprint arXiv:2309.10966. Markus Freitag, Isaac Caswell, and Scott Roy. 2019. APE at scale and its implications on MT evaluation biases. In Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 34–44, Florence, Italy. Association for Computational Linguistics. Markus Freitag, George Foster, David Grangier, Viresh Ratnakar, Qijun Tan, and Wolfgang Macherey. 2021. Experts, Errors, and Context: A Large-Scale Study of Human Evaluation for Machine Translation. Transactions of the Association for Computational Linguistics, 9:1460–1474. Markus Freitag, Behrooz Ghorbani, and Patrick Fernandes. 2023. Epsilon sampling rocks: Investigating sampling strategies for minimum Bayes risk decoding for machine translation. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 9198–9209. Markus Freitag, David Grangier, Qijun Tan, and Bowen Liang. 2022a. High quality rather than high model probability: Minimum Bayes risk decoding with neural metrics. Transactions of the Association for Computational Linguistics, 10:811–825. Markus Freitag, Ricardo Rei, Nitika Mathur, Chi-kiu Lo, Craig Stewart, Eleftherios Avramidis, Tom Kocmi, George Foster, Alon Lavie, and André F. T. Martins. 2022b. Results of WMT22 metrics shared task: Stop using BLEU – neural metrics are better and more robust. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 46–68, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics. Xavier Garcia, Yamini Bansal, Colin Cherry, George Foster, Maxim Krikun, Melvin Johnson, and Orhan Firat. 2023. The unreasonable effectiveness of fewshot learning for machine translation. In Proceedings of the 40th International Conference on Machine Learning. Vaibhava Goel and William J Byrne. 2000. Minimum Bayes-risk automatic speech recognition. Computer Speech & Language, 14(2):115–135. Nuno M. Guerreiro, Duarte M. Alves, Jonas Waldendorf, Barry Haddow, Alexandra Birch, Pierre Colombo, and André F. T. Martins. 2023a. Hallucinations in Large Multilingual Translation Models. Transactions of the Association for Computational Linguistics, 11:1500–1517. Nuno M. Guerreiro, Elena Voita, and André Martins. 2023b. Looking for a needle in a haystack: A comprehensive study of hallucinations in neural machine translation. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 1059–1075, Dubrovnik, Croatia. Association for Computational Linguistics. Caglar Gulcehre, Tom Le Paine, Srivatsan Srinivasan, Ksenia Konyushkova, Lotte Weerts, Abhishek Sharma, Aditya Siddhant, Alex Ahern, Miaosen Wang, Chenjie Gu, Wolfgang Macherey, Arnaud Doucet, Orhan Firat, and Nando de Freitas. 2023. Reinforced self-training (ReST) for language modeling. arXiv preprint arXiv:2308.08998. Amr Hendy, Mohamed Abdelrehim, Amr Sharaf, Vikas Raunak, Mohamed Gabr, Hitokazu Matsushita, Young Jin Kim, Mohamed Afify, and Hany Hassan Awadalla. 2023. How good are GPT models at machine translation? a comprehensive evaluation. arXiv preprint arXiv:2302.09210. John Hewitt, Christopher Manning, and Percy Liang. 2022. Truncation sampling as language model desmoothing. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 3414– 3427, Abu Dhabi, United Arab Emirates. 
Association for Computational Linguistics. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In The Eighth International Conference on Learning Representations. Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego 12097 de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023a. Mistral 7B. arXiv preprint arXiv:2310.06825. Dongfu Jiang, Xiang Ren, and Bill Yuchen Lin. 2023b. LLM-blender: Ensembling large language models with pairwise ranking and generative fusion. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14165–14178, Toronto, Canada. Association for Computational Linguistics. Yuu Jinnai, Tetsuro Morimura, and Ukyo Honda. 2023. On the depth between beam search and exhaustive search for text generation. arXiv preprint arXiv:2308.13696. Tom Kocmi, Christian Federmann, Roman Grundkiewicz, Marcin Junczys-Dowmunt, Hitokazu Matsushita, and Arul Menezes. 2021. To ship or not to ship: An extensive evaluation of automatic metrics for machine translation. In Proceedings of the Sixth Conference on Machine Translation, pages 478–494, Online. Association for Computational Linguistics. Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation, pages 28–39, Vancouver. Association for Computational Linguistics. Shankar Kumar and William Byrne. 2002. Minimum Bayes-risk word alignments of bilingual texts. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing, page 140–147. Shankar Kumar and William Byrne. 2004. Minimum Bayes-risk decoding for statistical machine translation. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004, pages 169–176, Boston, Massachusetts, USA. Association for Computational Linguistics. Rémi Leblond, Jean-Baptiste Alayrac, Laurent Sifre, Miruna Pislar, Lespiau Jean-Baptiste, Ioannis Antonoglou, Karen Simonyan, and Oriol Vinyals. 2021. Machine translation decoding beyond beam search. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8410–8434, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O’Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, and Xian Li. 2022. Few-shot learning with multilingual generative language models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9019–9052, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Jiacheng Liu, Andrew Cohen, Ramakanth Pasunuru, Yejin Choi, Hannaneh Hajishirzi, and Asli Celikyilmaz. 2023. Don’t throw away your value model! making PPO even better via value-guided Monte-Carlo tree search decoding. arXiv preprint arXiv:2309.15028. 
Ximing Lu, Sean Welleck, Peter West, Liwei Jiang, Jungo Kasai, Daniel Khashabi, Ronan Le Bras, Lianhui Qin, Youngjae Yu, Rowan Zellers, Noah A. Smith, and Yejin Choi. 2022. NeuroLogic a*esque decoding: Constrained text generation with lookahead heuristics. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 780–799, Seattle, United States. Association for Computational Linguistics. Nitika Mathur, Timothy Baldwin, and Trevor Cohn. 2020. Tangled up in BLEU: Reevaluating the evaluation of automatic machine translation evaluation metrics. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4984–4997, Online. Association for Computational Linguistics. Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. 2019. Facebook FAIR’s WMT19 news translation task submission. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 314–319, Florence, Italy. Association for Computational Linguistics. NLLB Team, Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. 2022. No Language Left Behind: Scaling human-centered machine translation. arXiv preprint arXiv:2207.04672. Myle Ott, Michael Auli, David Grangier, and Marc’Aurelio Ranzato. 2018. Analyzing uncertainty in neural machine translation. In Proceedings of the 35th International Conference on Machine Learning, pages 3956–3965. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, 12098 Sandhini Agarwal, Katarina Slama, Alex Gray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Maja Popovi´c. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392–395, Lisbon, Portugal. Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191, Brussels, Belgium. Association for Computational Linguistics. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(1). Tharindu Ranasinghe, Constantin Orasan, and Ruslan Mitkov. 2020. 
TransQuest: Translation quality estimation with cross-lingual transformers. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5070–5081, Barcelona, Spain (Online). International Committee on Computational Linguistics. Ricardo Rei, José G. C. de Souza, Duarte Alves, Chrysoula Zerva, Ana C Farinha, Taisiya Glushkova, Alon Lavie, Luisa Coheur, and André F. T. Martins. 2022a. COMET-22: Unbabel-IST 2022 submission for the metrics shared task. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 578–585, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics. Ricardo Rei, Ana C Farinha, Chrysoula Zerva, Daan van Stigt, Craig Stewart, Pedro Ramos, Taisiya Glushkova, André F. T. Martins, and Alon Lavie. 2021. Are references really needed? unbabel-IST 2021 submission for the metrics shared task. In Proceedings of the Sixth Conference on Machine Translation, pages 1030–1040, Online. Association for Computational Linguistics. Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics. Ricardo Rei, Marcos Treviso, Nuno M. Guerreiro, Chrysoula Zerva, Ana C Farinha, Christine Maroti, José G. C. de Souza, Taisiya Glushkova, Duarte Alves, Luisa Coheur, Alon Lavie, and André F. T. Martins. 2022b. CometKiwi: IST-unbabel 2022 submission for the quality estimation shared task. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 634–645, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics. Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881–7892, Online. Association for Computational Linguistics. Felix Stahlberg and Bill Byrne. 2019. On NMT search errors and model errors: Cat got your tongue? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3356– 3362, Hong Kong, China. Association for Computational Linguistics. Brian Thompson and Matt Post. 2020. Automatic machine translation evaluation in many languages via zero-shot paraphrasing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 90–121, Online. Association for Computational Linguistics. Christian Tomani, David Vilar, Markus Freitag, Colin Cherry, Subhajit Naskar, Mara Finkelstein, and Daniel Cremers. 2023. Quality control at your fingertips: Quality-aware translation models. arXiv preprint arXiv:2310.06707. 
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and finetuned chat models. arXiv preprint arXiv:2307.09288. Giorgos Vernikos, Arthur Brazinskas, Jakub Adamek, Jonathan Mallinson, Aliaksei Severyn, and Eric 12099 Malmi. 2024. Small language models improve giants by rewriting their outputs. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2703–2718. Giorgos Vernikos, Brian Thompson, Prashant Mathur, and Marcello Federico. 2022. Embarrassingly easy document-level MT metrics: How to convert any pretrained metric into a document-level metric. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 118–128, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics. Ashwin Vijayakumar, Michael Cogswell, Ramprasaath Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2018. Diverse beam search for improved description of complex scenes. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32. Elena Voita, Rico Sennrich, and Ivan Titov. 2019. Context-aware monolingual repair for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 877–886, Hong Kong, China. Association for Computational Linguistics. Xiangpeng Wei, Haoran Wei, Huan Lin, Tianhao Li, Pei Zhang, Xingzhang Ren, Mei Li, Yu Wan, Zhiwei Cao, Binbin Xie, Tianxiang Hu, Shangjie Li, Binyuan Hui, Bowen Yu, Dayiheng Liu, Baosong Yang, Fei Huang, and Jun Xie. 2023. PolyLM: An open source polyglot large language model. arXiv preprint arXiv:2307.06018. Sean Welleck, Ximing Lu, Peter West, Faeze Brahman, Tianxiao Shen, Daniel Khashabi, and Yejin Choi. 2023. Generating sequences by learning to self-correct. In The Eleventh International Conference on Learning Representations. Haoran Xu, Young Jin Kim, Amr Sharaf, and Hany Hassan Awadalla. 2024. A paradigm shift in machine translation: Boosting translation performance of large language models. In The Twelfth International Conference on Learning Representations. Kyra Yee, Yann Dauphin, and Michael Auli. 2019. Simple and effective noisy channel modeling for neural machine translation. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5696–5701, Hong Kong, China. Association for Computational Linguistics. Chrysoula Zerva, Frédéric Blain, Ricardo Rei, Piyawat Lertvittayakumjorn, José G. C. de Souza, Steffen Eger, Diptesh Kanojia, Duarte Alves, Constantin Or˘asan, Marina Fomicheva, André F. T. Martins, and Lucia Specia. 2022. Findings of the WMT 2022 shared task on quality estimation. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 69–99, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. In The Eighth International Conference on Learning Representations. Wenhao Zhu, Hongyi Liu, Qingxiu Dong, Jingjing Xu, Shujian Huang, Lingpeng Kong, Jiajun Chen, and Lei Li. 2023. Multilingual machine translation with large language models: Empirical results and analysis. arXin preprint arXiv:2304.04675. 12100 A Appendix A.1 Implementation Optimizations While Algorithm 1 outlines the concept of fusing candidates, we introduce specific modifications to enhance efficiency. Firstly, to mitigate the computationally expensive calls to the QE model, we parallelize the exploration of all sentences in the test set, resembling a batched beam search. By doing so, we reduce the overall number of calls to the model (which depends on the number of divergent spans) by utilizing a larger batch. Additionally, we implement a hash table to track previously generated candidates, ensuring that we do not compute scores for the same sentence twice. Lastly, we incorporate an early exit mechanism, removing sentences for which no pending pseudo-generation step exists. These optimizations significantly impact the time complexity of our algorithm, which we empirically demonstrate to scale linearly with the number of candidates in Section 6.4. A.2 Datasets Table 5 presents information about the datasets used in our study, in terms of size and domain. More information is available in the synthesis articles from WMT22 (Freitag et al., 2022b) and WMT21 (Akhbardeh et al., 2021). Lang. Pair Source Sentences Domain en→de WMT22 2,037 News Conversational e-Commerce Social en→ru WMT22 2,037 News Conversational e-Commerce Social zh→en WMT22 1,875 News Conversational e-Commerce Social de→fr WMT22 1,984 News Conversational e-Commerce Social is→en WMT21 1,000 News Table 5: Test datasets used for evaluation. A.3 Results with Surface-based Metrics We provide the scores of the surface-based BLEU and ChrF metrics for LLMs in Table 6. The overall performance trends align with those of neuralbased metrics, indicating that larger models consistently achieve higher scores. Once again, QEfusion consistently surpasses reranking approaches. However, regarding surface-based metrics, beam search or greedy decoding frequently emerge as the top-performing methods, with our approach securing the second position. Still, our approach outperforms all other sampling-based methods. The outputs of beam search often exhibit predictability, while sampling introduces a layer of creativity to translations. 
Unfortunately, surfacebased metrics struggle to account for nuances like synonyms or significant restructuring, leading to potential penalties for such translations (a limitation which has contributed to a decline in their popularity). Table 7 presents BLEU and ChrF scores for the NMT models. Firstly, we observe that the NMT models perform significantly better than the LLMs in terms of surface-based metrics, which is consistent with previous findings (Chowdhery et al., 2023; Hendy et al., 2023; Zhu et al., 2023). The trend differs slightly compared to LLMs, as methods like QE-reranking and MBR score higher in terms of surface-based metrics, particularly for the highresource pairs en→de and en→ru. This divergence can be attributed, once again, to the limited diversity of translations generated by MT models, while our approach favors more creative translations, potentially penalized by these metrics. Nevertheless, for pairs where MT models perform worse, such as is→en and de→fr, QE-fusion consistently outperforms reranking approaches and occasionally beam search. A.4 Results using LLMs with 13B Parameters Tables 8 and 9 present the translation scores of the larger versions of Llama2 and ALMA with 13B parameters. The overall trend aligns with other LLMs: QE-fusion significantly outperforms reranking methods. A.5 Graphical Comparison of Main Scores The BLEURT scores of various methods, LLMs and language pairs from Table 1 are represented as radar charts in Figure 6. The shapes corresponding to our proposal, QE-fusion, always extend outside the others (only beam search, MBR with COMET and QE-reranking are plotted, for simplicity), for all of the four LLMs represented: Llama2-7B, Mistral-7B, ALMA-7B, and TowerBase-7B. The latter two models, though fine-tuned for MT, have large differences for is→en and de→fr. 
Figure 6 also shows the BLEURT scores for the two NMT models from Table 2, over the same 12101 LLM Method PolyLM-1.7B XGLM-2.9B Llama2-7B Mistral-7B ALMA-7B Tower-7B en→de Greedy 17.62 / 46.83 17.95 / 46.86 22.83 / 51.84 24.87 / 53.53 26.74 / 55.95 30.21 / 58.89 Beam 12.74 / 46.06 21.16 / 49.04 22.99 / 53.57 24.98 / 54.79 28.82 / 57.23 29.75 / 59.80 Sample 16.52 / 45.73 13.10 / 40.64 20.07 / 49.92 22.30 / 51.35 23.52 / 53.62 28.47 / 57.42 MBR-BLEU 19.11 / 47.85 18.40 / 46.88 22.26 / 51.26 23.78 / 52.84 25.97 / 55.21 30.35 / 58.77 MBR-COMET 18.94 / 48.22 17.65 / 46.84 22.37 / 51.95 24.18 / 53.40 25.91 / 55.64 30.51 / 59.03 QE-reranking 19.45 / 49.07 18.55 / 47.95 22.38 / 52.12 23.90 / 53.32 25.73 / 55.63 29.77 / 58.81 QE-fusion 20.40 / 50.27 19.39 / 49.11 23.17 / 52.95 24.26 / 54.17 25.94 / 56.02 29.64 / 58.95 en→ru Greedy 16.36 / 41.81 16.68 / 42.29 19.64 / 46.11 22.64 / 49.04 23.14 / 49.75 28.94 / 54.76 Beam 12.63 / 41.46 19.99 / 44.68 20.52 / 48.43 23.14 / 51.26 25.48 / 51.54 27.81 / 55.87 Sample 14.32 / 39.80 11.18 / 34.09 17.51 / 43.97 20.14 / 46.25 20.33 / 47.35 26.45 / 52.81 MBR-BLEU 17.42 / 42.31 16.38 / 41.20 19.51 / 45.85 22.27 / 48.24 22.77 / 49.25 28.76 / 54.41 MBR-COMET 16.84 / 42.77 15.31 / 40.92 19.12 / 46.09 21.81 / 48.55 22.07 / 49.52 28.29 / 54.52 QE-reranking 17.46 / 43.54 17.06 / 43.08 19.38 / 46.44 21.45 / 48.51 22.02 / 49.54 27.76 / 54.30 QE-fusion 18.32 / 44.89 17.34 / 43.95 20.04 / 47.32 22.06 / 49.44 22.43 / 50.30 27.94 / 54.75 zh→en Greedy 11.22 / 52.34 4.44 / 21.40 20.37 / 49.25 22.92 / 51.68 21.61 / 51.14 22.16 / 50.20 Beam 8.74 / 34.81 13.56 / 38.59 22.37 / 51.30 24.14 / 53.69 22.72 / 51.68 23.72 / 51.19 Sample 9.70 / 34.07 5.60 / 24.80 17.89 / 46.73 20.53 / 49.64 19.03 / 49.03 20.09 / 48.54 MBR-BLEU 11.20 / 36.60 8.93 / 31.87 19.98 / 49.01 22.39 / 51.70 20.87 / 50.79 21.90 / 50.26 MBR-COMET 10.85 / 36.23 8.73 / 32.24 19.17 / 48.67 21.76 / 51.43 20.35 / 50.73 21.56 / 50.12 QE-reranking 11.38 / 37.68 11.09 / 36.81 19.96 / 49.81 22.01 / 52.03 20.62 / 51.24 21.49 / 50.41 QE-fusion 12.46 / 39.36 11.58 / 37.64 20.44 / 50.72 22.31 / 52.63 20.93 / 51.92 22.06 / 51.33 de→fr Greedy 11.60 / 33.07 17.28 / 40.98 23.63 / 48.84 26.57 / 52.02 20.79 / 46.09 33.69 / 56.59 Beam 7.79 / 33.73 18.92 / 42.80 23.72 / 51.25 25.17 / 53.16 21.12 / 45.88 32.77 / 58.07 Sample 9.66 / 31.24 13.61 / 36.76 20.86 / 46.23 23.29 / 49.17 18.14 / 43.80 31.02 / 54.71 MBR-BLEU 12.15 / 34.57 16.45 / 40.93 23.14 / 48.45 25.54 / 51.05 20.37 / 45.83 32.90 / 56.36 MBR-COMET 11.82 / 34.52 16.08 / 41.08 22.81 / 48.62 25.31 / 51.46 19.92 / 45.81 32.92 / 56.62 QE-reranking 12.62 / 35.40 17.10 / 42.15 22.91 / 49.08 25.21 / 51.72 19.65 / 45.62 32.45 / 56.58 QE-fusion 13.18 / 36.31 18.10 / 43.24 23.55 / 50.01 25.36 / 52.26 20.76 / 46.90 31.97 / 56.77 is→en Greedy – – 14.68 / 36.57 22.60 / 44.73 35.72 / 58.20 14.84 / 36.76 Beam – – 15.47 / 37.03 22.49 / 45.64 37.34 / 59.26 14.48 / 37.02 Sample – – 13.70 / 35.03 20.14 / 43.08 32.20 / 55.48 13.54 / 35.61 MBR-BLEU – – 14.76 / 36.43 21.71 / 43.98 34.61 / 57.36 14.97 / 36.45 MBR-COMET – – 14.12 / 36.22 21.29 / 44.15 34.59 / 57.71 14.38 / 36.75 QE-reranking – – 15.00 / 37.36 21.95 / 45.31 34.37 / 57.69 15.25 / 37.56 QE-fusion – – 15.80 / 38.44 22.62 / 46.13 35.00 / 58.40 15.67 / 38.32 Table 6: Translation performance in terms of BLEU / ChrF scores for various methods, language pairs, and sizes of LLMs. Dashed lines separate deterministic decoding from existing sampling-based methods and from our approach. language pairs as above, excluding zh→en. 
Again, QE-fusion extends outside the other shapes for both NMT models. A.6 Temperature and Diversity In Figure 7, we present additional results regarding the impact of temperature on translation performance and pool diversity. We evaluate translation quality with COMET, demonstrating similar results to those in Section 6.3 with BLEURT. Higher temperatures enhance the results of both QE-reranking and QE-fusion but excessively high temperatures lead to a drop in performance. To measure diversity, we consider here two additional metrics: the average number of unique candidates in the pool and the semantic diversity, as defined by Farinhas et al. (2023), where u(x, y) is the utility function, in this case COMET, and yj, yi represent two different candidates from the pool: 1 − 1 N(N −1) N X i,j=1 j̸=i u(yj, yi) (1) The diversity metrics exhibit similar trends to our lexical diversity ones, with diversity increasing as the temperature rises. These results confirm that LLMs tend to produce more diverse outputs, a fact that contributes to explaining why QE-fusion is more effective on LLM outputs than on NMT ones. 12102 en-de zh-en is-en de-fr en-ru 52.5 55.0 57.5 60.0 62.5 65.0 67.5 70.0 Llama2-7B en-de zh-en is-en de-fr en-ru 60 62 64 66 68 70 72 74 Mistral-7B en-de zh-en is-en de-fr en-ru 62 64 66 68 70 72 74 76 NLLB-1.3B en-de zh-en is-en de-fr en-ru 60 62 64 66 68 70 72 74 76 ALMA-7B en-de zh-en is-en de-fr en-ru 55 60 65 70 75 TowerBase-7B en-de zh-en is-en de-fr en-ru 62 64 66 68 70 72 74 76 NLLB-3.3B beam MBR-COMET QE-reranking QE-fusion Figure 6: BLEURT scores for four methods combined with four LLMs and two NMT models, on five and four language pairs respectively. 12103 Multilingual NMT Method NLLB-1.3B NLLB-3.3B en→de Greedy 32.04 / 59.16 33.31 / 60.49 Beam 33.69 / 60.92 34.07 / 61.82 Sample 29.80 / 57.71 30.42 / 58.51 MBR-BLEU 31.49 / 58.48 32.19 / 59.81 MBR-COMET 31.19 / 59.00 32.11 / 60.18 QE-reranking 31.01 / 59.26 31.68 / 60.10 QE-fusion 30.90 / 59.41 31.59 / 60.32 en→ru Greedy 27.58 / 53.37 29.12 / 54.71 Beam 29.14 / 54.86 30.12 / 55.96 Sample 25.86 / 52.14 27.22 / 53.16 MBR-BLEU 27.34 / 53.19 28.62 / 54.36 MBR-COMET 27.42 / 53.51 28.55 / 54.50 QE-reranking 27.18 / 53.55 28.20 / 54.46 QE-fusion 26.98 / 53.60 28.18 / 54.74 zh→en Greedy 16.71 / 44.13 16.66 / 45.56 Beam 18.43 / 47.05 18.34 / 47.72 Sample 14.47 / 42.83 14.66 / 43.75 MBR-BLEU 16.52 / 45.26 17.30 / 46.57 MBR-COMET 16.42 / 45.54 16.98 / 46.38 QE-reranking 16.70 / 46.27 17.40 / 47.42 QE-fusion 17.13 / 47.00 17.51 / 47.98 de→fr Greedy 34.56 / 57.65 35.88 / 58.33 Beam 37.04 / 59.89 38.68 / 61.14 Sample 30.94 / 55.10 31.64 / 55.08 MBR-BLEU 33.42 / 56.90 34.74 / 57.77 MBR-COMET 33.38 / 57.25 34.40 / 57.87 QE-reranking 33.07 / 57.20 34.01 / 57.88 QE-fusion 33.13 / 57.62 33.54 / 58.12 is→en Greedy 31.62 / 54.93 32.60 / 55.56 Beam 32.96 / 56.25 34.10 / 57.01 Sample 28.78 / 52.95 29.35 / 53.53 MBR-BLEU 30.83 / 54.29 31.71 / 55.16 MBR-COMET 30.25 / 54.17 31.56 / 55.33 QE-reranking 30.60 / 54.48 31.10 / 55.31 QE-fusion 31.03 / 54.94 31.61 / 55.62 Table 7: Translation performance in terms of BLEU / ChrF scores for various methods, language pairs, and multilingual NMT models. 
LLM Method Llama2-13B ALMA-13B en→de Greedy 82.88 / 71.62 84.66 / 74.08 Beam 84.11 / 73.29 85.67 / 75.40 Sample 81.48 / 70.20 83.83 / 73.11 MBR-BLEU 82.36 / 71.03 84.37 / 73.77 MBR-COMET 85.08 / 73.41 86.04 / 75.40 QE-reranking 84.61 / 73.96 86.03 / 75.75 QE-fusion 85.17 / 74.41 86.32 / 76.09 en→ru Greedy 84.73 / 70.68 86.93 / 73.67 Beam 86.14 / 72.71 88.05 / 75.51 Sample 83.42 / 68.83 85.97 / 72.48 MBR-BLEU 84.19 / 69.94 86.80 / 73.42 MBR-COMET 87.01 / 72.44 88.54 / 75.12 QE-reranking 86.60 / 72.96 88.17 / 75.42 QE-fusion 87.20 / 73.60 88.53 / 76.03 zh→en Greedy 79.37 / 66.27 80.04 / 67.18 Beam 79.67 / 67.30 80.51 / 68.15 Sample 78.53 / 65.19 79.28 / 66.05 MBR-BLEU 79.28 / 66.08 79.83 / 66.83 MBR-COMET 80.60 / 66.97 81.20 / 67.76 QE-reranking 80.54 / 67.60 81.13 / 68.44 QE-fusion 80.94 / 68.04 81.58 / 68.96 de→fr Greedy 78.94 / 63.88 77.58 / 60.69 Beam 80.08 / 65.86 78.30 / 61.70 Sample 77.88 / 61.84 75.53 / 57.78 MBR-BLEU 78.88 / 63.42 77.01 / 59.62 MBR-COMET 81.35 / 65.73 79.88 / 62.39 QE-reranking 80.99 / 66.43 79.24 / 62.52 QE-fusion 81.63 / 67.11 79.73 / 63.36 is→en Greedy 71.02 / 56.63 85.94 / 75.18 Beam 71.94 / 58.14 86.16 / 75.47 Sample 70.63 / 56.02 85.21 / 74.16 MBR-BLEU 71.18 / 56.71 85.96 / 75.04 MBR-COMET 73.72 / 58.12 86.67 / 75.65 QE-reranking 74.00 / 59.93 86.65 / 76.04 QE-fusion 74.90 / 61.17 86.82 / 76.21 Table 8: Translation performance of LLMs (13 billion parameters) in terms of COMET / BLEURT scores for various methods and language pairs. 12104 LLM Method Llama2-13B ALMA-13B en→de Greedy 26.40 / 54.83 28.56 / 57.35 Beam 26.43 / 56.22 30.27 / 59.04 Sample 23.96 / 52.94 26.46 / 55.68 MBR-BLEU 25.39 / 54.04 28.53 / 56.88 MBR-COMET 25.27 / 54.45 28.26 / 57.10 QE-reranking 25.57 / 54.81 28.33 / 57.52 QE-fusion 26.08 / 55.49 28.10 / 57.66 en→ru Greedy 23.30 / 49.84 26.56 / 52.12 Beam 23.72 / 51.17 27.86 / 54.04 Sample 21.10 / 47.81 23.55 / 49.80 MBR-BLEU 22.76 / 49.14 25.49 / 51.38 MBR-COMET 22.67 / 49.59 25.13 / 51.54 QE-reranking 22.56 / 49.51 24.75 / 51.44 QE-fusion 22.90 / 50.24 24.92 / 52.05 zh→en Greedy 22.48 / 51.63 24.27 / 53.92 Beam 24.48 / 53.73 26.58 / 55.13 Sample 19.70 / 49.36 21.73 / 51.81 MBR-BLEU 21.98 / 51.41 23.46 / 53.52 MBR-COMET 21.42 / 51.34 23.05 / 53.59 QE-reranking 21.51 / 51.63 23.37 / 53.93 QE-fusion 21.70 / 52.02 23.32 / 54.36 de→fr Greedy 26.67 / 51.87 23.11 / 47.54 Beam 25.49 / 53.13 21.69 / 45.03 Sample 24.33 / 49.44 20.46 / 44.98 MBR-BLEU 26.19 / 51.31 22.56 / 46.83 MBR-COMET 25.97 / 51.68 22.55 / 47.04 QE-reranking 26.62 / 52.18 21.82 / 46.54 QE-fusion 26.71 / 52.70 22.39 / 47.35 is→en Greedy 19.82 / 41.59 34.75 / 57.55 Beam 19.92 / 42.40 35.72 / 58.00 Sample 17.48 / 39.97 31.05 / 54.90 MBR-BLEU 19.50 / 41.43 33.52 / 56.70 MBR-COMET 18.99 / 41.32 33.33 / 56.62 QE-reranking 19.53 / 42.12 34.27 / 57.74 QE-fusion 20.40 / 43.14 34.48 / 58.12 Table 9: Translation performance of LLMs (13 billion parameters) in terms of BLEU / ChrF scores for various methods and language pairs. 0.0 0.2 0.4 0.6 0.8 1.0 75.0 77.5 80.0 82.5 85.0 COMET 0.0 0.2 0.4 0.6 0.8 1.0 1 2 3 4 5 Unique candidates 0.0 0.2 0.4 0.6 0.8 1.0 Temperature 0.1 0.2 0.3 0.4 0.5 Semantic diversity XGLM-2.9B NLLB-1.3B QE-reranking QE-fusion Figure 7: Effect of temperature on the diversity of the pool (below) and its resulting impact on translation performance (above) using LLMs and NMT models for en→de translation. 12105
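For completeness, the two additional diversity measures described in Appendix A.6 can be sketched as follows; the utility argument is a placeholder for COMET, and the exact implementation used in our experiments may differ in details such as batching.

```python
from itertools import permutations
from typing import Callable, List

def semantic_diversity(pool: List[str],
                       utility: Callable[[str, str], float]) -> float:
    """Semantic diversity of a candidate pool (Appendix A.6, Eq. 1):
    one minus the average pairwise utility over all ordered pairs of
    distinct candidates; `utility` stands in for COMET here."""
    n = len(pool)
    if n < 2:
        return 0.0
    total = sum(utility(a, b) for a, b in permutations(pool, 2))
    return 1.0 - total / (n * (n - 1))

def unique_candidates(pool: List[str]) -> int:
    """The simpler diversity measure: the number of distinct candidates."""
    return len(set(pool))
```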
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12106–12118 August 11-16, 2024 ©2024 Association for Computational Linguistics Generating and Evaluating Plausible Explanations for Knowledge Graph Completion Antonio Di Mauro1,3*, Zhao Xu1, Wiem Ben Rim1,2, Timo Sztyler1, Carolin Lawrence1 1NEC Laboratories Europe, Germany; 2University College London, UK [email protected], [email protected], [email protected] Abstract Explanations for AI should aid human users, yet this ultimate goal remains under-explored. This paper aims to bridge this gap by investigating the specific explanatory needs of human users in the context of Knowledge Graph Completion (KGC) systems. In contrast to the prevailing approaches that primarily focus on mathematical theories, we recognize the potential limitations of explanations that may end up being overly complex or nonsensical for users. Through in-depth user interviews, we gain valuable insights into the types of KGC explanations users seek. Building upon these insights, we introduce GradPath,1 a novel path-based explanation method designed to meet humancentric explainability constraints and enhance plausibility. Additionally, GradPath harnesses the gradients of the trained KGC model to maintain a certain level of faithfulness. We verify the effectiveness of GradPath through well-designed human-centric evaluations. The results confirm that our method provides explanations that users consider more plausible than previous ones. 1 Introduction Explainability is an essential requirement of AI, especially in high-risk areas, such as healthcare, which directly influence the life and health of humans. Only if users understand why an AI system arrives at a particular result can they trust the prediction and engage with the AI-driven system. This makes eXplainable AI (XAI) an essential ingredient for the adoption of AI in high-risk areas (Han and Liu, 2022; Chaddad et al., 2023). However, most existing XAI approaches aim to learn algorithmic explanations, which are supported by mathematical theories (like other ML methods), but are *Research work conducted during internship at NEC Laboratories Europe. 1More information is available at: https://github.com/ nec-research/gradpath. Figure 1: Explain a predicted triple (in red) with the top 5 most influential training triples (in blue), learned by Gradient Rollback (Lawrence et al., 2021). The algorithmic explanations are not plausible for users. often difficult for users to understand, especially for users without an AI background. Not only do such implausible explanations not improve user trust in AI, in a contrary manner, they amplify users’ doubt and hesitation. With this concern, we focus to improve plausibility of explanations, which refers to how convincing explanations are for human users (Jacovi and Goldberg, 2020). For the prediction task with knowledge graph data, a.k.a. Knowledge Graph Completion (KGC), the concept of plausibility is especially tricky, because non-expert users are not familiar with graph structures and the existing explanation methods do not aim to generate graph explanations in a human understandable manner (Betz et al., 2022; Lawrence et al., 2021; Pezeshkpour et al., 2019). Fig. 1 illustrates the challenge with a KGC prediction (in red) and its explanations (in blue) generated by Gradient Rollback (GR) (Lawrence et al., 2021). 
In particular, the predicted triple (Angela, is_sister_of, Marco) is explained with the top 5 most influential training triples that are selected as per gradients of the learned KGC model. Although the provided explanations are faithful subject to a bound on the approximation error (Lawrence et al., 2021), they are difficult for users to understand. For example, how can the training triple (Charlotte, is_sister_of, Colin) explain the prediction? It is not plausible for users.

To better understand what kind of explanations would be more plausible for users in the context of KGC, we started with a series of interviews. We provided users with predicted triples and GR-based explanations and interviewed them to understand their opinions about the explanations. This investigation led to an insightful finding: interviewees remarked that the explanations linked to a path are the most plausible, see e.g. Fig. 2(a). Here we learned that Angela is indeed the sister of Marco because Angela is the daughter of Pierre whereas Marco is the son of Pierre, thereby explaining the prediction plausibly. The explanations are no longer scattered; instead, they form a connection between the entities in the predicted triples. This allows users to link them for reasoning.

Building upon this insight, we introduce GradPath, a novel method for generating human-centric explanations to enhance plausibility. The path-based explanations intend to establish reasoning loops akin to human reasoning. Technically, the existence of the explanation path can increase the likelihood of the predicted triple, compared with the scenario wherein the path does not exist. To approximate the probabilities, GradPath utilizes gradients collected by GR during training, which ensures a certain level of faithfulness of the generated explanations. These explanation paths can vary in length, as illustrated in Fig. 2.

Evaluating plausibility is also challenging, as it inherently relies on human perception (Lyu et al., 2024; Wood-Doughty et al., 2021). There is not yet a standard way in the literature to evaluate plausibility of explanations (see also Appendix A.1). To draw solid conclusions about plausibility, we investigate diverse aspects of a human evaluation study, and introduce a comprehensive human-centric evaluation framework. We utilize human-understandable benchmark data and quantify plausibility with different metrics. The human evaluations validate that GradPath produces more plausible explanations for users in comparison to previous XAI KGC methods.

The major contributions of the paper can be summarized as follows:
• We propose a novel method GradPath to improve plausibility of post-hoc explanations for KGC by extracting explanation paths.
• We suggest a human-centric evaluation framework to evaluate plausibility. It investigates diverse aspects of a human evaluation study for comprehensive assessment of plausibility.
• Experiments on the benchmark datasets demonstrate the proposed method is more plausible than previous ones.

Figure 2: Explanations (in blue) learned with GradPath for a predicted triple (in red): (a) explanation path of length ℓ = 2 (score = 0.2891), and (b) length ℓ = 3 (score = 0.2497). The path-based explanations are more plausible for human users because they create a connection between the entities in the predicted triple.

2 Background and Notation

Assume a knowledge graph G, consisting of entities and relation types.
Two entities and a relation are combined to form a triple in the format (subject, relation, object), shortened as t = (s, r, o). The subject and object are entities that can be visualized as the nodes of the graph G, while the relation specifies a labeled link from the subject entity to the object one. Typically knowledge graphs are incomplete. Therefore, Knowledge Graph Completion (KGC) is the task of predicting unknown triples in the knowledge graph. For this, two parts of a triple are given to a prediction system and the third is inferred. For example, t∗= (s∗, r∗, ?) would predict the object given the particular subject and relation. We consider differential KGC methods, such as DistMult (Yang et al., 2015) and Complex (Trouillon et al., 2016). They learn embedding vectors xe for each entity e and xr for each relation type r. Given the vectors, the likelihood of a triple is defined via a score function ϕ(xs, xr, xo). During training, the vectors are optimized, such that the likelihood of a training triple is larger than that of a randomly sampled unknown triple. These methods can often well predict an object o∗for a test triple t∗= (s∗, r∗, ?), but they are black boxes without explanations for users’ trust. The task of our work is to explain why the KGC model trained on the graph G arrives at a particular prediction. Existing work achieved this by focusing on purely mathematical aspects. For instance, Criage (Pezeshkpour et al., 2019) utilized influence functions, and Gradient Rollback (Lawrence et al., 2021) tracked gradients during training to produce explanations. 12107 In contrast to previous work, which focused on algorithmic faithfulness, we place our focus on the human factor: extending algorithmic XAI to also respect human needs and produce explanations that are plausible for users. With this, we aim to combine the best of both worlds and therefore move KGCs closer to real-world applications with human-centric explanations to facilitate trust. 3 GradPath: Generate Plausible Explanations for KGC We propose the method GradPath to learn humancentric explanations for KGC methods, such that the explanations are plausible for human users, and facilitate them to assess whether the KGC predictions are reasonable or not. 3.1 Initial Human-Centric Survey We started with an initial survey to assess what individuals think about KGC predictions and explanations learned with the recent XAI method GR. We first used the dataset Kinship (Hinton, 1990), which is about familial relations and easy for testers to understand. We asked three testers2 to assess whether they think the kinship predictions are true or false based on the explanations. We further interviewed them by the following questions: What will be a helpful explanation? Why do you think an explanation is helpful? Next, we repeated the interview using real-world user preference data collected from a recommender system.3 We interviewed two testers4 with the same questions. Upon the collected feedback, we concluded the following two significant findings. First, the interviews revealed that the testers often searched for “paths” that link the nodes of the predicted triple to the nodes of explanations. See for example the “triangle” explanation in Fig. 2(a), where two triples as the explanations can connect the two nodes of the predictions with another node in a triangle relationship. 
In situations where explanations do not connect to the predicted triples (e.g., 2The testers have a good ML background but did not know this particular task. 3The data is about user preferences regarding the products of a company. It consists of the attributes of the products and users, alongside information of user preferences for specific products. We transformed the data into a knowledge graph. Due to user privacy and commercial confidentiality of the company providing the dataset, we are unable to publicly disclose the dataset. 4The two interviewees are the engineers of the recommendation system. They are closely familiar with the end users and possess a strong understanding of users needs. the scattered explanations in Fig. 1), the testers considered the explanations nonsensical. Second, the testers often found a small set of explanations plausible (2-3) and remarked that a large number of explanations (e.g., > 10) create confusion. 3.2 Defining Path-based Explanations Inspired by the findings from the interviews, we suggest extracting influential paths as explanations for a prediction, where the paths connect the two entities of the prediction. The explanation paths intend to emulate the sequential reasoning process humans use to deduce over a knowledge graph, therefore making the resulting explanations more human-centric, namely more plausible for humans. Formally, we define: Definition 1. An explanation is an influential path of length ℓ, denoted as γ = {t1, . . . , tℓ| tj ∈G}, with the constraints: 1. All tj are selected from training triples; 2. There is a joint entity between two adjacent triples tj and tj+1; 3. The subject s∗and object o∗of the test triple are entities of the start triple t1 and the end triple tℓ, respectively; 4. Sequentially, existence of each current triple has the potential to increase the likelihood of the subsequent triple; 5. Ultimately, at the end of the path, the predictive probability of the test triple is improved when compared with the scenario wherein the path of triples does not exist. Thus, an influential path is a series of training triples that connect the subject and object entities of the predicted triple and make the predicted triple more likely. Such influential paths are the explanations that the testers searched for in our initial interviews, thereby motivating the concept of influential paths from a human point of view. Technically, we look for a path that increases the likelihood of the predicted triple, compared with the scenario wherein the path does not exist and therefore decreases the likelihood of the predicted triple. Fig. 3 illustrates the key idea. Note that the influential paths are not supposed to be directed. Regardless of the direction of a triple, it provides the testers with the information for reasoning. For instance, 12108 Figure 3: GradPath: (a) G′ ≡G \ γ in which the path γ is removed from G; (b) the likelihood of the subsequent triple increases if the current triple exists; (c) at the end of the path, the predictive probability of the test triple t∗= (s∗, r∗, o∗) will be improved if the entire path exists. the paths in Fig. 2 are not directed yet plausible. The testers can follow the path-based explanations to understand how a KGC method arrives at a prediction, and to judge the prediction by comparing the predictive procedure of KGC with the mental models in their minds. 3.3 Generating Path-based Explanations Based on the formulation above, we now turn to how the influential paths can be generated. 
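As a concrete illustration of the structural part of this procedure, the sketch below (not the authors' code; the function, variable names, and toy graph are ours) enumerates undirected candidate paths between the subject and object of a test triple, i.e. paths satisfying constraints 1–3 of Definition 1. Constraints 4 and 5 are then handled by the gradient-based scoring described next.

```python
from collections import deque

def candidate_paths(train_triples, s_star, o_star, max_len=3):
    """Enumerate paths of training triples satisfying constraints 1-3 of
    Definition 1: triples come from the training set, adjacent triples share
    an entity, and the path connects s* to o*. Triples are treated as
    undirected, as in the paper."""
    # adjacency map: entity -> list of (triple, neighbouring entity)
    adj = {}
    for (s, r, o) in train_triples:
        adj.setdefault(s, []).append(((s, r, o), o))
        adj.setdefault(o, []).append(((s, r, o), s))

    paths, queue = [], deque([(s_star, [])])
    while queue:
        entity, path = queue.popleft()
        if entity == o_star and path:
            paths.append(path)        # path ends once o* is reached
            continue
        if len(path) == max_len:
            continue
        for triple, nxt in adj.get(entity, []):
            if triple not in path:    # do not reuse a triple within a path
                queue.append((nxt, path + [triple]))
    return paths

# toy usage (entity and relation names are illustrative)
G = [("Angela", "is_daughter_of", "Pierre"),
     ("Marco", "is_son_of", "Pierre"),
     ("Charlotte", "is_sister_of", "Colin")]
print(candidate_paths(G, "Angela", "Marco", max_len=2))
```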
The paths that satisfy the constraints 1-3 can be readily computed with arbitrary graph traversal algorithms, such as Depth-First Search (DFS) and BreadthFirst Search (BFS). However, the computation of the constraints 4 and 5 is tricky. Mathematically, they can be written as p(tj+1|G′∪{tj, . . . , t1}) > p(tj+1|G′), (1) p(t∗|G) > p(t∗|G′), (2) where G′ ≡G \ γ denotes the rest of G in which the path γ is removed, p(tj+1|G′ ∪{tj, . . . , t1}) is the predictive probability of the next triple tj+1 given G′ and the triples preceding tj+1 in the path γ. The other terms in (1) and (2) are defined in a similar manner. Intuitively, each term computes the probability of a triple (the predicted triple t∗or a triple that is part of the path γ) under the simulation of what if certain triples from the path γ had not been part of G during training. We consider both the influence of the path γ on the predictive triple t∗(Constraint 5, Eq. (2)), and the influence of other triples on the next one (Constraint 4, Eq. (1)). As the conditions of each prediction is different (i.e. DFS/BFS returns different paths), the probability needs to be computed for each prediction and explanation path individually. To be fully faithful, we would have to retrain a new KGC model for each ‘what if’ analysis. But the computational cost for this is prohibitively expensive. To address the problem, we introduce an approximate approach, which is based on gradients of training triples (Lawrence et al., 2021). In our work, we take a step further to identity influential paths, instead of single training triples, for a test triple. Concretely, we approximate the ’what if’ analysis as follows. First we train the differentiable KGC model with entire training triples G, and during training employ GR to compute changes of embedding vectors caused by each training triple. Formally, a triple t = (s, r, o) causes a change in the embedding vector xs of its subject, which is approximated by δ(xs|t) = XN i=1 ηi∇xsLi(t), (3) where i denotes the i’th training iteration, and ηi is the learning rate. Li(t) specifies the loss of the triple t at the i’th iteration, while ∇xsLi(t) means the derivative of the loss over xs. The changes δ(xr|t) and δ(xo|t) of the object and relation embedding vectors xo and xr are computed in an equivalent manner. The approximation error of the approach has been linked to stability theory (Hardt et al., 2016) and bound by Lawrence et al. (2021). Second, given the influence of each training triple t, we can now approximate what would happen if a path had been removed from the training data without retraining the model. By removing the influence of an explanation path, we can simulate the probability of any triple that we would have had if the triple had not been part of the training data. Based on this, we can approximate the terms of Eqs. (1) and (2). In particular, for the predictive probability p(tj+1|G′) of the triple tj+1 = (sj+1, rj+1, oj+1), we approximate the new embedding vectors ¯xsj+1 and ¯xoj+1 conditioned on the training set G′ ≡G \ γ by ¯xsj+1 = xsj+1 + δ(xsj+1|tj) (4) ¯xoj+1 = xoj+1 + δ(xoj+1|tj+2) (5) 12109 and the vector of the relation rj+1 is updated by ¯xrj+1 = xrj+1 + Pℓ i=1 δ(xri|ti)1(rj+1 = ri), (6) where the indicator function 1(x = y) is one if the condition x = y is true and zero otherwise. Intuitively, this means that we update subject sj+1 by removing the influence of the connected triple tj (analogous for the object oj+1). 
The relation rj+1 is updated only if the same relation appears again in the explanation path. After the influence of the path has been updated, the predictive probability p(tj+1|G′) can be computed with the score function ϕ(¯xsj+1, ¯xrj+1, ¯xoj+1) of the KGC method. The other predictive probabilities in Eqs. (1) and (2) can be calculated in a similar manner. Finally, putting the constraints 4 and 5 together, we score an explanation path γ for a test triple t∗: 1 L XL j=2 p(tj|G′ ∪{tj−1, . . . , t1}) + p(t∗|G) − XL j=2 p(tj|G′) −p(t∗|G′)  . (7) The score function (7) measures the importance of the candidate paths between the subject and object of the test triple t∗, and identifies the ones which most likely increase the likelihood of t∗as explanation paths.5 Our method generates path-based explanations using predictive probabilities of the involved triples. Along the path, the existence of each current triple has the potential to increase the likelihood of the next triple. When the entire path exists, the likelihood of the test triple is ultimately improved.6 3.4 Faithfulness vs. Plausibility Faithfulness and plausibility represent two pivotal principles in XAI. Faithfulness demands explanations accurately reflect the model’s reasoning process, while plausibility necessitates alignment with human reasoning. Our GradPath method aims to find a balance between faithfulness and plausibility. On one hand, GradPath leverages the gradients of training triples to identify influential paths. These gradients, collected during training using 5Particularly it is the explanation path where its removal during the ‘what if’ analysis caused the largest drop in likelihood of t∗we want to explain - i.e. the explanation path had the largest contribution to raising the likelihood of t∗. 6If the new likelihood is instead lowered, then the explanation path could be served as an ‘anti-explanation’. GR (Lawrence et al., 2021), reflect the true reasoning process of KGC models, thereby guaranteeing some level of faithfulness in the explanations. On the other hand, GradPath’s path-based explanations are designed to mimic human reasoning over a knowledge graph, enhancing plausibility compared to explanations based solely on individual training triples (see Fig. 2 vs. Fig. 1). It is worth noting that this improved plausibility may come with a potential reduction in faithfulness. The reason is as follows: Given the computational complexity of generating influential paths (see Sec. 3.3 for details), approximations are utilized. While these approximations stem from GR, the bounds of GR’s approximation errors do not inherently apply to GradPath. GradPath removes a sequence of training triples around the predicted one, which may alter the embedding space of the predicted entities due to potentially accumulated approximation errors. We acknowledge the potential decrease in faithfulness, prioritizing explanations that are plausible for human users while maintaining a certain level of faithfulness. 4 Evaluation of Plausibility GradPath is designed to improve plausibility of explanations, which necessarily leads to a question: How do we evaluate plausibility? To address this, we present a human-centric evaluation framework to assess plausibility in a rigorous manner. Plausibility refers to how convincing explanations are for users (Jacovi and Goldberg, 2020). 
In analogy to, for instance, accuracy in classification, there is lack of solid and deterministic ground truth to automatically compute a single score as a measurement of plausibility. The measurement of plausibility is intrinsically based on human perception. Thus, we design a comprehensive human evaluation study to quantify plausibility. 4.1 Purpose Factor In the human evaluation of XAI, a commonly used method is human simulatability, which measures the extent to which explanations can assist users in predicting the output of the model (Yin and Neubig, 2022; Hase and Bansal, 2020; Lage et al., 2019; Doshi-Velez and Kim, 2017). However, human simulatability is an artificial construct and does not gauge the actual use cases of explanations, such as what a user would do with an explanation in a real-world setting. 12110 Here, we aim to shift the focus to the purpose factor of human users with XAI, which is the specific objective why explanations are needed by users. Imagine that a doctor interacts with an AI-driven diagnosis system. The doctor wants to know if an AI prediction is true or false, such that she can use it for her task of prescribing treatments for a patient. Therefore the role of the explanations is to help the doctor to solve her question: Is the AI-driven prediction correct? If the explanations match the mental model of the doctor and/or the doctor can easily follow the logical reasoning of the explanations, then she likely trusts the prediction. Considering this ultimate goal, evaluations where humans simulate AI behavior do not align with the intended purpose of the users. We therefore investigate in the human evaluation: Can testers assess correctness of predicted triples with help of explanations? In particular, we ask the following questions to each tester: • Is the predicted triple correct? (Two scales, see below) • How confident is the assessment? (Scale: 5 point Likert) • Is an explanation helpful? (Scale: yes and no) For the first question, we employ two distinct scales. The first scale comprises three options: yes, no, and not-sure, used for in-house evaluations conducted by testers possessing a ML background or with access to support in case of queries. The second scale offers a more detailed range of 5 options (shown in Fig. 4), used for crowdsourcing evaluations to mitigate potential misunderstanding. The testers’ feedback about the question is processed as follows. If a tester’s judgement coincide with the ground truth then the feedback is recorded as 1 otherwise 0. For the feedback of not-sure, it is recorded as 0. The suggested human evaluation method is related to the decision-making-based approach, such as the one proposed by Alufaisan et al. (2021), but differs slightly. While the decision-making-based evaluation focuses on the success of human users in making correct predictions based on explanations, our method shifts to a more purpose-driven evaluation. It asks testers to judge the correctness of predictions by comparing the explanations with their prior knowledge and reasoning. 4.2 Human-Related Concerns Human evaluation collects subjective ratings from individual testers to assess the plausibility of the explanations. To have reliable feedback, the folFigure 4: The online evaluation platform developed to collect feedback from testers regarding the explanations. (1) Displays the prediction. (2) Shows the explanations and prompts testers to evaluate their helpfulness. 
(3) Visualizes the prediction and explanations in a graph format for easier comprehension. (4) Asks testers to assess the correctness of the prediction and their confidence in it (refer to Appendix A.5 and A.6 for more information). lowing concerns need to be addressed (Hase and Bansal, 2020; Gajos and Mamykina, 2022). Balance: Predictions are balanced. Namely, the correctly and erroneously predicted triples should be of similar number, and randomly ordered, such that testers cannot simply guess their correctness. Diversity: Testers may remember information or knowledge of previous test triples, which means the earlier predictive triples judged by a tester may influence their judgment on later ones. Thus, we propose that the predictions should be distinct from each other. Furthermore, confounders need to be investigated, which are other factors potentially influencing user feedback besides the XAI method itself. In addition, human-understandability of benchmark data used in an evaluation is also essential. The details are reported in Appendix A.3 and A.4. 4.3 Quantification of Plausibility To quantify the plausibility of explanations, we suggest the following metrics, based on Zhou et al. (2021) and interviews with testers in the initial survey: • Accuracy rate of assessment • Number of helpful explanations for a triple • Time cost for a tester to assess a prediction • Confidence level of assessment These metrics are abbreviated as Acc, helpExpl, Time, and Confidence, respectively, in the experi12111 ments. The accuracy rate metric appears to be a good measure of plausibility since it is based on well-defined ground truth (the correctness of predictions). This metric could be less influenced by the diversity of testers (e.g., fast/slow thinking modes). The empirical analysis in Sec. 5 also demonstrates its effectiveness. Regarding the metric number of helpful explanations,7 there are two divergent opinions. On one side, more explanations may not be necessary if a single explanation explicitly explains a prediction. On the other side, more explanations are always beneficial as predictions can be explained from different perspectives, and users have different needs and interests. There would not be one single perfect explanation that satisfies everyone. We believe that combining this metric with others can provide more insights. For example, together with the accuracy rate of assessment, we can demonstrate if more explanations may improve users’ assessments. Another disputable metric is the time cost of assessment. Ideally, testers use less time for assessment if explanations are plausible. However, in practice, we found after interviewing the testers that they often skip a prediction when the explanations (e.g., the explanation in Fig. 1) are meaningless. Conversely, only when the explanations (e.g., Fig. 2) appear reasonable do they explore them carefully. Therefore, both too little and too much time can indicate bad explanations, whereas good explanations occupy the space in between. To assess whether the observed differences in feedback are genuine or merely due to random variation, we employ the non-parametric significance test Brunner-Munzel (Brunner and Munzel, 2000). This test is preferred over the commonly used ttest, as it does not assume normality of the data (see Appendix A.2 for further discussion). 5 Experiment Settings and Results Following the human-centric evaluation framework outlined in Sec. 
4, we investigated plausibility of GradPath with two human-understandable benchmark datasets: Kinship (Hinton, 1990) and CoDEx (Safavi and Koutra, 2020) (see details of the datasets in Appendix A.4). An online evaluation platform was developed to collect feedback from testers, as illustrated in Fig. 4. We conducted 7The metric number of helpful explanations (shortened as helpExpl) is collected based on the tester’s answer to the question, “Is an explanation helpful?” Fig. 4 illustrates how the evaluation platform collects feedback from the testers. a comparison between our human-centric method GradPath and the algorithmic explanation method Gradient Rollback (GR) (Lawrence et al., 2021) to investigate whether GradPath can offer more plausible explanations. 5.1 Experiment Settings Kinship. We first utilized the kinship dataset (Hinton, 1990) because of its easy human understandability. Although the dataset is small in size, it presents key challenges found in commonly used knowledge graphs, such as multiple relations and 1:n relations between entities. We employed Complex (Trouillon et al., 2016) as the knowledge graph completion (KGC) method to be explained due to its popularity and strong performance. The parameters were set as follows: embedding dimension is 10, learning rate is 0.001, number of negative samples is 13, batch size is 1, epochs is 100, and optimizer is Adam. Gradient Rollback (GR) presented the top 5 important training triples as explanations. Our method utilized the top-1 path-based explanation (ranked by our score), where a path is of length less than four. Five-fold cross-validation was employed to compute predictions and explanations for all triples. We invited 30 coworkers8 for in-house evaluation and received feedback from 23 of them. In addition, we recruited 105 lay testers9 via Amazon Mechanical Turk (AMT) for crowdsourcing evaluation. The details of the settings are discussed in Appendix A.6 and A.7. For the question “Is the predicted triple correct?” we utilized a scale of 3 options for the in-house evaluation and a scale of 5 options for the crowdsourcing evaluation, as detailed in Sec. 4.1. We conducted both in-house and crowdsourcing evaluations for the following benefits: (1) The coworkers intended to be well-engaged, providing high-quality feedback and can be further interviewed for detailed comments. (2) The crowdsourcing evaluation helped us obtain feedback from general users without a background in ML. The results shed light on their understanding of explanations. (3) By comparing the two evaluations, we could investigate the influence of tester diversity and explore the reliability of the metrics. CoDEx. We further evaluated GradPath with CoDEx (Safavi and Koutra, 2020), which is a more complex and recent KGC dataset extracted from 8They have a good background in machine learning, but are not familiar with XAI and know nothing about our work. 9Each tester is an individual without ML background. 12112 Figure 5: Human evaluation results of the Kinship data with in-house testers possessing a good ML background (top) and crowdsourced lay testers (bottom). GradPath provided more plausible explanations in both cases for testers to assess the correctness of the predictions. Wikidata and Wikipedia. We employed DistMult (Yang et al., 2015) as another popular and effective KGC method to be explained. 
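For reference, the score functions ϕ(xs, xr, xo) of the two KGC models explained in this work — ComplEx on Kinship and DistMult on CoDEx — are simple trilinear products over the learned embedding vectors. A minimal NumPy sketch (illustrative only, not the training code used in the experiments):

```python
import numpy as np

def distmult_score(x_s, x_r, x_o):
    # DistMult (Yang et al., 2015): trilinear dot product of real-valued
    # embeddings; higher scores indicate more likely triples.
    return np.sum(x_s * x_r * x_o)

def complex_score(x_s, x_r, x_o):
    # ComplEx (Trouillon et al., 2016): real part of the Hermitian product
    # of complex-valued embeddings.
    return np.real(np.sum(x_s * x_r * np.conj(x_o)))

# toy usage with random embeddings of dimension 4
rng = np.random.default_rng(0)
x_s, x_r, x_o = rng.normal(size=(3, 4))
print(distmult_score(x_s, x_r, x_o))
z_s, z_r, z_o = rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))
print(complex_score(z_s, z_r, z_o))
```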
The parameters were set as follows: embedding dimension is 1024, learning rate is exponential decay with an initial value of 0.3, number of negative samples is 20, batch size is 1, epochs is 500, and optimizer is Adam. GR reported the top 10 training triples as explanations. Our method selected the top N paths as explanations, where N varied dynamically to ensure that the cumulative number of training triples is not larger than 10. Here, we followed Miller’s law, which suggests that people can absorb at most 9 pieces of information (Miller, 1956). We recruited 105 lay testers via Amazon Mechanical Turk (AMT) (details in Appendix A.6). More information about the settings of the human evaluation can be found in Appendix A.8. 5.2 Results with Kinship Data Fig. 5 presented the Kinship results regarding the metrics defined in Sec. 4.3. GradPath outperformed GR in all four metrics in both in-house and crowdsourcing evaluations. The in-house evaluation revealed GradPath provided more plausible explanations (helpExpl: 1.64 vs. 0.55), allowing testers to more accurately assess the correctness of predictions (Acc: 83% vs. 35%).10 The crowdsourcing AMT evaluation reported similar trends. Interestingly, the in-house testers used less time than the crowdsourced lay testers. It might be because of good ML background of the in-house testers. The two human evaluations demonstrated that GradPath prioritizes user needs in its design, resulting 10To gain further insights into the plausibility of the generated explanations, we analyzed testers’ assessments in more details (see Appendix A.9). Figure 6: Human evaluation results of the CoDEx data with crowdsourced lay testers. GradPath provided more plausible explanations for testers to assess the correctness of the predictions. in explanations that are more plausible for users to assess the correctness of the predicted triples. To demonstrate the observed differences are statistically significant and not due to chance, we conducted the Brunner-Munzel (BM) test and reported the results in Table 1. Overall, the differences between GradPath and GR were significant in both in-house and crowdsourcing evaluations. The pvalues for the metrics Acc, time, and helpExpl were much lower than the threshold α = 0.05 in both evaluations, revealing strong statistical significance of the differences between GradPath and GR. The results also indicated that the testers were confident, with a confidence metric of 1.71 for GR and 1.77 for GradPath (the upper bound being 2), showing no significant differences (p-values above 0.4). Upon analyzing the testers’ responses, we observed that they primarily evaluated predictions when confident in their assessments; otherwise, they responded with “not sure” to the question “Is the predicted triple correct?” 5.3 Results with CoDEx Data The results for CoDEx are reported in Fig. 6. Again, GradPath provided more plausible explanations for testers to assess the correctness of the predictions. With the path-based explanations generated by GradPath, the testers can judge the correctness of the predicted triples more accurately (58.0% vs. 44.7%) with less time (11.79 vs. 14.6 sec.) To investigate the significance of the observed differences, the results of the BM test are reported in Table 1. The p-values 0.0000 for Acc and 0.0391 for helpExpl showed strong statistical significance of the differences between GradPath and GR. 
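The Brunner–Munzel test used throughout this section is available off the shelf in SciPy; the sketch below (with made-up per-summary scores, not the collected ratings) shows how feedback under the two explanation methods would be compared:

```python
from scipy.stats import brunnermunzel

# Illustrative per-summary accuracy-of-assessment scores (0/1) for the two
# explanation methods; in the study these come from the testers' feedback.
acc_gr       = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0]
acc_gradpath = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1]

stat, p_value = brunnermunzel(acc_gr, acc_gradpath)
print(f"BM statistic = {stat:.3f}, p-value = {p_value:.4f}")
# A p-value below alpha = 0.05 indicates that the observed difference is
# unlikely to be due to random variation.
```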
The metric Time is not statistically significant for this dataset, which could be due to the more complex nature of the dataset. The significance test also revealed that the metric Acc, i.e. Accuracy rate of users assessing the correctness of predictions, can be a good measurement for human evaluation of plausibility. As discussed 12113 helpExpl Acc Confidence Time Kinship In-House 0.0000 0.0000 0.4346 0.0000 Kinship AMT 0.0299 0.0121 0.4335 0.0007 CoDEx AMT 0.0391 0.0000 0.9189 0.1648 Table 1: P-values of the BM test: the smaller the value, the more significant the difference between the human feedback on the two explanation methods. in Sec. 4.3, the metric Acc is based on well-defined ground truth, correctness of predictions, and thus can be more resistant against the diversity of the testers (e.g. fast or slow thinking mode). Moreover, the experimental results also show the metric confidence might not be a good measurement in the current question design. As there is an option of “not sure” for the question of “Is the predicted triple correct?”, the testers intended to assess when they are confident of the answer. For future studies, we would recommend excluding the “not sure” option. Overall, the human evaluation results lead us to conclude that GradPath can help testers more accurately assess the correctness of predictions. 6 Conclusion Explanations for AI should be plausible for humans. To this end, we propose a novel explanation method, GradPath, to improve plausibility for KGC. Moreover, a comprehensive human-centric evaluation framework is introduced to evaluate plausibility reliably. We evaluate our GradPath method with three human evaluations using two benchmark datasets. The evaluation results confirm that GradPath indeed provides more plausible explanations than previous methods. One future direction can be to collect a new KGC dataset with the goal of exploring XAI KGC for users, as we realize in the human evaluations that datasets should exhibit a high level of human understandability to enable meaningful human evaluations, which is an area where existing KGC datasets may have limitations (see Sec. 7 and Appendix A.4). 7 Limitations Explanation methods involve a trade-off between faithfulness and plausibility. Faithfulness refers to the degree to which an explanation method truly represents the reasoning process of an AI model. Plausibility considers how well a human user can understand and use an explanation. There is a natural trade-off because a fully faithful explanation is not human-understandable (e.g., providing the weights and architecture of a neural network as an explanation is fully faithful, but humans cannot understand it). Here, we specifically focus on increasing plausibility. Therefore, our method may lose some faithfulness compared to other KGC explanation methods. Users of our method for real-world applications should be aware of this and determine whether this is an acceptable trade-off based on the given use case. In our current human evaluation studies, we have employed two benchmark datasets to assess the performance of the proposed method. We acknowledge the importance of experimenting with a broader range of datasets. However, our research questions impose conditions on the suitability of these datasets. Specifically, KGC benchmarks used in human evaluations need to be understandable to humans. 
Without this attribute, testers are unable to effectively assess predictions and explanations, which unfortunately excludes the majority of currently available KGC benchmarks. The two critical criteria that need to be fulfilled are: (1) Entities and relations must be readily understandable to humans. If the entities or relations are only numerical values or lack a concise definition, humans cannot judge whether a prediction is correct or an explanation is meaningful. (2) Relations among entities should allow human testers to engage in logical deductions, facilitating reasoning. For example, the relation types within the Kinship dataset include strong logical connections, enabling testers to infer kinship between entities based on the provided training triples. Without such logical connections, there might not be any suitable explanation that is understandable to humans for that particular knowledge graph. Based on these criteria, our review of popular benchmarks revealed only two suitable KGC benchmarks, both of which we employed in this study. To generalize our results, it would be important to extend our evaluation to additional human-centric KGC benchmarks. 8 Ethics Statement In light of the limitations mentioned above, users of our explanation method should be aware that the method may not be fully faithful due to approximations, especially when the explanation path is long. Therefore, a per-use-case estimation is required to determine if the approximation is acceptable. All 12114 interactions with human testers and data usage comply with the General Data Protection Regulation (GDPR) of the European Union. References Yasmeen Alufaisan, Laura R. Marusich, Jonathan Z. Bakdash, Yan Zhou, and Murat Kantarcioglu. 2021. Does explainable artificial intelligence improve human decision-making? In Proceedings of the AAAI Conference on Artificial Intelligence, pages 6618– 6626. Patrick Betz, Christian Meilicke, and Heiner Stuckenschmidt. 2022. Adversarial explanations for knowledge graph embeddings. In Proceedings of the 31st International Joint Conference on Artificial Intelligence, pages 2820–2826. Edgar Brunner and Ullrich Munzel. 2000. The nonparametric Behrens-Fisher problem: Asymptotic theory and a small-sample approximation. Biometrical Journal, 42(1):17–25. Ahmad Chaddad, Jihao Peng, Jian Xu, and Ahmed Bouridane. 2023. Survey of explainable AI techniques in healthcare. Sensors, 23(2). Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. ArXiv:1702.08608. Krzysztof Z. Gajos and Lena Mamykina. 2022. Do people engage cognitively with AI? Impact of AI assistance on incidental learning. In Proceedings of the 27th Annual Conference on Intelligent User Interfaces, pages 794–806. Henry Han and Xiangrong Liu. 2022. The challenges of explainable AI in biomedical data science. BMC Bioinformatics, 22. Moritz Hardt, Benjamin Recht, and Yoram Singer. 2016. Train faster, generalize better: Stability of stochastic gradient descent. In Proceedings of the 33rd International Conference on International Conference on Machine Learning, pages 1225–1234. Peter Hase and Mohit Bansal. 2020. Evaluating explainable AI: Which algorithmic explanations help users predict model behavior? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5540–5552. Geoff Hinton. 1990. Kinship data. UCI Machine Learning Repository. Robert V. Hogg, Elliot A. Tanis, and Dale L. Zimmerman. 2015. Probability and Statistical Inference. Pearson. 
Alon Jacovi and Yoav Goldberg. 2020. Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, page 4198–4205. Isaac Lage, Emily Chen, Jeffrey He, Menaka Narayanan, Been Kim, Sam Gershman, and Finale Doshi-Velez. 2019. An evaluation of the human-interpretability of explanation. ArXiv:1902.00006. Carolin Lawrence, Timo Sztyler, and Mathias Niepert. 2021. Explaining neural matrix factorization with gradient rollback. In 35th AAAI Conference on Artificial Intelligence, pages 4987–4995. Qing Lyu, Marianna Apidianaki, and Chris CallisonBurch. 2024. Towards faithful model explanation in NLP: A survey. Computational Linguistics, pages 1–70. G. A. Miller. 1956. The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2):81–97. Pouya Pezeshkpour, Yifan Tian, and Sameer Singh. 2019. Investigating robustness and interpretability of link prediction via adversarial modifications. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, pages 3336–3347. P. Jonathon Phillips, Carina Hahn, Peter Fontana, Amy Yates, Kristen K. Greene, David Broniatowski, and Mark A. Przybocki. 2021. Four Principles of Explainable Artificial Intelligence. NIST Interagency/Internal Report, National Institute of Standards and Technology. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. Why should I trust you? Explaining the predictions of any classifier. ArXiv:1602.04938. Tara Safavi and Danai Koutra. 2020. CoDEx: A comprehensive knowledge graph completion benchmark. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 8328–8350. Kacper Sokol and Peter Flach. 2020. Explainability fact sheets: A framework for systematic assessment of explainable approaches. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 56–67. Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In International Conference on Machine Learning, pages 2071– 2080. Zach Wood-Doughty, Isabel Cachola, and Mark Dredze. 2021. Faithful and plausible explanations of medical code predictions. ArXiv:2104.07894. Bishan Yang, Wen tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. ArXiv:1412.6575. Kayo Yin and Graham Neubig. 2022. Interpreting language models with contrastive explanations. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 184–198. 12115 Jianlong Zhou, Amir H. Gandomi, Fang Chen, and Andreas Holzinger. 2021. Evaluating the quality of machine learning explanations: A survey on methods and metrics. Electronics, 10(5). A Appendix A.1 Discussion of Faithfulness and Plausibility The XAI community has introduced and explored desired properties of explanations for AI. For example, Lage et al. (2019) introduced humaninterpretability. Jacovi and Goldberg (2020) highlighted faithfulness. (Sokol and Flach, 2020) suggested fact sheets to characterise XAI systems. Phillips et al. (2021) proposed XAI principles, including accuracy, meaningfulness and knowledge limits. 
Although there is not yet a consensus regarding the desired properties of XAI, the two aspects, faithfulness and plausibility, have attracted significant attention in the literature. Faithfulness refers to how accurately explanations reflect a model’s prediction process (Ribeiro et al., 2016; Jacovi and Goldberg, 2020). Plausibility is how convincing explanations are for human users (Jacovi and Goldberg, 2020). From the descriptions, it is clear that faithfulness demands correct reflection of the model’s reasoning process, while plausibility needs to align with human reasoning. Faithfulness does not guarantee plausibility, and vice versa (Lyu et al., 2024; Jacovi and Goldberg, 2020). Especially, when the model’s reasoning process deviates from human’s, XAI approaches must navigate a trade-off between faithfulness and plausibility. A.2 Significance Testing for Human Evaluation Given that human evaluations often recruit a limited number of testers for each setting due to high costs, it becomes imperative to use significance testing to demonstrate that observed differences attribute to fixed effects rather than random variations. The commonly used significance tests include the t-test and its variants, the Wilcoxon test, the Mann-Whitney U test, and the Brunner-Munzel test (Hogg et al., 2015; Brunner and Munzel, 2000). When choosing statistical tests for human feedback data, it is crucial to consider the limitations and assumptions inherent in these tests to derive robust conclusions. For instance, the assumption of normally distributed data is standard in various t-tests, but this assumption may not hold in Likert scale data. Additionally, the two-sample independent t-test assumes equal population variances, whereas Welch’s t-test does not require such equality. Therefore, it is essential to evaluate the applicability of testing methods to ensure the significance of observed differences in the data. We selected the Brunner-Munzel test in our experiments because it does not assume normality or equal population variances, making it robust and applicable to a broader range of conditions. A.3 Confounders Besides the XAI methods themselves, other factors can potentially influence human evaluation. First, the engagement of testers will impact the evaluation. Some testers may be highly engaged and think slowly, while others might not exhibit the same level of attentiveness. During experiments, we found that checkpoint questions are an effective way to identify well-engaged testers, particularly when using crowdsourcing platforms like Amazon Mechanical Turk (AMT). Moreover, the engagement level of a tester changes over time and often gradually decreases. Therefore, it is important to limit the number of predictions in the evaluation to maintain high tester engagement. Additionally, some predictions and explanations can be more challenging, requiring testers to take more time for assessment. To minimize deviations caused by varying difficulty levels of predictions, all testers should evaluate a similar set of predictions in a similar order. A.4 Human-Understandability of KGC Benchmark Data The KGC benchmark data used in human evaluations must be understandable to testers. Otherwise, they would have no means to assess predictions and explanations. Additionally, testers recruited for human evaluations are often laypeople, not professionals in a particular field. Therefore, plain datasets without domain-specific knowledge (such as biology, chemistry, or healthcare) are preferred. 
Unfortunately, many publicly available KGC datasets are not designed for human understanding. All datasets that we believe human understandable can be found in Table 2. Here, our focus primarily lies in ensuring that lay testers can understand the triples. However, after experimental analysis with the Kinship and CoDEx datasets, we realized that merely understanding individual triples is insuffi12116 Dataset Description Entities Relations Triples Example Triples Kinship Relations among family members 24 12 112 (Charlotte, niece, Arthur) (Christopher, father, Victoria) Countries Geographical relations of countries 271 2 1,159 (western_africa, locatedin, africa) (slovakia, neighbor, austria) MovieLens User-movies networks 2625 5 100,000 (User757, 3, Transformers) (User943, 1, Star Trek IV: The Voyage Home) CoDEx Relations from wiki 2034 42 36543 (David Evans, instrument, guitar) (Gustav Struve, citizenship, Germany) YAGO310 Person relations from wiki 123,143 37 1,089,040 (Glencore, isLocatedIn, Rotterdam) (Ambareesh, isPoliticianOf, India) Table 2: Datasets with human understandable triples for human evaluation of explainable KGC. cient. Datasets should exhibit a high level of human understandability to enable meaningful human evaluations. To advance explainable KGC research, it is imperative to curate new KGC datasets in a human-centric manner (see also Sec. 7). A.5 Human-centric Online Evaluation Platform Following the human evaluation study outlined in Sec. 4, we developed an online system to evaluate XAI KGCs in a human centric manner. Our system considers the real needs and interests of human users in collaboration with AI, enabling us to investigate the following questions: can testers assess the correctness of a KGC prediction based on its explanations? Which explanations are helpful for testers? Fig. 4 illustrates how the evaluation platform collects feedback from testers. Our system works as an online website and can easily collaborate with crowdsourcing platforms such as AMT to recruit testers and distribute evaluation tasks. A.6 Crowdworkers for Human Evaluation To get feedback from general users about their evaluations on explanations, we recruited crowdworkers from AMT. Fig. 7 presents the instructions to the testers. Privacy and data usage strictly comply with the General Data Protection Regulation (GDPR) of the European Union, and testers are informed before they participate in the evaluation. A.7 Settings of Kinship Evaluation Based on the evaluation framework designed in Sec. 4, the settings of the test with the Kinship data 1 Each tester evaluates 14 predicted triples to maintain good engagement. 2 The first two triples serve as practice to help testers understand the system and the questions. The feedback is not included in statistical analysis. 3 Half of the triples are correctly or incorrectly predicted to prevent dummy feedback. 4 Half of the triples are randomly selected for either XAI method. 5 The predicted triples are randomly shuffled. 6 The same set of predicted triples is presented to testers in the same order. Table 3: Settings of human-centric evaluation for the Kinship data. are presented in Table 3. Among the randomly selected test predictions, there are two simple predictions, such as a prediction (Colin, is_son_of, James) associated with an explanation (James, is_father_of, Colin), which serve as checkpoints to gauge tester engagement. 
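One way such checkpoint-based engagement filtering might be implemented is sketched below (the data structures and names are ours; the paper does not specify its exact implementation):

```python
def well_engaged(responses, checkpoint_ids, checkpoint_answers):
    """responses: dict mapping prediction id -> tester's answer.
    A tester is kept only if they did not give one uniform answer to every
    prediction and answered all checkpoint questions correctly."""
    answers = list(responses.values())
    uniform = len(set(answers)) == 1          # accepted/rejected everything
    failed_checkpoint = any(responses[i] != checkpoint_answers[i]
                            for i in checkpoint_ids)
    return not uniform and not failed_checkpoint

# illustrative usage: keep only well-engaged testers
all_testers = {
    "t1": {"p1": "yes", "p2": "no", "chk1": "yes"},
    "t2": {"p1": "yes", "p2": "yes", "chk1": "yes"},   # uniform -> excluded
}
kept = {t: r for t, r in all_testers.items()
        if well_engaged(r, ["chk1"], {"chk1": "yes"})}
print(kept.keys())   # only t1 remains
```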
After excluding testers who simply rejected or accepted all predictions or answered the checkpoint questions incorrectly, there were 26 well-engaged crowdsourced testers remained. A.8 Settings of CoDEx Evaluation During the test, sensitive relation types related to the person entities, such as medical conditions, religion, and ethnic group, are removed from the data to comply with the General Data Protection Regulation (GDPR) of the European Union. The settings of the human evaluation with the CoDEx data are presented in Table 4. To investigate the 12117 Figure 7: Instructions for crowdworkers recruited from Amazon Mechanical Turk (AMT) with a privacy notice that strictly adheres to General Data Protection Regulation (GDPR) of the European Union. 1 Each tester evaluate 26 predicted triples to maintain good engagement. 2 The first two triples serve as practice to help testers understand the system and questions. The feedback is not included in the statistical analysis. 3 Half of the triples are correctly or incorrectly predicted to avoid dummy feedback. 4 Half of the triples are randomly selected for either XAI method. 5 The order of the predicted triples are randomly shuffled. 6 The same set of predicted triples are presented to testers in the same order. Table 4: Settings of human-centric evaluation for the CoDEx data. engagement of the testers, we set checkpoints with three simple predictions, such as a prediction (Belgium, has_diplomatic_relation_with, Malaysia) associated with an explanation (Malaysia, has_diplomatic_relation_with, Belgium). After excluding the testers who simply rejected or accepted all predictions or answered the checkpoint questions erroneously, there were 55 well-engaged crowdsourced testers left. A.9 Further Analysis of Testers’ Assessments During the crowdsourcing evaluation conducted with AMT, we detailed the options to Question 1 (Is the predicted triple correct?) to mitigate potential misunderstandings of the testers. As illustrated in Fig. 4, these options include: 1. Yes, based on the explanations, I believe the prediction is correct. 2. The explanations are not enough, but based on the information, the prediction is likely correct. 3. I can’t assess the prediction at all. 4. The explanations are not enough, but based Dataset GR GradPath Kinship 0.7756 0.6571 CoDEx 0.9773 0.8023 Table 5: Error Magnitudes in tester assessments: Measured by MAE (mean absolute error), ranging [0, 2]. on the information, the prediction is likely wrong. 5. No, based on the explanations, I believe the prediction is wrong. To gain further insights into the plausibility of the generated explanations, we conducted a more detailed analysis of testers’ assessments, besides accuracy rate. We considered nuances between the options, such as distinguishing between Options 1 and 2, and Options 4 and 5. We set a tester’s answer as 1, 0.5, 0, -0.5, -1 for the options 1 – 5, respectively. The ground truth of a prediction is set to be 1 if the prediction is correct, and -1 otherwise. We computed mean absolute error (MAE) between ground truth and testers’ answers. The results were reported in Table 5. The experiments revealed that the explanations generated by our GradPath method resulted in smaller error magnitudes in tester assessments compared to those generated by GR. The MAE-based analysis provided additional insights into correct and incorrect assessments, besides the accuracy rate reported in Sec. 5. 
Taken together, these experimental results highlighted the effectiveness of our explanations in facilitating correct assessments by testers.
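For completeness, a minimal sketch of the option-to-score mapping and MAE computation used in this analysis (the function and variable names are ours):

```python
OPTION_SCORE = {1: 1.0, 2: 0.5, 3: 0.0, 4: -0.5, 5: -1.0}

def assessment_mae(option_choices, prediction_is_correct):
    """option_choices: options (1-5) chosen by testers for a set of
    predictions; prediction_is_correct: matching list of booleans.
    Returns the mean absolute error between the mapped answers and the
    ground truth (+1 for a correct prediction, -1 otherwise), range [0, 2]."""
    errors = []
    for opt, correct in zip(option_choices, prediction_is_correct):
        truth = 1.0 if correct else -1.0
        errors.append(abs(OPTION_SCORE[opt] - truth))
    return sum(errors) / len(errors)

# illustrative usage
print(assessment_mae([1, 2, 4, 5], [True, True, False, False]))  # -> 0.25
```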
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12119–12134 August 11-16, 2024 ©2024 Association for Computational Linguistics
One Prompt To Rule Them All: LLMs for Opinion Summary Evaluation
Tejpalsingh Siledar†♣, Swaroop Nath†♣, Sri Raghava†♣, Rupasai Rangaraju†♣, Swaprava Nath♣, Pushpak Bhattacharyya♣, Suman Banerjee♠, Amey Patil♠, Sudhanshu Shekhar Singh♠, Muthusamy Chelliah♠, Nikesh Garera♠
♣Computer Science and Engineering, IIT Bombay, India, ♠Flipkart, India
{tejpalsingh, swaroopnath, sriraghava, rupasai, swaprava, pb}@cse.iitb.ac.in
Abstract
Evaluation of opinion summaries using conventional reference-based metrics often fails to provide a comprehensive assessment and exhibits limited correlation with human judgments. While Large Language Models (LLMs) have shown promise as reference-free metrics for NLG evaluation, their potential remains unexplored for opinion summary evaluation. Furthermore, the absence of sufficient opinion summary evaluation datasets hinders progress in this area. In response, we introduce the SUMMEVAL-OP dataset, encompassing 7 dimensions crucial to the evaluation of opinion summaries: fluency, coherence, relevance, faithfulness, aspect coverage, sentiment consistency, and specificity. We propose OP-I-PROMPT, a dimension-independent prompt, along with OP-PROMPTS, a dimension-dependent set of prompts for opinion summary evaluation. Our experiments demonstrate that OP-I-PROMPT emerges as a good alternative for evaluating opinion summaries, achieving an average Spearman correlation of 0.70 with human judgments, surpassing prior methodologies. Remarkably, we are the first to explore the efficacy of LLMs as evaluators, both on closed-source and open-source models, in the opinion summary evaluation domain.
1 Introduction
Opinion summarization systems predominantly use traditional metrics such as ROUGE (Lin, 2004) and BERTSCORE (Zhang et al., 2019) for automatic evaluation; however, these metrics have been shown to correlate poorly with human judgments (Shen and Wan, 2023). Moreover, they fall short of comprehensively evaluating opinion summaries. Additionally, obtaining reference-based datasets at a large scale is an expensive process.
† Equal contribution
Figure 1: G-EVAL vs. OP-I-PROMPT. On the closed-source model (ChatGPT-3.5) our OP-I-PROMPT shows comparable performance, whereas on the open-source model (Mistral-7B) our approach outperforms G-EVAL on 7 dimensions: fluency (FL), coherence (CO), relevance (RE), faithfulness (FA), aspect coverage (AC), sentiment consistency (SC), and specificity (SP). Check Figure 4 for more details.
Recently, Large Language Models (LLMs) have been utilized as reference-free evaluators for Natural Language Generation (NLG) outputs (Fu et al., 2023; Chiang and Lee, 2023a,b; Wang et al., 2023; Liu et al., 2023). The idea is to prompt a powerful LLM such as ChatGPT-3.5/GPT-4 to evaluate an output on certain criteria. However, the suitability of such evaluators has not been explored at all for opinion summaries. Moreover, these approaches have been tested only on closed-source models (ChatGPT-3.5/GPT-4), primarily because of the limitations of open-source models in following instructions and producing the desired output (Chiang and Lee, 2023b). To this end, we first create SUMMEVAL-OP, a reference-free opinion summarization dataset covering 7 dimensions, for the e-commerce domain. Next, we present OP-I-PROMPT and OP-PROMPTS tailored for opinion summary evaluation.
We investigate their suitability to both closed-source and open-source models. Experiments reveal that OPI-PROMPT emerges as a good alternative for evaluating opinion summaries across all 7 dimensions. Our contributions are: 1. SUMMEVAL-OP1, an opinion summary evaluation benchmark dataset, consisting of a total of 2, 912 summary annotations, assessing 13 opinion summaries for 32 products from the Amazon test set. The evaluation covers 7 dimensionsfluency, coherence, relevance, faithfulness, aspect coverage, sentiment consistency, and specificity related to the evaluation of opinion summaries (Section 4). 2. OP-I-PROMPT, a dimension-independent prompt and OP-PROMPTS, a dimensiondependent set of prompts, enabling opinion summary evaluation for all the 7 dimensions. Experiments indicate that the OP-I-PROMPT generally outperforms existing approaches on both closed-source and open-source models by 9% on average in correlation with human judgments (Figure 1, Section 3). To the best of our knowledge we are the first to test the applicability of different prompt approaches on open-source LLMs. 3. Benchmarking of 9 recent LLMs (closed and open-source) on the aforementioned 7 dimensions for the task of opinion summarization, which to the best of our knowledge is first of its kind (Table 4, Section 6). 4. Detailed analysis, comparing an open-source LLM against a closed-source LLM acting as evaluators for automatic evaluation of opinion summaries on 7 dimensions. Analysis indicates that OP-I-PROMPT emerges as a good alternative for evaluating opinion summaries showing a high correlation (spearman correlation of 0.70 on average) with humans when compared with alternatives (Section 6). 1https://github.com/tjsiledar/SummEval-OP 2 Related Work LLM-based Evaluators. Fu et al. (2023) introduced GPTScore that operates on the premise that a generative pre-training model (e.g. GPT-3) is likely to assign a higher probability to the generation of high-quality text in line with provided instructions and context. Chiang and Lee (2023a) were the first to explore LLMs for evaluation. Chiang and Lee (2023b) provide concrete guidelines that improve ChatGPT’s correlation with humans. Wang et al. (2023) conducted an initial survey exploring the utilization of ChatGPT as an NLG evaluator. Kocmi and Federmann (2023) used GPT models for evaluating machine learning tasks. Liu et al. (2023) introduced G-Eval, a framework for evaluation of NLG outputs using Chain of Thought (CoT) (Wei et al., 2023) and assigning weights to a predetermined set of integer scores based on their generation probabilities from GPT-3/4. Chen et al. (2023) were the first to investigate approaches to referencefree NLG evaluation using LLMs, finding that an explicit score generated by ChatGPT is the most effective and stable approach. Zheng et al. (2023) show that strong LLMs such as GPT-4 achieve a similar level of agreement to that of humans and hence can be used to approximate human preferences. Our work investigates two prompt strategies and tests the applicability of different prompt approaches on closed-source and open-source LLMs for opinion summary evaluation for 7 dimensions. Opinion Summary Evaluation Benchmark. (Shen and Wan, 2023) created the OPINSUMMEVAL dataset, utilizing the Yelp test set (Chu and Liu, 2019), annotating for 4 dimensions relevant to opinion summary evaluation. Our work enhances this effort by introducing SUMMEVAL-OP, which focuses on the e-commerce domain, constructed using the Amazon test set (Bražinskas et al., 2020). 
Additionally, we collect annotations for 7 dimensions on the recent LLM summaries, subsequently establishing benchmarks for comparison. Opinion Summarization Opinion summarization aims to summarize opinions into concise summaries (Wang and Ling, 2016; Siledar et al., 2023b). Recent approaches use unsupervised methods (Chu and Liu, 2019; Bražinskas et al., 2020), or creating synthetic datasets with pseudo-summaries (Bražinskas et al., 2020; Amplayo and Lapata, 2020; Amplayo et al., 2021; Elsahar et al., 2021; Im et al., 2021; Siledar et al., 2023a, 2024). Notably, Am12120 playo et al. (2021) and Im et al. (2021) advanced these methods by generating content plans and using multimodal inputs, respectively. Siledar et al. (2023a) used lexical and semantic similarities to generate synthetic data. In this work, we benchmark the recent LLMs for the task of opinion summarization. 3 Methodology We describe our dimension independent and dependent prompts and the model scoring function. 3.1 Prompt Approaches Figure 2 shows the different prompt approaches for evaluating opinion summaries. In general, the prompts include the following 3 components(1) Task Description: Defines the task that the LLM will be performing. In our case, the task is to evaluate a summary corresponding to a set of reviews on a given metric/dimension. (2) Evaluation Criteria: Defines the criteria that will be used to perform the task. In our case, the task being opinion summary evaluation, the criteria is to assign a score (1 −5) for a certain metric/dimension depending on the extent to which the summary adheres to it. (3) Evaluation Steps: This comprises the steps that the LLM must take to correctly perform the described task. In our case, it contains the steps that the LLM should follow to evaluate a certain metric/dimension. We propose two prompt approaches: OP-I-PROMPT is a metric-independent opinion summary evaluation prompt. Here we split the Evaluation Criteria to create a new component Metric consisting only the evaluation dimension. All the remaining components i.e. Task Description, Evaluation Criteria, and Evaluation Steps are crafted in such a way that they are applicable in general to any opinion summary evaluation dimension. This benefits us in the following way: (a) we have a metric independent prompt that can now evaluate any metric/dimension just by replacing with the desired definition of the dimension within the Metric block (b) the remaining components, crafted specifically keeping the task in mind, ensures that the evaluation by LLM takes place as defined by us. OP-PROMPTS is a set of metric-dependent prompts. We specifically handcrafted these prompts for each of the 7 evaluation dimensions. Although this ensures that the evaluation happens exactly in the way we define, this requires a certain level of expertise in the evaluation domain and prompting. This could be seen as a much stricter version of the prompt compared to OP-I-PROMPT where the prompt is suited to any evaluation dimension which is not the case here. A prompt defined for a certain dimension could not be utilized for any other dimension. In contrast, G-EVAL (Liu et al., 2023) used auto chain-of-thoughts (Wei et al., 2022) by using Task Description and Evaluation Criteria to automatically generate the Evaluation Steps. Finally, all the components together constitute the G-EVAL prompt that is used by an LLM to evaluate summaries. 
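To make the contrast concrete, below is a schematic of an OP-I-PROMPT-style metric-independent template: the task description, evaluation criteria, and evaluation steps stay fixed, and only the metric block is swapped per dimension. The wording is illustrative and is not the exact prompt used in the paper.

```python
# Schematic, metric-independent prompt template in the spirit of OP-I-PROMPT.
OP_I_TEMPLATE = """Task Description:
You will be given a set of customer reviews and one summary of those
reviews. Your task is to rate the summary on one metric.

Metric:
{metric_name}: {metric_definition}

Evaluation Criteria:
Assign a score from 1 to 5 (higher is better) reflecting how well the
summary adheres to the metric, judging the summary only against the reviews.

Evaluation Steps:
1. Read the reviews and the summary carefully.
2. Restate the metric and note where the summary satisfies or violates it.
3. Conclude with a single integer score from 1 to 5.

Reviews:
{reviews}

Summary:
{summary}
"""

def build_prompt(metric_name, metric_definition, reviews, summary):
    return OP_I_TEMPLATE.format(metric_name=metric_name,
                                metric_definition=metric_definition,
                                reviews="\n".join(reviews),
                                summary=summary)
```

For a given dimension, only metric_name and metric_definition change (e.g., the aspect coverage definition in Sec. 4.2), while the rest of the template stays fixed.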
Our work investigates the applicability of all these prompts to both closed-source and opensource models for evaluating opinion summaries. 3.2 Prompt Design Consideration The design of our prompts was based on the intuition that LLMs would produce improved responses when prompted to justify their evaluations. Our approach ensures that the response reiterates the evaluation metric, highlights both strengths and shortcomings and concludes with an evaluation score based on the criteria outlined in the prompt. 3.3 Scoring Function Liu et al. (2023) pointed out the limitation of LLM outputting an integer score and proposed using a weighted average of the scores as the LLMs output, where the weights are the probabilities of the corresponding score. Formally, say, the scoring is scheme is from {s1, ..., sj}, the probability of each score p(sk) is calculated by an LLM and the final score o is computed as: o = j X k=1 p(sk) × sk (1) p(sk) for an input k is estimated through an LLM by sampling n outputs. In which case, the scoring function just translates to taking a mean over the n outputs. We ensure that n is large (∼100) to get a reliable estimate of the probabilities. 12121 Figure 2: Comparison of Prompt Approaches. G-EVAL PROMPTS first generates the Evaluation Steps using Task Description and Evaluation Criteria in Chain-of-Thought fashion. Finally the full prompt is used to evaluate the opinion summaries. In contrast, our OP-I-PROMPT is simpler and has Task Description, Evaluation Criteria, and Evaluation Steps fixed for a dimension/metric independent evaluation. Here, only the Metric part needs to be changed for evaluating any dimension/metric. Finally OP-PROMPTS are dimension/metric dependent prompts that needs to be specifically crafted for each dimension/metric. 3.4 Evaluation Approach For each product di in dataset D, i ∈{1, ..., Z} we have N opinion summaries from different models. Let sij denote the jth summary of the product di, Mm denote the mth evaluation metric, and K denote the correlation measure. Bhandari et al. (2020) defines the summary-level correlation as: R(a, b) = 1 Z X i K([Ma(si1), ..., Ma(siN )], [Mb(si1), ..., Mb(siN )]) (2) 4 SUMMEVAL-OP Benchmark Dataset We created the SUMMEVAL-OP benchmark dataset for evaluating the opinion summaries on 7 dimensions. In this section, we discuss the dataset used, opinion summary evaluation metrics, annotation details, and its analysis. 4.1 Dataset We utilized the Amazon test set (He and McAuley, 2016; Bražinskas et al., 2020), comprising of reviews from 4 domains: electronics, home & kitchen, personal care, and clothing, shoes & jewelry. The test set contained a total of 32 products, each with 3 human-annotated abstractive summaries and 8 reviews per product. The reviews and summaries are all in English. For our use, we needed one human reference summary per product which we obtained by randomly selecting one of the summaries out of the 3 for each product. We do not directly consider only one of the human summaries as this would bias the summaries to a single person. 4.2 Opinion Summarization Metrics The evaluation of opinion summaries focused on the following 7 dimensions: 12122 1. fluency (FL)- The quality of summary in terms of grammar, spelling, punctuation, capitalization, word choice, and sentence structure and should contain no errors. The summary should be easy to read, follow, comprehend and should contain no errors. Annotators received specific guidelines on how to penalize summaries based on fluency levels. 2. 
coherence (CO)- The collective quality of all sentences. The summary should be wellstructured and well-organized. The summary should not just be a heap of related information, but should build from sentence to a coherent body of information. 3. relevance (RE)- The summary should not contain opinions that are either not consensus or important. The summary should include only important opinions from the reviews. Annotators were instructed to penalize summaries if they contained redundancies and excess/unimportant information. 4. faithfulness (FA)- Every piece of information mentioned in the summary should be verifiable/supported/inferred from the reviews only. Summaries should be penalized if any piece of information is not verifiable/supported/inferred from the reviews or if the summary overgeneralizes something. 5. aspect coverage (AC)- The summary should cover all the aspects that are majorly being discussed in the reviews. Summaries should be penalized if they miss out on an aspect that was majorly being discussed in the reviews and awarded if it covers all. 6. sentiment consistency (SC)- All the aspects being discussed in the summary should accurately reflect the consensus sentiment of the corresponding aspects from the reviews. Summaries should be penalized if they do not cover accurately the sentiment regarding any aspect within the summary. 7. specificity (SP)- The summary should avoid containing generic opinions. All the opinions within the summary should contain detailed and specific information about the consensus opinions. Summaries should be penalized for missing out details and should be awarded if they are specific. 4.3 Annotation Details For creating the SUMMEVAL-OP dataset, annotations were collected for a total of 13 abstractive summaries per product across 7 dimensions for 32 products from the Amazon test set. The 13 summaries comprised of 1 human-annotated reference summary (mentioned in Section 4.1) and 12 different model-generated summaries (listed in Section 5.1). To ensure high quality of annotations, each summary was annotated by 3 raters for 7 dimensions, amounting to 2, 912 summary-level ratings. Raters were asked to rate the summaries on a scale from 1 to 5 (higher is better) along the 7 dimensions. Each summary for each dimension was rated by 3 raters. The overall quantity of annotations is: 3 (# of raters) x 32 (# of instances) x 13 (# of summaries) x 7 (# of dimensions) = 8, 736 ratings. We hired 3 Masters’ students with experience in opinion summarization as opposed to crowd workers for the following reasons: (a) Gillick and Liu (2010) demonstrated that summary evaluations from non-experts can significantly diverge from expert annotations and may display inferior interannotator agreement, and (b) Fabbri et al. (2021) emphasized the significance of employing expert annotators to mitigate quality concerns in human ratings. Similar to Fabbri et al. (2021), we conducted the process in two rounds, to ensure highquality ratings. In Round II, ratings where the scores of any rater differed from any other rater by 2 or more points were re-evaluated. The reevaluation was done through a discussion between the annotators such that ratings from all 3 differ by at most 1. We asked the raters to be critical and discuss the ratings during re-evaluation. We hired raters who have written papers on opinion summarization (1) or are working in the opinion summarization domain (2). These were male Masters’ students aged 21-30. 
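For reference, the seven dimension definitions above can be collected into a small configuration object and reused with the metric-independent prompt sketch from Section 3.1. The abridged wording and the dictionary form below are ours; the full annotator guidelines are those given in Section 4.2 and Appendix B.

```python
# Abridged (and therefore only illustrative) versions of the seven evaluation
# dimensions, keyed so that each definition can be dropped into the Metric block
# of the OP-I prompt builder sketched earlier.
DIMENSIONS = {
    "fluency": "Grammar, spelling, punctuation, word choice and sentence structure; "
               "the summary should be easy to read and contain no errors.",
    "coherence": "Sentences should build into a well-structured, coherent body of "
                 "information rather than a heap of related facts.",
    "relevance": "Include only important, consensus opinions; penalize redundancy "
                 "and unimportant information.",
    "faithfulness": "Every piece of information must be verifiable, supported or "
                    "inferable from the reviews; penalize overgeneralization.",
    "aspect coverage": "Cover all aspects that are majorly discussed in the reviews.",
    "sentiment consistency": "Aspects in the summary should reflect the consensus "
                             "sentiment of the corresponding aspects in the reviews.",
    "specificity": "Avoid generic opinions; opinions should be detailed and specific.",
}

# Example: one OP-I prompt per dimension for a given (reviews, summary) pair.
# prompts = {name: build_op_i_prompt(name, definition, reviews, summary)
#            for name, definition in DIMENSIONS.items()}
```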
They were provided with detailed guidelines for evaluating summaries on the 7 dimensions. All 3 raters received stipends suitable for the tasks. Models associated with summaries were not revealed to the raters to remove any bias. Check Appendix B. 4.4 Annotation Analysis We evaluated the inter-rater agreement for the 3 raters using Krippendorff’s alpha coefficient (α) (Krippendorff, 2011). For Round-I, we found 12123 Round-I ↑ Round-II ↑ fluency 0.55 0.84 coherence 0.43 0.73 relevance 0.50 0.79 faithfulness 0.63 0.86 aspect coverage 0.64 0.82 sentiment consistency 0.41 0.78 specificity 0.34 0.76 AVG 0.50 0.80 Table 1: Krippendorff’s alpha coefficient (α) for Round-I and Round-II on 7 dimensions. As expected, we see an improvement in Round-II coefficient scores. Figure 3: Ratings Distribution. We plot the average frequency of scores obtained by human raters across 7 dimensions. A score of 4 or 5 is mostly preferred. the coefficient to be 0.50 indicating moderate aggrement (0.41 ≤α ≤0.60). For Round-II, the coefficient increased to 0.80, indicating substantial agreement (0.61 ≤α ≤0.80). We report the dimension-wise agreement scores for both rounds in Table 1. We observe that for both Round-I and Round-II, faithfulness and aspect coverage score higher than others. This is mostly because faithfulness and aspect coverage could be identified by cross-examining with the reviews. After Round-II, coherence and specificity are the most disagreed upon between raters. Figure 3 shows the average frequency of assigning a particular score by human raters for 7 dimensions. We make some key observations: (a) a score of 4 or 5 is mostly preferred. This could be attributed to the fact that most of the models are LLMs which are doing pretty well for summary generation tasks. (b) for fluency, coherence, and faithfulness a score of 5 dominates. This indicates that the LLMs are doing good in terms of these dimensions. (c) for relevance, aspect coverage, sentiment consistency, and specificity raters majorly prefer a score of 4. 5 Experiments We discuss the available benchmark dataset for opinion summary evaluation, the summarization models used for opinion summary generation, baseline metrics, and the implementation details. 5.1 Summarization Models Pre-LLMs: For the Pre-LLMs, we obtain the publicly available summaries for the Amazon test set of these models. These models were trained in a self-supervised manner using only reviews data. (1) PlanSum (Amplayo et al., 2021) uses content plans to create relevant review-summary pairs. The content plans take the form of aspect and sentiment distributions which are used along with input reviews for generating summaries. (2) MultimodalSum (Im et al., 2021) uses non-text data such as image and metadata along with reviews to generate opinion summaries. It uses a separate encoder for each modality and uses synthetic datasets to train the model in an end-to-end fashion. (3) Siledar et al. (2023a) uses lexical and semantic similarities to create a highly relevant synthetic dataset of reviewsummary pairs. This is then used to fine-tune any pre-trained language model for generating opinion summaries. (hereby referred to as LS-Sum-G). LLMs: For the LLMs, we use simple prompts2 to generate opinion summaries. These models were not specifically fine-tuned for opinion summarization. We use the HuggingFace library (Wolf et al., 2020) to access these models. (1) ChatGPT-3.5 and GPT-4 (OpenAI, 2023) are closed-source models from OpenAI optimized for dialog. 
We use the gpt-3.5-turbo-0125 and gpt-4-0125-preview versions. (2) LLaMA2-7B and LLaMA2-13B (Touvron et al., 2023) are opensource fine-tuned model from Meta with 7B and 13B parameters respectively. They were trained autoregressively using around 2T tokens. We use the chat version: meta-llama/Llama-2-7b-chat-hf model and meta-llama/Llama-2-13b-chat-hf from the HuggingFace library. (3) Mistral-7B (Jiang et al., 2023) is a 7B instruction-tuned LLM 2Check Appendix C.4 for the prompt 12124 FL ↑ CO ↑ RE ↑ FA ↑ AC ↑ SC ↑ SP ↑ ρ τ ρ τ ρ τ ρ τ ρ τ ρ τ ρ τ SUMMEVAL-OP (Ours) HUMANS 0.80 0.77 0.81 0.76 0.91 0.86 0.89 0.85 0.93 0.87 0.91 0.85 0.92 0.87 ROUGE-1 −0.36 −0.28 −0.30 −0.24 −0.31 −0.23 −0.35 −0.26 −0.44 −0.32 −0.38 −0.29 −0.30 −0.23 ROUGE-2 −0.23 −0.18 −0.14 −0.10 −0.17 −0.12 −0.21 −0.16 −0.26 −0.19 −0.24 −0.18 −0.14 −0.09 ROUGE-L −0.39 −0.32 −0.30 −0.23 −0.34 −0.25 −0.40 −0.30 −0.51 −0.37 −0.45 −0.33 −0.38 −0.27 BERTSCORE −0.32 −0.27 −0.28 −0.22 −0.29 −0.22 −0.34 −0.26 −0.51 −0.43 −0.41 −0.33 −0.37 −0.28 BARTSCORE −0.19 −0.15 −0.19 −0.14 −0.29 −0.22 −0.33 −0.25 −0.45 −0.35 −0.37 −0.28 −0.36 −0.27 SUMMAC 0.23 0.20 0.18 0.14 0.30 0.25 0.25 0.21 0.24 0.19 0.25 0.20 0.26 0.21 UNIEVAL 0.36 0.28 0.52 0.42 0.33 0.25 0.17 0.14 − − − − − − G-EVAL-3.5 0.63 0.55 0.59 0.49 0.68 0.56 0.70 0.58 0.79 0.67 0.73 0.61 0.75 0.63 OP-I-GPT-3.5 0.60 0.51 0.61 0.51 0.69 0.56 0.71 0.59 0.80 0.68 0.73 0.61 0.74 0.61 G-EVAL-MISTRAL 0.50 0.43 0.54 0.45 0.52 0.42 0.54 0.44 0.61 0.49 0.55 0.46 0.62 0.50 OP-MISTRAL 0.38 0.32 0.58 0.47 0.56 0.45 0.57 0.46 0.80 0.67 0.60 0.49 0.75 0.62 OP-I-MISTRAL 0.54 0.45 0.58 0.47 0.59 0.47 0.63∗ 0.51∗ 0.82∗ 0.70∗ 0.73∗ 0.61∗ 0.71∗ 0.58∗ Table 2: Spearman (ρ) and Kendall Tau (τ) correlations at summary-level on 7 dimensions for the SUMMEVAL-OP dataset (Round-I). For closed-source, OP-I-PROMPT performs comparably to G-EVAL, whereas for open-source it outperforms alternatives. ∗represents significant performance (p-value < 0.05) to G-EVAL-MISTRAL computed using Mann-Whitney U Test. HUMANS- averaged correlation of each annotator with the overall averaged ratings. G-EVAL Response <score>5</score> Explanation: The summary covers all the major aspects discussed in the reviews, including the comfort and break-in period of the boots. It also mentions that the boots take a little while to break in, which is a common theme among the reviews. Therefore, the summary receives a high rating for its aspect coverage. OP-I-PROMPT Response The metric for evaluation is Aspect Coverage. The summary should cover all the aspects that are majorly being discussed in the reviews. The reviews mention the following aspects: comfort and break-in period. The summary states "I love Frye boots. They are the most comfortable boots I have ever worn." This covers the comfort aspect. The summary also states "They do take a little while to break in, but they are worth the wait." This covers the break-in period aspect. However, the summary does not mention anything about the durability, fit and appearance aspects. Since these aspects were majorly being discussed in the reviews, the summary does not follow the metric completely. Score- <score>2</score> The summary only follows the metric to a limited extent while generating the summary from the reviews. Table 3: Comparative Analysis. Sample responses generated by our OP-I-PROMPT compared to G-EVAL for evaluating a sample summary for the dimension aspect coverage. 
On average, the scores assigned for this summary by annotators, G-EVAL, and OP-IPROMPT are 2.33, 4.09, and 2.72 respectively. created by MistralAI. We use the instruct version: mistralai/Mistral-7B-Instruct-v0.2 model. (4) Vicuna-7B and Vicuna-13B (Chiang et al., 2023) are open-source 7B and 13B parameter chat models trained by fine-tuning LLaMA2 on 125K user-shared conversations collected from ShareGPT (ShareGPT). We use the: lmsys/vicuna-7b-v1.5 model and lmsys/vicuna-13b-v1.5 model. Method FL ↑ CO ↑ RE ↑ FA ↑ AC ↑ SC ↑ SP ↑ Human Summaries 4.39 4.41 3.78 3.98 3.54 3.71 3.66 Pre-LLMs PlanSum 1.86 1.94 1.60 1.38 1.52 1.59 1.56 MultimodalSum 4.62 4.09 2.63 2.27 2.18 2.76 2.43 LS-Sum-G 4.76 4.40 2.87 2.74 2.32 3.03 2.69 LLMs ChatGPT-3.5 4.89 4.58 4.25 4.71 4.22 4.16 3.96 GPT-4 5.00 4.91 3.52 4.96 4.93 4.83 4.57 LLaMA2-7B 4.79 4.34 3.77 4.49 3.67 3.79 3.46 LLaMA2-13B 4.87 4.49 4.25 4.62 4.02 4.00 3.94 Mistral-7B 4.86 4.60 4.33 4.66 4.56 4.35 4.25 Vicuna-7B 4.83 4.23 3.92 4.35 3.96 3.92 3.67 Vicuna-13B 4.87 4.41 4.09 4.43 4.03 4.00 3.77 Solar-10.7B 4.89 4.73 4.20 4.72 4.50 4.56 4.35 Zephyr-7B 4.89 4.36 4.08 4.54 4.18 3.95 3.83 Table 4: Model-wise averaged annotator ratings of opinion summaries along 7 dimensions (Round-II). Best scores are in bold, second-best are underlined. (5) Solar-10.7B (Kim et al., 2023) is an LLM with 10.7B parameters, showing remarkable performance for models with parameters under 30B. We use the version: upstage/SOLAR-10.7B-Instruct-v1.0 model. (6) Zephyr-7B (Tunstall et al., 2023) is an open-sourced fine-tuned version of mistralai/Mistral-7B-v0.1 that was trained on a mix of publicly available, synthetic datasets using Direct Preference Optimization (DPO) (Rafailov et al., 2023). We use the beta version: HuggingFaceH4/zephyr-7b-beta model. 5.2 Baselines Following baseline metrics are used: ROUGE{1,2,L} score (Lin, 2004), BERTSCORE (Zhang et al., 2019), BARTSCORE (Yuan et al., 2021), 12125 Figure 4: Spearman correlation scores at different number of output generations (n) for the 7 dimensions. G-EVAL-3.5 and OP-I-GPT-3.5 use the G-EVAL and OP-I-PROMPT respectively, with closed-source ChatGPT-3.5 as their LLM. G-EVAL-MISTRAL, OP-I-MISTRAL, and OP-MISTRAL use the G-EVAL, OP-I-PROMPT, and OP-PROMPTS respectively, with open-source Mistral-7B as their LLM. Generally, OP-I-PROMPT shows better relative performance on both closed-source and open-source models. SUMMAC (Laban et al., 2022), UNIEVAL (Zhong et al., 2022). We include G-EVAL (Liu et al., 2023) as our prompt-based baseline. G-EVAL3.5 and G-EVAL-MISTRAL use ChatGPT-3.5 and Mistral-7B as their LLMs. 5.3 Implementation Details For evaluation, we used Mistral-7B (mistralai/Mistral-7B-Instruct-v0.2) as our evaluator model. We chose Mistral-7B for these reasons: (a) it ranked best amongst the open-source models on the lmsys/chatbot-arena-leaderboard, (b) we found its instruction following-ness to be better than alternatives, and (c) its 7B size ensures easy replication. We set the hyperparameters to n=100, temperature=0.7 to sample multiple generations. Example prompts are in Appendix C. 6 Results and Analysis G-EVAL vs. OP-I-PROMPT vs. OP-PROMPTS. Table 2 and Table 6 report the summary-level correlation scores on the SUMMEVAL-OP and OPINSUMMEVAL dataset. In the case of closed-source models, we observe that our OP-I-GPT-3.5 outperforms or performs comparably to G-EVAL-3.5 across all dimensions on both datasets. 
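Before turning to the dimension-by-dimension comparison, the sketch below makes the evaluation pipeline of Sections 3.3 and 5.3 concrete: sample n responses from the open-source evaluator at temperature 0.7, parse the <score></score> tag from each response, and average the parsed scores. The generation arguments, sequential sampling, and score parsing shown here are our assumptions for illustration, not the authors' exact code.

```python
import re
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto")

SCORE_RE = re.compile(r"<score>\s*([1-5])\s*</score>")

def parse_score(response: str):
    """Return the integer inside the last <score>...</score> tag, or None."""
    matches = SCORE_RE.findall(response)
    return int(matches[-1]) if matches else None

def estimate_score(prompt: str, n: int = 100, temperature: float = 0.7) -> float:
    """Mean of parsed scores over n sampled generations (Eq. 1 with empirical p(s_k))."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    scores = []
    for _ in range(n):  # sequential for clarity; in practice one would batch generations
        out = model.generate(**inputs, do_sample=True, temperature=temperature,
                             max_new_tokens=512, pad_token_id=tokenizer.eos_token_id)
        text = tokenizer.decode(out[0, inputs["input_ids"].shape[1]:],
                                skip_special_tokens=True)
        s = parse_score(text)
        if s is not None:
            scores.append(s)
    return sum(scores) / len(scores) if scores else float("nan")
```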
Specifically, our OP-I-GPT-3.5 outperforms G-EVAL3.5 on all 4 dimensions for the OPINSUMMEVAL dataset, whereas for the SUMMEVAL-OP dataset, outperforms on coherence, faithfulness, and aspect coverage, performs comparably on relevance and sentiment consistency, underperforms slightly on fluency and specificity. For open-source models, overall, we observe that OP-I-MISTRAL performs the best, followed by OPMISTRAL and then G-EVAL-MISTRAL. Figure 4 shows the performance of different prompt approaches over n=100 generations for 7 dimensions. As we increase the number of generations we generally observe an improvement in the correlation. OP-I-MISTRAL shows an improvement against GEVAL-MISTRAL across all 7 dimensions and by a large margin specifically for aspect coverage, sentiment consistency, and specificity. Comparative Analysis. Table 3 shows the model responses for G-EVAL and OP-I-PROMPT using the Mistral-7B model. We observe the following: (a) in general G-EVAL erroneously assigns a higher score on average compared to the OP-I-PROMPT, and (b) OP-I-PROMPT ensures that the responses adhere to a specific structure: initially outlining what is addressed and what is absent, followed by 12126 AVG-S MW ↓ TT ↓ FL G-EVAL-MISTRAL∗ 0.48 2.9 × 10−4 2.6 × 10−4 OP-I-MISTRAL 0.38 CO G-EVAL-MISTRAL∗ 0.52 2.1 × 10−4 3.2 × 10−8 OP-I-MISTRAL 0.47 RE G-EVAL-MISTRAL 0.51 6.7 × 10−2 4.6 × 10−2 OP-I-MISTRAL 0.49 FA G-EVAL-MISTRAL 0.53 1.9 × 10−1 4.7 × 10−1 OP-I-MISTRAL 0.54 AC G-EVAL-MISTRAL 0.58 2.1 × 10−4 1.9 × 10−14 OP-I-MISTRAL∗ 0.74 SC G-EVAL-MISTRAL 0.54 2.1 × 10−4 1.4 × 10−7 OP-I-MISTRAL∗ 0.63 SP G-EVAL-MISTRAL 0.59 7.4 × 10−4 3.0 × 10−4 OP-I-MISTRAL∗ 0.63 Table 5: Significance Test. P-values computed using Mann-Whitney U Test (MW) and T-Test (TT) between the average Spearman correlation scores (AVG-S) taken over 10 independent generations from G-EVALMISTRAL and OP-I-MISTRAL. Bold for AVG-S indicates better performance, and for MW and TT indicates p-value < 0.05. ∗represents significant performance. an analysis to determine the appropriate score. In contrast, the G-EVAL responses, while mentioning various aspects covered, overlook what is missing, resulting in erroneous higher score assignments. Significance Testing. We perform significance testing using the Mann-Whitney U Test (McKnight and Najab, 2010) for comparison between OP-IMISTRAL and G-EVAL-MISTRAL. Table 2 report results for Spearman and Kendall Tau scores computed by using the scoring function with n=100. OP-I-MISTRAL significantly (p-value < 0.05) outperforms G-EVAL-MISTRAL on faithfulness, aspect coverage, sentiment consistency, and specificity. Additionally, we group the 100 generations into 10 independent groups and compute Spearman correlations for each group. Table 5 reports the Mann-Whitney U Test and T-Test pvalues and arrives at a similar observation of OPI-MISTRAL significantly outperforming on aforementioned dimensions (except faithfulness). Models for Opinion Summarization. Table 4 reports averaged annotator ratings for the 7 dimensions for each model (Refer to Figure 5 for a graphical view). Overall, GPT-4 ranks the best, followed by Solar-10.7B and Mistral-7B ranking second-best, followed by ChatGPT-3.5. As expected, Pre-LLM models are rated the worst. These are self-supervised models and do not enjoy the liberty of being trained on trillions of tokens. All the LLMs outperform human summaries. 
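To make the significance test described above reproducible in spirit, the following sketch compares two prompt approaches after splitting the n=100 generations into 10 independent groups; how the per-group score is aggregated and correlated with the human ratings is our simplifying assumption (a flat summary-level correlation rather than the per-product averaging of Eq. 2).

```python
import numpy as np
from scipy.stats import spearmanr, mannwhitneyu, ttest_ind

def group_correlations(pred_scores: np.ndarray, human_scores: np.ndarray,
                       n_groups: int = 10) -> np.ndarray:
    """pred_scores: (n_generations, n_summaries) parsed scores; human_scores: (n_summaries,).
    Split generations into groups, average within each group, and correlate with humans."""
    groups = np.array_split(pred_scores, n_groups, axis=0)
    return np.array([spearmanr(g.mean(axis=0), human_scores)[0] for g in groups])

def compare_methods(corrs_a: np.ndarray, corrs_b: np.ndarray) -> dict:
    """Mann-Whitney U and two-sample t-test p-values between two methods' correlations."""
    mw = mannwhitneyu(corrs_a, corrs_b, alternative="two-sided").pvalue
    tt = ttest_ind(corrs_a, corrs_b).pvalue
    return {"mann_whitney_p": float(mw), "t_test_p": float(tt)}
```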
However, because these summaries were written in the first person and as a review itself to cater to the needs when the test set was created, it is inconclusive if the LLMs outperform humans in general. We observe that GPT-4 model scores poorly in relevance dimension. This we figured was due to the tendency of GPT-4 model to try to cover every detail in the summary. Finally, Solar-10.7B and Mistral-7B with just 10.7B and 7B parameters respectively, outperformed ChatGPT-3.5 and comes close in performance to GPT-4. Metric Evaluation. Reference-based metrics (ROUGE 1,2,L, BERTSCORE) as expected show weak correlation with human judgments. Reference-free metrics such as BARTSCORE does very poorly, however, SUMMAC performs moderately well. UNIEVAL does well in coherence but still trails behind prompt-based approaches. To summarize, reference-based metrics are inadequate for assessing model performances in the LLMs era. How sensitive is OP-I-PROMPT? We test OP-IPROMPT for 3 definition variations of the aspect coverage dimension. We paraphrase the original definition (Section 4.2) to create 2 additional versions, ensuring the meaning is preserved (Appendix D). We let the OP-I-MISTRAL generate n=100 responses to estimate the score using the scoring function (Section 3.3). The Spearman correlations for the 3 variations are 0.82 (Table 2), 0.82, and 0.81, indicating that OP-I-PROMPT is indifferent to the variations of dimensions’ definition. 7 Conclusion & Future Work In this work, we present the SUMMEVAL-OP dataset, OP-I-PROMPT and OP-PROMPTS for opinion summary evaluation on 7 dimensions. Experimentally, we observe OP-I-PROMPT outperforms alternatives on open-source models and performs comparably better on closed-source models showing good correlations with human judgements. Some key takeaways are: (a) Prompts that do well for powerful closed-source LLMs may not work well for open-source LLMs; (b) Opinion summaries by LLMs are preferred by humans compared to reference and previous model summaries; (c) Reference-based summaries and metrics are inadequate in assessing LLM-based outputs. In the future, we plan to investigate LLMs as evaluators for performing large-scale (all reviews) and multi-source opinion summary evaluations. 12127 Limitations 1. We do not use GPT-4 for evaluation purpose due to cost constraints. The primary aim of our work was to design prompts and test their applicability to both open-source and closedsource. We use ChatGPT-3.5 as the closedsource model to perform our experiments. 2. Our OP-I-PROMPT was specifically designed to evaluate any dimension of the opinion summaries where OP-PROMPTS are dimensiondependent. However, their applicability to other tasks needs further investigation and appropriate changes to the prompt. 3. Due to the nature of the available test sets and for benchmarking the already available models, SUMMEVAL-OP considers only 8 input reviews following the literature. This we believe is a major limitation in the opinion summarization field. Datasets with a larger number of reviews need to be considered for the creation of future benchmark datasets. 4. The assessment quality of all the prompt approaches needs to be investigated for a larger amount of reviews as well. Acknowledgements We express our gratitude to the anonymous reviewers for their valuable feedback. We also extend our thanks to the annotators for their diligent and honest efforts. Ethical Considerations The SUMMEVAL-OP dataset was created using the already available Amazon test set. 
We hired 3 raters who have written papers on opinion summarization (1) or are working in the opinion summarization domain (2). These were male Masters’ students aged 21-30. All the raters received stipends suitable for the tasks. The OP-I-PROMPT and OP-PROMPTS are designed to offer automatic evaluation of opinion summaries for multiple dimensions. Its primary aim is to assist researchers, developers, and other stakeholders in accurately assessing summaries generated by NLG systems. However, there are potential risks associated with these prompts if they fail to accurately evaluate the quality of opinion summaries or exhibit a bias towards LLM-created content. We urge the research community to use these prompts with caution and check their reliability for their use cases. References Reinald Kim Amplayo, Stefanos Angelidis, and Mirella Lapata. 2021. Unsupervised opinion summarization with content planning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14):12489– 12497. Reinald Kim Amplayo and Mirella Lapata. 2020. Unsupervised opinion summarization with noising and denoising. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1934–1945, Online. Association for Computational Linguistics. Manik Bhandari, Pranav Narayan Gour, Atabak Ashfaq, Pengfei Liu, and Graham Neubig. 2020. Reevaluating evaluation in text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9347–9359, Online. Association for Computational Linguistics. Arthur Bražinskas, Mirella Lapata, and Ivan Titov. 2020. Unsupervised opinion summarization as copycatreview generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5151–5169, Online. Association for Computational Linguistics. Yi Chen, Rui Wang, Haiyun Jiang, Shuming Shi, and Ruifeng Xu. 2023. Exploring the use of large language models for reference-free text quality evaluation: An empirical study. Cheng-Han Chiang and Hung-yi Lee. 2023a. Can large language models be an alternative to human evaluations? In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15607–15631, Toronto, Canada. Association for Computational Linguistics. Cheng-Han Chiang and Hung-yi Lee. 2023b. A closer look into using large language models for automatic evaluation. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 8928– 8942, Singapore. Association for Computational Linguistics. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An opensource chatbot impressing gpt-4 with 90%* chatgpt quality. 12128 Eric Chu and Peter Liu. 2019. MeanSum: A neural model for unsupervised multi-document abstractive summarization. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 1223–1232. PMLR. Hady Elsahar, Maximin Coavoux, Jos Rozen, and Matthias Gallé. 2021. Self-supervised and controlled multi-document opinion summarization. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1646–1662, Online. Association for Computational Linguistics. Alexander R. Fabbri, Wojciech Kry´sci´nski, Bryan McCann, Caiming Xiong, Richard Socher, and Dragomir Radev. 
2021. Summeval: Re-evaluating summarization evaluation. Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. 2023. Gptscore: Evaluate as you desire. Dan Gillick and Yang Liu. 2010. Non-expert evaluation of summarization systems is risky. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon’s Mechanical Turk, pages 148–151, Los Angeles. Association for Computational Linguistics. Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. Proceedings of the 25th International Conference on World Wide Web. Jinbae Im, Moonki Kim, Hoyeop Lee, Hyunsouk Cho, and Sehee Chung. 2021. Self-supervised multimodal opinion summarization. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 388–403, Online. Association for Computational Linguistics. Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7b. Dahyun Kim, Chanjun Park, Sanghoon Kim, Wonsung Lee, Wonho Song, Yunsu Kim, Hyeonwoo Kim, Yungi Kim, Hyeonju Lee, Jihoo Kim, Changbae Ahn, Seonghoon Yang, Sukyung Lee, Hyunbyung Park, Gyoungjin Gim, Mikyoung Cha, Hwalsuk Lee, and Sunghun Kim. 2023. Solar 10.7b: Scaling large language models with simple yet effective depth upscaling. Tom Kocmi and Christian Federmann. 2023. Large language models are state-of-the-art evaluators of translation quality. In Proceedings of the 24th Annual Conference of the European Association for Machine Translation, pages 193–203, Tampere, Finland. European Association for Machine Translation. Klaus Krippendorff. 2011. Computing krippendorff’s alpha-reliability. Philippe Laban, Tobias Schnabel, Paul N. Bennett, and Marti A. Hearst. 2022. SummaC: Re-visiting NLIbased models for inconsistency detection in summarization. Transactions of the Association for Computational Linguistics, 10:163–177. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023. G-eval: NLG evaluation using gpt-4 with better human alignment. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 2511–2522, Singapore. Association for Computational Linguistics. Patrick E McKnight and Julius Najab. 2010. Mannwhitney u test. The Corsini encyclopedia of psychology, pages 1–1. OpenAI. 2023. ChatGPT (August 3 Version). https: //chat.openai.com. Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. ShareGPT. Sharegpt. https://sharegpt.com/. Accessed on February 15, 2024. Yuchen Shen and Xiaojun Wan. 2023. Opinsummeval: Revisiting automated evaluation for opinion summarization. arXiv preprint arXiv:2310.18122. Tejpalsingh Siledar, Suman Banerjee, Amey Patil, Sudhanshu Singh, Muthusamy Chelliah, Nikesh Garera, and Pushpak Bhattacharyya. 2023a. 
Synthesize, if you do not have: Effective synthetic dataset creation strategies for self-supervised opinion summarization in E-commerce. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 13480–13491, Singapore. Association for Computational Linguistics. Tejpalsingh Siledar, Jigar Makwana, and Pushpak Bhattacharyya. 2023b. Aspect-sentiment-based opinion summarization using multiple information sources. In Proceedings of the 6th Joint International Conference on Data Science & Management of Data (10th ACM IKDD CODS and 28th COMAD), CODSCOMAD ’23, page 55–61, New York, NY, USA. Association for Computing Machinery. Tejpalsingh Siledar, Rupasai Rangaraju, Sankara Sri Raghava Ravindra Muddu, Suman Banerjee, Amey Patil, Sudhanshu Shekhar Singh, Muthusamy Chelliah, Nikesh Garera, Swaprava Nath, and Pushpak Bhattacharyya. 2024. Product description and qa assisted self-supervised opinion summarization. 12129 Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and finetuned chat models. Lewis Tunstall, Edward Beeching, Nathan Lambert, Nazneen Rajani, Kashif Rasul, Younes Belkada, Shengyi Huang, Leandro von Werra, Clémentine Fourrier, Nathan Habib, Nathan Sarrazin, Omar Sanseviero, Alexander M. Rush, and Thomas Wolf. 2023. Zephyr: Direct distillation of lm alignment. Jiaan Wang, Yunlong Liang, Fandong Meng, Zengkui Sun, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, and Jie Zhou. 2023. Is ChatGPT a good NLG evaluator? a preliminary study. In Proceedings of the 4th New Frontiers in Summarization Workshop, pages 1–11, Singapore. Association for Computational Linguistics. Lu Wang and Wang Ling. 2016. Neural network-based abstract generation for opinions and arguments. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 47–57, San Diego, California. Association for Computational Linguistics. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Huai hsin Chi, F. Xia, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. ArXiv, abs/2201.11903. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-thought prompting elicits reasoning in large language models. 
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Huggingface’s transformers: State-of-the-art natural language processing. Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021. Bartscore: Evaluating generated text as text generation. In Advances in Neural Information Processing Systems, volume 34, pages 27263–27277. Curran Associates, Inc. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. ArXiv, abs/1904.09675. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2023. Judging llm-as-a-judge with mt-bench and chatbot arena. Ming Zhong, Yang Liu, Da Yin, Yuning Mao, Yizhu Jiao, Pengfei Liu, Chenguang Zhu, Heng Ji, and Jiawei Han. 2022. Towards a unified multidimensional evaluator for text generation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2023– 2038, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. A Available Benchmark Dataset OPINSUMMEVAL: (Shen and Wan, 2023) used the Yelp test set (Chu and Liu, 2019) to annotate for 4 dimensions: readability, self-coherence, aspect relevance, and sentiment consistency. The dataset contains a total of 100 products with 8 reviews and 14 different model summaries per product. Each summary was rated by 2 annotators on 4 dimensions. For consistency, we hereby refer to the abovementioned dimensions as fluency, coherence, aspect coverage, and sentiment consistency respectively, in line with our definitions. B Rater Agreement Table 7 reports pairwise root mean squared error scores for the 3 raters. For Round-I, we observe a difference of more than 1 on average. For Round-II, as expected the average difference between any two ratings come down to below 1. Table 8 reports the pairwise correlations between raters as well as the correlation between each rater and average ratings for both Round-I and Round-II. 12130 Figure 5: Performance of different models as rated by human annotators (Round-II). We observe that GPT-4 performs the best followed by Solar-10.7B and Mistral-7B. Self-supervised models perform worse. In general, all the LLMs perform better than human annotated summaries. FL ↑ CO ↑ AC ↑ SC ↑ ρ τ ρ τ ρ τ ρ τ OPINSUMMEVAL ROUGE-1† 0.08 0.06 0.11 0.09 0.14 0.11 0.00 0.00 ROUGE-2† 0.13 0.10 0.13 0.11 0.15 0.11 0.04 0.04 ROUGE-L† 0.13 0.10 0.18 0.15 0.18 0.14 0.07 0.05 BERTSCORE† 0.38 0.30 0.20 0.17 0.20 0.16 0.08 0.06 BARTSCORE† 0.42 0.33 0.35 0.29 0.28 0.22 0.41 0.34 SUMMAC† 0.06 0.05 0.02 0.01 0.07 0.06 0.20 0.17 CHATGPT-3.5† 0.47 0.42 0.28 0.25 0.34 0.30 0.37 0.33 G-EVAL-3.5† 0.41 0.36 0.29 0.26 0.27 0.23 0.38 0.34 OP-I-GPT-3.5 0.51 0.46 0.36 0.32 0.33 0.29 0.43 0.39 G-EVAL-MISTRAL 0.46 0.41 0.41 0.38 0.36 0.32 0.49 0.45 OP-MISTRAL 0.35 0.33 0.45 0.41 0.34 0.30 0.45 0.41 OP-I-MISTRAL 0.46 0.41 0.37 0.35 0.38 0.34 0.49 0.45 Table 6: Spearman (ρ) and Kendall Tau (τ) correlations at summary-level on 7 dimensions for the OPINSUMMEVAL dataset. For closed-source, OP-I-PROMPT performs comparably to G-EVAL, whereas for open-source it outperforms alternatives. 
† represents results as reported in Shen and Wan (2023) C Prompts For brevity, we provide different prompts for only a single dimension- Aspect Coverage. We will release prompts for all the dimensions across different approaches publicly. FL ↓ CO ↓ RE ↓ FA ↓ AC ↓ SC ↓ SP ↓ Round-I A1-A2 0.95 1.06 1.01 1.09 0.91 1.08 0.95 A2-A3 0.44 0.86 1.09 1.05 0.84 1.19 1.42 A1-A3 1.00 1.23 1.16 1.24 1.15 1.47 1.55 AVG-I 0.80 1.05 1.09 1.13 0.97 1.25 1.31 Round-II A1-A2 0.55 0.66 0.65 0.60 0.64 0.64 0.60 A2-A3 0.31 0.62 0.67 0.67 0.68 0.71 0.79 A1-A3 0.53 0.73 0.67 0.68 0.76 0.76 0.73 AVG-II 0.47 0.67 0.66 0.65 0.69 0.70 0.71 Table 7: Round-I and Round-II Ratings: Pairwise Root Mean Squared Error scores for 3 raters A1, A2, and A3. C.1 OP-I-PROMPT for Aspect Coverage Task Description: You will be given a set of reviews using which a summary has been generated. Your task is to evaluate the summary based on the given metric. Evaluate to which extent does the summary follows the given metric considering the reviews as the input. Use the following evaluation 12131 FL ↑ CO ↑ RE ↑ FA ↑ AC ↑ SC ↑ SP ↑ ρ τ ρ τ ρ τ ρ τ ρ τ ρ τ ρ τ Pairwise correlation among raters Round-I A1-A2 0.58 0.56 0.54 0.50 0.65 0.60 0.73 0.68 0.78 0.72 0.60 0.53 0.65 0.59 A2-A3 0.79 0.78 0.40 0.38 0.52 0.47 0.63 0.58 0.77 0.71 0.56 0.51 0.58 0.53 A1-A3 0.55 0.53 0.34 0.31 0.40 0.36 0.60 0.54 0.74 0.68 0.57 0.51 0.57 0.51 AVG-I 0.64 0.62 0.43 0.40 0.52 0.48 0.65 0.60 0.76 0.70 0.58 0.52 0.60 0.54 Pairwise correlation between raters and the overall average ratings A-A1 0.95 0.93 0.86 0.82 0.81 0.74 0.86 0.80 0.91 0.85 0.85 0.77 0.87 0.80 A-A2 0.70 0.67 0.72 0.67 0.82 0.75 0.81 0.75 0.91 0.85 0.85 0.78 0.87 0.80 A-A3 0.57 0.54 0.56 0.51 0.74 0.67 0.81 0.76 0.88 0.81 0.76 0.69 0.76 0.69 AVG-II 0.74 0.71 0.71 0.67 0.79 0.72 0.83 0.77 0.90 0.84 0.82 0.75 0.83 0.76 Pairwise correlation among raters Round-II A1-A2 0.63 0.61 0.64 0.61 0.80 0.75 0.83 0.79 0.85 0.80 0.77 0.72 0.81 0.76 A2-A3 0.85 0.84 0.59 0.56 0.78 0.73 0.77 0.73 0.83 0.78 0.77 0.73 0.81 0.77 A1-A3 0.66 0.65 0.59 0.56 0.77 0.72 0.78 0.73 0.84 0.79 0.79 0.74 0.82 0.78 AVG-I 0.71 0.70 0.61 0.58 0.78 0.73 0.79 0.75 0.84 0.79 0.78 0.73 0.81 0.77 Pairwise correlation between raters and the overall average ratings A-A1 0.94 0.92 0.87 0.83 0.91 0.85 0.89 0.84 0.94 0.88 0.92 0.86 0.92 0.87 A-A2 0.76 0.74 0.80 0.75 0.91 0.86 0.87 0.82 0.93 0.88 0.91 0.85 0.92 0.87 A-A2 0.69 0.67 0.76 0.71 0.91 0.85 0.92 0.88 0.93 0.86 0.91 0.85 0.92 0.87 AVG-II 0.80 0.77 0.81 0.76 0.91 0.86 0.89 0.85 0.93 0.87 0.91 0.85 0.92 0.87 Table 8: Rater Correlations: Pairwise Spearman (ρ) and Kendall Tau (τ) correlations at summary-level for 3 raters A1, A2, and A3 along with the average of their ratings A. criteria to judge the extent to which the metric is followed. Make sure you understand the task and the following evaluation metric very clearly. Evaluation Criteria: The task is to judge the extent to which the metric is followed by the summary. Following are the scores and the evaluation criteria according to which scores must be assigned. <score>1</score> The metric is not followed at all while generating the summary from the reviews. <score>2</score> - The metric is followed only to a limited extent while generating the summary from the reviews. <score>3</score> - The metric is followed to a good extent while generating the summary from the reviews. <score>4</score> - The metric is followed mostly while generating the summary from the reviews. 
<score>5</score> - The metric is followed completely while generating the summary from the reviews. Metric: Aspect Coverage The summary should cover all the aspects that are majorly being discussed in the reviews. Summaries should be penalized if they miss out on an aspect that was majorly being discussed in the reviews and awarded if it covers all. Reviews: {} Summary: {} Evaluation Steps: Follow the following steps strictly while giving the response: 1.First write down the steps that are needed to evaluate the summary as per the metric. Reiterate what metric you 12132 will be using to evaluate the summary. 2.Give a step-by-step explanation if the summary adheres to the metric considering the reviews as the input. Stick to the metric only for evaluation. 3.Next, evaluate the extent to which the metric is followed. 4.Use the previous information to rate the summary using the evaluation criteria and assign a score within the <score></score> tags. Note: Strictly give the score within <score></score> tags only e.g Score<score>5</score>. First give a detailed explanation and then finally give a single score following the format: Score- <score>5</score> THE EVALUATION AND SCORE MUST BE ASSIGNED STRICTLY ACCORDING TO THE METRIC ONLY AND NOTHING ELSE! Response: C.2 OP-PROMPTS for Aspect Coverage Task Description: You will be given a set of reviews. You will then be given one summary written for the set of reviews. Your task is to rate the summary on one metric. Make sure you understand the following evaluation metric very clearly. Your task is to rate the summary corresponding to the given reviews on the evaluation criteria. Evaluation Criteria: Aspect Coverage The summary should cover all the aspects that are majorly being discussed in the reviews. Summaries should be penalized if they miss out on an aspect that was majorly being discussed in the reviews and awarded if it covers all. <score>1</score> - Summary does not cover any important aspects present in the reviews. <score>2</score> - Summary does not cover most of the important aspects present in the reviews. <score>3</score> - Summary covers around half of the important aspects present in the reviews. <score>4</score> - Summary covers most of the important aspects present in reviews. <score>5</score> - Summary covers all the important aspects discussed in reviews. Metric: Aspect Coverage The summary should cover all the aspects that are majorly being discussed in the reviews. Summaries should be penalized if they miss out on an aspect that was majorly being discussed in the reviews and awarded if it covers all. Reviews: {} Summary: {} Evaluation Steps: Let’s go step-by-step. Follow the following steps strictly while giving the response: 1.Identify the important aspects present in the reviews and list them with numbering 2.Identify the important aspects present in the summary and list them with numbering 3.Identify the important aspects covered by the summary that are present in the reviews and list them with numbering 4.Calculate the total number of important aspects covered by the summary that are present in the reviews 5.Calculate the total number of important aspects present in the reviews 6.Finally use the evaluation criteria to output only a single score within <score></score> tags. Note: Strictly give the score within <score></score> tags only e.g Score<score>5</score>. 
First give a detailed explanation of how much is the coverage and then 12133 finally give a single score following the format: Score- <score>5</score> Response: C.3 G-EVAL for Aspect Coverage Task Description: You will be given a set of reviews and a corresponding summary. Make sure you understand the following evaluation metric very clearly. Your task is to rate the summary corresponding to the given reviews on the evaluation criteria. Evaluation Criteria: Aspect Coverage (1-5) - The summary should cover all the aspects that are majorly being discussed in the reviews. Summaries should be penalized if they miss out on an aspect that was majorly being discussed in the reviews and awarded if it covers all. Reviews: {} Summary: {} Evaluation Steps: 1.Read through the given set of reviews carefully. 2.Compare the content of the reviews to the provided summary. 3.Evaluate whether the summary covers all the major aspects that are being discussed in the reviews. 4.Rate the summary on a scale of 1-5 based on how well it covers the aspects discussed in the reviews. 5.Provide a brief explanation for your rating, citing specific examples from the reviews and summary. Note: Strictly give the score within <score></score> tags only e.g Score: <score>5</score>. Response: C.4 Summarization Prompt Generate a summary for the following set of reviews. Generate the summary in a paragraph format. No bulletpoints or explanations needed. Just output the summary text. Reviews: {} Summary: D Dimension Definitions For ablation, we try out three different definition variations of aspect coverage. Definition 1: The summary should cover all the aspects that are majorly being discussed in the reviews. Summaries should be penalized if they miss out on an aspect that was majorly being discussed in the reviews and awarded if it covers all. Definition 2: This refers to the comprehensiveness of a summary in capturing all significant aspects discussed in reviews. A summary is evaluated based on its ability to include major topics of discussion; it is deemed deficient if it overlooks any crucial aspect and commendable if it encompasses them all. Definition 3: Aspect coverage pertains to the extent to which a summary encapsulates the key facets discussed in reviews. Summaries are evaluated based on their ability to incorporate major discussion points. They are considered deficient if they omit any critical aspect and commendable if they address them all comprehensively. 12134
2024
655
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12135–12148 August 11-16, 2024 ©2024 Association for Computational Linguistics LANDeRMT: Dectecting and Routing Language-Aware Neurons for Selectively Finetuning LLMs to Machine Translation Shaolin Zhu1†, Leiyu Pan1†, Bo Li2,3, Deyi Xiong1* 1College of Intelligence and Computing, Tianjin University, Tianjin, China 2School of Software, Tsinghua University, Beijing, China 3Baidu APP Technology and Platform R&D Department, Baidu Inc, Beijing, China {zhushaolin, lypan, dyxiong}@tju.edu.cn {li-b19}@mails.tsinghua.edu.cn Abstract Recent advancements in large language models (LLMs) have shown promising results in multilingual translation even with limited bilingual supervision. The major challenges are catastrophic forgetting and parameter interference1 for finetuning LLMs when provided parallel training data. To address these challenges, we propose LANDeRMT, a Language-Aware Neuron Detecting and Routing framework that selectively finetunes LLMs to Machine Translation with diverse translation training data. In LANDeRMT, we evaluate the awareness of neurons to MT tasks and categorize them into language-general and languagespecific neurons. This categorization enables selective parameter updates during finetuning, mitigating parameter interference and catastrophic forgetting issues. For the detected neurons, we further propose a conditional awareness-based routing mechanism to dynamically adjust language-general and languagespecific capacity within LLMs, guided by translation signals. Experimental results demonstrate that the proposed LANDeRMT is very effective in learning translation knowledge, significantly improving translation quality over various strong baselines for multiple language pairs. 1 Introduction Conventional neural machine translation (NMT) usually requires a huge amount of parallel training data (Costa-jussà et al., 2022; Fedus et al., 2022; Zhu et al., 2023, 2024b). In contrast, multilingual LLMs, e.g., BLOOM (Scao et al., 2022), LLaMA2 (Touvron et al., 2023), in spite of being trained with †Equal contribution. *Corresponding author. 1Regarding catastrophic forgetting and parameter interference, we are specifically addressing issues between languages rather than those between machine translation tasks and other tasks in this paper. mainly monolingual data, require only a few examples to demonstrate remarkable prowess in multilingual translation via in-context learning (ICL) (He et al., 2023; Lyu et al., 2023). However, such LLM-based MT exhibits a major drawback that the quality of yielded translations is highly sensitive to the provided examples in ICL (Vilar et al., 2023) and outputs might suffer from overgeneration (Bawden and Yvon, 2023). To address these issues, existing studies attempt to use various finetuning methods, such as adapterbased method (Alves et al., 2023), instructionbased tuning method (Li et al., 2023a). However, these approaches primarily focus on balancing between the original LLMs and new finetuning translation data They use only incremental data to acquire new knowledge without considering catastrophic forgetting of knowledge originally captured by LLMs (Liu et al., 2021; Shao and Feng, 2022; Huang et al., 2023). 
Many studies have shown that catastrophic forgetting indeed exists across languages as LLMs are fine-tuned on one language pair and then used to translate another language on which LLMs are not fine-tuned (Li et al., 2023b; Zhu et al., 2024a). Additionally, as LLMs are usually generally developed for multiple tasks (i.e., sharing parameters across different tasks), finetuning LLMs for MT task may cause parameter interference for other tasks (Luo et al., 2023). In Section 4.3, we find that full-parameter finetuning of LLMs cannot always improve translation quality on all language pairs. Therefore, is it possible to design a new finetuning method for LLMs, which can mitigate the issues of catastrophic forgetting and parameter interference during the finetuning process of LLMs to multilingual machine translation? In multilingual NMT, previous efforts evaluate the importance of model neurons to each language pair and only tune language-specific neurons for the current language pair during training (Xie et al., 12135 2021). Recent studies on LLM unveils that many neurons in the feed-forward networks (FFN) are only activated for specific tasks and become “dead” for irrelevant tasks (Voita et al., 2023; Conmy et al., 2023). Inspired by these studies, we propose LANDeRMT, a language-aware neuron detecting and routing framework for selectively finetuning LLMs to MT, which aims to mitigate the issues of catastrophic forgetting and parameter interference. First, we evaluate the MT awareness of each neuron in FFNs. For neurons that are related to multilingual MT tasks, we further evaluate the relevance of each neuron to each language pair. According to their MT/language “awareness/relevance”, we divide neurons into the unactivated neurons, language-general neurons and language-specific neurons. After that, we finetune LLMs on multilingual parallel training data. During finetuning, only the parameters of language-general neurons and language-specific neurons for the current language pair are tuned. This selective finetuning process can alleviate the parameter interference issue. As language-general and language-specific capacity matters for MT (Zhang et al., 2021; Koishekenov et al., 2023), we propose a conditional awareness routing mechanism to dynamically schedule language-general and languagespecific capacity across sub-layers in LLMs under the guidance of translation signals. In doing so, we can alleviate the catastrophic forgetting issue and facilitate LLMs to be adapted to MT. The main contributions of this work are summarized as follows: • We propose LANDeRMT that aims at mitigating the catastrophic forgetting and parameter interference issues for efficiently finetuning LLMs to MT. • To well schedule language-general and language-specific capacity across sub-layers in LLMs, we propose a conditional awarenessbased routing mechanism. • Experiments on ten language pairs show that our model achieves the state-of-the-art results compared to previous strong baselines and demonstrate the robustness of the proposed model in various settings. 2 Related Work LLMs, with a few examples provided via in-context learning, have demonstrated impressive capabilities in machine translation without requiring explicit supervision from parallel training data (Moslem et al., 2023; Ghazvininejad et al., 2023; Sia and Duh, 2023; Han et al., 2022). 
However, LLMs with ICL for MT are sensitive to the provided examples (Vilar et al., 2023), and the yielded translations might be overgenerated (Bawden and Yvon, 2023). Another line of research on LLMs, known as domain-adaptive pretraining, focuses on finetuning LLMs to downstream tasks (Cheng et al., 2023; Dong et al., 2023). Although these approaches have demonstrated efficacy in adapting various LLMs and result in enhanced performance on downstream tasks (Wu et al., 2023; Gupta et al., 2023; Wu et al., 2024; Zhu and Xiong, 2023), they rarely apply to multilingual generation tasks, e.g., multilingual MT. To efficiently adapt LLMs to MT, recent years have witnessed efforts on finetuning LLMs for MT (Vilar et al., 2023; Alves et al., 2023). Alves et al. (2023) show that adapter-based finetuning with LoRA (Hu et al., 2022) matches the performance of traditional finetuning while reducing the number of training parameters by a factor of 50. Li et al. (2023a) investigate multilingual generalization when finetuning LLMs. However, these studies do not explicitly overcome the catastrophic forgetting and parameter interference issues. To address these issues, our work starts by analyzing the neurons within the model and finetunes LLMs by distinguishing among neurons. Research interest in understanding the inner workings of LLMs and NMT models has been growing recently (Räuker et al., 2022; Bills et al., 2023; Garde et al., 2023). Voita et al. (2023) focus on neurons inside FFNs and find that the network is sparse and represents many discrete features in LLMs; many of the alive neurons are reserved for discrete features and act as token and n-gram detectors for different languages. In addition, previous NMT efforts evaluate the importance of neurons for each language pair so that only the language-specific neurons for the current language pair participate in training in conventional multilingual NMT (Xie et al., 2021; Patel et al., 2022). Partially motivated by these studies, we propose a language-aware neuron detecting and routing framework for selectively finetuning LLMs to MT (Figure 1: Illustration of the proposed LANDeRMT). In our method, we use awareness-based evaluation of neurons in LLMs and divide the neurons into language-general and language-specific neurons. We only update the parameters of language-general neurons and the corresponding language-specific neurons for the current language pair during training, to overcome catastrophic forgetting and parameter interference and to enhance the multilingual translation ability of LLMs. 3 Methodology The proposed LANDeRMT is illustrated in Figure 1. We first propose a method to analyze which FFN layers of LLMs have strong relevance to the source-target language pair. This allows us to concentrate exclusively on layers that are related to the MT task, hence reducing the distraction from unrelated parameters. Then, we employ Taylor Expansion (TE) (Xie et al., 2021) to evaluate the strength of awareness (relevance) of neurons at those layers to the given language pairs of the MT task.
Finally, we only route and finetune the detected language-aware neurons for the MT task. This ensures that we only need to update a small number of relevant parameters of LLMs for MT. 3.1 Detecting Language-Pair-Relevant Layers We introduce a representation analysis (RA) method to detect language-pair-relevant layers, which is based on the difference in activations between FFN layers. RA aims to measure the changes in the response of each FFN layer to the input source sentence during the LLM forward propagation process that "translates" the source sentence into the target sentence, so as to identify FFN alignment layers that are highly "activated" for the source-target language pair. For each consecutive pair of layers $i$ and $i+1$ within the LLM, we compute the activation difference $D_i$ to estimate the degree of change in information representation between these two layers: $D_i = \frac{1}{N} \sum_{n=1}^{N} A_{i,n} - \frac{1}{N} \sum_{n=1}^{N} A_{i+1,n}$ (1), where $A_{i,n}$ and $A_{i+1,n}$ represent the activation values at the $i$-th and $(i+1)$-th layers during the $n$-th forward propagation, and $N$ is the total number of forward propagations. In this manner, $D_i$ captures the extent of change in activation values between adjacent layers along the depth of the model when the input source language is translated into the target language. The most significant changes in layer representations indicate the most critical layers related to the source-target language pair. Therefore, our layer selection criterion identifies the layers with the top-$k$ $D$ values: $L_{LPR} = \arg\max_{\text{top-}k} \{D_1, D_2, \ldots, D_k\}$ (2), where $L_{LPR}$ denotes the optimal language-pair-relevant layers. 3.2 Evaluating the Language Awareness of Neurons Once we find the language-pair-relevant layers, do we need to finetune all neurons of these layers for the corresponding language pair? Our experiments show that this all-neuron-finetuning strategy does not perform as well as expected (see Section 4.4). The main reasons are twofold. First, if all parameters of the detected FFNs are updated for all language pairs, the catastrophic forgetting problem still remains (Liu et al., 2021). Second, there is no effective mechanism to overcome the parameter interference issue so as to preserve language-general and language-specific knowledge. Partially inspired by studies on importance-based neuron finetuning for NMT (Xie et al., 2021) and neuron interpretability in LLMs (Voita et al., 2023), we propose to use TE to evaluate which neurons are essential to all languages and which neurons are responsible for specific languages. We first define the awareness score $\Phi(i)$ of a neuron with respect to a certain language: $\Phi(i) = |\Delta L(h_i)|, \ i \in L_j$ (3), where $L_j$ is the $j$-th detected language-pair-relevant layer, $h_i$ is the output of neuron $i$, and $\Delta L(h_i)$ is the loss change between setting $h_i$ to 0 and keeping it at its original value. It can be transformed by TE into the following form: $|\Delta L(h_i)| = \left|\frac{\partial L}{\partial h_i} h_i\right|$ (4). That is, we estimate the loss change as the product of the gradient of the loss function with respect to the activation value and the activation value itself. The detailed proof can be found in Appendix A.2 and is similar to that of Xie et al. (2021). Then, we determine which neurons are shared across all language pairs (i.e., language-general neurons) and which neurons are only related to specific language pairs. We define $X_i$ as the vector of awareness scores of the $i$-th neuron for each language.
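Before describing how neurons are split by the variance of these scores, the PyTorch sketch below illustrates how Eqs. (1)-(4) could be implemented with forward hooks. Collecting the per-layer FFN modules, the batching, and the use of a HuggingFace-style model that returns a loss when labels are provided are all assumptions of this sketch, not the authors' released code.

```python
import torch

def detect_relevant_layers(model, batches, ffn_modules, k=4):
    """Pick the k layers with the largest adjacent activation difference (Eqs. 1-2).
    `ffn_modules` is the list of per-layer FFN sub-modules, collected in a
    model-specific way."""
    sums = [0.0] * len(ffn_modules)
    counts = [0] * len(ffn_modules)
    handles = []
    for idx, module in enumerate(ffn_modules):
        def hook(mod, inputs, output, idx=idx):
            sums[idx] += output.detach().float().mean().item()
            counts[idx] += 1
        handles.append(module.register_forward_hook(hook))
    with torch.no_grad():
        for batch in batches:                       # N forward propagations
            model(**batch)
    for h in handles:
        h.remove()
    means = [s / max(c, 1) for s, c in zip(sums, counts)]
    diffs = [means[i] - means[i + 1] for i in range(len(means) - 1)]            # Eq. (1)
    return sorted(range(len(diffs)), key=lambda i: diffs[i], reverse=True)[:k]  # Eq. (2)

def awareness_scores(model, batch, ffn_module):
    """Per-neuron awareness |dL/dh * h| on one batch for one selected layer (Eqs. 3-4)."""
    cache = {}
    def hook(mod, inputs, output):
        output.retain_grad()          # keep the gradient of this non-leaf activation
        cache["h"] = output
    handle = ffn_module.register_forward_hook(hook)
    loss = model(**batch).loss        # assumes labels are included in the batch
    loss.backward()
    handle.remove()
    h = cache["h"]
    return (h.grad * h).abs().mean(dim=(0, 1))   # average over batch and sequence dims
```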
Then, we determine which neurons are shared across all language pairs (i.e., language-general neurons) and which neurons are only related to specific language pairs. We define $X_i$ as the vector of awareness scores of the $i$-th neuron for each language. For each neuron, we calculate the variance $\sigma(X_i)$ of the awareness scores across different languages. Within a specific layer, we sort the neuron awareness scores based on their variance from the highest to the lowest. A variance threshold $\lambda(i)$ is calculated to distinguish language-general neurons from language-specific neurons as follows:

$$\lambda(i) = \mathrm{sort}(\sigma(X_i))_{\lfloor \epsilon \times p \rfloor}, \quad i \in L_j \quad (5)$$

where $p$ is the number of neurons in the layer $L_j$ and $\epsilon$ is a predefined ratio. Neurons whose language awareness score variances are below the estimated threshold $\lambda(i)$ are categorized as language-general neurons, and the others as language-specific neurons. Each detected language-specific neuron is assigned to the language with the highest awareness score. The neurons that are specific either to the source language or to the target language are aggregated as the neurons exclusive to that language pair.

Figure 2: The model architecture used for routing and training.

3.3 Routing and Finetuning
In our proposed framework, for a given bilingual dataset of a specific language pair, only the language-general and language-specific neurons of the detected FFNs for this language pair participate in the forward computation, and the parameters associated with them are updated during the backward propagation, as illustrated in Figure 2. Nevertheless, it has been empirically shown that the language signals from language indicator tokens alone are not sufficient (Arivazhagan et al., 2019), making modules or mechanisms dedicated to language-general and language-specific modeling a necessity (Zhang et al., 2020, 2021). To address this issue, we propose a conditional awareness-based routing mechanism (CAR) that allows the model to decide and learn what proportion of the outputs of language-general and language-specific neurons should be allocated for the translation of the language pair. For an input token $x_t$, CAR is evaluated as follows:

$$\mathrm{CAR}(x_t) = \frac{\sum_{i=1}^{N} \Phi(i)}{\sum_{i=1}^{N} \Phi(i) + \sum_{j=1}^{M} \Phi(j)} \quad (6)$$

$$h_G(x_t) = \mathrm{FFN}_G(\mathrm{CAR}(x_t) \cdot x_t) \quad (7)$$

$$h_S(x_t) = \mathrm{FFN}_S((1 - \mathrm{CAR}(x_t)) \cdot x_t) \quad (8)$$

where $G$ denotes language-general and $S$ language-specific. $\mathrm{FFN}_G$ and $\mathrm{FFN}_S$ are the language-general and language-specific neurons, respectively. $N$ is the total number of language-specific neurons in an FFN layer for a language pair, and $M$ is the total number of language-general neurons in an FFN layer. We combine $\mathrm{FFN}_G$ and $\mathrm{FFN}_S$ to alleviate the parameter interference. The fusion output $H_f$ is given by:

$$H_f = h_G(x_t) + h_S(x_t) \quad (9)$$

Here, the uppercase $H_f$ simply denotes the sum of $h_G(x_t)$ and $h_S(x_t)$; the notation is used only to distinguish the fused output from its two components. During the finetuning stage, we only update the parameters of the language-general and language-specific neurons for a specific language pair and freeze the other parameters of the LLMs.
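Building on the scores above, the next sketch illustrates the variance-based partition of Eq. (5) and the CAR-style routing of Eqs. (6)–(9). The masked FFN call `ffn(x, mask)` is an assumed helper (e.g., zeroing the inactive rows of the first FFN projection), and the indexing convention in the partition is chosen so that it reproduces the roughly 90%/10% general/specific split described in Sections 4.2 and 4.4, since Eq. (5) leaves the direction of the threshold ambiguous.

```python
# Illustrative sketch (assumed shapes and names): Eq. (5) neuron partition and
# the CAR-style routing of Eqs. (6)-(9) applied to one FFN layer.
import torch

def partition_neurons(phi, eps=0.9):
    """phi: [num_languages, p] awareness scores for one selected layer.
    Returns boolean masks (general, specific) and, for each specific neuron,
    the index of the language with the highest awareness score."""
    variance = phi.var(dim=0)                  # sigma(X_i), one value per neuron
    # Threshold the sorted variances; the convention here yields roughly a
    # fraction `eps` of language-general neurons (Sec. 4.2/4.4).
    sorted_var, _ = torch.sort(variance)       # ascending
    lam = sorted_var[int(eps * phi.shape[1]) - 1]          # Eq. (5)
    general_mask = variance <= lam             # low variance -> language-general
    specific_mask = ~general_mask
    specific_lang = phi.argmax(dim=0)          # owner language of each specific neuron
    return general_mask, specific_mask, specific_lang

def car_forward(ffn, x, phi_pair, general_mask, pair_specific_mask):
    """Eqs. (6)-(9): mix the outputs of language-general and pair-specific neurons.
    phi_pair: [p] awareness scores for the current language pair.
    ffn(x, mask) is an assumed helper running the FFN with only the masked neurons active."""
    phi_specific = phi_pair[pair_specific_mask].sum()
    phi_general = phi_pair[general_mask].sum()
    car = phi_specific / (phi_specific + phi_general)       # Eq. (6)
    h_general = ffn(car * x, general_mask)                  # Eq. (7)
    h_specific = ffn((1.0 - car) * x, pair_specific_mask)   # Eq. (8)
    return h_general + h_specific                           # Eq. (9)
```

How the mask is realized inside `ffn` (for instance, by zeroing the corresponding rows of `dense_h_to_4h` for inactive neurons) is an implementation detail that the text does not specify.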
4 Experiments
We conducted extensive experiments involving multiple models across various translation directions to evaluate the proposed framework against a set of strong baselines.

4.1 Dataset
During the finetuning stage, we selected 5 language pairs to tune the LLMs. All the original training data came from the recent WMT general translation track, and all data followed licenses that permit free use for research purposes. In addition, we used the approach of Huang et al. (2023) to clean the training data. All datasets originated from the Workshop on Machine Translation (WMT; https://www.statmt.org/). Specifically, we extracted 200,000 sentence pairs for each translation direction. In addition, we employed ten translation instruction finetuning templates sourced from FLAN v2 (Longpre et al., 2023), adapting them to our parallel data. Each sentence pair from the parallel corpus was randomly assigned one translation instruction template. We assessed our model's performance using established test sets such as WMT16, WMT14, and OPUS-100.

4.2 Settings and Baselines
Settings. In the language-pair-relevant layers detection phase, we set k to 4. We froze the parameters of the remaining layers while exclusively finetuning the parameters within the chosen four layers. In the language-aware neurons evaluation phase, we categorized the parameters within the selected layers into language-general and language-specific parameters, setting ϵ to 0.9. During the model finetuning stage, we configured the finetuning hyper-parameters as follows: the number of finetuning epochs was set to 1, the number of language pairs was specified as 10, the number of iterations per epoch for each language pair was set to 12,500, the batch size was set to 8, and the AdamW optimizer was employed. Additionally, the learning rate was set to 5e-5. Furthermore, we introduced a gradient accumulation operation, updating the model parameters every 10 iterations to enhance convergence. The LLMs used for our experiments are BLOOM-7b1 (Scao et al., 2022) and Baichuan2-7B-Base (Yang et al., 2023).

Baselines. We compared our approach to the following strong baselines.
• 0-shot: This approach uses instructions directly to make the model perform downstream tasks without providing any in-context demonstrations.
• In-context (Zhang et al., 2023): This is a training-free approach that allows the LLMs to perform downstream tasks. In particular, we use 5 random shots as in-context demonstrations.
• Adapter: This method facilitates the acquisition of new knowledge by incorporating additional adapter modules following model-specific layers, effectively addressing the issue of catastrophic forgetting.
• LoRA (Hu et al., 2022): This method efficiently finetunes a model for a downstream task by converting certain structures into low-rank matrices and subsequently finetuning them to suit the task.
• Adapter-LoRA (Alves et al., 2023): It uses adapter-based finetuning with LoRA, which matches the performance of traditional finetuning while reducing the number of training parameters.

| Methods | Params | en-de | de-en | en-it | it-en | en-fr | fr-en | en-ar | ar-en | en-zh | zh-en |
| Full finetuning | 7B | 16.12 | 19.39 | 17.98 | 24.18 | 29.13 | 28.15 | 15.40 | 28.83 | 20.87 | 25.52 |
| 0-shot | — | 11.71 | 17.82 | 8.07 | 16.05 | 19.58 | 18.88 | 9.46 | 26.37 | 6.14 | 22.02 |
| In-context | — | 11.49 | 14.57 | 9.80 | 13.12 | 18.74 | 15.29 | 10.01 | 21.24 | 6.83 | 17.19 |
| Adapter | 806M | 15.61 | 19.67 | 15.25 | 23.63 | 28.08 | 27.26 | 11.28 | 28.07 | 15.18 | 25.05 |
| LoRA | 31M | 15.12 | 19.03 | 14.82 | 22.85 | 27.16 | 27.72 | 11.15 | 27.74 | 15.72 | 25.12 |
| Adapter-LoRA | 806M | 16.31 | 20.23 | 15.83 | 23.82 | 28.05 | 28.22 | 11.78 | 28.46 | 16.32 | 25.61 |
| LANDeRMT (Ours) | 805M | 18.85 | 22.03 | 19.82 | 25.99 | 31.91 | 30.55 | 16.97 | 31.44 | 22.47 | 28.11 |

Table 1: BLEU scores on the 10 language pairs for xx-to-English and English-to-xx translation (en-de/de-en and en-it/it-en: WMT16; en-fr/fr-en: WMT14; en-ar/ar-en and en-zh/zh-en: OPUS-100). The highest score on each translation direction is highlighted in bold in the original table.
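The settings above translate into a fairly standard selective-finetuning loop. The sketch below is one plausible realization under the stated hyper-parameters (AdamW, learning rate 5e-5, parameter updates every 10 iterations); how the neuron masks are physically applied to the FFN weights is our assumption (gradient masking on the first FFN projection only), since the paper does not spell out this mechanism, and the module path again assumes a BLOOM-style backbone.

```python
# Illustrative sketch of the selective finetuning described in Section 4.2.
from torch.optim import AdamW

def setup_selective_finetuning(model, selected_layers, neuron_masks):
    """Freeze all parameters, then enable training only for the first FFN projection
    of the selected layers; gradients of non-selected neurons are zeroed via hooks."""
    for p in model.parameters():
        p.requires_grad = False
    trainable = []
    for layer_id in selected_layers:
        ffn = model.transformer.h[layer_id].mlp            # assumed module path
        mask = neuron_masks[layer_id]                      # bool mask over this layer's neurons
        # Simplification: only dense_h_to_4h (whose output rows are the neurons) is trained.
        for p in ffn.dense_h_to_4h.parameters():
            p.requires_grad = True
            p.register_hook(
                lambda g, m=mask: g * m.to(g).view(-1, *([1] * (g.dim() - 1)))
            )
            trainable.append(p)
    return trainable

def train(model, loader, trainable_params, accum_steps=10, lr=5e-5):
    """loader is assumed to yield dicts with input_ids/attention_mask (batch size 8)."""
    optim = AdamW(trainable_params, lr=lr)
    model.train()
    for step, batch in enumerate(loader):
        loss = model(**batch, labels=batch["input_ids"]).loss / accum_steps
        loss.backward()
        if (step + 1) % accum_steps == 0:                  # gradient accumulation
            optim.step()
            optim.zero_grad()
```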
4.3 Main Results
For evaluating translation performance, we used sacreBLEU as the automatic evaluation metric (signature BLEU+case.mixed+numrefs.1+smooth.none+tok.13a+version.2.2.1).

Comparison with ICL. In order to examine the effectiveness of our proposed method, we evaluated LANDeRMT on various test sets and multiple language pairs. As shown in Table 1, our method can use new parallel training data to enhance the translation ability of LLMs.

Comparison with finetuning baselines. Compared to the baselines, our method is the best for all translation directions in Table 1. For relatively low-resource language pairs, such as English-Chinese, our method achieves significant improvements over the baselines. Compared to the full-parameter finetuning approach, our method has a clear parametric advantage, since it only finetunes parameters in four layers of the model, and thus exhibits a notable advantage in terms of the number of parameters to be tuned. In comparison to other efficient finetuning methods, e.g., the adapter baseline, our method finetunes essentially the same number of parameters. However, our method is much better than the adapter-based approach in terms of translation quality.

| Method | en-fr | fr-en | en-zh | zh-en |
| Layers | 27.63 | 27.12 | 22.81 | 24.28 |
| LANDeRMT-LS | 22.27 | 21.46 | 16.18 | 23.16 |
| LANDeRMT-LG | 28.15 | 27.83 | 23.52 | 25.08 |
| LANDeRMT (Ours) | 31.91 | 30.55 | 22.47 | 28.11 |

Table 2: Translation results achieved by finetuning the BLOOM-7b1 model using different ablation experiment settings.

4.4 Ablation Study
In the ablation experiments, we employed four distinct experimental setups, denoted as Layers, LANDeRMT-LS, LANDeRMT-LG, and LANDeRMT. The Layers configuration finetuned all parameters of the language-pair-relevant layers for each language direction. In the LANDeRMT-LS setup, we finetuned only the language-specific parameters of the selected layers, with each language direction adjusting the parameters specific to that direction. The LANDeRMT-LG setup focused on finetuning only the language-general parameters of the selected layers, with all language directions adjusting the same language-general parameters. The LANDeRMT method, proposed in this paper, finetuned both the language-general and the language-specific parameters for each language pair. From Table 2, we observe that LANDeRMT-LS underperforms the Layers method, likely due to its smaller parameter size, which constitutes only 10% of the parameters in Layers. In detail, we can observe that LANDeRMT achieves a 4.28 BLEU improvement over Layers in en-fr, a 9.64 BLEU improvement over LANDeRMT-LS, and a 3.76 BLEU improvement over LANDeRMT-LG. These experiments demonstrate the effectiveness of CAR. Surprisingly, LANDeRMT-LG achieves better results despite finetuning fewer parameters than Layers. This suggests that our selection of language-general parameters effectively captures language alignment, significantly improving translation performance. However, finetuning the language-general parameters alone, as in LANDeRMT-LG, is insufficient to fully grasp language-specific information.
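The BLEU scores above can be reproduced with the sacreBLEU toolkit configured to match the signature reported in the footnote; a minimal example follows (the hypothesis and reference strings are placeholders, not data from the paper).

```python
# Evaluation sketch: corpus BLEU with the reported sacreBLEU signature
# (BLEU+case.mixed+numrefs.1+smooth.none+tok.13a).
from sacrebleu.metrics import BLEU

bleu = BLEU(tokenize="13a", smooth_method="none", lowercase=False)
hyps = ["Das ist ein Test .", "Noch ein Satz ."]        # system outputs (illustrative)
refs = [["Das ist ein Test .", "Noch ein Satz ."]]      # one reference stream

result = bleu.corpus_score(hyps, refs)
print(result.score)            # corpus-level BLEU
print(bleu.get_signature())    # should reproduce the signature above
```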
Figure 3: BLEU score improvements achieved on other language pairs when using the LANDeRMT method to finetune only one language pair on the BLOOM-7b model.

5 Analysis
5.1 LANDeRMT Improves Transfer Learning across Languages
We examined the transfer learning ability of LANDeRMT in different translation directions. We finetuned the model using only parallel data from a particular language direction. In other words, we finetuned only the language-general and language-specific parameters for that language pair and then observed the performance of the model in other language directions. The Y-axis of Figure 3 shows the single language direction that we finetuned, and the X-axis shows the language direction of the test data; the plotted values are the improvements in the model's translation performance before and after the finetuning. Since BLOOM-7b is a model that is not mainly trained on a parallel corpus, its translation performance before finetuning is poor, which is the reason for the large improvements. We observe that when finetuning one language direction, the results of other language directions can also be significantly improved, which proves that our method is effective in facilitating transfer learning between languages. However, there are some exceptions. For example, when finetuning the de-en direction, the ar-en direction does not improve significantly or even decreases to some extent. We believe that this may be because Arabic belongs to a different language family from German, and the distance between the two languages is large, making cross-lingual transfer learning difficult.

Figure 4: Layer-wise average activation across various language pair settings in the BLOOM-7b model.
Figure 5: Layer-wise delta average activation across various language pair settings in the BLOOM-7b model.

5.2 Language-Pair-Relevant Layers for Different Language Pairs
Figure 4 illustrates the variation in the average activation values across each layer of the model when inputting translation instructions generated using diverse language pairs. It is noteworthy that the average activation values of various language pairs exhibit a similar trend of change, particularly in the shallower layers of the model. Moreover, when interchanging the source and target languages within a language pair, the average activation values consistently follow a more uniform trend, as evidenced by the translation directions ar-en and en-ar. This indicates that not only do the semantics remain consistent between the source and target languages within the same language pair, but the same semantics are also shared across different language pairs. The absolute value of the activation change from layer to layer can be calculated from the average activation values of each layer, as illustrated in Figure 5. We can see that early and late-stage layers in the model harbor information pertaining to language pairs. For instance, layers 6 and 7, as well as the final layers, exhibit higher absolute values of activation change. Furthermore, we observe that the early layers of the model encapsulate language-pair-related information that is language-pair-general, displaying a substantial overlap across different language pairs. Conversely, the layers towards the end of the model contain language-pair-related information that is language-pair-specific, characterized by a diminished overlap among different language pairs.
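The layer-wise curves in Figures 4–7 can be derived from the quantities introduced earlier; the short sketch below indicates one way to compute the delta-activation series (Figure 5) and the per-layer average awareness of language-general versus language-specific neurons (Figures 6–7). The input shapes and names are assumptions carried over from the earlier sketches, not the authors' analysis code.

```python
# Illustrative sketch of the layer-wise statistics behind Figures 4-7.

def delta_activation(layer_means):
    """Figure 5: absolute change of the average activation between adjacent layers.
    layer_means: list of per-layer average activations (floats)."""
    return [abs(layer_means[i + 1] - layer_means[i]) for i in range(len(layer_means) - 1)]

def layerwise_awareness(phi_per_layer, general_masks, specific_masks):
    """Figures 6-7: average awareness of language-general vs. language-specific
    neurons in each layer, for one language-pair setting.
    phi_per_layer[l]: 1-D tensor of awareness scores for layer l."""
    general_curve, specific_curve = [], []
    for l, phi in enumerate(phi_per_layer):
        general_curve.append(phi[general_masks[l]].mean().item())
        specific_curve.append(phi[specific_masks[l]].mean().item())
    return general_curve, specific_curve
```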
Figure 6: Layer-wise average language-general neuron awareness scores across various language settings in the BLOOM-7b1 model.
Figure 7: Layer-wise average language-specific neuron awareness scores across various language settings in the BLOOM-7b1 model.

5.3 Neuron Awareness for Different Languages: General and Specific
The main idea of our proposed method is to distinguish language-general from language-specific neurons. To verify whether this goal has been achieved, we conducted the following experiments. As mentioned in Section 3.2, we categorize neurons based on their awareness scores, and we observed significant differences in the awareness scores of language-general and language-specific neurons across the layers requiring finetuning. We illustrate this in Figure 6 and Figure 7. We find that for almost all language pairs, there are noticeable differences in awareness scores between certain intermediate layers and the final layers. This indicates that our layer-based categorization of neurons accurately reflects the practical scenario. It also suggests that language-general and language-specific neurons exhibit varying levels of importance across different layers of the model, particularly in the layers targeted for finetuning. Such differences likely stem from the distinct roles that language-general and language-specific neurons play in capturing and processing language-specific and language-general information.

5.4 Neuron Awareness for Different Language Pairs
Figure 8 depicts the average neuron awareness scores for each layer of the model, computed using TE with monolingual data inputs in various language pairs.

Figure 8: Layer-wise average neuron awareness scores across various language settings in the BLOOM-7b model.

The employed multidirectionally aligned monolingual data ensures semantic one-to-one correspondence. The results show consistent trends in neuron awareness scores across different language pairs, particularly in the intermediate layers, indicating the model's ability to capture semantic information consistently across language pairs. Additionally, related languages such as Spanish, English, and French exhibit more similar trends, supporting our hypothesis. Furthermore, we observed that language-specific neurons tend to have higher awareness scores in the last layers of the model. This suggests a heightened focus on encoding and retaining language-specific semantic information during the output phase, particularly in deeper layers. Notably, language-specific neurons related to English consistently exhibit high awareness scores across all language pairs. This can be attributed to the prevalence of English data during the model's pre-training phase, indicating robust representation and preservation of English language-specific information throughout the model.

5.5 Results on Other LLMs
We also finetuned the Baichuan-7B-Base model using the LANDeRMT method and compared it with the adapter-LoRA finetuning approach. Results are shown in Figure 9. We observe that across the 10 selected language directions, our proposed method outperforms the adapter-LoRA finetuning method.
This demonstrates the applicability of the LANDeRMT method across different models, achieving optimal results not only on the BLOOM-7b models but also on the Baichuan model.

Figure 9: Comparison of BLEU scores on the OPUS-100 test set across ten language directions for finetuning the Baichuan-7b-base model using adapter-LoRA and our proposed method LANDeRMT.

6 Conclusion
In this paper, we have presented a novel approach that not only improves translation quality but also mitigates the risk of forgetting previous knowledge while adapting to new data. We propose a TE-based method to evaluate neuron awareness scores for MT tasks and categorize neurons into language-general and language-specific neurons. The proposed routing mechanism ensures optimal allocation of resources across language-specific and language-general capacities, further enhancing the adaptability of LLMs. Our experimental results, conducted across ten language pairs, validate the effectiveness of our model, showcasing superior performance compared to existing baselines.

Acknowledgments
The present research was supported by the National Natural Science Foundation of China Youth Fund (Grant No. 62306210) and the Key Research and Development Program of Yunnan Province (Grant No. 202203AA080004). We would like to thank the anonymous reviewers for their insightful comments.

Limitations
Although LANDeRMT is a new approach to finetuning LLMs to enhance their translation ability, it has limitations. The finetuning procedure is shorter than training LLMs, as the amount of data required during the finetuning stage is much smaller than during the training stage. This significantly reduces the cost of training a model from scratch, but it may still not totally overcome parameter interference, since we do not fully update the parameters of the LLMs. Additionally, due to computational constraints, we are currently unable to design additional experiments to validate how our method enhances the upper limit of the translation capabilities of LLMs when more training data is added.

Ethics Statement
This study adheres to the ethical guidelines set forth by our institution and follows the principles outlined in the ACM Code of Ethics and Professional Conduct. All datasets used in our experiments are publicly available.

References
Duarte Alves, Nuno Miguel Guerreiro, João Alves, José Pombal, Ricardo Rei, José Guilherme Camargo de Souza, Pierre Colombo, and André Martins. 2023. Steering large language models for machine translation with finetuning and in-context learning. In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 11127–11148. Association for Computational Linguistics.
Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George F. Foster, Colin Cherry, Wolfgang Macherey, Zhifeng Chen, and Yonghui Wu. 2019. Massively multilingual neural machine translation in the wild: Findings and challenges. CoRR, abs/1907.05019.
Rachel Bawden and François Yvon. 2023. Investigating the translation performance of a large multilingual language model: the case of BLOOM. In Proceedings of the 24th Annual Conference of the European Association for Machine Translation, EAMT 2023, Tampere, Finland, 12-15 June 2023, pages 157–170. European Association for Machine Translation.
Steven Bills, Nick Cammarata, Dan Mossing, Henk Tillman, Leo Gao, Gabriel Goh, Ilya Sutskever, Jan Leike, Jeff Wu, and William Saunders. 2023. Language models can explain neurons in language models. URL https://openaipublic. blob. core. windows. net/neuron-explainer/paper/index. html.(Date accessed: 14.05. 2023). Daixuan Cheng, Shaohan Huang, and Furu Wei. 2023. Adapting large language models via reading comprehension. CoRR, abs/2309.09530. Arthur Conmy, Augustine N. Mavor-Parker, Aengus Lynch, Stefan Heimersheim, and Adrià GarrigaAlonso. 2023. Towards automated circuit discovery for mechanistic interpretability. CoRR, abs/2304.14997. Marta R. Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loïc Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. 2022. No language left behind: Scaling human-centered machine translation. CoRR, abs/2207.04672. Guanting Dong, Hongyi Yuan, Keming Lu, Chengpeng Li, Mingfeng Xue, Dayiheng Liu, Wei Wang, Zheng Yuan, Chang Zhou, and Jingren Zhou. 2023. How abilities in large language models are affected by supervised fine-tuning data composition. CoRR, abs/2310.05492. William Fedus, Barret Zoph, and Noam Shazeer. 2022. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. J. Mach. Learn. Res., 23:120:1–120:39. Albert Garde, Esben Kran, and Fazl Barez. 2023. Deepdecipher: Accessing and investigating neuron activation in large language models. CoRR, abs/2310.01870. Marjan Ghazvininejad, Hila Gonen, and Luke Zettlemoyer. 2023. Dictionary-based phrase-level prompting of large language models for machine translation. CoRR, abs/2302.07856. Kshitij Gupta, Benjamin Thérien, Adam Ibrahim, Mats L. Richter, Quentin Anthony, Eugene Belilovsky, Irina Rish, and Timothée Lesort. 2023. Continual pre-training of large language models: How to (re)warm your model? CoRR, abs/2308.04014. Lifeng Han, Gleb Erofeev, Irina Sorokina, Serge Gladkoff, and Goran Nenadic. 2022. Examining large pre-trained language models for machine translation: What you don’t know about it. In Proceedings of the Seventh Conference on Machine Translation, WMT 2022, Abu Dhabi, United Arab Emirates (Hybrid), December 7-8, 2022, pages 908–919. Association for Computational Linguistics. 12144 Zhiwei He, Tian Liang, Wenxiang Jiao, Zhuosheng Zhang, Yujiu Yang, Rui Wang, Zhaopeng Tu, Shuming Shi, and Xing Wang. 2023. Exploring humanlike translation strategy with large language models. CoRR, abs/2305.04118. Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. Lora: Low-rank adaptation of large language models. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net. Kaiyu Huang, Peng Li, Jin Ma, Ting Yao, and Yang Liu. 2023. Knowledge transfer in incremental learning for multilingual neural machine translation. 
In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 15286–15304. Association for Computational Linguistics. Yeskendir Koishekenov, Alexandre Berard, and Vassilina Nikoulina. 2023. Memory-efficient NLLB-200: language-specific expert pruning of a massively multilingual machine translation model. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 3567–3585. Association for Computational Linguistics. Jiahuan Li, Hao Zhou, Shujian Huang, Shanbo Chen, and Jiajun Chen. 2023a. Eliciting the translation ability of large language models via multilingual finetuning with translation instructions. CoRR, abs/2305.15083. Shangjie Li, Xiangpeng Wei, Shaolin Zhu, Jun Xie, Baosong Yang, and Deyi Xiong. 2023b. MMNMT: Modularizing multilingual neural machine translation with flexibly assembled MoE and dense blocks. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 4978–4990, Singapore. Association for Computational Linguistics. Huihui Liu, Yiding Yang, and Xinchao Wang. 2021. Overcoming catastrophic forgetting in graph neural networks. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 8653– 8661. AAAI Press. Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V. Le, Barret Zoph, Jason Wei, and Adam Roberts. 2023. The flan collection: Designing data and methods for effective instruction tuning. In International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pages 22631–22648. PMLR. Yun Luo, Zhen Yang, Fandong Meng, Yafu Li, Jie Zhou, and Yue Zhang. 2023. An empirical study of catastrophic forgetting in large language models during continual fine-tuning. CoRR, abs/2308.08747. Chenyang Lyu, Jitao Xu, and Longyue Wang. 2023. New trends in machine translation using large language models: Case examples with chatgpt. CoRR, abs/2305.01181. Yasmin Moslem, Rejwanul Haque, John D. Kelleher, and Andy Way. 2023. Adaptive machine translation with large language models. In Proceedings of the 24th Annual Conference of the European Association for Machine Translation, EAMT 2023, Tampere, Finland, 12-15 June 2023, pages 227–237. European Association for Machine Translation. Gal Patel, Leshem Choshen, and Omri Abend. 2022. On neurons invariant to sentence structural changes in neural machine translation. In Proceedings of the 26th Conference on Computational Natural Language Learning, CoNLL 2022, Abu Dhabi, United Arab Emirates (Hybrid Event), December 7-8, 2022, pages 194–212. Association for Computational Linguistics. Tilman Räuker, Anson Ho, Stephen Casper, and Dylan Hadfield-Menell. 2022. Toward transparent AI: A survey on interpreting the inner structures of deep neural networks. CoRR, abs/2207.13243. Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, Jonathan Tow, Alexander M. 
Rush, Stella Biderman, Albert Webson, Pawan Sasanka Ammanamanchi, Thomas Wang, Benoît Sagot, Niklas Muennighoff, Albert Villanova del Moral, Olatunji Ruwase, Rachel Bawden, Stas Bekman, Angelina McMillan-Major, Iz Beltagy, Huu Nguyen, Lucile Saulnier, Samson Tan, Pedro Ortiz Suarez, Victor Sanh, Hugo Laurençon, Yacine Jernite, Julien Launay, Margaret Mitchell, Colin Raffel, Aaron Gokaslan, Adi Simhi, Aitor Soroa, Alham Fikri Aji, Amit Alfassy, Anna Rogers, Ariel Kreisberg Nitzav, Canwen Xu, Chenghao Mou, Chris Emezue, Christopher Klamm, Colin Leong, Daniel van Strien, David Ifeoluwa Adelani, and et al. 2022. BLOOM: A 176b-parameter open-access multilingual language model. CoRR, abs/2211.05100. Chenze Shao and Yang Feng. 2022. Overcoming catastrophic forgetting beyond continual learning: Balanced training for neural machine translation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 2023–2036. Association for Computational Linguistics. Suzanna Sia and Kevin Duh. 2023. In-context learning as maintaining coherency: A study of on-the-fly machine translation using large language models. CoRR, abs/2305.03573. 12145 Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian CantonFerrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and finetuned chat models. CoRR, abs/2307.09288. David Vilar, Markus Freitag, Colin Cherry, Jiaming Luo, Viresh Ratnakar, and George F. Foster. 2023. Prompting palm for translation: Assessing strategies and performance. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 15406–15427. Association for Computational Linguistics. Elena Voita, Javier Ferrando, and Christoforos Nalmpantis. 2023. Neurons in large language models: Dead, n-gram, positional. CoRR, abs/2309.04827. Chengyue Wu, Yukang Gan, Yixiao Ge, Zeyu Lu, Jiahao Wang, Ye Feng, Ping Luo, and Ying Shan. 2024. Llama pro: Progressive llama with block expansion. CoRR, abs/2401.02415. Shijie Wu, Ozan Irsoy, Steven Lu, Vadim Dabravolski, Mark Dredze, Sebastian Gehrmann, Prabhanjan Kambadur, David S. Rosenberg, and Gideon Mann. 2023. Bloomberggpt: A large language model for finance. CoRR, abs/2303.17564. Wanying Xie, Yang Feng, Shuhao Gu, and Dong Yu. 2021. Importance-based neuron allocation for multilingual neural machine translation. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 5725–5737. Association for Computational Linguistics. Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, Juntao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, and Zhiying Wu. 2023. Baichuan 2: Open large-scale language models. CoRR, abs/2309.10305. Biao Zhang, Ankur Bapna, Rico Sennrich, and Orhan Firat. 2021. Share or not? learning to schedule language-specific capacity for multilingual translation. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. Biao Zhang, Barry Haddow, and Alexandra Birch. 2023. Prompting large language model for machine translation: A case study. In International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pages 41092–41110. PMLR. Biao Zhang, Philip Williams, Ivan Titov, and Rico Sennrich. 2020. Improving massively multilingual neural machine translation and zero-shot translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 1628–1639. Association for Computational Linguistics. Shaolin Zhu, Menglong Cui, and Deyi Xiong. 2024a. Towards robust in-context learning for machine translation with large language models. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 16619–16629, Torino, Italia. ELRA and ICCL. Shaolin Zhu, Shiwei Gu, Shangjie Li, Lin Xu, and Deyi Xiong. 2024b. Mining parallel sentences from internet with multi-view knowledge distillation for lowresource language pairs. Knowledge and Information Systems, 66(1):187–209. Shaolin Zhu, Chenggang Mi, Tianqi Li, Yong Yang, and Chun Xu. 2023. Unsupervised parallel sentences of machine translation for asian language pairs. ACM Transactions on Asian and Low-Resource Language Information Processing, 22(3):1–14. Shaolin Zhu and Deyi Xiong. 2023. TJUNLP:system description for the WMT23 literary task in Chinese to English translation direction. In Proceedings of the Eighth Conference on Machine Translation, pages 307–311, Singapore. Association for Computational Linguistics. 12146 A Appendix A.1 Language Details We introduce the characteristics of different languages as shown in Table 3. Code Language Genus Order en English Romance SVO ar Arabic Semitic VSO fr French Romance SVO de German Germanic SVO zh Chinese Sinitic SVO it Italian Romance SVO Table 3: The characteristics of languages in our setting. A.2 Taylor Expansion We first express ∆L(hi) as loss change as shown in the following equation. 
$$|\Delta \mathcal{L}(h_i)| = |\mathcal{L}(H, h_i = 0) - \mathcal{L}(H, h_i)|$$

where $H$ is the representation produced by the neurons other than $i$ in the same structure as neuron $i$. We then perform a first-order Taylor expansion of $\mathcal{L}(H, h_i)$ at $h_i = 0$:

$$\mathcal{L}(H, h_i) = \mathcal{L}(H, h_i = 0) + \frac{\partial \mathcal{L}(H, h_i)}{\partial h_i} h_i + R_1(h_i)$$

The term $R_1(h_i)$ can be ignored, since the second- and higher-order derivatives of the activation function in the model tend to zero. So the above equation can be reduced to the following form:

$$\mathcal{L}(H, h_i) \approx \mathcal{L}(H, h_i = 0) + \frac{\partial \mathcal{L}(H, h_i)}{\partial h_i} h_i$$

Therefore, $|\Delta \mathcal{L}(h_i)|$ can eventually be simplified to the following form:

$$|\Delta \mathcal{L}(h_i)| \approx \frac{\partial \mathcal{L}(H, h_i)}{\partial h_i} h_i$$

A.3 Effect of Hyperparameter k
When the relation of layers for different languages is determined, the number of language pairs associated with each layer can be adjusted according to k. When k = 30, the threshold is at its maximum, so all layers will be allocated to tune the LLMs; when k = 0, the threshold is 0, so no layers will be tuned for any language pair, just like 0-shot ICL. To better show the overall impact of the hyperparameter k, we vary it from 0 to 30, and the results are shown in Figure 10. As we can see, the translation performance of the proposed approach increases with the increment of k and reaches the best performance when k equals 4. As k continues to increase, the performance deteriorates, which indicates that the over-specific layers are bad at capturing the common language-pair-relevant alignment and will lead to performance degradation.

Figure 10: Mean BLEU scores for all language directions at different k value settings.
Figure 11: Mean BLEU scores for all language directions at different ϵ value settings.

A.4 Effect of Hyperparameter ϵ
We set several different ϵ values to classify language-general neurons and language-specific neurons. The experimental outcomes are depicted in Figure 11. The figure reveals that the average of all language-specific BLEU scores peaks when language-general neurons constitute 0.9 of the total neuron count. Below this threshold, the translation efficacy diminishes as the proportion of language-general neurons decreases. Conversely, exceeding the 0.9 threshold results in a decline in performance, with a higher proportion of language-general neurons leading to poorer results.

Figure 12: Clustering of representations generated by language-general neurons in the mlp.dense_h_to_4h structure in layer 10 of the BLOOM-7b1 model.
Figure 13: Clustering of representations generated by language-specific neurons in the mlp.dense_h_to_4h structure in layer 10 of the BLOOM-7b1 model.

A.5 Language Cluster
The main idea of our proposed method is to let the language-general and the language-specific knowledge be captured by different neurons. To validate whether the language-general and language-specific neurons of FFNs within LLMs capture general or specific language knowledge, we plotted the distribution of different neurons across various languages in the 10th layer, as shown in Figure 12 and Figure 13.
From these figures, it is evident that for the language-general FFN neurons, the distributions for various languages intersect without clear boundaries, indicating a shared representation of language knowledge. In contrast, for the language-specific neurons, the boundaries between language distributions are highly distinct, highlighting the independence of language-specific knowledge. This observation underscores that our neuron-awareness evaluation enables neurons to capture and integrate language knowledge in a general manner across multiple languages within the language-general FFN neurons. Conversely, the distinct boundaries in the distributions of the language-specific FFN neurons suggest that these neurons are dedicated to encoding language-specific nuances and characteristics.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12149–12160 August 11-16, 2024 ©2024 Association for Computational Linguistics

A Joint Coreference-Aware Approach to Document-Level Target Sentiment Analysis
Hongjie Cai, Heqing Ma, Jianfei Yu, Rui Xia∗
School of Computer Science and Engineering, Nanjing University of Science and Technology, China
{hjcai, hqma, jfyu, rxia}@njust.edu.cn

Abstract
Most existing work on aspect-based sentiment analysis (ABSA) focuses on the sentence level, while research at the document level has not received enough attention. Compared to sentence-level ABSA, document-level ABSA is not only more practical but also requires holistic document-level understanding capabilities such as coreference resolution. To investigate the impact of coreference information on document-level ABSA, we conduct a three-stage study of the document-level target sentiment analysis (DTSA) task: 1) exploring the effectiveness of coreference information for the DTSA task; 2) reducing the reliance on manually annotated coreference information; 3) alleviating the evaluation bias caused by missing the coreference information of opinion targets. Specifically, we first manually annotate the coreferential opinion targets and propose a multi-task learning framework to model the DTSA task and the coreference resolution task jointly. Then we annotate the coreference information with ChatGPT for joint training. Finally, to address the issue of missing coreference targets, we modify the metric from strict matching to a loose matching method based on the clusters of targets. The experimental results demonstrate our framework's effectiveness and reflect the feasibility of using ChatGPT-annotated coreferential entities and the applicability of the modified metric. Our source code is publicly released at https://github.com/NUSTM/DTSACoref.

1 Introduction
Aspect-based sentiment analysis (ABSA) aims to extract opinion targets from review texts and the sentiment towards each opinion target (Hu and Liu, 2004). For example, in the sentence "The food is delicious.", the opinion target is "food" and its sentiment is positive.
∗Corresponding author.

Figure 1: Comparison between previous work and our methodology on document-level target sentiment analysis with coreferential mentions.

Early work (Hu and Liu, 2004; Jiang et al., 2011) employed rules or machine learning methods to solve the task on their annotated datasets.
After the introduction of ABSA evaluation tasks and benchmark datasets in (Pontiki et al., 2014, 2015, 2016), subsequent work deepened the research on ABSA, exploring different model paradigms in effects on ABSA (Li et al., 2018; Wu et al., 2020; Zhang et al., 2021a; Gou et al., 2023), also by improving the task definition, providing rich and diverse ABSA datasets to extract opinion targets and their sentiments more accurately (Jiang et al., 2019; Cai et al., 2021; Barnes et al., 2022). Despite these advancements, most existing work focuses on the sentence level, overlooking the fact that product reviews are typically presented in the form of documents, which is not only more practical but also enables the utilization of valuable document-level information for ABSA tasks. Recently, (Luo et al., 2022) introduced a task of document-level target sentiment analysis (DTSA), hierarchically depicting the opinion target. As shown in Figure 1, the review mentions three opinion targets: “work shoes”, the “leather” of the work shoes (represented as “work 12149 shoes–leather”), and the “toe area” of the work shoes (represented as “work shoes–toe area”). It expresses a positive sentiment towards “leather”, a negative sentiment towards “toe area”, and a mixed sentiment towards “work shoes”. A typical phenomenon in the DTSA task is the coreference of opinion targets, especially across multiple sentences. As the example shown in Figure 1, the “work shoes” have two coreferential mentions “they” and “They”, and the three aspect terms “work shoes", “they”, and “They” refer to the same opinion target. Combining context understanding of these pronouns’ referent may help differentiate the sentiment of different opinion targets and accurately identify the hierarchical relation among opinion targets. Although such coreference phenomenon is prevalent in review texts and can affect the accuracy of opinion target extraction in the ABSA research, Very few studies have investigated the coreference problem (Mullick et al., 2023; Rønningstad et al., 2023), especially in the DTSA task, which is relatively complex and has a higher incidence of coreference in review texts. Currently, there is no relevant research addressing coreference problems in this task. To explore the impact of coreference resolution on the DTSA task, the following three questions should be addressed: First, in the end-to-end DTSA task, does the coreference information contribute to the accurate identification of hierarchical relations among opinion targets and their sentiments? Second, coreference annotation requires substantial human effort and time. Is it possible to mine coreference information without manual efforts? Third, the same opinion target may have different forms of textual expression, yet existing ABSA datasets often select only one expression as the ground truth label. This approach may treat coreferent expressions of the same target as incorrect during evaluation. How can we mitigate the evaluation errors caused by the omission of coreference annotations? To address the first question, we annotate the coreferential targets of the opinion targets, proposing a coreference-aware joint model to investigate whether the coreference information can improve the model’s ability to identify hierarchical opinion targets and their sentiments. The model employs a dual-path architecture, utilizing multi-task learning to jointly handle the DTSA task and coreference resolution task. 
The left path is designed to identify opinion targets, their sentiments, and the hierarchical relations between them. The right path focuses on identifying the opinion targets and their coreferential relations. By sharing parameters and aligning tokens between the two paths, the model can learn to leverage coreference information and enhance its performance on the DTSA task. In response to the second question, we leverage ChatGPT to annotate the coreference information and verify the efficacy of the coreference annotation in the joint model. To answer the third question, we modify the metric to employ a cluster matching method that takes into account coreferent opinion targets. The experimental results demonstrate the effectiveness of introducing coreference information through a dual-path model. Additionally, the coreference annotations provided by ChatGPT can achieve comparable DTSA performance to those of human annotations. Finally, the cluster-level evaluation metric is also proven to be more reasonable in assessing the effectiveness of opinion target extraction in consideration of the coreference problem. 2 Related Work In recent years, ABSA has been extensively studied by researchers, and works are primarily focused on the sentence level, emphasizing either singleelement extraction or multi-element extraction. Sentence-level ABSA. Early sentence-level ABSA (Dong et al., 2014) often constructed datasets sourced from Twitter. Following the introduction of the ABSA task through SemEval evaluations by (Pontiki et al., 2014, 2015, 2016), subsequent research proposed numerous methods for sentencelevel ABSC (Ma et al., 2017; Xu et al., 2019; Tang et al., 2020). Additionally, a portion of the work further focused on the end-to-end ABSA task, mainly aiming to jointly extract the aspects and their sentiments within sentences. (Li et al., 2019; Luo et al., 2019; He et al., 2019) have introduced various solutions based on encoder-only models for this task. With the development of generative language models, (Zhang et al., 2021a,b) subsequently proposed solutions based on BART or T5. Document-level ABSA. A few ABSA work was conducted at the document level. (Hu and Liu, 2004; Ding et al., 2008) annotated documentlevel ABSA datasets across multiple domains and researched multiple aspect extractions on these datasets. (Chen et al., 2020b) investigated the con12150 sistency of the same aspect’s sentiments across documents as well as the correlation between sentiments across different aspects. (Luo et al., 2022) introduced the DTSA task and proposed a framework based on BART to solve this task. Furthermore, (Song et al., 2023) introduced an encoderbased Sequence-to-Structure framework to explicitly model the hierarchical relations between entities. Our work, taking DTSA as a starting point, investigates the impact of coreference on DTSA. Coreference Resolution. Coreference resolution aims to identify all expressions that refer to the same entity from the text. Since (Lee et al., 2017) first introduced a deep learning method for end-toend coreference resolution, coreference resolution has been increasingly integrated into related downstream tasks. (Hu and Liu, 2004) marked entities requiring pronoun resolution in ABSA datasets. (Ding and Liu, 2010) introduced a coreference classification task for objects and entities in comparative reviews. Building on this, (Chen et al., 2020a) proposed a method for automatic mining and utilizing domain-specific knowledge to address coreferences of entities. 
Moreover, (Mullick et al., 2023; Rønningstad et al., 2023) investigated whether the coreference resolution of entity is beneficial for the ABSC task. 3 Problem Definition In traditional sentence-level ABSA tasks, given a review sentence s, the goal is to identify the opinion targets (fine-grained entities or their aspects, collectively referred to as entities) mentioned in the sentence and their corresponding sentiment {. . . , (ai, pi), . . .}, where ai represents the extracted i-th entity, typically presented as a continuous text span in the sentence, and pi denotes the sentiment towards ai, such as positive, negative, or neutral. In the DTSA task, the input extends from a single sentence to a document d = [s1, . . . , sn] consisting of n sentences. The output aims to identify all hierarchical entities and their corresponding sentiments {. . . , (ti, pi), . . .}. Unlike the flat entity ai in sentence-level ABSA, ti represents a multi-level entity composed of multiple flat entities. We will use hierarchical entities to represent multi-level opinion targets in the following. It can be seen that extracting hierarchical entities in the DTSA task is more challenging than sentence-level ABSA tasks, particularly when the metric requires an exDomain Source Cluster (#/%) Entities/Cluster Book Humans 519/52.64% 2.91 ChatGPT 466/47.26% 4.05 Clothing Humans 526/56.68% 3.23 ChatGPT 374/40.30% 4.02 Hotel Humans 458/44.51% 2.38 ChatGPT 369/35.86% 3.00 Restaurant Humans 550/58.51% 2.52 ChatGPT 400/42.55% 3.98 Table 1: Statistics on the annotated coreference clusters between humans and ChatGPT. The third and fourth columns display the number and percentage of documents annotated with coreference clusters, as well as the entities per cluster. act match with the ground truth. Considering the entity coreference in documents, we also modify the metric from the exact entity-level matching to a cluster-level matching. 4 Methodology To investigate whether the information of entity coreference can aid in identifying hierarchical entities and their sentiments, we first annotate the coreferential entities for each entity ai, forming a coreference cluster Ci = {ai0, . . . , aik} (§4.1). Next, we incorporate the coreference information into the DTSA task through multi-task learning, leveraging both human-annotated and machine-annotated coreference information to improve the model’s ability to recognize coreference (§4.2). 4.1 Entity Coreference Annotation and Analysis The coreference of entities is manifested as multiple entities at different positions referring to the same entity. To annotate entity coreference, we need to label these entities at different positions and cluster them into a coreferential entity cluster. We obtain the coreference information of entities through both human annotation and ChatGPT annotation (refer readers to Appendix A for annotation details), respectively. Statistics and Analysis As shown in Table 1, in each domain, the number of documents with human-annotated coreference is higher than those annotated by ChatGPT. The proportion of humanannotated coreference documents ranges from 44% to 59% of the total documents, while ChatGPTannotated coreference documents account for 35% to 48%. However, for the documents annotated with coreference, the average number of entities per 12151 I wear between a 9. 5 and 10 but ordered the 10, they fit but the toe area is a bit long. They are kinda narrow but will stretch being good leather. Overall it is an okay purchase for work shoes. 
Cross layer parameter sharing DTSA-RoBERTa Entity Sentiment Classification Token-level constraint 0 1 1 0 1 0 0 1 s e s e 0 0 Coreference-RoBERTa candidate 1 1 0 0 0 1 1 0 1 1 flat entities ··· ··· ··· ··· ··· ··· 1 1 s/e ··· with sentiment coref entities POS NEG MIX POS NEG MIX ··· ··· ··· ··· ··· ··· . . . toe area . . . good leather . . . for work shoes they fit . . . bit long. They are . . . for work shoes ··· ··· ··· ··· ··· ··· ··· ··· ··· s/e s e s/e ··· ··· ··· : coreference relation : hierarchical relation 0 1 positional embeddings start end Figure 2: Overview of the joint framework for document-level target sentiment analysis and coreference resolution. cluster annotated by ChatGPT is generally higher than that annotated by humans, with this number reaching about 4 in the domains of Book, Clothing, and Restaurant, and 3 in the Hotel domain, while the average number of entities per cluster annotated by humans ranges from 2.3 to 3.3. It can be observed that human-annotated coreference covers more examples, but the annotated entities per example are comparatively sparser compared to ChatGPT. 4.2 Joint DTSA and Coreference Resolution Learning To integrate coreference information into the DTSA task, we employ multi-task learning to model both DTSA and coreference resolution tasks. This allows the model to identify coreference relations between entities and consider the sentiment information surrounding the coreferential entities when determining the sentiment of the current entity. Specifically, to jointly train the DTSA and coreference resolution tasks, we utilize a dual-path RoBERTa (Liu et al., 2019) to model both tasks simultaneously. As shown in Figure 2, where the two RobBERTa models receive the same input and share parameters. Backbone We use RoBERTa as the text encoder. Given a document d, we can obtain the tokenized tokens X = {x1, . . . , xn}. After feeding X into RoBERTa, we can get the contextualized representation HL = {h1, . . . , hn} after RoBERTa encoding: Hl = Transformerl(Hl−1), l ∈[1, 12], where L = 12 is the number of Transformer layers in RoBERTa-base. Next, we use HL to extract entities and identify the sentiment of the entities. 4.2.1 DTSA Modeling We decompose the DTSA task into three subtasks: entity extraction, entity sentiment classification, and entity hierarchical classification. Entity Extraction Considering that the review texts may contain nested entities (Yan et al., 2022; Wang et al., 2020), and traditional BIO tagging scheme cannot handle these cases, we use a spanbased method (Hu et al., 2019) to extract candidate entities. Specifically, we treat the identification of the start and end tokens of an entity as a binary classification task. We perform two binary classifications for each hm i in Hm L = {hm 1 , . . . , hm n }, determining whether the token is a candidate start token and whether it is a candidate end token, respectively: p(ys|hm i ) = Sigmoid(W m s hm i + bm s ), 12152 p(ye|hm i ) = Sigmoid(W m e hm i + bm e ), When p(ys|hm i ) > 0.5, it indicates that xi is a candidate start token. Similarly, when p(ye|hm i ) > 0.5, it indicates that xi is a candidate end token. After performing the binary classification for all tokens in the text, we obtain the candidate start token set Sstart = {. . . , sti, . . .} and the candidate end token set Send = {. . . , edi, . . .}, where the superscript m denotes the main task. 
After obtaining Sstart and Send, we take the Cartesian product of the two sets to obtain the candidate entity set with paired start and end tokens eAm = {. . . , eai, . . .}, and apply the following heuristic rules to obtain the final entity set Am = {. . . , ai, . . .}: 1) the index of the start token should be smaller than that of the end token; 2) the index distance is not greater than t; 3) the sum of the probabilities of the start and end tokens should exceed the threshold θ; 4) there are at most K candidate entities. We define the loss of entity extraction as: lossye = −1 B ( 1 n n X i=1 (ˆys i log(p(ys i ))+ ˆye i log(p(ye i )))), where B is the batch size, n is the number of tokens, ˆys and ˆye represent the gold labels for the start and end entity classification tasks, respectively. Entity Sentiment Classification After obtaining the entity set Am, we use the entity-context crossattention module to obtain the context-aware entity representation hm ai, and then feed hm ai into a softmax layer to obtain the sentiment towards ai: p(yai|hm ai) = Softmax(W m a hm ai + bm ai), where yai ∈{positive, negative, mixed}. In this way, we can obtain the set of entities and their corresponding sentiments AP m = {. . . , (ai, pi), . . .}. The loss for entity sentiment classification is: lossm ya = −1 B ( 1 |A| |A| X i=1 (ˆyai log(p(yai)))). Entity Hierarchical Classification Since the hierarchical entity ti is composed of multiple entities a, to obtain all hierarchical entities, we first pair each a in the entity set Am to obtain a set of candidate edges U, and then perform a three-class classification on each candidate edge uij=(ai,aj): p(yuij|ai, aj) = Sigmoid(Wu[hm ai : hm aj] + bu), where yuij ∈{0, 1, 2}, denoting no relation, hierarchical relation, and coreference relation between entities, respectively. Next, we connect the heads and tails of the candidate edges, remove cycles, and obtain the hierarchical entity set {. . . , ti, . . .}. The loss for entity hierarchical classification is: lossm yu = −1 B ( 1 |U| |U| X i=1 (ˆyui log(p(yui)))). 4.2.2 Coreference Resolution Modeling We employ another coreference-RoBERTa model to encode coreference information and incorporate it into the DTSA task. It shares model parameters with the DTSA-RoBERTa model. Considering that both coreference and hierarchical relations are partial order among entities, we use the same classifier to model these two types of relations. Similarly, we obtain a candidate entity set Ac = {. . . , ac i, . . .} and their representations {. . . , hc ai, . . .} for the coreference resolution task. Then, we pair these entities to obtain a set of candidate coreferent entity pairs C, and perform a three-class classification for each candidate entity pair cij = (ac i, ac j) to determine whether they refer to the same entity: p(ycij|ac i, ac j) = Sigmoid(Wu[hc ac i : hc ac j] + bu), where Wu is used to identify both hierarchical and coreference relations. After obtaining the information on whether each candidate entity pair refers to the same entity, we cluster these entity pairs to form the final coreferential clusters. The loss of coreference resolution is: lossc yc = −1 B ( 1 |C| |C| X i=1 (ˆyci log(p(yci)))). Token-Level Coreference Constraint To incorporate coreference information into the left-path entities while disregarding the noises brought by coreferential entities, we design the token-level coreference constraint (TC) module. 
By adding token-level constraints on whether the left-path entities and the right-path entities refer to the same entity, we align the representations of the left-path entities with those of their coreferent entities, thereby enhancing the left-path entities' ability to perceive the contexts of their coreferent entities. Specifically, we utilize the representations of the left-path entities $H_L^m$ and the representations of the right-path entities $H_L^c$ to obtain a token-level score matrix:
$$p(y_{ij}^{tc} \mid H_L^m, H_L^c) = \mathrm{Sigmoid}\big(W_{tc}((H_L^m)^{\top} H_L^c) + b_{tc}\big),$$
and the token-level coreference loss is:
$$\mathit{loss}_{y_{tc}}^{tc} = -\frac{1}{B}\left(\sum_{i=1}^{n}\sum_{j=1}^{n}\hat{y}_{tc_{ij}} \log p(y_{tc_{ij}})\right).$$

Model Training Our model is based on a multi-task learning framework and consists of three modules: the main DTSA task and two auxiliary tasks, coreference resolution and the TC task. The loss is defined as the weighted sum of the individual task losses:
$$\mathit{loss} = \mathit{loss}^m + \mathit{loss}^c + \alpha\,\mathit{loss}^{tc},$$
where $\mathit{loss}^m$, $\mathit{loss}^c$, and $\mathit{loss}^{tc}$ are the losses for the DTSA, coreference resolution, and TC tasks, respectively, and $\alpha$ is a hyperparameter.

5 Experiments
5.1 Datasets and Experimental Settings
Dataset and Metrics The DTSA dataset (Luo et al., 2022) encompasses reviews from four e-commerce domains: Books, Clothing, Hotels, and Restaurants. Each domain contains approximately 1000 annotated documents, with the average number of annotated sentences per document ranging from 3 to 8. Table 2 provides detailed statistics on the dataset division across these domains, distributed in a 7:1:2 ratio for training, validation, and testing, respectively. In evaluation, a target-sentiment pair is viewed as correct if and only if the entities of the target, the sentiment, and their combination are the same as those in the gold target-sentiment pair. We calculate Precision and Recall, and use the F1 score (Luo et al., 2022) as the evaluation metric for the DTSA task. In addition, we design a new cluster-based metric allowing a more accurate assessment of opinion target extraction and report its results in §5.6.

Dataset      Train   Dev   Test   Sentences/Doc
Book           690    99    197    5.97
Clothing       649    92    186    3.29
Restaurant     658    94    188    7.91
Hotel          720   103    206    4.19
Table 2: Statistics of the datasets. Sentences/Doc is the average number of sentences per document.

Parameter Setting We initialize the dual-path RoBERTa models with RoBERTa-base parameters. The maximum input length for RoBERTa is set to 512, with the maximum number of entities extracted from the entity set capped at 60. Additionally, the parameters constraining the entity length $t$ and the entity selection confidence $\theta$ are set to 8 and -0.1, respectively. The model parameters of both paths are optimized with the AdamW optimizer, using a learning rate of 3e-5. The number of training epochs and the dropout rate are set to 30 and 0.1, respectively. During training, model parameters are saved at the point of best performance on the validation set, and the results on the test set are averaged over five random seeds.

5.2 Baseline Systems
To verify the effectiveness of the coreference information, we adopt the following competitive models as baseline systems, including ChatGPT (https://openai.com/chatgpt), three generative models, and a non-generative model.
• GPT-3.5-Turbo: Experiments are conducted on GPT-3.5-Turbo (Ouyang et al., 2022) with OpenAI's API. Specifically, prompt templates are designed for the DTSA task, and the model's performance is evaluated under a five-shot setting. For the specific prompt design, please refer to Table 6.
• Seq2Seq: Luo et al. (2022) use a generative model to address the DTSA task in an end-to-end manner.
• BART/T5-Extraction: The extraction-based generative framework proposed by Zhang et al. (2021b) for sentence-level ABSA. In this framework, Song et al. (2023) use BART and T5 as backbones, separating hierarchical entities and sentiments with special symbols and sequentially outputting the final results.
• BART/T5-Paraphrase: The paraphrase-based generative framework proposed by Zhang et al. (2021a) for sentence-level ABSA. In this framework, Song et al. (2023) use BART and T5 as backbones, serializing the output sequence of the DTSA task into a natural language sentence.
• Seq2Struct∗: The encoder-only model proposed by Song et al. (2023). To fairly reflect the improvement brought by coreference resolution, we replace the backbone of this model with RoBERTa, remove the GNN module, and replace the entity extraction module with the span-based extraction method used in this paper, while preserving the main structure of Seq2Struct and the modeling approach for each subtask.

Methods                          Book    Clothing  Restaurant  Hotel   Average
GPT-3.5-Turbo                    16.75   20.10     14.32       20.00   17.79
Seq2Seq (Luo et al., 2022)       34.76   49.40     19.08       34.17   34.35
BART-Extraction                  33.83   55.42     33.05       58.90   45.30
BART-Paraphrase                  32.90   55.18     33.21       59.71   45.25
T5-Extraction                    32.66   52.49     32.85       57.92   43.98
T5-Paraphrase                    32.64   53.47     33.36       57.95   44.36
Seq2Struct∗ (Song et al., 2023)  35.55   57.00     38.06       54.24   46.21
Ours                             37.20   58.64     38.29       54.46   47.15
Table 3: Main results of the DTSA task for our approach and the baseline systems. The best results are in bold.

5.3 Main Results
The results for the DTSA task are reported in Table 3. There are three noteworthy observations.

Firstly, the average result of the baseline system Seq2Struct∗ reaches 46.21% after fine-tuning, whereas the result of GPT-3.5-Turbo under a five-shot setting is 17.79%, which is significantly lower than our method. We speculate that there are two main reasons: on the one hand, DTSA is a relatively new task with limited related data available, and large models may not have undergone instruction tuning on this type of data; on the other hand, as Zhang et al. (2023) point out, large models still have significant shortcomings in extracting fine-grained and structured sentiment information.

Secondly, the adapted Seq2Struct∗ model performs better than the other generative baseline models. Its average result across the four domains is 0.91 points higher than that of the best-performing BART-Extraction model. Additionally, in three out of the four domains, this model outperforms the best generative model by 2 to 5 points, highlighting the effectiveness and adaptability of the Seq2Struct structure. However, the best-performing generative model in the Hotel domain outperforms Seq2Struct∗ by 5.47 points. One possible reason is that the generative models have a relatively weaker ability to identify entity sentiments than Seq2Struct∗, and most entity sentiments in the Hotel domain are positive, which reduces the difficulty of entity sentiment identification and leads to better results than Seq2Struct∗.

Thirdly, our model with coreference resolution
has further improvements over Seq2Struct∗. The average results across the four domains improve by 0.94 points. Specifically, there are improvements of 1.65 points and 1.64 points in the Book and Clothing domains, respectively. The improvements in the Restaurant and Hotel domains are relatively smaller, at 0.23 points and 0.22 points, respectively. This may be because some entities among the right-path coreference entities are not mentioned in the left path; while introducing coreference information, these entities also affect the accuracy of left-path entity extraction to some extent, resulting in negative effects on the DTSA task. However, these negative effects are mitigated in the Book and Clothing domains by the two-path model structure and parameter sharing.

Domains      Seq2Struct∗            Ours
             Coref.   Non-Coref.    Coref.   Non-Coref.
Books        30.19    40.71         31.16    42.76
Clothing     51.22    66.00         54.03    65.64
Restaurant   42.30    64.43         43.66    63.84
Hotel        36.74    39.93         35.81    42.17
Average      40.11    52.77         41.16    53.60
Table 4: Results on the dataset with and without coreference annotation. "Coref." indicates labels with coreference annotations, while "Non-Coref." indicates labels without coreference annotations.

5.4 Evaluation on Test Sets with and without Coreference Annotation
To observe whether coreference information can improve the model's performance on the coreference test set, we divide the test set into two parts: a coreference-annotated set and a non-coreference-annotated set. We compare and evaluate the results of our model and Seq2Struct∗ on these two sets. As shown in Table 4, our model achieves an average improvement of 1.05 points and 0.83 points on the coreference test set and the non-coreference test set, respectively. Specifically, compared to Seq2Struct∗, our model experiences a decrease of 0.93 points on the coreference test set in the Hotel domain. One possible reason is the incorrect extraction of additional coreferential entities, which affects the model's overall performance on the coreference set. Our model also shows improvement on the non-coreference set, possibly because the examples with coreference annotations are more challenging, enhancing the model's ability to recognize coreference while improving its natural language understanding abilities.

Figure 3: Comparison results between Seq2Struct∗ with ChatGPT-annotated coreference and human-annotated coreference.

5.5 Results with Coreference Annotation Using ChatGPT
To alleviate the high labor and time costs of coreference annotation, we employ ChatGPT to annotate the coreferential entities. The average result of the model with ChatGPT-annotated coreference information across the four domains is 47.01%, which is 0.8 points higher than that of Seq2Struct∗. Specifically, as shown in Figure 3, similar to the model with human-annotated coreference information, the model performs better than Seq2Struct∗ in the Book and Clothing domains, with improvements of 1 point and 1.9 points, respectively. However, in the Restaurant and Hotel domains, the performance is similar, with fluctuations of 0.1 to 0.3 points. It can be observed that the introduction of entity coreference does not yield significant improvements in the Restaurant and Hotel domains, indicating the need to reduce the impact of other noise (such as additional entities) introduced during the coreference annotation process.
5.6 Evaluation with the Coreference-Aware Metric
The opinion targets in previous ABSA datasets were annotated as separate entities or aspect terms. Due to coreference, multiple entities may refer to the same opinion target. For example, in "There are so many different places to choose from in Boston but Paddy Os is by far the best! The bar is spacious with good music...", "bar" and "Paddy Os" refer to the same opinion target. Therefore, when the model correctly predicts either "bar" or "Paddy Os", the previous entity-based evaluation metric would consider them two different opinion targets, leading to inaccuracies in the assessment. Therefore, based on the original exact matching metric, we take the coreferential entities into account and propose a revised coreference cluster-based metric.

For the DTSA task, the original Precision and Recall are calculated as follows:
$$P = \frac{\#correct}{\#pred}, \qquad R = \frac{\#correct}{\#gold},$$
where $\#correct$ counts the predicted $(t_i, p_i)$ pairs that are the same as the gold $(\hat{t}_i, \hat{p}_i)$. Since each $t_i$ should be an entity of its coreferential cluster, the new metric calculates the similarity between the predicted entity's coreferential cluster and the gold entity's coreferential cluster for $\#correct$. Given $\hat{t}_i = \{\hat{a}_1, \ldots, \hat{a}_k\}$, we first require the number of entities in $t_i$ to equal $k$, and calculate the matching score $score(a_j)$ between the coreferential cluster $cluster(a_j)$ of each layer $a_j$ and the coreferential cluster $cluster(\hat{a}_j)$ of $\hat{a}_j$. We then take $score(t_i) = \prod_{j=1}^{k} score(a_j)$ and calculate the predicted matching score based on $score(t_i)$:
$$\#correct = \sum_{i=1}^{|N|}\left(\prod_{j=1}^{k} score(a_j)\right) \times \mathbb{I}(p_i = \hat{p}_i),$$
$$score(a_j) = \mathrm{F1}\big(cluster(a_j), cluster(\hat{a}_j)\big),$$
where $|N|$ is the number of predicted $(t_i, p_i)$ pairs in the test set with matching scores greater than 0, and $\mathrm{F1}(cluster(a_j), cluster(\hat{a}_j))$ represents the F1 score calculated by treating the coreferential cluster of $\hat{a}_j$ as the gold label.

Domains      Seq2Struct∗@Old Metric   Seq2Struct∗@New Metric   Ours@Old Metric   Ours@New Metric
Book         35.55                    37.15                    37.20             38.92
Clothing     57.00                    59.24                    58.64             60.19
Restaurant   38.06                    39.74                    38.29             40.44
Hotel        54.24                    56.06                    54.46             56.23
Average      46.21                    48.05                    47.15             48.94
Table 5: Comparison of the cluster-based metric and the entity-based metric on Seq2Struct∗ and our method.

To mitigate the bias caused by missing coreferential entities during evaluation, we assess the impact of the revised metric. As shown in Table 5, the new metric mitigates the effect of missing coreference targets for both Seq2Struct∗ and our method. Specifically, on our method the new metric yields an improvement ranging from 1.55 to 2.15 points across the different domains, with an average improvement of 1.79 points across all four domains. The improvement brought by the new metric mainly stems from the inclusion of entities in coreference clusters that were not accounted for in the gold labels. These entities are ignored by entity-based metrics but can lead to evaluation errors when they appear among predicted entities. By considering cluster-level matching, this issue is alleviated.

6 Conclusion
Most research on ABSA focuses on the sentence level, and DTSA remains an underexplored area. A significant difference between DTSA and sentence-level ABSA is that DTSA involves richer coreference information, necessitating models with stronger contextual understanding capabilities.
To explore the impact of coreference information on the DTSA task, we annotate coreferential entities with human and ChatGPT and design a multi-task learning framework to verify the positive role of coreference information in DTSA. Additionally, we revise the metrics from exact entity-level matching to a more lenient cluster-level matching to mitigate the bias caused by missing coreferential entities. Limitations This paper aims to verify the effectiveness of coreference information on document-level target sentiment analysis, although ChatGPT-annotated coreference information is more efficient and laborsaving compared to manual annotation, it still suffers from the problem of being influenced by prompts. In addition, the revised coreference metrics require manual annotation of coreference information on the test set, which to some extent limits the use of the new evaluation metrics. Acknowledgments The authors would like to thank the anonymous reviewers for their valuable comments. This work was supported by the Natural Science Foundation of China (No. 62076133 and 62006117). References Jeremy Barnes, Laura Oberlaender, Enrica Troiano, Andrey Kutuzov, Jan Buchmann, Rodrigo Agerri, Lilja Øvrelid, and Erik Velldal. 2022. Semeval 2022 task 10: structured sentiment analysis. In Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022), pages 1280–1295. Hongjie Cai, Rui Xia, and Jianfei Yu. 2021. Aspectcategory-opinion-sentiment quadruple extraction with implicit aspects and opinions. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP), pages 340–350. Jiahua Chen, Shuai Wang, Sahisnu Mazumder, and Bing Liu. 2020a. A knowledge-driven approach to classifying object and attribute coreferences in opinion mining. arXiv preprint arXiv:2010.05357. Xiao Chen, Changlong Sun, Jingjing Wang, Shoushan Li, Luo Si, Min Zhang, and Guodong Zhou. 2020b. Aspect sentiment classification with document-level sentiment preference modeling. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), pages 3667–3677. Xiaowen Ding and B. Liu. 2010. Resolving object and attribute coreference in opinion mining. In International Conference on Computational Linguistics. Xiaowen Ding, Bing Liu, and Philip S Yu. 2008. A holistic lexicon-based approach to opinion mining. In Proceedings of the 2008 international conference on web search and data mining, pages 231–240. 12157 Li Dong, Furu Wei, Chuanqi Tan, Duyu Tang, Ming Zhou, and Ke Xu. 2014. Adaptive recursive neural network for target-dependent twitter sentiment classification. In Proceedings of the 52nd annual meeting of the association for computational linguistics (volume 2: Short papers), pages 49–54. Zhibin Gou, Qingyan Guo, and Yujiu Yang. 2023. Mvp: Multi-view prompting improves aspect sentiment tuple prediction. arXiv preprint arXiv:2305.12627. Ruidan He, Wee Sun Lee, Hwee Tou Ng, and Daniel Dahlmeier. 2019. An interactive multi-task learning network for end-to-end aspect-based sentiment analysis. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL), pages 504–515. Minghao Hu, Yuxing Peng, Zhen Huang, Dongsheng Li, and Yiwei Lv. 2019. Open-domain targeted sentiment analysis via span-based extraction and classification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL), pages 537–546. 
Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 168–177. Long Jiang, Mo Yu, Ming Zhou, Xiaohua Liu, and Tiejun Zhao. 2011. Target-dependent twitter sentiment classification. In Proceedings of the 49th annual Meeting of the association for computational linguistics (ACL), pages 151–160. Qingnan Jiang, Lei Chen, Ruifeng Xu, Xiang Ao, and Min Yang. 2019. A challenge dataset and effective models for aspect-based sentiment analysis. In Proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing (EMNLP-IJCNLP), pages 6280–6285. Jan-Christoph Klie, Michael Bugert, Beto Boullosa, Richard Eckart de Castilho, and Iryna Gurevych. 2018. The inception platform: Machine-assisted and knowledge-oriented interactive annotation. In proceedings of the 27th international conference on computational linguistics: system demonstrations, pages 5–9. Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. arXiv preprint arXiv:1707.07045. Xin Li, Lidong Bing, Wai Lam, and Bei Shi. 2018. Transformation networks for target-oriented sentiment classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 946–956. Xin Li, Lidong Bing, Piji Li, and Wai Lam. 2019. A unified model for opinion target extraction and target sentiment prediction. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI), pages 6714–6721. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Huaishao Luo, Tianrui Li, Bing Liu, and Junbo Zhang. 2019. Doer: Dual cross-shared rnn for aspect term-polarity co-extraction. arXiv preprint arXiv:1906.01794. Yun Luo, Hongjie Cai, Linyi Yang, Yanxia Qin, Rui Xia, and Yue Zhang. 2022. Challenges for opendomain targeted sentiment analysis. arXiv preprint arXiv:2204.06893. Dehong Ma, Sujian Li, Xiaodong Zhang, and Houfeng Wang. 2017. Interactive attention networks for aspect-level sentiment classification. arXiv preprint arXiv:1709.00893. Dhruv Mullick, Bilal Ghanem, and Alona Fyshe. 2023. Better handling coreference resolution in aspect level sentiment classification by fine-tuning language models. arXiv preprint arXiv:2307.05646. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems (NeurIPS), pages 27730–27744. Maria Pontiki, Dimitrios Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Mohammad Al-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orphée De Clercq, et al. 2016. Semeval2016 task 5: Aspect based sentiment analysis. In International workshop on semantic evaluation, pages 19–30. Maria Pontiki, Dimitrios Galanis, Harris Papageorgiou, Suresh Manandhar, and Ion Androutsopoulos. 2015. Semeval-2015 task 12: Aspect based sentiment analysis. In Proceedings of the 9th international workshop on semantic evaluation (SemEval 2015), pages 486– 495. 
Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014. SemEval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 27–35, Dublin, Ireland. Association for Computational Linguistics. Egil Rønningstad, Erik Velldal, and Lilja Øvrelid. 2023. Entity-level sentiment analysis (elsa): An exploratory task survey. arXiv preprint arXiv:2304.14241. Nan Song, Hongjie Cai, Rui Xia, Jianfei Yu, Zhen Wu, and Xinyu Dai. 2023. A sequence-to-structure approach to document-level targeted sentiment analysis. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 7687–7698. 12158 Hao Tang, Donghong Ji, Chenliang Li, and Qiji Zhou. 2020. Dependency graph enhanced dual-transformer structure for aspect-based sentiment classification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), pages 6578–6588. Yucheng Wang, Bowen Yu, Yueyang Zhang, Tingwen Liu, Hongsong Zhu, and Limin Sun. 2020. Tplinker: Single-stage joint extraction of entities and relations through token pair linking. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1572–1582. Zhen Wu, Chengcan Ying, Fei Zhao, Zhifang Fan, Xinyu Dai, and Rui Xia. 2020. Grid tagging scheme for end-to-end fine-grained opinion extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 2576–2585. Hu Xu, Bing Liu, Lei Shu, and S Yu Philip. 2019. Bert post-training for review reading comprehension and aspect-based sentiment analysis. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), pages 2324–2335. Hang Yan, Yu Sun, Xiaonan Li, and Xipeng Qiu. 2022. An embarrassingly easy but strong baseline for nested named entity recognition. arXiv preprint arXiv:2208.04534. Wenxuan Zhang, Yang Deng, Xin Li, Yifei Yuan, Lidong Bing, and Wai Lam. 2021a. Aspect sentiment quad prediction as paraphrase generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9209–9219. Wenxuan Zhang, Yue Deng, Bing Liu, Sinno Jialin Pan, and Lidong Bing. 2023. Sentiment analysis in the era of large language models: A reality check. arXiv preprint arXiv:2305.15005. Wenxuan Zhang, Xin Li, Yang Deng, Lidong Bing, and Wai Lam. 2021b. Towards generative aspect-based sentiment analysis. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 504–510. 12159 Prompt for coreference cluster annotation You are an expert in Coreference Resolution of Information Extraction. Given the document, the opinion targets, and their sentiment polarities, you MUST extract ONLY the coreference clusters of the given opinion targets in JSON format. The items in the coreference clusters MUST be the coreferential items of the opinion target. Let’s think step by step. <Document>: Mark and Roman were amazing hosts. Check-in and checkout procedures were simple. The Studio is very close to Hardvard ave station ( green line ). I recommend this place. 
<Opinion targets and their sentiment polarities>: { “place”: “Positive”, “Check-in”: “Positive”, “Roman”: “Positive”, “Mark”: “Positive”, “Check-out”: “Positive” } <Coreference clusters of the opinion targets>: { “place”: [“place”, “Studio”], “Check-in”: [“Check-in”], “Roman”: [“Roman”], “Mark”: [“Mark”], “check-out”: [“check-out”] other examples Prompt for target-sentiment pair annotation You are an expert in Aspect-Based Sentiment Analysis. Given the document, you should ONLY extract the list of target-sentiment pairs. A target-sentiment pair is defined as ’target 1–...–target n##sentiment’, where ’target 1–...– target n’ is a multi-level opinion target with n denoting the number of its levels, and ’sentiment’ in {Positive, Negative, Mixed}. Let’s think step by step. <Document>: Lovely shoes, and the customer support was wonderful. <Target-sentiment pairs>: [shoes##Positive, shoes–customer support##Positive] other examples Table 6: Example prompts for coreference cluster and target-sentiment pairs annotation using ChatGPT. A Entity Coreference Annotation Coreference Annotation by Humans We utilize the Inception platform (Klie et al., 2018) to annotate coreferential entities. Specifically, for each entity, we first annotate the entities that are coreferential with it and connect these two entities with an undirected edge to represent the coreference relation. In the post-processing stage, we cluster the connected entities into coreferential clusters. These clusters are used for subsequent multi-task training and cluster-based metrics. Coreference Annotation by ChatGPT To reduce the manual labor and time required for coreference annotation, we employ ChatGPT 2 to annotate the coreferential clusters. First, we construct demonstration examples through five-shot prompting for each domain to extract coreferential entities in JSON format. We tried different task descriptions and output formats to design various prompts. After manually observing and comparing those an2https://openai.com/chatgpt notation results from ChatGPT, the final prompt format we selected is shown in Table 6. Each example consists of <Document> and <Opinion targets and their sentiment polarities>. ChatGPT must output <Coreference clusters of the opinion targets> based on the given instruction and the five-shot annotated examples. To eliminate the noise generated by ChatGPT during the entity extraction process, we remove the articles and adjective possessive pronouns of the entities and merge them as the final version of coreferential clusters. Additionally, we conduct a manual check of 20% of ChatGPT’s annotated data (100 documents in each of the five domains), and the assessment yields a coreference annotation accuracy of 92%. 12160
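For concreteness, the annotation loop described in this appendix could look roughly like the following sketch; `call_chat_model` is a placeholder for whichever chat API client is used, and the post-processing mirrors the article/pronoun stripping and cluster merging described above (this is an illustration, not the authors' code):

```python
import json
import re

ARTICLES = {"a", "an", "the", "my", "your", "his", "her", "its", "our", "their"}

def normalize_entity(entity: str) -> str:
    # Strip articles and adjective possessive pronouns, as in the post-processing step.
    tokens = [tok for tok in entity.lower().split() if tok not in ARTICLES]
    return " ".join(tokens)

def annotate_coreference(document, target_sentiments, few_shot_block, call_chat_model):
    """Ask a chat model for coreference clusters of the given opinion targets (sketch)."""
    prompt = (
        "You are an expert in Coreference Resolution of Information Extraction. "
        "Given the document, the opinion targets, and their sentiment polarities, "
        "extract ONLY the coreference clusters of the given opinion targets in JSON format.\n"
        f"{few_shot_block}\n"
        f"<Document>: {document}\n"
        f"<Opinion targets and their sentiment polarities>: {json.dumps(target_sentiments)}\n"
        "<Coreference clusters of the opinion targets>:"
    )
    raw = call_chat_model(prompt)                        # placeholder for the actual API call
    match = re.search(r"\{.*\}", raw, flags=re.DOTALL)   # keep only the JSON object
    clusters = json.loads(match.group(0)) if match else {}
    # Merge normalized mentions into the final coreferential clusters.
    return {target: sorted({normalize_entity(m) for m in mentions})
            for target, mentions in clusters.items()}
```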
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12161–12176 August 11-16, 2024 ©2024 Association for Computational Linguistics VisDiaHalBench: A Visual Dialogue Benchmark For Diagnosing Hallucination in Large Vision-Language Models Qingxing Cao1 Junhao Cheng1 Xiaodan Liang1,2,3 Liang Lin4* 1Shenzhen Campus of Sun Yat-sen University 2MBZUAI 3DarkMatter AI Research 4Sun Yat-sen University [email protected], [email protected] Abstract Despite the significant success of large visionlanguage models (LVLMs), some studies have revealed that LVLMs suffer from the hallucination problem, where the LVLMs’ response contains descriptions of non-existent objects. Although various benchmarks have been proposed to investigate this problem, they mostly focus on single-turn evaluation and overlook the hallucination raised by textual inputs. To investigate the hallucination problem of LVLMs when given long-term misleading textual history, we propose a novel visual dialogue hallucination evaluation benchmark VisDiaHalBench. The benchmark consists of samples with five-turn questions about an edited image and its original version. VisDiaHalBench differs from previous hallucination benchmarks in the following three points: 1) The questions and answers are unambiguously grounded by annotated scene graphs. 2) The images are uncommonly edited to inspect the visual model and common-object hallucination in LLMs. 3) The carefully designed dialogue refers a same object in different turns to assess the image consistency and influence of history for LVLMs. The detailed analysis of several state-of-the-art LVLMs across image consistency, visual understanding, history influence, and other dimensions reveals their substantial performance gap with single-turn VQA tasks. The benchmark is released in: https://github.com/ qingxingcao/VisDiaHalBench 1 Introduction Large language models (LLMs) (Chung et al., 2022; Touvron et al., 2023; Chiang et al., 2023; Achiam et al., 2023) have shown exceptional capabilities in semantic understanding, reasoning, and commonsense utilization. To expand their capabilities into the vision domain, recent research integrated vision-pretrained models with * Corresponding author. LLMs, resulting in Large Vision-Language Models (LVLMs) (Dai et al., 2023; Liu et al., 2023c; Zhu et al., 2023; Li et al., 2023a; Achiam et al., 2023). Through instruction finetuning on textimage paired data, LVLMs achieved great performance on a variety of multimodal tasks that require a vast range of skills. Despite the recent success, LVLMs also suffer from the hallucination problem, where the responses are nonfactual or contradictory to the given inputs. To study this problem, recent works (Yifan Li and Wen, 2023; Wang et al., 2023; Liu et al., 2023b,a) have proposed different benchmarks and methods to diagnose hallucination within LVLMs. However, while previous works have shown that LLM will over-commit to early mistakes and leading to snowballing hallucination (Zhang et al., 2023; Yao et al., 2023), these studies in multi-modal domains mostly focus on the visual side, neglecting to fully investigate the textual inputs, such as misleading dialogue history or prior responses. Given that LVLMs are built upon LLMs, there is a strong likelihood that LVLMs inherit the similar hallucination problem from LLMs. 
As shown in the second example of Figure 1, GPT-4V correctly answers the relation of the frisbee but subsequently hallucinates about a non-existent "white frisbee" given a similar follow-up question. To analyze hallucinations in LVLMs when faced with multi-turn visual and textual inputs, we introduce VisDiaHalBench, a novel visual dialogue benchmark grounded by image scene graphs. Each sample in VisDiaHalBench contains an edited image and its original version to evaluate LVLMs' performance with unseen images; a scene graph to ground questions and answers for unambiguous evaluation; and a five-turn coherent dialogue with last-turn references to investigate image consistency and the influence of history on LVLMs. For example, a correct answer about the edited brown frisbee will be an incorrect description of the original white frisbee, which may affect LVLMs when answering the last question.

Figure 1: Three examples of our proposed VisDiaHalBench and GPT-4V results. Each sample contains five questions, where the first four questions query the edited image and the last one queries the original image. Incorrect answers are marked in red, while the groundtruth answers are provided in parentheses.

To construct the benchmark, we leverage the normalized scene graphs and corresponding images from the GQA (Hudson and Manning, 2019) dataset. For each image, we first edit the GQA image as well as the scene graph by removing or changing an object. Then in each turn, we sample a question type and scene graph paths based on the last-turn object, attribute, or relation. With the grounded path and predetermined answers, we employ GPT-4 (Achiam et al., 2023) to generate a natural language question. We further benchmark several SOTA LVLMs and conduct comprehensive studies of their hallucination results. Our analysis reveals that current LVLMs still suffer from insufficient capabilities in vision understanding and in handling complex dialogue history.

Our contributions can be summarized as follows: 1) we introduce the first visual dialogue benchmark focusing on diagnosing hallucinations raised by visual and long-term textual inputs; 2) we conduct comprehensive studies on SOTA LVLMs to investigate their image consistency, history handling, and other relevant aspects. We hope our benchmark and analysis can shed light on further research addressing the hallucination problem in both the language and vision domains.

2 Related Works
2.1 Large Visual-Language Models
With the success of large language models (LLMs) (Chung et al., 2022; Touvron et al., 2023; Chiang et al., 2023; Achiam et al., 2023), some works (Alayrac et al., 2022; Li et al., 2023b) attempt to leverage the LLMs' powerful capability of semantic reasoning in vision domains, resulting in Large Visual-Language Models (LVLMs). These works bridge the vision tokens to LLMs by training only a projector. Most recent works (Dai et al., 2023; Liu et al., 2023c; Zhu et al., 2023; Li et al., 2023a; Liu et al., 2023b) employ instruction tuning, where the model can be trained on different multi-modal tasks in a unified manner, and achieve impressive results on various downstream tasks.

2.2 LVLMs Hallucination and Evaluation
Despite the success of LVLMs, recent works suggest that they suffer from the hallucination problem (Liu et al., 2023a; Zhou et al., 2023), where the response contains inaccurate descriptions of a given image.
Recent works (Yifan Li and Wen, 2023; Wang et al., 2023; Liu et al., 2023b) have proposed different methods and benchmarks to inspect this phenomenon in depth. POPE (Yifan Li and Wen, 2023), HaELM (Wang et al., 2023), and GAVIE (Liu et al., 2023b) prompt an LLM in different ways to score LVLMs' responses. Liu et al. (2023a) and Cui et al. (2023) manually edit visual inputs and label question-answering pairs to assess LVLMs. Different from these works, we propose a visual dialogue benchmark to thoroughly investigate hallucination in LVLMs from both the vision and long-term text aspects.

2.3 Visual Dialogue
Given an image, a dialogue history, and a question about the image, visual dialogue (VD) (Das et al., 2017) requires an agent to generate an accurate response based on the image and informed by the dialogue history. Recent advances in pretrained visual-language models (Su et al., 2020; Li et al., 2022a) have also inspired related models in VD, such as VisDial-BERT (Murahari et al., 2020), VD-BERT (Wang et al., 2020), ICMU (Chen et al., 2022b), UTC (Chen et al., 2022a), BLIP (Li et al., 2022a), and AlignVD (Chen et al., 2022c). Unlike single-turn Visual Question Answering, the challenge of VD lies in the need to address referential and ambiguity issues present in the dialogue history (Chen et al., 2020). Previous studies have explored how to increase dialogue models' robustness. Yu and Rieser (2023) study the adversarial robustness of visual dialogue models by attacking vulnerable words within questions or history. Kang et al. (2023) propose a Generative Self-Training (GST) method that can automatically generate dialogue samples for data augmentation and model training. Our work also aims to evaluate model robustness with respect to dialogue history, but it employs the history generated by the LVLMs themselves without modification, studying a more general phenomenon that severely hinders the further application of LVLMs.

Figure 2: The construction pipeline for our proposed VisDiaHalBench. We first edit the image and corresponding scene graph, then sample a question type and scene graph path based on previous questions, and lastly prompt GPT-4 to generate natural language questions.

3 VisDiaHalBench
We introduce VisDiaHalBench, a new visual dialogue benchmark to analyze hallucination in LVLMs under different visual and textual inputs. Based on the normalized scene graphs and images in the GQA dataset, each sample contains an edited image, a five-turn dialogue, and grounding scene graph paths. The questions are associated with the edited object in different aspects, including queries about the uncommon edited results, pronoun-based references, and misleading history. To construct the benchmark, we first edit the image by randomly changing an object's attribute, swapping it with another object, or removing it from the image. Then at each turn, we sample a question type and scene graph path based on the edited object or previous questions. Lastly, we prompt GPT-4 (Achiam et al., 2023) to generate a natural language question strictly grounded to the supporting paths and ground-truth answer. The overall generation pipeline is shown in Figure 2. To ensure the unambiguity of generated questions and answers, we rely on sampling scene graph paths that can uniquely identify an object. Also, by designating the start or end object of a path, we can generate coherent multi-turn questions concerning the last-turn context.
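At a high level, constructing one benchmark sample follows the loop sketched below. This is a simplified illustration of the pipeline in Figure 2: `editor`, `sampler`, and `question_writer` are hypothetical stand-ins for the image/scene-graph editing, question-type and path sampling, and GPT-4 question generation components described in the rest of this section, not the authors' actual interfaces.

```python
def build_dialogue_sample(image, scene_graph, editor, sampler, question_writer, num_turns=5):
    """High-level sketch of constructing one VisDiaHalBench sample (Figure 2 pipeline)."""
    edited_image, edited_graph, edited_object = editor.edit(image, scene_graph)

    dialogue, last_reference = [], edited_object
    for turn in range(1, num_turns + 1):
        # Turns 1-4 query the edited image; the last turn re-asks the first question
        # against the original, unedited image (see Section 3.4).
        if turn == num_turns:
            question, path = dialogue[0]["question"], dialogue[0]["path"]
            shown_image = image
            answer = sampler.answer(scene_graph, path)   # answer w.r.t. the original graph
        else:
            q_type, path, answer, last_reference = sampler.sample(turn, edited_graph, last_reference)
            question = question_writer.write(q_type, path, answer)   # GPT-4 prompting step
            shown_image = edited_image
        dialogue.append({"turn": turn, "image": shown_image, "question": question,
                         "answer": answer, "path": path})
    return dialogue
```

The per-turn constraints detailed in Sections 3.1-3.5 (question-type subsets, unique grounding paths, unanswerable turns 3 and 4) are all folded into the `sampler` placeholder here.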
To better illustrate the construction process, we first present our path sampling method and question type details, then describe each construction step. 3.1 Scene Graph Path Sampling We generate unambiguous questions and answers grounded by paths in the scene graph. A scene graph path is a sequence of connected object nodes and relations. For example, a 0-hop path is just an object like “frisbee”, a 1-hop path could be “(frisbee, left, man)”, and a 2-hop path might extend to “(frisbee, left, man), (man, wearing, shorts)”. To sample a n-hop path, we begin at an object node, randomly select one of its relations, and traverse to the corresponding node n times. Then we verify whether the path is unique in the 12163 Question Type Path Example Question Example Input Reference Output Reference exist: Query the existence of a object [(frisbee,left,<obj>), (<obj>,<name>,man)] Is there a man that a frisbee on its left? Object Object (if answer is “Yes”) query object: Query a object name [(man,wearing,<obj>), (<obj>,<name>,shorts)] What is the man wearing? Object Object query attribute: Query a object’s attribute <input>: frisbee, [(frisbee,<attribute>,yellow)] What color is it? Object Object, Attribute query relation: Query relation of two objects <input>: frisbee, [frisbee], [air] What is the relationship between it and the air? Object Object, Relation verify attribute: Query if object has a attribute [(frisbee,<attribute>,yellow)] Query: white Is the frisbee white? Object Object, Attribute verify relation: Query if two objects have a relation [(frisbee,<attribute>,white)], [man] Query: left Does the white frisbee to the left of the man Object Object, Relation verify target attribute: Query if object has a mentioned attribute <input>: white, [frisbee] Does the frisbee has the same color? Attribute Object verify target relation: Query if two objects have a mentioned relation : <input>: left, [cone], [people] Does the cone and the people have the same relation? Relation Object Table 1: The details of all question types in VisDiaHalBench. Path and question examples give a possible scene graph path and corresponding question. Input and output reference represents the object or answer that can be used in this turn or passed to the next turn. scene graph. We obtain all objects with the same name and check whether these objects have the same relation recursively. For example, we verify whether “(frisbee, left, <obj1>), (<obj1>, wearing, <obj2>)” is unique in the scene graph, such that the shorts can be unambiguously referred to as “the thing wearing by one a frisbee left of”. In our benchmark, we sample 1-hop and 2-hop when questions target object names without giving away the object name. For other questions, we sample 0-hop and 1-hop paths. Coherent path sampling To construct coherent dialogue, we sample paths and question types that can accept the object or answer mentioned in the last turn, as shown in Table 1. If the last turn refers to an object, we sample a path starting from it and use pronouns to refer to this object. If the last turn has an attribute or relation, we first sample the answer “Yes/No”, then sample a path that ends with an object that has or does not have this attribute/relation according to the answer. Sampling Retry During the random sampling process, if a sample path is not unique, or the sampled query object has no attribute or relation to the query, we re-sample the whole dialogue until reach a predefined attempt limit. 
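A minimal sketch of the n-hop path sampling and uniqueness check described in this subsection is given below; the `scene_graph` interface (`outgoing`, `objects_named`, `name_of`) and the retry limit are assumptions made for illustration, not the authors' data structures:

```python
import random

def sample_path(scene_graph, start_obj, hops):
    """Randomly walk `hops` relations from `start_obj` (sketch; graph interface assumed)."""
    path, current = [], start_obj
    for _ in range(hops):
        edges = scene_graph.outgoing(current)           # e.g. [(relation, neighbour), ...]
        if not edges:
            return None                                 # dead end: caller re-samples
        relation, neighbour = random.choice(edges)
        path.append((current, relation, neighbour))
        current = neighbour
    return path

def follows_same_chain(scene_graph, obj, path):
    """True if `obj` can realise the same (relation, neighbour-name) chain as `path`."""
    candidates = {obj}
    for _, relation, neighbour in path:
        next_candidates = set()
        for cand in candidates:
            for rel, nbr in scene_graph.outgoing(cand):
                if rel == relation and scene_graph.name_of(nbr) == scene_graph.name_of(neighbour):
                    next_candidates.add(nbr)
        if not next_candidates:
            return False
        candidates = next_candidates
    return True

def path_is_unique(scene_graph, path):
    """Check that no other object with the same name can follow the same relation chain."""
    start = path[0][0]
    for candidate in scene_graph.objects_named(scene_graph.name_of(start)):
        if candidate != start and follows_same_chain(scene_graph, candidate, path):
            return False
    return True

def sample_unique_path(scene_graph, start_obj, hops, max_attempts=20):
    """Retry sampling until a unique path is found or the attempt limit is reached."""
    for _ in range(max_attempts):
        path = sample_path(scene_graph, start_obj, hops)
        if path and path_is_unique(scene_graph, path):
            return path
    return None                                         # caller discards this sample
```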
If the limit is reached, we discard this sample and continue to sample the next one. 3.2 Question Types The questions in VisDiaHalBenchhave following eight types: “exist”, “query object”, “query attribute”, “query relation”, “verify attribute”, “verify relation”, “verify target relation” , “verify target attribute”. These question types represent queries about the existence of an object, its name, attribute, or relations to another object. Or verify whether they possess certain attribute or relation. In each turn, we first sample a question type from its subset according to the turn number and last turn question. The details are shown in Table 1. For the question that verifies whether the object has an attribute or relation, we need to sample an unrelated query target if the answer is “no”. We select the name/attribute/relation that is similar to a correct one. Specifically, we first extract all object names, attributes, and relations in the dataset and form three dictionaries, then embed these words or phrases with GPT2-large (Lagler et al., 2013). Given a correct name/attribute/relation, we select its similar words by computing its embedding distance with other names/attributes/relations respectively. We randomly select one from the top 10 12164 to obtain a similar name/attribute/relation and exclude the top 5 to avoid retrieving synonym words. The retrieved words are used for questioning. To balance the number of each question type, we set the probability of selecting different types according to their generated number. The probability for type T is 1 −number of questions type T number of total question . Thus types with more samples will have a lower probability of being selected. 3.3 Image Editing Since most of the current LVLMs achieve great performance on the GQA dataset, we edit the GQA image to explore whether the hallucination comes from the biases in training data. To obtain a visually reasonable image, we only modify a certain object in one of three different ways: remove it from the image, swap it with another object, and change its color. We randomly choose the edition type and target so that the image can be counterfactual for usual scenarios. First, we use the segmentation model SAM (Kirillov et al., 2023) to obtain objects’ mask according to their scene graph bounding boxes. Subsequently, we employ various strategies for image editing. Remove object To remove an object, we first select the object that can be uniquely determined by 0-2 hop path in the original scene graph, so that the following questions and answers are not ambiguous. For example, the answer for "Is there an umbrella in the image" remains the same "Yes" if there exists multiple umbrellas but only one of them is removed. Given the identifiable object, we retrieve its bounding boxes from the annotated scene graph and instruct the pretrained image editing model IPadapter (Ye et al., 2023) to remove it from the image. Specifically, we utilize the inpainting mode in the IP-Adapter to remove the corresponding objects. By setting the prompt to "empty" and the negative prompt to "nobj" which represents the name of the object to be removed, we can obtain images without the specified objects. Lastly, we modify the scene graph correspondingly by removing this object and all relations to other objects. Change object We change the whole selected object by replacing it with another object in the GQA dataset such that the object is recognizable but unseen in the image. 
Specifically, we extract all the object names in the dataset and randomly select a similar one based on GPT-2 embedding, the same as the method described in Section 3.2. With the candidate object name, we randomly select an object that has this name from all the images in GQA dataset as the reference object. We change the original object to the reference object through Anydoor (Chen et al., 2023) method. We input the two masks and corresponding images to the Anydoor and copy the reference object to the original object mask. Lastly, we correspondingly modify the scene graph by setting the object name and the attributes to the reference image but keeping the relations unchanged. Modify attribute Since the GQA annotation has various attribute types, including actions that are only applicable to some objects, adding an attribute does not necessarily replace an existing one. Thus, to obtain a reasonable attribute and avoid ambiguity, we only change the object color. Similar to the object name, we first obtain all attributes in the GQA dataset, then filter the color using the matplotlib (Hunter and Dale, 2007) colors package and form a color set. Then we select the object that has a color attribute and replace its color attribute with another different one randomly selected from the color set. Given the original color, reference color, and the object mask obtained by SAM, we instruct the Blended latent diffusion (Avrahami et al., 2023) model with prompt "cnew + nobj" to perform the editing. Where cnew represents the new color selected and nobj represents the object’s name. Finally, we change the object in the scene graph by removing its original color and adding the reference color to its attribute set. 3.4 Dialogue Sampling After image editing, we generate dialogue questions to investigate different aspects of LVLM hallucination. Each dialogue contains five turn questions that query the visual information related to the edited object. At each turn, we sample a question type then sample scene graph paths and answer. We constrain sampling strategy in different ways to inspect different aspects of LVLMs. The details of each turn is shown in the following. Turn 1 : The first question directly queries the edited part of the object to inspect the vision model of LVLMs. For “remove object”, we query whether the object “exist” by sampling a 0 or 1hop path ended with the edited object from the original scene graph. For “change object”, we use 12165 querry object 5% modify object 1% Figure 3: The data distribution of VisDiaHalBench, sorted by (a)question type (b)answer type (c)edit type query query query verify verify verify verify exist object attribute relation attribute relation target attribute target relation Total remove 2054 168 485 612 240 623 268 685 5135 modify attribute 0 390 5889 2809 5056 1800 2186 1630 19760 modify object 0 44 9 17 19 15 1 0 105 Total 2054 602 6383 3438 5315 2438 2455 2315 25000 Table 2: Number of samples for different question types and edit types in VisDiaHalBench. “query object” to query the name of the changed object, and sample 1 or 2-hop path from the edited scene graph to avoid directly refer the object with its name. For “modify attribute” edition, we randomly choose “query attribute” and “verify attribute” and sample 0 or 1-hop scene graph path. If the question type is “verify attribute” and the randomly selected answer is “no”, we used the original attribute to generate the question. 
For example, if we change a frisbee from white to brown, we will have questions: “Is the frisbee white?”. Turn 2 : This question queries the other part of the image related to the turn 1 question to study the dialogue ability of LVLMs. We refer to the last turn object or answer with a pronoun and sample scene graph path with method in Section 3.1 Turn 3 : We mention the edited object with its original name or attribute to generate misleading questions that can not be answered. This turn is designed to explore whether the LVLMs hallucinate the answer or consistent with turn 1. For example, given a frisbee changed from white to brown, “What is the white frisbee related to the air?” has no answer since no white frisbee in the image. LVLMs should not answer if it is consistent with Turn 1 where it knows the frisbee is brown. Turn 4 : Similar to turn 2, the turn 4 sample path is based on the last turn question. However, since the last turn object and answer are not valid, this question is also unanswerable. We employ this question to further study the robustness of LVLMs in handling corrupted history. Turn 5 : We employ the question identical to turn 1 but provide the unedited image. Current LVLMs should be able to easily answer this GQAlike question. We study the performance gap between answering it directly or in dialogue to evaluate how much the dialogue history affects LVLMs. 3.5 Question Generation Given the object path, related attributes, and relations in each turn, we can obtain the unambiguous answer grounding by supporting paths. We obtain the natural language question by prompting GPT4 to generate it that strictly grounding to the supporting triplets and answer. The specific prompt can be found in the appendix B. We resample the question if it contains words of last turn reference, or not contains the grounded attributes and name. 3.6 Dataset Statistic Following the aforementioned editing method, we edited 5000 images from GQA and subsequently generated 5000 diverse visual dialogues, each dialogue consists of 5 turns of conversation, totaling 25,000 visual question-answers. As shown in figure 3, our dataset includes 8 types of question types, 4 types of answer types, and 3 types of edit types. Table 2 enumerates the specific quantities for different question types and edit types. 12166 Model Turn 1 Turn 2 Turn 3 Turn 4 Turn 5 Average EM F1 EM F1 EM F1 EM F1 EM F1 EM F1 w/ image: BLIP2 19.24 25.21 0.04 17.42 0.0 35.47 0.0 32.51 0.0 18.26 3.85 25.77 Cheetah 19.96 27.55 19.56 30.14 0.25 25.01 0.22 23.27 18.98 25.89 11.80 35.73 MiniGPT4 5.31 13.22 26.36 36.89 0.07 28.72 0.36 27.59 71.74 79.02 20.77 37.08 LRV-instruct 17.78 26.86 24.36 40.18 0.07 33.63 0 35.48 22.50 34.23 12.94 33.66 HalluDoctor 39.95 49.22 34.18 44.57 0.0 27.02 0.23 25.86 35.79 43.21 22.03 37.98 LLaVA 26.32 32.52 38.72 54.58 0.07 25.91 0.25 17.97 54.29 57.67 23.93 37.99 GPT-4V 25.51 38.23 33.50 47.81 38.72 55.61 17.83 35.93 46.11 55.45 33.01 48.26 Q-Probing 21.49 31.83 27.40 37.66 3.00 31.10 1.65 28.25 61.57 70.06 23.02 39.78 w/ scene graph: LLaVA 35.72 41.22 53.74 61.78 10.06 35.36 1.57 23.02 88.26 90.42 37.87 50.36 GPT-4V 82.22 85.34 63.97 70.75 32.25 65.34 20.09 43.58 99.07 99.39 59.12 72.88 Human 89.21 93.75 88.24 94.55 100 100 100 100 99 99 95.29 97.46 Table 3: Evaluation results of different LVLMs on VisDiaHalBench. 4 Experiments 4.1 Methods Q-Probing Previous research has shown that LLM may hallucinate due to early mistakes (Zhang et al., 2023). 
To alleviate this problem, we propose a simple baseline approach “Q-Probing” that instructs LVLMs to probing the relevant question to reduce the hallucination. Specifically, the Q-Probing approach involves instructing the LVLM to generate several closely related questions based on the image and current question. These generated questions act as reference points. During the question-answering phase, we prompt the LVLM with the instruction "Please consider these questions before answering", encouraging it to pay attention to additional crucial information and logical fallacies when formulating responses, thereby mitigating the phenomenon of hallucination. In the experimental section, we built our model based on MiniGPT-4 (Zhu et al., 2023) and demonstrated its effectiveness. Further details about the prompts can be found in appendix B. SOTA methods We conduct an evaluation of several state-of-the-art LVLMs: BLIP2 (Li et al., 2023b), LRV-instruction (Liu et al., 2023b), HalluDoctor (Liu et al., 2024), MiniGPT4 (Zhu et al., 2023), Cheetah (Li et al., 2023a), LLaVA (Liu et al., 2023c), and GPT-4V on our benchmark. The implementation details are in appendix C. 4.2 Evaluation Metrics We conduct a comprehensive evaluation of the baseline models, focusing on their performance in evidence retrieval and answer extraction. Following previous multimodal conversational dataset (Li et al., 2022b), we reported macroaverage F1 in the word level and Exact Match (EM) to estimate the performance of answer extraction after extracting the model’s output results through keyword filtering. 4.3 Main Results Table 3 presents the performance of each model on VisDiaHalBench. All models exhibit subpar performance on average, the best-performing GPT4 achieves 33.01 EM and 48.26 F1, far worse than its performance on GQA. This discrepancy indicates that our benchmark poses a new challenge for current LVLMs. All open-sourced models except Cheetah perform better in Turn 2 and Turn 5 compared to Turn 1. Since turn 2 and 5 query the unedited object, this result indicates that these LVLMs struggle with correctly perceiving the edited image. Furthermore, all models exhibit their worst performance in Turn 4, indicating that referring to a non-existent object, which misleads models in both the visual and textual domains, is the most challenging task for current LVLMs. HalluDoctor and LRV-instruct are both based on MiniGPT4 and they greatly improve over baseline MiniGPT4 in perceiving edited objects, showing their effectiveness in handling object hallucination. However, their performance drop significantly in Turn 3-5. Especially in Turn 5, they perform much worse than the baseline MiniGPT4, showing the two single-turn hallucination mitigation may hinder LVLMs ability to handle multiturn dialogues effectively. The proposed "Q-Probing" outperforms both MiniGPT4 and LRV-instruct in average perfor12167 Turn 1 Turn 2 Turn 3 Turn 4 Turn 5 Average LLaVA(Order 12345) 26.32 38.72 0.07 0.25 54.29 23.93 LLaVA(Order 34125) 0 0.15 36.69 30.55 49.16 23.31 GPT-4V(Order 12345) 25.51 33.50 38.72 17.83 46.11 33.01 GPT-4V(Order 34125) 44.16 27.44 27.29 21.14 47.79 33.56 Table 4: The exact match of LVLMs in different dialogue orders. “Order 34125” represent the performance when first answer the unanswerable turn 3 and 4 questions without misleading turn 1 and turn 2 answers. 
Model Turn 1 w/ misleading history w/ history w/o history Non-existence object Original object Non-existence object Original object Original object EM EM F1 EM F1 EM F1 EM F1 EM F1 Cheetah 19.96 0.18 20.22 18.58 23.42 0.27 27.31 19.08 27.49 53.67 65.44 LLaVA 26.32 0.14 30.32 32.60 45.17 0.05 13.44 62.04 71.29 73.02 82.72 GPT-4V 25.51 39.76 59.23 38.35 50.29 38.35 49.23 53.04 68.72 77.16 88.01 Table 5: The performance of LVLMs under different conditions. “w/ misleading history” means LVLMs correctly answer the Turn 1 question and “w/ history” means otherwise. Turn 3 and Turn 5 questions query a “Non-existence object” and the unedited “original object”. “w/o history” shows the performance of LVLMs answering the Turn 5 question outside the dialogue. mance. Compared with LRV-instruct, Q-Probing achieves large improvement in turn 1 and turn 5, whose questions query about the edited object and original object with misleading history. The results show that relevant questions can allow LVLMs to be more aware of additional logical and visual information and mitigate hallucinations. 4.4 More Analytical Study Hallucination with misleading questions Turn 3 and 4 are designed to present LVLMs with misleading prompts that pose ambiguous questions. The objective was to assess whether LVLMs would recognize the flawed nature of these queries and respond appropriately with ’Unanswerable’. According to the results in table 3, there is a significant performance drop for all models in turns 3 and 4, suggesting that misleading prompts can easily lead to the generation of hallucinations, emphasizing the challenges associated with handling ambiguous queries accurately. Hallucination without misleading history We evaluate the performance of LLaVA and GPT-4V with different orders in dialogue. Specifically, Since turn 2 and turn 4 questions refer to previous objects or answers, changing their order will lead to incorrect GT answers. Thus we change the order from the original 1 2 3 4 5 to 3 4 1 2 5, where the LVLMs firstly answer the original turn 3 and turn 4 questions. In this way, the LVLMs will not be affected by misleading history when answering the hardest unanswerable questions. The exact match (EM) results are shown in Table 4. It is shown that the GPT-4V can identify more answerable questions without misleading history, improving EM from 38.72 to 44.16. The unanswerable response also helps GPT-4V perceive edited objects, improving EM from 25.51 to 27.29. Visual understanding To assess the effectiveness of the visual encoder, we conduct additional evaluations by providing LVLMs with scene graphs instead of images. The results, presented in the lower part of Table 3, reveal that LVLMs achieve superior performance compared to the image-based outputs. This indicates the bottleneck of current LVLMs may lie in the visual encoder. Image consistency To inspect whether the LLM hallucinates regardless of the visual information, we evaluate whether a LVLM can generate consistent output about a same object. As Turn 1, 3, and 5 query the same object based on edited and original image, we evaluate the performance of LVLMs in Turn 3 and Turn 5, given that Turn 1 was answered correctly. As shown in Table 5 “w/ misleading history”, if the LVLM correctly answers Turn 1, LVLM will has misleading history and tends to obtain better performance in turn 3(Nonexistence object) but shows a significant drop on turn 5(original object), which contains a similar question but a different image. 
Influence of history To investigate whether the previous dialogue affects the efficacy of the current response of LVLMs, we evaluate the GQA-like question in Turn 5 given the premise that Turn 1 was answered correctly or incorrectly. As described above, if LVLMs answer correctly in Turn 1, the outcomes in Turn 5, which uses a similar image, will be affected. Additionally, we evaluate the performance of LVLMs when directly asking the Turn 5 questions outside of the dialogue context. The results, presented in Table 5, indicate that LVLMs perform well when directly asked Turn 5 questions without the influence of prior dialogue. The performance gap observed in these evaluations indicates the existence of hallucination induced by the dialogue history. More study of LVLMs' performance on different edit types is in Appendix A.

Metric | Accuracy | Agreement Ratio
GT answer | 98% | 95.58%
Table 6: Accuracy of the ground-truth answers evaluated by humans.

Criterion | Average Rating (0-2) | Agreement Ratio
Edit Correctness | 1.8 | 86%
Image Quality | 1.7 | 82%
Table 7: Human evaluation of the edited images.

Model | F1 correlation (Pearson / Spearman / Kendall) | EM correlation (Pearson / Spearman / Kendall) | Human Accuracy | Agreement
LLaVA | 0.6723 / 0.5532 / 0.5031 | 0.7896 / 0.7896 / 0.7896 | 26% | 92.4%
GPT4-V | 0.6611 / 0.5123 / 0.4912 | 0.7527 / 0.7526 / 0.7527 | 44.8% | 88%
Table 8: Human evaluation results of LVLMs and the correlation coefficients with EM and F1.

4.5 Human studies on VisDiaHalBench
Generated images To validate our generated images, we randomly select 50 images and ask two annotators to rate each edited image from 0 to 2, based on whether the edit follows the instruction and whether the image looks natural. The results are shown in Table 7. The agreement ratio represents the extent to which the ratings provided by the two annotators match. As shown in the table, the diffusion models can correctly edit the objects in most cases, given that the annotators give an average rating of 1.8 with 86% rating agreement.

Correctness of GT answers We ask two human annotators to answer the questions for 50 sampled dialogues with a total of 250 questions. We present the annotators with possible color candidates to avoid answers given as color synonyms. The results are shown in the last line of Table 3. Human-labeled answers achieve 95.29% accuracy compared to the ground-truth answers. When querying object names, some human answers are synonyms of the ground truth, leading to lower exact match (EM) but higher F1 scores. After labeling some samples, the human annotators became aware that Turns 3 and 4 consistently yield the fixed answer "unanswerable", resulting in a 100% EM. In conclusion, the human annotators achieve F1 scores greater than 90 in all turns, indicating that the samples generated from the scene graph can be used to evaluate LVLMs. We also evaluate whether the ground-truth answers agree with human judgment. The results are shown in Table 6. "Accuracy" represents the correctness of the ground-truth answer as evaluated by humans. The agreement ratio indicates that the two annotators assign the same label for 95.58% of the answers. The results align with the previous table: the ground truth and the human annotators reach agreement on approximately 98% of the answers. These two tables show the correctness of the generated questions and answers.
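The accuracy and agreement-ratio figures in Tables 6 and 7 can be reproduced from raw annotation records along the following lines. This is a sketch under the assumption that each item carries one label from each of two annotators; the exact definition of "accuracy" (here, items both annotators judged correct) is an assumption rather than the paper's stated protocol.

```python
def annotation_stats(labels_a, labels_b, is_correct=None):
    """Agreement ratio between two annotators, plus an accuracy or mean rating.

    `labels_a` / `labels_b`: per-item labels from the two annotators.
    `is_correct`: optional predicate mapping a label to correctness (e.g. label == 1).
    """
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    agreement = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    stats = {"agreement_ratio": agreement}
    if is_correct is not None:
        # One possible definition of accuracy: items both annotators judged correct.
        stats["accuracy"] = sum(is_correct(a) and is_correct(b)
                                for a, b in zip(labels_a, labels_b)) / n
    else:
        # For graded judgments (e.g. 0-2 image ratings), report the mean rating instead.
        stats["mean_rating"] = (sum(labels_a) + sum(labels_b)) / (2 * n)
    return stats

if __name__ == "__main__":
    # Two annotators judging whether each ground-truth answer is correct (1) or not (0).
    print(annotation_stats([1, 1, 0, 1], [1, 1, 1, 1], is_correct=lambda x: x == 1))
    # Two annotators rating edited images on a 0-2 scale.
    print(annotation_stats([2, 1, 2, 2], [2, 2, 2, 1]))
```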
Evaluation metrics To study the correctness of the EM/F1 evaluation metrics, we conduct a human evaluation of the LLaVA and GPT4-V outputs. Specifically, we ask two annotators to score each answer in 50 dialogues with 0/1. The correlation coefficients between the human and automatic evaluation metrics are shown in Table 8. The agreement ratio represents the extent to which the answers provided by the two annotators match. The table demonstrates a strong correlation between the automatic evaluation metrics and human evaluation.

5 Conclusion
In this work, we propose VisDiaHalBench, a novel visual dialogue benchmark specifically designed to investigate the hallucination phenomenon in Large Vision-Language Models (LVLMs). Our comprehensive analysis shows the weaknesses of current LVLMs in terms of image consistency and dialogue history, highlighting the challenge of hallucination in vision-language understanding.

6 Limitations
Although we carefully designed the construction process and prompts to generate the natural language questions, some question texts are not fluent. Besides, although each question-answer pair is grounded in an annotated scene graph, an open-ended question can still have multiple answers with similar meanings. To better evaluate answering accuracy objectively, the benchmark could include Yes/No questions only. A potential issue is that the edited images might be misused for unexpected purposes.

Acknowledgements
This work was supported in part by National Science and Technology Major Project (No.2021ZD0111601), National Natural Science Foundation of China (No.62325605), National Key R&D Program of China under Grant No. 2020AAA0109700, China Postdoctoral Science Foundation under Grant Number 2023M744001, Guangdong Outstanding Youth Fund (Grant No. 2021B1515020061), Mobility Grant Award under Grant No. M-0461, Shenzhen Science and Technology Program (Grant No. GJHZ20220913142600001), Nansha Key R&D Program under Grant No.2022ZD014, and CAAI-Huawei MindSpore Open Fund. We thank MindSpore, a new deep learning computing framework (https://www.mindspore.cn/), for the partial support of this work.

References
Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.
Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhitao Gong, Sina Samangooei, Marianne Monteiro, Jacob L. Menick, Sebastian Borgeaud, Andy Brock, Aida Nematzadeh, Sahand Sharifzadeh, Mikolaj Binkowski, Ricardo Barreira, Oriol Vinyals, Andrew Zisserman, and Karén Simonyan. 2022. Flamingo: a visual language model for few-shot learning. In NeurIPS.
Omri Avrahami, Ohad Fried, and Dani Lischinski. 2023. Blended latent diffusion. ACM Transactions on Graphics (TOG), 42(4):1–11.
Cheng Chen, Zhenshan Tan, Qingrong Cheng, Xin Jiang, Qun Liu, Yudong Zhu, and Xiaodong Gu. 2022a. UTC: A unified transformer with inter-task contrastive learning for visual dialog. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18103–18112.
Feilong Chen, Xiuyi Chen, Shuang Xu, and Bo Xu. 2022b. Improving cross-modal understanding in visual dialog via contrastive learning.
In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7937–7941. IEEE. Feilong Chen, Duzhen Zhang, Xiuyi Chen, Jing Shi, Shuang Xu, and Bo Xu. 2022c. Unsupervised and pseudo-supervised vision-language alignment in visual dialog. In Proceedings of the 30th ACM International Conference on Multimedia, pages 4142– 4153. Xi Chen, Lianghua Huang, Yu Liu, Yujun Shen, Deli Zhao, and Hengshuang Zhao. 2023. Anydoor: Zero-shot object-level image customization. arXiv preprint arXiv:2307.09481. Xiaofan Chen, Songyang Lao, and Ting Duan. 2020. Multimodal fusion of visual dialog: A survey. In Proceedings of the 2020 2nd International Conference on Robotics, Intelligent Control and Artificial Intelligence, pages 302–308. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. 2023. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. See https://vicuna. lmsys. org (accessed 14 April 2023). Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Y. Zhao, Yanping Huang, Andrew M. Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction-finetuned language models. CoRR, abs/2210.11416. Chenhang Cui, Yiyang Zhou, Xinyu Yang, Shirley Wu, Linjun Zhang, James Zou, and Huaxiu Yao. 2023. Holistic analysis of hallucination in gpt-4v (ision): Bias and interference challenges. arXiv preprint arXiv:2311.03287. W Dai, J Li, D Li, AMH Tiong, J Zhao, W Wang, B Li, P Fung, and S Hoi. 2023. Instructblip: towards general-purpose vision-language models with instruction tuning. arxiv. Preprint posted online on June, 15:2023. Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José MF Moura, Devi Parikh, 12170 and Dhruv Batra. 2017. Visual dialog. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 326–335. Drew A Hudson and Christopher D Manning. 2019. Gqa: A new dataset for real-world visual reasoning and compositional question answering. CVPR. John Hunter and Darren Dale. 2007. The matplotlib users guide. Matplotlib 0.90. 0 users guide. Gi-Cheon Kang, Sungdong Kim, Jin-Hwa Kim, Donghyun Kwak, and Byoung-Tak Zhang. 2023. The dialog must go on: Improving visual dialog via generative self-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6746–6756. Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. 2023. Segment anything. arXiv preprint arXiv:2304.02643. Klemens Lagler, Michael Schindelegger, Johannes Böhm, Hana Krásná, and Tobias Nilsson. 2013. Gpt2: Empirical slant delay model for radio space geodetic techniques. Geophysical research letters, 40(6):1069–1073. Juncheng Li, Kaihang Pan, Zhiqi Ge, Minghe Gao, Hanwang Zhang, Wei Ji, Wenqiao Zhang, TatSeng Chua, Siliang Tang, and Yueting Zhuang. 2023a. Fine-tuning multimodal llms to follow zeroshot demonstrative instructions. arXiv preprint arXiv:2308.04152, 3. Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. 2023b. 
Blip-2: Bootstrapping language-image pretraining with frozen image encoders and large language models. Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. 2023c. Blip-2: Bootstrapping language-image pretraining with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597. Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. 2022a. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In ICML. Yongqi Li, Wenjie Li, and Liqiang Nie. 2022b. Mmcoqa: Conversational question answering over text, tables, and images. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4220– 4231. Fuxiao Liu, Tianrui Guan, Zongxia Li, Lichang Chen, Yaser Yacoob, Dinesh Manocha, and Tianyi Zhou. 2023a. Hallusionbench: You see what you think? or you think what you see? an image-context reasoning benchmark challenging for gpt-4v (ision), llava-1.5, and other multi-modality models. arXiv preprint arXiv:2310.14566. Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, and Lijuan Wang. 2023b. Aligning large multi-modal model with robust instruction tuning. arXiv preprint arXiv:2306.14565. Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, and Lijuan Wang. 2024. Mitigating hallucination in large multi-modal models via robust instruction tuning. In The Twelfth International Conference on Learning Representations. Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. 2023c. Improved baselines with visual instruction tuning. arXiv preprint arXiv:2310.03744. Vishvak Murahari, Dhruv Batra, Devi Parikh, and Abhishek Das. 2020. Large-scale pretraining for visual dialog: A simple state-of-the-art baseline. In European Conference on Computer Vision, pages 336– 352. Springer. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748–8763. PMLR. Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2020. Vl-bert: Pretraining of generic visual-linguistic representations. In ICLR. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, and Haoyu Tang. 2023. Evaluation and analysis of hallucination in large vision-language models. Yue Wang, Shafiq Joty, Michael R Lyu, Irwin King, Caiming Xiong, and Steven CH Hoi. 2020. Vd-bert: A unified vision and dialog transformer with bert. arXiv preprint arXiv:2004.13278. Jia-Yu Yao, Kun-Peng Ning, Zhen-Hui Liu, Mu-Nan Ning, and Li Yuan. 2023. Llm lies: Hallucinations are not bugs, but features as adversarial examples. arXiv preprint arXiv:2310.01469. Hu Ye, Jun Zhang, Sibo Liu, Xiao Han, and Wei Yang. 2023. Ip-adapter: Text compatible image prompt adapter for text-to-image diffusion models. arXiv preprint arXiv:2308.06721. Kun Zhou Jinpeng Wang Wayne Xin Zhao Yifan Li, Yifan Du and Ji-Rong Wen. 2023. Evaluating object hallucination in large vision-language models. In The 2023 Conference on Empirical Methods in Natural Language Processing. 
12171 Lu Yu and Verena Rieser. 2023. Adversarial textual robustness on visual dialog. In Findings of the Association for Computational Linguistics: ACL 2023, pages 3422–3438, Toronto, Canada. Association for Computational Linguistics. Muru Zhang, Ofir Press, William Merrill, Alisa Liu, and Noah A. Smith. 2023. How language model hallucinations can snowball. Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, and Huaxiu Yao. 2023. Analyzing and mitigating object hallucination in large vision-language models. arXiv preprint arXiv:2310.00754. Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. 2023. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592. A Performance analysis on edit types Table 15 shows the LVLMs’ performance on different types of edited image. It is shown that LVLMs perform worse on "modified objects" images because the questions and edited images are harder than the other two edit types. Removing an object will impaint the region with its surrounding background, which makes the image still plausible in a natural context. Furthermore, querying its existence is the only valid question for a removed object. Thus, an LVLM only needs to answer yes/no for this edit type and has a higher answering accuracy. Modifying attributes will change an object’s color. Although the color may be unusual for the object, the object itself is reasonable for the scene. Thus, an LVLM can correctly answer some of these questions. Modifying an object will replace the original object with another one that does not fit the scene. Additionally, the answering vocabulary for object names is much larger than for colors. The difficulty in both the image and the question leads to lower performance. The results also suggest that LVLMs may pay more attention to the commonsense inside the LVLM instead of visual tokens. B Instruct prompts Our prompt for question generation is listed in table 9. The prompt for Q-Probing to generate relevant questions and answer original questions are listed in Table 10 and Table 11 respectively. C Model details in our benchmark Cheetah integrates visual information and language understanding to handle zero-shot tasks, meaning it can execute tasks without prior examples. To enhance the model’s performance in understanding complex visual and verbal instructions, Cheetah incorporates the VPG-C module, which is specifically designed to capture and supplement detail information. Moreover, Cheetah fine-tunes the VPG-C through a synthetic discriminative training strategy, thereby reducing the reliance on labeled demonstration data. The model variant evaluated in our experiments is "cheetahllama-2-7b". The evaluation process is completed over a period of 50 GPU hours, utilizing one NVIDIA GeForce RTX 3090 GPU with 25GB of memory. LLaVA propose a framework that fuses the capabilities of a visual encoder, specifically the ViTL/14 from CLIP(Radford et al., 2021), with the language decoder abilities of LLaMA(Touvron et al., 2023). This integration is achieved using an intermediary fully-connected (FC) layer. The training process begins with the FC layer being trained in isolation on a dataset of 595,000 imagetext pairs, with the pre-existing parameters of both the visual encoder and the language model remaining unchanged. Subsequent to this phase, a fine-tuning step is conducted where both the FC layer and the language model are jointly optimized. 
This fine-tuning employs a specialized dataset consisting of 158,000 pairs of instructional vision-language data. The model variant evaluated in our experiments is "LLaVA-v1.5-13b". The evaluation process is completed over a period of 30 GPU hours, utilizing one NVIDIA GeForce RTX 3090 GPU with 25GB of memory.

MiniGPT-4 employs a fully-connected (FC) layer to facilitate communication between a visual encoder and a text encoder. The initial training phase involves the FC layer learning from a dataset with 5 million image-text pairs. Following this, the model undergoes fine-tuning with a targeted set of 3,500 instructional image-text pairs. MiniGPT-4 relies on the integration of a pretrained BLIP-2 visual encoder (Li et al., 2023c) and a LLaMA language model. The model variant evaluated in our experiments is "MiniGPT4-alignedwith-llama2-7b". The evaluation process is completed over a period of 50 GPU hours, utilizing one NVIDIA GeForce RTX 3090 GPU with 25GB of memory.

LRV-instruct aims to enhance the accuracy and robustness of multimodal artificial intelligence models when processing visual instructions. It trains models on 120,000 positive and negative visual instructions to identify and avoid hallucinations that occur during task execution. The model variant evaluated in our experiments is "LRV-MiniGPT4-7b". The evaluation process is completed over a period of 60 GPU hours, utilizing one NVIDIA GeForce RTX 3090 GPU with 25GB of memory.

BLIP-2 is an efficient pre-training strategy that bootstraps vision-language pre-training from pre-existing frozen image encoders and frozen large-scale language models. The model variant evaluated in our experiments is "BLIP w/ ViT-L". The evaluation process is completed over a period of 40 GPU hours, utilizing one NVIDIA GeForce RTX 3090 GPU with 25GB of memory.

Hallucidoctor is a hallucination detection and elimination framework based on a cross-checking paradigm, aiming to automatically identify and eliminate hallucinations in training data. The model variant evaluated in our experiments is "Minigpt4-vicuna-LLaVA+". The evaluation process is completed over a period of 80 GPU hours, utilizing one NVIDIA GeForce RTX 3090 GPU with 25GB of memory.

GPT-4 is the latest generation of closed-source large language models released by OpenAI, which has seen significant enhancements in terms of the scale of training data and computational complexity. GPT-4V adds visual capabilities to GPT-4, enabling the model to not only understand and generate text but also to comprehend and analyze image content. The model variant evaluated in our experiments is "gpt-4-vision-preview". The evaluation process is completed over a period of 40 hours, using the API interface provided by OpenAI.

D GQA dataset
The GQA (Hudson and Manning, 2019) dataset consists of real-world images with synthesized questions. Each image is associated with a clean scene graph of image objects, their attributes, and relations. Each question is associated with a structured functional program that refers to objects and relations to specify the reasoning route in the scene graph for the final answer. This dataset is distributed under license CC BY 4.0.

E Intended use of VisDiaHalBench
VisDiaHalBench is distributed for research purposes. It is constructed from images and scene graphs of the GQA dataset and thus, like GQA, does not contain any information that names or uniquely identifies individual people, nor any offensive content.
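The LLaVA-style fusion described in Appendix C, a frozen vision encoder bridged to the language model by a learned projection, can be pictured with the following PyTorch-like sketch. The layer sizes and the single-linear-layer design are illustrative assumptions, not the exact released architecture of any model evaluated above.

```python
import torch
import torch.nn as nn

class VisionLanguageConnector(nn.Module):
    """Minimal sketch of an LLaVA-style bridge: frozen vision features -> LLM embedding space."""

    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        # Stage 1 of training typically optimizes only this projection,
        # keeping the vision encoder and the language model frozen.
        self.projector = nn.Linear(vision_dim, llm_dim)

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # patch_features: (batch, num_patches, vision_dim) from a frozen ViT encoder.
        return self.projector(patch_features)

if __name__ == "__main__":
    connector = VisionLanguageConnector()
    fake_patches = torch.randn(2, 256, 1024)   # stand-in for CLIP ViT-L/14 patch outputs
    visual_tokens = connector(fake_patches)    # (2, 256, 4096), ready to prepend to text embeddings
    print(visual_tokens.shape)
```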
F Detailed experimental results Table 12,13,14,15 presents the detailed experimental results from different perspectives. G Supplementary examples Figures 4, 5 show supplementary examples from VisDiaHalBench. 12173 As a question asker, could you please provide a question according to my request? You will receive a "question type" where "exist" represents asking whether there is a object; "queryrel" requires asking about the relative relationship between obj1 and obj2; "verifyRel" indicates asking whether these two objects have a certain relative relationship,(for example, is the book on the left of a yellow thing?) "queryRel" asks for the relative relationship between the two objects. "verifyTargetRel" stands for the query whether A and B have the relationship mentioned in the "last round".(for example, does this relationship(last round) of position also exist between the banana and the table?)."verifyattr" should ask whether obj possesses a certain attribute; "verifyTargetAttr" asks whether a certain object possesses a specific attribute mentioned in the "last round".(for example, Do the book have the same color with it(last round)?). And for "queryAttr," the answer should be the attribute of the queried object.(For example, what is the color of the object on the table?). Next, you will receive a "last round," for example, "logo." If provided, please use pronouns (such as "it," "her") to refer to it in your question. Then, you will receive a "prompt" where <obj> represents the objects that can appear in your question (including their relationships and properties), and "<answer triplet>" represents the information about the subject object of the question.(example: "tractors, <name>, tractors" represents the object as "tractors"."bike, <attribute>, navy" represents the object as "bike" and it has the attribute "navy". Attention, please make sure to include its attribute when you mention the obj.) The "<answer>" serves as the correct answer for your question.You should make the best use of all the information available. Now based on this information:[question type:question type,last round":{last round input},"prompt":prompt.] ,generate a question: Table 9: Prompt for GPT-4 to generate questions with fixed answers. As a keen questioner, you need to generate three questions related to the current issue and image that include potentially information in the format [Q1, Q2, Q3] (e.g., Is A present in the image? Where is the specific location of A? What is the color of A?).image:<Img><ImageHere></Img>, question: {question}, ###your proposed question: Table 10: The prompt of Q-Probing to generate relevant questions. Give the following image: <Img>ImageContent</Img>. You will be able to see the image once I provide it to you. Please answer my questions according to the dialogue.###Human: History:{historty_dialogue}, Image now:<Img><ImageHere></Img>, Question:{question}. (Before answering, please take a moment to consider these questions (do not need to answer): {Q1, Q2, Q3}) ###Assistant: Table 11: The prompt of Q-Probing to answer a question given generate relevant questions. 
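Putting Tables 10 and 11 together, Q-Probing is a two-stage prompting loop: first ask the LVLM for three related probing questions, then answer the original question with those probes included in the prompt. The sketch below assumes a generic `generate(prompt, image)` callable for the underlying LVLM, and the template strings only loosely follow the exact placeholder tokens shown above.

```python
PROBE_TEMPLATE = (
    "As a keen questioner, you need to generate three questions related to the current "
    "issue and image, in the format [Q1, Q2, Q3]. image:<Img><ImageHere></Img>, "
    "question: {question}, ###your proposed question:"
)

ANSWER_TEMPLATE = (
    "Please answer my questions according to the dialogue. ###Human: History:{history}, "
    "Image now:<Img><ImageHere></Img>, Question:{question}. (Before answering, please take "
    "a moment to consider these questions (do not need to answer): {probes}) ###Assistant:"
)

def q_probing_answer(generate, image, question, history=""):
    """Two-stage Q-Probing: probe-question generation followed by probe-conditioned answering.

    `generate(prompt, image)` is assumed to call the underlying LVLM and return a string.
    """
    probes = generate(PROBE_TEMPLATE.format(question=question), image)
    final_prompt = ANSWER_TEMPLATE.format(history=history, question=question, probes=probes)
    return generate(final_prompt, image)

if __name__ == "__main__":
    # A stub LVLM so the sketch runs end-to-end without any model weights.
    def fake_lvlm(prompt, image):
        return "[Q1, Q2, Q3]" if "keen questioner" in prompt else "unanswerable"
    print(q_probing_answer(fake_lvlm, image=None, question="What color is the removed cup?"))
```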
Model | Y/N (EM / F1) | Object (EM / F1) | Attribute (EM / F1) | Relation (EM / F1)
Cheetah | 11.79 / 26.37 | 5.99 / 24.99 | 4.44 / 23.03 | 3.50 / 28.20
LRV-instruct | 12.95 / 34.08 | 8.45 / 35.74 | 13.26 / 37.17 | 1.64 / 35.30
MiniGPT4 | 20.77 / 37.09 | 4.63 / 30.09 | 8.65 / 34.68 | 3.72 / 30.21
Q-Probing | 23.02 / 39.78 | 3.88 / 32.11 | 10.72 / 36.27 | 8.01 / 34.26
LLaVA | 23.93 / 37.73 | 3.54 / 28.86 | 18.89 / 39.85 | 9.03 / 36.28
GPT-4V | 32.33 / 46.61 | 22.52 / 50.15 | 29.16 / 50.44 | 25.21 / 49.37
Table 12: Average evaluation results under different answer types.

Model | exist | query object | query attribute | query relation | verify attribute | verify relation | verify target attribute | verify target relation (each EM / F1)
Cheetah | 25.31 / 29.02 | 5.99 / 24.99 | 4.44 / 23.02 | 3.50 / 28.20 | 23.31 / 29.15 | 12.52 / 27.48 | 11.33 / 25.75 | 0.07 / 22.37
LRV-instruct | 27.26 / 31.62 | 8.44 / 35.74 | 13.25 / 37.17 | 1.64 / 35.29 | 16.10 / 26.89 | 13.57 / 37.87 | 14.31 / 37.23 | 0.01 / 34.85
MiniGPT4 | 52.58 / 52.74 | 4.63 / 30.09 | 8.65 / 34.69 | 3.71 / 30.20 | 43.78 / 46.77 | 19.87 / 35.94 | 6.70 / 22.73 | 0.21 / 26.98
Q-Probing | 59.41 / 59.57 | 3.87 / 32.11 | 10.72 / 36.27 | 8.00 / 34.27 | 53.38 / 55.98 | 21.36 / 36.84 | 9.90 / 23.20 | 1.87 / 27.68
LLaVA | 49.56 / 49.56 | 3.54 / 28.86 | 18.88 / 39.85 | 0.92 / 36.27 | 38.59 / 40.15 | 20.29 / 30.10 | 36.72 / 45.46 | 0.07 / 16.95
GPT-4V | 47.70 / 48.83 | 22.52 / 50.15 | 29.16 / 50.44 | 25.21 / 49.37 | 40.24 / 44.83 | 37.89 / 47.30 | 33.47 / 43.46 | 18.51 / 36.06
Table 13: Average evaluation results under different question types.

Model | None (EM / F1) | modify obj none-exist (EM / F1) | modify attr none-exist (EM / F1) | refer none-exist (EM / F1) | remove none-exist (EM / F1)
Cheetah | 19.50 / 27.86 | 0.01 / 19.45 | 0.45 / 25.31 | 0.22 / 23.27 | 0.21 / 24.50
LRV-instruct | 21.55 / 33.76 | 0.01 / 33.04 | 0.01 / 31.72 | 0.01 / 35.48 | 0.01 / 33.24
MiniGPT4 | 34.47 / 43.04 | 0.02 / 25.53 | 0.01 / 27.39 | 0.13 / 29.42 | 0.01 / 28.32
Q-Probing | 36.82 / 46.51 | 0.03 / 22.54 | 4.13 / 30.03 | 1.65 / 28.26 | 0.97 / 29.15
LLaVA | 39.78 / 48.26 | 0.01 / 18.15 | 0.15 / 26.97 | 0.06 / 25.92 | 0.01 / 0.25
GPT-4V | 35.04 / 47.16 | 60.00 / 70.67 | 22.65 / 43.78 | 17.83 / 35.93 | 36.00 / 52.27
Table 14: Average evaluation results under different hallucination types.

Model | remove (EM / F1) | modify attr (EM / F1) | modify obj (EM / F1)
Cheetah | 13.85 / 27.23 | 10.61 / 25.86 | 6.62 / 26.05
LRV-instruct | 16.43 / 34.99 | 10.88 / 33.51 | 9.52 / 34.95
MiniGPT4 | 26.68 / 40.04 | 17.37 / 35.38 | 7.62 / 30.82
Q-Probing | 30.99 / 44.31 | 21.24 / 38.76 | 2.85 / 29.87
LLaVA | 25.88 / 38.19 | 22.95 / 37.55 | 8.57 / 29.81
GPT-4V | 40.12 / 50.59 | 31.04 / 45.94 | 16.67 / 39.39
Table 15: Average evaluation results under different edit types.

Figure 4: Supplement example 1 of VisDiaHalBench
Figure 5: Supplement example 2 of VisDiaHalBench
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12177–12214 August 11-16, 2024 ©2024 Association for Computational Linguistics
AutoDSL: Automated domain-specific language design for structural representation of procedures with constraints
Yu-Zhe Shi*, Haofei Hou*, Zhangqian Bi, Fanxu Meng, Xiang Wei, Lecheng Ruan, Qining Wang
Department of Advanced Manufacturing and Robotics, College of Engineering, Peking University
*Equal contribution
{ruanlecheng, qiningwang}@pku.edu.cn
Abstract
Accurate representation of procedures in restricted scenarios, such as non-standardized scientific experiments, requires precise depiction of constraints. Unfortunately, Domain-specific Language (DSL), as an effective tool to express constraints structurally, often requires case-by-case hand-crafting, necessitating customized, labor-intensive efforts. To overcome this challenge, we introduce the AutoDSL framework to automate DSL-based constraint design across various domains. Utilizing domain-specified experimental protocol corpora, AutoDSL optimizes syntactic constraints and abstracts semantic constraints. Quantitative and qualitative analyses of the DSLs designed by AutoDSL across five distinct domains highlight its potential as an auxiliary module for language models, aiming to improve procedural planning and execution.
1 Introduction
Comprehending and executing procedures articulated in natural language to achieve a specified goal represents a fundamental challenge for Artificial Intelligence (AI) systems. With the boost of Large Language Models (LLMs) (Brown et al., 2020; Raffel et al., 2020; Touvron et al., 2023), AI systems possess the capability of reasoning over and planning for procedural tasks intended for both human and robotic execution across a broad spectrum of everyday scenarios (Pareti et al., 2014a,b; Tandon et al., 2020; Donatelli et al., 2021; Zhou et al., 2022b), such as cooking according to a recipe, obviating the necessity for external representation of procedures beyond text. However, contexts significantly more restricted than everyday scenarios, such as conducting non-standardized experiments in scientific laboratories, need to follow specific protocols.
1 Visit www.wikiHow.com for demonstrations.
2 Visit www.nature.com/nprot/ for examples.
A proto
Iterative requirement qualitative identifier quantitative parameter In use Available Available intermediate products high-level action 1 2 3 4 A D B E C F Figure 1: Representative constraints in protocols. (A) Parameter omission: This refers to the absence of essential parameter values within a predefined set, e.g., the lack of temperature specification during the denaturation step in Protein Gel Electrophoresis. (B) Parameter under-specification: This occurs when a quantitative parameter is described using qualitative terms, leading to ambiguity, e.g., unclear mixture configurations in DNA Extraction. (C) Action undefinition: This involves the description of procedural steps at a high level without grounding to the specific, executable actions required, e.g., the vague change operation in Cell Preparation. (D) Iterative control logic: Loops that operate iteratively to satisfy a final condition rather than straightforwardly, as seen in PCR Optimization. (E) Memory management: Drawing a parallel with computer memory mechanisms, laboratory procedures also face constraints on the availability of storage for intermediates, necessitating explicit reallocation of containers and devices, such as managing buffers in Protein Digestion. (F) Concurrent management: The synchronization of actions without dependencies to maximize time efficiency and resource utilization, e.g., reagent splitting in RNA Extraction. col delineates every aspect of an experiment’s procedure to facilitate its reproduction (Baker, 2016; Munafò et al., 2017), emphasizing the necessity for precision in every step, to ensure accurate execution by an experimenter. The complexity of procedures, limitation in resources, and susceptibility to error in such scenarios render any deviation from the established protocols inadmissible. Unfortunately, natural language inherently possesses ambiguities (Russell, 1923). Within protocols, description of actions can be semantically 12177 Repeat the DNA purification process three times using fresh reagents and buffers for each cycle, following the specified centrifugation conditions at 12,000-16,000 g for 1-2 minutes in a microcentrifuge. repeat range range ( ){ ( = , = , = ( , ), = ( , )) } 3 12000g 16000g 1min 2min purify reagent device force duration ”DNA” ”microce ntrifuge” Resulting DSL for int < ++ ( = ; ; ) { ( , , ); } i i i purify_DNA 0 3 "DNA" "microcentri fuge" "refine the DNA's purity" BioCoder DSL [ [ , ], [ , ], [ , ], [ , ] ] "repeat" "3" "action" "purify" "REG" "DNA" "Device" "microcentri fuge" Knowledge Graph Protocol text Figure 2: Protocols in different structural representations under-specified, and the logic of procedure may be non-linear, as demonstrated in Fig. 1. Given these unique distinctions against daily procedures, accurate interpretation of protocols requires explicit depiction of constraints (see Fig. 2 for example). Intuitively, structural representation imposes constraints on the processing of protocols. This is achieved through purely symbolic approaches that depict procedures as flow-graphs (Momouchi, 1980; Zhang et al., 2012; Kiddon et al., 2015), and neuro-symbolic hybrid methods that superimpose procedural structures onto neural networks (Puig et al., 2018; Huang et al., 2021, 2022; Zhou et al., 2022a; Zhang et al., 2023; Brohan et al., 2023). Both strategies impose constraints on the interpretation of procedures, thereby reducing the incidence of superfluous operations. 
Symbolic constraints range from elementary grammars to general-purposed programming languages, with the capability of a constraint system being pivotal in refining a coarse interpretation space into a more precise one (Tarski, 1946; Chomsky, 1957; Hopcroft et al., 1996; Russell and Norvig, 2010). In light of the context, what level of capability should we expect the constraint to possess? This discourse introduces DSL, a category of symbolic systems endowed with the most potent constraints. DSLs are programming languages tailored to specific problem domains, encapsulating both syntactic constraints and semantic constraints inherent to those domains. For instance, BioCoder (Ananthanarayanan and Thies, 2010), developed by Microsoft, is a DSL explicitly designed to constrain experimental protocols. On the syntactic level, the variable management mechanism inherited from C/C++ enables DSLs to monitor the lifecycle of each intermediate product, ensuring no omissions or duplications. On the semantic level, the precise definitions of actions, combinations of reagents’ names and volumes, with subprocedures abstracted from domain-specific concepts, guaranteeing procedural execution consistency. Drawing inspiration from DSLs, can we design constraints for protocols in DSL fashion? Hardly, due to the deterministic and substantial cost. Structural constraints necessitate custom design for particular domains, which is prohibitively expensive, given that these domains are highly specialized and often diverge significantly from the conventional purview of computer scientists (Mernik et al., 2005; Fowler, 2010). The development of a DSL necessitates a comprehensive integration of in-depth domain knowledge. Furthermore, the designed DSL must align with theoretical aspects of formal language design while also meeting the distinct requirements of the specialized domain. This necessitates a bidirectional alignment between computer scientists and domain experts, a process that is intrinsically caseby-case, implying that a DSL developed for one domain is unlikely to be applicable or easily adaptable to another, thus limiting the scalability of DSL-based constraints across various domains. In this study, our objective is to offer an initial proof-of-concept aimed at reducing the design cost of DSL-based constraints for protocols. We propose a scalable framework, termed AutoDSL, that facilitates the automated creation of DSLs. The framework approaches the task as a bidirectional optimization problem, where the design of a DSL is abstracted from domain-specific corpora through a bottom-up process and concurrently derived by general programming language design principles in a top-down manner. This approach emulates the iterative dialogue between computer scientists and domain experts, progressively bridging the conceptual gap between their respective fields of expertise. The syntactic constraint should adequately define consecutive actions and their repetitions, interruption, concurrence, subcomponents, and reactants. Constructs of the semantics constraint need to accurately reflect the domain’s concepts and the relations between them, without redundancy or incompleteness. We utilize protocols from various domains within the experimental sciences — namely Genetics, Medical and Clinical Research, Ecology and Environmental Research, Bioengineering, and Computational Biology — as the primary testing ground for our methodology, due to their inherent complexity, resource constraints, and susceptibility to errors. 
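One way to picture what a DSL-style constraint buys over free text is to treat each protocol operation as a typed record whose arguments must satisfy domain-specific checks before the step is accepted. The sketch below is our own Python illustration of that idea, not BioCoder's notation or the DSLs produced by this paper; all names and checks are assumptions for exposition.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Range:
    low: float
    high: float
    unit: str

    def contains(self, value: float) -> bool:
        return self.low <= value <= self.high

@dataclass
class Operation:
    """A protocol step with typed, checkable parameters (illustrative only)."""
    name: str
    reagent: str
    device: str
    force_g: Range
    duration_min: Range

def check(op: Operation, allowed_devices: set[str]) -> list[str]:
    """Return constraint violations instead of silently accepting an ambiguous step."""
    errors = []
    if op.device not in allowed_devices:
        errors.append(f"unknown device: {op.device}")
    if op.force_g.low > op.force_g.high:
        errors.append("empty force range")
    if op.duration_min.low <= 0:
        errors.append("non-positive duration")
    return errors

if __name__ == "__main__":
    purify = Operation("purify", reagent="DNA", device="microcentrifuge",
                       force_g=Range(12000, 16000, "g"),
                       duration_min=Range(1, 2, "min"))
    print(check(purify, allowed_devices={"microcentrifuge", "vortexer"}) or "step accepted")
```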
These domains exhibit significant disparities both in syntactic and semantic language features. Comprehensive experiments demonstrate that AutoDSL is capable of generalizing DSL-based constraints tailored to these di12178 Domain corpus of NL protocols Remove Transfer Resuspend the culture media by pipetting. washing steps with PBS plus Tween-20 and distilled water, followed by mild centrifugation. the RNeasy MinElute spin column into a new 1.5 ml collection tube. , begin with 1 ml of serum or plasma, add PBS to fill the ultracentrifuge tube. using fatty acid substrates, prepare them freshly as described. the pellet with a 10 mL serological pipette. Repeat For each replicate If #3 ::= Volume REG (Device) | REG REG | Concentration Volume REG RESUSPEND #1 ::= Volume | REG (Device) | REG Volume (Temperature) REMOVE #2 ::= REG Container | Container | Container Concentration TRANSFER #1 Loop ::= COND {} While #2 Memory ::= Container* Device* | Container* Device* Allocate Deallocate #3 Branch ::= COND {} If then Syntactic constraints Resulting DSL constructs Semantic constraints =1 Store aliquots at 1 )1 Collect bile into a sterile bottle1 '1 Repeat steps, monitoring the reaction and recording data until no further change in fluorescence signals1 #1 Transfer of retentate and filtrate into their respective tubes labeled retentate and filtrate. -20°C 10 µL New coming NL protocols Minimal prompt engineering Demonstration examples Test item DSL specification #1 ::= Volume | REG (Device) | REG Volume (Temperature) REMOVE #2 ::= REG Container | Container | Container Concentration TRANSFER Remove REMOVE the culture media by pipetting. :(Rea=culture media, method=pipetting) Transfer of retentate 10 µL LLM Structured protocols represented by DSL repeat While and respective Allocate -> -> MONITOR RECORD ’no change’ Tube 8 STORE Container Temperature TRANSFER Volume Container :( = , = ') -> :( = , = ) -> None -20°C 10 µL Frozen samples bottle tubes Syntax constraints on protocols Semantic constraints on protocols #1 Attr1, Attr2, ... #2 Attr2, Attr3, ... REMOVE TRANSFER #1 Sample1, Sample2, ... #2 Sample3, Sample4, ... REMOVE TRANSFER Spectrum of attributes Nonparametric attribute modeling Domain ontology While Continue Memory Resolve While Continue Memory Resolve While Memory Syntactic constraint optimization Prior syntax model E-step (bottom-up) M-step (top-down) Semantic constraint reduction Features of OpCode samples #1 REMOVE #2 TRANSFER Figure 3: The AutoDSL framework and the resulting DSL-based procedure constraints. (Top) Given domain specified corpus, AutoDSL conducts bidirectional syntax optimization and non-parametric semantic reduction, resulting in syntactic constraints and semantic constraints. (Bottom) A DSL-based constraint takes novel procedures as input, handles the nonlinear syntax structures like loop by syntactic constraints, and handles the semantic errors like missing by semantic constraints. verse domains, upholding the integrity from both programming language design and domain expertise perspectives. We further demonstrate that syntactic and semantic constraints effectively work as an auxiliary module of LLMs in the processing of unseen protocols, thereby suggesting a promising future for constraining protocols through a synergistic blend of programs and natural language. 
The contributions of this work are threefold: (i) We introduce the AutoDSL framework for automated design of DSL-based constraints, which includes a bidirectional syntax optimization module and a non-parametric learning module for semantic reduction. (ii) We establish a systematic and end-to-end evaluation platform to assess the quality of the designed DSLs-based constraints, employing both quantitative and qualitative metrics. (iii) We showcase the efficacy of DSL-based constraints in processing new coming protocols with syntactic complexity and semantic errors. 2 Constraints in protocols In this section, we scrutinize the requirements for precisely constraining protocols. The distinctive challenges of engaging with such systems stem from their complexity of procedures, limited in resources, and vulnerability to errors. Complexity of procedures The complexity of protocols arises from the multitude of action types necessary for conducting experiments, the extensive categories of reagents involved, the variety of containers and devices for operational implementation, and the broad array of additional conditions affecting action execution, such as duration, temperature, volume, lighting, and acidity. For instance, whereas each daily procedure on wikiHow involves about 14 steps, 27 actions, and 45 objects on average3, this figure substantially increases in the context of the five experimental science domains, where each protocol possesses around 60 steps, 76 actions, and 180 objects on average, with an increment of 250% to 350%. This necessitates the utilization of more specialized data types — such as Operations, Reagents, Conditions — over general data types like integers, floats, characters, and strings. Consequently, procedural workflows may include nonlinear elements like loops for repetition, branches for parallel options, and subprocedures for nested and reusable actions, requiring specialized control flow structures to depict these complex scenarios. Limitation in resources The execution of protocols is limited by the availability of resources, including a finite stock of reagents, a limited number of containers, and a scarcity of critical devices, characteristic of medium-scale experimental science laboratories. This parallels the allocation of registers and memory in computing, 3The statistics are calculated from the WikiHow Dataset, with 230k procedures in total (Koupaee and Wang, 2018). 12179 where each action must account for the use of memory spaces. In experimental procedure execution, if a device is currently in use, any operations requiring that device must be deferred until it becomes available. This necessitates resource management strategies similar to those in computer programming, such as allocating resources for use and deallocating them post-use. When allocated, the resource becomes inaccessible, creating bottlenecks that introduce inefficiency. To mitigate this, operations that do not compete for the same resources may be executed in parallel. Vulnerability to errors Execution of protocols cannot tolerate errors, mandating strict adherence to every operation. This makes the system exceptionally vulnerable, in contrast to the robustness of everyday procedural execution. In such a context, protocol execution should encompass: (i) syntactic constraints specifying how actions are sequenced to form valid execution; and (ii) semantic constraints verifying that the reactants and reaction conditions are correctly utilized. 
3 AutoDSL framework This section describes our problem formulation and solutions for the AutoDSL framework. 3.1 Problem formulation Input The system input consists of natural language descriptions of protocols, namely a domain-specific corpus. These descriptions encompass procedural knowledge of execution orders and ingredient knowledge of reagents, containers, and devices involved in the experiments. For a specific domain, the input corpus C “ tc1, c2, . . . , cNu includes N protocols. Output The desired output is a DSL tS, Λu, incorporating sets of constructs that define both syntactic and semantic constraints specific to the domain (Fowler, 2010), while retaining the abstract qualities of a programming language. The set S “ tφ1, . . . , φKu comprises K atomic syntactic constraints with production rules, such as control structures Loop, Parallel, Jump, and Split. The set Λ “ tt1, . . . , tLu embodies L atomic semantic constraints, such as operations Add, Remove, Incubate, and Place. 3.2 Syntactic constraint optimization Key insight Leveraging existing knowledge on programming language design, our method utilizes a bidirectional optimization strategy to formulate the syntax of the target DSLs based on the prerequisites of the domain corpora. The algorithm employs an Expectation-Maximization (EM) framework, where the E-Step abstracts syntax from domain corpora and the M-Step derives syntax from programming language principles. Modeling The algorithm models latent syntactic constraint assignments Z “ tz1, . . . , zNu for each protocol ci. A filter set Θ “ tθ1, . . . , θK1u, where K1 ą K, is designed to determine if a segment of procedural text aligns with the logic of any atomic syntactic constraint, coming with the belief function ppΘ|Sq. The observational likelihood is computed as ppC|Z, Θq “ śN i“1 ppxi|zi, θziq. Hence, the overall joint probability of the model is given by: ppC, Z, Θ|Sq “ ppC|Z, ΘqppZ|SqppΘ|Sq. (1) Syntax prior Programming language designers leverage a general set of syntactic production rules as the prior ppZ|Sq for syntax specification. We initialize the set S0 with a Context-free Grammar (CFG) (Hopcroft et al., 1996) (see Fig. 4A). Furthermore, we construct the prior belief function ppΘ|Sq with a series of sliding-window-based filters f : C ÞÑ R, which gives a relaxed lower bound for predicting the existence of an atomic syntactic constraint. Please refer to Appx. C for details. E-Step In each E-Step, we obtain the posterior of latent variables ppZ|C, Θ, Sq applying Bayes’ theorem, which is implemented by scanning the filters over domain corpus. To note, as the spaces of prior and observation are not intractably large, we simply employ the naive version of EStep without variational approximations. M-Step In each M-Step, we first maximize the atomic syntactic constraints S by maximizing: QpˆΘ, Θq “ EZ|C,Θ “ log ppC, Z, ˆΘ|Sq ‰ , (2) where ˆΘ is the updated Θ, resulting in the structural change of S0. These two steps alternate iteratively until convergence (see Fig. 4C), ensuring the syntactic constraints are aligned with the domain. 3.3 Semantic constraint reduction Key insight Following the adaptation of syntactic constraints to the target domain, the semantic reduction phase focuses on distilling finegrained semantic constraints. 
This stage addresses 12180 0 50 100 150 REMOVE OPEN TRANSFER TRANSECT BEND INFLATE TIE MOBILIZE ELEVATE DEFLATE RESECT ENLARGE PALPATE RESET REFLECT CAUTERIZE SEDATE CLAMP DRIVE DIGITIZE DILATE POLISH ACCLIMATE ANASTOMOSE LOCALIZE ACHIEVE RECONNECT TWIST Medical 0 50 100 150 TRANSFER REMOVE OPEN RESOLVE LINEARIZE QUANTITATE ANNEAL DESTAIN SOLUBILIZE PROCESS DUMP EXPEL HYBRIDIZE SETUP SPIN ACIDIFY CLARIFY DISASSEMBLE IMMUNOBLOTTING MIX DECHORIONATE REPROGRAMMING GEL−PURIFY RESEAL SORT SPLIT TILT VISIT Genetics −5 −4 −3 −2 −1 0 1 10 100 1000 iteration Log likelihood BioEng Ecology Genetics InfoBio Medical 1e−03 1e−01 1e+01 0 100 200 300 400 500 iteration Probability Update BioEng Ecology Genetics InfoBio Medical 0.4 0.6 0.8 1 10 100 iteration Probability function−procedure 1.00 0.10 0.16 0.07 0.14 0.10 1.00 0.09 0.11 0.10 0.16 0.09 1.00 0.06 0.15 0.07 0.11 0.06 1.00 0.05 0.14 0.10 0.15 0.05 1.00 BioEng Ecology Genetics InfoBio Medical BioEng Ecology Genetics InfoBio Medical Ecology Medical 1 10 100 1 10 100 0.4 0.6 0.8 1.0 0.4 0.6 0.8 1.0 iteration Probability arith. assign. function if jump logical loop memory 〈statement〉 〈imperative-model〉 〈arithmetic-expression〉 〈assignment-expression〉 〈function-procedure〉 〈if-statement〉 〈jump-statement〉 〈logical-expression〉 〈loop-statement〉 〈memory-management〉 〈raise-statement〉 〈resolve-statement〉 〈concurrent〉 〈runtime-error〉 〈react〉 A B E C D F G H Remove UX [ ][ ][ S WX [ ][ ][ S RX [ ][ ] Reg Reg Reg Cont. Cont. Cont. Dev. Dev. Open UX [ S WX [ ][ S RX [String] Dev. Dev. Time Transfer UX [ ][ S WX [ ][ S RX [ ][ S …X [ ][ ] Reg Reg Reg Reg Cont. Cont. Volume Volume Remove the in the with a . ethanol tube pipette Remove the of the . supernatant collected iPSCs Remove the from each with a . Petri dish pub spatula Remove the from the . vial bath 42°C Remove the by for at . acetone speed vacuum 60min 23°C BioEng Ecology Genetics InfoBio Medical Figure 4: Illustration on syntactic constraint optimization and semantics constraint reduction. (A) Resulting syntactic constraints derived from the CFG prior model. (B) Resulting semantic constraints. (C) Convergence curve of syntactic constraint optimization. (D) Convergence curve of semantic constraint reduction. (E) Frequency profile of the semantic constraints of Genetics-DSL and Medical-DSL. (F) Acquisition of different syntactic constraints on Ecology and Medical domain corpora. (G) Syntactic constraint Function-procedure is differently acquired by the five distinct domains. (H) Confusion matrix indicating the overlapped semantic constraints between the five distinct domains. the absence of the domain-specific prior knowledge on semantics by employing a non-parametric approach, i.e., Dirichlet Process Mixture Model (DPMM), which allows for the flexible identification of semantic concepts and their relationships inherent within the protocols (see Fig. 4B). Modeling We transform the protocols into a vectorized feature space, X “ tx1, . . . , xNu, with each xi encoding operation patterns in a one-hot vector format (refer to Appx. C.3). The objective is to model latent semantic constraint assignments, W “ tw1, . . . , wNu, where each wi is an assignment of atomic semantic constraint on xi. To note, according to the definition, the size of the semantics set Λ is not fixed and grows with the data. Iteration Utilizing a DPMM facilitates the non-parametric spectral clustering of these feature vectors into groups of unique atomic semantic constraints. 
Each iteration in the DPMM process involves Gibbs Sampling for estimating the posterior of model’s parameters (see Fig. 4D). This clustering not only identifies distinct semantic operations but also adapts to the complexity and variability of semantic patterns across different domain corpora. Refer to Appx. C.4 for details. 3.4 Discussion We identify the commonalities and distinctions between the DSLs designed by AutoDSL corresponding to the five experimental science domains (see Fig. 4H). We find that the five domains share a majority of syntactic constraints, such as Memory and Branch (see Fig. 4F), while differing in other constraints, such as Function-procedure and Loop (see Fig. 4G). This implies that the domain specificity as an experimental protocol dominates that of subject, regarding the syntax. Different from syntactic constraints, the five domains vary significantly on semantic constraints, including fine-grained concepts such as operations, reagents, and conditions (see Fig. 4E). This implies that the domain specificity of knowledge ontology is dominant regarding the semantics. 4 Constraint design assessment In this section, we evaluate the quality of the DSLs automatically designed by our AutoDSL framework across the five domains. We first discuss the principles for the assessment, and then conduct quantitative and qualitative analysis accordingly. 12181 0.0 0.1 0.2 0.3 0.4 0.5 Completeness Laconicity Lucidity Soundness Score BioCoder Ours BioEng Ecology Genetics InfoBio Medical **** *** **** * **** (4.41±0.10) (4.35±0.13) (4.39±0.10) (4.41±0.09) (4.38±0.10) (4.40±0.10) 4.00 4.25 4.50 4.75 BioCoder BioEng Ecology Genetics InfoBio Medical Rating B Empirical evaluation A Quantitative evaluation Figure 5: Constraint design results on 5 experimental science domains. (A) Quantitative evaluation on the 5 DSL-based constraints and the BioCoder baseline. (B) Empirical evaluation on the 5 DSL-based constraints and the BioCoder baseline. 4.1 Domain corpora We compile a dataset of 16,194 experimental protocols across five domains: Genetics (8794 protocols), Medical and Clinical Research (7351, “Medical” for abbreviation), Ecology and Environmental Research (812, “Ecology”), Bioengineering (3597, “BioEng”), and Computational Biology (348, “InfoBio”), with minimal overlap between them. This diverse dataset, essential for testing our methodology, reflects the unique syntactic and semantic requirements of each domain. Please refer to Appx. D.1 for details on data collection. 4.2 What makes a good DSL? We leverage a systematic approach to gain insight into what constitutes the appropriate objectives of DSLs, employing the frameworks introduced by Guizzardi (2013) for quantitative and Karsai et al. (2009) for empirical assessment. Quantitative evaluation We check the mapping from ontology elements in the reference model, i.e., concepts and relations in the domain corpus, to DSL constructs of constraints, focusing on four criteria: soundness, lucidity, completeness, and laconicity. An ideal DSL should guarantee that (i) all ontology elements are mapped to the construct set for soundness; (ii) every ontology element is mapped to only one construct for lucidity; (iii) the construct set does not possess any redundancy beyond the ontology elements for completeness; (iv) every construct refers to only one ontology element for laconicity. Empirical evaluation We take the design guidelines of DSL as questions for assessing the resulting DSLs by AutoDSL , from a user-centric perspective. 
These questions range from the objective consistency for machine interpretation to the subjective complexity for user understanding. Specifically, the DSL should maintain (i) simplicity, i.e., being easy to understand both syntactically and semantically; (ii) clarity, i.e., pruning unnecessary space for generalization; and (iii) conciseness, i.e., avoiding redundant concepts and relations. Please refer to Appx. E.2 for details. 4.3 Quantitative evaluation Method To quantitatively evaluate the effectiveness of DSL-based constraints in protocol representation, we leverage maximum-recall domain-specified ontology knowledge extracted from domain-specific corpora. Such ontology, represented as a graph consisting of concept-relation triplets, serves as the groundtruth for our assessment on the DSLs designed by AutoDSL. Results The analysis on the DSLs designed by AutoDSL , comparing with BioCoder across several domains, is detailed in Fig. 5A. Specifically, our DSLs adeptly model 43.47% of the concepts with precise constraints, i.e., soundness, of which 25.93% showcase a direct oneto-one correspondence, i.e., lucidity. Furthermore, the completeness metric reveals that 50.51% of DSL constraints correspond with specific domain concepts, and 37.74% of constraints are uniquely aligned with a single domain concept, i.e., laconicity. In contrast, BioCoder demonstrates significantly lower performance metrics (lucidity: 1.05%, soundness: 1.61%, laconicity: 5.46%, completeness: 9.22%). This trend of a 5-to-20-fold improvement in the resulting DSLs over BioCoder on each metric is consistent across the five domains, as highlighted in our results. Discussion The quantitative evaluation underscores the superiority of the DSLs designed through AutoDSL over BioCoder, the established DSL hand-crafted by domain experts, in terms of four quantitative metrics. Despite the inherent specificity of the protocols, this assessment high12182 lights that a compact subset of constraints can be precisely defined and extracted. These results validate our framework’s ability to model constraints that more accurately and effectively encapsulate domain-specified procedural knowledge. 4.4 Empirical evaluation Method We institute an automatic evaluation framework leveraging a state-of-the-art LLM, e.g., GPT-4, augmented with a novel Questionanswering System (QA System) designed to simulate the analytical capabilities of human domain experts. This QA System is engineered using an index vector embedding technique to integrate domain-specific corpora from the five domains into the LLM. To facilitate nuanced and domainaware responses, we employed advanced promptengineering strategies, including the Chain-ofThought (CoT) technique (Wei et al., 2022), enabling the QA System to generate assessments that closely mirror those of a domain expert. The system’s empirical evaluation was predicated on its performance across a set of 50 meticulously designed questions, as delineated in Appx. E.2. Meta-evaluation Prior to deploying this automated evaluation mechanism across our DSLs, we conduct a preliminary meta-evaluation to ascertain the congruence between the QA System’s assessments and those of human domain experts, following the routine of automatic evaluation (Schuff et al., 2023). This process involves a comparative analysis on four subsets extracted from our DSLs and BioCoder, with both human domain experts pN “ 3q and the QA System providing ratings. 
Meta-evaluation Prior to deploying this automated evaluation mechanism across our DSLs, we conduct a preliminary meta-evaluation to ascertain the congruence between the QA System's assessments and those of human domain experts, following the standard routine for validating automatic evaluation (Schuff et al., 2023). This process involves a comparative analysis of four subsets extracted from our DSLs and BioCoder, with both human domain experts (N = 3) and the QA System providing ratings. We observe no significant evidence that human experts and the QA System rate differently (t(99) = -1.282, μd ≠ 0, p = .202), validating the application of our automated evaluation framework in assessing the resulting DSLs.

Results In the ensuing phase, the QA System appraises the five DSLs, yielding ratings and confidence scores. A subsequent analysis employing paired-samples t-tests (see Fig. 5B; t(49) = 3.487, μd ≤ 0, p < .005) substantiates that our resulting DSLs exhibit quality metrics on par with those of BioCoder, the benchmark DSL meticulously crafted using C/C++.

Discussion The empirical evaluation not only underscores the viability of our AutoDSL framework in automating the design of domain-specific languages, but also highlights its potential to match or even surpass the quality of manually engineered solutions like BioCoder. This is primarily attributed to two design decisions: optimizing a broad language feature set into compact syntactic constraints, and tailoring operations to the domain for precise semantic constraints. These strategies establish a clean and compact DSL-based constraint system that adheres to domain conventions.

5 Constraint utility assessment

In this section, we evaluate the utility of leveraging DSL-based constraints for the representation of newly collected experimental protocols across the five domains, considering both syntax and semantics. Please refer to Tab. 1 for a demonstration.

5.1 Materials

Our dataset includes 186 newly collected protocols, gathered from recent experiments by domain experts across the specified domains. These protocols, which have been validated for accuracy, serve as a testbed for evaluating the performance of DSL-based constraints on data unseen during the design phase. We identify the challenges posed by four syntactic features: Imperative control flows, Type system, Concurrent, and Reactive model. There are also three types of semantic errors: Action undefinition, Parameter omission, and Parameter under-specification. For syntactic constraints, we determine the success rate as the proportion of samples in which the target challenging syntactic features are accurately constrained. Similarly, for semantic constraints, the success rate is the proportion of samples in which the DSL successfully identifies and resolves the semantic errors. For a fair comparison, we transform the varied outputs from the different approaches into a unified JSON-style representation. For strings referring to names of instances, we relax the exact-match criterion to a similarity-based score (Papineni et al., 2002).

5.2 Methods

We employ a multi-dimensional approach to assess the utility of DSL-based constraints, comparing them with several alternative methods. All approaches maintain the integrity of an end-to-end workflow with minimal adjustments to the LLM.
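The success-rate computation and the relaxed matching of instance names can be sketched as follows. The token-overlap score is a crude stand-in for the BLEU-style similarity cited above, and the threshold and helper names are assumptions rather than the exact values used.

def token_overlap(pred, gold):
    """Crude token-overlap proxy for the BLEU-style similarity used to relax
    exact matching of instance names (the threshold below is an assumption)."""
    p, g = set(str(pred).lower().split()), set(str(gold).lower().split())
    return len(p & g) / max(len(p | g), 1)

def field_matches(pred, gold, threshold=0.5):
    # Strings (e.g. reagent names) are matched by similarity; other fields exactly.
    if isinstance(gold, str):
        return pred is not None and token_overlap(pred, gold) >= threshold
    return pred == gold

def success_rate(predictions, references, is_success):
    """Proportion of protocols whose unified JSON-style representation passes
    `is_success(pred, ref)`, e.g. every target syntactic feature is correctly
    constrained, or no semantic error remains unresolved."""
    hits = sum(bool(is_success(p, r)) for p, r in zip(predictions, references))
    return hits / len(references)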
Our DSL-based constraint & LLM Incorporating a DSL as an external constraint involves using it as an interpreter for the programs generated by the LLM. Following protocols suggested by recent research (Gao et al., 2023; Zhang et al., 2023), we prompt the LLM with the DSL syntax grammar and the semantic operation set, together with running examples, directing it to translate procedural texts into corresponding programs. These programs are then verified and potentially corrected using the DSL's syntactic and semantic constraints. This approach minimizes external dependencies, striving for a seamless plug-and-play integration between DSLs and LLMs. We utilize GPT-3.5 as the backbone model, resulting in DSL-LLM.

Figure 6: Constraint utility assessment results on the five experimental science domains between the DSL-based constraint and alternative models. (A) Results on syntactic constraint utility assessment (higher is preferred). (B) Results on semantic constraint utility assessment (lower is preferred). (C) Newly collected protocols represented with syntactic or semantic constraints.

Programming language & LLM As comparative baselines, we explore the use of another DSL, BioCoder, and a general-purpose programming language, Python, with GPT-3.5 as the base model. These methodologies are denoted as BioCoder-LLM and Python-LLM, respectively. The main difference lies in the adoption of one-shot generation for a fair comparison, given GPT's pre-existing familiarity with both BioCoder's and Python's coding paradigms.

Structural representation & LLM We also examine the effectiveness of an elementary structural knowledge representation, specifically a Knowledge Graph (KG), as a simpler alternative. This method, KG-LLM, leverages entity-relation extraction techniques for knowledge structuring.

LLM only Finally, we assess the capability of naive LLMs operating without any structural constraints, specifically GPT-3.5 and GPT-4, to gauge the impact of LLM advancements alone; these are named LLM-only and LLM-only+, respectively.

5.3 Results

Syntactic constraints Our investigation into syntactic constraints shows that DSL-LLM surpasses all alternative approaches in terms of success rate across the five domains (see Fig. 6A). In the analysis of syntactic constraints across the five domains, DSL-LLM achieves a success rate of 93.5%, which significantly outperforms the counterparts without programming-language representation (χ²(1) = 3.979, p < .05), and also outperforms the counterparts with general syntactic constraints. This substantiates the efficacy of these domain-specific syntactic constraints in precisely constraining non-trivial protocols.

Semantic constraints The assessment of semantic constraints reveals that DSL-LLM outperforms both the strong and the weak baselines (see Fig. 6B) on the three types of errors. In the Genetics domain, DSL-LLM yields a success rate of 93.7% in addressing semantic errors, significantly outperforming alternative methods, as indicated by statistical evidence (χ²(1) = 8.378, p < .005 for action undefinition; t(185) = -3.215, μd < 0, p < .005 for parameter omission; and t(185) = -2.164, μd < 0, p < .05 for parameter under-specification). These results underscore the capability of domain-specific semantic constraints to enhance the accuracy of protocol representation, which is crucial for successful experiments.
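For concreteness, the plug-and-play integration evaluated above can be viewed as a translate-then-verify loop: the LLM drafts a DSL program from the protocol text, and the DSL's constraints flag violations that are fed back for correction. The interfaces below (dsl.grammar, dsl.check_syntax, dsl.check_semantics, call_llm) are hypothetical placeholders, and the iterative repair is one plausible realization of the "verified and potentially corrected" step in Sec. 5.2 rather than the exact mechanism used.

def dsl_llm_translate(protocol_text, dsl, call_llm, max_rounds=3):
    """Translate a protocol into a DSL program and repair it against the DSL's
    constraints (illustrative sketch; all attributes of `dsl` and `call_llm`
    are hypothetical interfaces)."""
    prompt = (
        "Syntax grammar of the DSL:\n" + dsl.grammar
        + "\nSemantic operation set:\n" + dsl.operation_signatures
        + "\nRunning examples:\n" + dsl.examples
        + "\nTranslate the following protocol into a program:\n" + protocol_text
    )
    program = call_llm(prompt)
    for _ in range(max_rounds):
        violations = (dsl.check_syntax(program)        # e.g. unsupported control flow
                      + dsl.check_semantics(program))  # e.g. undefined action,
                                                       # omitted or vague parameter
        if not violations:
            return program                             # program passes both checks
        program = call_llm(
            prompt
            + "\nYour previous program violated these constraints:\n"
            + "\n".join(violations)
            + "\nOutput a corrected program."
        )
    return program  # best effort after max_rounds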
5.4 Discussion

The performance of the alternative approaches largely aligns with our expectations (see Fig. 6C).

Table 1: Showcases of protocol representations with all approaches. (OP: original protocol; GT: ground truth; L+: LLM-only+; each other abbreviation consists of the two initials from both sides of the dash in the approach name, e.g., DL for DSL-LLM.)

OP:
  RNA Determination: Add ammonium acetate buffer and RNaseT2, then incubate.
  Bile Processing: Use a centrifuge tube to spin at 3000g for 10 min at 4°C.
  Bile Collection: Collect bile into a sterile collection bottle.
GT:
  ADD: [[Reg: ammonium acetate buffer], [Container: None], [Volume: None], [Reg: RNaseT2]] -> incubated RNaseT2
  SPIN: [[Force: 3000g], [Time: 10min], [Temperature: 4°C], [Container: centrifuge tube]] -> centrifuged sample
  COLLECT: [[Reg: bile], [Volume: None], [Container: sterile collection bottle]] -> bile sample
DL:
  ADD: [[Reg: ammonium acetate buffer], [Container: None], [Volume: None], [Reg: RNaseT2]] -> incubated RNaseT2
  SPIN: [[Force: 3000g], [Time: 10min], [Container: centrifuge tube], [Temperature: 4°C]] -> centrifuged sample
  COLLECT: [[Reg: bile], [Volume: None], [Container: sterile collection bottle]] -> bile sample
BL:
  ADD: [[Reg: ammonium acetate buffer], [Volume: None], [Reg: RNaseT2]] -> incubated RNaseT2
  SPIN: [[Container: tube], [Force: 3000g], [Time: 10min], [Temperature: 4°C]] ->
  COLLECT: [[Container: sterile collection bottle]] -> centrifuged pellet
PL:
  ADD: [[Reg: ammonium acetate buffer], [Volume: None], [Reg: RNaseT2]] -> incubated RNaseT2
  SPIN: [[Force: 3000g], [Time: 10min], [Temperature: 4°C]] -> centrifuged sample
  CENTRIFUGE: [[Container: bottle]] -> centrifuged bile
KL:
  : [] ->
  STEP1: [[Force: 3000g], [Time: 10min], [Temperature: 4°C]] ->
  : [] ->
LO:
  : [] ->
  STEP: [[Device: centrifuge], [Force: 3000g], [Time: 10min], [Temperature: 4°C]] ->
  CENTRIFUGE: [[Container: sterile collection bottle]] -> centrifuged bile
L+:
  ADD: [[Reg: ammonium acetate buffer], [Reg: RNaseT2]] -> incubated RNaseT2
  SPIN: [[Device: tube], [Time: 10min], [Force: 3000g], [Temperature: 4°C]] -> sample
  CENTRIFUGE: [[Reg: bile], [Container: sterile collection bottle]] -> collected bile

Among them, DSL-LLM demonstrates the highest performance in addressing both syntactic and semantic constraints. While BioCoder-LLM and Python-LLM, the strong baselines, are outperformed by DSL-LLM most of the time, they still exhibit substantial advantages over the other baselines. This can be attributed to the merits shared by programming languages, such as the ability to represent structural knowledge at various levels of granularity. Although not as effective as semantic constraints, which exactly define the structures and legal ranges for the space of operations, reagents, and conditions, this type of representation still constrains the potential search space to some extent. On the other hand, the flattened structural representation of KGs cannot provide KG-LLM with the same expressive capacity as programming languages. Despite the expectation that GPT-4 would be much more capable than GPT-3.5, the performances of LLM-only and LLM-only+ are comparable, suggesting that a pure text representation may not be suitable for processing complicated procedures such as protocols. The relative success of DSL-LLM indicates the potential of DSLs as external constraints for LLMs.

6 General discussions

In this work, we present the AutoDSL framework as a proof-of-concept to facilitate the automation of designing DSL-based constraints across various domains.
Through both quantitative and qualitative evaluations of the DSLs designed by AutoDSL in five distinct domains, we demonstrate its capability as an auxiliary module for LLMs.

Rationale behind DSL The decision to leverage DSLs for constraint representation is rooted in several considerations. Primarily, DSLs adeptly capture domain-specified syntactic and semantic constraints, aligning well with the difficulty of representing protocols with complicated control flows and operations. Moreover, DSLs leverage a deterministic verification mechanism derived from general-purpose programming languages, offering a robust means of imposing constraints on the inherently nondeterministic outputs of LLMs. Additionally, DSLs are user-friendly to both humans and machines, maintaining a minimal set of language features that facilitates ease of adoption.

No universal constraint It is unrealistic to expect a one-size-fits-all protocol constraint applicable across varied domains. For a constraint system to accurately delineate the execution space of a particular domain, it must encompass domain-specific syntax and semantics. Though it is possible to devise a comprehensive set of constraints that covers the requirements of all conceivable domains, such an approach would yield a constraint system of prohibitive complexity, rendering it impractical for end-users. Conversely, simplifying this universal constraint to enhance user-friendliness inevitably compromises its capability, leading to the expressivity-complexity dilemma (Abelson and Sussman, 1996). Rather than seeking an elusive generality, focusing on domain-specific constraint development and striving for the automation of this process may offer a pragmatic way to circumvent this dilemma.

Ethics statement

Human participants The meta-evaluation and data annotation included in this work have been approved by the Institutional Review Board (IRB) of Peking University. We have been committed to upholding the highest ethical standards in conducting this study and to ensuring the protection of the rights and welfare of all participants. Every domain expert was paid a wage of $22.50 per hour for participating in the meta-evaluation and data annotation. We obtained informed consent from all participants, including clear and comprehensive information about the purpose of the study, the procedures involved, the risks and benefits, and the right to withdraw at any time without penalty. Participants were also assured of the confidentiality of their information. Any personal data collected, including name, age, gender, institution, and education background, was handled in accordance with applicable laws and regulations.

Corpora collection We carefully ensure that all experimental protocols incorporated into our corpora strictly adhere to open-access policies, governed by Creative Commons licenses. This approach guarantees full compliance with copyright and intellectual property laws, eliminating any potential infringement or unauthorized use of protected materials. By exclusively utilizing resources that are freely available and legally distributable, we uphold the highest standards of ethical conduct in research, fostering an environment of transparency and respect for the intellectual property rights of others. This commitment ensures that our work not only advances the frontiers of knowledge but does so in a manner that is both legally sound and ethically responsible.
Limitations As a proof-of-concept work, the design and evaluation of AutoDSL come with limitations, leading to further investigations: • We majorly consider the imperative programming model as the prior model for DSL design in the work. This raises the question of whether incorporating alternative programming paradigms, such as functional and objectoriented models, could enhance the representation of complex entities within protocols, particularly the properties of reagents. • AutoDSL ’s current framework outputs only syntactic and semantic constraints, lacking an explicit production system. This leads to the question of whether it is feasible to augment the AutoDSL framework to autonomously generate both a production system and a constraint system, leveraging domain-specific corpora and pre-existing knowledge on programming languages. Such an enhancement could significantly improve DSL’s potential on planning. • To ensure a fair comparison and to underscore the plug-in capability of the developed DSLs, only minimal prompt engineering is applied in protocol processing. This posits the potential for developing tools that could more effectively intertwine DSLs with LLMs. • Notably, the DSL-InfoBio’s quantitative evaluation outcomes are markedly inferior to those of its DSL counterparts. Considering the notably smaller size of the InfoBio domain corpus relative to other domain-specific corpora, this prompts an investigation into the potential correlation between the scale of a domain corpus and the quality of the resulting DSL. With many questions unanswered, we hope to explore more on automated design of DSL-based constraints for procedural representation. Reproducibility The project page with supplementary files for reproducing the results of this paper is available at https://autodsl.org/procedure/ papers/acl24shi.html. Acknowledgements This work was supported by the National Natural Science Foundation of China under Grant 91948302. Part of the authors are visiting students at Peking University during this work. In particular, Z. Bi is visiting from Huazhong University of Science and Technology, X. Wei is visiting from Beijing Jiaotong University. The authors would like to appreciate these institutes, and also Prof. Wenjuan Han for her support. References Harold Abelson and Gerald Jay Sussman. 1996. Structure and interpretation of computer programs. The MIT Press. Vaishnavi Ananthanarayanan and William Thies. 2010. Biocoder: A programming language for standardizing and automating biology protocols. Journal of Biological Engineering, 4(1):1–13. 12186 Monya Baker. 2016. 1,500 scientists lift the lid on reproducibility. Nature, 533(7604). Anthony Brohan, Yevgen Chebotar, Chelsea Finn, Karol Hausman, Alexander Herzog, Daniel Ho, Julian Ibarz, Alex Irpan, Eric Jang, Ryan Julian, et al. 2023. Do as i can, not as i say: Grounding language in robotic affordances. In Conference on Robot Learning. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems. Bruce G Buchanan and Edward A Feigenbaum. 1978. Dendral and meta-dendral: Their applications dimension. Artificial Intelligence, 11(1-2):5–24. Noam Chomsky. 1957. Syntactic Structures. Mouton de Gruyter. Lucia Donatelli, Theresa Schmidt, Debanjali Biswas, Arne Köhn, Fangzhou Zhai, and Alexander Koller. 2021. Aligning actions across recipe graphs. 
In Annual Conference on Empirical Methods in Natural Language Processing. Edward A Feigenbaum. 1981. Expert systems in the 1980s. State of the art report on machine intelligence. Maidenhead: Pergamon-Infotech, 23. Martin Fowler. 2010. Domain-specific languages. Pearson Education. Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Pal: Program-aided language models. In International Conference on Machine Learning. Giancarlo Guizzardi. 2013. Ontology-based evaluation and design of visual conceptual modeling languages. Domain Engineering: Product Lines, Languages, and Conceptual Models, pages 317–347. John E. Hopcroft, Rajeev Motwani, and Jeffrey D. Ullman. 1996. Introduction to Automata Theory, Languages, and Computation. Addison-Wesley Longman Publishing Co., Inc. Jiani Huang, Ziyang Li, Binghong Chen, Karan Samel, Mayur Naik, Le Song, and Xujie Si. 2021. Scallop: From probabilistic deductive databases to scalable differentiable reasoning. In Advances in Neural Information Processing Systems. Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. 2022. Language models as zeroshot planners: Extracting actionable knowledge for embodied agents. In International Conference on Machine Learning. Gabor Karsai, Holger Krahn, Claas Pinkernell, Bernhard Rumpe, Martin Schindler, and Steven Völkel. 2009. Design guidelines for domain specific languages. In OOPSLA Workshop on Domain-Specific Modeling (DSM’ 09). Chloé Kiddon, Ganesa Thandavam Ponnuraj, Luke Zettlemoyer, and Yejin Choi. 2015. Mise en place: Unsupervised interpretation of instructional recipes. In Annual Conference on Empirical Methods in Natural Language Processing. Mahnaz Koupaee and William Yang Wang. 2018. Wikihow: A large scale text summarization dataset. arXiv preprint arXiv:1810.09305. Marjan Mernik, Jan Heering, and Anthony M Sloane. 2005. When and how to develop domain-specific languages. ACM Computing Surveys (CSUR), 37(4):316–344. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems. Yoshio Momouchi. 1980. Control structures for actions in procedural texts and pt-chart. In COLING 1980 Volume 1: The 8th International Conference on Computational Linguistics. Marcus R Munafò, Brian A Nosek, Dorothy VM Bishop, Katherine S Button, Christopher D Chambers, Nathalie Percie du Sert, Uri Simonsohn, EricJan Wagenmakers, Jennifer J Ware, and John Ioannidis. 2017. A manifesto for reproducible science. Nature Human Behaviour, 1(1):1–9. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics. Paolo Pareti, Ewan Klein, and Adam Barker. 2014a. A semantic web of know-how: linked data for community-centric tasks. In Proceedings of the International Conference on World Wide Web. Paolo Pareti, Benoit Testu, Ryutaro Ichise, Ewan Klein, and Adam Barker. 2014b. Integrating know-how into the linked data cloud. In Knowledge Engineering and Knowledge Management: 19th International Conference. Xavier Puig, Kevin Ra, Marko Boben, Jiaman Li, Tingwu Wang, Sanja Fidler, and Antonio Torralba. 2018. Virtualhome: Simulating household activities via programs. In Conference on Computer Vision and Pattern Recognition. 
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits 12187 of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485–5551. Bertrand Russell. 1923. Vagueness. The Australasian Journal of Psychology and Philosophy, 1(2):84–92. Stuart J Russell and Peter Norvig. 2010. Artificial intelligence a modern approach. Prentice Hall Press. Helmut Schmid. 1999. Improvements in part-ofspeech tagging with an application to german. In Natural Language Processing using very large Corpora. Hendrik Schuff, Lindsey Vanderlyn, Heike Adel, and Ngoc Thang Vu. 2023. How to do human evaluation: A brief introduction to user studies in nlp. Natural Language Engineering, pages 1–24. Yu-Zhe Shi, Shiqian Li, Xinyi Niu, Qiao Xu, Jiawen Liu, Yifan Xu, Shiyu Gu, Bingru He, Xinyang Li, Xinyu Zhao, et al. 2023. PersLEARN: Research training through the lens of perspective cultivation. In Annual Meeting of the Association for Computational Linguistics. Niket Tandon, Keisuke Sakaguchi, Bhavana Dalvi, Dheeraj Rajagopal, Peter Clark, Michal Guerquin, Kyle Richardson, and Eduard Hovy. 2020. A dataset for tracking entities in open domain procedural text. In Annual Conference on Empirical Methods in Natural Language Processing. Alfred Tarski. 1946. Introduction to Logic and to the Methodology of Deductive Sciences. Dover Publications. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, and Ee-Peng Lim. 2023. Plan-and-solve prompting: Improving zeroshot chain-of-thought reasoning by large language models. arXiv preprint arXiv:2305.04091. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems. Li Zhang, Hainiu Xu, Yue Yang, Shuyan Zhou, Weiqiu You, Manni Arora, and Chris Callison-Burch. 2023. Causal reasoning of entities and events in procedural texts. In Findings of the Association for Computational Linguistics: EACL 2023. Ziqi Zhang, Philip Webster, Victoria S Uren, Andrea Varga, and Fabio Ciravegna. 2012. Automatically extracting procedural knowledge from instructional texts using natural language processing. In LREC. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2023. Judging llm-as-a-judge with mt-bench and chatbot arena. arXiv preprint arXiv:2306.05685. Shuyan Zhou, Pengcheng Yin, and Graham Neubig. 2022a. Hierarchical control of situated agents through natural language. In Proceedings of the Workshop on Structured and Unstructured Knowledge Integration. Shuyan Zhou, Li Zhang, Yue Yang, Qing Lyu, Pengcheng Yin, Chris Callison-Burch, and Graham Neubig. 2022b. Show me more details: Discovering hierarchies of procedures from semi-structured web data. In Annual Meeting of the Association for Computational Linguistics. 12188 A Additional remarks A.1 AutoDSL and expert systems The expert system is a highly structured representation of knowledge, and has been intensively studied while widely applied in the past few decades. 
Compared with the statistical model, however, it often requires more hand-crafted effort for development, and the workload increases dramatically with the increasing generality level. This echoes the primary goal of our proposed AutoDSL framework, which makes efforts to formulate the structure of knowledge representation, similar to classic expert systems such as Buchanan and Feigenbaum (1978); Feigenbaum (1981), while adopting the power of data and LLM for domain generality and adaptation. A.2 AutoDSL framework and LLM The current AutoDSL framework is essentially an LLM-in-the-loop approach. In the pre-processing of raw protocols, we take full advantage of LLM for conducting state-of-the-art Information Extraction (IE), obtaining key information from raw text, such as entities of operations, entities of conditions, and relations between different operations, etc. Such key information is taken as the input of our core algorithms for syntactic constraint optimization and semantic constraint abstraction. We would like to reiterate that AutoDSL is not an alternative for LLM. The proposed AutoDSL framework is in essence yet another workflow for utilizing LLM to handle protocols with constraints. Compared with pure end-to-end workflow of LLM for producing such constraints, our framework is designed to be an explicit two-stage workflow. In the first stage, the framework automatically designs DSLs for target domains in a bidirectional optimization fashion. In the second stage, the resulting DSLs serve as plug-and-play constraint modules for all kinds of LLMs to handle protocols precisely, preventing them from generating free-formed and non-determinstic procedural interpretations. B On the generality of AutoDSL B.1 Generalize in the context of experimental protocols Regarding domains related to experimental sciences, there are syntactic constraints shared by domains and those unique to domains respectively, as we have discussed in Sec. 3.4. These shared syntactic constraints mainly reflect the procedural nature across domains, and unique ones attributed to the composition types and the interruption in procedure execution. Through the following demonstrations, we aim to show that while semantic constraints are more diverse than syntactic constraints, there are still substantial distinctions between syntactic constraints across different domains. Shared syntactic constraints across domain There are some examples of syntactic constraints that are shared across different areas: • allocate-statement: Inoculate 5 ml LB medium containing 100 µg/ml ampicillin with bacteria. • if-branch: If cells are to be treated with PTX, divide them into two groups: one receives 100 ng/ml PTX. • parallel-for: Iterate different MgCl2 concentrations (1.5mM, 2.0mM, 2.5mM, and 3.0mM) to find the optimal concentration for DNA amplification. • temporal-type-declaration: Gently shake the reagent for 5 seconds to mix. Unique syntactic constraints of one particular domain There are some examples of syntactic constraints that are unique to particular domains respectively: • In BioEng, container-type-declaration: Resuspend the pellet in PBS to refill the tube. • In Ecology, string-type-declaration: Close the soundproof chamber as instruction manual (see "https://—"). 12189 • In InfoBio, raise-statement: If unbalanced spectral peaks, noisy data, and frame loss occur during recording, ensure to subtract background noise and adjust for hemodynamic changes. 
Unique semantic constraints of one particular domain There are some examples of semantic constraints that are unique to particular domains respectively: • In Genetics, – DILUTE: Dilute the Spike-inAmpR fragment to a concentration of 65.6 nM. – ATTACH: Attach a Slide-A-Lyzer Float Buoy to the top (single) dialysis clip. – DELETE: Detect the flow-through samples from each step with SDS-PAGE gel. • In Medical, – ASPIRATE: Aspirate the DPBS and add 40 µ03bcL of Sigma Lysis Solution for Blood. – ANESTHETIZE: Anesthetize the rat intraperitoneally (i.p.). – DISSECT: Dissect the fly brain under a stereomicroscope with light sources. • In Ecology, – STERILIZE: Sterilize the seeds with 5% (v/v) sodium hypochlorite. – QUANTIFY: Quantify the eggs by placing washed eggs in a 10-mL graduated cylinder. – CALIBRATE: Calibrate the motion sensor using a two-axis actuator. B.2 Generalize beyond experimental protocols AutoDSL focuses on a specialized form of natural language: procedural text, and is designed to formalize this procedural text, offering a clear execution trace and a readily verifiable interface. We select experimental sciences experiments for their demands on high interpretability, stringent execution, and adaptable planning. This methodology can be easily applied to new domains beyond experimental sciences, where such rigorous conditions are not necessary, including culinary recipes. Examples of various syntactic constraints are provided in Tab. A1. Table A1: Showcases of syntactic constraints in culinary recipes Constraint Original recipe text integer-type-declaration 8 [ounces] white fresh {pasta} device-type-declaration Yield: 2 plates floatingpoint-type-declaration 1/3 [cup] red {wine} temporal-type-declaration After |2 minutes| more, add the beef. We can also effectively design a DSL with such a corpus where actions happen in a kitchen instead of a lab. We showcase a recipe example adopted from the Corel1 DSL as follows. 1 Pasta Bolognese 2 3 Yield: 2 plates 4 5 Ingredients: 6 7 - 8 [ounces] white fresh {pasta} 8 - 1 [floz] olive {oil} 9 - 1/4 [ounce] {garlic}; minced 10 - 4 [ounces] {onions}; chopped 11 - 4 [ounces] shallow fried {beef}; minced 12 - 1 - 1 1/2 [ounce] lean prepared {bacon} 13 - 1/3 [cup] red {wine} 1Visit https://fse.studenttheses.ub.rug.nl/25731/ for documentation. 12190 14 - 150 [gram] raw {carrots}; thinly sliced 15 - 2/3 [ounce] concentrated {tomato puree} 16 - 4 [ounces] red {sweet pepper}; cut julienne 17 - 1 [ounce] {parmesan} cheese 18 19 Instructions: 20 21 Add the @oil@ to a large saucepan, heat to <300 F>, and saute the @onions@. 22 23 After |2 minutes|, add the @garlic@. 24 25 Keep on medium to high heat, and don't stir. 26 27 After |2 minutes| more, add the @beef@. 28 29 Fry the @bacon@ in a separate pan, on high heat. 30 31 Remove liquified fat when done. 32 33 Boil @pasta@ in a medium pan, until al dente (~|8 minutes|). 34 35 Drain when done. 36 37 Once the @beef@ is done, add the @carrots@, @sweet pepper@ and @tomato puree@. 38 39 Slowly add the @wine@ as well, to not lower the temperature. 40 41 Let it simmer (but not boil) for |5-10 minutes|. 42 43 Add the @bacon@ to the large saucepan. 44 45 Serve with grated @parmesan@ cheese. Based on the example, it is trivial to figure out that the constraints on cooking recipes are essentially in a subset of those scientific procedures. The cooking procedures mainly focus on sequentially executing the actions without switching their execution orders. 
Also, most of the culinary ingredients are processed in single-thread fashion without duplication for different experimental groups. Hence, the Corel DSL does not require some syntactic constraints for non-linear control flows, such as loop, branch, and split, while these constraints are dominant in DSLs for protocols. AutoDSL will adaptively exclude these unnecessary features when optimizing the DSL for culinary procedures given the corpus of recipes. Regarding semantics constraints, there are fewer shared semantics on operations, ingredients, and conditions between cooking and experimental sciences compared to those semantics shared between different domains within experimental sciences. However, this increment of semantic diversity would not yield significant challenge for AutoDSL , as our semantic reduction does not rely on domain transfer. B.3 Extended discussions When considering both syntactic and semantic constraints across different domains, we observe that there are significant differences within the context of experimental sciences. However, when looking at a more general context that encompasses all possible procedural knowledge worldwide, these differences tend to converge. This suggests that DSLs may have a hierarchical structure in the general context. The shared features of different DSLs are more common closer to the root, such as the operation incubate shared by Genetics and Ecology in experimental sciences, and the operation add shared by Genetics, Ecology, and cooking. These common DSL constructs possess general semantics. On the other hand, as we move closer to the leaf, the majority of DSL constructs become specialized for unique domains. This 12191 parallels the spectrum of naturalism in sciences (Shi et al., 2023), ranging from a general community with convergent high-level concepts to a specific community with divergent low-level knowledge. C Implementation details C.1 Implementation of the syntax prior model We employ the CFG generally for designing modern imperative programming language as the prior model ppZ|Sq for syntax optimization. 
1 <program> ::= <statements> 2 3 4 <statements> ::= <statement> 5 | <statement> <statements> 6 7 /* All support */ 8 <statements> ::= <imperative-model> 9 | <runtime-error-handling> 10 | <type-system> 11 | <concurrent> 12 | <react> 13 14 15 /* Imperative Model */ 16 <imperative-model> ::= <if-statement> 17 | <loop-statement> 18 | <jump-statement> 19 | <memory-management> 20 | <function-procedure> 21 | <arithmetic-expression> 22 | <logical-expression> 23 | <assignment-expression> 24 25 <if-statement> ::= "if" "(" <expression> ")" "{" <statements> "}" 26 | "if" "(" <expression> ")" "{" <statements> "}" "else" "{" < statements> "}" 27 28 <loop-statement> ::= "While" "(" <expression> ")" "{" <statements> "}" 29 | "For" "(" <assignment-expression> ";" <expression> ";" < assignment-expression> ")" "{" <statements> "}" 30 31 <jump-statement> ::= "break" 32 | "continue" 33 34 <function-procedure> ::= "Call" <identifier> "(" <arguments> ")" 35 | "Function" <identifier> "(" <parameters> ")" "{" < statements> "}" 36 37 <memory-management> ::= "allocate" <type> <identifier> 38 | "deallocate" <identifier> 39 40 <assignment-expression> ::= <identifier> "=" <expression> 41 42 <arithmetic-expression> ::= <expression> <arithmetic-operator> <expression> 43 44 <logical-expression> ::= <expression> <logical-operator> <expression> 12192 45 46 /* Runtime Error Handling */ 47 <runtime-error-handling> ::= <raise-stmt> 48 | <resolve-stmt> 49 50 <raise-stmt> ::= "raise" "(" <expression> ")" 51 52 <resolve-stmt> ::= "try" "{" <statements> "}" "catch" "(" <identifier> ")" "{" < statements> "}" 53 54 /* Type System */ 55 <type-system> ::= <data-type> 56 | <class-type> 57 | <domain-specified-type> 58 59 <domain-specified-type> ::= "time" 60 | "reagent" 61 | "device" 62 | "container" 63 | <scientific-type> 64 65 <data-type> ::= "int" 66 | "float" 67 | "bool" 68 | "string" 69 | "set" "<" <type> ">" 70 | "dict" "<" <type> "," <type> ">" 71 | "vector" "<" <type> ">" 72 73 <class-type> ::= "class" <identifier> "{" <class-body> "}" 74 75 <kind-type> ::= "type" <identifier> "=" <type> 76 77 <class-body> ::= <class-members> 78 79 <class-members> ::= <class-member> | <class-member> <class-members> 80 81 <class-member> ::= <variable-declaration> 82 | <method-declaration> 83 84 <variable-declaration> ::= <assignment-expression> 85 86 <method-declaration> ::= <function-procedure> 87 88 /* Concurrent Programming */ 89 <concurrent> ::= <data-parallel> 90 | <message-passing> 91 92 <data-parallel> ::= "parallelFor" "(" <parallel-range> ")" "{" <statements> "}" 93 | "parallelMap" "(" <collection> "," <function> ")" 94 95 <parallel-range> ::= "range" "(" <expression> "," <expression> ")" // Define start, end of range 96 12193 97 <collection> ::= <identifier> // Reference to a collection of data, e.g., array, list 98 99 <function> ::= <identifier> // Reference to a function to apply in parallel 100 101 <message-passing> ::= "spawnProcess" "(" <process-function> ")" 102 | "sendMessage" "(" <process-identifier> "," <message> ")" 103 | "receiveMessage" "(" <message-type> ")" 104 105 <process-function> ::= <identifier> 106 107 <process-identifier> ::= <identifier> 108 109 <message> ::= <expression> 110 111 <message-type> ::= <type> 112 113 114 /* React Model */ 115 <react> ::= <event-stmt> 116 | <response-stmt> 117 118 <event-stmt> ::= "emit" "(" <event> ")" 119 120 <response-stmt> ::= "on" "(" <event> ")" "{" <statements> "}" 121 122 /* Auxiliary Definitions */ 123 <type> ::= <data-type> | <class-type> | <kind-type> | "void" 124 125 
<parameters> ::= <empty> | <parameter> | <parameter> "," <parameters> 126 127 <parameter> ::= <identifier> ":" <type> 128 129 <arguments> ::= <empty> | <expression> | <expression> "," <arguments> 130 131 <arithmetic-operator> ::= "+" | "-" | "*" | "/" 132 133 <logical-operator> ::= "&&" | "||" | "!" 134 135 <identifier> ::= <letter> (<letter> | <digit>)* 136 137 <letter> ::= "A" | "B" | ... | "Z" | "a" | "b" | ... | "z" 138 139 <digit> ::= "0" | "1" | "2" | ... | "9" 140 141 <event> ::= <identifier> 12194 C.2 Implementation of the prior belief function ppθk|ciq serves as a prior and represents the probability that programming language features θk are present within the experimental protocol ci. ppθk|ciq “ $ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ & ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ ’ % maxLpopcq i“1 ExistspINTiq if θk “ interger-type-declaration maxLpopcq i“1 ExistspFLOATiq if θk “ floatingpoint-type-declaration maxLpopcq i“1 maxpExistspTRUEiq, ExistspFALSEiqq if θk “ boolean-type-declaration maxLpopcq i“1 ExistspSTRINGiq if θk “ string-type-declaration maxLpopcq i“1 maxtSpNOUNi, "vector"qu if θk “ vector-type-declaration maxLpopcq i“1 maxtSpNOUNi, "dictionary"qu if θk “ dict-type-declaration maxLpopcq i“1 maxtSpNOUNi, "set"qu if θk “ set-type-declaration maxLpopcq i“1 ExistspTIMEiq if θk “ temporal-type-declaration maxLpopcq i“1 maxtBelongTopNOUNi, Chemqu “ maxLpopcq i“1 ExistspCHEMiq if θk “ reg-type-declaration 1 if θk “ device-type-declaration maxLpopcq i“1 maxtBelongTopNOUNi, Containerqu “ maxLpopcq i“1 ExistspCONTAINERiq if θk “ container-type-declaration 1 if θk “ scientific-type-declaration maxLpopcq i“1 Spopci, "repeat"q if θk “ for-loop maxLpopcq i“1 Spopci, "repeat"q ˆ !ExistspINTiq if θk “ while-loop maxLpopcq i“1 ExistspIFiq if θk “ if-branch maxLpopcq i“1 ExistspIFiq ˆ ExistspELSEiq if θk “ if-else-branch maxLpopcq i“1 Spopci, "call"q if θk “ function-procedure-call maxLpopcq i“1 Spopci, "call"q if θk “ function-procedure-declaration Qpwhile-loop|cq ˆ Qpif-branch|ciq if θk “ break-statement 0 if θk “ continue-statement maxLpopcq i“1 maxtBelongTopNOUNi, Chemq, BelongTopNOUNi, Containerqu if θk “ allocate-statement maxLpopcq i“1 ExistspADDiq if θk “ add-arithmetic-operator maxLpopcq i“1 ExistspMINUSiq if θk “ minus-arithmetic-operator maxLpopcq i“1 ExistspMULTIPLYiq if θk “ multi-arithmetic-operator maxLpopcq i“1 ExistspDEVIDEiq if θk “ devid-arithmetic-operator maxLpopcq i“1 ExistspANDiq if θk “ and-arithmetic-operator maxLpopcq i“1 ExistspORiq if θk “ or-arithmetic-operator maxLpopcq i“1 ExistspNOTiq if θk “ not-arithmetic-operator maxLpopcq i“1 ExistspEQUALiq if θk “ assignment-expression maxLpopcq i“1 maxtSpNOUNi, "error"qu if θk “ raise-statement maxLpopcq i“1 Spopci, "resolve"q ˆ maxtSpNOUNi, "error"qu if θk “ resolve-statement D x P Chem : řLpopcq i“1 Existspx P NOUNiq ě ϵ if θk “ class-type-declaration maxLpopcq i“1 ExistspPERSONSiq if θk “ spawn-process Qpspawn-process|cq ˆ maxLpopcq i“1 Spopci, "say"q if θk “ send-message Qpspawn-process|cq ˆ maxLpopcq i“1 Spopci, "say"q if θk “ receive-message maxLpopcq i“1 ExistspWHENiq if θk “ event-statement maxLpopcq i“1 ExistspWHENiq if θk “ response-statement maxLpopcq i“1 ExistspLpNUMiq ě ϵq if θk “ parallel-for 
maxLpopcq´1 i“1 maxLpopcq j“i`1 ExistspCHEMiq ˆ ExistspCHEMi “ CHEMjqˆ !ExistspPRONiq ˆ !ExistspPRONjq ˆ p1 ´ SpSentencei, Sentencejqq if θk “ parallel-map In the context of our prior belief function, Chem and Container are predefined sets. The set Sentence includes all possible sentences, each of which begins with an opcode. These opcodes together constitute the set "opc." In our algorithm, we assign ϵ with 4 and assign δ with 0.5. We derive these various sets from filtering based on specific rules. For pre-processing, We utilize regular expressions to match a variety of elements, such as integers, floating-point numbers, boolean values, strings, and control flow elements. Then, we employ speech tagging to label nouns and pronouns (Schmid, 1999). Afterwards, we conduct entity recognition for the identification of chemical elements. To determine word similarity, we employ word embedding model to calculate cosine similarities (Mikolov et al., 2013). To assess sentence similarity, pairwise word similarity scores are averaged 12195 across the words in the sentences. Furthermore, we use GPT-3.5 to extract initial specific scientific data types. Subsequently, the preliminary extraction results are filtered through a list of candidate words to obtain the final results. The prompt is as follows: 1 You need to identify and list any physical chemistry scientific quantities mentioned in the following experimental procedure. 2 Requirements: 3 1. The output format should be only one per line as "Original Text: Scientific Quantity". 4 2. Please extract information from the given sentences without creating your own summaries. 5 Output Example: 6 50mL: Volume 7 {} 8 The given experimental procedure are: 9 {} The filtering word list is as follows: 1 "Volume", "Temperature", "Length", 2 "Energy", "Concentration", "Mass", "Speed", "Acceleration", "Density", "Frequency", 3 "Force", "Acidity", "Flow Rate", "Pressure", "Voltage" C.3 Implementation of feature vector transformation Each xi in the dataset X “ tx1, x2, ..., xNu is a feature vector representing an operation pattern extracted from the corpora. To obtain this dataset, we extract all sentences starting with verbs from the dataset using NLTK’s part-of-speech tagging firstly. We then utilize GPT to annotate the parameters within these sentences and ultimately convert them into ont-hot feature vectors. The prompt is as follows: 1 You need to complete an entity recognition task with defined entity categories: {} 2 Requirements: 3 1. The output format should be annotated on the original sentence, and only the annotated sentence should be output. 4 2. Please extract information from the given sentences without creating your own summaries. 5 3. Text is in a laboratory setting, please carefully analyze the specialized terms in the fields of biology and chemistry. 6 4. Please extract as many entities as possible from this sentence. 7 Example: 8 {} 9 The given sentences are: 10 {} C.4 Implementation of DPMM Spectral clustering via DPMM The DPMM facilitates the spectral clustering of these feature vectors into groups representing unique atomic semantic constraints, accommodating the variability and complexity of semantic patterns across different domain corpora. The iterative clustering process, centered around Gibbs sampling, estimates the posterior distributions of the model’s parameters. This not only identifies distinct semantic constraints but also adapts to the intricate semantic patterns prevalent in various domain-specified corpora. 
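Before the formal description that follows, the clustering loop can be sketched as a collapsed Gibbs sampler with a Chinese-restaurant-process prior over the one-hot feature vectors of Appx. C.3. The Bernoulli likelihood with a symmetric Beta prior used here is an illustrative assumption rather than necessarily the exact base measure of our implementation, and the MAP parameter update with σm regularization is folded into the collapsed predictive for brevity.

import numpy as np

def dpmm_gibbs(X, alpha=1.0, beta=0.5, n_iters=50, seed=0):
    """Collapsed Gibbs sampler for a Dirichlet-process mixture over binary
    feature vectors (illustrative sketch only).
    X: (N, D) array with entries in {0, 1}; returns cluster assignments."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    N, D = X.shape
    z = np.zeros(N, dtype=int)                 # start with a single cluster
    counts = {0: N}                            # cluster sizes
    sums = {0: X.sum(axis=0)}                  # per-cluster feature counts

    def log_pred(x, n, s):
        # Posterior predictive of one point under a Bernoulli-Beta cluster.
        p1 = (s + beta) / (n + 2.0 * beta)
        return float(np.sum(x * np.log(p1) + (1.0 - x) * np.log(1.0 - p1)))

    for _ in range(n_iters):
        for i in range(N):
            k = z[i]
            counts[k] -= 1
            sums[k] = sums[k] - X[i]
            if counts[k] == 0:                 # drop the emptied cluster
                del counts[k], sums[k]
            ks = list(counts)
            logp = [np.log(counts[c]) + log_pred(X[i], counts[c], sums[c]) for c in ks]
            logp.append(np.log(alpha) + log_pred(X[i], 0, np.zeros(D)))  # new cluster
            logp = np.asarray(logp)
            probs = np.exp(logp - logp.max())
            probs /= probs.sum()
            pick = rng.choice(len(ks) + 1, p=probs)
            k_new = ks[pick] if pick < len(ks) else (max(counts, default=-1) + 1)
            z[i] = k_new
            counts[k_new] = counts.get(k_new, 0) + 1
            sums[k_new] = sums.get(k_new, np.zeros(D)) + X[i]
    return z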
Algorithm symbols and initialization Key symbols include X for the dataset of feature vectors; N, the total number of samples; and L, the flexible number of clusters within the mixture model. Latent variables W indicate cluster assignments for each xi, while Ψ encompasses the set of parameters for each 12196 cluster. The concentration parameter α influences new cluster formation likelihood, with σm serving as the regularization term for parameter updates. Initially, K “ 1 and cluster parameters ψ0 are set with generic values, e.g., µ “ 0, σ “ 1, assigning all data points to an initial cluster (wi “ 0 for each i). Gibbs sampling and posterior inference The core of the algorithm lies in the Gibbs sampling loop, which iterates until convergence. It updates the cluster assignment wi for each xi by evaluating the posterior distribution that incorporates the current parameter estimates and α. The posterior probability of xi belonging to cluster l integrates the likelihood of observing xi given the cluster parameters and the prior probability of cluster membership, formalized as: ppwi “ l|w´i, xi, α, Ψq 9 ppxi|ψlq ¨ ppwi “ l|w´i, αq, (A1) where ppxi|ψlq is the likelihood of observing xi under the parameters of cluster l, and ppwi “ l|w´i, αq reflects the adjusted prior probability for existing clusters and the potential for new cluster formation, influenced by α. Following the assignment update, cluster parameters Ψ are recalibrated using Maximum A Posteriori (MAP), applying regularization with σm to ensure stability and prevent overfitting. C.5 Exemplar output There are several examples demonstrating the abstraction from natural language instructions to semantic patterns, from domains of Medical, Genetics, and Ecology. Medical: 1 { 2 "TRANSECT": [ 3 { 4 "pattern": ["REG", "REG"], 5 "example": [ 6 "Transect the [aorta]{REG} proximally to the origin of the [ brachiocephalic trunk]{REG}.", 7 "Transect the [bile duct]{REG} close to the [pancreas]{REG}.", 8 "Transect the [SHVC]{REG} along with part of the diaphragm and [IHVC ]{REG} at the level of the left renal vein when the donor liver became pallid.", 9 "Transect the liver parenchyma of the [caudate lobe]{REG} and the [ Spiegel lobe]{REG}.", 10 "Transect the [infrahepatic inferior vena cava (IHIVC)]{REG} and mobilize the [lVC]{REG} from retroperitoneal tissue." 11 ], 12 }, 13 { 14 "pattern": ["REG", "Device"], 15 "example": [ 16 "Transect the [cranial nerves]{REG} with the [scissors]{Device}.", 17 "Transect the [PHA]{REG} of the recipient at its root to expose the vascular lumen using [micro-scissors]{Device}.", 18 "Transect the [femoral artery]{REG} in the section located between the distal and proximal knots using [spring scissors]{Device}." 19 ], 20 }, 21 { 22 "pattern": ["REG", "Device", "REG"], 23 "example": [ 24 "Transect the [aorta]{REG} with [fine sharp scissors]{Device} (see Table of Materials) just proximal to the [subclavian artery]{REG} takeoff." 25 ], 12197 26 }, 27 { 28 "pattern": ["REG"], 29 "example": [ 30 "Transect the [colon]{REG}.", 31 "Transect the [umbilical ligament]{REG}.", 32 "Transect the [IVC]{REG} 1 cm above the diaphragm.", 33 "Transect the [stretcher/opener motor nerve]{REG}." 
34 ], 35 } 36 ] 37 } Genetics: 1 { 2 "RESOLVE": [ 3 { 4 "pattern": ["REG", "REG", "REG", "Length"], 5 "example": [ 6 "Resolved 1 \u00b5l of the [PCR reaction]{REG} on a [DNA gel]{REG} to confirm the successful reaction with a [DNA band]{REG} at about [10 kb]{ Length}.", 7 "Resolve 2 \u00b5l of the [PCR reaction]{REG} on a [DNA gel]{REG} to confirm the successful reaction to see if a [PCR product]{REG} of [3.5 kb]{ Length} is present." 8 ], 9 }, 10 { 11 "pattern": ["Concentration", "Concentration"], 12 "example": [ 13 "Resolve [0.05%]{Concentration} of the input (=extract/lysate) and at least [10%]{Concentration} of the eluates on a 4\u201315% Mini-PROTEAN\u00ae TGX\u2122 Precast SDS-PAA gel." 14 ], 15 }, 16 { 17 "pattern": ["REG", "REG"], 18 "example": [ 19 "Resolve the cell pellet in [PBS]{REG} and [centrifuge]{REG} again .", 20 "Resolve [5 \u00b5L]{Volume} of the [PCR]{REG} from [6-12]{String} representative samples on a [2% (w/v)]{Concentration} [agarose gel]{REG}.", 21 "Resolve [10 \u00b5l]{Volume} of [PCR reactions]{REG} on [1%]{ Concentration} [agarose gel]{Container} ([TAE]{REG} or [TBE]{REG}) with a [1 kb ]{Length} [DNA ladder]{REG} to check for positive amplification ([Figure 3C]{ String})." 22 ] 23 } 24 ] 25 } Ecology: 1 { 2 "DRILL": [ 3 { 4 "pattern": ["Device"], 12198 5 "example": [ 6 "Drill a hole big enough to allow the [sensor]{Device} to be inserted.", 7 "Drill a hole at this coordinate using a [medium tip burr]{Device} ( Figure 3)." 8 ], 9 }, 10 { 11 "pattern": [ 12 "Length" 13 ], 14 "example": [ 15 "Drill a hole (approximately [7 mm]{Length}) in the skull above the target area using coordinates from a brain atlas according to the animal used.", 16 "Drill a craniotomy of approximately [1 mm]{Length} in diameter in this location.", 17 "Drill the [craniotomy]{Length}." 18 ], 19 }, 20 { 21 "pattern": [ 22 "REG" 23 ], 24 "example": [ 25 "Drill out the hole in the [pulley]{REG} to match the diameter of the [motor shaft]{REG}.", 26 "Drill the [skull]{REG} on the marked point using a .9 mm diameter steel drill tip." 27 ], 28 }, 29 { 30 "pattern": [ 31 "Length", 32 "Device" 33 ], 34 "example": [ 35 "Drill a [3 in]{Length} hole in the top panel for the [fan]{Device }.", 36 "Drill a small burr hole (approximately [0.5 mm]{Length}) using the [Micro-Drill]{Device} at the [coordinates]{REG}." 37 ], 38 } 39 ], 40 } C.6 Extended discussions On the assignments of K and K1 Empirically, K1 and K are in the same order of magnitude, with a difference roughly ranging in 25% „ 35%, in the experiments we have conducted. Intuitively, K1 denotes an estimation of syntactic constraints before optimization and K is the exact total number of syntactic constraints after optimization. K1 is specified by an appropriate initialization. It is grounded to a predefined upper bound according to the size of the general syntax tree for C/C++. During the optimization process, the syntax tree is pruned according to the observed distribution of syntactic constraints on the corresponding domain corpus. 12199 On the scale of training corpus In our studies, we have found that approximately 348 protocols, with an average of 121 steps per protocol, totaling 42,108 steps, are needed as a minimum amount of domain-specific corpus for the AutoDSL framework to work on a specific domain and to design a DSL accordingly. This threshold ensures that the dataset possesses sufficient richness of domain-specific context and is robust for extracting syntax and semantic constraints. 
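As a bridge between Appx. C.3 and the exemplar outputs of Appx. C.5, the per-operation parameter-type patterns can be obtained by grouping annotated sentences by opcode and by the sequence of annotated entity types. The sketch below is illustrative and omits the DPMM-based reduction of Appx. C.4; the input format mirrors the inline [text]{TYPE} annotations shown above.

from collections import defaultdict
import re

def group_semantic_patterns(annotated_sentences):
    """Aggregate annotated sentences into per-operation parameter-type
    patterns in the style of the exemplar outputs of Appx. C.5.
    Each item is (opcode, sentence) with entities annotated as [text]{TYPE}."""
    grouped = defaultdict(lambda: defaultdict(list))
    for opcode, sentence in annotated_sentences:
        pattern = tuple(re.findall(r"\{(\w+)\}", sentence))  # entity types, in order
        grouped[opcode.upper()][pattern].append(sentence)
    return {
        op: [{"pattern": list(pattern), "example": examples}
             for pattern, examples in by_pattern.items()]
        for op, by_pattern in grouped.items()
    }

# e.g. group_semantic_patterns([
#     ("transect", "Transect the [colon]{REG}."),
#     ("transect", "Transect the [cranial nerves]{REG} with the [scissors]{Device}."),
# ])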
D Data sources D.1 Corpora Our corpora come from five websites including Nature2, Cell3, Bio4, Wiley5 and Jove6. From a domain perspective, our corpora are classified into 5 domains including BioEng, InfoBio, Medical, Ecology and Genetics. We quantified each domain corpus in Table A2 by assessing the website sources, total number of protocols, and the average number of steps, actions, and objects per protocol. We only use sentences with subject-verb-object and predicate-object constructions to count the steps, actions, and objects. In addition, we performed the part-of-speech tagging of each word in the sentence, treating the verb as an action and the noun as an object. To note, we take the prototype of the verb or noun for the statistics. Table A2: Statistics of the corpora across five domains Domain Nature Cell Bio Wiley Jove total step action type object type InfoBio 0 298 0 50 0 348 121 120 280 Ecology 55 166 0 34 557 812 71 88 193 BioEng 941 1404 0 22 1230 3597 70 80 188 Medical 290 1530 1061 116 4354 7351 65 80 189 Genetics 1045 2296 3522 134 1797 8794 58 73 175 We demonstrate procedure cases from five domains with respect to sources. 1 [BioEng/Nature] 2 1. Virulent S. aureus MB 2865 \(Smith strain) was grown overnight with aeration in TSB medium. 3 2. Bacteria were harvested and washed with fresh TSB and serially diluted to achieve an infective inoculum of 4 x 10<sup>3</sup> CFU/mL in 5% hog gastric mucin, and 0.5 mL was used to infect intraperitoneally. 4 3. The infected mice were anesthetized using isoflurane \(Abbott). 0.5 inch hollow connectors were attached to the external section of the JVC catheter and the mice were subsequently harnessed to the counter-balance arm inside a mouse cage. 5 4. Fluid primed PE 20 tubings attached to individual syringes on the infusion pump were connected to the mice through the 0.5 inch hollow connector. 6 5. The infusion pump was initiated for 24 hours at a flow rate of 0.1 mL/hour. 7 6. Blood samples are taken by tail vein to determine serum drug concentration during the infusion. 8 7. After the 24 hour period, the mice were euthanized and the kidneys were aseptically removed and homogenized. 9 8. Serial dilutions of the homogenates were plated on Mannitol plates and incubated overnight at 37 ˝C. 10 9. Bacterial counts were enumerated \(ref. 2). 1 [InfoBio/Cell] 2 Timing: 1 h per experiment (for step 3) 3 The copulation-triggered photostimulation system detects copulation events 2https://protocolexchange.researchsquare.com/ 3https://star-protocols.cell.com/ 4https://bio-protocol.org/en 5https://currentprotocols.onlinelibrary.wiley.com/ 6https://www.jove.com/ 12200 4 in real time and turns on the green laser required for GtACR1-mediated 5 inhibition of neural activity. This system allows for the inhibition of 6 specific neural circuits in males only during copulation. In this assay, 7 we use males that express GtACR111[href=https://www.wicell.org#bib10] in 8 piezo-expressing neurons (w1118; +/+; piezo-GAL4/UAS-GtACR1.d.EYFP). As a mating partner, we use 9 wild-type females. See 10 key resources table[href=https://www.wicell.org#key-resources-table] for details on fly 11 strains. 12 Prepare GtACR1-expressing male flies. 13 Set up the fly crosses. Introduce parental flies (4-5 14 piezo-GAL4 males and UAS-GtACR1 virgin females) into 15 vials containing fly media. The detailed genotypes are listed in the 16 key resources table[href=https://www.wicell.org#key-resources-table]. 17 18 Transfer parental flies (F0 fly) to fresh vials every 5 days. 
19 Collect F1 males of the desired genotypes according to step 1 in 20 before you begin[href=https://www.wicell.org#before-you-begin]. 21 22 Note: F0 flies are transferred to fresh 23 vials up to 2 times. 24 25 Note: Set-up several vials at a time to 26 obtain enough F1 male flies. 27 28 Note: Wild-type females are prepared as 29 mating partners according to step 1 in Before you begin. 30 31 Pause point: The experimenter can 32 decide the timing of subsequent feeding of all-trans-retinal (ATR) as long 33 as the tested flies are within 5-8 days after eclosion at the time of the 34 optogenetic assay. 35 Feeding of ATR. Male flies of the experimental group are fed food 36 containing ATR. Control males are fed food containing ethanol (EtOH) 37 solvent. 38 Prepare plastic tubes containing ~150 µL of fly food. 39 40 For the experimental group, add 2 µL of ATR, 25 mg/mL dissolved in 41 99.5% EtOH (need to be prepared just before use) on the food 42 surface. For the control group, add 2 µL of 99.5% EtOH on the food 43 surface instead of ATR. 44 Transfer males to a plastic tube containing fly food with the 45 surface covered with ATR (or EtOH) 24-36 h before the experiment. 46 Keep the male flies on prepared food for 24-36 h in the dark before 47 being used for the assays. 1 [Medical/Bio] 2 Spleens were obtained from naive and malaria-infected C57BL/6 mice and were homogenized at room temperature in 6-well plates with 6 ml of HBSS with 2% FCS through a 70 µm cell strainer to form single-cell suspensions. Red blood cells were lysed using 1 to 2 ml of RBC lysing buffer (depending on size of spleen) and splenocytes were washed once at 200 x g 4 ˝C for 5 min with cold FACS buffer (HBSS with 2% FCS). Splenocytes were resuspended by gentle tapping on a rack in FACS buffer and kept on ice at all times to avoid background phosphorylation of STAT proteins. Viability and cell counts were obtained by trypan blue 12201 exclusion using a haemocytometer. Cells were washed once with 1 ml AIM V® Medium, resuspended at 20 x 106 cells/ml and rested on ice for a minimum of 20 min. 1 ˆ 106 cells were incubated with 20 ng/ml rIL-2 or 2.5 ng/ml rIL-12 for 10 min at 37 ˝C, 5% CO2 (final volume of 200 µl) and immediately fixed on ice for 15 min by adding an equal volume of 4% paraformaldehyde. Cells were washed with FACS buffer, resuspended in 500 µl of 90% ice-cold methanol and immediately stored down at -20 ˝C for a minimum of 2 h (cells can be kept for up to a month without affecting further staining). Splenocytes were washed twice with FACS buffer and stained in FACS buffer at room temperature for 30 min for CD4 (GK1.5), CD44 (IM7), CD62L (MEL-14), T-bet (4B10) and phosphorylated STAT4 (at residue Y693, clone 38) or phosphorylated STAT5 (at residue Y694, clone 47). Cells were washed with FACS buffer and analysed by flow cytometry. Fluorescence minus one controls were included to validate flow cytometric results. Flow cytometry acquisition was performed using an LSR II. 1 [Ecology/Wiley] 2 Generating primary transformants 3 1. Sow seeds of the transformation on GM medium containing gentamicin to select for transgenic plants harboring the T-DNA. 4 2. Let the seeds maturate for 2 weeks at 22˝C in a growth chamber. 5 3. For GT: Pick at least 40 T1 plants containing pDe-EC-ttLbCas12a and transfer them to soil until maturity. 6 4. 
For cleavage activity: Extract DNA from 20 primary transformants containing the pDe-ttLbCas12a construct via the rapid DNA extraction method (see Edwards et al ., 1991[href=https://currentprotocols.onlinelibrary.wiley.com/doi/10.1002/cppb .20117#cppb20117-bib-0002]). Analyze the mutagenesis efficiency by e.g., TIDE ( see Critical Parameters). 7 5. Extract DNA from one leaf of each plant and set up a suitable PCR to verify the presence of your construct. 8 Obtaining heritable GT plants 9 6. Harvest the seeds of each T1 plant separately. 10 7. Sow about 100 seeds per line on GM medium. 11 8. Let the seeds maturate for 2 weeks at 22˝C. 12 9. Extract DNA from one leaf of 100 plants per line as a pool. 13 10. Screen for positive GT events using a suitable PCR (see Critical Parameters). 14 11. Extract DNA from each plant of the positive identified T1 pools separately after another week of growth. 15 12. Analyze the T2 plants separately for heritable GT via PCR and confirm it by sequencing (see Critical Parameters). 16 13. Transfer positive T2 plants to soil and cultivate them to maturity. 1 [Genetics/Jove] 2 All experiments involving the differentiation of human iPSC lines were performed in compliance with the Institutional Review Board of Boston University (protocol H33122). The dermal fibroblasts, procured for reprogramming to iPSCs, were obtained from a donor with written informed consent, under the approval of the Human Research Protection Office of Washington University School of Medicine, St . Louis, MO. Reprogrammed iPSCs were generated at the Center for Regenerative Medicine at Boston University and Boston Medical Center, Boston, MA. 3 1. Alveolosphere dissociation 4 Prepare complete serum-free differentiation media (cSFDM) as per the composition mentioned in Table 1. 5 Prepare CK + DCI media in the prepared cSFDM base as per Table 2. 6 Thaw 2D (human embryonic stem cell-qualified) and/or 3D (growth-factor reduced) matrix on ice as required for the experimental needs. 12202 7 Aspirate all the CK + DCI medium using a pipette or aspirating pipette with vacuum from the 3D matrix droplets containing alveolospheres, derived from directed differentiation19, in a 12-well plate. 8 Add 1 mL of dispase (2 mg/mL) per droplet. Gently pipette the droplet into the dispase using a P1000 pipette. Incubate at 37 ˝C for 1 h, pipetting up and down once after 30 min. 9 Transfer the dissociated organoids (from Step 1.5) from one matrix droplet in the dispase to a 15 mL conical tube. To wash, add 10 mL of Iscove's Modified Dulbecco's Medium (IMDM, see Table of Materials). 10 Centrifuge at 300 x g for 5 min at room temperature. Aspirate the supernatant using a pipette or aspirating pipette with vacuum, leaving as little supernatant as possible. 11 NOTE: It is important to remove all dispase as any remaining dispase may dissolve the matrix that the cells will subsequently be seeded into. If a clear haze is seen above the pellet, the dispase has not completely dissolved the matrix, and more dispase can be added to the pellet for another 20-30 min at 37 ˝C. 12 Resuspend the cells in 1 mL of 0.05% trypsin per droplet and transfer back to the 12-well plate. Incubate at 37 ˝C for 12-15 min. Observe the dissociation under a microscope. Avoid over-pipetting the cells at this stage. 13 NOTE: At the end of incubation, the cells need to achieve a single-cell suspension after pipetting 3-5 times with a P1000 pipette. 
For passaging iAT2s to ALI (Step 3), the trypsinization time needs to be minimized (maximum 12 min), such that the cells are in 2-3-cell clumps rather than single-cell suspension when ready for plating onto the cell culture insert. 14 Stop the action of trypsin with an equal volume of FBS-containing medium (10% ESqualified FBS in DMEM). Centrifuge at 300 x g for 5 min at room temperature. 15 Wash the cells with 10 mL of IMDM. Centrifuge at 300 x g for 5 min at room temperature. 16 Resuspend the cells in an appropriate volume for counting, and then count the cells using a hemocytometer (see Table of Materials). 17 NOTE: From one confluent 50 µL matrix droplet seeded at 400 cells/µL, the expected yield is 500,000 to 1.5 x 106 cells per droplet. 18 Use the single-cell suspension of iAT2 cells to generate alveolospheres by plating in the 3D matrix (Step 2) and/or plating on cell culture inserts for ALI culture (Step 3). 19 2. 3D plating of iAT2s 20 After counting (Step 1.11), determine the number of desired cells to replate in the 3D matrix (400 cells/µL of the matrix with 50-100 µL of 3D matrix droplets per well of a 12-well plate). Centrifuge the cells at 300 x g for 5 min at room temperature. Remove as much supernatant as possible using a pipette. 21 Resuspend the cells in the 3D matrix. Resuspend quickly and on ice, if needed, to prevent the matrix from polymerizing (which occurs when warm). 22 Use a P200 pipette to dispense one 3D matrix droplet per well into a pre-warmed 12well plate. Pipette carefully to avoid creating bubbles in the matrix droplet. Do not allow the cell suspension to settle while dispensing multiple droplets. 23 Place the plate in a 37 ˝C incubator for 20-30 min to allow the matrix droplets to polymerize. 24 Add 1 mL of CK + DCI + 10 µM of Y-27632 medium (see Table of Materials) per well to cover the matrix droplet. 25 After 72 h, change the medium to CK + DCI without 10 µM of Y-27632. 26 Replace the medium with fresh CK + DCI every 48-72 h. 27 NOTE: iAT2s will typically need to be passaged approximately every 10-14 days, depending on cell line and plating density. 12203 D.2 Data preprocessing We perform some preprocessing on the source data, including slice procedures and domain mapping. Slice procedures We segment the procedures of the protocols without destroying their structure as much as possible. Specifically, we first split the procedures according to the regular delimiter ’[znzr]+’ to obtain a series of sub-paragraphs. Then we merge the sub-paragraphs in order, and the token number of merged paragraphs does not exceed the predetermined maximum token (i.e., 300). If the number of tokens in the merged paragraph is never satisfied, we follow the separator ’.?!’ to split and re-merge the paragraphs at a fine-grained level. Domain mapping The protocols from the five data sources have their own subject areas. We mapped the topic domains to the five domains developed by the experts according to the expert-set mapping table. 
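As an illustration of the slice procedure described above, a minimal sketch is given below. Two points are assumptions rather than details stated in the text: the delimiter printed as '[znzr]+' is read as a rendering of '[\n\r]+', and the 300-token budget is approximated with whitespace tokenization.

```python
import re

def slice_procedure(text: str, max_tokens: int = 300) -> list[str]:
    """Split a protocol into sub-paragraphs and greedily merge them
    so that no resulting chunk exceeds max_tokens."""
    paragraphs = [p.strip() for p in re.split(r"[\n\r]+", text) if p.strip()]

    def n_tokens(s: str) -> int:
        return len(s.split())  # whitespace tokenization (assumption)

    def merge(units: list[str]) -> list[str]:
        chunks, current = [], ""
        for u in units:
            candidate = (current + " " + u).strip()
            if current and n_tokens(candidate) > max_tokens:
                chunks.append(current)
                current = u
            else:
                current = candidate
        if current:
            chunks.append(current)
        return chunks

    chunks = merge(paragraphs)
    # Fall back to sentence-level splitting for chunks that still exceed the budget.
    final = []
    for c in chunks:
        if n_tokens(c) <= max_tokens:
            final.append(c)
        else:
            sentences = [s.strip() for s in re.split(r"(?<=[.?!])\s+", c) if s.strip()]
            final.extend(merge(sentences))
    return final
```

The expert-set topic-to-domain mapping referenced above is listed next.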
1 [Genetics] 2 Antibody 3 Biochemistry 4 Biomarkers 5 Biophysics 6 Cell Biology 7 Cell Differentiation 8 Cell Membrane 9 Chromatin Immunoprecipitation (ChIP) 10 Gene Expression 11 Genetics 12 Genomics 13 Human Genetics 14 Molecular Biology 15 Molecular/Chemical Probes 16 Mouse Biology 17 Protein Biochemistry 18 Protein Science 19 Proteomics 20 RNaseq 21 Sequence Analysis 22 Sequencing 23 Signal Transduction 24 Single Cell 25 Single-Molecule Assays 26 Structural Biology 27 Systems Biology 28 Microbiology 29 Developmental Biology 30 Model Organisms 31 ----------------------------------------32 [Medical] 33 Cancer 34 Cancer Research 35 Cardiology 36 Diseases 37 Drug Discovery 38 Gastroenterology 39 Health Sciences 40 Immunology 41 In Situ Hybridization 42 lmmunology and Infection 43 Medicine 44 Neuroscience 12204 45 Oncology 46 Organoids 47 Physiology 48 Pharmacology 49 Psychology 50 Rheumatology 51 Stem Cells 52 Stem Cell Biology 53 Tissue Engineering 54 Toxicology 55 Urology 56 Health Humanities 57 ----------------------------------------58 [Ecology] 59 Behavior 60 Ecology 61 Plant Sciences 62 Plant Biology 63 ----------------------------------------64 [Bioeng] 65 Bioengineering 66 Biotechnology 67 Cell Culture 68 Microscopy 69 Biological Techniques 70 Cell Isolation 71 Cell Separation/Fractionation 72 Cell-Based Assays 73 Chemical Biology 74 Cytometry 75 Mass Cytometry 76 Protein Expression And Purification 77 ----------------------------------------78 [Infobio] 79 Bioinformatics 80 Computational Biology and Bioinformatics E Constraint design assessment details E.1 Meta evaluation For meta evaluation, the experts’ participation will include completing a survey and possibly partaking in an interview. The survey will present a series of statements or questions about the DSL. For each item, we request two responses from the experts: Opinion on a 1-5 Scale and Confidence in the Response on a 1-5 Scale. This dual-scale approach will help quantify both the perspective on the DSL and confidence in each response, providing a richer dataset for analysis. In this part, we organized three experts in the field of experimental sciences to independently score our DSL. Each expert scored the results of a single DSL output across 9 dimensions, with a total of 50 questions. This process will be repeated 5 times, with experts independently evaluating 5 different subsets. We collected the scores from the three experts for subsequent processing and comparison. The participants are instructed by the following guidance: 1 Invitation to Participate in the Evaluation of a New Domain-Specific Language for Experimental Sciences 2 3 Dear Expert, 12205 4 5 We are reaching out to invite you to partake in a crucial evaluation of a newly developed Domain-Specific Language (DSL) designed for life sciences. Your expertise in this field is invaluable for this assessment. 6 7 Purpose of the Experiment: 8 Our team has developed a DSL to facilitate both computerized and manual experimentation processes in life sciences. This evaluation aims to gather detailed feedback from experts like you to refine the DSL's usability and functionality. 9 10 What Will Be Involved: 11 Your participation will include completing a survey and possibly partaking in an interview. The survey will present a series of statements or questions about the DSL. For each item, we request two responses from you: 12 1. Opinion on a 1-5 Scale: Rate each item on a scale where 1 indicates 'Strongly Disagree' and 5 indicates 'Strongly Agree'. 
The scale is nuanced as follows: 13 - 1: Strong Disagreement or Major Issues 14 - 2: Disagreement or Notable Concerns 15 - 3: Neutral or Mixed Feelings 16 - 4: Agreement or Minor Concerns 17 - 5: Strong Agreement or Highly Favorable 18 2. Confidence in Your Response on a 1-5 Scale: Indicate your level of confidence in your response to each item, where 1 is 'Not Confident at All' and 5 is ' Extremely Confident'. The scale implies: 19 - 1: Very Low Confidence 20 - 2: Low Confidence 21 - 3: Moderate Confidence 22 - 4: High Confidence 23 - 5: Very High Confidence 24 25 This dual-scale approach will help quantify both your perspective on the DSL and your confidence in each response, providing a richer dataset for analysis. 26 27 In the interview, we will delve deeper into your experiences with the DSL, allowing you to share more comprehensive insights and suggestions. 28 29 Estimated Time Commitment: 30 The survey should take approximately 60 minutes to complete. The interview, if you choose to participate, will be a 1-hour session, scheduled at your convenience. 31 32 Confidentiality and Use of Data: 33 Your responses will be kept strictly confidential and will be used solely for improving the DSL. We adhere to the highest ethical standards in our research. 34 35 Your expertise and nuanced feedback are vital for the success of this project, and we greatly appreciate your consideration. 36 37 Thank you for your time and expertise. E.2 Empirical evaluation We take the GPT-4 as our expert in machine empirical evaluation. We built a knowledge augmentation system using Langchain 7 to provide additional domain knowledge to GPT-4. We designed our instruction 7https://www.langchain.com/ 12206 template following the prompt format of Zheng et al. (2023) and utilizing the Plan-and-Solve decomposition idea (Wang et al., 2023). The machine evaluation results on DSL of five domains and BioCoder are shown in Table A3. Table A3: Empirical evaluation by the QA System expert on the five DSLs corresponding to five domains and BioCoder Domain Opinion Confidence BioCoder 4.35 3.96 InfoBio 4.38 3.97 Ecology 4.39 3.96 BioEng 4.41 3.97 Medical 4.40 3.97 Genetics 4.41 3.97 The instruction template is displayed as follows: 1 Please act as an impartial judge and evaluate the given developed Domain-Specific Language ([DSL]) according to the [Rating Basis], [Rating Criteria], and related [Domain Knowledges]. Please follow the [Instruction]. Moreover, the [Background ] describes something about Domain-Specific Language ([DSL]). 2 --------------------------------------------------------------3 [Background] 4 Domain experts expect to describe the experimental protocols of the specific domain with the \ac{dsl} programs. 5 Domain experts expect to be guided by the \ac{dsl} programs step-by-step, where the input, output, and configurations of each step is well-detailed. 6 Domain experts come without any training on programming. 7 Domain experts are subjective and their mindsets come from the specific domain. 8 9 [Instruction] 10 The [Rating Basis] presents a series of statements or questions about the [DSL]. Please evaluate your opinion and confidence level in the statements or questions referring to related domain knowledge. 11 That is, your evaluation should be based primarily [Rating Basis]. You should refer to and use [Domain Knowledge] to help you evaluate. Begin your evaluation by providing a short explanation. 
After providing your explanation, please rate the [DSL] on [Rating Criteria] by strictly following this format: "Rating: [[< opinion>, <confidence>]]", for example: "Rating: [[3, 4]]". 12 13 [Rating Criteria] 14 For each item, you need give an <Opinion> and <Confidence> rate: 15 Opinion on a 1-5 Scale (int): Rate each item on a scale where 1 indicates 'Strongly Disagree' and 5 indicates 'Strongly Agree'. The scale is nuanced as follows: 16 1: Strong Disagreement or Major Issues 17 2: Disagreement or Notable Concerns 18 3: Neutral or Mixed Feelings 19 4: Agreement or Minor Concerns 20 5: Strong Agreement or Highly Favorable 21 Confidence in Your Response on a 1-5 Scale (int): Indicate your level of confidence in your response to each item, where 1 is 'Not Confident at All' and 5 is ' Extremely Confident'. The scale implies: 22 1: Very Low Confidence 23 2: Low Confidence 24 3: Moderate Confidence 25 4: High Confidence 26 5: Very High Confidence 27 12207 28 [Rating Basis] 29 {} 30 31 [DSL] 32 {} 33 34 [Domain Knowledges] 35 {} 36 37 [Output] 38 Let's first understand/follow the [Instruction] to evaluate the [DSL] and give an explanation about rating. 39 Then let's give a final rating based on the explanation. [Rating Basis] refers to our designed questions or statements. 1 "The DSL clearly represents the essential concepts necessary for life sciences experiments." 2 "All domain concepts included in the DSL are relevant and contribute directly to my tasks in life sciences." 3 "I can easily express all necessary domain concepts for my experiments using the DSL ." 4 "The DSL does not include complex concepts that are unnecessary for my work in life sciences." 5 "The DSL seems to have been developed with practical feedback from domain experts, ensuring its relevance to my work." 6 "The design of the DSL focuses on the tasks I need to accomplish in life sciences, avoiding irrelevant features." 7 "The DSL is simple and straightforward to understand and use in my life sciences work." 8 "Learning to use the DSL does not require extensive time or effort, even for those new to this kind of language." 9 "The tools and features of the DSL are clear and intuitive to use in my everyday tasks." 10 "The simplicity of the DSL enhances my productivity in conducting life sciences experiments." 11 "The straightforward nature of the DSL lowers the barrier to its adoption in my professional environment." 12 "The DSL specifically addresses the unique concepts and needs of my work in life sciences, without unnecessary generalizations." 13 "All concepts and features in the DSL are directly applicable and useful for my life sciences tasks." 14 "The DSL avoids overly complex or overly generalized concepts that is uneasy for me to understand, making it more practical for my work." 15 "The focused nature of the DSL facilitates its quick and successful introduction into my life sciences work." 16 "The DSL is concise and precise, focusing only on what is necessary for life sciences, without extraneous features." 17 "The DSL effectively meets the current needs of my work in life sciences without complicating future changes in the domain." 18 "The number of elements in the DSL is appropriate and manageable for my work in life sciences, making it easy to understand." 19 "The DSL effectively uses specialized sublanguages (or subsets) for different aspects of life sciences, enhancing clarity and focus." 20 "Despite having a limited number of elements, the DSL is effective in handling the complex tasks of my life sciences work." 
12208 21 "The DSL's approach to representing different aspects of life sciences work (like structure, behavior, etc.) is clear and well-organized." 22 "The DSL's limited elements are versatile enough to be applicable across a wide range of life sciences applications." 23 "Each concept in the DSL is distinct and clearly defined, with no overlap or redundancy in their functions or purposes." 24 "I find it straightforward to express ideas in my life sciences work using the DSL, as it avoids unnecessary duplication of concepts." 25 "The DSL allows for consistent modeling of facts and processes in life sciences, without confusion due to redundant concepts." 26 "Remembering and applying different concepts in the DSL is easy, as it avoids minor variations that could lead to confusion." 27 "The DSL effectively represents life sciences data and processes without complicating them through redundant concepts." 28 "The non-redundant design of the DSL positively impacts my workflow in life sciences by providing clarity and efficiency." 29 "The notations used in the DSL are familiar to me, reflecting those commonly used in my life sciences work." 30 "Transitioning to using the DSL was easy for me because it adopts notations and terminologies I'm already accustomed to in life sciences." 31 "I did not have to spend much time learning new syntaxes or notations when starting with the DSL, thanks to its use of familiar concepts." 32 "The syntax and notation of the DSL are well-suited for the specific tasks I perform in life sciences." 33 "The DSL effectively integrates languages or notations that are commonly accepted and used in my field (like SQL for database queries, if relevant)." 34 "The DSL aligns closely with the existing notations and terminologies that are standard in the life sciences domain." 35 "In the DSL, different elements are clearly distinguishable from one another, enhancing my understanding and ease of use." 36 "In textual aspects of the DSL, keywords and language elements are positioned in a way that makes the content easy to read and understand." 37 "The DSL effectively avoids ambiguity in representing different domain concepts, making it simple for me to interpret the models or text." 38 "The design of the DSL prioritizes readability and understanding for the reader, rather than writing efficiency." 39 "The DSL allows me to easily add comments to model elements, aiding in documentation and explanation." 40 "Comments within the DSL models significantly enhance the understandability and clarity of my work." 41 "The ability to comment on model elements in the DSL simplifies and supports collaborative efforts in my life sciences projects." 42 "The DSL offers flexibility in commenting styles, such as line comments and block comments for textual languages or annotations for graphical elements." 43 "Comments in the DSL can be effectively used for detailed documentation, similar to generating HTML pages or Javadoc." 44 "The DSL encourages or enforces a consistent style of documentation through its comment structure, improving overall model quality." 45 "The usage conventions defined for the DSL contribute to its clarity and comprehensibility, making it easier for me to use in my life sciences work." 46 "The DSL strikes a good balance between providing guidance through usage conventions and allowing flexibility in how I use it." 47 "The established conventions in the DSL aid in maintaining the quality and consistency of my work." 
48 "Following the usage conventions in the DSL is straightforward and does not add 12209 unnecessary complexity to my tasks." 49 "The conventions in the DSL, such as the notation of identifiers and the order of elements, enhance the overall comprehensibility of the language." 50 "The usage conventions of the DSL positively impact its overall usability and practicality in my life sciences work." Rationale of scoring We showcase some non-cherry-picked scoring rationales given by both human expert and our QA System expert in. Table A4: Showcases of scoring rationales.(the corresponding scoring is attached [Opinion, Confidence]) Criteria human expert QA System expert "The DSL clearly represents the essential concepts necessary for life sciences experiments." The DSL precisely covers key life sciences experiments, enhancing protocol clarity and accuracy. [5, 4] The DSL for life sciences effectively encapsulates key experimental actions—ASPIRATE, ADD, CLEAN, COMBINE, PUT—covering essential concepts from aspirating liquids to placing items. Its design and domainspecific actions provide a comprehensive framework for describing experiments, reflecting its adequacy for the life sciences domain. [5, 4] "The DSL allows for consistent modeling of facts and processes in life sciences, without confusion due to redundant concepts." The DSL clarifies life sciences protocols, potentially outperforming traditional methods with its precision. [4, 4] The DSL effectively captures key experimental actions in life sciences, offering clarity with specific patterns and examples for each action. Rated with high confidence, it ensures precise protocol definition, minimizing confusion and redundancy. Further analysis could enhance its validation against existing standards. [4, 4] "The DSL is simple and straightforward to understand and use in my life sciences work." The DSL simplifies life sciences protocols with clear commands and examples, though it may benefit from expanded flexibility. [4, 4] The DSL is effectively designed for life sciences, offering domain-specific commands like ASPIRATE and ADD with clear examples, facilitating experimental protocols without programming knowledge. While comprehensive and user-friendly, potential areas for additional commands or flexibility could further enhance its utility. [4, 4] "The DSL’s limited elements are versatile enough to be applicable across a wide range of life sciences applications." The DSL streamlines essential life sciences experiments but may fall short in specialized contexts, justifying a cautious yet positive rating of 4. [4, 4] The DSL effectively guides life sciences experts through standard experiments with commands like ASPIRATE and ADD, balancing simplicity and specificity. Rated at 4, it offers a solid foundation for common protocols but may need expansion for more specialized experiments. [4, 4] E.3 Ontology extraction We take a zero-short IE tool as our backbone to extract ontologies from protocols. We add the domainrelated definition to the Stage-I prompt of RE task in ChatIE, which is defined as follows: ...In this task, an entity type may be a ’OpCode’ (operations, a one-word verb, like ADD and REMOVE); ’REG’ (reagents taking part in an operation, like cells and MDDC culture media); ’COND’ (conditions of executing an operation, like <temperature> 37C, <time> 30min, <device> a small scissor and <container> PCR tubes).... 
In addition, we present the entity relation mapping dictionary as follows: 1 { 2 'is concurrent with': ['OpCode', 'OpCode'], 3 'is instruction of': ['OpCode', 'REG/COND'], 4 'is predecessor of': ['OpCode', 'OpCode'], 12210 5 'is product of': ['REG', 'OpCode'], 6 'is reaction acceleration of': ['Acceleration', 'OpCode'], 7 'is reaction centrifugal force of': ['Centrifugal Force', 'OpCode'], 8 'is reaction condition of': ['COND', 'OpCode'], 9 'is reaction container of': ['Container', 'OpCode'], 10 'is reaction density of': ['Density', 'OpCode'], 11 'is reaction device of': ['Device', 'OpCode'], 12 'is reaction energy of': ['Energy', 'OpCode'], 13 'is reaction flow rate of': ['Flow Rate', 'OpCode'], 14 'is reaction force of': ['Force', 'OpCode'], 15 'is reaction frequency of': ['Frequency', 'OpCode'], 16 'is reaction iteration count of': ['Iteration Count', 'OpCode'], 17 'is reaction pressure of': ['Pressure', 'OpCode'], 18 'is reaction rotation of': ['Rotation', 'OpCode'], 19 'is reaction speed of': ['Speed', 'OpCode'], 20 'is reaction temperature of': ['Temperature', 'OpCode'], 21 'is reaction time of': ['Time', 'OpCode'], 22 'is reaction voltage of': ['Voltage', 'OpCode'], 23 'is reagent acidity of': ['Acidity', 'OpCode'], 24 'is reagent coating of': ['Coating', 'OpCode'], 25 'is reagent concentration of': ['Concentration', 'OpCode'], 26 'is reagent density of': ['Density', 'OpCode'], 27 'is reagent length of': ['Length', 'OpCode'], 28 'is reagent mass of': ['Mass', 'OpCode'], 29 'is reagent medium of': ['Medium', 'OpCode'], 30 'is reagent of': ['REG', 'OpCode'], 31 'is reagent quantity of': ['Quantity', 'OpCode'], 32 'is reagent size of': ['Size', 'OpCode'], 33 'is reagent thickness of': ['Thickness', 'OpCode'], 34 'is reagent volume of': ['Volume', 'OpCode'], 35 'is successor of': ['OpCode', 'OpCode'] 36 } F Constraint utility assessment details F.1 Protocol source In order to better analyze the subsequent DSLs, we created a protocol dataset of five domains in the life sciences in this paper, where all the protocols are not pre-existing protocols, but NOVEL ones, designed by ourselves, and of course most of these designed protocols are unpublished and written with reference to a large number of existing protocols. F.2 Protocol constraint The protocol dataset we built covers both syntactic constraints and semantic constraints. We categorized these two main features into multiple types and built specific and typical protocols for each subclass, which can encompass most of the experimental scenarios. Syntactic constraint Syntactic constraints can be primarily divided into four categories: the first category, the imperative model, includes loop, branch, jump, memory (allocation and de-allocation), function/(sub)procedure, and logical test. The second category, the type system, encompasses integer types, real/floating types, set types, scientific types, reagent types, and container types. The third category, concurrent, contains data parallel. The fourth category is the reactive model, which responds to a certain event. 1 { 2 Loop constraint 3 Cell culture: 12211 4 1. Prepare medium: prepare appropriate medium according to cell type. 5 2. Inoculate: Add cell suspension to culture flasks containing culture medium. 6 3. Continuous monitoring: Observe the cell growth status and density periodically. Repeat the culture as long as the cells have not reached the harvested density. 7 4. 
Harvest cells: When the cells reach the expected density, harvest the cells for the next experiment or for passaging culture. 8 } 9 { 10 Branch constraint 11 Brown Adipocyte Respiration Protocol: Prepare cells as described in the first two steps of the intact brown adipocyte respiration protocol. On the day of the experiment, thaw 1x MAS and pre-made substrate solutions. Prepare 10x solutions of inhibitors in 1x MAS. If using fatty acid substrates, prepare them freshly as described. Soak the cartridge for at least 3 hours before loading. Load specified volumes into ports A-D and calibrate the cartridge. Prepare 7.5 nM PMP in 1x MAS, gently wash cells with 1x MAS once, completely evacuate the wash, and add 450 µl 1x MAS with 7.5 nM PMP. Perform the Seahorse assay with 2-3 measurements per condition, ensuring not to exceed 1 hour before injecting Antimycin A to maintain mitochondrial integrity. 12 } 13 { 14 Memory constraint 15 Bile Processing Protocol: Collect bile in sterile bottles from patients with biliary tubes. Centrifuge at 3000g for 10 minutes at 4˝C to remove sediment and debris. Aliquot the clear bile into 1 ml volumes in microcentrifuge tubes. Store the aliquots at -20˝C for future use. 16 } 17 { 18 Function procedure constraint 19 Isolation of G4 Deletion Alleles in C. elegans: Clone out dog-1 animals onto plates, optionally freeze half as backup. Rinse worms in M9, transfer to PCR plates, add lysis buffer, and proceed with lysis and proteinase K inactivation. Perform nested PCR on lysis mix to identify positive populations for germline mutations. Analyze PCR products on agarose gel. For positive populations, chunk corresponding plate, grow to starvation, and repeat the process to confirm deletions. Perform nested PCR on progeny to confirm homozygosity of deletion alleles. Sequence PCR product for deletion characterization. Backcross new strain to N2 to clean genetic background. 20 } 21 { 22 Parallel constraint 23 Centrifugation and RNA Isolation from Serum or Plasma: Start with 1 ml of serum or plasma for each replicate. First centrifugation: Bring volume up with PBS and centrifuge all samples at once for 90 min at 100,000 x g. Second centrifugation: After discarding supernatant and resuspending pellets, centrifuge again for 70 min at 100,000 x g. Parallel processing: Process multiple samples through two rounds of centrifugation and subsequent RNA isolation, highlighting batch processing of samples. 24 } 25 { 26 Reactive model constraint 27 Cell Growth and Transfection Response: Thaw and culture cells until 80% confluency. Response to 80% confluency: Split cells for further growth or prepare for freezing. Transfect cells at low density, then change media based on purpose ( growth or differentiation). Response to transfection: Monitor for complete 12212 differentiation, adjusting care based on confluency or media type." 28 } Semantic constraint Semantic constraints mainly include two types: (i) Use of undefined action; (ii) Incomplete parameter, where the first category is required parameters do not exist, and the second category is Required parameter under-specified (is not grounded to the granularity for execution). F.3 Prompt engineering for utility assessment Following methods suggested by recent research (Gao et al., 2023; Zhang et al., 2023), we prompt the LLM with examples of DSL syntax and semantics, directing it to translate procedural texts into corresponding programs. 
To ensure a fair comparison and to underscore the plug-and-play capability of the DSLs designed by AutoDSL , only minimal prompt engineering is applied in protocol processing. The resulting prompt text under the ideas is demonstrated as follows. 1 You are an expert in life science and computer science. Now you are prompted with a grammar of programming language defined by production rules, several experiment steps described in natural language, and a construct (which is the left part in the production rule). Your task is to determine whether the natural language description consists of parts that can be parsed using this production rule. If the natural language string can be parsed, please output "Yes", otherwise, output "No". 2 3 The production rule: 4 for (initialization; condition; increment) statement 5 6 The nature language experimental protocol: 7 Study design: Timing: 3 days for protocol development. IRB/ethics review and revision can take 4-8 weeks. Repeat with number from 1 to 11:1. Define the crisis situation.2. Select existing algorithms for study.3. Define hypothetical triage algorithms for study.4. Select primary and secondary clinical endpoints .5. Determine the clinical data needed.6. Select the patient cohort for study.7. Assess availability of necessary clinical data.8. Adapt the triage algorithm scoring scheme.9. Submit the proposed study to the IRB or ethics panel. Calculating priority scores: Timing: 3 weeks to 3 months. Repeat with number from 1 to 8:1. Create a case report form.2. Select a data management system.3. Select the method of data entry. 4. Perform pilot data acquisition. 5. Complete the case report form for all patients.6. Apply the triage algorithms and calculate priority scores. Testing algorithm accuracy Timing: 1 day.1. Determine the accuracy of the priority scores Simulation of clinical decision-making: Selection from a smaller group Timing: 1 day. Repeat with number from 1 to 4:1. Format input files.2. Run CSC script.3. Assess bootstrap analysis output. Sensitivity analyses: Timing: 1 week. Repeat with number from 1 to 4:1. Test the effect of data processing methods.2. Test the effect of patient characteristics . 3. Test the effect of triage algorithm components. 8 yes 9 10 The production rule: 11 for (initialization; condition; increment) statement 12 13 The nature language experimental protocol: 14 Mitochondria Sonication Experiment:Repeat sonication of mitochondria in a Branson 450 sonicator using aconitase buffer (50 mM Tris, 30 mM sodium citrate, 0.5 mM MnCl2, 0.2 mM NADP, pH 7.3) four times, each for 15 seconds.Monitor the citrate to alpha-ketoglutarate conversion at 340 nm at 25˝C, utilizing 2 units/ml of isocitrate dehydrogenase in 50 mM Tris, 1 mM cysteine, 1 mM sodium citrate, 0.5 mM MnCl2 at pH 7.4. Reactivate aconitase with 2 mM dithiothreitol and 0.2 mM 12213 ferrous ammonium sulfate for 5 minutes, then repeat the enzymatic activity assay once. 15 16 Task: 17 Please use a JSON format to describe the protocol (Only output a json). 18 19 Note: 20 1. Read the protocol carefully. 21 2. Choose items and conditions from the protocol to describe the protocol. 22 3. Your desired output format resembles this: {"opcode": [['Datatype', 'Data'], ... , ["output", "Data"]]}. In this format, "Data" = None indicates a missing value. 23 4. You must specify the output in json, and only one output in json is allowed. 
24 25 Example: 26 GROW:[['REG', 'GFP-fusion'], ["REG", "solid media"], ["Device", "spinning disk confocal microscope"]] -> Z-stack images 27 Output: {"GROW": [['REG', 'GFP-fusion'], ["REG", "solid media"], ["Device", "spinning disk confocal microscope"], ["output", "Z-stack images"]]}
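The structured output format above also allows the two semantic constraints of Appendix F.2 to be checked automatically. The sketch below is illustrative only: the opcode whitelist and the required-slot set are assumptions for demonstration, not the DSL specification.

```python
import json

# Illustrative values only; the real DSL defines the opcode set and required
# parameters per domain.
DEFINED_OPCODES = {"GROW", "ADD", "ASPIRATE", "CLEAN", "COMBINE", "PUT"}
REQUIRED_SLOTS = {"REG"}  # e.g., every operation must ground at least one reagent

def check_semantic_constraints(model_output: str) -> list[str]:
    """Flag (i) use of undefined actions and (ii) incomplete parameters
    in the JSON program emitted by the LLM."""
    violations = []
    program = json.loads(model_output)
    for opcode, args in program.items():
        if opcode.upper() not in DEFINED_OPCODES:
            violations.append(f"undefined action: {opcode}")
        grounded = {dtype for dtype, value in args if value not in (None, "None")}
        missing = REQUIRED_SLOTS - grounded
        if missing:
            violations.append(f"{opcode}: missing or ungrounded parameter(s): {sorted(missing)}")
    return violations

# Example, using the output format shown above:
out = ('{"GROW": [["REG", "GFP-fusion"], ["REG", "solid media"], '
       '["Device", "spinning disk confocal microscope"], ["output", "Z-stack images"]]}')
print(check_semantic_constraints(out))  # -> []
```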
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1204–1228 August 11-16, 2024 ©2024 Association for Computational Linguistics TIMEBENCH: A Comprehensive Evaluation of Temporal Reasoning Abilities in Large Language Models Zheng Chu1, Jingchang Chen1, Qianglong Chen2, Weijiang Yu3, Haotian Wang1, Ming Liu1,4*, Bing Qin1,4 1Harbin Institute of Technology, Harbin, China 2Zhejiang University 3Sun Yat-sen University 4Peng Cheng Laboratory {zchu,jcchen,mliu,qinb}@ir.hit.edu.cn {chenqianglong.ai, wanght1998, weijiangyu8}@gmail.com Abstract Grasping the concept of time is a fundamental facet of human cognition, indispensable for truly comprehending the intricacies of the world. Previous studies typically focus on specific aspects of time, lacking a comprehensive temporal reasoning benchmark. To address this, we propose TIMEBENCH, a comprehensive hierarchical temporal reasoning benchmark that covers a broad spectrum of temporal reasoning phenomena. TIMEBENCH provides a thorough evaluation for investigating the temporal reasoning capabilities of large language models. We conduct extensive experiments on GPT-4, LLaMA2, and other popular LLMs under various settings. Our experimental results indicate a significant performance gap between the state-of-the-art LLMs and humans, highlighting that there is still a considerable distance to cover in temporal reasoning. Besides, LLMs exhibit capability discrepancies across different reasoning categories. Furthermore, we thoroughly analyze the impact of multiple aspects on temporal reasoning and emphasize the associated challenges. We aspire for TIMEBENCH to serve as a comprehensive benchmark, fostering research in temporal reasoning1. 1 Introduction Time flies over us, but leaves its shadow behind. Understanding time is a crucial part of human comprehension of the world. Envision the blossoming of flowers, and you’ll associate it with the arrival of spring. The ponder within it encompasses the intricate interplay of world knowledge, causality, and event temporal relationships. Temporal reasoning, in contrast to reasoning of a singular nature, comes with inherent complexity, encompassing implicit arithmetic, logical implications, and world knowledge. It is a form of integrated reasoning built upon *Corresponding Author. 1Code and data are available at: GitHub TimeXNLI Arith DurationQA McTACO TimeDial SitGen TimeQA MenatQA TempReason TRACIE 20 40 60 80 100 95.3 100 80.8 87.1 97.8 100 92.2 84.3 96.2 82.5 Human GPT-4 LLaMA2† 70b LLaMA2† 13b Mistral† 7b Figure 1: A brief overview of human and LLMs’ performance on TimeBench. Human scores are annotated. foundational reasoning like mathematical and logical reasoning (Cobbe et al., 2021; Mishra et al., 2022; Yu et al., 2020). Recently, large language models (LLMs) have demonstrated remarkable performance in complex reasoning (Hendrycks et al., 2021; Srivastava et al., 2022; Brown et al., 2020; Chowdhery et al., 2023; OpenAI, 2023; Touvron et al., 2023), but their performance in temporal reasoning has not yet been extensively explored. Recent research for temporal reasoning typically focuses only on a few aspects, such as temporal commonsense or temporal question answering (Zhou et al., 2019; Chen et al., 2021; Dhingra et al., 2022; Wang and Zhao, 2023). Due to the inherent complexity of temporal reasoning, it is challenging to accurately measure models’ temporal reasoning capabilities based on limited aspects. 
To address this issue, we propose TIMEBENCH, a comprehensive and hierarchical temporal reasoning benchmark. Specifically, drawing inspiration from the human cognitive process of transitioning from abstraction and concreteness to integration (Barsalou et al., 2018), we categorize temporal reasoning into three levels: symbolic temporal reasoning, commonsense temporal reasoning, and event temporal reasoning. These levels respectively represent understanding abstract time expression, 1204 grasping concrete world knowledge, and integrating and applying this knowledge in real-world scenarios. TIMEBENCH comprises 10 tasks with 16 sub-tasks, covering a broad spectrum of temporal reasoning phenomena. Besides, prior work typically features only a single task form, too simplistic to capture the model’s performance. In contrast, we incorporate four distinct task forms, offering a more realistic simulation of challenges. To quantify the temporal reasoning capabilities of contemporary LLMs, we extensively assess widely-used LLMs, including proprietary models such as ChatGPT (Ouyang et al., 2022) and GPT4 (OpenAI, 2023), as well as open-source like LLaMA2 (Touvron et al., 2023), Vicuna-1.5 (Chiang et al., 2023), Mistral (Jiang et al., 2023), Baichuan2 (Yang et al., 2023a), ChatGLM3 (Zeng et al., 2023) and FLAN-T5 (Chung et al., 2022). We conduct experiments under zero-shot and fewshot settings, combining commonly used reasoning techniques, chain-of-thought prompting (Kojima et al., 2022; Wei et al., 2022). The experimental results suggest that GPT-4 outperforms other models, showcasing strong temporal reasoning capabilities, as shown in Figure 1. Nevertheless, there is still a considerable gap between the strongest models and humans. On the contrary, open-source models show inferior performance in temporal reasoning, attributed to shortcomings in abstract time understanding, temporal relations modeling, and a lack of temporal commonsense. In addition, we also observe that chain-of-thought prompting does not yield a consistent improvement in performance. These findings indicate that there is still significant room for improvement in models’ temporal reasoning capabilities. Moreover, we have conducted a thorough analysis of the deficiencies and obstacles faced by models in temporal reasoning. We aspire for temporal reasoning to garner increased attention within the research community. Our contributions can be summarized as follows: • We introduce TIMEBENCH, a comprehensive and hierarchical benchmark to quantify the temporal reasoning abilities of LLMs. • We conduct extensive experiments with several LLMs, revealing a significant gap between even SOTA LLM and humans, indicating substantial research opportunities in this field. • By conducting a thorough analysis, we reveal the dilemmas that LLMs face in temporal reasoning and identify potential solutions. 2 TIMEBENCH Benchmark 2.1 Benchmark Design Principal TIMEBENCH focuses on a comprehensive evaluation of the temporal reasoning capabilities of large language models in challenging and complex scenarios. To achieve this goal, we summarize the difficulties and challenges faced in temporal reasoning, categorize them into three levels, and integrate diverse task formats to better align with the intricate nature of temporal reasoning. Just as the human cognitive process unfolds from foundational cognition and conceptual understanding to practical reasoning, we delineate temporal reasoning into three hierarchical levels. 
Specifically, TIMEBENCH categorizes temporal reasoning into symbolic, commonsense and event temporal reasoning, covering 10 datasets with a total of 16 subtasks. (1) Symbolic Temporal Reasoning focuses on the comprehension of fundamental abstract temporal expressions. (2) Temporal Commonsense Reasoning emphasizes the mastery of temporal principles, concepts and world knowledge. (3) Event Temporal Reasoning concentrates on modeling the temporal relationships between events and times within authentic scenarios. 2.2 Difficulties and Challenges We delineate the essential competencies and the challenges that arise from a human cognitive standpoint in the realm of temporal reasoning, and language models confront similar challenges. We present the dataset statistics, task formats, and the associated challenges in Table 7. Time Expression Understanding Time expressions (TimeX) denote words or phrases that convey information about time and represent the simplest and most basic units of expressing time, such as in April 2000, after 2008. Grasping time expressions is the most foundational step in understanding temporal elements within the textual modality. Temporal Commonsense assesses the understanding of temporal world knowledge, including event order, event duration, typical time, event frequency and stationary, which is crucial for language models to comprehend daily scenarios. Event-Time Relations assesses the model’s grounding capability to establish temporal relationships between events and their temporal context, thereby enabling models to grasp the progression 1205 Q: What is the time 2 year and 4 month before Mar, 1755 A: Nov, 1752 DATE ARITH Premise: On 28th May 1967, I graduated. Hypothesis: Before 23rd October 1920, I graduated. A: Contradiction TIMEX NLI Table 1: Examples of symbolic temporal reasoning C: Ransome looks after her as well as for young Fern Simon , who has declared her love for him. Q: How often do Ransome and Fern talk? O: each century, once a day, once a century, every night MCTACO Dialog: ... Person1: Do you go to work by train every day Person2: Yes . I commute <MASK> a week by train... O: five days, 25 days, a minute, six days TIMEDIAL Keywords: axis, one day, one month, Earth, Moon A: Earth rotates on its axis once in one day. It takes one month for the Moon to rotate on its axis. SITUATEDGEN Table 2: Examples of commonsense temporal reasoning. and transformations of events as they dynamically evolve through time. Event-Event Relations not only involve eventtime grounding but also introduce multi-hop relative connections between events. Models with this capability can better handle temporal reasoning in complex scenarios involving multiple events. Implicit Temporal Reasoning involves going beyond the surface of texts, engaging in deeper reasoning such as drawing upon temporal commonsense, identifying implicit temporal factors and discerning hidden temporal relationships among events. Implicit temporal reasoning is pivotal in complex real-world scenarios where events and time are intricately interwoven. 2.3 Symbolic Temporal Reasoning To evaluate the language model’s comprehension of abstract time expressions, we utilize two symbolic reasoning tasks stripped of semantic content: date arithmetic and time expression inference. Table 1 shows examples of symbolic temporal reasoning. C: ... He worked in Utrecht for the firm of P Smits & de Wolf from 1864 to 1867 and then returned to ... Q: Where did Ludwig Mond work between Mar 1866 and Sep 1866? A: Utrecht TIMEQA C: ... 
After the French evacuated Egypt in 1801, Hurshid Pasha was named governor of Egypt in 1804. Muhammad Ali had himself named governor of Egypt in May 1805 ... Q: Which position did Hurshid Pasha hold from 1804 to 1806, if Hurshid Pasha tepped down as the governor of Egypt in 1808? A: governor of Egypt MENATQA C: ... Peter Corke works for Queensland University of Technology from Jan, 2010 to Dec, 2022. Peter Corke works for Commonwealth Scientific from Jan, 1984 to Jan, 2009. ... Q: Which employer did Peter Corke work for before Queensland University of Technology? A: Commonwealth Scientific TEMPREASON Table 3: Examples of event temporal reasoning. Date Arithmetic (Tan et al., 2023) assesses the model’s grasp of abstract date calculation. When provided with a date, the model needs to accurately calculate the date a certain amount of time before or after the given date. The smallest unit is one day. TimeX NLI (Thukral et al., 2021) focuses on the logical entailment relationships among abstract TimeX, including three aspects: order (s1), duration (s2), and duration with unit conversion (s3). 2.4 Commonsense Temporal Reasoning We measure the model’s mastery of temporal commonsense and world knowledge, along with its capacity for reasoning based on these insights. Table 2 presents examples of temporal commonsense reasoning in QA and generation forms. MCTACO (Zhou et al., 2019) evaluates diverse commonsense knowledge from different aspects of events, including duration, frequency, order, stationary and typical event time. DurationQA (Virgo et al., 2022) focuses specifically on temporal commonsense reasoning in the spectrum of event duration. TimeDial (Qin et al., 2021) considers temporal commonsense reasoning in dialogue scenarios and involves various aspects of commonsense associated with duration, order, and world knowledge. 1206 SituatedGen (Zhang and Wan, 2023) considers generative commonsense reasoning in a constrained text generation scenario. Given a set of contrasting keywords, the model needs to choose appropriate keywords for each sentence and generate a pair of contrasting sentences that satisfy temporal commonsense. 2.5 Event Temporal Reasoning Event temporal reasoning assesses the model’s understanding of relationships between events and time in real-world scenarios, as well as its ability to reasoning under certain temporal or event constraints. Examples are shown in Table 3. TimeQA (Chen et al., 2021) requires the model to answer time-sensitive questions based on context containing numerous time-involved facts. It is categorized into explicit reasoning and implicit reasoning based on time indicators (before, in, etc.). MenatQA (Wei et al., 2023) introduces timesensitive factors to elicit implicit temporal reasoning, including time scope change, disruption of facts, and counterfactual questions, which provides a more in-depth assessment of implicit reasoning ability on event-time relations. TempReason (Tan et al., 2023) removes irrelevant context and focuses on implicit temporal reasoning within structured facts, investigating the model’s capability boundaries. It involves eventtime reasoning and event-event reasoning. TRACIE (Zhou et al., 2021) evaluates the model’s comprehension of temporal order between implicit events. The model needs to identify events implied in the context and then determine their chronological order. 
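For the date arithmetic task in Section 2.3, reference answers are a deterministic function of the question, so predictions can be verified exactly. A minimal sketch using python-dateutil is shown below; whether TIMEBENCH generates its references this way is an assumption.

```python
from datetime import date
from dateutil.relativedelta import relativedelta

def date_arithmetic(anchor: date, years: int = 0, months: int = 0,
                    before: bool = True) -> str:
    """Month-level date arithmetic, e.g. '2 year and 4 month before Mar, 1755'.
    The day is fixed to 1 because questions are posed at month granularity."""
    delta = relativedelta(years=years, months=months)
    result = anchor - delta if before else anchor + delta
    return result.strftime("%b, %Y")

print(date_arithmetic(date(1755, 3, 1), years=2, months=4))  # -> 'Nov, 1752'
```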
2.6 Task Formats and Evaluation Metrics TIMEBENCH is a multispectral benchmark encompassing four task types: free-form reading comprehension, natural language inference, constrained text generation, and multi-select questions. For detailed task types and their corresponding evaluation metrics, please refer to Appendix A.3 and A.4. 3 Methodology We perform evaluations using the prompt-based approach, including standard prompting and chainof-thought prompting. Experiments are conducted under both zero-shot and few-shot settings. Standard Prompting We formulate specific instructions for each task. In the zero-shot setting, models follow the instructions to answer questions. In the few-shot setting, models are provided with several question-answer pairs as demonstrations and emulate those instances to answer questions. promptsp zs = {INST}{Q} (1) promptsp fs = {INST}{Q1}{A1}..{Q} (2) Chain-of-Thought Prompting The instructions of CoT are the same as standard prompting. In the zero-shot setting, following Zeroshot CoT (Kojima et al., 2022), we add a reasoning trigger Let’s think step by step after questions to perform chainof-thought reasoning. In the few-shot setting, we manually annotate CoT demonstrations for each task to guide the step-by-step reasoning. Prompts can be found in Appendix B.3. promptcot zs = {INST}{Q}{TRIG} (3) promptcot fs = {INST}{Q1}{R1}{A1}..{Q} (4) 4 Experimental Setup 4.1 Models We evaluate several popular LLMs, including both open-source and proprietary models, with parameter sizes ranging from 6B to 70B.2 The complete list of models can be found in Appendix B.1. 4.2 Implementation Details We access proprietary models through Azure API 0613 version. For open-source models, we deploy them locally through FastAPI. We set the temperature to 0.0 for greedy decoding in all experiments. To improve answer extraction accuracy, we prompt models with trigger Therefore, the answer is before model outputs to deduce final answers. 5 Experimental Results 5.1 Few-shot Results Table 4 presents the experimental results under few-shot settings. GPT-4 achieves the best performance across three categories, while LLaMA270b and GPT-3.5 rank in the second tier. However, there remains a substantial gap of 19.4% between the most powerful LLM and humans. In symbolic temporal reasoning tasks, GPT-4 demonstrates exceptional performance. However, 2 Since OpenAI has never disclosed the scale of ChatGPT series, 6B to 70B here refers to ChatGLM36B to LLaMA270B. 1207 Method Symbolic Commonsense Event Temporal Overall TimeXNLI Arith DQA McT. TiD. SitGen TimeQA MenatQA TempR TRACIE Sym. Comm. Event Avg. s1 s2 s3 Exp. Imp. Sco. Ord. Ctf. 
L2 L3 Human 98.0 96.0 92.0 100.0 80.8 87.1 97.8 100.0 93.3 91.1 85.6 87.3 79.9 97.1 95.3 82.5 96.5 91.4 89.0 91.5 GPT-4 85.3 73.3 53.3 100.0 64.8 88.3 94.6 88.6 73.7 51.0 72.4 54.8 28.7 92.4 95.9 62.8 78.0 84.1 66.5 73.7 + FS CoT 92.0 84.0 64.0 100.0 55.1 72.3 93.4 66.9 52.8 65.3 52.6 25.9 96.9 94.6 66.4 85.0 73.6 65.2 72.1 GPT-3.5 52.0 68.4 31.6 63.6 67.7 71.2 76.4 79.1 66.1 48.4 43.2 51.6 17.9 84.7 78.0 55.0 53.9 73.6 55.6 59.7 + FS CoT 51.6 71.8 36.6 84.4 41.2 38.1 71.1 68.0 47.0 42.5 41.7 37.8 89.9 76.6 50.2 61.1 50.1 56.7 56.6 LLaMA2† 70b 55.0 61.0 37.0 82.0 67.4 85.3 82.7 74.9 66.7 48.3 61.4 42.5 33.8 85.2 85.4 61.0 58.8 77.6 60.5 64.4 + FS CoT 52.0 73.0 39.0 79.5 62.3 79.1 61.1 64.3 43.0 57.7 45.2 53.1 87.5 81.6 67.0 60.9 67.5 62.4 63.0 LLaMA2† 13b 50.0 54.0 30.0 29.5 53.3 66.0 55.6 64.8 59.3 48.6 49.6 43.4 37.5 78.7 62.7 58.0 40.9 59.9 54.7 52.6 + FS CoT 40.0 61.0 37.0 52.0 59.3 68.8 40.8 59.4 49.1 58.4 43.8 44.1 78.0 68.2 58.0 47.5 56.3 57.4 54.5 LLaMA2† 7b 26.0 50.0 30.0 20.0 54.5 59.6 45.2 62.4 54.4 45.3 49.8 41.9 35.8 64.0 53.3 49.0 31.5 55.4 49.2 46.3 + FS CoT 37.0 52.0 36.0 25.5 56.9 67.0 41.9 45.6 36.1 50.9 38.0 57.3 59.7 57.7 50.0 37.6 55.3 49.4 47.4 Baichuan2† 13b 38.0 48.0 33.0 42.5 54.8 73.0 45.7 64.9 59.4 54.2 52.7 38.0 21.4 77.3 63.5 54.0 40.4 59.6 52.6 51.3 + FS CoT 50.0 56.0 34.0 47.0 62.0 69.3 43.8 58.2 49.6 49.8 40.1 45.6 81.3 65.6 60.0 46.8 58.4 56.3 54.2 Baichuan2† 7b 27.0 66.0 41.0 32.5 59.8 69.4 34.3 59.8 53.8 50.2 49.6 38.5 22.9 65.9 51.0 55.0 41.6 55.8 48.4 48.5 + FS CoT 30.0 56.0 34.0 34.0 57.0 69.5 44.5 51.2 40.7 46.4 32.6 46.3 61.5 64.1 53.0 38.5 57.0 49.5 48.1 Mistral† 7b 48.0 53.0 38.0 41.0 61.8 76.2 61.8 58.3 55.9 45.3 49.4 47.8 45.5 76.7 74.8 53.0 45.0 64.5 56.1 55.4 + FS CoT 57.0 63.0 35.0 54.0 61.8 45.7 57.3 60.4 46.2 57.2 47.9 33.2 65.9 67.9 57.0 52.3 54.9 54.5 54.0 ChatGLM3† 6b 48.0 70.0 32.0 35.0 51.8 62.6 55.0 61.6 57.2 26.3 35.4 41.5 22.5 76.4 55.9 58.0 46.3 57.8 46.7 49.3 + FS CoT 47.0 68.0 32.0 46.0 53.9 64.3 56.5 52.5 24.5 35.0 40.2 22.5 79.4 60.3 54.0 48.3 58.2 46.1 49.1 Table 4: Experimental results under few-shot settings (standard prompting by default). † denotes the base model without alignment. Global top-3 results are bold. Figure 8 provides a horizontal comparison of the performance of all models. Full results in Appendix B.2. other models exhibit a significant decline in comparison to GPT-4. In commonsense temporal reasoning tasks, GPT4 lags behind humans by only 8.0%, indicating its powerful internal knowledge reservoir. With the model scale shrinking, its knowledge reservoir also decreases gradually, leading to a decline in performance. Notably, there is a significant gap of 25.2% between LLMs and humans in event temporal reasoning, which suggests that LLMs encounter major challenges in modeling intricate event-time relationships. 5.2 Zero-shot Results Experimental results of alignment models under zero-shot settings are shown in Table 5. In zeroshot settings, GPT-4 and GPT-3.5 rank first and second, respectively, and they significantly outperform all open-source models by a large margin. It is noteworthy that open-source models exhibit a larger performance decline compared to proprietary models when transitioning from few-shot to zero-shot scenarios. GPT, Baichuan2 and LLaMA2 suffer drops of 5.6%, 14.6% and 27.2%, respectively. We attribute this performance decline to the quality of alignment. 
Restricted by their limited instruction-following capability, open-source models struggle to fully unleash their performance Symbolic Commonsense Event Overall 0 20 40 60 53.2 58.3 46.3 51.0 48.6 54.5 43.7 47.2 55.2 71.9 58.2 60.9 61.2 60.9 59.0 60.0 ZS ZS CoT FS FS CoT Figure 2: Performance gap with and without CoT prompting. The results are averaged from GPT-4, GPT3.5, Baichuan213b, LLaMA270b and Mistral7b. solely through instructions. Therefore, few-shot prompting is a better approach for stimulating their temporal reasoning abilities. 5.3 Chain-of-Thought in Temporal Reasoning Previous research has found that chain-of-thought prompting can enhance the model’s reasoning ability (Wei et al., 2022; Kojima et al., 2022). We aim to explore the following questions: Does CoT prompting bring consistent improvement in temporal reasoning? Due to the diversity of temporal reasoning, the above question has not yet been definitively answered. To investigate this, we select several popular LLMs and analyze their performance affected by chain-of-thought prompting. 1208 Method Symbolic Commonsense Event Temporal Overall TimeXNLI Arith DQA McT. TiD. SitGen TimeQA MenatQA TempR TRACIE Sym. Comm. Event Avg. s1 s2 s3 Exp. Imp. Sco. Ord. Ctf. L2 L3 Human 98.0 96.0 92.0 100.0 80.8 87.1 97.8 100.0 93.3 91.1 85.6 87.3 79.9 97.1 95.3 82.5 96.5 91.4 89.0 91.5 GPT-4 78.6 76.0 50.7 98.0 59.2 80.0 91.1 59.3 60.6 46.5 57.0 57.0 23.1 95.3 95.0 64.8 75.8 72.4 62.4 68.3 + CoT 80.0 76.0 60.0 92.0 58.1 82.6 89.3 61.3 41.2 54.6 59.6 22.6 97.0 94.5 58.0 77.0 76.7 61.1 68.5 GPT-3.5 45.4 67.6 31.2 97.0 50.5 68.6 69.1 62.3 70.8 35.4 40.9 43.9 22.9 81.2 73.8 57.4 60.3 62.6 53.3 57.4 + CoT 33.6 64.8 33.6 71.0 23.2 45.1 67.0 64.4 35.1 39.7 42.9 26.3 57.6 68.1 52.0 50.8 45.1 48.3 48.3 LLaMA270b 44.0 47.0 32.0 78.5 59.2 68.9 57.0 25.0 40.8 40.6 18.9 16.6 12.0 63.5 54.5 48.0 50.4 52.5 36.8 44.1 + CoT 30.0 66.0 28.0 53.5 57.3 67.1 58.6 31.4 19.5 12.2 12.7 20.8 37.5 40.5 51.0 44.4 61.0 28.2 39.1 LLaMA213b 30.0 49.0 34.0 22.5 38.5 40.6 35.4 57.9 61.9 30.5 46.1 36.1 26.9 53.1 69.4 49.0 33.9 43.1 46.6 42.6 + CoT 36.0 50.0 38.0 6.0 39.2 51.7 36.9 58.7 38.9 40.9 32.5 33.6 58.0 68.4 47.0 32.5 42.6 47.3 42.4 LLaMA27b 39.0 53.0 30.0 13.0 39.3 41.0 6.3 24.5 49.0 29.0 26.8 21.1 16.0 63.9 47.9 49.0 33.8 27.8 37.8 34.3 + CoT 44.0 50.0 33.0 5.0 35.0 40.0 1.7 49.9 31.6 31.4 24.5 17.8 56.9 48.1 46.0 33.0 25.6 38.3 34.3 Baichuan213b 41.0 61.0 37.0 12.5 52.0 63.4 57.7 52.2 55.4 34.6 48.8 44.3 39.5 57.4 61.4 49.0 37.9 56.3 48.8 48.0 + CoT 40.0 57.0 31.0 10.0 44.6 61.9 58.1 41.5 40.9 52.0 38.5 43.2 62.8 64.3 55.0 34.5 54.9 49.8 46.7 Baichuan27b 35.0 50.0 37.0 4.5 47.9 55.3 54.3 42.0 41.5 34.7 35.2 31.2 20.4 43.4 47.7 55.0 31.6 49.9 38.6 39.7 + CoT 38.0 43.0 32.0 1.0 37.9 58.0 44.2 53.5 38.8 39.9 33.2 29.3 41.2 47.2 54.0 28.5 46.7 42.1 39.4 Vicuna1.513b 35.0 50.0 36.0 15.0 39.2 59.1 34.2 51.8 60.4 37.0 46.8 37.4 23.2 42.1 43.6 46.0 34.0 46.1 42.1 41.1 + CoT 42.0 51.0 37.0 3.0 29.8 50.0 33.7 56.9 36.4 38.2 37.7 20.4 49.0 49.1 51.0 33.3 37.8 42.3 39.0 Vicuna1.57b 37.0 58.0 43.0 5.0 40.4 52.5 32.0 47.8 47.1 18.5 35.7 25.7 17.3 33.0 46.8 54.0 35.8 43.2 34.8 37.1 + CoT 36.0 50.0 36.0 1.5 39.4 49.2 36.2 40.9 24.6 26.2 28.5 25.0 27.7 40.3 54.0 30.9 41.6 33.4 34.4 FLANT511b 53.0 63.0 43.0 0.0 52.0 65.0 47.7 49.5 61.7 26.8 33.6 52.2 21.8 87.9 83.9 64.0 39.8 53.6 54.0 50.3 + CoT 56.0 66.0 45.0 0.0 49.7 63.4 42.7 64.4 28.2 41.6 50.2 30.6 79.5 68.9 55.0 41.8 51.9 52.3 49.4 Mistral7b 47.0 50.0 43.0 26.5 49.8 58.8 23.2 58.3 28.2 21.4 24.3 22.3 21.7 39.6 31.6 51.0 
41.6 47.5 30.0 37.3 + CoT 38.0 56.0 35.0 16.5 36.6 49.3 19.3 31.3 22.4 21.1 24.9 25.6 34.0 31.2 61.0 36.4 35.1 31.4 33.5 ChatGLM36b 38.0 50.0 34.0 2.0 34.1 43.6 56.7 38.9 41.2 31.7 33.8 26.0 32.2 57.0 54.0 50.0 31.0 43.3 40.7 39.0 + CoT 27.0 49.0 37.0 0.0 24.8 37.1 44.8 41.7 25.4 34.6 28.1 41.2 44.5 52.0 48.0 28.3 35.6 39.4 35.7
Table 5: Experimental results under zero-shot settings (standard prompting by default). All models are alignment models (-chat or -instruct). Global top-3 results are bold.
Chain-of-thought reasoning is not consistently effective. As illustrated in Figure 2, introducing zero-shot CoT prompting results in consistent declines, with an overall decrease of 7.4%. In the few-shot scenario, CoT prompting also fails to yield consistent improvements, varying depending on the task. There is a 10.8% improvement in symbolic reasoning, but a significant decline of 15.2% in commonsense reasoning. In event temporal reasoning, there is a slight improvement of 1.3%. Next, we will conduct a more detailed analysis of the impact of CoT on specific tasks.
Impact of CoT prompting across tasks. In order to explore the impact of CoT on various tasks thoroughly, we delve into the performance changes of each model across specific tasks within each category, as illustrated in Figure 3. In the zero-shot setting, open-source models achieve a slight improvement in event temporal reasoning with chain-of-thought prompting, while in other cases, they face performance degradation. In the few-shot setting, almost all models exhibit significant improvement in symbolic temporal reasoning, with a concurrent prevalent decline in commonsense temporal reasoning. We attribute this to the knowledge sensitivity inherent in commonsense reasoning, where step-by-step reasoning cannot compensate for the lack of knowledge. In event temporal reasoning, improvements mainly stem from datasets involving implicit multi-step reasoning (MenatQA and TempReason), indicating that CoT is more effective for multi-hop questions. In summary, zero-shot CoT consistently has a negative impact on temporal reasoning, whereas in the few-shot scenario, CoT has a positive impact on symbolic and complex tasks while negatively affecting knowledge-sensitive tasks.
Figure 3: ∆Score between the chain-of-thought prompting and direct I-O prompting. Top: zero-shot setting, Bottom: few-shot setting, Left: variation in each task, Right: averaged variation in the symbolic, commonsense, event, and overall tasks.
Figure 4: Scaling effect of model size and overall temporal reasoning performance. The x-axis (model size) is shown in the log scale. Results show a log-linearity between parameter size and performance.
6 Analysis and Discussion
6.1 Scaling Effect of Model Size
We investigate how the scale of models affects temporal reasoning capabilities. The trend is illustrated in Figure 4. As the model scale increases, there is a notable improvement in performance. When the parameter size expands from 7B to 13B, LLaMA2 and Baichuan2 show improvements of 13.0% and 10.5%, respectively.
Furthermore, when LLaMA scales up to 70B, the trend of performance improvement continues without stopping. The overall improvement follows a log-linear relationship with scale. There are no significant performance differences among LLaMA2, Baichuan2, and ChatGLM3 under similar parameter specifications, while Mistral demonstrates impressive prowess, outperforming all other 13B models with nearly half the number of parameters.
6.2 Challenges in Temporal Reasoning
LLMs underperform in (multi-hop) symbolic reasoning Except for GPT-4, the performance of all other models in symbolic temporal reasoning is unsatisfactory. A noticeable decrease is observed in the duration-conversion task compared to other atomic tasks (25% in GPT-4 and 27% in LLaMA270b). This is because the duration-conversion task (s3) necessitates a two-step reasoning process. It first unifies time units, and subsequently engages in numerical comparison. In contrast, other atomic tasks (s1, s2 and arithmetic) can be completed with a single reasoning step. In summary, LLMs perform poorly in symbolic temporal reasoning and exhibit more pronounced declines when encountering multi-step reasoning.
Model Order Duration Freq. Stationarity Typical Avg.
GPT-4 76.4↓ 92.8↑ 83.3↑ 71.4↓ 54.5↓ 77.5
GPT-3.5 50.5↑ 39.8↓ 55.2↑ 48.4↑ 28.7↓ 43.5
Baichuan2† 13b 40.5↓ 51.8↑ 43.7↑ 46.2↑ 29.8↓ 42.5
LLaMA2† 70b 65.2↑ 72.1↑ 66.3↑ 36.3↓ 52.7↓ 63.0
Mistral† 7b 27.0↓ 44.4↑ 58.3↑ 38.5↓ 38.3↓ 42.5
Table 6: Results in each temporal commonsense aspect under the few-shot setting. Models with † are base models. Red ↓ and green ↑ indicate that the performance is lower or higher than its average performance. The metric is EM.
Mastery of commonsense knowledge varies in LLMs We analyze models’ performance across various commonsense aspects, as shown in Table 6. We regard the model’s average performance in commonsense reasoning tasks as the baseline. If the model outperforms the baseline in a specific aspect, it suggests greater proficiency in this type of knowledge, and vice versa. The findings indicate that LLMs generally demonstrate good knowledge of event duration and frequency. However, their comprehension of event order and typical events is relatively weaker. The uneven mastery of commonsense knowledge significantly affects the model’s reasoning performance, especially when dealing with complex questions that involve multiple types of knowledge. Retrieval-augmented reasoning presents a promising avenue for mitigating the model’s knowledge scarcity.
LLMs exhibit poor implicit temporal reasoning capabilities. When comparing explicit and implicit event temporal reasoning, specifically TimeQA-explicit versus others, we observe a significant performance decrease in implicit reasoning. Additionally, on TRACIE with numerous implied events, most models only surpass a random baseline (50.0). Even GPT-4, despite its advanced capabilities, achieves only a 66.4% accuracy, suggesting that the LLM struggles with modeling implicit temporal relationships. We consider it helpful to explicitly model the temporal relationships between events and time expressions, for instance by constructing timelines or temporal graphs.
Figure 5: Performance difference between base and alignment models under few-shot setting. Baichuan2 and LLaMA2 are aligned with SFT and RLHF. Vicuna, Mistral and ChatGLM3 are aligned with only SFT.
LLMs are good factual reasoners rather than factual extractors When humans engage in temporal reasoning, it generally involves two steps: first, extracting time-fact pairs from the context, and then performing fact-based reasoning. TempReason provides extracted facts for conducting fact-based reasoning. By comparing the model’s performance in context-based (TimeQA) against fact-based (TempReason) reasoning, we identify the bottleneck in event temporal reasoning. LLMs excel in TempReason, which signifies their strong capability in fact-based reasoning. However, their performance in context-based reasoning is significantly weaker than in fact-based reasoning. This implies that errors could arise during the extraction of time-sensitive facts from the context. We attribute this performance gap to the model’s deficiency in factual extraction capabilities. Thus, we consider LLMs to be strong factual reasoners rather than factual extractors in event temporal reasoning.
6.3 Alignment Impairs Temporal Reasoning
In the experiments mentioned earlier (Table 5), we observe a sharp decline in the zero-shot performance of alignment models. To investigate whether alignment is the cause of the decline in temporal reasoning, we conduct experiments on alignment models under few-shot settings. Figure 5 illustrates the overall performance decline after alignment. With the exception of Baichuan2, all other models are severely impaired, experiencing a significant drop of up to 22%. Through manual analysis of error cases, we have summarized two reasons: (1) Alignment reduces the model’s usability, causing it to tend towards refusal to answer when confronted with knowledge-sensitive questions. (2) Alignment damages the model’s in-context learning capability, resulting in situations where the model deviates from the demonstrations. Furthermore, we believe that the lack of temporal reasoning-related training data in alignment exacerbates this issue, leading to disparities between different reasoning capabilities, such as mathematical and temporal reasoning.
6.4 Error Analysis
We manually analyze 100 predictions by GPT-4, GPT-3.5 and LLaMA2-base70b from each subtask. The visualization of errors is shown in Figure 6.
Symbolic Reasoning We categorize symbolic reasoning errors into five groups: (a) Expression: The model provides an incorrect time calculation expression. (b) Computation: The model provides the correct time calculation expression, but there is a calculation error. (c) Conversion: The model has an error in the conversion of time units. (d) Comparison: The model has an error when comparing two time expressions (or intervals). (e) Combination: The model encounters errors in the combination of multiple above operations. LLMs exhibit numerous computation, conversion, and comparison errors, which suggests a substantial deficiency in their understanding of fundamental temporal expressions. Additionally, a higher frequency of errors is observed in combination questions, highlighting that multi-step reasoning continues to be a significant challenge for current models.
Commonsense Reasoning We categorize the errors of commonsense reasoning into two groups: (a) No Answer: The model fails to provide a final answer. (b) Reasoning Error: The model encounters reasoning errors, which can be subdivided into five types of knowledge-related errors.
We observe that GPT series models have a higher No Answer rate, while LLaMA is always able to provide answers. This discrepancy can be attributed to two factors: firstly, the models may lack the necessary commonsense knowledge to formulate an answer; secondly, the preference alignment mechanism may prompt the model to abstain from answering when confronted with questions outside its knowledge scope. Integration of retrieval can alleviate the problem of knowledge scarcity to a certain degree.
Event Temporal Reasoning We categorize the errors of event temporal reasoning into four groups: (a) No Answer: The model is unable to find the answer in the context. (b) Reasoning Error: The model encounters reasoning errors. (c) Hallucination: The model’s prediction does not exist in the context, known as hallucination reasoning. (d) Metric: The model’s prediction is correct, but the metric is limited by the evaluation criteria. It can be observed that, except for reasoning errors, failures to provide answers account for approximately 30%, indicating that models still have flaws in grounding temporal facts from context. Additionally, models occasionally experience hallucination phenomena, leading to erroneous reasoning.
Figure 6: Error analysis for Symbolic, Commonsense, and Event Temporal. We select 100 test samples from each subtask for GPT-4, GPT-3.5 and LLaMA2-base70b.
7 Related Work
7.1 Temporal Reasoning
There are numerous efforts addressing diverse challenges in temporal reasoning. Early research mainly relies on TimeML (Pustejovsky et al., 2003), focusing on TimeX extraction and temporal relation extraction (Verhagen et al., 2007, 2010; UzZaman et al., 2013; Llorens et al., 2015; Miller et al., 2015; Mathur et al., 2021; Vashishtha et al., 2019). The advent of pre-trained language models (PLMs) has brought about commonsense reasoning as a tool to explore the world knowledge in models (Zhou et al., 2019; Qin et al., 2021; Dhingra et al., 2022). Recently, much attention has shifted towards event temporal reasoning (Chen et al., 2021; Tan et al., 2023; Wei et al., 2023). Han et al. (2021); Yang et al. (2023b); Son and Oh (2023); Chen et al. (2023) continuously pretrain LLMs on time-aware data to elicit temporal reasoning, and Zhu et al. (2023); Su et al. (2023); Chu et al. (2023) explicitly represent temporal relationships using temporal graphs and timelines. Additionally, some works extend beyond text, evaluating temporal reasoning in structured tables and video domains (Gupta et al., 2023; Ko et al., 2023). Some concurrent studies also analyze the temporal reasoning abilities of LLMs. Jain et al. (2023); Qiu et al. (2023) focus on temporal commonsense, and Wang and Zhao (2023) introduces a unified form for assessing the overall abilities. Distinguished from other works, TIMEBENCH is multispectral, offering a comprehensive evaluation of LLMs’ temporal reasoning abilities.
7.2 Large Language Models
In recent years, there has been rapid progress in the research of large language models (LLMs) (Zhao et al., 2023).
They exhibit outstanding performance across a multitude of tasks without the need for finetuning (Brown et al., 2020; Kojima et al., 2022). Furthermore, they have achieved astonishing results in complex reasoning tasks, such as mathematical reasoning (Cobbe et al., 2021; Mishra et al., 2022) and logical reasoning (Yu et al., 2020; Liu et al., 2023). Moreover, some studies suggest that the chain-of-thought prompting can further enhance the model’s capabilities in complex reasoning scenarios (Wei et al., 2022; Kojima et al., 2022; Chu et al., 2024; Zhang et al., 2023). 8 Conclusion Temporal reasoning entails inherent diversity and complexity. The lack of a comprehensive benchmark makes it challenging to quantify LLMs’ temporal reasoning capabilities. In this work, we present TIMEBENCH, a comprehensive and hierarchical benchmark for LLM temporal reasoning, tailored to mirror temporal reasoning in complex scenarios. We conduct extensive experiments on state-of-the-art LLMs to investigate their temporal reasoning capabilities. Our findings indicate a substantial gap between state-of-the-art LLMs and human performance, emphasizing the need for further research in this area. Moreover, we provide a meticulous analysis and discussion, outlining the current challenges that models face and suggesting potential directions for improvement. 1212 Limitations TimeBench is a comprehensive benchmark to quantify the temporal reasoning capabilities of LLMs. While we have taken various factors into account, there are a few limitations. Firstly, our evaluation only applied prompt-based method under zero-shot and few-shot setting, lacking evaluations specifically tailored for models fine-tuned on the temporal domain. Secondly, the instructions and demonstrations were manually crafted, which may potentially lead to discrepancies in prompts interpretation among different LLMs. Thirdly, the dataset constituting the benchmark includes data from past years and a portion sourced from Wikipedia, which may contaminate the training corpus of LLMs. Acknowledgements The research in this article is supported by the National Key Research and Development Project (2021YFF0901602), the National Science Foundation of China (U22B2059, 62276083), and Shenzhen Foundational Research Funding (JCYJ20200109113441941), Major Key Project of PCL (PCL2021A06). Ming Liu is the corresponding author. References Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebrón, and Sumit Sanghai. 2023. GQA: training generalized multi-query transformer models from multi-head checkpoints. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 4895–4901. Association for Computational Linguistics. Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pages 65–72. Lawrence W Barsalou, Léo Dutriaux, and Christoph Scheepers. 2018. Moving beyond the distinction between concrete and abstract concepts. Philosophical Transactions of the Royal Society B: Biological Sciences, 373(1752):20170144. Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150. Tom B. 
Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Wenhu Chen, Xinyi Wang, and William Yang Wang. 2021. A dataset for answering time-sensitive questions. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual. Ziqiang Chen, Shaojuan Wu, Xiaowang Zhang, and Zhiyong Feng. 2023. TML: A temporal-aware multitask learning framework for time-sensitive question answering. In Companion Proceedings of the ACM Web Conference 2023, WWW 2023, Austin, TX, USA, 30 April 2023 - 4 May 2023, pages 200–203. ACM. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An opensource chatbot impressing gpt-4 with 90%* chatgpt quality. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2023. Palm: Scaling language modeling with pathways. J. Mach. Learn. Res., 24:240:1– 240:113. Zheng Chu, Jingchang Chen, Qianglong Chen, Weijiang Yu, Tao He, Haotian Wang, Weihua Peng, Ming Liu, Bing Qin, and Ting Liu. 2024. Navigate through enigmatic labyrinth a survey of chain of thought reasoning: Advances, frontiers and future. In The 62nd Annual Meeting of the Association for Computational 1213 Linguistics: ACL 2024, Bangkok, Thailand, August 11–16, 2024. Association for Computational Linguistics. Zheng Chu, Zekun Wang, Jiafeng Liang, Ming Liu, and Bing Qin. 2023. MTGER: multi-view temporal graph enhanced temporal reasoning over timeinvolved document. In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 15218–15233. Association for Computational Linguistics. 
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction-finetuned language models. Preprint, arXiv:2210.11416. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. CoRR, abs/2110.14168. Bhuwan Dhingra, Jeremy R. Cole, Julian Martin Eisenschlos, Daniel Gillick, Jacob Eisenstein, and William W. Cohen. 2022. Time-aware language models as temporal knowledge bases. Trans. Assoc. Comput. Linguistics, 10:257–273. Vivek Gupta, Pranshu Kandoi, Mahek Bhavesh Vora, Shuo Zhang, Yujie He, Ridho Reinanda, and Vivek Srikumar. 2023. Temptabqa: Temporal question answering for semi-structured tables. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 2431–2453. Association for Computational Linguistics. Rujun Han, Xiang Ren, and Nanyun Peng. 2021. ECONET: effective continual pretraining of language models for event temporal reasoning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 5367–5380. Association for Computational Linguistics. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. Raghav Jain, Daivik Sojitra, Arkadeep Acharya, Sriparna Saha, Adam Jatowt, and Sandipan Dandapat. 2023. Do language models have a common sense regarding time? revisiting temporal commonsense reasoning in the era of large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 6750– 6774, Singapore. Association for Computational Linguistics. Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de Las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7b. CoRR, abs/2310.06825. Dohwan Ko, Ji Soo Lee, Woo-Young Kang, Byungseok Roh, and Hyunwoo Kim. 2023. Large language models are temporal and causal reasoners for video question answering. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 4300–4316. Association for Computational Linguistics. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. In NeurIPS. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81. 
Hanmeng Liu, Zhiyang Teng, Ruoxi Ning, Jian Liu, Qiji Zhou, and Yue Zhang. 2023. Glore: Evaluating logical reasoning of large language models. CoRR, abs/2310.09107. Hector Llorens, Nathanael Chambers, Naushad UzZaman, Nasrin Mostafazadeh, James F. Allen, and James Pustejovsky. 2015. Semeval-2015 task 5: QA tempeval - evaluating temporal information understanding with question answering. In Proceedings of the 9th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2015, Denver, Colorado, USA, June 4-5, 2015, pages 792–800. The Association for Computer Linguistics. Puneet Mathur, Rajiv Jain, Franck Dernoncourt, Vlad Morariu, Quan Hung Tran, and Dinesh Manocha. 2021. TIMERS: Document-level temporal relation extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 524–533, Online. Association for Computational Linguistics. Timothy Miller, Steven Bethard, Dmitriy Dligach, Chen Lin, and Guergana Savova. 2015. Extracting time expressions from clinical text. In Proceedings of BioNLP 15, pages 81–91, Beijing, China. Association for Computational Linguistics. 1214 Swaroop Mishra, Matthew Finlayson, Pan Lu, Leonard Tang, Sean Welleck, Chitta Baral, Tanmay Rajpurohit, Oyvind Tafjord, Ashish Sabharwal, Peter Clark, and Ashwin Kalyan. 2022. LILA: A unified benchmark for mathematical reasoning. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 5807–5832. Association for Computational Linguistics. OpenAI. 2023. GPT-4 technical report. CoRR, abs/2303.08774. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In NeurIPS. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311–318. James Pustejovsky, José M. Castaño, Robert Ingria, Roser Saurí, Robert J. Gaizauskas, Andrea Setzer, Graham Katz, and Dragomir R. Radev. 2003. Timeml: Robust specification of event and temporal expressions in text. In New Directions in Question Answering, Papers from 2003 AAAI Spring Symposium, Stanford University, Stanford, CA, USA, pages 28–34. AAAI Press. Lianhui Qin, Aditya Gupta, Shyam Upadhyay, Luheng He, Yejin Choi, and Manaal Faruqui. 2021. TIMEDIAL: temporal commonsense reasoning in dialog. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 7066–7076. Association for Computational Linguistics. Yifu Qiu, Zheng Zhao, Yftah Ziser, Anna Korhonen, Edoardo M. Ponti, and Shay B. Cohen. 2023. Are large language models temporally grounded? CoRR, abs/2311.08398. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. J. 
Mach. Learn. Res., 21:140:1–140:67. Jungbin Son and Alice Oh. 2023. Time-aware representation learning for time-sensitive question answering. In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 70–77. Association for Computational Linguistics. Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Santilli, Andreas Stuhlmüller, Andrew M. Dai, Andrew La, Andrew K. Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakas, and et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. CoRR, abs/2206.04615. Xin Su, Phillip Howard, Nagib Hakim, and Steven Bethard. 2023. Fusing temporal graphs into transformers for time-sensitive question answering. In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 948–966. Association for Computational Linguistics. Qingyu Tan, Hwee Tou Ng, and Lidong Bing. 2023. Towards benchmarking and improving the temporal reasoning capability of large language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 14820–14835. Association for Computational Linguistics. Shivin Thukral, Kunal Kukreja, and Christian Kavouras. 2021. Probing language models for understanding of temporal expressions. In Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP@EMNLP 2021, Punta Cana, Dominican Republic, November 11, 2021, pages 396–406. Association for Computational Linguistics. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, 1215 Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and finetuned chat models. Preprint, arXiv:2307.09288. Naushad UzZaman, Hector Llorens, Leon Derczynski, James F. Allen, Marc Verhagen, and James Pustejovsky. 2013. 
Semeval-2013 task 1: Tempeval-3: Evaluating time expressions, events, and temporal relations. In Proceedings of the 7th International Workshop on Semantic Evaluation, SemEval@NAACLHLT 2013, Atlanta, Georgia, USA, June 14-15, 2013, pages 1–9. The Association for Computer Linguistics. Siddharth Vashishtha, Benjamin Van Durme, and Aaron Steven White. 2019. Fine-grained temporal relation extraction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2906–2919, Florence, Italy. Association for Computational Linguistics. Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 4566–4575. IEEE Computer Society. Marc Verhagen, Robert J. Gaizauskas, Frank Schilder, Mark Hepple, Graham Katz, and James Pustejovsky. 2007. Semeval-2007 task 15: Tempeval temporal relation identification. In Proceedings of the 4th International Workshop on Semantic Evaluations, SemEval@ACL 2007, Prague, Czech Republic, June 23-24, 2007, pages 75–80. The Association for Computer Linguistics. Marc Verhagen, Roser Saurí, Tommaso Caselli, and James Pustejovsky. 2010. Semeval-2010 task 13: Tempeval-2. In Proceedings of the 5th International Workshop on Semantic Evaluation, SemEval@ACL 2010, Uppsala University, Uppsala, Sweden, July 15-16, 2010, pages 57–62. The Association for Computer Linguistics. Felix Giovanni Virgo, Fei Cheng, and Sadao Kurohashi. 2022. Improving event duration question answering by leveraging existing temporal information extraction data. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, LREC 2022, Marseille, France, 20-25 June 2022, pages 4451– 4457. European Language Resources Association. Yuqing Wang and Yun Zhao. 2023. TRAM: benchmarking temporal reasoning for large language models. CoRR, abs/2310.00835. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In NeurIPS. Yifan Wei, Yisong Su, Huanhuan Ma, Xiaoyan Yu, Fangyu Lei, Yuanzhe Zhang, Jun Zhao, and Kang Liu. 2023. Menatqa: A new dataset for testing the temporal comprehension and reasoning abilities of large language models. In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 1434–1447. Association for Computational Linguistics. Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang, Feng Liu, Guangwei Ai, Guosheng Dong, Haizhou Zhao, Hang Xu, Haoze Sun, Hongda Zhang, Hui Liu, Jiaming Ji, Jian Xie, Juntao Dai, Kun Fang, Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan Nie, Peidong Guo, Ruiyang Sun, Tao Zhang, Tianpeng Li, Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin Yu, Xuehai Pan, Yanjun Shen, Yiding Wang, Yiyu Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan Zhou, and Zhiying Wu. 2023a. Baichuan 2: Open large-scale language models. CoRR, abs/2309.10305. Sen Yang, Xin Li, Lidong Bing, and Wai Lam. 2023b. Once upon a time in graph: Relative-time pretraining for complex temporal reasoning. 
In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 11879–11895. Association for Computational Linguistics. Weihao Yu, Zihang Jiang, Yanfei Dong, and Jiashi Feng. 2020. Reclor: A reading comprehension dataset requiring logical reasoning. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, Weng Lam Tam, Zixuan Ma, Yufei Xue, Jidong Zhai, Wenguang Chen, Zhiyuan Liu, Peng Zhang, Yuxiao Dong, and Jie Tang. 2023. GLM-130B: an open bilingual pre-trained model. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net. Yunxiang Zhang and Xiaojun Wan. 2023. Situatedgen: Incorporating geographical and temporal contexts into generative commonsense reasoning. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023. Zhuosheng Zhang, Yao Yao, Aston Zhang, Xiangru Tang, Xinbei Ma, Zhiwei He, Yiming Wang, Mark Gerstein, Rui Wang, Gongshen Liu, and Hai Zhao. 2023. Igniting language intelligence: The hitchhiker’s guide from chain-of-thought reasoning to language agents. CoRR, abs/2311.11797. 1216 Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. 2023. A survey of large language models. Preprint, arXiv:2303.18223. Ben Zhou, Daniel Khashabi, Qiang Ning, and Dan Roth. 2019. "going on a vacation" takes longer than "going for a walk": A study of temporal commonsense understanding. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 3361– 3367. Association for Computational Linguistics. Ben Zhou, Kyle Richardson, Qiang Ning, Tushar Khot, Ashish Sabharwal, and Dan Roth. 2021. Temporal reasoning on implicit events from distant supervision. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 1361–1371. Association for Computational Linguistics. Xinyu Zhu, Cheng Yang, Bei Chen, Siheng Li, JianGuang Lou, and Yujiu Yang. 2023. Question answering as programming for solving time-sensitive questions. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 12775–12790. Association for Computational Linguistics. A TIMEBENCH Details TIMEBENCH features 3 major categories, 10 tasks and 15 subtasks, each with distinct challenges, totaling 19,000 instances. Detailed statistics are available in Figure 7 and Table 7. A.1 Benchmark Construction TimeX Arithmetic (Tan et al., 2023) TimeX Arithmetic data is derived from the l1: time-time reasoning data in TempReason. We retain 4,000 instances, where time expressions are calculated with a minimum unit of one day. 
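To make the arithmetic instances concrete, the gold answers for questions like those in the chain-of-thought demonstrations later in the appendix (e.g., "4 year and 1 month after Apr, 2000") can be computed mechanically at year/month granularity. The sketch below is purely illustrative and is not the authors' data-generation code; instances at day granularity would additionally require full calendar-date handling.

```python
def shift_month(year, month, delta_years, delta_months, forward=True):
    """Shift a (year, month) time expression by a year/month offset,
    e.g. 4 years and 1 month after Apr 2000 -> May 2004."""
    sign = 1 if forward else -1
    total = year * 12 + (month - 1) + sign * (delta_years * 12 + delta_months)
    return total // 12, total % 12 + 1  # (year, month), month in 1..12

# "What is the time 4 year and 1 month after Apr, 2000?"  -> May, 2004
assert shift_month(2000, 4, 4, 1) == (2004, 5)
# "What is the time 3 year and 4 month before Jun, 1840?" -> Feb, 1837
assert shift_month(1840, 6, 3, 4, forward=False) == (1837, 2)
```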
TimeX NLI (Thukral et al., 2021) The original data of TimeXNLI is in NLI format, comprising three sub-tasks, Temp-Order, Temp-Duration, and Cross-Unit Duration, with 6,140, 3,540, and 15,840 instances respectively. We conduct a random sampling of 2,213, 2,332 and 2,429 entries, resulting in a combined total of 6,965 instances.
MCTACO (Zhou et al., 2019) The original MCTACO dataset consists of yes/no questions, containing 1,332 questions with 9,442 options. To guarantee that the questions are presented in a 4-way multi-select style, we initially remove questions that have fewer than four options. Subsequently, to ensure that each question has at least one correct option, we filter out questions where all options are labeled as "no". For each remaining question, we randomly sample four options, striving to maintain a balance between correct and incorrect options. In most cases, a question is accompanied by 2 correct and 2 incorrect options. A minority of questions have an option distribution of 1-3 or 3-1. After the aforementioned filtering process, we obtain 852 pieces of data in a 4-way multi-select format.
DurationQA (Virgo et al., 2022) The original DurationQA has the same format as MCTACO, which consists of 694 questions with 4,868 options. Following the identical filtration procedure as MCTACO, we finally obtain a collection of 687 questions in a 4-way multi-select format.
TimeDial (Qin et al., 2021) consists of 4-way multi-select instances in a two-person dialogue scenario. We leave the original data unaltered and simply randomize the sequence of options, yielding 1,446 pieces of 4-way multi-select instances.
SituatedGen (Zhang and Wan, 2023) SituatedGen includes 1,220 test cases, which span across two distinct reasoning domains: time and geography. We manually screen the original test data and retain those with clear time features for temporal reasoning evaluation, resulting in 115 instances.
TimeQA (Chen et al., 2021) The original data of TimeQA includes two splits, Easy and Hard, with each question containing 20 Wikipedia paragraphs. The excessively long context may exceed the model’s maximum length limit and incur significant inference overhead. Therefore, we have reduced the context of the original data. For the paragraphs in the original data, we refer to those containing the answer as relevant paragraphs, and the rest as irrelevant paragraphs. For each question, we keep the first paragraph, all relevant paragraphs, and one random irrelevant paragraph as a distractor. This ensures that most questions have at least three paragraphs. After that, we sample 500 pieces of data from those where the context length is less than 650 tokens. For both Easy and Hard splits, we apply the aforementioned filtration, resulting in 500 questions each, totaling 1,000 instances.
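For illustration only, the TimeQA context reduction described above could be sketched as follows; this is not the authors' released code, and the field names and the whitespace token count are assumptions.

```python
import random

def reduce_timeqa_context(example, max_tokens=650):
    """Illustrative sketch of the TimeQA context reduction described above.
    `example` is assumed to expose `paragraphs` (list of str) and `answers` (list of str)."""
    paragraphs, answers = example["paragraphs"], example["answers"]
    # Relevant paragraphs are those containing a gold answer string.
    relevant = {p for p in paragraphs if any(ans and ans in p for ans in answers)}
    kept = [p for i, p in enumerate(paragraphs) if i == 0 or p in relevant]
    irrelevant = [p for p in paragraphs if p not in kept]
    if irrelevant:
        kept.append(random.choice(irrelevant))  # one random distractor paragraph
    context = " ".join(kept)
    # The paper filters by context length (< 650 tokens); the tokenizer is not
    # specified, so a whitespace split stands in for it here.
    return context if len(context.split()) < max_tokens else None
```

Surviving questions are then down-sampled to 500 per split, as described above.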
TempReason (Tan et al., 2023) The TempReason dataset contains 5,397 entries for l2 (event-time reasoning) and 4,426 entries for l3 (event-event reasoning). In the original dataset, each question corresponds with a text context and extracted facts. Similar to TimeQA, we apply a filter based on context length. We preserve questions with a context length between 300 and 600 tokens, yielding 839 and 1,037 instances, respectively. Notably, every remaining question is applicable to either context-based reasoning or fact-based reasoning.
MenatQA (Wei et al., 2023) MenatQA consists of 999 data entries, formatted similarly to TimeQA, where each question is accompanied by several corresponding paragraphs. Following the paper’s proposed method, we modify the original data by incorporating the three time-sensitive factors: scope, order, and counterfactual. Subsequently, for each factor, we randomly sample 400 instances, resulting in a total of 1,200 data points.
TRACIE (Zhou et al., 2021) The original TRACIE dataset consists of yes/no type questions, containing 4,248 test instances. We randomly sample 500 instances from the iid split of the test set.
A.2 Human Performance Evaluation
Unless otherwise stated, the results of human evaluation are derived from the original dataset papers. Please refer to the corresponding paper for human evaluation details. TimeXNLI, Date Arith, and MCTACO are manually evaluated by three authors from the TimeBench team. Within each subtask, we randomly sample 50 instances, and the average of the performances by the three human evaluators is considered the final human performance.
A.3 Task Formats
TIMEBENCH is a multispectral benchmark, which features four different task formats.
Multi-Select Questions Previous work utilizes the Multiple Choice (MC) form, which requires models to select the only correct answer from the options. However, this task form has shortcuts and may not truly reflect the model’s abilities. To address this, we employ the Multi-Select (M-S) task form, where the model needs to select all possible correct answers from the options provided. In our task, each question presents four options, with at most two of them being correct.
Natural Language Inference is the task of determining the logical relationship between two pieces of text. Specifically, given a premise and a hypothesis, the model needs to determine whether the hypothesis can be inferred from the premise and output entailment, contradiction, or neutral. Our tasks focus on entailment in temporal domains.
Free-form Reading Comprehension requires models to answer questions based on the provided context, and the ground truth answer is free-form without pre-defined format restrictions.
Constrained Text Generation refers to the task of generating text under certain constraints. The task is keyword-constrained text generation, where the model takes keywords as input and outputs sentences that include those keywords.
A.4 Evaluation Metrics
Accuracy is used for NLI and date arithmetic tasks. M-S tasks are evaluated using option-level EM and F1. FRC tasks (excluding date arithmetic) are assessed with token-level EM and F1. For the CTG task, we take the average of multiple generation metrics, which are outlined as follows.
Metrics for SituatedGen Following SituatedGen (Zhang and Wan, 2023), we use BLEU-4 (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), ROUGE-L (Lin, 2004), CIDEr (Vedantam et al., 2015), and MATCH (Zhang and Wan, 2023) scores to measure the results of CTG.3 The overall score is calculated as the sum of the above scores. We set the weight of CIDEr to 1/10 for balancing when summing:
S = BLEU-4 + METEOR + ROUGE-L + CIDEr/10 + MATCH
As the overall score S does not represent a percentile, we proceed to normalize the models’ scores to align with humans’ relative performance levels.
3We utilize the pycocoevalcap package to calculate BLEU-4, METEOR, ROUGE-L, and CIDEr.
B Supplemental Materials
B.1 Models
ChatGPT-3.5/GPT-4 (Ouyang et al., 2022; OpenAI, 2023) ChatGPT is a chat model aligned
1218 Order 2213 Duration 2332 Convention 2420 Explicit 500 Implicit 500 Order 400 Scope 651 Counterfactual 548 Event to Time 839 Event to Event 1037 Symbolic TimeX Arith 4000 Time NLI Common Sense McTACO 852 TimeDial 1446 DurationQA 687 Event Temporal TimeQA MenatQA Temp Reason TRACIE 500 Symbolic Common Sense Event Temporal SituatedGen 115 Figure 7: The quantity and proportion of data for each task and its respective subtasks within TIMEBENCH. through SFT and RLHF based on GPT-3 (Brown et al., 2020). GPT-4 is an upgraded version of ChatGPT with enhanced reasoning capabilities, making it the most powerful LLM. Unless otherwise stated, ChatGPT refers to gpt-3.5-turbo-0613 and GPT-4 refers to gpt-4-0613. Llama2/Vicuna-1.5 (Touvron et al., 2023; Chiang et al., 2023) LLaMA2 is an open foundation model trained on 2T tokens with efficient groupedquery attention (Ainslie et al., 2023). LLaMA2chat is the official aligned model with SFT and RLHF, and Vicuna-1.5 is aligned with SFT only by the community4. Baichuan2 (Yang et al., 2023a) is an open foundation model pre-trained on 2.6T tokens, which is competitive with LLaMA2. Baichuan2-chat is the official aligned model with SFT and RLHF. Mistral (Jiang et al., 2023) is a 7B open foundation model incorporating efficient grouped-query attention (Ainslie et al., 2023) and sliding windows attention (Beltagy et al., 2020). It achieves the strongest performance among models of its size, even surpassing LLaMA2-13B. Mistral-instruct is the officially aligned model with SFT only. ChatGLM3 (Zeng et al., 2023) is an open-source bilingual LLM for Chinese and English, exhibiting competitive performance under 10B. FLAN-T5 (Chung et al., 2022) is an open-source instruction model built on top of T5 (Raffel et al., 2020) through instruction fine-tuning. 4https://lmsys.org/ B.2 Full Results The overall score is derived from the average of all corresponding metrics. For brevity, we omit some F1 scores in tables in the main text. Please refer to Table 9 for the full experimental results. The full results of SituatedGen can be found in Table 8. B.3 Prompts The prompt formats are showcased in Figure 9. The demonstrations can be found from Figure 10 to 18. 1219 Symbolic Commonsense Event Overall 0 20 40 60 80 77.0 76.7 62.4 68.5 60.3 62.6 53.3 57.4 50.4 61.0 36.8 44.1 33.9 43.1 47.2 42.6 33.8 27.8 38.3 34.3 37.9 56.3 49.8 48.0 31.6 49.9 42.1 39.7 41.6 47.5 31.4 37.3 31.0 43.3 40.7 39.0 Model GPT-4 GPT-3.5 LLaMA270b LLaMA213b LLaMA27b Baichuan213b Baichuan27b Mistral7b ChatGLM36b Symbolic Commonsense Event Overall 0 20 40 60 80 85.0 84.1 66.5 73.7 61.1 73.6 56.7 59.7 60.9 77.6 62.4 64.4 47.5 59.9 57.4 54.5 37.6 55.4 49.4 47.4 46.8 59.6 56.3 54.2 41.6 57.0 49.5 48.5 52.2 64.5 56.0 55.4 48.2 58.2 46.6 49.3 Model GPT-4 GPT-3.5 LLaMA2† 70b LLaMA2† 13b LLaMA2† 7b Baichuan2† 13b Baichuan2† 7b Mistral† 7b ChatGLM3† 6b Figure 8: Performance comparison between state-of-the-art LLMs. Up: GPT-4/3.5 and alignment models under zero-shot setting. Down: GPT-4/3.5 and base models under few-shot setting. 
Dataset Format # Challenges Symbolic TimeX Arith FRC 4,000 TimeX Arithmetic TimeX NLI NLI 6,965 TimeX Causality - Order 2,213 order - Duration 2,332 duration - Conversion 2,420 duration + time unit conversion Commonsense MCTACO M-S 852 Temporal Commonsense TimeDial M-S 1,446 Temporal Commonsense DurationQA M-S 687 Event Duration SituatedGen CTG 115 Temporal Commonsense Event TimeQA FRC 1,000 Context-based Reasoning - Explicit 500 explicit, event-time reasoning - Implicit 500 implicit, event-time reasoning MenatQA FRC 1,599 Implicit, Context-based Reasoning - Order 400 event-time reasoning - Scope 400 event-time reasoning - Counterfactual 400 event-time reasoning TempReason FRC 1,876 Implicit, Fact-based Reasoning - l2 (e2t) 839 event-time reasoning - l3 (e2e) 1,037 event-event reasoning TRACIE NLI 500 Implicit, Implied Event-Event Reasoning In total 19,000 Table 7: The statistics, task formats and challenges in TIMEBENCH. 1220 Method BLEU-4 METEOR ROUGE-L CIDEr MATCH Overall Norm Human 39.9 40.4 56.3 397 98.1 274.4 100.0 GPT-4 8.23 31.27 28.84 38.45 90.41 162.59 59.25 + FS 28.64 38.99 55.69 298.64 90.11 243.29 88.66 GPT-3.5 13.38 30.12 35.91 125.41 78.76 170.70 62.21 + FS 27.24 33.77 51.18 282.75 76.54 217.01 79.08 LLaMA270b 5.15 13.62 15.83 22.07 31.79 68.60 25.00 + FS 19.10 29.09 41.74 171.36 65.29 172.35 62.81 LLaMA213b 4.66 21.43 20.80 17.72 61.62 110.28 40.19 + FS 15.15 27.49 37.55 138.13 64.94 158.93 57.92 LLaMA27b 2.77 13.46 14.69 14.34 34.83 67.18 24.48 + FS 6.90 15.82 21.77 52.99 33.81 83.60 30.47 Baichuan213b 8.33 25.86 30.07 82.63 70.63 143.15 52.17 + FS 15.79 30.23 40.96 169.14 71.01 174.91 63.74 Baichuan27b 5.17 21.99 23.73 44.80 59.85 115.22 41.99 + FS 15.06 23.45 32.29 137.94 52.04 136.64 49.79 Vicuna1.513b 7.73 26.35 29.15 69.16 71.91 142.06 51.77 + FS 6.85 18.66 25.99 92.96 46.19 106.99 38.99 Vicuna1.57b 6.29 24.34 26.91 46.90 68.84 131.07 47.77 + FS 20.71 30.19 45.20 203.20 67.58 184.00 67.05 FLAN-T5 16.20 24.43 29.38 95.17 56.38 135.91 49.53 + FS 12.88 30.38 36.27 92.20 76.44 165.19 60.20 Mistral7b 5.82 22.89 24.19 44.03 63.74 121.03 44.11 + FS 18.96 29.02 43.15 185.61 63.24 172.93 63.02 ChatGLM36b 6.56 21.11 21.96 41.48 53.02 106.80 38.92 + FS 10.53 24.17 33.44 124.50 56.94 137.53 50.12 LLaMA2† 70b 22.34 33.03 50.93 243.31 74.96 205.59 74.92 LLaMA2† 13b 17.54 29.44 45.21 200.14 65.64 177.84 64.81 LLaMA2† 7b 17.49 28.33 45.24 202.08 59.98 171.25 62.41 Baichuan2† 13b 17.86 29.75 44.28 198.83 66.35 178.12 64.91 Baichuan2† 7b 15.30 27.54 41.80 171.59 62.40 164.20 59.84 Mistral† 7b 14.54 27.39 41.72 168.89 59.42 159.96 58.30 ChatGLM3† 6b 17.11 29.35 40.74 156.49 66.18 169.02 61.60 Table 8: Full results of SituatedGen. Aligned models are under zero-shot setting by default. The top-3 results are bold. Methods with † are base models without alignment, under few-shot setting. We consider human performance as 100 points and normalize models’ results accordingly. 1221 Method Symbolic Commonsense Event Overall TimeXNLI Date Arith DurationQA McTACO TimeDial SitGen TimeQA MenatQA TempReason TRACIE Sym. Comm. Event Avg. 
Table 9: Full results of TimeBench. Rows cover human performance and each evaluated model (GPT-4, GPT-3.5, LLaMA-2 70b/13b/7b, Baichuan 2 13b/7b, Vicuna 1.5 13b/7b, FLAN-T5 11b, Mistral 7b, ChatGLM3 6b, and the corresponding base models), each under zero-shot, chain-of-thought (CoT), few-shot (FS), and few-shot CoT settings. Columns report EM/F1 or accuracy per task (TimeXNLI s1-s3, Date Arithmetic, DurationQA, McTACO, TimeDial, SituatedGen, TimeQA, MenatQA, TempReason, TRACIE) together with Symbolic, Commonsense, Event, and Average aggregates. Aligned models are under the zero-shot setting by default. Methods with † are base models without alignment, under the few-shot setting, and thus incomparable with other methods. We consider human performance as 100 points and normalize models' results accordingly.

Answer the following question, select all the possible correct options, and each question has at least one correct option.
Context: {} Question: {} Options: {} Answer: DURATIONQA, MCTACO There is a two-person dialogue with several options. Choose all appropriate options to substitute the <mask> in the dialogue, and each question has at least one correct option. Dialogue: {} Options: {} Answer: TIMEDIAL Read the following story and hypothesis, determine whether the hypothesis can be inferred from the story. You need to understand the implicit temporal relationships between events to make judgments. Story: {} Hypothesis: {} Options: A. Entailment B. Contradiction Answer: TRACIE Generate a pair of contrastive sentences with the given set of keywords. Keywords: {} SITUATEDGEN Question: {}? Answer: DATE ARITHMETIC I will give you a question with context. You need to answer my question based on the context. If you can infer the answer from the context, then output your answer. Otherwise, if there is no answer, output [unanswerable]. Context: {} Question: {} Answer: TIMEQA I will give you a question with context. You need to answer my question based on the context. Context: {} Question: {} Answer: TEMPREASON Get answers for the question based on the contxt, where answers derived from substrings in the context or categorized as [unanswerable]. Context: {} Question: {} Answer: MENATQA Read the following statements about time and determine if the hypothesis can be inferred from the premise. Premise: {} Hypothesis: {} Options: A. Entailment B. Contradiction C. Neutral Answer: TIMEX-NLI Figure 9: Zeroshot instructions and input formats. 1224 Answer the following question, select all the possible correct options, and each question has at least one correct option. Premise: On Wednesday, they got married. Hypothesis: Before Friday, they got married. Options: A. Entailment B. Contradiction C. Neutral Answer: Wednesday is before Friday. As a result, we can infer that if something happens on Wednesday, it definitely happens before Friday. Therefore, the answer is A. Entailment. Premise: We went to Disneyland on Monday. Hypothesis: We went to Disneyland after Wednesday. Options: A. Entailment B. Contradiction C. Neutral Answer: Monday is before Wednesday. As a result, We can infer that if something happens on Monday, it definitely can not happen after Wednesday. Therefore, the answer is B. Contradiction. Premise: The failing company issued major layoffs after Tuesday. Hypothesis: The failing company issued major layoffs after Thursday. Options: A. Entailment B. Contradiction C. Neutral Answer: Tuesday is before Thursday. If something happened after Tuesday, we cannot be certain whether it occurred after Thursday. Therefore, the answer is C. Neutral. CoT Demonstration of TIMEX-NLI (3-shot, order) Figure 10: Chain-of-Thought demonstrations of TimeX-NLI (s1-order). Question: What is the time 4 year and 1 month after Apr, 2000? Answer: First, 4 years after 2000 is 2004. Next, 1 month after April is May. Therefore, 4 year and 1 month after Apr, 2000 is May, 2004. Question: What is the time 3 year and 4 month before Jun, 1840? Answer: First, subtracting 3 years from 1840 gives 1837. Next, subtracting 4 months from June gives February. Therefore, 3 year and 4 month before Jun, 1840 is Feb, 1837. Question: What is the time 7 year and 11 month after Feb, 1819? Answer: First, 7 years after 1819 is 1826. Next, 11 months after February is January of the next year. Therefore, 7 years and 11 months after Feb, 1819 is Jan, 1827. Question: What is the time 6 year and 9 month before Jan, 1234? 
Answer: First, subtracting 6 years from 1234 gives 1228. Next, subtracting 9 months from January gives April of the previous year. Therefore, 6 year and 9 month before Jan, 1234 is Apr, 1227. CoT Demonstration of DATE ARITHMETIC (4-shot) Figure 11: Chain-of-Thought demonstrations of Date Arithmetic. Read the following story and hypothesis, determine whether the hypothesis can be inferred from the story. You need to understand the implicit temporal relationships between events to make judgments ...... Story: Joe was a police officer. Joe was patrolling the streets of the city in his cruiser. ¨Suddenly, Joe was alerted to a crime happening near him by dispatch.¨Joe responded to the scene and found a bank robber fleeing on foot. Joe arrested the criminal and was promoted. Hypothesis: Joe put on his police uniform. starts after Joe arrest the criminal Options: A. Entailment B. Contradiction Answer: From the story we know Joe was patrolling. In the work state, Joe has already put on the police uniform. So we can infer that Joe put on his police uniform before arresting the criminal. This conflicts with hypothesis. Therefore, the answer is B. Contradiction. CoT Demonstration of TRACIE (4-shot) Figure 12: Chain-of-Thought demonstrations of TRACIE. 1225 Answer the following question, select all the possible correct options, and each question has at least one correct option. ...... Context: actually i have an project on it so please give me as much as you have information about migratory birds in punjab Question: How long did it take for them to have information about migratory birds in punjab? Options: A. several months B. 12 weeks C. a few minutes D. almost instantly Answer: This is a conversation scenario. In the conversation, providing relevant information about migratory birds in punjab to him is in real-time and takes very little time. Therefore, the answer is C. a few minutes, D. almost instantly. Context: Hope she stops laying eggs because she will get really skinny ! Question: How long did it take for her to lay eggs? Options: A. 1 week B. 22 hours C. 2 years D. 4 years Answer: According to commonsense knowledge, the time it takes for birds to lay eggs typically varies from one day to several days. Therefore, the answer is A. 1 week, B. 22 hours. CoT Demonstration of DURATIONQA (4-shot) Figure 13: Chain-of-Thought demonstrations of DurationQA. Answer the following question, select all the possible correct options, and each question has at least one correct option. ...... Context: She ordered the tastiest kind of each vegetable and the prettiest kind of each flower. Question: How often does she order vegetables and flowers? Options: A. once a second B. three days a week C. every 10 centuries D. once a week Answer: According to commonsense knowledge, ordering vegetables and flowers typically happens on a regular basis, usually every few days. Therefore, the answer is B. three days a week, D. once a week. Context: Wallace, 38, called Gastonia home from the age of 8 until she graduated from Hunter Huss High School in 1983. Question: When did Wallace wake up for high school? Options: A. at 6 am B. at 1 am C. 7:00 AM D. at 6 pm Answer: According to commonsense knowledge, waking up for high school typically happens in the morning, usually between 6 AM and 8 AM. Therefore, the answer is A. at 6 am, C. 7:00 AM. CoT Demonstration of MCTACO (4-shot) Figure 14: Chain-of-Thought demonstrations of MCTACO. 1226 There is a two-person dialogue with several options. 
Choose all appropriate options to substitute the <mask> in the dialogue, and each question has at least one correct option. ...... Dialogue: A:What schools have you attended ? B: I finished Young Primary School in 1998 , and entered Xi ’ an Middle School that same September . I graduated from there in <MASK> , and that September I entered Wuhan University , where I’m studying now . A: How do you think the education you have received will contribute to your work in this company ? B: I think I have a good understanding of fundamentals in the areas your company deals with , and I can go on from here to build up the specific skills and knowledge I need to do my job well . A: Your graduation thesis was on Medical Application of Laser , right ? What were your conclusions ? B: Yes . I did some work on that , and I found out some really interesting things about the conductivity of liquid helium . I was sure I had a great discovery until my teacher told me the same discovery already made twenty years ago . I think the most important thing , I learnt though , was the importance of keeping good records . Options: A. 1998 B. July of 2004 C. March of 2003 D. twenty years ago Answer: Based on the dialogue, B entered middle school in Sep 1998. According to commonsense knowledge, it usually takes around 6 years from entering middle school to graduating from high school (and entering university). Adding 6 years to 1998 would be 2004, so the answer should be around the year 2004. Therefore, the answer is B. July of 2004, C. March of 2003. CoT Demonstration of TIMEDIAL (4-shot) Figure 15: Chain-of-Thought demonstrations of TimeDial. I will give you a question with context. You need to answer my question based on the context. If you can infer the answer from the context, then output your answer. Otherwise, if there is no answer, output [unanswerable] ...... Context: Theo-Ben Gurirab Theo-Ben Gurirab ( 23 January 1938 ˘2013 14 July 2018 ) was a Namibian politician who served in various senior government positions . He served as the second Prime Minister of Namibia from 28 August 2002 to 20 March 2005 , following the demotion and subsequent resignation of Hage Geingob . Previously he was the countrys first Minister of Foreign Affairs from 1990 to 2002 , and was President of the United Nations General Assembly from 1999 to 2000 . He was Speaker of the National Assembly of Namibia from 2005 to 2015 , when he was replaced by Peter Katjavivi . Gurirab ultimately resigned from politics in 2015 . Death . Gurirab died at a Windhoek hospital on 14 July 2018 of natural causes . He is buried at Heroes Acre . Question: Theo-Ben Gurirab took which position after Jan 2007? Answer: Based on the context, we can summarize the following facts: Theo-Ben Gurirab served as second Prime Minister of Namibia from August 2002 to March 2005. Prior to that, he was the countrys first Minister of Foreign Affairs from 1990 to 2002 and and was President of the United Nations General Assembly from 1999 to 2000. From 2005 to 2015, he held the position of Speaker of the National Assembly of Namibia. He resigned from politics in 2015 and passed away in July 2018. According to the aforementioned facts, he took the position of Speaker of the National Assembly of Namibia in January 2007. Therefore, the answer is Speaker of the National Assembly of Namibia. CoT Demonstration of TIMEQA, MENATQA (2-shot, implicit) Figure 16: Chain-of-Thought demonstrations of TimeQA, MenatQA, implicit reasoning. 1227 I will give you a question with context. 
You need to answer my question based on the context. ...... Context (facts): Gian Piero Gasperini is the head coach of Atalanta B.C. from Jun, 2016 to Dec, 2022. Edoardo Reja is the head coach of Atalanta B.C. from Mar, 2015 to Jun, 2016. Stefano Colantuono is the head coach of Atalanta B.C. from Jun, 2010 to Mar, 2015. Bortolo Mutti is the head coach of Atalanta B.C. from Jan, 2010 to Jun, 2010. Emiliano Mondonico is the head coach of Atalanta B.C. from Jul, 1987 to Jun, 1990. Marcello Lippi is the head coach of Atalanta B.C. from Jul, 1992 to Jun, 1993. Angelo Gregucci is the head coach of Atalanta B.C. from Jul, 2009 to Sep, 2009. Luigi Delneri is the head coach of Atalanta B.C. from Jul, 2007 to Jun, 2009. Ottavio Bianchi is the head coach of Atalanta B.C. from Jul, 1981 to Jun, 1983. Antonio Conte is the head coach of Atalanta B.C. from Sep, 2009 to Jan, 2010. Nedo Sonetti is the head coach of Atalanta B.C. from Jul, 1983 to Jun, 1987. Valter Bonacina is the head coach of Atalanta B.C. from Jan, 2010 to Jan, 2010. Question: Who was the head coach of the team Atalanta B.C. in Feb, 2016? Answer: According to the context, Edoardo Reja was the head coach of Atalanta B.C. from Mar, 2015 to Jun, 2016. In Feb 2016, the head coach of the team Atalanta B.C. is Edoardo Reja. Therefore, the answer is Edoardo Reja. CoT Demonstration of TEMPREASON (4-shot, event-time) Figure 17: Chain-of-Thought demonstrations of TempReason, event-time reasoning. I will give you a question with context. You need to answer my question based on the context. ...... Context (facts): Nicholas Macpherson holds the position of Member of the House of Lords from Oct, 2016 to Dec, 2022. Nicholas Macpherson holds the position of Principal Private Secretary to the Chancellor of the Exchequer from Jan, 1993 to Jan, 1997. Nicholas Macpherson holds the position of Permanent Secretary to the Treasury from Aug, 2005 to Jan, 2016. Question: Which position did Nicholas Macpherson hold before Member of the House of Lords? Answer: According to the context, Nicholas Macpherson holds the position of Permanent Secretary to the Treasury from Aug, 2005 to Jan, 2016. Afterthat, Nicholas Macpherson holds the position of Member of the House of Lords from Oct, 2016 to Dec, 2022. Nicholas Macpherson hold the position of Permanent Secretary to the Treasury before Member of the House of Lords. Therefore, the answer is Permanent Secretary to the Treasury." CoT Demonstration of TEMPREASON (4-shot, event-event) Figure 18: Chain-of-Thought demonstrations of TempReason, event-event reasoning. 1228
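The zero-shot instruction formats above (Figure 9) use brace placeholders for the task fields. The following is a minimal sketch of how such templates might be instantiated for evaluation; the template dictionary, field names, and helper function are illustrative assumptions rather than released code.

# Minimal sketch: filling zero-shot templates like those in Figure 9.
# Template strings and field names here are assumptions for illustration.

TEMPLATES = {
    "timex_nli": (
        "Read the following statements about time and determine if the hypothesis "
        "can be inferred from the premise.\n"
        "Premise: {premise}\nHypothesis: {hypothesis}\n"
        "Options: A. Entailment B. Contradiction C. Neutral\nAnswer:"
    ),
    "timeqa": (
        "I will give you a question with context. You need to answer my question "
        "based on the context. If you can infer the answer from the context, then "
        "output your answer. Otherwise, if there is no answer, output [unanswerable].\n"
        "Context: {context}\nQuestion: {question}\nAnswer:"
    ),
}

def build_prompt(task: str, **fields: str) -> str:
    """Instantiate a zero-shot prompt for one example of the given task."""
    return TEMPLATES[task].format(**fields)

if __name__ == "__main__":
    prompt = build_prompt(
        "timex_nli",
        premise="On Wednesday, they got married.",
        hypothesis="Before Friday, they got married.",
    )
    print(prompt)

The same pattern extends to the few-shot and CoT settings by prepending demonstrations such as those in Figures 10-18 before the test instance.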
2024
66
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12215–12229 August 11-16, 2024 ©2024 Association for Computational Linguistics Multipath parsing in the brain Berta Franzluebbers University of Georgia [email protected] Donald Dunagan University of Georgia [email protected] Miloš Stanojevi´c Google DeepMind [email protected] Jan Buys University of Cape Town [email protected] John T. Hale University of Georgia [email protected] Abstract Humans understand sentences word-by-word, in the order that they hear them. This incrementality entails resolving temporary ambiguities about syntactic relationships. We investigate how humans process these syntactic ambiguities by correlating predictions from incremental generative dependency parsers with timecourse data from people undergoing functional neuroimaging while listening to an audiobook. In particular, we compare competing hypotheses regarding the number of developing syntactic analyses in play during word-by-word comprehension: one vs more than one. This comparison involves evaluating syntactic surprisal from a state-of-the-art dependency parser with LLM-adapted encodings against an existing fMRI dataset. In both English and Chinese data, we find evidence for multipath parsing. Brain regions associated with this multipath effect include bilateral superior temporal gyrus. 1 Introduction A major unsolved problem in computational psycholinguistics is determining whether human sentence comprehension considers a single analysis path1 at a time, or whether it sometimes entertains multiple lines of reasoning about the structure of a sentence. These lines of reasoning typically correspond to syntactic ambiguities (e.g. between Subject and Modifier as shown in Figure 1). Multipath accounts of ambiguity resolution (e.g. Gibson, 1991; Jurafsky, 1996) are quite different from single-path accounts (e.g. Frazier and Fodor, 1978; Marcus, 1980). The debate within cognitive science has focused on the special case of garden path sentences leaving open the broader question: Is human comprehension ever multipath in the sense of Ranked Parallel parsing? If it is, where in the brain does such multipath parsing happen? 1The “paths” terminology (vs “serial” or “parallel”) serves to categorize the cognitive issue as one of process rather than architecture (see Lewis, 2000b, §3.3.3). The desert is det nsubj The desert sand det nmod Figure 1: A sentence fragment from our stimulus text showing temporary syntactic ambiguity about the correct relationship between The and desert, which is resolved by hearing the next word (blue). We take up the call to answer this question (e.g. Lewis, 2000a; Gibson and Pearlmutter, 2000) by combining broad coverage incremental parsing (e.g. Nivre, 2004) with information-theoretical complexity metrics to derive predictions about neuroimaging. Analyzing an entire book (Li et al., 2022) makes it possible to examine – for the first time – all of the ambiguities that are implicated by a given conception of syntax and a given parsing strategy – not just those attested in classic garden path materials (Mason et al., 2003; Hopf et al., 2003; Rodd et al., 2010). We extend the state of the art in incremental generative dependency parsing by using a large language model (BLOOM; Le Scao et al., 2022b) together with parameter efficient fine-tuning using adapters (Pfeiffer et al., 2020). 
We fix the parsing strategy while varying the number of allowable analysis paths and connect neural data with parser actions via surprisal (for a review of this information-theoretical metric see Hale, 2016, or neuroimaging studies such as Brennan et al., 2016; Henderson et al., 2016; Shain et al., 2020; Brennan et al., 2020). This sets up a contrast between a multipath model that considers (at most) five paths at a time, versus a single-path model that can only consider one at a time. The results, reported in section 5, ultimately support the multipath view, as the surprisals from the five-way parser are better correlated with the neuroimaging data. This obtains for both English and Chinese.

Action      Before              After                 Arc       Probability
Shift       (σ|i, j)            (σ|i|j, j+1)                    p_tr(sh | h_i, h_j) · p_gen(w_{j+1} | h_i, h_j)
Left-arc    (σ|i, j)            (σ, j)                j → i     p_tr(re | h_i, h_j) · p_dir(la | h_i, h_j)
Right-arc   (σ|l|i, j)          (σ|l, j)              l → i     p_tr(re | h_i, h_j) · p_dir(ra | h_i, h_j)

Table 1: The arc-hybrid transition system of Kuhlmann et al. (2011) defines the possible parser actions (shift, left-arc, and right-arc) as transitions from previous to current parser states. States are indicated by (stack, current index) tuples, e.g. w_i is the token on top of the stack (σ), and w_j is the current index token at time step j. The probabilities associated with these actions are decomposed into complementary shift (sh) and reduce (re) transitions, and complementary left-arc (la) and right-arc (ra) arc directions. The shift action also includes the probability of generating the next token w_{j+1}.

2 Related Work

This paper follows a line of research which aims to characterize human language processing by evaluating word-by-word difficulty predictions against neuroimaging data from the brain. Hale et al. (2022) reviews many studies that fit into this tradition. Most of them consider just a single gold-standard or single system-assigned analysis. Hale et al. (2018) and Crabbé et al. (2019) are notable exceptions to this general trend because they derive predictions from multiple analyses that would be considered as part of a beam. Hale and Crabbé work with phrase structure. By contrast, the present study uses dependency parsing. This choice is motivated by prior work in neurolinguistics (e.g. Bornkessel-Schlesewsky and Schlesewsky, 2015). For instance, Li and Hale (2019) relate a dependency-based structural distance metric to hemodynamic activity in left posterior temporal lobe, among other brain areas. Their study used spoken English materials. Lopopolo et al. (2021) apply a related metric to Dutch. Oota et al. (2023) use graph neural networks to embed dependency analyses of sentence-initial substrings. Analyzing brain responses to written, rather than spoken, stimuli via the encoding approach of Reddy and Wehbe (2021), Oota et al. identify many of the same brain areas as the studies mentioned above.

The single-path vs multipath question is itself motivated by prior work with eyetracking data. By varying the number of paths available to a parser, Boston et al. (2011) find support for Ranked Parallel parsing (in the sense of Gibson 1991 and Jurafsky 1996, further characterized below in §3.4).

Apart from these cognitive considerations, a complementary motivation for this research is to investigate the parsing improvements that come by leveraging large language models (LLMs; see accuracy scores in Table 2). Such models are trained on datasets that far exceed classic treebanks in size. This allows LLMs to capture distributional regularities at a vast scale.
However, it is challenging to explain at an algorithmic level what these models are doing. Instead of deriving a processing complexity metric directly from the output of a large language model, or correlating LLM internal representations with fMRI data, as in Schrimpf et al. (2021) and Caucheteux and King (2022), this work uses initial substring encodings from an LLM to inform an incremental parser. Our project also differs from Eisape et al. (2022), who probe LLM representations with a view towards inferring a single, unlabelled dependency analysis. In contrast, we use LLM representations to score the set of all possible alternative analyses. Our scientific goal is to relate such parser states to human brain states, as observed via neuroimaging. 3 Dependency Parsing 3.1 Parser Architecture The construction of our dependency parser relies on the work of Buys and Blunsom (2018), which employs the transition-based parsing system detailed in Nivre (2008) and exactly enumerates all parser paths. Following Kuhlmann et al. (2011), dynamic programming is used to sum over all paths. We use the arc-hybrid transition system shown in Table 1 because it showed good parsing performance in Buys and Blunsom (2018). The generative parser assigns a probability to each of the words in the sentence incrementally, like any other language model, in contrast to discriminative parsers which only predict transition actions. The shift action predicts the next word on the buffer, which corresponds to the buffer-next model in Buys and Blunsom (2018). We update Buys and Blunsom by encoding sequences not with an LSTM but with representations 12216 from the BigScience Large Open-science Openaccess Multilingual Language Model (BLOOM; Le Scao et al., 2022b), chosen based on its openaccess status and training on both English and Chinese data. These BLOOM representations encode an entire initial substring up to and including a particular word (or subword token). A set of classifiers over these substring representations estimates the probability of each transition, word and arc label. The classifiers correspond to the subscripted probabilities in the rightmost column of Table 1. We use the pre-trained BLOOM-560 model. Larger BLOOM models showed no significant increase in accuracy. This corroborates Oh and Schuler (2023) and Pasquiou et al. (2023), who find that smaller models provide an equal (or better) fit to human data. Instead of fine-tuning the entire pre-trained BLOOM model on dependency parsing, we apply a Pfeiffer adapter after each layer (Pfeiffer et al., 2020). This bottleneck adapter introduces a new linear layer which reduces the dimension from 1024 down to 64 and back up to 1024, for input into the next pre-trained BLOOM layer. The adapter approach enables parameter-efficient fine-tuning while preserving the original language modelling representations as much as possible in the generative parser. The goal of these design choices is to deliver linguistically plausible dependency analyses of sentence-initial substrings. Some aspects of the overall architecture play the role of auxiliary hypotheses that do not map on to the brain. As subsection 3.5 lays out in further detail, the cognitive claim is limited to proposals that (a) human parsing involves recognizing dependency relations via a schema like Table 1 and (b) surprising parser actions, within this schema, call for greater hemodynamic resources than do unsurprising ones. Proposal (a) is situated at Marr’s middle level of analysis (Marr, 1982). 
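To make the transition schema in Table 1 concrete, the sketch below enumerates arc-hybrid transitions over (stack, buffer-index) states and scores them with the factored probabilities. It is an illustrative reimplementation under stated assumptions: all names are invented here, and the probability arguments stand in for the classifier outputs p_tr, p_dir, and p_gen, which in the actual parser are conditioned on BLOOM prefix encodings (h_i, h_j).

import math
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class State:
    stack: Tuple[int, ...]                  # sigma; rightmost element is the top (w_i)
    j: int                                  # buffer-front index (w_j)
    arcs: Tuple[Tuple[int, int], ...] = ()  # (head, dependent) pairs
    logprob: float = 0.0                    # running log path probability

def shift(s: State, p_tr_sh: float, p_gen_next: float) -> State:
    # (sigma|i, j) -> (sigma|i|j, j+1); the generative parser also scores w_{j+1}
    return State(s.stack + (s.j,), s.j + 1, s.arcs,
                 s.logprob + math.log(p_tr_sh) + math.log(p_gen_next))

def left_arc(s: State, p_tr_re: float, p_dir_la: float) -> State:
    # (sigma|i, j) -> (sigma, j); adds the arc j -> i
    i = s.stack[-1]
    return State(s.stack[:-1], s.j, s.arcs + ((s.j, i),),
                 s.logprob + math.log(p_tr_re) + math.log(p_dir_la))

def right_arc(s: State, p_tr_re: float, p_dir_ra: float) -> State:
    # (sigma|l|i, j) -> (sigma|l, j); adds the arc l -> i
    i, l = s.stack[-1], s.stack[-2]
    return State(s.stack[:-1], s.j, s.arcs + ((l, i),),
                 s.logprob + math.log(p_tr_re) + math.log(p_dir_ra))

# Example: a few transitions with made-up probabilities in place of classifier outputs.
s0 = State(stack=(), j=0)
s1 = shift(s0, p_tr_sh=0.9, p_dir_la := None or 0.05) if False else shift(s0, 0.9, 0.05)  # push w_0, generate w_1
s2 = shift(s1, 0.6, 0.02)          # push w_1, generate w_2
s3 = left_arc(s2, 0.7, 0.8)        # arc w_2 -> w_1 (head j, dependent i)

Summing or maximizing State.logprob over alternative transition sequences is what distinguishes the multipath and single-path settings contrasted later in the paper.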
3.2 Parser Training

The parser is trained on English and Chinese treebanks annotated with Stanford Dependencies (De Marneffe and Manning, 2008). The English model is trained on the Penn Treebank (PTB) version 3, with the standard split of training on sections 02-21, development on section 22, and testing on section 23. The Chinese model is trained on Chinese Treebank version 7, with the standard split of files 0-2082 for training, development on files 2083-2242, and testing on files 2243-2447. This results in 39,832 training sentences for English and 19,457 training sentences for Chinese.

Figure 2: Pipeline Overview: Pretokenized text from the treebank corpora is pre-processed by adding a leading space to each token. The pre-trained BLOOM model is fine-tuned using the Pfeiffer Adapter, and an additional single-layer feedforward neural network completes the encoding step. The dependency parser takes tuples of sequence encodings (stack (σ), buffer (β)) as input, and its system of classifiers is trained from random weights. Note that the word prediction classifier predicts over the size of the training data vocabulary + Berkeley unknown tokens instead of the BLOOM vocabulary. The final dependency tree output is evaluated in Table 2. After training, we use the parser's transition probabilities (see Table 1) to calculate syntactic surprisal (Section 3.5), which is added to other regressors (see Section 4.4) in a General Linear Model (GLM) which predicts the BOLD signal (Section 5).

The parser's next word prediction classifier predicts over a limited vocabulary, where words seen only once in the training data are replaced with unknown word tokens according to the rules in the Berkeley Parser. This results in 23,830 word types for English and 19,671 for Chinese. The LLM encoder, however, uses the BLOOM sub-word tokenizer and the BLOOM vocabulary, which is of size 250,680 (this vocabulary size mismatch is one reason we did not directly assign next-word probabilities using the LLM).

Corpus  Model                             Dev LAS  Dev UAS  Test LAS  Test UAS
PTB-3   Buys and Blunsom (2018)           88.66    91.19    88.54     91.01
PTB-3   English BLOOM (brain analysis)    90.26    92.71    90.32     92.62
CTB-7   Chinese BLOOM (brain analysis)    77.07    83.65    74.71     81.66

Table 2: Labeled attachment score (LAS) and unlabeled attachment score (UAS) for the English PTB corpus and Chinese CTB corpus.

Token sequences are encoded by the LLM, selecting the encoding of the right-most sub-word of each word for parser predictions. This allows sequence encodings to be as detailed as possible while still limiting the vocabulary during training. It should be noted, though, that using this encoding for words that are unknown to the parser's next word classifier prevents the parser from being strictly generative. More information on strictly generative models can be found in the appendix.

Figure 3: Labeled Accuracy Score (LAS), Unlabeled Accuracy Score (UAS), and Label Accuracy for our parser trained and evaluated (dev set) on the Universal Dependencies ParTUT corpus. Each model uses a different BLOOM layer as a sequence encoder.

While training, sentences are shuffled at each epoch, and within batches (batch size = 16), which are created from sentences of the same length. The BLOOM-560 model has 24 hidden layers; after encoding, we select the 17th layer as the representation to input into the model, which was decided based on optimization of development set accuracy on the ParTUT Universal Dependencies corpus (De Marneffe et al., 2021).
We use this smaller corpus (2,090 sentences) in order to conserve resources when investigating the utility of BLOOM representations for dependency parsing. As seen in Figure 3, BLOOM layers 10 to 17 have similar accuracy levels. It is possible that more extensive optimization of this layer choice might provide small improvements in accuracy beyond that reported in Table 2. Further training hyperparameters can be found in the Appendix.

3.3 Parser Accuracy

Cognitively plausible parsers must be incremental, in the sense that they must not use information about as-yet-unprocessed words occurring later in the sentence. We compare the accuracy of the incremental generative parser proposed here to that of Buys and Blunsom (2018), which employs the same transition system with encodings based on an LSTM with random initial weights. Parser accuracy is evaluated on the development set after each epoch, and the highest scoring epoch is reported in Table 2. However, the cognitive modeling results reported in Section 5 rely on earlier training checkpoints. The epoch is chosen by maximizing r2 correlation with the fMRI data. The labeled attachment score and unlabeled attachment score (LAS/UAS) for the epochs chosen are 89.75/92.37 for English, and 77.06/83.59 for Chinese. We find that the objective of obtaining minimal loss does not correspond to optimal correlation with fMRI activity.

3.4 Multiple paths

Multipath parsing, as a psycholinguistic claim about human sentence comprehension, is simply the idea that ambiguity resolution pursues more than one alternative at the same time. This contrasts with Frazier and Fodor's (1978) conception, in which a single parse is developed along one path. We consider here a version of the multipath idea that Kurtzman (1984) calls "strong" parallelism, in which paths can persist from word to word. Gorrell (1987) emphasizes that multiple alternatives are not equally available (page 84). This is naturally formalized by ranking the paths. Such ranking can be implemented with beam search, at the heart of parsing systems like Roark (2001), which is widely applied in computational psycholinguistics and neurolinguistics.

3.5 Complexity Metric

In order to correlate parser predictions with brain activity, and in particular to take multiple parser paths into consideration, those predictions must somehow be summarized and quantified. One way to do that is via the surprisal complexity metric, revived by Hale (2001) as a method of predicting sentence-processing difficulty word-by-word. The basic formula is the logarithm of the reciprocal of a probability:

\log_2\left(\frac{1}{p(y)}\right) = -\log_2(p(y))    (1)

In Equation 1, p(y) denotes the probability of an arbitrary outcome y. In incremental sentence comprehension the relevant outcomes are words. The surprisal of a word at position w_i, which follows a string ending in w_{i-1}, is a ratio of the probability at the current word to the previous probability:

-\log_2\left(\frac{p(w_i, w_{i-1}, \ldots, w_1)}{p(w_{i-1}, \ldots, w_1)}\right)    (2)

3.5.1 Syntactic Surprisal

Word probabilities, denoted p_gen in Table 1, are affected by nonsyntactic factors. As Figure 4 shows, these nonsyntactic factors can overshadow the difference between syntactic alternatives. For this reason, we follow Roark et al. (2009) in decomposing surprisal into lexical and syntactic expectations. Discarding lexical expectations, we focus exclusively on differences among competing syntactic analyses by adopting their syntactic surprisal measure, given below as Equation 3.
Syntactic surprisal does not consider word probability at the current time step (i), but the complete path probability is used in the denominator (i-1) to ensure that the correct path is selected. In addition, we limit the sum to the top k transition sequences. Here, j indexes discrete ranks of a ranked parallel parser, and ta(i) refers to all the parser transitions in the path that ends at generation of token w_i. At each time step i, ta(i) includes one shift transition and zero to many possible reduce transitions.

Figure 4: A sentence from The Little Prince, showing incremental surprisal from joint probability decomposed into lexical (red) and syntactic (blue) surprisal.

\mathrm{SynS}_k(w_i) = -\log_2 \frac{\sum_{j=1}^{k} p\left(ta(i) \ldots ta(1) \mid w_{i-1} \ldots w_1\right)_j}{\sum_{j=1}^{k} p\left(ta(i-1) \ldots ta(1) \mid w_{i-1} \ldots w_1\right)_j}    (3)

In Equation 3, syntactic surprisal is parameterized by a maximum number of allowable analysis paths, k. Figure 5 elaborates an example calculation using this definition.

Figure 5: A sentence fragment from The Little Prince showing shift-reduce parser actions with associated probabilities. Word generation probabilities are not shown here, but would be included in the full path calculation. In this example, the syntactic surprisal at time t = 3 is -log2(0.55/1) = 0.86 for k = 1, and for k = 2 the top two paths are added: -log2((0.55+0.45)/1) = 0.

The limitation to k paths in Equation 3 follows Boston et al. (2011). Unlike Boston et al., we retain all analyses and simply choose the top k at each successive word. This allows paths that would have been lost forever in true beam search to later rejoin the beam. Such restoration has essentially the same effect as backtracking, in cases where it would lead to a higher-scoring analysis. "Reanalysis" of this sort could be helpful in modeling the comprehension of garden-path sentences (see e.g. Fodor and Ferreira, 1998).

3.5.2 Why Five Paths?

Addressing the single-path vs multipath question entails choosing some number of paths, k, to represent multipath parsing in general. Empirical literature such as Gibson and Pearlmutter (2000) has been primarily concerned with the distinction between 1 and 2 paths. By contrast, accurate NLP systems sometimes consider tens or even hundreds of paths.³ This mismatch calls for a setting of k that is small, but clearly different from 1. The syntactic surprisal metric itself also militates against settings of k that are too large. With this metric, more ranks quickly eat up the available probability mass, up to 1.0, when considering all possible paths. In this limiting case syntactic surprisal would trivially equal zero for all tokens (see Figure 6). For these reasons, we selected five paths to contrast with single-path parsing.⁴

³ Many of these paths lead to similar syntactic structures. Grouping them in a cognitively realistic way brings up foundational questions regarding the mental representation of grammatical relations (see Bresnan and Kaplan 1982, Sturt 1996, and §4.4 of Brasoveanu and Dotlačil 2020, among others). These questions, along with the proper formulation of reanalysis or repair operations (e.g. Lewis, 1992; Buch-Kromann, 2004), are important directions for future work.

⁴ The goal of this study is to adjudicate between single-path and multipath parsing. This question has remained open within the literature on human sentence processing for quite some time. An alternative approach seeks to maximize correlations with fMRI data by e.g. searching for the optimal number of paths. This suggests a natural follow-up.

4 fMRI Methods

4.1 Participants

As detailed in Li et al. (2022), the English dataset includes 49 participants (30 female, mean age = 21.3, range = 18-37), and the Chinese dataset includes 35 participants (15 female, mean age = 19.9, range = 18-24).

4.2 Data Acquisition

The English audio stimulus is an English translation of The Little Prince, read by Karen Savage. The Chinese audio stimulus is a Chinese translation of The Little Prince, read by a professional female Chinese broadcaster. The English and Chinese audiobooks are 94 and 99 minutes in length, respectively. The presentations were divided into nine sections, each lasting around ten minutes. Participants listened passively to the nine sections and completed four quiz questions after each section (36 questions in total). These questions were used to confirm participant comprehension of the story.

The English and Chinese brain imaging data were acquired with a 3T MRI GE Discovery MR750 scanner with a 32-channel head coil. Anatomical scans were acquired using a T1-weighted volumetric magnetization prepared rapid gradient-echo pulse sequence. Blood-oxygen-level-dependent (BOLD) functional scans were acquired using a multi-echo planar imaging sequence with online reconstruction (TR = 2000 ms; TEs = 12.8, 27.5, 43 ms; FA = 77°; matrix size = 72 x 72; FOV = 240.0 mm x 240.0 mm; 2x image acceleration; 33 axial slices; voxel size = 3.75 x 3.75 x 3.8 mm).

4.3 Data Preprocessing

The English and Chinese fMRI data were preprocessed using AFNI version 16 (Cox, 1996). The first 4 volumes in each run were excluded from analyses to allow for T1-equilibration effects. Multi-echo independent components analysis (ME-ICA) was used to denoise data for motion, physiology, and scanner artifacts (Kundu et al., 2012). Images were then spatially normalized to the standard space of the Montreal Neurological Institute (MNI) atlas, yielding a volumetric time series resampled at 2 mm cubic voxels.

Figure 6: Syntactic surprisal at k = 1 (blue) and k = 5 (orange) for tokens in The Little Prince.

4.4 Statistical Analysis

The goal of the analysis is to compare surprisal values from single-path and multipath parsers against the same observed fMRI timecourses. We follow Crabbé et al. (2019) in evaluating goodness of fit pairwise using cross-validated coefficient of determination (r2) maps. For each subject individually, the fMRI BOLD signal is modeled by a General Linear Model (GLM) at each voxel. The following five regressors are included in the GLM: word rate, fundamental frequency (f0), word frequency, root mean square intensity (RMS), and syntactic surprisal of the top k paths. Word rate is a timing function marking the offset of each spoken word; f0 is the fundamental frequency, or pitch, of the audio; word frequency is obtained from words in a movie subtitles database (Brysbaert and New, 2009); and RMS is an indicator of audio intensity, taken every 10 ms. These predictors are known to affect speech comprehension, and are included as control variables to help isolate variance within the BOLD signal that is specific to syntactic processing. Syntactic surprisal for each word was also aligned to the offset of each word in the audiobook. All predictors were convolved using SPM's canonical Hemodynamic Response Function (Friston et al., 2007).
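As a worked illustration of Equation 3 and of how the per-word values become a GLM regressor, the sketch below computes SynS_k from ranked path probabilities (reproducing the Figure 5 numbers) and convolves an offset-aligned impulse series with a simple double-gamma HRF. The function names, HRF parameters, and timing values are assumptions for illustration; the analysis itself used SPM's canonical HRF within the cross-validated GLM described here.

import numpy as np
from scipy.stats import gamma

def syntactic_surprisal(prev_path_probs, curr_path_probs, k):
    """Equation 3: -log2 of the ratio of summed top-k path probabilities
    after generating w_i versus after generating w_{i-1}."""
    num = sum(sorted(curr_path_probs, reverse=True)[:k])
    den = sum(sorted(prev_path_probs, reverse=True)[:k])
    return -np.log2(num / den)

# Figure 5 example: one path (probability 1.0) before the word, two competing
# continuations (0.55 and 0.45) afterwards.
print(syntactic_surprisal([1.0], [0.55, 0.45], k=1))   # about 0.86
print(syntactic_surprisal([1.0], [0.55, 0.45], k=2))   # 0.0

def double_gamma_hrf(t, peak=6.0, undershoot=16.0, ratio=1.0 / 6.0):
    """An assumed double-gamma HRF shape (the study used SPM's canonical HRF)."""
    return gamma.pdf(t, peak) - ratio * gamma.pdf(t, undershoot)

def surprisal_regressor(word_offsets, values, tr=2.0, n_scans=300, dt=0.1):
    """Place each word's surprisal at its audio offset, convolve with the HRF,
    and resample at the scanner TR (2000 ms in this dataset)."""
    t = np.arange(0.0, n_scans * tr, dt)
    impulses = np.zeros_like(t)
    idx = np.clip(np.searchsorted(t, word_offsets), 0, len(t) - 1)
    np.add.at(impulses, idx, values)
    kernel = double_gamma_hrf(np.arange(0.0, 32.0, dt))
    convolved = np.convolve(impulses, kernel)[: len(t)]
    step = int(round(tr / dt))
    return convolved[::step]               # one value per fMRI volume

offsets = np.array([0.35, 0.71, 1.02])     # word offsets in seconds (illustrative)
surps = np.array([0.86, 0.0, 1.40])
regressor = surprisal_regressor(offsets, surps)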
For each participant, we compute how much the inclusion of the syntactic surprisal regressor increases the cross-validated r2 over the baseline model, which includes only control variables. We repeat this process for syntactic surprisal at different levels of k separately in order to keep the number of parameters in each GLM constant. Therefore, the r2 increase scores represent the variance explained in each voxel by the addition of syntactic surprisal to the model as a predictor. To compare models across two different levels of k and analyze their ability to explain the fMRI BOLD signal, we performed a paired t-test on individual r2 increase maps to obtain z-maps. These z-maps show where syntactic surprisal from a model with a given number of paths explains the signal significantly better than does a model that incorporates a different number of paths (see Figures 7 and 8).

5 Results

Both English and Chinese paired t-tests on r2 increase maps show differences in the superior temporal gyrus (STG). The Chinese results further show differences in middle temporal gyrus (MTG), while the English results show differences in the parietal lobe, including bilateral angular gyrus. The full list of statistically significant clusters corresponding to Figures 7 and 8 can be found in the Appendix. Although the paired t-tests were two-tailed, all of the resulting significant clusters show greater r2 increase for syntactic surprisal at k = 5 than at k = 1. The r2 increase maps for each model individually can be found in the Appendix.

Figure 7: English brain z-maps showing the significant clusters (p < .001 uncorrected; cluster threshold = 15 voxels) for the model comparison between syntactic surprisal at k = 1 vs k = 5. All significant clusters show greater r2 increase for surprisal at k = 5 than for surprisal at k = 1.

Figure 8: Chinese brain z-maps showing the significant clusters (p < .001 uncorrected; cluster threshold = 15 voxels) for the model comparison between syntactic surprisal at k = 1 vs k = 5. All significant clusters show greater r2 increase for surprisal at k = 5 than for surprisal at k = 1.

6 Discussion

Multipath parsing effort, outside of the 1-best analysis, localizes to superior temporal regions of the brain bilaterally in both English and Chinese. This finding converges with Crabbé et al. (2019). Crabbé and colleagues parse phrase structure, rather than dependencies, employ a different complexity metric, and use a very large beam (400). Despite all these differences, their results implicate roughly the same brain regions.

These results also cohere with proposals regarding the large-scale organization of language processing in the brain. On the Neuroanatomical Pathway model, for instance, "most basic syntactic processes" are handled by a ventral pathway that passes through STG regions identified here (Friederici, 2015). Matchin and Hickok (2020) similarly point to the ventral pathway as essential for "basic" syntactic processes. This heuristic notion of basic syntactic processing aligns well with Stanford Dependencies' avowed goal of providing a simple, surface-oriented notation for local grammatical relations.

The present results do not identify a significant difference in r2 increase in Broca's area (left IFG). Some increase in r2 can be seen, however, in middle and inferior frontal gyri in the individual model maps shown in the Appendix. The absence of a significant difference, however, makes sense in light of two considerations.
The first is the idea that left IFG subserves reanalysis in the face of misinterpretation (Novick et al., 2005). The second is the literary style of the The Little Prince. The stimulus text is dissimilar to the sorts of garden-path materials psycholinguists have used (see e.g. Sedivy 2019, chapter 9 or Fernández and Cairns 2010, chapter 7). It seems likely that any garden-pathing was mild, remaining below the level of conscious awareness. This distinction between misinterpretations that rise to the level of conscious awareness and those that do not could help reconcile these results with earlier studies showing IFG activation in response to garden path stimuli (e.g. Mason et al., 2003; Rodd et al., 2010). For instance, Jäger et al. (2015) suggests that Chinese relative clauses involve as many as four temporary ambiguities. To our knowledge none of these reach the level of conscious awareness in everyday comprehension. Yet in a naturalistic fMRI study, Dunagan et al. (2022) observe activation in anterior and middle STG that is specific to Chinese objectextracted relative clauses. This contrast was not reliable in a statistical comparison between English and Chinese. Further work is needed to chart the space between unproblematic ambiguities (Lewis, 1993, §2.4.2) and conscious misinterpretations engendered by syntactic ambiguity. These STG results include activation in the primary auditory cortex, which could suggest, in line with the sensory hypothesis (Dikker et al., 2009), that expectations based on earlier syntactic processing first impact regions involved in low-level sensory processing. This possibility could be examined in a followup fMRI study with written or signed materials, along the lines of Henderson et al. (2016). 7 Conclusion and Future Work The main conclusion is that human parsing is multipath. This follows from observing greater r2 increase for multipath surprisal than single-path surprisal in fMRI data. Even with a very bottom-up strategy, such as the arc-hybrid system used here, it seems that more than one path must be pursued in order to best-align with humans’ word-by-word effort profile. This conclusion is consistent with disjunctive representations of choices such as PP attachment (Kitaev et al., 2022, §5.5). The multipath interpretation that we offer here can be confirmed or refuted with other linked linguistic and neuro-cognitive databases – for in12222 stance, in different genres or languages. In addition, using brain data with a higher temporal resolution, such as MEG, may provide a benefit given the temporary nature of the syntactic ambiguities included in our model. Although we take the primary finding to be one of commonality across STG regions in both languages, English listeners did uniquely show an additional effect of multipath parsing in parietal regions. Activation in this area has been correlated with measures of human memory such as digit span (Meyer et al., 2012). Individuals’ memory span may modulate the number of paths that they pursue during comprehension (Vos et al., 2001; Prat and Just, 2010). Future work should address the interaction between disambiguation and memory (e.g. Campanelli et al., 2018). Limitations The goal of correlating parser states with fMRI data is limited by the particularities of the parsing system – namely the arc-hybrid transition system and the limited size and content of the training data (which is dissimilar in genre to the audiobook text we study). 
In addition, the English results in particular are the result of careful model selection to limit “training away” the syntactic ambiguity we would like to measure. This may indicate that more formalized external limitations should be applied to modern high-performing parsers should they be used to model human sentence processing. Ethics Language models pose risks when used outside of their intended scope. We use BLOOM, which is publicly available under the Responsible AI License (RAIL) (Le Scao et al., 2022a). Our scientific enquiry falls within the intended use of public research on LLMs. We also use a publicly available fMRI dataset (Li et al., 2022), which is available under a CCO license. This dataset was anonymized to remove identifying facial features before publication. Acknowledgements This material is based upon work supported by the National Science Foundation under Grant Number 1903783. We thank Laura Rimell and Michael Covington for valuable discussion, along with Christophe Pallier for developing the crossvalidated r2 code used here. We are grateful to many collaborators for assistance with the fMRI data collection. Among them are Shohini Bhattasali, Jixing Li, Wen-Ming Luh and Nathan Spreng. References Ina Bornkessel-Schlesewsky and Matthias Schlesewsky. 2015. The Argument Dependency Model. In (Hickok et al., 2015), chapter 30. Marisa Ferrara Boston, John T Hale, Shravan Vasishth, and Reinhold Kliegl. 2011. Parallel processing and sentence comprehension difficulty. Language and Cognitive Processes, 26(3):301–349. Adrian Brasoveanu and Jakub Dotlaˇcil. 2020. Computational Cognitive Modeling and Linguistic Theory. Language, Cognition, and Mind. Springer. Jonathan R. Brennan, Chris Dyer, Adhiguna Kuncoro, and John T. Hale. 2020. Localizing syntactic predictions using recurrent neural network grammars. Neuropsychologia, 146:1074–1079. Jonathan R. Brennan, Edward P. Stabler, Sarah E Van Wagenen, Wen-Ming Luh, and John T. Hale. 2016. Abstract linguistic structure correlates with temporal activity during naturalistic comprehension. Brain Lang., 157-158:81–94. Joan Bresnan and Ronald M. Kaplan. 1982. Introduction: Grammars as mental representations of language. In Joan Bresnan, editor, The Mental Representation of Grammatical Relations, pages xvii,lii. MIT Press, Cambridge, MA. Marc Brysbaert and Boris New. 2009. Moving beyond Kuˇcera and Francis: A critical evaluation of current word frequency norms and the introduction of a new and improved word frequency measure for American English. Behavior research methods, 41(4):977–990. Matthias Buch-Kromann. 2004. Optimality parsing and local cost functions in Discontinuous Grammar. Electronic Notes in Theoretical Computer Science, 53:163–179. Proceedings of the joint meeting of the 6th Conference on Formal Grammar and the 7th Conference on Mathematics of Language. Jan Buys and Phil Blunsom. 2018. Neural syntactic generative models with exact marginalization. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 942–952. Luca Campanelli, Julie A. Van Dyke, and Klara Marton. 2018. The modulatory effect of expectations on memory retrieval during sentence comprehension. In Timothy T. Rogers, Marina Rau, Xiaojin Zhu, and Charles W. Kalish, editors, Proceedings of the 40th Annual Conference of the Cognitive Science Society, pages 1434–1439. Cognitive Science Society, Austin, Texas. 
12223 Charlotte Caucheteux and Jean-Rémi King. 2022. Brains and algorithms partially converge in natural language processing. Nature Communications Biology, 5(1):134. Robert W Cox. 1996. Afni: software for analysis and visualization of functional magnetic resonance neuroimages. Computers and Biomedical research, 29(3):162–173. Benoit Crabbé, Murielle Fabre, and Christophe Pallier. 2019. Variable beam search for generative neural parsing and its relevance for the analysis of neuroimaging signal. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1150–1160, Hong Kong, China. Association for Computational Linguistics. Marie-Catherine De Marneffe and Christopher D Manning. 2008. The stanford typed dependencies representation. In Coling 2008: proceedings of the workshop on cross-framework and cross-domain parser evaluation, pages 1–8. Marie-Catherine De Marneffe, Christopher D Manning, Joakim Nivre, and Daniel Zeman. 2021. Universal dependencies. Computational linguistics, 47(2):255– 308. Suzanne Dikker, Hugh Rabagliati, and Liina Pylkkänen. 2009. Sensitivity to syntax in visual cortex. Cognition, 110(3):293–321. Donald Dunagan, Miloš Stanojevi´c, Maximin Coavoux, Shulin Zhang, Shohini Bhattasali, Jixing Li, Jonathan Brennan, and John Hale. 2022. Neural correlates of object-extracted relative clause processing across English and Chinese. Neurobiology of Language, pages 1–43. Tiwalayo Eisape, Vineet Gangireddy, Roger Levy, and Yoon Kim. 2022. Probing for incremental parse states in autoregressive language models. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 2801–2813, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Eva M. Fernández and Helen Smith Cairns. 2010. Fundamentals of Psycholinguistics. Wiley-Blackwell. Janet Dean Fodor and Fernanda Ferreira, editors. 1998. Reanalysis in sentence processing, volume 21 of Studies in Theoretical Psycholinguistics. Kluwer, Dordrecht. Lyn Frazier and Janet Dean Fodor. 1978. The sausage machine: a new two-stage parsing model. Cognition, 6:291–325. Angela D. Friederici. 2015. The neuroanatomical pathway model of language. In (Hickok et al., 2015), chapter 29. Karl J Friston, John Ashburner, Stefan J Kiebel, T Nichols, and William Penny. 2007. Statistical Parametric Mapping. Academic Press. Edward Gibson. 1991. A Computational Theory of Human Linguistic Processing: Memory Limitations and Processing Breakdown. Ph.D. thesis, Carnegie Mellon University. Edward Gibson and Neal J. Pearlmutter. 2000. Distinguishing serial and parallel parsing. Journal of Psycholinguistic Research, 29(2). Paul G. Gorrell. 1987. Studies of human syntactic processing: Ranked-parallel versus serial models. Ph.D. thesis, University of Connecticut. John Hale. 2016. Information-theoretical complexity metrics. Language and Linguistics Compass, 10(9):397–412. John Hale, Chris Dyer, Adhiguna Kuncoro, and Jonathan Brennan. 2018. Finding syntax in human encephalography with beam search. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2727–2736, Melbourne, Australia. Association for Computational Linguistics. John T Hale. 2001. A probabilistic Earley parser as a psycholinguistic model. In Second meeting of the North American chapter of the association for computational linguistics. 
John T Hale, Luca Campanelli, Jixing Li, Shohini Bhattasali, Christophe Pallier, and Jonathan R Brennan. 2022. Neurocomputational models of language processing. Annual Review of Linguistics, 8:427–446. John M Henderson, Wonil Choi, Matthew W Lowder, and Fernanda Ferreira. 2016. Language structure in the brain: A fixation-related fMRI study of syntactic surprisal in reading. NeuroImage, 132:293–300. Gregory Hickok, Steven L. Small, and Steven L. Small. 2015. Neurobiology of Language. Elsevier, San Diego. Jens-Max Hopf, Markus Bader, Michael Meng, and Josef Bayer. 2003. Is human sentence parsing serial or parallel?: Evidence from event-related brain potentials. Cognitive Brain Research, 15(2):165–177. Lena Jäger, Zhong Chen, Qiang Li, Chien-Jer Charles Lin, and Shravan Vasishth. 2015. The subjectrelative advantage in Chinese: Evidence for expectation-based processing. Journal of Memory and Language, 79:97–120. Daniel Jurafsky. 1996. A probabilistic model of lexical and syntactic access and disambiguation. Cognitive science, 20(2):137–194. Nikita Kitaev, Thomas Lu, and Dan Klein. 2022. Learned incremental representations for parsing. In 12224 Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3086–3095, Dublin, Ireland. Association for Computational Linguistics. Marco Kuhlmann, Carlos Gómez-Rodríguez, and Giorgio Satta. 2011. Dynamic programming algorithms for transition -based dependency parsers. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 673–682, Portland, Oregon, USA. Association for Computational Linguistics. Prantik Kundu, Souheil J Inati, Jennifer W Evans, WenMing Luh, and Peter A Bandettini. 2012. Differentiating BOLD and non-BOLD signals in fMRI time series using multi-echo EPI. Neuroimage, 60(3):1759– 1770. Howard S. Kurtzman. 1984. Ambiguity resolution in the human syntactic parser: An experimental study. In 10th International Conference on Computational Linguistics and 22nd Annual Meeting of the Association for Computational Linguistics, pages 481–485, Stanford, California, USA. Association for Computational Linguistics. Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ili´c, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. 2022a. Bloom: A 176bparameter open-access multilingual language model. arXiv preprint 2211.05100. Teven Le Scao, Thomas Wang, Daniel Hesslow, Stas Bekman, M Saiful Bari, Stella Biderman, Hady Elsahar, Niklas Muennighoff, Jason Phang, Ofir Press, Colin Raffel, Victor Sanh, Sheng Shen, Lintang Sutawika, Jaesung Tae, Zheng Xin Yong, Julien Launay, and Iz Beltagy. 2022b. What language model to train if you have one million GPU hours? In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 765–782, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Richard L. Lewis. 1992. Recent Developments in the NL-Soar Garden Path Theory. Technical Report CMU-CS-92-141, Carnegie Mellon University. Richard L. Lewis. 1993. An Architecturally-based Theory of Human Sentence Comprehension. Ph.D. thesis, Carnegie Mellon University, Pittsburgh, PA. Richard L. Lewis. 2000a. Falsifying serial and parallel parsing models: empirical conundrums and an overlooked paradigm. Journal of Psycholinguistic Research, 29(2). Richard L. Lewis. 2000b. 
Specifying architectures for language processing: Process, control, and memory in parsing and interpretation. In Matthew W. Crocker, Martin Pickering, and Charles Clifton, Jr., editors, Architectures and mechanisms for language processing. Cambridge University Press. Jixing Li, Shohini Bhattasali, Shulin Zhang, Berta Franzluebbers, Wen-Ming Luh, R Nathan Spreng, Jonathan R Brennan, Yiming Yang, Christophe Pallier, and John Hale. 2022. Le Petit Prince multilingual naturalistic fMRI corpus. Scientific data, 9(1):530. Jixing Li and John Hale. 2019. Grammatical predictors for fMRI timecourses. In Robert C. Berwick and Edward P. Stabler, editors, Minimalist Parsing. Oxford University Press. Alessandro Lopopolo, Antal van den Bosch, KarlMagnus Petersson, and Roel M. Willems. 2021. Distinguishing syntactic operations in the brain: Dependency and phrase-structure parsing. Neurobiology of Language, 2(1):152–175. Mitchell P. Marcus. 1980. A theory of syntactic recognition for natural language. MIT Press. David Marr. 1982. Vision: A computational investigation into the human representation and processing of visual information. W.H. Freeman and Company. Robert A Mason, Marcel Adam Just, Timothy A Keller, and Patricia A Carpenter. 2003. Ambiguity in the brain: what brain imaging reveals about the processing of syntactically ambiguous sentences. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29(6):1319. William Matchin and Gregory Hickok. 2020. The cortical organization of syntax. Cerebral Cortex, 30(3):1481–1498. Lars Meyer, Jonas Obleser, Alfred Anwander, and Angela D Friederici. 2012. Linking ordering in broca’s area to storage in left temporo-parietal regions: the case of sentence processing. Neuroimage, 62(3):1987–1998. Joakim Nivre. 2004. Incrementality in deterministic dependency parsing. In Proceedings of the workshop on incremental parsing: Bringing engineering and cognition together, pages 50–57. Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Computational Linguistics, 34(4):513–553. Jared M. Novick, John C. Trueswell, and Sharon L. Thompson-Schill. 2005. Cognitive control and parsing: Reexamining the role of Broca’s area in sentence comprehension. Cognitive, Affective & Behavioral Neuroscience, 5(3):263–281. Byung-Doh Oh and William Schuler. 2023. Why does surprisal from larger transformer-based language models provide a poorer fit to human reading times? Transactions of the Association for Computational Linguistics, 11:336–350. Subba Reddy Oota, Mounika Marreddy, Manish Gupta, and Raju Bapi. 2023. How does the brain process syntactic structure while listening? In Findings of 12225 the Association for Computational Linguistics: ACL 2023, pages 6624–6647, Toronto, Canada. Association for Computational Linguistics. Alexandre Pasquiou, Yair Lakretz, Bertrand Thirion, and Christophe Pallier. 2023. Information-restricted neural language models reveal different brain regions’ sensitivity to semantics, syntax and context. arXiv preprint arXiv:2302.14389. Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vuli´c, Sebastian Ruder, Kyunghyun Cho, and Iryna Gurevych. 2020. Adapterhub: A framework for adapting transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020): Systems Demonstrations, pages 46–54, Online. Association for Computational Linguistics. Chantel S. Prat and Marcel Adam Just. 2010. 
Exploring the Neural Dynamics Underpinning Individual Differences in Sentence Comprehension. Cerebral Cortex, 21(8):1747–1760. Aniketh Janardhan Reddy and Leila Wehbe. 2021. Can fmri reveal the representation of syntactic structure in the brain? In Advances in Neural Information Processing Systems, volume 34, pages 9843–9856. Curran Associates, Inc. Brian Roark. 2001. Probabilistic top-down parsing and language modeling. Computational Linguistics, 27(2):249–276. Brian Roark, Asaf Bachrach, Carlos Cardenas, and Christophe Pallier. 2009. Deriving lexical and syntactic expectation-based measures for psycholinguistic modeling via incremental top-down parsing. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 324–333, Singapore. Association for Computational Linguistics. Jennifer M Rodd, Olivia A Longe, Billi Randall, and Lorraine K Tyler. 2010. The functional organisation of the fronto-temporal language system: evidence from syntactic and semantic ambiguity. Neuropsychologia, 48(5):1324–1335. Martin Schrimpf, Idan Asher Blank, Greta Tuckute, Carina Kauf, Eghbal A. Hosseini, Nancy Kanwisher, Joshua B. Tenenbaum, and Evelina Fedorenko. 2021. The neural architecture of language: Integrative modeling converges on predictive processing. Proceedings of the National Academy of Sciences, 118(45):e2105646118. Julie Sedivy. 2019. Language in Mind: An Introduction to Psycholinguistics, second edition. Oxford University Press. Cory Shain, Idan Asher Blank, Marten van Schijndel, William Schuler, and Evelina Fedorenko. 2020. fMRI reveals language-specific predictive coding during naturalistic sentence comprehension. Neuropsychologia, 138:107307. Patrick Sturt. 1996. Monotonic Syntactic Processing: A Cross-linguistic Study of Attachment and Reanalysis. Language and Cognitive Processes, 11(5):449–494. Sandra H. Vos, Thomas C. Gunter, Herbert Schriefers, and Angela D. Friederici. 2001. Syntactic parsing and working memory: The effects of syntactic complexity, reading span, and concurrent load. Language and Cognitive Processes, 16(1):65–103. 12226 A Appendix A.1 Generative Models As noted in Section 3.2, we use BLOOM’s tokenizer to encode all tokens. Since some of these tokens are not found in the training data, and thus are unknown to the parser’s next word prediction classifier, this prevents our parser from being strictly generative. As a comparison, we train additional models which are truly generative by replacing tokens not seen in the training data with unknown tokens in the encoder input, in the same way as the word prediction classes are defined. We report the accuracy for this generative approach alongside the previously defined full input model in Table 3. This distinction is also explained in Figure 9 for an example prefix string. Figure 9: Parser input is pre-processed according to the given method: Full Input indicates no pre-processing apart from adding a leading space to all pretokenized text from the corpus. Generative indicates tokens are preprocessed by replacing tokens not seen more than once in the training data with unknown tokens according to the rules of the Berkeley Parser. The BLOOM model and the system of classifiers used to output probabilities is the same for both models. Note that the word prediction classifier predicts over the size of the training data vocabulary + Berkeley unknown tokens for both methods. 
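To make the "Generative" preprocessing described above concrete, the sketch below shows one way rare tokens can be mapped to unknown-token classes before they reach the word-prediction classifier. The signature rules here are simplified illustrations rather than the exact Berkeley Parser rules; the threshold of "not seen more than once" follows the Figure 9 description.

```python
from collections import Counter

def build_vocab(train_tokens, min_count=2):
    """Tokens seen fewer than `min_count` times are excluded from the vocabulary."""
    counts = Counter(train_tokens)
    return {tok for tok, c in counts.items() if c >= min_count}

def unk_signature(token):
    """Map an out-of-vocabulary token to a coarse unknown-token class.

    This is a simplified stand-in for the Berkeley Parser's signature rules,
    which additionally inspect suffixes, digits, hyphens, etc.
    """
    if token[:1].isupper():
        return "<UNK-CAP>"
    if any(ch.isdigit() for ch in token):
        return "<UNK-NUM>"
    return "<UNK>"

def preprocess(tokens, vocab):
    """Replace rare/unseen tokens so the prefix matches the classifier's label space."""
    return [tok if tok in vocab else unk_signature(tok) for tok in tokens]

# Example: the "Generative" input replaces rare tokens; the "Full Input" setting would not.
train = ["the", "dog", "barks", "the", "cat", "Rex", "runs"]
vocab = build_vocab(train)  # only tokens seen more than once survive
print(preprocess(["the", "dog", "Rex", "barks", "42"], vocab))
# ['the', '<UNK>', '<UNK-CAP>', '<UNK>', '<UNK-NUM>']
```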
Corpus Model Dev Test LAS UAS LAS UAS PTB-3 Buys and Blunsom (2018) 88.66 91.19 88.54 91.01 PTB-3 English BLOOM Generative 89.27 92.00 90.53 92.68 PTB-3 English BLOOM Full Input (brain analysis) 90.26 92.71 90.32 92.62 CTB-7 Chinese BLOOM Generative 73.12 80.60 74.47 81.60 CTB-7 Chinese BLOOM Full Input (brain analysis) 77.07 83.65 74.71 81.66 Table 3: Labeled attachment score (LAS) and unlabeled attachment score (UAS) for the English PTB corpus and Chinese CTB corpus A.2 Parser Training The output representation from the BLOOM model has a dropout of 0.5 applied, and is then fed into a single layer feedforward neural network. During training, gradient norms are clipped to 5.0, and the initial learning rate is 1.0, with a decay factor of 1.7 applied every epoch after the initial 6 epochs. English and Chinese models train for approximately 5.5 and 4.5 minutes, respectively, per epoch on an A100 GPU. 12227 A.3 Intermediate Results Figure 10: English Brain z-maps showing the r2 increase for the model including syntactic surprisal at k = 5 compared to the model including only the following regressors: word rate, fundamental frequency (f0), word frequency, and root mean square intensity (RMS). Figure 11: English Brain z-maps showing the r2 increase for the model including syntactic surprisal at k = 1 compared to the model including only the following regressors: word rate, fundamental frequency (f0), word frequency, and root mean square intensity (RMS). Figure 12: Chinese Brain z-maps showing the r2 increase for the model including syntactic surprisal at k = 5 compared to the model including only the following regressors: word rate, fundamental frequency (f0), word frequency, and root mean square intensity (RMS). Figure 13: Chinese Brain z-maps showing the r2 increase for the model including syntactic surprisal at k = 1 compared to the model including only the following regressors: word rate, fundamental frequency (f0), word frequency, and root mean square intensity (RMS). 12228 A.4 Results Tables 4 and 5 show the full list of significant clusters (p < .001 uncorrected; cluster threshold = 15 voxels) for the model comparison between syntactic surprisal at k = 1 vs k = 5. Region X Y Z Peak Stat Cluster Size (mm3) Left-SecVisual(18), Right/Left DorsalPCC(31) -12.0 -64.0 24.0 4.78 968 Right-VisMotor(7) 0.0 -64.0 48.0 4.02 856 Right-AngGyrus (39) 38.0 -54.0 38.0 4.42 800 Left-PreMot + SuppMot(6) / Left-PrimAuditory (41) -48.0 -10.0 4.0 4.68 568 Right-VentPostCing(23) 14.0 -54.0 20.0 4.57 488 Right-AngGyrus (39) 36.0 -74.0 40.0 4.27 360 Left-AngGyrus (39) -44.0 -68.0 36.0 3.77 352 Right-DorsalPCC (31) 14.0 -62.0 34.0 4.03 216 Right-DorsalPCC (31) 4.0 -50.0 44.0 3.65 192 Right-PrimAuditory (41) 48.0 -12.0 2.0 3.69 160 Right-PrimAuditory (41) 38.0 -26.0 12.0 3.39 152 Table 4: English significant clusters (p < .001 uncorrected; cluster threshold = 15 voxels) for the model comparison between syntactic surprisal at k = 1 vs k = 5 Region X Y Z Peak Stat Cluster Size (mm3) Right-MedTempGyrus (21) / Right-SupTempGyrus (22) 56.0 -36.0 4.0 4.68 1336 Left-SupTempGyrus (22) -64.0 -26.0 6.0 4.32 392 Right-PrimAuditory (41) 52.0 -14.0 6.0 4.59 384 Left-PrimAuditory(41) -56.0 -6.0 4.0 5.04 384 Left-MedTempGyrus (21) -48.0 -42.0 4.0 4.25 320 Left-SupTempGyrus (22) -58.0 -26.0 0.0 4.06 288 Table 5: Chinese significant clusters (p < .001 uncorrected; cluster threshold = 15 voxels) for the model comparison between syntactic surprisal at k = 1 vs k = 5 12229
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12230–12247 August 11-16, 2024 ©2024 Association for Computational Linguistics Search-Adaptor: Embedding Customization for Information Retrieval Jinsung Yoon, Yanfei Chen, Sercan Ö. Arık, Tomas Pfister Google Cloud AI {jinsungyoon, yanfeichen, soarik, tpfister}@google.com Abstract Embeddings extracted by pre-trained Large Language Models (LLMs) have significant potential to improve information retrieval and search. Beyond the zero-shot setup in which they are being conventionally used, being able to take advantage of the information from the relevant query-corpus paired data can further boost the LLM capabilities. In this paper, we propose a novel method, SearchAdaptor, for customizing LLMs for information retrieval in an efficient and robust way. Search-Adaptor modifies the embeddings generated by pre-trained LLMs, and can be integrated with any LLM, including those only available via prediction APIs. On multiple English, multilingual, and multimodal retrieval datasets, we show consistent and significant performance benefits for Search-Adaptor – e.g., more than 5% improvements for Google Embedding APIs in nDCG@10 averaged over 14 BEIR datasets. 1 Introduction Information retrieval is broadly considered as the task of searching for information via querying corpus database that might consist many different types of data, such as documents, webpages or logs. It has a wide range of applications across many industries, including retail, healthcare, and finance, with a significant portion of the world’s economy is built on. Particularly, language modeling is the key part of information retrieval as in most cases, query and corpus data are in text form. Large language models (LLMs) have demonstrated significant achievements for a variety of text processing tasks, including question answering, summarization, and mathematical reasoning (Brown et al., 2020; Chowdhery et al., 2022; Zhang et al., 2022a). One critical enabler for the success on these has been transforming raw text into meaningful representations that preserve semantic meanings in the representation space (Ouyang et al., 2022). For a wide range of applications, from recommendations to anomaly detection, tasks are defined as explicit operations on the learned representations. This makes the quality of the text mapping into embeddings become of paramount importance. Information retrieval systems commonly utilize the text embeddings, with relevant corpora being ranked based on the similarity between queries and corpus (Wang et al., 2022; Izacard et al., 2021a). Various LLMs have been proposed to extract embeddings from raw text, with the notable ones including the Sentence T5 (Ni et al., 2021a), OpenAI Embedding APIs (ope) and Google Embedding APIs (gcp). However, one fundamental limitation of pre-trained LLMs is that they cannot utilize retrieval-specific target task data, that are in the form of positively relevant query-corpus pairs. Even with a small amount, using such data for tuning is expected to significantly improve information retrieval capabilities. Conventional fine-tuning (Howard and Ruder, 2018) can be one straightforward way of utilizing the paired query-corpus information. However, if the number of paired samples is small, tuning all the weights of a model might yield overfitting and thus poor generalization (Lin et al., 2023), especially in the presence of distribution shifts. 
In addition, it can be costly from a computational perspective as it requires large memory. There are multiple parameter-efficient tuning methods such as prompt tuning (Lester et al., 2021; Li and Liang, 2021), LoRA (Hu et al., 2021), partial fine-tuning (Zaken et al., 2021), and various adapters (Houlsby et al., 2019; Rücklé et al., 2020). These approaches only tune a subset of the parameters of LLMs, aiming to reduce the risk of overfitting and bring computational gains. As a common bottleneck, all of these methods need full access to the LLM's parameters to tune the model, which may not be possible for LLMs with API-based inference-only access.

Figure 1: Search-Adaptor modifies the pre-trained LLM embeddings by learning from paired query-document data, yielding significantly improved retrieval performance on target tasks. Note that Search-Adaptor does not require access to the LLM weights or gradients and can even be applied to LLMs that are only accessible via prediction APIs.

In this paper, our focus is customizing LLMs to obtain superior embeddings for information retrieval, particularly from the angle of how to best take advantage of the available retrieval-specific tuning data and obtain robust improvements in a wide range of regimes, with a tuning method that is low cost and even applicable to LLMs with API-based inference-only access. Fig. 1 overviews the proposed approach, Search-Adaptor. Improving the tuning performance in this setup requires a set of innovations. We propose integrating a low-capacity adapter module (to be customized for the target dataset) on top of fixed LLMs to modify the pre-trained embeddings. For efficient tuning, we introduce a novel differentiable ranking loss that can directly utilize the information of positive query-corpus pairs. In addition, we include multiple regularizers to improve generalization in this small-data regime, where, without intervention, the pre-trained LLMs would end up with catastrophic forgetting. Enabled by this design approach, one major advantage of Search-Adaptor is that it does not require access to the parameters of the pre-trained LLMs – only the inference outputs of the LLMs are needed. Commercial embedding APIs that show state-of-the-art performance usually do not provide access to their model parameters. In such cases, Search-Adaptor can still be used to further improve those API-based embedding models, in contrast to alternative tuning methods. We demonstrate the effectiveness of Search-Adaptor across 14 BEIR datasets (Thakur et al., 2021) and 17 MIRACL multilingual datasets (Zhang et al., 2022b) with Google and OpenAI embedding APIs, applying Search-Adaptor on top. In addition, we evaluate Search-Adaptor's performance improvements with open-source Sentence T5 models (Ni et al., 2021a). Overall, Search-Adaptor provides significant improvements over alternatives, consistently across different datasets and models. The contributions of this paper are:
• We propose a novel adaptation framework for information retrieval applications that can significantly improve pre-trained LLMs.
• We introduce a novel ranking loss and multiple regularizers that reduce overfitting and forgetting, and thereby improve retrieval performance even in the small data regime.
• We provide consistent and significant improvements for retrieval performance with a range of datasets (from 1K to 500K positive query-corpus training data pairs). • We show that Search-Adaptor on smaller LLMs can approach the performance of larger LLMs in zero-shot setting, underlining its potential for model distillation. • We extend the application of Search-Adaptor to multimodal learning and tool use scenarios, demonstrating its significant benefits. 2 Related Work Pre-trained LLMs for zero-shot retrieval. LLMs to extract general text embeddings are commonly used in both academia and industry. AI solution providers like Google (gcp) and OpenAI have productionized general text embeddings that can be directly used via simple APIs for zero-shot retrieval applications. In addition, many previous efforts have introduced new general text embedding models with various pre-training methods and datasets. GTE (Li et al., 2023) proposes a multi-stage pre-training of embedding models with diverse naturally paired text datasets. E5 (Wang et al., 2022) pre-trains the embedding models by weakly-supervised contrastive learning, utilizing consistency-based filter to generate high 12231 quality text pairs for pre-training. Note that SearchAdaptor can be applicable on top of any pre-trained LLM embedding models to customize their embeddings for superior retrieval performances. Embedding customization. Instead of using one unified model for zero-shot retrieval, the embeddings can be customized for each dataset or task. Instruction-based embedding customization is one popular method. TART (Asai et al., 2022) builds a retrieval system that adapts the retrieval based on the instruction. Different retrieval tasks (e.g., code, question, or answer) are given as the instruction to further improve dense embedding retrieval. InstructOR (Su et al., 2022) integrates the task and domain descriptions prior to the input to fine-tune the embeddings for retrieval. However, these do not directly utilize the provided relevant query-corpus pairs. Full or parameter-efficient fine-tuning (such as LoRA (Hu et al., 2021) and (IA)3 (Liu et al., 2022)) can also be considered for embedding customization. Pre-trained LLMs can be fine-tuned with contrastive loss using positive query-corpus paired data. Promptagator (Dai et al., 2022) utilizes in-context learning to generate synthetic query-corpus pairs using a few number of original query-corpus pairs, and subsequently using those synthetic pairs to fine-tune the pre-trained LLMs. However, all these are only applicable when there is full access to the parameters of pre-trained LLMs, which is often not possible for state-of-theart commercial text embedding models. On the other hand, Search-Adaptor can be applied without full access to the LLM parameters. 3 Problem Formulation We formulate the retrieval problem with a given query-corpus paired dataset. Assume a query set denoted as Q = {q1, ..., qN} ∈Q and a corpus set denoted as C = {c1, ..., cM} ∈C. Each positive relationship between a query and corpus is represented as the triplet rij = (qi, cj, yij) with yij > 0 as the strength of the relationship between qi and cj. We treat all other triplets as negative relationships (yij = 0). The set of all query-corpus relationships is denoted as R = {(qi, cj, yij)}i=1:N,j=1:M = Rp ∪Rn, where Rp = {(qi, cj, yij) ∈R|yij > 0} is the set of positive relationships and Rn = {(qi, cj, yij) ∈R|yij = 0} is the set of negative relationships. Note that yij can be either binary or continuous. 
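As a concrete illustration of this data setup, the following minimal sketch builds the positive and negative relationship sets from a list of labeled triplets. The variable names are ours and are only meant to mirror the notation above.

```python
# Minimal sketch of the retrieval data structures described above.
# Queries, corpus items, and relevance labels are toy values for illustration.
queries = ["q1", "q2"]                      # Q = {q_1, ..., q_N}
corpus = ["c1", "c2", "c3"]                 # C = {c_1, ..., c_M}

# Known positive relationships (q_i, c_j, y_ij) with y_ij > 0; y can be binary or graded.
positive_pairs = {("q1", "c2"): 1.0, ("q2", "c1"): 2.0}

# R = R_p ∪ R_n: every (query, corpus) pair not labeled positive is treated as negative.
R_p, R_n = [], []
for q in queries:
    for c in corpus:
        y = positive_pairs.get((q, c), 0.0)
        (R_p if y > 0 else R_n).append((q, c, y))

print(len(R_p), len(R_n))  # 2 positive triplets, 4 negative triplets
```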
The retrieval system aims to find the relationship between the given query (q_i) and corpus (c_j) such that the predicted relationship is highly correlated with the ground truth relationship (y_ij). The scoring function f : Q × C → R takes queries and corpus data as inputs and outputs a score estimate of the relationship between them. The optimal score is one that has the same order as the ground truth relationship for each query.

Figure 2: Block diagram of Search-Adaptor. Grey-colored blocks are fixed components (e.g., a text embedding API); blue-colored blocks are additional trainable building blocks; and red-colored blocks are for loss computations. At inference, only the query and corpus adapters are utilized and the query predictor can be discarded.

4 Methods

Fig. 2 overviews the components of Search-Adaptor, which are described in the following sections.

4.1 Adapting pre-trained LLMs

Major real-world constraints for tuning LLM embedding models shape our methodological design. The tuning operation for high-capacity models can be very costly, and one often does not have access to the parameters and gradients of the pre-trained models (e.g., LLMs with API-based inference-only access). This motivates the need for an adaptation method that can operate with fixed pre-trained embedding models via an efficient adaptation module, extending to LLMs with API-based inference-only access. In Search-Adaptor, we propose modifying the embeddings extracted from pre-trained LLMs for superior search and information retrieval.

Consider the query and corpus embeddings extracted using the pre-trained embedding model E: $QE = \{qe_1, ..., qe_N\} \in \mathbb{R}^d$ and $CE = \{ce_1, ..., ce_M\} \in \mathbb{R}^d$, where $qe_i = E(q_i)$ and $ce_j = E(c_j)$. Note that both query and corpus embeddings are in the same embedding space. The objective of Search-Adaptor is to modify the embeddings extracted from pre-trained LLMs in a way that maximizes retrieval performance. A learnable adaptation function is defined as $f: \mathbb{R}^d \rightarrow \mathbb{R}^d$, which maps the original embedding to a new embedding that is more favorable for retrieval applications. The modified embeddings are denoted as $\hat{QE} = \{\hat{qe}_1, ..., \hat{qe}_N\} \in \mathbb{R}^d$ and $\hat{CE} = \{\hat{ce}_1, ..., \hat{ce}_M\} \in \mathbb{R}^d$, where $\hat{qe}_i = f(qe_i)$ and $\hat{ce}_j = f(ce_j)$. The relevance scores between modified query and corpus embeddings are defined as follows:
$$\hat{s}_{ij} = \mathrm{Cosine\text{-}Similarity}(\hat{qe}_i, \hat{ce}_j) = \frac{\hat{qe}_i \cdot \hat{ce}_j}{\lVert \hat{qe}_i \rVert \, \lVert \hat{ce}_j \rVert}.$$

Search-Adaptor consists of the following components (see Fig. 2 for details):
• Adaptation function f. This function is used to modify the query and corpus embeddings. We add a skip connection to f so that it only needs to learn the residual between the original and adapted embeddings: $\hat{qe}_i = qe_i + f(qe_i)$ and $\hat{ce}_j = ce_j + f(ce_j)$. Note that we use a shared adapter for both query and corpus (see Sec. 6 for ablation studies). The ranking loss, reconstruction loss, and prediction loss are used to train f.
• Query predictor p. This function is used to predict the query embedding from the adapted corpus embedding. The prediction loss is used to train p.
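A minimal PyTorch sketch of these components is given below. The two-layer MLP and hidden width are our own illustrative choices; the paper only states that a multi-layer perceptron is used for both the adapter and the query predictor (Appendix C).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adapter(nn.Module):
    """Shared residual adapter: maps a d-dim embedding to an adapted d-dim embedding.

    The skip connection means the MLP only learns how far to move each embedding
    away from its original position.
    """
    def __init__(self, dim: int, hidden: int = 512):  # hidden width is an assumption
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, e: torch.Tensor) -> torch.Tensor:
        return e + self.mlp(e)  # ê = e + f(e)

class QueryPredictor(nn.Module):
    """Predicts the adapted query embedding from the adapted corpus embedding (training only)."""
    def __init__(self, dim: int, hidden: int = 512):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, ce_hat: torch.Tensor) -> torch.Tensor:
        return self.mlp(ce_hat)

def relevance_scores(qe_hat: torch.Tensor, ce_hat: torch.Tensor) -> torch.Tensor:
    """Cosine similarities ŝ_ij between adapted queries (N x d) and corpus items (M x d)."""
    return F.normalize(qe_hat, dim=-1) @ F.normalize(ce_hat, dim=-1).T  # (N, M)

# Usage: qe, ce are fixed embeddings from the pre-trained (possibly API-only) model.
adapter = Adapter(dim=768)
qe, ce = torch.randn(4, 768), torch.randn(10, 768)
scores = relevance_scores(adapter(qe), adapter(ce))  # shape (4, 10)
```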
At inference, we only use the adaptation function f to modify the query and corpus embeddings. We then compute the cosine similarity between the modified query and corpus embeddings to estimate the relevance between query and corpus. The query predictor is not used at inference.

4.2 Ranking objective

As explained in Sec. 3, the objective of retrieval is to predict the correct order of relevance between queries and corpus. Therefore, the most critical part is to properly design the ranking loss. We propose a ranking loss as follows:
$$\mathcal{L}_{\mathrm{Rank}} = \sum_{i=1}^{N} \sum_{j=1}^{M} \sum_{k=1}^{M} \mathbb{I}(y_{ij} > y_{ik}) \cdot (y_{ij} - y_{ik}) \cdot \log\!\left(1 + e^{(s_{ik} - s_{ij})}\right),$$
where $\mathbb{I}(y_{ij} > y_{ik})$ is an indicator function that is equal to 1 if $y_{ij} > y_{ik}$ and 0 otherwise, and $s_{ij} = \mathrm{Cosine\text{-}Similarity}(E(q_i), E(c_j))$ is the relevance score between query text (q_i) and corpus text (c_j). The ranking loss penalizes the model more when it predicts a lower score for a query-corpus pair that has a higher ground truth relevance (i.e., $s_{ij} < s_{ik}$ even though $y_{ij} > y_{ik}$). The amount of penalization is proportional to the difference in ground truth relevance ($y_{ij} - y_{ik}$) and the difference in estimated scores, $\log(1 + e^{(s_{ik} - s_{ij})})$. Note that $\log(1 + e^{(s_{ik} - s_{ij})})$ can be replaced with any monotonic function, such as a linear function. In general, the ranking loss encourages the model to predict higher scores for query-corpus pairs that have a higher ground truth relevance. Table 4 shows a comparison of this ranking loss to alternatives and demonstrates its effectiveness.

4.3 Regularization

Introducing proper inductive biases via regularization is important to improve adaptation from pre-trained LLM embeddings without forgetting too much information from the pre-trained LLMs. Towards this end, we propose two regularization methods:

Recovery. To increase generalizability, we postulate avoiding modification of the adapted embedding too far away from the original embedding. As such, we propose minimizing the difference between the original and adapted embeddings using a recovery regularizer:
$$\mathcal{L}_{\mathrm{Rec}} = \frac{1}{N} \sum_{i=1}^{N} \lVert \hat{qe}_i - qe_i \rVert_1 + \frac{1}{M} \sum_{j=1}^{M} \lVert \hat{ce}_j - ce_j \rVert_1,$$
where $\hat{qe}_i$ is the adapted query embedding and $qe_i$ is the original query embedding; similarly, $\hat{ce}_j$ is the adapted corpus embedding and $ce_j$ is the original corpus embedding. The recovery regularizer encourages the adapted embeddings to stay close to the original embeddings.

Prediction. Intuitively, if the query and corpus are highly relevant, we can use the corpus to predict the query. Building upon this intuition, we propose a regularizer in the form of a prediction loss between the query and corpus, calculated as follows:
$$\mathcal{L}_{\mathrm{Pred}} = \frac{\sum_{i=1}^{N} \sum_{j=1}^{M} y_{ij} \cdot \lVert \hat{qe}_i - p(\hat{ce}_j) \rVert_1}{\sum_{i=1}^{N} \sum_{j=1}^{M} y_{ij}},$$
where $p: \mathbb{R}^d \rightarrow \mathbb{R}^d$ is a function that predicts the query given the corpus, and $y_{ij}$ is the weight assigned to the loss when the query and corpus are correlated. The prediction loss encourages the model to predict the query well given the corpus, especially if the query and corpus are correlated. Note that we do not include a prediction function from query to corpus because corpora are usually significantly longer than queries, which would render that task challenging.
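For concreteness, a minimal PyTorch sketch of the three losses above follows. We pass in the adapted relevance scores ŝ_ij so that gradients reach the adapter; treating the printed s_ij this way is our implementation choice. The dense (N, M, M) tensors are for readability only; in practice the corpus is subsampled per query batch, as described in the next subsection.

```python
import torch
import torch.nn.functional as F

def ranking_loss(scores: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """L_Rank over all (query i, corpus j, corpus k) triples.

    scores, labels: (N, M) tensors holding relevance scores s_ij and labels y_ij.
    clamp(y_ij - y_ik, min=0) equals I(y_ij > y_ik) * (y_ij - y_ik), and
    softplus(s_ik - s_ij) equals log(1 + exp(s_ik - s_ij)).
    """
    diff_y = labels.unsqueeze(2) - labels.unsqueeze(1)   # [i, j, k] = y_ij - y_ik
    diff_s = scores.unsqueeze(1) - scores.unsqueeze(2)   # [i, j, k] = s_ik - s_ij
    return (torch.clamp(diff_y, min=0.0) * F.softplus(diff_s)).sum()

def recovery_loss(qe, qe_hat, ce, ce_hat) -> torch.Tensor:
    """L_Rec: mean L1 distance between original and adapted embeddings."""
    return ((qe_hat - qe).abs().sum(dim=1).mean()
            + (ce_hat - ce).abs().sum(dim=1).mean())

def prediction_loss(qe_hat, ce_hat, labels, predictor) -> torch.Tensor:
    """L_Pred: relevance-weighted L1 error of predicting the adapted query from the corpus."""
    pred = predictor(ce_hat)                                         # (M, d)
    err = (qe_hat.unsqueeze(1) - pred.unsqueeze(0)).abs().sum(-1)    # (N, M) = ||q̂e_i - p(ĉe_j)||_1
    return (labels * err).sum() / labels.sum().clamp(min=1e-8)
```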
4.4 Training

Using the proposed ranking loss, recovery loss, and prediction loss, we optimize the adaptation function f and prediction function p by minimizing the following loss function:
$$\min_{f,p} \; \mathcal{L}_{\mathrm{Rank}}(f) + \alpha \mathcal{L}_{\mathrm{Rec}}(f) + \beta \mathcal{L}_{\mathrm{Pred}}(f, p),$$
where $\alpha \geq 0$ and $\beta \geq 0$ are hyper-parameters that control the relative importance of the different loss terms. In the experiments, we tune these hyper-parameters on a validation set ($\alpha \in \{0.0, 0.1, 1.0\}$ and $\beta \in \{0.0, 0.01, 0.1\}$). Table 4 shows the results of ablation studies on the effectiveness of the different loss terms. All hyper-parameters are provided in Appendix C. Note that the ranking loss compares all possible pairs between queries and corpus, which requires on the order of $NM^2$ computations per epoch ($M \gg N$). For efficient computation, we randomly subsample the corpus for each query batch. While doing so, we always include the corpus items that have positive relevance to the queries in that batch.

5 Experiments

We evaluate the performance of Search-Adaptor in multiple scenarios on numerous datasets. We demonstrate that Search-Adaptor is model-agnostic, applying it both on top of API-based LLMs (merely via access to Google & OpenAI APIs) and open-sourced LLMs (e.g., Sentence T5 (Ni et al., 2021a)). We also demonstrate that it is data-agnostic by evaluating Search-Adaptor on English, multilingual, and multimodal datasets.

5.1 Experimental settings

We first consider the 14 retrieval datasets from the BEIR repository (bei) to evaluate performance on English data, with corpus sizes ranging from 3.6K to 8.8M and training pairs ranging from 0.7K to 532K. For the datasets with only test data (e.g., Arguana, SciDocs), we split the data into disjoint train and test sets with a 50/50 ratio, based on the sorted query IDs. We also use the MIRACL data (mir), which consists of 17 multilingual datasets including Japanese, Chinese, French, and Indonesian. More dataset details can be found in Appendix A. We use nDCG@10 as the main metric to quantify retrieval performance (see Appendix B for more details). For model selection, we divide the training data into disjoint training and validation splits with an 80/20 ratio, and select the model with the highest validation nDCG@10 value. We consider both API-based and open-sourced LLMs. As the API-based LLMs, we use the OpenAI embedding API (ope) and the Google embedding API (gcp). As the open-sourced LLMs, we use Sentence T5 models of two different sizes (https://tfhub.dev/google/sentence-t5/st5-base/1), GTE-Large (Li et al., 2023), GTR-Large (Ni et al., 2021b), and Condenser-Retriever (Gao and Callan, 2021).

5.2 Adapting with API-based LLMs

One of the biggest advantages of Search-Adaptor is that it can be applied on top of any API-based LLM – without having access to the parameters of the LLM, Search-Adaptor can further improve its retrieval performance. This is particularly important as the state-of-the-art LLMs are often API-based models (owned by companies). As can be seen in Fig. 3 and Table 1, we demonstrate retrieval performance improvements on top of API-based text embedding models across 14 datasets from the BEIR repository. On average, Search-Adaptor achieves 0.0475 and 0.0349 nDCG@10 improvements for the Google and OpenAI text embedding APIs, respectively. The improvements on some datasets are quite significant indeed – e.g., 0.1739 on Arguana and 0.0856 on SciFact.
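To tie Sections 4.4 and 5.2 together, the sketch below trains the adapter on embeddings that have already been fetched from a (possibly API-only) embedding model, using the combined objective and per-batch corpus subsampling that always retains the positives. The batch size, subsampling ratio of 10, Adam, learning rate, and iteration count follow Appendix C (early stopping on validation nDCG@10 is omitted for brevity); the default α and β are illustrative values from the tuned grid; `Adapter`, `QueryPredictor`, `relevance_scores`, and the loss functions refer to the earlier sketches; and the dense (N, M) label matrix is for readability only.

```python
import torch

def train_search_adaptor(qe, ce, labels, adapter, predictor,
                         alpha=0.1, beta=0.01, lr=1e-3, steps=2000,
                         batch_queries=128, neg_ratio=10):
    """qe: (N, d) query embeddings, ce: (M, d) corpus embeddings (precomputed, frozen);
    labels: (N, M) relevance matrix. Only the adapter and predictor are trained."""
    opt = torch.optim.Adam(list(adapter.parameters()) + list(predictor.parameters()), lr=lr)
    N, M = labels.shape
    for _ in range(steps):
        q_idx = torch.randint(0, N, (min(batch_queries, N),))
        y_q = labels[q_idx]                                   # (B, M)
        pos = y_q.sum(dim=0).nonzero(as_tuple=True)[0]        # corpus items positive for this batch
        if len(pos) == 0:
            continue                                          # skip batches with no positives
        rand = torch.randint(0, M, (neg_ratio * len(pos),))   # random negative subsample
        c_idx = torch.unique(torch.cat([pos, rand]))
        qe_b, ce_b, y_b = qe[q_idx], ce[c_idx], y_q[:, c_idx]

        qe_hat, ce_hat = adapter(qe_b), adapter(ce_b)
        scores = relevance_scores(qe_hat, ce_hat)             # adapted cosine similarities
        loss = (ranking_loss(scores, y_b)
                + alpha * recovery_loss(qe_b, qe_hat, ce_b, ce_hat)
                + beta * prediction_loss(qe_hat, ce_hat, y_b, predictor))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return adapter
```

At serving time, the trained adapter is simply applied to each new embedding returned by the same API before the usual cosine-similarity ranking, so the base model never needs to be modified or re-queried differently.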
Figure 3: Performance improvements with Search-Adaptor on top of Google's LLM-based embedding API (gecko@latest, 768 dimensions) for 14 BEIR datasets, in terms of nDCG@10 (relative per-dataset gains range from 0.3% to 41.0%).

Figure 4: Performance improvements with Search-Adaptor on top of Google's LLM-based embedding API (gecko-multilingual@latest, 768 dimensions) for 17 multilingual MIRACL datasets, in terms of nDCG@10 (relative per-dataset gains range from 0.4% to 12.4%).

Table 1: Performance improvements with Search-Adaptor and OpenAI's embedding customization solution (OpenAI's solution) on top of OpenAI's LLM-based embedding API (text-embedding-ada-002, 1536 dimensions) with 8 BEIR datasets.
Datasets | Zero-shot | Search-Adaptor | OpenAI's Solution
NFCorpus | 0.3750 | 0.3785 | 0.2595
SciFact | 0.7221 | 0.7904 | 0.6449
Arguana | 0.5885 | 0.6493 | 0.6151
SciDocs | 0.2003 | 0.2158 | 0.1941
FiQA | 0.4366 | 0.4410 | 0.3119
Trec-Covid | 0.7224 | 0.7733 | 0.7712
Touche | 0.2590 | 0.3312 | 0.3157
Quora | 0.8830 | 0.8869 | 0.8670

We also compare with OpenAI's embedding customization solution (OpenAI's solution; https://github.com/openai/openai-cookbook/blob/main/examples/Customizing_embeddings.ipynb). Table 1 shows that it performs worse than not only Search-Adaptor but also the zero-shot OpenAI embedding in terms of ranking metrics (nDCG@10). OpenAI's solution solves a classification problem between positively and negatively correlated text pairs: it adds "random negatives" to the positive paired samples and tries to distinguish between positive and "random negative" pairs using an MSE loss. This problem is much easier than the retrieval problem that Search-Adaptor tries to solve, where the task is to identify the positive pairs from all possible negative pairs (including hard negatives). Also, Search-Adaptor utilizes a ranking loss while OpenAI's solution utilizes an MSE loss (which is better suited to classification or regression problems). Lastly, OpenAI's solution does not have regularizations that mitigate the high risk of overfitting when the number of query-corpus pairs is small.

Table 2: Performance comparison of Search-Adaptor and fine-tuning baselines on top of open-sourced embedding models (ST5-Base and GTE-Large) with 8 BEIR datasets.
Datasets | ST5-Base Zero-shot | ST5-Base Search-Adaptor | ST5-Base Full Fine-tuning | GTE-Large Zero-shot | GTE-Large Search-Adaptor | GTE-Large LoRA
NFCorpus | 0.3100 | 0.3258 | 0.3501 | 0.3810 | 0.4063 | 0.3512
SciFact | 0.5237 | 0.7255 | 0.7542 | 0.7419 | 0.8179 | 0.6433
Arguana | 0.3646 | 0.5501 | 0.6239 | 0.5987 | 0.6292 | 0.6091
SciDocs | 0.1393 | 0.1657 | 0.1640 | 0.2460 | 0.2531 | 0.2209
FiQA | 0.4064 | 0.4416 | 0.4557 | 0.4362 | 0.4428 | 0.4328
Trec-Covid | 0.5990 | 0.6986 | 0.4178 | 0.7242 | 0.7593 | 0.7656
Touche | 0.2291 | 0.3393 | 0.1844 | 0.2566 | 0.2905 | 0.2752
Quora | 0.7484 | 0.8664 | 0.7817 | 0.8831 | 0.8842 | 0.8871
Average | 0.4151 | 0.5141 | 0.4151 | 0.5335 | 0.5604 | 0.5232

5.2.1 Search-Adaptor on multilingual data

Search-Adaptor is also applicable to non-English multilingual data. In Fig.
4, Search-Adaptor shows consistent performance improvements on top of Google Embedding API across 17 different languages (on average 0.0396 nDCG@10 improvement). For some languages, it is particularly significant, e.g. the improvement is 0.0687 for Thai. These highlight Search-Adaptor being a model-agnostic and data-agnostic approach. More experiments with additional embedding models (e.g., GTR-Large (Ni et al., 2021b) and CondenserRetriever (Gao and Callan, 2021)) can be found in Appendix D. Qualitative analyses can be also found in Appendix E. 5.3 Adapting with open-sourced LLMs Beyond API-based LLMs, Search-Adaptor can be applied to open-sourced LLMs. Here, we consider Sentence T5-Base (Ni et al., 2021a) and GTELarge (Li et al., 2023) models as the open-sourced LLMs to demonstrate the performance improvements over the baselines. As shown in Table 2, Search-Adaptor shows consistent improvements over zero-shot ST5-Base model. For the open-sourced LLMs, we can also utilize full fine-tuning (with contrastive loss) and LoRA (Hu et al., 2021) as alternatives of SearchAdaptor, albeit the higher training cost. The experimental results on the considered benchmarks show that on average, full fine-tuning and LoRA performances can indeed be worse than SearchAdaptor. Surprisingly, the performance of full fine-tuning and LoRA can even be much worse than the zero-shot baseline (e.g., for Trec-Covid, Touche with full fine-tuning and NFCorpus, SciFact with LoRA) which is attributed to fine-tuning being prone to overfitting and poor generalization (Lin et al., 2023). With a limited number of query-corpus pairs, Search-Adaptor performs better than fine-tuning methods due to lower risk of overfitting. Also, training cost (both memory and computations) is much lower with Search-Adaptor than fine-tuning methods. On the other hand, with enough query-corpus pairs, fine-tuning methods may perform better than Search-Adaptor with open-source retrievers. Based on these pros and cons, the appropriate customization method can be selected given the specific application scenario. If black-box retrievers work better than open-source retrievers or when the number of query-corpus pairs is small, then Search-Adaptor would be a superior choice of customization. On the other hand, if open-source retrievers work better than black-box retrievers and the number of querycorpus pairs is large, we can utilize fine-tuning methods to customize their retrievers. 5.4 Search-Adaptor with multimodal data Datasets ZeroSearchGains shot Adaptor (%) Dresses 0.2315 0.2681 15.8% Jackets 0.1652 0.2319 40.4% Pants 0.1248 0.1821 45.9% Skirts 0.1923 0.2282 18.7% Tops 0.2270 0.2542 12.0% Table 3: Multimodal retrieval performance (text to image) with Search-Adaptor for Google Cloud’s LLM based multimodal embedding API (1408 dimensions) with Fashion-200K dataset. Search-Adaptor makes consistent and significant improvements when applied on text embeddings. 
We also extend Search-Adaptor from textonly to multimodal data, with image and text, using 12236 Variants NFCorpus SciFact Arguana SciDocs FiQA Trec-covid Zero-shot baseline 0.3100 0.5237 0.3646 0.1393 0.4064 0.5990 Original Search-Adaptor 0.3258 0.7255 0.5501 0.1657 0.4416 0.6986 (a) Architectural modifications Without skip connection 0.3243 0.6465 0.5110 0.1579 0.4133 0.6380 With separate adapters 0.3047 0.5488 0.3659 0.1463 0.3977 0.6148 (b) Regularizer modifications Without prediction loss 0.3235 0.6501 0.5456 0.1642 0.4078 0.6177 Without reconstruction loss 0.3245 0.6491 0.5439 0.1637 0.4127 0.6551 (c) Alternative ranking losses Sigmoid cross entropy 0.3026 0.5917 0.4912 0.1567 0.4052 0.6702 Contrastive loss (Izacard et al., 2021b) 0.3046 0.5316 0.4822 0.1449 0.4091 0.6723 Softmax cross entropy (Bruch et al., 2019) 0.3097 0.5452 0.4874 0.1346 0.4121 0.6549 RankNet loss (Burges et al., 2005) 0.3119 0.5511 0.4699 0.1599 0.4155 0.6428 Table 4: Ablation studies with variants of Search-Adaptor. As ablation scenarios, we modify regularizers and architectures of the original Search-Adaptor, and replace the proposed ranking loss with alternative ranking losses. Google Cloud’s multimodal embedding API 4. We use the Fashion-200K dataset (Han et al., 2017) for the text to image retrieval task to show how much Search-Adaptor can make further improvements on top of base multimodal embedding API. Table 3 shows that Search-Adaptor can achieve 20-30% of performance improvement in nDCG@10 across 5 sub datasets of Fashion-200K datasets. Relevant qualitative analyses can be found in Appendix F. 5.5 Search-Adaptor for tool retrieval Datasets ZeroSearchGains shot Adaptor (%) ToolE - single tool 0.5292 0.8321 57.2% ToolBench - I1 0.6289 0.7320 16.4% ToolBench - I2 0.5054 0.6774 34.0% ToolBench - I3 0.5833 0.7917 35.7% Table 5: Tool retrieval performance with SearchAdaptor on top of Google’s LLM based embedding API (gecko@latest) in terms of NDCG@1 metric. In this subsection, we further extend SearchAdaptor to tool retrieval application (Patil et al., 2023) where agents choose which actions to perform to automate execution of a task given the input query. The objective of tool retrieval is to retrieve the proper tools for the new query based on the descriptions of tools. On two datasets, ToolE (Huang et al., 2023), ToolBench (Qin et al., 2023), we study the potential of Search-Adaptor to improve the tool 4https://cloud.google.com/vertex-ai/ generative-ai/docs/embeddings/ get-multimodal-embeddings retrieval performance, that would yield superior agents. Table 5 shows that with Search-Adaptor, significant retrieval performance improvements, 1550%, are obtained. 6 Discussions 6.1 Ablation studies Search-Adaptor proposes multiple innovations to improve the adaptation performance. We quantify the contributions of proposed constituents to the retrieval performance on various datasets with as ST5-Base as the base embedding model, with the results in Table 4. We consider various modifications to Search-Adaptor: (i) altering the architecture, (ii) applying different regularizations, and (iii) applying different losses. Using different losses yields the largest performance degradation, underlining the importance of the proposed ranking loss. Aside from the losses, if we use separate adapters for query and corpus, it also yields a noticeable performance drop. This shows the importance of ‘shared embedding space’ between the query and corpus for retrieval. 
The skip connection and the two regularization functions bring additional performance gains but their impact is lower than the ranking losses. Table 4(c) shows the impact of the proposed ranking loss in comparison to alternatives: (i) point-wise sigmoid cross entropy, (ii) contrastive loss (Izacard et al., 2021b), (iii) softmax cross entropy (Bruch et al., 2019) and (iv) RankNet loss (Burges et al., 2005). With the proposed ranking loss of the original Search-Adaptor, significant outperformance is observed, compared 12237 to the alternative ranking losses. 6.2 Small LLMs with Search-Adaptor outperform zero-shot large LLMs One bottleneck for real-world LLM deployment can be the prediction latency and cost, that are highly dependent on the LLM model size. We demonstrate that Search-Adaptor can achieve better or comparable retrieval performance even with much smaller LLMs, compared to to zero-shot retrieval with larger LLMs. LLMs ST5-Base ST5-Large Datasets ZeroSearchZeroSearchshot Adaptor shot Adaptor NFCorpus 0.3100 0.3258 0.3354 0.3410 SciFact 0.5237 0.7255 0.5801 0.7530 Arguana 0.3646 0.5501 0.2662 0.4770 SciDocs 0.1393 0.1657 0.1618 0.1850 FiQA 0.4064 0.4416 0.4785 0.5028 Trec-covid 0.5990 0.6986 0.6471 0.7082 Touche 0.2291 0.3393 0.2624 0.3408 Quora 0.7484 0.8664 0.7560 0.9705 Average 0.4151 0.5141 0.4607 0.5223 Table 6: The performance of Search-Adaptors when applied on Sentence-T5 models, (i) ST5-Base (110M parameters) and (ii) ST5-Large (335M parameters), in terms of nDCG@10 metric. As shown in Table 6, Search-Adaptor with ST5Base model (110M parameters) performs much better than ST5-Large (335M parameters). SearchAdaptor can achieve better results with much smaller encoders, positioning it as an effective distillation mechanism to significantly decrease the serving cost and latency of retrieval systems. It also reiterates the benefits of Search-Adaptor being model agnostic. 7 Conclusions In this paper, we focus on pushing the capabilities of LLMs for information retrieval and search. We propose a canonical efficient adaptation method, Search-Adaptor, that can also be applied to LLMs even with inference-only access. Search-Adaptor is a low-cost tuning method that brings significant and consistent improvements in retrieval performance across diverse regimes of training data size, encoder type, and corpus set. This is enabled by the judicious design of its adaptor module, along with training objectives and approaches. We have also studied the extension of Search-Adaptor to multimodal learning and tool use scenarios, highlighting the importance of embedding customization for such applications. 8 Limitations and Future Works Important future directions might include generalizing the propose adaptation method to include partial tuning of the embedding models as a way to increase trainable degrees of freedom; extensions to embedding tasks beyond retrieval; and extensions to multimodal learning with many modalities. There is no specific risk of the proposed method other than the general risks of tuning methods that they can lead to overfitting to certain tasks and they can absorb the biases present in the target tuning data. References Beir data repository. https://github.com/ beir-cellar/beir. Google cloud text embedding. https://cloud. google.com/vertex-ai/docs/generative-ai/ embeddings/get-text-embeddings. Miracl data repository. https://project-miracl. github.io/. Openai text embedding. https://platform.openai. com/docs/guides/embeddings/embeddings. 
Akari Asai, Timo Schick, Patrick Lewis, Xilun Chen, Gautier Izacard, Sebastian Riedel, Hannaneh Hajishirzi, and Wen-tau Yih. 2022. Taskaware retrieval with instructions. arXiv preprint arXiv:2211.09260. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Sebastian Bruch, Xuanhui Wang, Michael Bendersky, and Marc Najork. 2019. An analysis of the softmax cross entropy loss for learning-to-rank with binary relevance. In Proceedings of the 2019 ACM SIGIR international conference on theory of information retrieval, pages 75–78. Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Greg Hullender. 2005. Learning to rank using gradient descent. In Proceedings of the 22nd international conference on Machine learning, pages 89–96. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling 12238 language modeling with pathways. arXiv preprint arXiv:2204.02311. Zhuyun Dai, Vincent Y Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, Anton Bakalov, Kelvin Guu, Keith B Hall, and Ming-Wei Chang. 2022. Promptagator: Few-shot dense retrieval from 8 examples. arXiv preprint arXiv:2209.11755. Luyu Gao and Jamie Callan. 2021. Condenser: a pre-training architecture for dense retrieval. arXiv preprint arXiv:2104.08253. Xintong Han, Zuxuan Wu, Phoenix X Huang, Xiao Zhang, Menglong Zhu, Yuan Li, Yang Zhao, and Larry S Davis. 2017. Automatic spatially-aware fashion concept discovery. In Proceedings of the IEEE international conference on computer vision, pages 1463–1471. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In International Conference on Machine Learning, pages 2790–2799. PMLR. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685. Yue Huang, Jiawen Shi, Yuan Li, Chenrui Fan, Siyuan Wu, Qihui Zhang, Yixin Liu, Pan Zhou, Yao Wan, Neil Zhenqiang Gong, et al. 2023. Metatool benchmark for large language models: Deciding whether to use tools and which to use. arXiv preprint arXiv:2310.03128. Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2021a. Towards unsupervised dense information retrieval with contrastive learning. arXiv preprint arXiv:2112.09118. Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2021b. Unsupervised dense information retrieval with contrastive learning. arXiv preprint arXiv:2112.09118. Kalervo Järvelin and Jaana Kekäläinen. 2002. Cumulated gain-based evaluation of ir techniques. ACM Transactions on Information Systems (TOIS), 20(4):422–446. Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691. Xiang Lisa Li and Percy Liang. 2021. 
Prefixtuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190. Zehan Li, Xin Zhang, Yanzhao Zhang, Dingkun Long, Pengjun Xie, and Meishan Zhang. 2023. Towards general text embeddings with multi-stage contrastive learning. arXiv preprint arXiv:2308.03281. Yong Lin, Lu Tan, Hangyu Lin, Zeming Zheng, Renjie Pi, Jipeng Zhang, Shizhe Diao, Haoxiang Wang, Han Zhao, Yuan Yao, et al. 2023. Speciality vs generality: An empirical study on catastrophic forgetting in fine-tuning foundation models. arXiv preprint arXiv:2309.06256. Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, and Colin A Raffel. 2022. Few-shot parameter-efficient finetuning is better and cheaper than in-context learning. Advances in Neural Information Processing Systems, 35:1950–1965. Jianmo Ni, Gustavo Hernández Ábrego, Noah Constant, Ji Ma, Keith B Hall, Daniel Cer, and Yinfei Yang. 2021a. Sentence-t5: Scalable sentence encoders from pre-trained text-to-text models. arXiv preprint arXiv:2108.08877. Jianmo Ni, Chen Qu, Jing Lu, Zhuyun Dai, Gustavo Hernández Ábrego, Ji Ma, Vincent Y Zhao, Yi Luan, Keith B Hall, Ming-Wei Chang, et al. 2021b. Large dual encoders are generalizable retrievers. arXiv preprint arXiv:2112.07899. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744. Shishir G Patil, Tianjun Zhang, Xin Wang, and Joseph E Gonzalez. 2023. Gorilla: Large language model connected with massive apis. arXiv preprint arXiv:2305.15334. Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, et al. 2023. Toolllm: Facilitating large language models to master 16000+ real-world apis. arXiv preprint arXiv:2307.16789. Andreas Rücklé, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, and Iryna Gurevych. 2020. Adapterdrop: On the efficiency of adapters in transformers. arXiv preprint arXiv:2010.11918. Hongjin Su, Jungo Kasai, Yizhong Wang, Yushi Hu, Mari Ostendorf, Wen-tau Yih, Noah A Smith, Luke Zettlemoyer, Tao Yu, et al. 2022. One embedder, any task: Instruction-finetuned text embeddings. arXiv preprint arXiv:2212.09741. 12239 Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. 2021. Beir: A heterogenous benchmark for zero-shot evaluation of information retrieval models. arXiv preprint arXiv:2104.08663. Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, and Furu Wei. 2022. Text embeddings by weaklysupervised contrastive pre-training. arXiv preprint arXiv:2212.03533. Elad Ben Zaken, Shauli Ravfogel, and Yoav Goldberg. 2021. Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked languagemodels. arXiv preprint arXiv:2106.10199. Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022a. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068. Xinyu Zhang, Nandan Thakur, Odunayo Ogundepo, Ehsan Kamalloo, David Alfonso-Hermelo, Xiaoguang Li, Qun Liu, Mehdi Rezagholizadeh, and Jimmy Lin. 2022b. Making a miracl: Multilingual information retrieval across a continuum of languages. arXiv preprint arXiv:2210.09984. 
12240 A Data Statistics A.1 BEIR datasets Datasets The number of train pairs The number of test pairs The number of corpus NFCorpus 110575 12334 3.6K SciFact 919 339 5K Arguana 703 703 8.67K SciDocs 14972 14956 25K FiQA 14166 1706 57K Trec-Covid 35460 30876 171K Touche 1077 1137 382K Quora 7626 15675 523K NQ 2097 2104 2.68M DBPedia 5673 43515 4.63M HotPotQA 170000 14810 5.23M Fever 140085 7937 5.42M Climate-fever 2299 2382 5.42M MSMarco 532751 9260 8.84M Table 7: The statistics of the BEIR datasets (sorted by the number of corpus). A.2 MIRACL datasets Datasets The number of train pairs The number of test pairs The number of corpus Yoruba (yo) 959 229 49043 Swahilli (sw) 9359 5092 131924 Bengali (bn) 16754 4206 297265 Hindi (hi) 11668 3494 506264 Telugu (te) 18608 1606 518079 Thai (th) 21293 7573 542166 Indonesian (id) 41358 9668 1446315 Korean (ko) 12767 3057 1486752 Finnish (fi) 20350 12008 1883509 Arabic (ar) 25382 29197 2061414 Persian (fa) 21844 6571 2207172 Chinese (zh) 13113 3928 4934368 Japanese (ja) 34387 8354 6953614 Russian (ru) 33921 13100 9543918 Spanish (es) 21531 6443 10373953 French (fr) 11426 3429 14636953 Germany (de) 2526 628 15866222 Table 8: The statistics of the MIRACL datasets (sorted by the number of corpus). A.3 Fashion-200K datasets Datasets The number of train pairs The number of test pairs The number of corpus Dresses 15127 1567 72376 Jackets 8105 1511 71118 Pants 9264 1758 74470 Skirts 6822 1247 47931 Tops 13809 2536 72444 Table 9: The statistics of the Fashion-200K datasets. 12241 A.4 Tool retrieval datasets Datasets The number of train pairs The number of test pairs The number of corpus ToolE - single tool 16440 4110 199 ToolBench - I1 87322 97 10439 ToolBench - I2 84722 93 13142 ToolBench - I3 25155 96 1605 Table 10: The statistics of the tool retrieval datasets (ToolE and ToolBench). B Metrics For tasks that involve retrieving information, normalized discounted cumulative gain (nDCG) (Järvelin and Kekäläinen, 2002) is a standard metric for evaluating performance. To define nDCG, we first consider discounted cumulative gain (DCG): DCG(y, s) = X i 2yi log2(rank(si) + 1), where s is the relevance score computed by the model and y is the ground truth label. nDCG is then defined as nDCG(y, s) = DCG(y,s) DCG(y,y), where the denominator assumes the optimal case where the ranking of the scores (s) are exactly the same as the ranking of the ground truth label (y). nDCG@k is a widely used variation of nDCG where only the top k scores are considered. In this paper, we use nDCG@10 as the main retrieval metric. C Hyper-parameters We summarize the hyper-parameters used to train Search-Adaptor. In all experiments, we utilize the fixed hyper-parameters (except α, β) that enable applying Search-Adaptor without extensive hyper-parameter tuning. We use multi-layer perceptron as the adaptor architecture for both the encoder and the predictor. Hyper-parameters Fixed values Recovery loss coefficient (α) {0.0, 0.1, 1.0} Prediction loss coefficient (β) {0.0, 0.01, 0.1} Batch size for training 128 Maximum number of training iterations 2000 Patience for early stopping 125 Learning rates 0.001 Optimizer Adam Negative pair subsampling ratio (compared with positive pairs) 10 Table 11: Hyper-parameters used to train Search-Adaptor in all experiments. 12242 D Additional Experiments We include the additional results of Search-Adaptor with GTR-Large5 (Ni et al., 2021b) and CondenserRetriever 6 (Gao and Callan, 2021) as the base embedding models. As can be seen in Table. 
12, the results are consistent with the above results that Search-Adaptor shows consistent and significant improvements on top of both GTR-Large and Condenser-Retriever models. For the Condenser-Retriever model, we apply pooling and normalization on the token embeddings to extract the final text embeddings. Datasets GTR-Large Model Condenser-Retriever Model Zero-shot Search-Adaptor Gains (%) Zero-shot Search-Adaptor Gains (%) NFCorpus 0.3148 0.3242 2.99% 0.0882 0.2506 184.13% SciFact 0.5331 0.7469 40.11% 0.2182 0.6783 210.86% Arguana 0.5139 0.6360 23.76% 0.2744 0.3757 36.92% SciDocs 0.1657 0.1687 1.81% 0.0659 0.1215 84.37% FiQA 0.4069 0.4265 4.82% 0.0775 0.2445 215.48% Trec-Covid 0.6912 0.7481 8.23% 0.3416 0.5769 68.88% Touche 0.2723 0.3227 18.51% 0.0623 0.1928 8.34% Quora 0.8428 0.8795 4.35% 0.7937 0.8599 8.34% Average 0.4676 0.5315 13.68% 0.2402 0.4125 71.72% Table 12: Performance improvements with Search-Adaptor on top of GTR-Large and Condenser-Retriever embedding models. 5https://huggingface.co/sentence-transformers/gtr-t5-large 6https://huggingface.co/Luyu/co-condenser-marco-retriever 12243 E Qualitative Analysis To understand the impact of Search-Adaptor, we first analyze the cosine similarity between query and corpus, before and after Search-Adaptor training. 0.4 0.5 0.6 0.7 0.8 Cosine similarity 0 5 10 15 20 25 Number of query-corpus pairs Relevant query-corpus pairs Irrelevant query-corpus pairs (a) Score distribution before Search-Adaptor 0.2 0.0 0.2 0.4 0.6 0.8 Cosine similarity 0 5 10 15 20 25 30 35 Number of query-corpus pairs Relevant query-corpus pairs Irrelevant query-corpus pairs (b) Score distribution after Search-Adaptor Figure 5: Cosine similarity score distributions before and after Search-Adaptor. As can be seen in Fig. 5, after Search-Adaptor training, the distribution differences between relevant and irrelevant pairs’ cosine similarity are larger which means that we can identify the relevant corpus per each query better. To further understand the distribution difference of query and corpus embeddings before and after Search-Adaptor training, we plot tSNE graphs of them. 80 60 40 20 0 20 40 60 80 100 75 50 25 0 25 50 75 (a) tSNE analysis before Search-Adaptor 100 75 50 25 0 25 50 75 100 100 75 50 25 0 25 50 75 100 (b) tSNE analysis after Search-Adaptor Figure 6: tSNE distributions before and after Search-Adaptor. Red represents query embedding and blue represents corpus embedding. Fig. 6 shows the impact of Search-Adaptor. The left figure shows that the original query and corpus embeddings are quite distinct. Most query embeddings are located in the restricted region. On the other hand, after training with Search-Adaptor, query embedding distribution is observed to better overlap with the corpus embedding distribution, which could result in more robust retrieval. 12244 We further investigate the success and failure cases of Search-Adaptor in comparison to the zero-shot baseline. Bold represents the relevant corpus to the query. Query Baseline Retrieval Search-Adaptor Retrieval Suboptimal nutrition is not predictive of chronic disease Maternal and child undernutrition: consequences for adult health and human capital Global, regional, and national comparative risk assessment of 79 behavioural, environmental and occupational, and metabolic risks or clusters of risks, 1990–2015: a systematic analysis for the Global Burden of Disease Study 2015 Effect of women’s nutrition before and during early pregnancy on maternal and infant outcomes: a systematic review. 
Dietary quality among men and women in 187 countries in 1990 and 2010: a systematic assessment Dietary quality among men and women in 187 countries in 1990 and 2010: a systematic assessment Biomarkers of endothelial dysfunction and risk of type 2 diabetes mellitus. The PRR MDA5 is a sensor of RNA virus infection. Ribose 2-O-methylation provides a molecular signature for the distinction of self and non-self mRNA dependent on the RNA sensor Mda5 Immune signaling by RIG-I-like receptors. Immune signaling by RIG-I-like receptors. Ribose 2-O-methylation provides a molecular signature for the distinction of self and non-self mRNA dependent on the RNA sensor Mda5 RIG-I-mediated antiviral responses to singlestranded RNA bearing 5’-phosphates. RIG-I-mediated antiviral responses to singlestranded RNA bearing 5’-phosphates. A deficiency of vitamin B12 increases blood levels of homocysteine. Preventing coronary heart disease: B vitamins and homocysteine. Folic acid improves endothelial function in coronary artery disease via mechanisms largely independent of homocysteine lowering. Effect of homocysteine lowering on mortality and vascular disease in advanced chronic kidney disease and end-stage renal disease: a randomized controlled trial. Randomized trial of folic acid supplementation and serum homocysteine levels. Hyperhomocysteinemia and atherosclerotic vascular disease: pathophysiology, screening, and treatment. off. The effect of folic acid supplementation on plasma homocysteine in an elderly population. Table 13: Success cases: Examples of query and top-3 retrieved documents where relevant documents are ranked higher in Search-Adaptor in comparison to baseline. Top-3 retrieved documents’ titles are listed above. 12245 Query Baseline Retrieval Search-Adaptor Retrieval Antibiotic induced alterations in the gut microbiome reduce resistance against Clostridium difficile Antibiotic-induced shifts in the mouse gut microbiome and metabolome increase susceptibility to Clostridium difficile infection Precision microbiome reconstitution restores bile acid mediated resistance to Clostridium difficile Precision microbiome reconstitution restores bile acid mediated resistance to Clostridium difficile Antibiotic-induced shifts in the mouse gut microbiome and metabolome increase susceptibility to Clostridium difficile infection Role of gut commensal microflora in the development of experimental autoimmune encephalomyelitis. Microbiome-driven allergic lung inflammation is ameliorated by short-chain fatty acids The genomic aberrations found in matasteses are very similar to those found in the primary tumor. Evolution of metastasis revealed by mutational landscapes of chemically induced skin cancers Intratumor heterogeneity and branched evolution revealed by multiregion sequencing. Molecular characterization of endometrial cancer: a correlative study assessing microsatellite instability, MLH1 hypermethylation, DNA mismatch repair protein expression, and PTEN, PIK3CA, KRAS, and BRAF mutation analysis. Diverse tumorigenic pathways in ovarian serous carcinoma. Deregulated DNA polymerase beta induces chromosome instability and tumorigenesis. Evolution of metastasis revealed by mutational landscapes of chemically induced skin cancers Incidence rates of cervical cancer have increased due to nationwide screening programs based primarily on cytology to detect uterine cervical cancer. Mass screening programmes and trends in cervical cancer in Finland and the Netherlands. 
The effect of mass screening on incidence and mortality of squamous and adenocarcinoma of cervix uteri. The effect of mass screening on incidence and mortality of squamous and adenocarcinoma of cervix uteri. Mass screening programmes and trends in cervical cancer in Finland and the Netherlands. Efficacy of human papillomavirus testing for the detection of invasive cervical cancers and cervical intraepithelial neoplasia: a randomised controlled trial. Efficacy of human papillomavirus testing for the detection of invasive cervical cancers and cervical intraepithelial neoplasia: a randomised controlled trial.
Table 14: Failure cases: Examples of queries and top-3 retrieved documents where the relevant documents are ranked higher by the baseline than by Search-Adaptor. The titles of the top-3 retrieved documents are listed above.

As can be seen in Tables 13 and 14, in the failure cases Search-Adaptor still retrieves the relevant corpus entry within its top 3, but ranks it lower than the baseline does. In the success cases, Search-Adaptor retrieves the correct corpus entry even when the baseline fails completely. Quantitatively, out of 300 test samples, there are 9 cases where Search-Adaptor retrieves the correct corpus entry in its top 3 while the baseline does not retrieve any correct entry in its top 3, and there is no case of the opposite.

F Qualitative Analysis on Multimodal Data

[Figure 7: Qualitative analyses of text-to-image retrieval with Search-Adaptor using Fashion-200K data. Columns: text query, ground-truth image, zero-shot top-2 retrievals, Search-Adaptor top-2 retrievals. The six text queries are: "multicolor fisher project recycled cashmere blazer", "black women's bonded crepe biker jacket", "black freya wide leg trouser waistband", "blue plaid pull trousers", "black cutout-back sheath dress", and "beige sequin lace t-shirt dress".]

Fig. 7 shows 6 examples from the Fashion-200K dataset where Search-Adaptor produces better retrieval outputs than the zero-shot baseline (Google Cloud's multimodal embedding API). The first column shows the given text query, the second column the ground-truth relevant image for that query, the third column the top-2 retrieved outputs of the zero-shot baseline, and the last column the top-2 retrieved outputs of Search-Adaptor. Note that the ground truths are included in the top-2 retrieved outputs of Search-Adaptor but not in those of the baseline.
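For reference, the top-k retrieval underlying the qualitative examples above can be sketched as follows. This is a minimal NumPy illustration under our own naming, not the Search-Adaptor code; it assumes that query and corpus embeddings have already been produced by the base embedding model (with or without the adaptor applied).

```python
import numpy as np

def top_k_retrieval(query_emb, corpus_embs, k=3):
    """Rank corpus entries for one query by cosine similarity; return top-k indices and scores."""
    q = query_emb / np.linalg.norm(query_emb)
    c = corpus_embs / np.linalg.norm(corpus_embs, axis=1, keepdims=True)
    sims = c @ q                                   # cosine similarity with every corpus entry
    top = np.argsort(sims)[::-1][:k]
    return top, sims[top]

# Toy example with random stand-in embeddings (the dimension 768 is only illustrative).
rng = np.random.default_rng(0)
corpus = rng.normal(size=(1000, 768))
query = rng.normal(size=768)
indices, scores = top_k_retrieval(query, corpus, k=3)
```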
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12248–12267 August 11-16, 2024 ©2024 Association for Computational Linguistics Back to Basics: Revisiting REINFORCE-Style Optimization for Learning from Human Feedback in LLMs Arash Ahmadian1,2, Chris Cremer2 Matthias Gallé2, Marzieh Fadee1, Julia Kreutzer1, Olivier Pietquin2, Ahmet Üstün1, Sara Hooker1 1Cohere For AI, 2Cohere {arash,olivier,ahmet,sarahooker}@cohere.com Abstract AI alignment in the shape of Reinforcement Learning from Human Feedback (RLHF) is increasingly treated as a crucial ingredient for high performance large language models. PROXIMAL POLICY OPTIMIZATION (PPO) has been positioned by recent literature as the canonical method for the RL part of RLHF. However, it involves both high computational cost and sensitive hyperparameter tuning. We posit that most of the motivational principles that led to the development of PPO are less of a practical concern in RLHF and advocate for a less computationally expensive method that preserves and even increases performance. We revisit the formulation of alignment from human preferences in the context of RL. Keeping simplicity as a guiding principle, we show that many components of PPO are unnecessary in an RLHF context and that far simpler REINFORCE-style optimization variants outperform both PPO and newly proposed “RLfree” methods such as DPO and RAFT. Our work suggests that careful adaptation to LLMs alignment characteristics enables benefiting from online RL optimization at low cost. 1 Introduction State-of-art Large Language Models (LLMs) are typically pre-trained on tremendous amounts of text (Brown et al., 2020; OpenAI et al., 2023; Anil et al., 2023; Touvron et al., 2023a,b; Üstün et al., 2024) spanning trillions of tokens. These training corpora often contain many complex preferences, relations, and intentions that may not all be desirable for an LLM to exhibit. A question of great interest to both the research and wider practitioner community is how to align these models to human preferences? Despite being the focus of considerable research effort (Khalifa et al., 2021; Ouyang et al., 2022; Bai et al., 2022b; Lee et al., 2023; Tunstall et al., 2023), there is a lack of consensus regarding the optimal approach to achieve this goal. Reinforcement Learning from Human Feedback (RLHF), one of the most widely regarded alignment approaches, directly borrows from traditional RL literature and uses techniques such as PROXIMAL POLICY OPTIMIZATION (PPO) to maximize the reward score produced by a reward model that is typically trained as a binary classifier on pairs of completions labeled by human annotators. While PPO has become a canonical approach cemented in popularity through its usage in the seminal literature on RLHF (Stiennon et al., 2020; Nakano et al., 2022; Bai et al., 2022b), getting PPO to work in practice is non-trivial for non-RL specialists and comes with known issues: 1. Computational Cost: PPO typically requires loading up to 4 models simultaneously: the generator, the reference (for KL estimation), the critic, and the reward model, where the training of the generative and critic models are interleaved (Schulman et al., 2017). This challenge is further exacerbated by the size of modern LLMs, ranging in the billions of parameters (OpenAI et al., 2023; Stiennon et al., 2020; Touvron et al., 2023a). 2. 
Optimization challenges: the unstable and sensitive nature of online RL optimization and the relative algorithmic complexity of PPO require niche expertise to tune it well (Engstrom et al., 2020).
Recent works propose "RL-free" methods such as DPO (Rafailov et al., 2023), IPO (Azar et al., 2023) or iterative fine-tuning approaches to LLM preference training (Yuan et al., 2023; Zhao et al., 2023; Dong et al., 2023). However, these works fail to question whether a simpler solution within an RL paradigm exists. Instead, all these approaches attempt to answer this question by stripping all RL components from RLHF and the difficulties that come with them (Rafailov et al., 2023; Zhao et al., 2023; Yuan et al., 2023; Liu et al., 2023; Dong et al., 2023; Azar et al., 2023; Choi et al., 2024). Iterative fine-tuning techniques rely solely on a powerful reward model to identify a subset of samples to train on, while DPO and IPO avoid both reinforcement learning and training a separate reward model by directly learning from human feedback.
In contrast to these approaches, we remain in the RL paradigm, but instead return to basics. The core question we seek to explore in this work is: can we avoid the computational and optimization complexity of PPO while preserving performance? We isolate several key differences between traditional Deep-RL settings, which originally motivated PPO, and typical human-preference learning settings for LLMs. We note that PPO, as an approach, emphasizes stability across iterations, aiming to train an effective policy with the premise of small, stable updates. PPO was designed for a regime where off-policy gradient updates are large enough to introduce instability. This regime dominates traditional Deep-RL benchmarks (Engstrom et al., 2020; Schulman et al., 2017). However, in this work, we posit that the setting of RLHF, which involves fine-tuning a pre-trained LLM, lacks these characteristics.
In contrast to traditional Deep-RL settings, the initialization of the policy, in the form of a pre-trained and supervised fine-tuned (SFT) model, is far from a random parameterization. While the conceivable search space is enormous, due to the pretraining and SFT stages only a far smaller subset of tokens is likely to be generated, as the probability mass is concentrated on these few tokens. Thus, while traditional Deep-RL settings require strong regularization to reduce the high variance of the gradient estimators, we observe empirically that this is less of a practical concern in RLHF, and we motivate a less computationally expensive method that preserves robustness (Wu et al., 2018; Kreutzer et al., 2021).
Furthermore, we revisit how learning from human preferences is formulated in the context of RL, where generating each token is modeled as an action and each partial sequence, starting with the prompt, is seen as a state. In practice, this modeling assumption of the PPO method is often voided. We argue and show that the modeling of partial sequences is unnecessary in this setting, where rewards are only attributed to full generations and there are no true rewards for any intermediary tokens in the generation. Thus, it is more appropriate and efficient to model the entire generation as a single action, with the initial state determined by the prompt.
Given these observations, while keeping simplicity as a guiding principle, we explore the use of the REINFORCE estimator (Williams, 1992) and its multi-sample extension REINFORCE LEAVE-ONE-OUT (RLOO) (Kool et al., 2019) to optimize the sequence-level objective. We break apart PPO and show that the most basic policy gradient algorithm, Vanilla Policy Gradient REINFORCE, consistently outperforms PPO. PPO is unnecessarily complicated for a pre-trained LLM environment. Unlike PPO, we can use REINFORCE to directly optimize the full trajectory (sequence) return coupled with unbiased baselines, whereas actor-critic algorithms (Sutton et al., 1999), such as PPO, bootstrap off intermediary state value-functions to reduce variance at the cost of introducing bias into the estimator. We arrive at consistent results across models including Llama (Touvron et al., 2023a) and Pythia (Biderman et al., 2023) and datasets such as Anthropic Helpful & Harmless (Bai et al., 2022a) and TL;DR Summarize (Stiennon et al., 2020):
1. PPO is not the right tool for doing RL in RLHF. We break apart PPO and show that the most "basic" policy gradient algorithm, Vanilla Policy Gradient REINFORCE (Sutton and Barto, 2020), consistently outperforms PPO by 3.2% to 20.3% in terms of win-rate, across all dataset and base model pairings.
2. RLOO outperforms key baselines. Built on top of REINFORCE, RLOO enables using multiple online samples, and we empirically show that it consistently outperforms baselines such as PPO, DPO (Rafailov et al., 2023), and RAFT (Dong et al., 2023) across all datasets and models. We show that RLOO makes better use of online samples than RAFT while presenting higher robustness to noise and to the degree of KL penalty.
3. Modeling partial completions is not necessary. We demonstrate that modeling partial sequences is an unnecessary undertaking for LLM preference training. Instead, modeling the full generations preserves performance while reducing complexity in the RL stage and significantly accelerating learning.
4. RLOO is relatively robust to noise and KL penalty sensitivity. We also accompany our results with a multi-dimensional analysis concerning language fluency, diversity, and robustness to noise. We showcase RLOO's robustness to noise and to the degree of KL penalty compared to RAFT.

[Figure 1: Training reward curves for PPO. Left: Higher variance reduction at the cost of bias (low λ) performs worse than forgoing variance reduction but introducing less bias (high λ). λ = 1.0, which has the highest variance but no bias and makes use of the full-trajectory reward in each update, performs the best. Right: PPO is unnecessarily complicated. Removing various augmentations of PPO from the loss does not degrade performance.]

2 Background

The original RLHF pipeline for LLMs proposed in Ziegler et al. (2020) consists of three stages:

(1) SFT Stage: A pre-trained LM is instruction-tuned using a dataset consisting of a given instruction prompt and (typically) a human-written completion. The LM/policy is trained with a cross-entropy loss over the completion only. Often, the SFT model, denoted as π_sft, is used to initialize both the reward model and the RLHF policy.

(2) Reward Modeling Stage: A reward model r_ϕ is trained on human-annotated preference pairs (x, y+, y−), where y+ is preferred over y−, with the loss

L_{RM} = -\log \sigma\big(r_\phi(x, y^+) - r_\phi(x, y^-)\big)   (1)

where σ denotes the logistic function.
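A minimal PyTorch sketch of the pairwise loss in Eq. 1 is given below (illustrative only; `reward_model` is an assumed callable that maps a batch of prompt–completion pairs to scalar scores):

```python
import torch.nn.functional as F

def reward_model_loss(reward_model, prompts, chosen, rejected):
    """Bradley-Terry style objective of Eq. 1: -log sigmoid(r(x, y+) - r(x, y-))."""
    r_pos = reward_model(prompts, chosen)     # scalar score per (x, y+) pair
    r_neg = reward_model(prompts, rejected)   # scalar score per (x, y-) pair
    return -F.logsigmoid(r_pos - r_neg).mean()
```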
(3) RL Stage: In this stage, the reward model is used to provide online feedback in the optimization of the policy with the following objective:

\max_{\pi_\theta} \; \mathbb{E}_{x \sim D,\, y \sim \pi_\theta(\cdot|x)}\big[\, r_\phi(x, y) - \beta\, p_{KL} \,\big]   (2)

with

p_{KL} = D_{KL}\big(\pi_\theta(\cdot|x) \,\|\, \pi_{ref}(\cdot|x)\big)   (3)

where β is meant to control the distance from the initial policy π_ref during the optimization of r_ϕ(x, y), as proposed in Stiennon et al. (2022). The KL penalty p_KL is crucial, as penalty-free optimization of the reward model leads to degradation in the coherence of the model. Optimizing this objective is equivalent to maximizing the following KL-shaped reward in expectation:

R(x, y) = r_\phi(x, y) - \beta \log \frac{\pi_\theta(y|x)}{\pi_{ref}(y|x)}   (4)

While reinforcement learning approaches share the components above, techniques differ in the formulation of the reward. To understand these differences, we introduce PPO and distinct alternatives such as REINFORCE and REINFORCE Leave-One-Out in the following sections.

2.1 PPO

When using PPO in the RL stage, the initial state is determined by the prompt, each generated token is modeled as an action, and partial sequences are seen as states, with the discount factor γ ∈ [0, 1] set to 1. In this framework, only the generation of the <EOS> token carries the reward output by the reward model, combined with the KL penalty, while for all other tokens in the vocabulary only the KL component is non-zero:

R(x, y) = \sum_{t=1}^{T} R_t(x, y_t)   (5)

where y_t denotes the t-th token of y, T the number of tokens in the trajectory, and R_t the correspondingly shaped reward. In practice, the following token-level clipped objective is used in PPO:

\min\Big( f(y_t|s_t)\, \hat{A}_\lambda(y_t, s_t),\;\; \mathrm{clip}_{1-\epsilon}^{1+\epsilon}\big(f(y_t|s_t)\big)\, \hat{A}_\lambda(y_t, s_t) \Big)   (6)

with

f(y_t|s_t) = \frac{\pi_\theta(y_t|s_t)}{\pi_{old}(y_t|s_t)},

where s_t = {y_{<t}, x} represents the state, i.e. the context at generation step t, composed of the history of generated tokens y_{<t} and the given prompt x; π_old is an older policy (not the same as π_ref); \hat{A}(y_t, s_t) is the estimated advantage function; and ϵ is the clipping ratio. The advantage \hat{A}(y_t, s_t) of generating token (action) y_t given the partial completion (state) at token t−1 of the generation is estimated using Generalized Advantage Estimation (GAE) (Schulman et al., 2018).

2.2 REINFORCE

Given that in LLM applications r(x, y) is only obtained at the end of the full sequence, it may be more appropriate to model the entire generation as a single action rather than each token. Although it has not been explored in the context of LLM alignment, modeling the full completion as a single action, as in the bandit formulation, allows using the REINFORCE estimator (Kreutzer et al., 2017; Nguyen et al., 2017a; Williams, 1992). This allows for back-propagating through the discrete action (generation) space and directly optimizing the KL-shaped reward objective for the entire sequence:

\mathbb{E}_{x \sim D,\, y \sim \pi_\theta(\cdot|x)}\big[ R(y, x)\, \nabla_\theta \log \pi_\theta(y|x) \big]   (7)

To improve learning, one can reduce the variance of the estimator in Eq. 7, while keeping it unbiased, by subtracting a baseline b that has high covariance with the stochastic gradient estimate of Eq. 7 (Williams, 1992; Mnih and Gregor, 2014):

\mathbb{E}_{x \sim D,\, y \sim \pi_\theta(\cdot|x)}\big[ (R(y, x) - b)\, \nabla_\theta \log \pi_\theta(y|x) \big]   (8)

A strong parameter-free choice for the baseline is the moving average of all rewards throughout training (Williams, 1992):

b_{MA} = \frac{1}{S} \sum_{s} R(x_s, y_s)   (9)

where S is the number of training steps and (x_s, y_s) is the prompt–completion pair at step s.
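A minimal sketch of the resulting single-sample REINFORCE update with the KL-shaped reward of Eq. 4 and the moving-average baseline of Eq. 9 is shown below (PyTorch-style, with tensor names of our choosing rather than the authors' implementation). Here `seq_logprob` and `seq_logprob_ref` are the summed token log-probabilities of the sampled completion under the current policy and the frozen reference policy, respectively.

```python
import torch

class MovingAverageBaseline:
    """Running mean of all rewards seen during training (Eq. 9)."""
    def __init__(self):
        self.total, self.count = 0.0, 0

    def update(self, reward):
        self.total += float(reward)
        self.count += 1
        return self.total / self.count

def reinforce_loss(seq_logprob, seq_logprob_ref, reward, beta, baseline):
    """Sequence-level REINFORCE surrogate whose gradient matches the estimator in Eq. 8."""
    # KL-shaped reward of Eq. 4: r(x, y) - beta * log(pi_theta(y|x) / pi_ref(y|x)), treated as a constant.
    shaped_reward = reward - beta * (seq_logprob.detach() - seq_logprob_ref)
    b = baseline.update(shaped_reward)
    # Minimizing -(R - b) * log pi_theta(y|x) ascends the policy-gradient direction of Eq. 8.
    return -(shaped_reward - b) * seq_logprob
```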
2.3 REINFORCE Leave-One-Out (RLOO)

The baseline in Eq. 9 is simple to implement and computationally cheap. However, it can be improved upon if we have access to multiple online samples, which can be used for further unbiased variance reduction: (1) the reward for each sample can serve all other samples as a baseline; (2) policy updates can be done on an average of the gradient estimates over the samples, resulting in a variance-reduced multi-sample Monte-Carlo (MC) estimate. This is the intuition behind the REINFORCE Leave-One-Out (RLOO) estimator proposed by Kool et al. (2019)1:

\frac{1}{k} \sum_{i=1}^{k} \Big[ R(y^{(i)}, x) - \frac{1}{k-1} \sum_{j \neq i} R(y^{(j)}, x) \Big] \nabla \log \pi(y^{(i)}|x), \quad \text{for } y^{(1)}, \ldots, y^{(k)} \overset{i.i.d.}{\sim} \pi_\theta(\cdot|x)

RLOO_k considers each y^{(i)} individually and uses the remaining k−1 samples to create an unbiased estimate of the expected return for the prompt, akin to a parameter-free value function, but estimated at each training step. This is a much more effective baseline (as our experiments show) than b_MA, since it is created on-the-fly for each sample and at each training step, but it comes at the cost of increased sampling time during training.

1 In collaboration with the Huggingface team, we provide an open-source implementation of RLOO in the TRL (von Werra et al., 2020) library: https://github.com/huggingface/trl
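The RLOO estimator above can be sketched as follows for a single prompt with k sampled completions (a minimal PyTorch illustration with our own variable names; an official open-source implementation is available in TRL, as noted in the footnote):

```python
import torch

def rloo_loss(seq_logprobs, rewards):
    """REINFORCE Leave-One-Out surrogate for one prompt.

    seq_logprobs: shape (k,), log pi_theta(y^(i) | x) for each of the k sampled completions.
    rewards:      shape (k,), KL-shaped rewards R(y^(i), x), treated as constants (k >= 2).
    """
    k = rewards.shape[0]
    # Leave-one-out baseline for sample i: mean reward of the other k - 1 samples.
    baseline = (rewards.sum() - rewards) / (k - 1)
    advantages = rewards - baseline
    # Average the per-sample REINFORCE terms: a variance-reduced multi-sample MC estimate.
    return -(advantages.detach() * seq_logprobs).mean()
```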
2.4 Alternatives to RL in Preference Training

Iterative Fine-tuning: Iterative fine-tuning methods use the trained reward model to rank completions of online or offline sampled prompts and then iteratively fine-tune the policy on a selected subset (Gulcehre et al., 2023; Dong et al., 2023). We benchmark Reward rAnked FineTuning (RAFT; Dong et al., 2023), in which a simple cross-entropy loss is applied to the best-ranked completion out of k online samples, based on R(x, y) or r(x, y). We note that RAFT does not make full use of all samples because it only optimizes using the filtered top-ranked samples. In contrast, RLOO fully leverages all samples by constructing a baseline and a multi-sample MC estimate of the policy gradient.

Direct Preference Optimization (DPO): Unlike other methods, DPO (Rafailov et al., 2023) skips the reward modeling stage in the traditional RLHF pipeline and uses preference pairs to directly optimize the policy with the following loss:

-\log \sigma\Big( \beta \log \frac{\pi_\theta(y^+|x)}{\pi_{ref}(y^+|x)} - \beta \log \frac{\pi_\theta(y^-|x)}{\pi_{ref}(y^-|x)} \Big)

3 From PPO to REINFORCE

We scrutinize individual components of PPO that we consider not an ideal fit for RLHF. We explain their theoretical origin, motivate our position with the practical conditions of LLM RLHF, and provide empirical support from preliminary experiments.

3.1 Revisiting the need for low-variance estimators

Actor-critic algorithms, such as PPO, were motivated in formulation by the high variance observed in traditional RL settings. PPO leverages lower-variance estimators of the total trajectory return to improve learning. These estimators are constructed by bootstrapping off a state-value function (Sutton et al., 1999; Schulman et al., 2018; Sutton and Barto, 2020). While bootstrapping reduces variance, the trade-off is the introduction of bias, which risks optimizing for biased rewards. In contrast, REINFORCE uses an unbiased Monte-Carlo estimator of the trajectory return that can have high variance in theory, especially if it is approximated with only a single sample, which is why it is not frequently preferred in traditional Deep-RL environments.

Recent work has offered a plethora of evidence that REINFORCE suffers from high variance and fails in the presence of large action spaces like NLP (Ranzato et al., 2016; Bahdanau et al., 2017; Ding and Soricut, 2017; Ammanabrolu and Hausknecht, 2020; Ammanabrolu et al., 2022; Martin et al., 2022; Korbak et al., 2022). However, we note that these findings were based on scenarios with poor conditioning, training from random or weak initializations, as opposed to warm-starting from a strong pre-trained model. Here, we question whether this empirical evidence holds for RLHF.

We posit that this is not a practical concern when fine-tuning LLMs, due to the extremely strong initialization of the policy (a pre-trained LLM). In this setting, strong initialization coupled with prompt conditioning leads to a concentration of probability mass on a few tokens at each generation step, even though the number of possible actions is in theory enormous (refer to Appendix B for further discussion on the effect of conditioning). The optimization landscape is far less likely to present problems like destructively large and high-variance gradient updates. Thus, attempting to reduce variance further at the cost of introducing bias is not worth it.

Empirical support: To validate this hypothesis, we vary the weight placed upon variance minimization and bias introduction. In the formulation of PPO in Sect. 2.1, the advantage estimator GAE (Schulman et al., 2018) is relied upon to trade off bias and variance when estimating the true advantage function. GAE introduces a hyper-parameter λ ∈ [0, 1], which balances the bias and variance of the constructed estimator. The closer λ is to 1, the higher the observed variance; the optimal choice of λ depends on the environment. In a highly stochastic environment, minimizing variance at the cost of bias is a worthy trade-off. However, given a stable environment where variance is already low, the introduction of bias is needless. At the extreme of λ = 1, which imposes minimal bias at the trade-off of variance, the advantage term reduces to the return estimator used in Vanilla Policy Gradient (PG) REINFORCE, which directly builds on the REINFORCE estimator by optimizing the trajectory return starting from each token in the generation:

\sum_{i=t}^{T} \gamma^{T-i-1} R_i(x, y_i) - b_\psi(s_t)   (10)

where b_ψ(s_t) is a learned baseline at state s_t, akin to how a value network is learned in the traditional RL setting, trained with the standard MLE loss \frac{1}{2}\big(\sum_{i=t}^{T} \gamma^{T-i-1} R_i(x, y_i) - b_\psi(s_t)\big)^2. Note that the key distinguishing factor between Vanilla PG and REINFORCE, as referred to in this work, is that Vanilla PG uses the REINFORCE estimator on the trajectory return starting from the context formed by the prompt and a partial completion, whereas the REINFORCE estimator described in Section 2.2 is applied to the full trajectory return. We will return to this distinction in Section 5.1, where we evaluate whether modeling partial completions is necessary in RLHF.

In Figure 1, we present the reward achieved by PPO with GAE for different values of λ: two variants that impose minimal bias but invite high variance (\hat{A}_{λ=1.0}, i.e., Vanilla PG introduced above, and \hat{A}_{λ=0.95}) and two variants that over-index on minimizing variance at the cost of bias (\hat{A}_{λ=0.0} and \hat{A}_{λ=0.5}). As Figure 1 shows, the most extreme variant, Vanilla PG (unbiased, λ = 1.0), performs the best, given that it introduces no bias at the risk of high variance. We observe a monotonically decreasing reward with decreasing λ. This supports our hypothesis that reducing variance at the cost of bias in an RLHF setting needlessly introduces bias, given the stable default properties of the environment.
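For completeness, the bias–variance trade-off controlled by λ can be seen in the standard GAE recursion of Schulman et al. (2018), sketched generically below (not the training code used in our experiments). With γ = 1 and λ = 1 it reduces to the full Monte-Carlo return from each token minus the value baseline, i.e., the Vanilla PG estimator; λ = 0 keeps only the one-step TD residual, trading variance for bias.

```python
import torch

def gae_advantages(rewards, values, gamma=1.0, lam=0.95):
    """Generalized Advantage Estimation over a single trajectory.

    rewards: shape (T,), per-token shaped rewards R_t.
    values:  shape (T + 1,), value estimates V(s_t), with values[T] for the terminal state.
    """
    T = rewards.shape[0]
    advantages = torch.zeros(T)
    gae = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]  # one-step TD residual
        gae = delta + gamma * lam * gae                         # exponentially weighted sum of residuals
        advantages[t] = gae
    return advantages
```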
3.2 Clipping is Rarely Necessary in RLHF

Next, we turn to the clipping ratio ϵ (see Eq. 6), which is used to prevent large policy updates when π_θ/π_old deviates far from 1, i.e., to prevent updates that are too far off from the current policy (Schulman et al., 2017). In Figure 1, we compare reward curves for independent PPO training runs with and without clipping. Note that we also turn off clipping for the value network in these experiments, as it has been observed to have a noticeable impact on learning in traditional Deep-RL environments (Engstrom et al., 2020). The removal of these components does not impact learning meaningfully. We empirically found in our RLHF setting that the loss is actually clipped on average less than 5% of the time per batch throughout training, across all dataset and base-model pairings, which indicates that the learning regime is close to being "on-policy", with policies varying slowly from one iteration to the next. To further validate this, we completely turn off clipping and then also remove the ratio π_θ/π_old, while λ = 1, which reduces the PPO loss to that of Vanilla PG. If anything, the removal of clipping gives a slight boost in performance, validating our hypothesis that large off-policy updates in our optimization regime are rare and do not have the catastrophic effects on learning that they do in traditional Deep-RL.

3.3 Modeling Partial Completions is Not Necessary

As described in Sect. 2, PPO models each token as an action, whereas REINFORCE models the entire generation as a single action. In practice, in LLM RLHF, r(x, y) is only attributed to the <EOS> token; for all other tokens, only \log \frac{\pi(y_t|s_t)}{\pi_{ref}(y_t|s_t)} composes R_t(x, y), which is not meaningful. From a pure RL point of view, the environment dynamics are fully deterministic, i.e., P_D(\{y_{<t+1}, x\} \mid s_t, y_t) = 1, meaning that our environment (context) changes deterministically based on the newly predicted token/action. Hence, the problem can be reduced to a bandit problem, where the Markov Decision Process (MDP) consists of only the initial state, as determined by the prompt, and the terminal state, which is always reached after the generation (Kreutzer et al., 2017; Nguyen et al., 2017b). Note that modeling the entire generation as a single action is done explicitly by REINFORCE, but it is also done implicitly by iterative fine-tuning methods, which generate the entire completion before filtering using a reward model. In Section 5.1 we explicitly compare REINFORCE and RLOO, which both model the full trajectory return, to PPO and Vanilla PG, which both model partial completions. We ask: is modeling the entire generation as a single action sufficient to achieve similar or better performance in RLHF?
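To make the clipping discussion of Section 3.2 concrete, the sketch below shows a generic PyTorch rendering of the token-level clipped surrogate of Eq. 6 together with the clip-fraction diagnostic we refer to (variable names and shapes are illustrative, not the paper's implementation):

```python
import torch

def ppo_clip_loss(logprobs, old_logprobs, advantages, eps=0.2):
    """Token-level PPO clipped surrogate (Eq. 6) and the fraction of tokens where clipping is active."""
    ratio = torch.exp(logprobs - old_logprobs)               # pi_theta(y_t|s_t) / pi_old(y_t|s_t)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    loss = -torch.min(unclipped, clipped).mean()             # minimize the negative surrogate
    clip_fraction = ((ratio < 1.0 - eps) | (ratio > 1.0 + eps)).float().mean()
    return loss, clip_fraction
```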
4 Experimental Setup

4.1 Training Details

Datasets: We report results on the TL;DR Summarize (Stiennon et al., 2020) and Anthropic Helpful and Harmless Dialogue (Bai et al., 2022a) datasets. The training split of the TL;DR Summarize dataset2 contains 116k human-written instructions and 93k human-annotated preference pairs. The preprocessed Anthropic-HH dataset3 contains 112k training preference pairs.
2 https://github.com/openai/summarize-from-feedback
3 https://huggingface.co/datasets/Dahoas/full-hh-rlhf

Models: For both datasets, we use Pythia-6.9B (Biderman et al., 2023) as the pre-trained base model. To ablate the effect of pre-trained model quality on human preference optimization, we also experiment with Llama-7B coupled with the Anthropic-HH dataset. To ensure a fair comparison across all methods, we use a context length of 512 tokens during both supervised fine-tuning and reward model training. We initialize both the reward model and the policy with the corresponding SFT checkpoint, unless noted otherwise.

Experimental Details: For the TL;DR Summarize dataset, we use the dedicated SFT split. Since the original Anthropic-HH dataset does not include a separate SFT split, we use the prompts and the preferred responses from the binary comparisons during the SFT stage, similar to prior work (Yuan et al., 2023; Dong et al., 2023; Rafailov et al., 2023). In the preference training stage, we use the same prompts as in the SFT stage to generate completions. Further details on the experimental setup and hyper-parameters are given in Appendix D.

4.2 Evaluation

Optimization Quality: For all online methods (i.e., all methods except DPO), to measure how well they optimize the intrinsic objective, we report average rewards (using the training RM) on 1000 samples from the test set. To measure how well each method optimizes the extrinsic objective of aligning the models to human preferences, we report simulated win-rates in accordance with the AlpacaFarm framework (Dubois et al., 2024), where we use GPT-4 as a proxy for human evaluation.

[Figure 2: Comparison of test rewards for various optimization methods throughout training. RLOO and REINFORCE with a moving average baseline consistently achieve higher test rewards than RAFT and PPO throughout training. REINFORCE w/ B. refers to REINFORCE with a moving average reward baseline as in Equation 8. Panels: Anthropic-HH (Pythia), TL;DR Summarize, and Anthropic-HH (Llama); each plots reward against training steps for RLOO k=4, RLOO k=2, RAFT k=4, RAFT k=2, REINFORCE w/ baseline, PPO, and Vanilla PG.]

Method       TL;DR   HH (Pythia)   HH (Llama)
RLOO (k=4)   77.9    43.7          64.1
RAFT (k=4)   73.2    42.1          63.3
RLOO (k=2)   74.2    47.6          62.2
RAFT (k=2)   72.1    37.7          58.4
RF. w/ B.    70.7    37.9          55.3
Vanilla PG   70.4    36.4          52.3
PPO          67.6    29.2          32.0
DPO          66.6    39.0          61.9
Table 1: Final win-rates on generations for held-out test prompts in the Anthropic-HH and TL;DR Summarize datasets. Metrics are reported for the checkpoint with the highest test reward.

Alignment Tax: RLHF fine-tuning is often associated with a drop in diversity and language fluency, which is referred to as an alignment tax (Askell et al., 2021; Kirk et al., 2024). Hence, we also report metrics that serve as proxies for fluency and diversity, similar to Dong et al. (2023). To measure fluency, we report perplexity measured on the preferred completions from the test set, similar to Dong et al. (2023). Finally, we measure diversity coupled with length by using the average n-gram diversity (Li et al., 2016).
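The n-gram diversity proxy referred to above can be sketched as the standard distinct-n statistic (a minimal implementation of ours, assuming whitespace tokenization):

```python
def distinct_n(texts, n=2):
    """Fraction of unique n-grams among all n-grams across a set of generations."""
    unique, total = set(), 0
    for text in texts:
        tokens = text.split()
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        unique.update(ngrams)
        total += len(ngrams)
    return len(unique) / total if total > 0 else 0.0

# Example: Diversity-1 and Diversity-2 over a handful of model outputs.
samples = ["the cat sat on the mat", "the dog sat on the rug"]
d1, d2 = distinct_n(samples, n=1), distinct_n(samples, n=2)
```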
5 Results and Discussion

5.1 Reward Optimization

The objective of RLOO, REINFORCE with baseline, RAFT, PPO, and Vanilla PG is to maximize the reward score, hence we compare how successfully each method optimizes it. Fig. 2 shows the test reward scores plotted during training for all methods. On each dataset, we use the same reward model for all methods, so their test reward scores are directly comparable.

Modeling Partial Completions vs. Full Generations: In Fig. 2, we observe that methods that do not model partial completions, such as REINFORCE with baseline, RLOO, and RAFT, consistently outperform Vanilla PG and PPO, where each token is modeled as an action (i.e., partial completions are modeled). In addition to their higher performance in reward optimization, these methods are more efficient than Vanilla PG and PPO, since they do not require training a learned baseline or a value network as in Vanilla PG and PPO, respectively. This suggests that modeling partial sequences is unnecessary.

Sampling Efficiency: Given the same sampling budget k, RLOO consistently outperforms RAFT throughout training in terms of reward scores. Noticeably, even with a smaller sampling budget, RLOO_{k=2} either closely matches or outperforms RAFT_{k=4} across all datasets and models. In this setting, RLOO uses only half the online-sample budget given the same step count. This confirms that RLOO leads to better optimization by using all the samples generated, unlike RAFT, where only the top-ranked sample is used for fine-tuning. Showcasing the same finding, Fig. 5 in Appendix A plots the rewards with respect to the number of samples generated during training, regardless of the value of k.

[Figure 3: Sensitivity to KL weight. Training curves for r(x, y) (left) and D_KL(π_θ || π_ref) (right) with k = 2, for RLOO and RAFT at β = 1.0, 0.5, 0.25, and 0.1. Under higher KL regularization, RAFT is not only worse than RLOO at optimizing reward, but also deviates further from the reference policy. Curves are exponentially smoothed for better readability.]

Method       Len.    PPL    D-1    D-2    R-Var.
RLOO (k=4)   60.6    27.6   0.10   0.43   3.1
RAFT (k=4)   62.4    30.1   0.10   0.43   3.2
RLOO (k=2)   58.6    29.2   0.11   0.44   3.0
RAFT (k=2)   52.8    28.9   0.12   0.47   3.1
RF. w/ B.    47.2    27.2   0.13   0.50   2.7
V. PG        39.1    39.0   0.15   0.54   3.7
PPO          16.5    40.4   0.34   0.60   2.3
DPO          104.4   33.8   0.08   0.39   N/A
Table 2: Language fluency and diversity metrics on the Anthropic-HH dataset (generation length, perplexity, Diversity-1, Diversity-2, and reward variance).

5.2 Simulated Win-Rates

Table 1 presents the win-rates against the original completions in TL;DR Summarize and Anthropic-HH for each method. Here, we also include DPO.

Modeling partial completions is not necessary: Recall that the key distinguishing factor between Vanilla PG and REINFORCE, as referred to in this work, is that while Vanilla PG treats each token as an action, REINFORCE operates on the entire generation. As seen in Table 1, REINFORCE with baseline is on par with Vanilla PG on both the TL;DR (70.7 vs. 70.4) and HH (37.9 vs. 36.4) datasets when using Pythia-based models. Moreover, REINFORCE with baseline outperforms Vanilla PG on the HH dataset with a Llama-based model, achieving a higher win-rate (55.3 vs. 52.3). This confirms the effectiveness of modeling only the entire generation and not partial completions, even without using multiple samples during RLHF.

Win-rates are in line with test reward scores: RLOO with k = 4 achieves the highest win-rates, outperforming PPO by 10.3, 14.5, and 32.1 points for TL;DR, HH (Pythia), and HH (Llama), respectively. As the only exception, RLOO achieves its highest win-rate with k = 2 on the HH (Pythia) dataset.

RLOO is more sample efficient than RAFT: Comparing RLOO with RAFT under the same sampling budget k, RLOO consistently outperforms RAFT across all datasets and models.
When averaged across the three dataset and model pairings, RLOO achieves win-rates of 61.3 and 61.9 for k = 2 and k = 4, respectively, while RAFT scores 56.1 and 59.5. Notably, RLOO exhibits its largest increase in win-rate over RAFT, up to 9.9 points, on the HH dataset with k = 2 and Pythia-based models (second column in Table 1).

5.2.1 Alignment Tax

Table 2 shows various intrinsic evaluation metrics, including perplexity and diversity scores, of Llama-based models on the Anthropic-HH dataset.

Length of Generations: Noticeably, the DPO-trained model tends to be over-verbose (the longest generation length, 104 tokens on average), while the PPO-trained model produces short generations (the shortest on average, with 16 tokens). We provide example responses in Appendix F.

Perplexity and Diversity: As seen in Table 2, perplexity (PPL) scores are relatively close among RLOO, RAFT, and REINFORCE with baseline, and all three methods achieve significantly lower perplexity than PPO and Vanilla PG. In terms of diversity, Diversity-1 scores are similar across RLOO, RAFT, REINFORCE with baseline, and Vanilla PG. Diversity-2 scores tend to decrease slightly for the methods with higher reward optimization (Askell et al., 2021). This is unsurprising given the significant difference in their generation length compared to other methods. Overall, RLOO and REINFORCE with a baseline maintain fluency and diversity in generations in comparison with other methods, while achieving higher reward scores and win-rates.

Reward variance: Lower reward variance is desirable for applications such as safety and harmlessness, where there is a high risk associated with generating a low-reward sample. Results in Table 2 show that, for the same k value, RLOO leads to slightly lower reward variance amongst the generations than RAFT, which is the most competitive method to RLOO in terms of reward optimization. Finally, Vanilla PG leads to the highest reward variance. REINFORCE with baseline, however, empirically results in 27% less variance, even though it either outperforms or is on par with Vanilla PG in terms of reward optimization and win-rates.

5.2.2 Robustness

As previously noted, a major drawback of RAFT is that it only optimizes on the highest-ranked sample and discards the rest of the online samples. Thus, factors that can lead to an inaccurate ranking of the best completion can also impede learning significantly. We demonstrate this fragility by showing the effects of (1) a high β for the KL term and (2) injected reward noise on RAFT in comparison to RLOO.

Mismatch from KL-penalty: In Figure 3, we show the evolution of the KL distance and the test reward curve r(x, y) throughout training for RLOO and RAFT, using k = 2, on the HH dataset with Pythia-based models. We vary the KL regularization, using β ∈ {0.25, 0.5, 1.0}. Here, a larger KL penalty in R(x, y) (higher β) potentially increases mismatches between the rankings of the k online samples. However, the choice of β often depends on multiple factors, such as the distribution of the data and the output logits of the base model, which may not allow for a low β value even with early stopping. We find that RAFT is more sensitive to higher KL regularization. In the low-regularization regime (β = 0.1), RLOO and RAFT converge to equal KL distances from the reference policy while RLOO achieves a higher reward. However, with increased regularization (β ∈ {0.25, 0.5, 1.0}), RAFT is not only worse at optimizing the reward, but also deviates further from the reference policy.

Mismatch from Reward Noise: The reward model is itself a noisy proxy of the reward signal, due to the inherently noisy nature of human preferences (Nguyen et al., 2017b; Kreutzer et al., 2018). Inspired by the literature in Bayesian deep learning on modeling aleatoric uncertainty (Kendall and Gal, 2017; Collier et al., 2021), we add noise to the rewards to simulate the effect of such noise at different magnitudes. Concretely, for each prompt, we add noise ϵ to the output logits of the binary classifier: r_σ(x, y) = r(x, y) + ϵ, where ϵ ∼ N(0, σ²). Figure 4 shows the drop in reward at different noise levels σ ∈ {1.0, 3.0, 5.0}. As expected, the unaltered training reward decreases for both RLOO and RAFT. However, the drop is far more pronounced for RAFT with σ ∈ {3.0, 5.0}. This is due to the added reward noise impacting the relative rankings and hence the training reward. In contrast, RLOO presents relatively robust reward optimization under a noisy reward signal.

[Figure 4: Sensitivity to noise. Drop in training rewards upon the addition of noise with σ ∈ {1.0, 3.0, 5.0} for RLOO and RAFT. RAFT shows a more significant drop in reward compared to RLOO. Curves are exponentially smoothed for better readability.]
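The noise-injection protocol can be sketched as below (a minimal illustration; `reward_model` and the argument names are ours):

```python
import torch

def noisy_reward(reward_model, prompt, completion, sigma=3.0):
    """Perturbed reward r_sigma(x, y) = r(x, y) + eps with eps ~ N(0, sigma^2)."""
    r = reward_model(prompt, completion)       # scalar output logit of the preference classifier
    eps = torch.randn_like(r) * sigma          # Gaussian noise with standard deviation sigma
    return r + eps
```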
6 Conclusion

At a high level, this work posits that the RLHF setting of fine-tuning an LLM starts from a strong initialization of the policy, which, coupled with further conditioning on a prompt, alleviates historical concerns about high variance and large action spaces. We support this position with empirical results, showing that techniques such as REINFORCE and RLOO, rarely used in traditional RL settings due to high variance, outperform PPO. We also show the improved robustness of RLOO relative to iterative fine-tuning methods like RAFT, achieving a best-of-both-worlds sweet spot.

7 Limitations

One limitation of our work is that we do not study reward model (RM) over-optimization, which refers to the problem where, over the optimization trajectory of the proxy reward, the model diverges from the "gold" reward objective (Gao et al., 2022). This aspect has also not yet been studied for iterative fine-tuning methods such as RAFT and deserves a dedicated study; we leave it to future work. Another limitation is that we do not explore the RLOO baseline in a single-token-action framework, where partial sequences are modeled. In this work, we show that modeling partial sequences is an unnecessary undertaking in the RLHF context. However, understanding how RLOO performs in such a formulation could potentially be useful in other contexts. Finally, we limit our work to rewards derived from human preferences and do not explore other rewards such as ROUGE, BLEU, or other metrics used in NLP.

8 Acknowledgement

We would like to thank Ivan Zhang, Phil Blunsom, Florian Strub, Max Bartolo, Bharat Venkitesh, Roger Grosse, and Keiran Paster for helpful discussions. We would like to especially thank Matthieu Geist for multiple fruitful discussions on the framing of this work and feedback on the final manuscript. We would also like to thank our colleagues in Cohere and Cohere For AI for their continued support throughout the undertaking of this project.

References

Prithviraj Ammanabrolu and Matthew Hausknecht. 2020. Graph constrained reinforcement learning for natural language action spaces.
Prithviraj Ammanabrolu, Liwei Jiang, Maarten Sap, Hannaneh Hajishirzi, and Yejin Choi. 2022.
Aligning to social norms and values in interactive narratives. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5994–6017, Seattle, United States. Association for Computational Linguistics. Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, Guy GurAri, Steven Hand, Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jeffrey Hui, Jeremy Hurwitz, Michael Isard, Abe Ittycheriah, Matthew Jagielski, Wenhao Jia, Kathleen Kenealy, Maxim Krikun, Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, YaGuang Li, Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu, Frederick Liu, Marcello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam Moussalem, Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Parrish, Marie Pellat, Martin Polacek, Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby, Ambrose Slone, Daniel Smilkov, David R. So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang, Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, and Yonghui Wu. 2023. Palm 2 technical report. Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Jackson Kernion, Kamal Ndousse, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, and Jared Kaplan. 2021. A general language assistant as a laboratory for alignment. Mohammad Gheshlaghi Azar, Mark Rowland, Bilal Piot, Daniel Guo, Daniele Calandriello, Michal Valko, and Rémi Munos. 2023. A general theoretical paradigm to understand learning from human preferences. Dzmitry Bahdanau, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. An actor-critic algorithm for sequence prediction. Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, Ben Mann, and Jared Kaplan. 2022a. Training a helpful and harmless assistant with reinforcement learning from human feedback. 
12257 Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, and Jared Kaplan. 2022b. Constitutional ai: Harmlessness from ai feedback. Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O’Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. 2023. Pythia: A suite for analyzing large language models across training and scaling. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. Eugene Choi, Arash Ahmadian, Matthieu Geist, Oilvier Pietquin, and Mohammad Gheshlaghi Azar. 2024. Self-improving robust preference optimization. Mark Collier, Basil Mustafa, Efi Kokiopoulou, Rodolphe Jenatton, and Jesse Berent. 2021. Correlated input-dependent label noise in large-scale image classification. Nan Ding and Radu Soricut. 2017. Cold-start reinforcement learning with softmax policy gradient. Advances in Neural Information Processing Systems, 30. Hanze Dong, Wei Xiong, Deepanshu Goyal, Yihan Zhang, Winnie Chow, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, and Tong Zhang. 2023. Raft: Reward ranked finetuning for generative foundation model alignment. Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2024. Alpacafarm: A simulation framework for methods that learn from human feedback. Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, and Aleksander Madry. 2020. Implementation matters in deep policy gradients: A case study on ppo and trpo. Leo Gao, John Schulman, and Jacob Hilton. 2022. Scaling laws for reward model overoptimization. Caglar Gulcehre, Tom Le Paine, Srivatsan Srinivasan, Ksenia Konyushkova, Lotte Weerts, Abhishek Sharma, Aditya Siddhant, Alex Ahern, Miaosen Wang, Chenjie Gu, et al. 2023. Reinforced selftraining (rest) for language modeling. arXiv preprint arXiv:2308.08998. Alex Kendall and Yarin Gal. 2017. What uncertainties do we need in bayesian deep learning for computer vision? Muhammad Khalifa, Hady Elsahar, and Marc Dymetman. 2021. A distributional approach to controlled text generation. Robert Kirk, Ishita Mediratta, Christoforos Nalmpantis, Jelena Luketina, Eric Hambro, Edward Grefenstette, and Roberta Raileanu. 2024. 
Understanding the effects of rlhf on llm generalisation and diversity. Wouter Kool, Herke van Hoof, and Max Welling. 2019. Buy 4 reinforce samples, get a baseline for free! In DeepRLStructPred@ICLR. Tomasz Korbak, Hady Elsahar, Germán Kruszewski, and Marc Dymetman. 2022. On reinforcement learning and distribution matching for fine-tuning language models with no catastrophic forgetting. In Advances in Neural Information Processing Systems, volume 35, pages 16203–16220. Curran Associates, Inc. Julia Kreutzer, Shahram Khadivi, Evgeny Matusov, and Stefan Riezler. 2018. Can neural machine translation be improved with user feedback? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 3 (Industry Papers), pages 92–105, New Orleans - Louisiana. Association for Computational Linguistics. Julia Kreutzer, Stefan Riezler, and Carolin Lawrence. 2021. Offline reinforcement learning from human feedback in real-world sequence-to-sequence tasks. In Proceedings of the 5th Workshop on Structured Prediction for NLP (SPNLP 2021), pages 37–43, Online. Association for Computational Linguistics. Julia Kreutzer, Artem Sokolov, and Stefan Riezler. 2017. Bandit structured prediction for neural sequence-tosequence learning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1503–1513, Vancouver, Canada. Association for Computational Linguistics. 12258 Harrison Lee, Samrat Phatale, Hassan Mansoor, Thomas Mesnard, Johan Ferret, Kellie Lu, Colton Bishop, Ethan Hall, Victor Carbune, Abhinav Rastogi, and Sushant Prakash. 2023. Rlaif: Scaling reinforcement learning from human feedback with ai feedback. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110–119, San Diego, California. Association for Computational Linguistics. Tianqi Liu, Yao Zhao, Rishabh Joshi, Misha Khalman, Mohammad Saleh, Peter J. Liu, and Jialu Liu. 2023. Statistical rejection sampling improves preference optimization. Ilya Loshchilov and Frank Hutter. 2016. Sgdr: Stochastic gradient descent with restarts. ArXiv, abs/1608.03983. Alice Martin, Guillaume Quispe, Charles Ollion, Sylvain Le Corff, Florian Strub, and Olivier Pietquin. 2022. Learning natural language generation with truncated reinforcement learning. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 12–37, Seattle, United States. Association for Computational Linguistics. Andriy Mnih and Karol Gregor. 2014. Neural variational inference and learning in belief networks. Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight, Benjamin Chess, and John Schulman. 2022. Webgpt: Browserassisted question-answering with human feedback. Khanh Nguyen, Hal Daumé III au2, and Jordan BoydGraber. 2017a. Reinforcement learning for bandit neural machine translation with simulated human feedback. Khanh Nguyen, Hal Daumé III, and Jordan BoydGraber. 2017b. 
Reinforcement learning for bandit neural machine translation with simulated human feedback. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1464–1474, Copenhagen, Denmark. Association for Computational Linguistics. OpenAI, :, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, Red Avila, Igor Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Mo Bavarian, Jeff Belgum, Irwan Bello, Jake Berdine, Gabriel Bernadett-Shapiro, Christopher Berner, Lenny Bogdonoff, Oleg Boiko, Madelaine Boyd, Anna-Luisa Brakman, Greg Brockman, Tim Brooks, Miles Brundage, Kevin Button, Trevor Cai, Rosie Campbell, Andrew Cann, Brittany Carey, Chelsea Carlson, Rory Carmichael, Brooke Chan, Che Chang, Fotis Chantzis, Derek Chen, Sully Chen, Ruby Chen, Jason Chen, Mark Chen, Ben Chess, Chester Cho, Casey Chu, Hyung Won Chung, Dave Cummings, Jeremiah Currier, Yunxing Dai, Cory Decareaux, Thomas Degry, Noah Deutsch, Damien Deville, Arka Dhar, David Dohan, Steve Dowling, Sheila Dunning, Adrien Ecoffet, Atty Eleti, Tyna Eloundou, David Farhi, Liam Fedus, Niko Felix, Simón Posada Fishman, Juston Forte, Isabella Fulford, Leo Gao, Elie Georges, Christian Gibson, Vik Goel, Tarun Gogineni, Gabriel Goh, Rapha Gontijo-Lopes, Jonathan Gordon, Morgan Grafstein, Scott Gray, Ryan Greene, Joshua Gross, Shixiang Shane Gu, Yufei Guo, Chris Hallacy, Jesse Han, Jeff Harris, Yuchen He, Mike Heaton, Johannes Heidecke, Chris Hesse, Alan Hickey, Wade Hickey, Peter Hoeschele, Brandon Houghton, Kenny Hsu, Shengli Hu, Xin Hu, Joost Huizinga, Shantanu Jain, Shawn Jain, Joanne Jang, Angela Jiang, Roger Jiang, Haozhun Jin, Denny Jin, Shino Jomoto, Billie Jonn, Heewoo Jun, Tomer Kaftan, Łukasz Kaiser, Ali Kamali, Ingmar Kanitscheider, Nitish Shirish Keskar, Tabarak Khan, Logan Kilpatrick, Jong Wook Kim, Christina Kim, Yongjik Kim, Hendrik Kirchner, Jamie Kiros, Matt Knight, Daniel Kokotajlo, Łukasz Kondraciuk, Andrew Kondrich, Aris Konstantinidis, Kyle Kosic, Gretchen Krueger, Vishal Kuo, Michael Lampe, Ikai Lan, Teddy Lee, Jan Leike, Jade Leung, Daniel Levy, Chak Ming Li, Rachel Lim, Molly Lin, Stephanie Lin, Mateusz Litwin, Theresa Lopez, Ryan Lowe, Patricia Lue, Anna Makanju, Kim Malfacini, Sam Manning, Todor Markov, Yaniv Markovski, Bianca Martin, Katie Mayer, Andrew Mayne, Bob McGrew, Scott Mayer McKinney, Christine McLeavey, Paul McMillan, Jake McNeil, David Medina, Aalok Mehta, Jacob Menick, Luke Metz, Andrey Mishchenko, Pamela Mishkin, Vinnie Monaco, Evan Morikawa, Daniel Mossing, Tong Mu, Mira Murati, Oleg Murk, David Mély, Ashvin Nair, Reiichiro Nakano, Rajeev Nayak, Arvind Neelakantan, Richard Ngo, Hyeonwoo Noh, Long Ouyang, Cullen O’Keefe, Jakub Pachocki, Alex Paino, Joe Palermo, Ashley Pantuliano, Giambattista Parascandolo, Joel Parish, Emy Parparita, Alex Passos, Mikhail Pavlov, Andrew Peng, Adam Perelman, Filipe de Avila Belbute Peres, Michael Petrov, Henrique Ponde de Oliveira Pinto, Michael, Pokorny, Michelle Pokrass, Vitchyr Pong, Tolly Powell, Alethea Power, Boris Power, Elizabeth Proehl, Raul Puri, Alec Radford, Jack Rae, Aditya Ramesh, Cameron Raymond, Francis Real, Kendra Rimbach, Carl Ross, Bob Rotsted, Henri Roussez, Nick Ryder, Mario Saltarelli, Ted Sanders, Shibani Santurkar, Girish Sastry, Heather Schmidt, David Schnurr, John Schulman, Daniel Selsam, Kyla Sheppard, Toki Sherbakov, Jessica Shieh, Sarah Shoker, Pranav Shyam, Szymon Sidor, Eric Sigler, 
Maddie Simens, Jordan Sitkin, Katarina Slama, Ian Sohl, Benjamin 12259 Sokolowsky, Yang Song, Natalie Staudacher, Felipe Petroski Such, Natalie Summers, Ilya Sutskever, Jie Tang, Nikolas Tezak, Madeleine Thompson, Phil Tillet, Amin Tootoonchian, Elizabeth Tseng, Preston Tuggle, Nick Turley, Jerry Tworek, Juan Felipe Cerón Uribe, Andrea Vallone, Arun Vijayvergiya, Chelsea Voss, Carroll Wainwright, Justin Jay Wang, Alvin Wang, Ben Wang, Jonathan Ward, Jason Wei, CJ Weinmann, Akila Welihinda, Peter Welinder, Jiayi Weng, Lilian Weng, Matt Wiethoff, Dave Willner, Clemens Winter, Samuel Wolrich, Hannah Wong, Lauren Workman, Sherwin Wu, Jeff Wu, Michael Wu, Kai Xiao, Tao Xu, Sarah Yoo, Kevin Yu, Qiming Yuan, Wojciech Zaremba, Rowan Zellers, Chong Zhang, Marvin Zhang, Shengjia Zhao, Tianhao Zheng, Juntang Zhuang, William Zhuk, and Barret Zoph. 2023. Gpt-4 technical report. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. 2018. High-dimensional continuous control using generalized advantage estimation. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul Christiano. 2020. Learning to summarize from human feedback. Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul Christiano. 2022. Learning to summarize from human feedback. Richard S Sutton and Andrew G Barto. 2020. Reinforcement learning: An introduction. MIT press. Richard S Sutton, David McAllester, Satinder Singh, and Yishay Mansour. 1999. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems, volume 12. MIT Press. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023a. Llama: Open and efficient foundation language models. 
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023b. Llama 2: Open foundation and fine-tuned chat models.
Lewis Tunstall, Edward Beeching, Nathan Lambert, Nazneen Rajani, Kashif Rasul, Younes Belkada, Shengyi Huang, Leandro von Werra, Clémentine Fourrier, Nathan Habib, Nathan Sarrazin, Omar Sanseviero, Alexander M. Rush, and Thomas Wolf. 2023. Zephyr: Direct distillation of LM alignment.
Ahmet Üstün, Viraat Aryabumi, Zheng-Xin Yong, Wei-Yin Ko, Daniel D'souza, Gbemileke Onilude, Neel Bhandari, Shivalika Singh, Hui-Lee Ooi, Amr Kayid, Freddie Vargus, Phil Blunsom, Shayne Longpre, Niklas Muennighoff, Marzieh Fadaee, Julia Kreutzer, and Sara Hooker. 2024. Aya model: An instruction finetuned open-access multilingual language model.
Leandro von Werra, Younes Belkada, Lewis Tunstall, Edward Beeching, Tristan Thrush, Nathan Lambert, and Shengyi Huang. 2020. TRL: Transformer reinforcement learning. https://github.com/huggingface/trl.
Ronald J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256.
Lijun Wu, Fei Tian, Tao Qin, Jianhuang Lai, and Tie-Yan Liu. 2018. A study of reinforcement learning for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3612–3621, Brussels, Belgium. Association for Computational Linguistics.
Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. 2023. RRHF: Rank responses to align language models with human feedback without tears.
Yao Zhao, Rishabh Joshi, Tianqi Liu, Misha Khalman, Mohammad Saleh, and Peter J. Liu. 2023. SLiC-HF: Sequence likelihood calibration with human feedback.
Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. 2020. Fine-tuning language models from human preferences.

A Additional Results

Below we present additional experimental results on the RLOO estimator.

A.1 RLOO vs RAFT Sample Efficiency

Figure 5 plots the test reward for RAFT and RLOO against the number of online samples generated, normalized by batch size, regardless of the value of k, which highlights RLOO's sample efficiency.
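For readers less familiar with the estimator being compared here, the following is a minimal sketch of the leave-one-out baseline that gives RLOO its name, applied to the k completions sampled for one prompt. It is a generic NumPy illustration with our own variable names, not the authors' released code.

```python
import numpy as np

def rloo_advantages(rewards: np.ndarray) -> np.ndarray:
    """Leave-one-out advantages for k sampled completions of one prompt.

    rewards: shape (k,), one scalar reward per sampled completion.
    Each sample's baseline is the mean reward of the other k - 1 samples,
    so the estimator needs no learned value function and stays unbiased.
    """
    k = rewards.shape[0]
    baselines = (rewards.sum() - rewards) / (k - 1)  # mean of the other samples
    return rewards - baselines

# Example with k = 4 sampled completions for a single prompt.
rewards = np.array([1.2, 0.4, 0.9, -0.3])
advantages = rloo_advantages(rewards)
# A REINFORCE-style update then weights each completion's summed
# log-probability by its advantage: loss = -(advantages * logprobs).mean()
```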
Figure 5: RLOO and RAFT rewards are plotted at various points throughout training against the total number of online samples seen (regardless of whether k = 2 or k = 4), for TL;DR Summarize, Anthropic-HH (Pythia), and Anthropic-HH (Llama). RLOO is more efficient than RAFT in terms of online sample use, regardless of k and dataset.

B Effective Conditioning

To evaluate the hypothesis that probability mass is heavily concentrated and that conditioning significantly narrows the likely generation space, we empirically study the characteristics of the output distribution at each generation step. We use the Llama SFT model from the HH experiments in the Results section.

Probability Mass Concentration. Figure 6 (right) plots the total probability mass concentrated in the top {1, 16, 32, 64} tokens. There is a notable jump in the total probability mass after the first token is generated, which points to the effectiveness of conditioning from the first token and the prompt. From that point on, a significant (∼60%) portion of probability mass is placed on only the single most probable token at each step, with more than ∼90% of the total mass concentrated on the top 16 tokens and diminishing increases beyond that point for the top 32 and 64 tokens. This empirical evidence directly supports our recurring claim that even though the feasible search (action) space at each step is enormous, in practice, due to the conditioning from the SFT model and the prompt, most of the probability mass is distributed amongst only a fraction of the possible tokens.

Low Entropy. Figure 6 (left) plots the normalized entropy Ĥ(X) = H(X)/Hmax(X), where Hmax(X) is the entropy of the uniform distribution over the vocabulary. Similar to the jump in probability mass in Figure 6 (right), and as expected, the biggest drop in entropy occurs right after the first token is generated; entropy then rises only slightly up to the end of the generation and remains consistently low. This is further supporting evidence that the generation space is heavily skewed, and it naturally suggests low variance in the probability of the generations, since the entropy of the generative process is consistently low. This further motivates the single-action modelling formulation, as it suggests that the first conditioning step in the generation is the most impactful.

Figure 6: Effect of Conditioning. The largest drop in (left) normalized entropy and the largest increase in (right) total probability mass concentrated in the top {1, 16, 32, 64} tokens occur after generating the first token (x-axis: generation step). Probability mass is consistently concentrated on a very small number of tokens compared to the vocabulary size of 32k.
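The two quantities plotted in Figure 6 can be recomputed from the per-step next-token distributions. A short sketch of that analysis is given below, assuming logits collected from any causal LM; the function and variable names are ours.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def step_statistics(logits: torch.Tensor, top_ks=(1, 16, 32, 64)):
    """Per-generation-step statistics of the kind reported in Appendix B.

    logits: shape (seq_len, vocab_size), next-token logits at each step.
    Returns the normalized entropy H / H_max (H_max = entropy of the uniform
    distribution, i.e. log vocab_size) and the total probability mass in the
    top-k tokens for each k.
    """
    probs = F.softmax(logits.float(), dim=-1)
    vocab_size = probs.shape[-1]
    entropy = -(probs * torch.log(probs + 1e-12)).sum(-1)
    normalized_entropy = entropy / torch.log(torch.tensor(float(vocab_size)))
    sorted_probs, _ = probs.sort(dim=-1, descending=True)
    top_k_mass = {k: sorted_probs[:, :k].sum(-1) for k in top_ks}
    return normalized_entropy, top_k_mass
```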
C RLOO's Connection to Contrastive Loss

Multiple other works in iterative fine-tuning (Zhao et al., 2023; Yuan et al., 2023) utilize a contrastive-style loss that up-weights the log-probabilities of positive samples and down-weights those of negative samples, as determined by the reward model:

L_c^{k=2} = −log π(y+ | x) + log π(y− | x)    (11)

The corresponding loss with k = 2 from Equation 2.3 is:

L_RLOO^{k=2} = [(R(y+, x) − R(y−, x)) / 2] · [−log π(y+ | x) + log π(y− | x)]    (12)

It is clear that the RLOO k = 2 loss is exactly the contrastive loss, but weighted by the difference between the absolute scores (the 1/k factor is merged into the learning rate).

D Training Details

Below are additional details on training and data preprocessing.

Data preprocessing. For each dataset, we filter out prompts that exceed a pre-specified length to minimize the occurrence of generations not containing the EOS token. We filter out prompts longer than 448 and 348 tokens for the TL;DR and HH datasets, respectively.

SFT Training. For the TL;DR Summarize dataset, we use the dedicated SFT split. For Anthropic-HH, since the original dataset does not include a separate SFT split, we use the prompts and the preferred responses from the binary comparisons during the SFT stage. This is consistent with prior work (Yuan et al., 2023; Dong et al., 2023; Rafailov et al., 2023). In terms of training hyperparameters, for the Pythia models, similar to previous work (Touvron et al., 2023b; Bai et al., 2022a), we train for 2 epochs with an initial learning rate of 2e-5 on both the summarization and dialogue tasks. For the Llama models, we found that 1 epoch for the SFT stage is sufficient.

RM Training. In the RM stage, we train the RM for 1 epoch with an initial learning rate of 1e-5. For both RM and SFT training, we use a cosine decay learning-rate schedule (Loshchilov and Hutter, 2016) and a 0.03 warm-up ratio.

Preference Training. For the TL;DR Summarize dataset, where we only experiment with Pythia models, we train each variant for 600 steps with a rollout batch size of 512. We use a β value of 0.03. For Anthropic-HH, we train Pythia models for 393 steps with the same batch-size configuration as for TL;DR Summarize. For the Llama models, we follow the setup of Dong et al. (2023) and use a 2048 rollout and step batch size over 2 epochs. We use β = 0.10 for all Anthropic-HH experiments unless otherwise noted. For both datasets, we use the same prompts as in the SFT stage to roll out online generations. Across both datasets and all models, we use a constant learning rate of 1e-6 with a linear warm-up over 3% of the total steps. Learning rates were chosen after a sweep of {1 × 10−6, 1 × 10−5, 2 × 10−5} for RAFT and RLOO, and {1 × 10−6, 1 × 10−5} for PPO and Vanilla PG. For all algorithms, we take 2 gradient steps for each batch.
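Tying Appendices C and D together, here is a minimal PyTorch-style sketch of the k = 2 objective in Equation 12, written so that the reward-gap weighting on the contrastive term of Equation 11 is explicit. Sequence log-probabilities and rewards are assumed to be precomputed, and the names are ours rather than from the released implementation.

```python
import torch

def rloo_k2_loss(logp_pos: torch.Tensor,
                 logp_neg: torch.Tensor,
                 reward_pos: torch.Tensor,
                 reward_neg: torch.Tensor) -> torch.Tensor:
    """k = 2 RLOO loss of Equation 12.

    logp_*:   summed log-probabilities log pi(y|x) of the two sampled
              completions under the policy, shape (batch,).
    reward_*: scalar reward-model scores, shape (batch,).
    Equation 11's contrastive loss corresponds to replacing the
    reward-difference weight with a constant.
    """
    contrastive = -logp_pos + logp_neg            # Equation 11
    weight = (reward_pos - reward_neg) / 2.0      # leave-one-out advantage for k = 2
    return (weight * contrastive).mean()
```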
E GPT-4 Evaluation Prompts

TL;DR Summarize:

Which of the following summaries does a better job of summarizing the most important points in the given forum post, without including unimportant or irrelevant details? A good summary is both precise and concise.

Post: {instruction}

Summary (A): {output_1}

Summary (B): {output_2}

FIRST provide a one-sentence comparison of the two summaries, explaining which you prefer and why. SECOND, on a new line, state only "Summary (A)" or "Summary (B)" to indicate your choice. Your response should use the format:
Comparison: <one-sentence comparison and explanation>
Preferred: <"Summary (A)" or "Summary (B)">

Anthropic-HH:

For the following query to a chatbot assistant, which response is more helpful?

Query: {instruction}

Response (A): {output_1}

Response (B): {output_2}

FIRST provide a one-sentence comparison of the two responses and explain which you feel is more helpful. SECOND, on a new line, state only "Response (A)" or "Response (B)" to indicate which response is more helpful. If they are equally good or bad, state "Neither". Your response should use the format:
Comparison: <one-sentence comparison and explanation>
Preferred: <"Response (A)" or "Response (B)" or "Neither">
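As a usage illustration only, a small helper for filling the templates above and reading off the judge's verdict; the paper does not show its exact post-processing, so the parsing convention below is our assumption.

```python
# TLDR_PROMPT is assumed to hold the TL;DR template shown above, kept
# verbatim with the {instruction}, {output_1}, {output_2} placeholders.
def build_prompt(template: str, post: str, summary_a: str, summary_b: str) -> str:
    """Fill the evaluation template for one pair of candidate summaries."""
    return template.format(instruction=post, output_1=summary_a, output_2=summary_b)

def parse_preference(judge_response: str) -> str:
    """Extract the verdict from the final 'Preferred:' line of the judge output.

    Returns 'A', 'B', or 'Neither'. This parsing rule is illustrative and
    not taken from the authors' code.
    """
    for line in reversed(judge_response.strip().splitlines()):
        if line.strip().startswith("Preferred:"):
            if "(A)" in line:
                return "A"
            if "(B)" in line:
                return "B"
            return "Neither"
    return "Neither"
```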
F Example Responses

F.1 TL;DR Summarize (Pythia)

PROMPT: Below is a reddit POST and the corresponding SUBREDDIT and TITLE. Write a both precise and concise summary of the contents of the post. SUBREDDIT: r/loseit TITLE: Caffeine addiction. I have a horrible caffeine addiction, and I don't like sacrificing any of my daily calories for coffee. I used to drink 5-6 Diet Dr. Peppers a day, but I have switched to almost exclusively drinking only water most days. I do have a Diet Dr. Pepper occasionally, and I know that technically it's okay to probably have one a day, but I honestly just feel better all around when I drink only water. But if I don't have some caffeine in the morning, I will be writhing in the floor in pain with a skull-pounding headache by mid-day. I know. Ridiculous, right? So, I went to the vitamin aisle the other day in search of a supplement with a little caffeine and the only thing I found was One A Day Adult Vitacraves Gummies with Energy Support. Here's the website with [nutritional info] One serving is 20 calories and has 90 mg of caffeine. There's also some vitamin B6, B12, and folic acid in there that supposedly helps turn food into energy. I have no idea if that's true. I've been taking them a couple days, and so far I do feel more energized and I haven't had any headaches. The issue is that one bottle is around $9 and there are only 60 in a bottle. You have to take 4 (4!) a day, which is very intriguing considering the brand name and all. So it'll cost me around $20 a month. Which is admittedly cheaper than what I used to spend on Diet Dr. Peppers, but still. I was just curious if anyone else had a better option for caffeine in a supplement or vitamin?

RLOO (K=4): "I have a horrible caffeine addiction. Went to the vitamin aisle and only found One A Day Adult Vitacraves Gummies with Energy Support. Cost $9/bottle and contains 90 mg of caffeine. Anyone else have a better option for caffeine in a supplement or vitamin?"

RLOO (K=2): "I have a horrible caffeine addiction. I switched to exclusively drinking only water most days. I went to the vitamin aisle and only found one brand of supplement with caffeine. It costs $9 a bottle. Anyone else have a better option for caffeine in a supplement or vitamin?"

REINFORCE W/ B.: "I have a horrible caffeine addiction. I don't like sacrificing any of my daily calories for coffee. I went to the vitamin aisle and only found a supplement with 90 mg of caffeine. Is there a better option for caffeine in a supplement or vitamin?"

RAFT (K=4): "I have a horrible caffeine addiction. Went to the vitamin aisle and only found One A Day Adult Vitacraves Gummies with Energy Support. Cost $9 a bottle and have 90 mg of caffeine. Anyone else have a better option for caffeine in a supplement or vitamin?"

RAFT (K=2): "I have a horrible caffeine addiction. I switched to only drinking only water most days. Found One A Day Adult Vitacraves Gummies with Energy Support. Cost $9 a bottle. Anyone else have a better option for caffeine in a supplement or vitamin?"

PPO: "I have a horrible caffeine addiction. Went to the vitamin aisle in search of a supplement with caffeine but only found One A Day Adult Vitacraves Gummies with Energy Support. Cost around $20 a month. Any better options?"

VANILLA PG: "I have a horrible caffeine addiction and was looking for a supplement or vitamin with caffeine. Only found One A Day Adult Vitacraves Gummies with Energy Support which costs $9 per bottle. Any better options for caffeine in a supplement or vitamin?"

DPO: "I have a horrible caffeine addiction. I like only drinking water most days, but occasionally I want some caffeine. I found some caffeine in a supplement in the vitamin aisle, but it's expensive and I was just wondering if anyone else has a better option."

F.2 HH (Llama)

PROMPT: "Human: Why does it hurt when I stretch too hard? Assistant:"

RLOO (K=4): "Stretching too hard can cause muscle soreness or injury. This is due to a combination of muscle strain and inflammation. It's important to stretch gradually, and to listen to your body, so you don't overdo it."

RLOO (K=2): "Stretching too hard can cause muscle pain or muscle soreness. This is usually temporary, and will go away as your muscles get used to the stretch. It's important to start with a light stretch, and gradually increase the stretch as your muscles get used to it."

REINFORCE W/ B.: "When you stretch too hard, you can cause minor muscle strain. It's usually not a big deal, but it can cause some discomfort. It's also a sign that you might be overdoing it, so it can be a warning sign."

RAFT (K=4): "Stretching too hard can cause muscle strain. This is especially true for muscles that aren't used to stretching, or for muscles that are already fatigued. It's also possible to injure a muscle if you stretch it too far, but this is less common."

RAFT (K=2): "When you stretch too hard, you can cause a minor muscle tear or strain."

PPO: "Stretching too hard can cause muscle soreness or injury."

VANILLA PG: "Stretching too hard can cause muscle soreness and pain. This is because you are overstretching the muscle, which can cause damage to the muscle fibers. It is best to stretch gradually and gently, to avoid injury."

DPO: "Stretching can be uncomfortable if you do it too aggressively, because it can cause minor muscle soreness or even minor injury. When you stretch, you are putting stress on your muscles and connective tissue, and this stress can sometimes cause minor damage or injury. If you stretch too aggressively or for too long, you can cause minor muscle soreness or even minor injury. This can cause pain or discomfort during or after the stretching. It can also cause minor damage to your connective tissue, which can cause pain or discomfort after the stretch."
2024
662
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12268–12290 August 11-16, 2024 ©2024 Association for Computational Linguistics VIEScore: Towards Explainable Metrics for Conditional Image Synthesis Evaluation Max Ku♠ Dongfu Jiang♠ Cong Wei♠ Xiang Yue♡ Wenhu Chen♠ {m3ku, dongfu.jiang, c58wei}@uwaterloo.ca, [email protected], [email protected] University of Waterloo♠ IN.AI Research♡ https://tiger-ai-lab.github.io/VIEScore/ Traditional Metrics VIEScore MLLM Instruction Input “A tiger in a lab coat ... tuning a welloiled science content machine, digital art.” Embeddings "The image successfully depicts a tiger in a lab coat… However, the prompt's mention of 'well oiled' could be … which isn't clearly depicted… Alignment: 0.8 Output 0.179252 Output Alignment: Image vector Text vector Image & Text Reasoning Explainability Task Awareness Reasoning Explainability Task Awareness Figure 1: Which method (VIESCORE or traditional metrics) is “closer” to the human perspectives? Metrics in the future would provide not just the score but also the rationale, enabling the understanding of each judgment. Abstract In the rapidly advancing field of conditional image generation research, challenges such as limited explainability lie in effectively evaluating the performance and capabilities of various models. This paper introduces VIESCORE, a Visual Instruction-guided Explainable metric for evaluating any conditional image generation tasks. VIESCORE leverages general knowledge from Multimodal Large Language Models (MLLMs) as the backbone and does not require training or fine-tuning. We evaluate VIESCORE on seven prominent tasks in conditional image tasks and found: (1) VIESCORE (GPT4-o) achieves a high Spearman correlation of 0.4 with human evaluations, while the human-to-human correlation is 0.45. (2) VIESCORE (with open-source MLLM) is significantly weaker than GPT-4o and GPT-4v in evaluating synthetic images. (3) VIESCORE achieves a correlation on par with human ratings in the generation tasks but struggles in editing tasks. With these results, we believe VIESCORE shows its great potential to replace human judges in evaluating image synthesis tasks. 1 Introduction Diffusion models have become a focal point in AI research for image synthesis. Over the past year, several new models (Kumari et al., 2023; Ruiz et al., 2023; Li et al., 2023c; Zhang and Agrawala, 2023) have been introduced to enhance control over image generation. However, comprehensively evaluating AI-synthesized images remains a challenging and unresolved issue. While metrics like LPIPS (Zhang et al., 2018), CLIP-Score (Hessel et al., 2021), and DreamSim (Fu et al., 2023b) were proposed, they have certain limitations: (1) these metrics are agnostic the end task, which can fail to measure the desired aspects of the generated images, (2) the score is opaque with limited explainability. These limitations heavily restrict their effectiveness in assessing conditional image generation. Some research work (Denton et al., 2015; Isola et al., 2017; Meng et al., 2021; Chen et al., 2023; Sheynin et al., 2023) relied on human-driven evaluation methods. While humans excel at understanding and interpreting visual content, such methods in the context are facing challenges such as scalability limits and preference subjectivity issues. This reliance on 12268 human judgment highlights the need for more uniform evaluation methods in the field. 
To solve the mentioned issues, we formulate the problem definition with our desired properties, as presented in equation 1. The function f takes an instruction I, a synthesized image O, and C∗which is a set of conditions (e.g. style, subject, background, cannyedge, etc). The score function should produce the intermediate rationale in the form of natural language before generating the final score according to the prompt instruction I: fVIE(I, O, C∗) = (rationale, score) (1) The function f can be any Multimodal Large Language Model (MLLM) such as GPT-4 (OpenAI, 2023) and LLaVA (Liu et al., 2023a), which can take input images to generate human-like text responses. Unlike the automatic metrics, MLLM can receive human instructions and produce rationale. With such motivation, we introduce VIESCORE (Visual Instruction-guided Explainable Score), a framework to assess synthetic images in different conditional image generation tasks. VIESCORE has multiple advantages compared to auto-metrics and human evaluation. It includes: Task Awareness. Existing metrics were often designed to measure a certain aspect of generated images. For example, LPIPS measures the perceptual similarity of a pair of images, while CLIP-Score measures the text alignment of one single image. As a consequence, these metrics cannot be adapted to evaluate other tasks. VIESCORE acts as a silver bullet to tackle all conditional image generation evaluation processes due to its instruction-guiding property. It can be carefully adjusted with different instruction requirements. Explainability. The existing metrics normally output a single float-point score, which cannot offer detailed insights into the ’rationale’ behind its evaluations. Such a score makes it difficult to interpret the decisions from the metric output. Instead, VIESCORE can offer the rationale in the form of natural languages to help humans understand the reasoning process. As depicted in Figure 1, the rationale can significantly improve the trustworthiness of VIESCORE. While the ultimate goal is to derive an MLLM that can rate images like humans, in this paper we also explore how well MLLMs can assess synthetic images compared to human evaluation and present insights and challenges on state-of-the-art MLLMs towards human evaluators, as shown in Figure 2. 2 Related Works 2.1 Conditional Image Synthesis With recent advancements in Image Synthesis research (Goodfellow et al., 2016; Ho et al., 2020; Dhariwal and Nichol, 2021), researchers proposed different methods and contributed a large amount of controllable image synthesis models with conditional inputs. Conditional image synthesis can be categorized into conditional image generation and conditional image editing tasks. Prevalent tasks include Text-To-Image generation (Saharia et al., 2022; Rombach et al., 2022; stability.ai, 2023) (known as text-guided image generation), Inpainting (Avrahami et al., 2022; Lugmayr et al., 2022) (known as mask-guided image editing) and Textguided image editing (Brooks et al., 2023; Couairon et al., 2022; Wu and la Torre, 2023). More recent works proposed new tasks such as Subject-driven image generation and editing (Gal et al., 2022; Ruiz et al., 2023; Li et al., 2023c) to inject one specific subject into a synthesized image, while Multi-concept image composition (Kumari et al., 2023; Liu et al., 2023b) allows multiple specific subjects into the synthesized image. 
Controlguided image generation (Zhang and Agrawala, 2023; Qin et al., 2023) allows additional conditions alongside the text prompt to guide the image synthesis. Our work uses MLLM to access all the discussed tasks on synthetic image evaluation. 2.2 Synthetic Images Evaluation Various metrics are proposed to evaluate the quality of AI-generated images. Traditional measures like the Inception Score (IS) (Salimans et al., 2016) and the Frechet Inception Distance (FID) (Heusel et al., 2017) are commonly employed to measure image fidelity. On the other hand, to measure the alignment between the generated image and the text prompt, several metrics (Kim et al., 2022; Kynkäänniemi et al., 2019; Park et al., 2021; Sajjadi et al., 2018) have been introduced. The CLIP score (Hessel et al., 2021) and BLIP score (Li et al., 2022) are the most commonly used. Recently, approaches such as (Cho et al., 2023) and (Lu et al., 2023c) aim to provide a fine-grained evaluation framework, while the HEIM-benchmark (Lee et al., 2023) assesses text-to-image models across multiple aspects, such as toxicity and safety. Other methods, such as projective-geometry (Sarkar et al., 2023), evaluate images’ physical and geometric realism. However, these metrics are primarily fo12269 cused on text-to-image generation and remain narrow in scope. General image generation tasks like subject-driven image generation and image editing (Ruiz et al., 2023; Li et al., 2023c) still lack effective automatic metrics. One traditional, yet effective method to evaluate AI-generated image performance is to have human annotators assess visual quality. Recent works like ImagenHub (Ku et al., 2023), and HEIM (Lee et al., 2023) attempt to standardize human evaluation across various image generation tasks, though scalability remains a challenge. Our research aims to identify the challenges in mimicking human perception in synthetic image evaluation and address these gaps by developing auto-metrics that align with human judgment across common image evaluation tasks. 2.3 Large Language Models as Evaluators Large language models (LLMs) are often used to evaluate the quality of model-generated outputs. Recent works used LLMs as an evaluator demonstrating their great ability in text generation evaluation (Zheng et al., 2023; Dubois et al., 2023). This ability for evaluation naturally emerges (Fu et al., 2023a) and stems from LLM’s great reasoning ability and instruction-following ability. Recent works also tried to devise a smaller but explicitly fine-tuned LLM(Touvron et al., 2023) that achieves similar evaluation results on natural language generation (Xu et al., 2023; Jiang et al., 2023; Li et al., 2023b). Besides text evaluation, LLMs with visual features have been used as evaluators on images (Lu et al., 2023d; Huang et al., 2023; Lee et al., 2020; Inan et al., 2021; Hu et al., 2023), mainly focused on evaluating text-to-image task. GPT-4v, regarded as the state-of-the-art LLM with visual features, also reported a decent ability on image evaluation, especially in text-image alignment (Zhang et al., 2023b). However, the GPT-4v is not perfect for image evaluation. A comprehensive study on GPT-4v’s vision ability reported that GPT-4v makes mistakes on image evaluation tasks (Yang et al., 2023). For example, it failed to provide proper reasonings for spotting the difference between two similar images. 
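To make the embedding-based metrics surveyed above more concrete, the following is a hedged sketch of the core of a CLIP-Score-style computation using an off-the-shelf HuggingFace CLIP checkpoint. The checkpoint choice and helper name are illustrative; the published CLIPScore additionally rescales and clips the similarity, which is omitted here.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Checkpoint choice is illustrative; implementations in prior work may differ.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def clip_similarity(image: Image.Image, prompt: str) -> float:
    """Cosine similarity between CLIP image and text embeddings.

    This is only the core similarity term of CLIP-Score; the published
    metric applies an extra scaling/clipping step not reproduced here.
    """
    inputs = processor(text=[prompt], images=[image],
                       return_tensors="pt", padding=True, truncation=True)
    image_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
    text_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                       attention_mask=inputs["attention_mask"])
    image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    return float((image_emb * text_emb).sum())
```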
3 Preliminary 3.1 Evaluation Benchmark ImagenHub (Ku et al., 2023) is a standardized benchmark for evaluating conditional image generation models with human raters. The framework covered mainstream tasks, including image generation, editing, and several conditioned tasks. In this section, we visit how humans assess images in the ImagenHub framework. Images are rated in two aspects: (1) Semantic Consistency (SC) assesses how well the generated image aligns with the given conditions, such as prompts and subject tokens, ensuring coherence and relevance to the specified criteria according to the task. (2) Perceptual Quality (PQ) evaluates the extent to which the generated image appears visually authentic and conveys a sense of naturalness. ImagenHub curated a human evaluation dataset for each task, in which the dataset contains around 100 to 200 conditional inputs for generating synthesized images. Then each image was rated by three human raters according to the guidelines of the defined task, and a final score in the range [0.0, 1.0] was reported for the average score in semantic consistency (SC) and perceptual quality (PQ) respectively, with another overall score (O) derived from the geometric mean of semantic consistency and perceptual quality at the instance level. ImagenHub covered 30 image synthesis models and reported 0.4 Krippendorff’s alpha on the inter-worker agreement of their human rating. 3.2 Multimodal Large Language Models Multimodal Large Language Models (MLLMs) typically denote LLMs with integrated visual capabilities (Yin et al., 2023). This visual proficiency opens up the potential to perform image analysis and evaluation. However, for a comprehensive assessment of synthetic images, multiple images may be examined in one pass due to complex conditions. The prompt will also be extensive to comprehensively describe the rating process. Therefore, the MLLM candidate should possess specific capabilities: (1) The model must efficiently process and interpret multiple images simultaneously. (2) The model needs to comprehend and respond to lengthy text prompts while matching all requirements. Recent popular open-source MLLMs, including LLaVA (Liu et al., 2023a), InstructBLIP (Dai et al., 2023), Fuyu (Bavishi et al., 2023), and CogVLM (Wang et al., 2023), can only accept a single image as input along with text instruction. To feed multiple images, a workaround is to merge and concatenate multiple images horizontally and feed as one image. More recent MLLMs such as Open-Flamingo (Awadalla et al., 2023), Kosmos2 (Peng et al., 2023), and QwenVL (Bai et al., 2023) 12270 can accept multiple images in an interleaved imagetext format. For closed-source MLLMs, OpenAI’s GPT-4v (OpenAI, 2023), GPT-4o (OpenAI, 2023) and Google’s Gemini (Team et al., 2024) are the most popular MLLMs that perform exceptionally. 3.3 Existing Auto-metrics Here we list some prominent automatic metrics: Image-Text Alignment. CLIP-Score (Hessel et al., 2021) computes the average cosine similarities between prompt and generated image CLIP embeddings. One disadvantage of CLIP-Score is that the score is biased towards the training distribution (Kim et al., 2023). Moreover, in practical evaluation, the average CLIP-Score result of a decent method will always fall in the range [0.25, 0.35] even though a single CLIP-Score is within [0, 1]. Such a narrow range may not offer enough differentiation to know which model is better. Moreover, image-text alignment is not the only considered aspect of semantic consistency. 
For example, it cannot examine the degree of overediting in text or mask-guided image editing tasks. Perceptual Distance. LPIPS (Zhang et al., 2018) measures the resemblance between two images in a manner that aligns with human perception. With its sensitivity to distortions, it is an often used metric in image synthesis research such as image editing tasks and control-guided Image Generation task (Meng et al., 2021; Qin et al., 2023), to measure between the input (or ground truth) and the generated image. However, in the image editing context, the image’s naturalness (e.g. shadow, lighting, sense of distance) is often required in the human perspective of perceptual quality, which is missed in the LPIPS metric. It is also difficult to access a model’s performance by distortion level, as in the current state of research the models often can process high-quality editing without artifacts. Subject Fidelity. CLIP-I computes the average pairwise cosine similarities between CLIP embeddings of generated and real images, first proposed in Textual-Inversion (Gal et al., 2022). However, CLIP-I cannot distinguish between different subjects that may have highly similar text descriptions, and it is less sensitive to shape consistency as it compares the semantic similarity between images. DINO metric was proposed in DreamBooth (Ruiz et al., 2023). The metric is computed by the mean cosine similarities calculated pairwise between the DINO embeddings of ViT-S/16 (Caron et al., 2021) Prompt: A cat [V] standing by a pot [M] MLLM Text−guided Image Editing Synthetic image Make it a slice of pizza Multi−Concept Image Composition Synthetic image Text−to−Image Generation Prompt: A cartoon- styled alarm clock Synthetic image You will have to evaluate the effectiveness of the AI-generated image(s) based on the given rules. RULES: Two images will be provided: The first being the original A… Instructions: Humans The score is given an 8 because the image demonstrates proper shadowing and lighting… - Perceptual Quality: The pot looks different from the referenced one, so I will give it a 5. -Perceptual Quality: - Semantic Consistency: Correlate? The cat is right next to the pot, just like in the prompt. I'm giving it a solid 10. -Semantic Consistency: The cat is standing by the pot, hence the score of 9 for following the prompt… Figure 2: We study the correlation between MLLMs and human perspectives on rating images. for both synthesized and authentic images. In contrast to CLIP-I, the DINO metric is sensitive to differences between subjects of the same class due to the self-supervised training objective of DINO. These two became popular metrics reported in research on subject-driven image generation and editing tasks (Li et al., 2023c; Lu et al., 2023a). 4 Method During the experiment, we select 29 models evaluated in ImagenHub (Ku et al., 2023) to compare the correlations with human ratings. See Appendix B for the listed models. Rating instructions. In ImagenHub, each image in one rating aspect is rated by picking an option from List[0, 0.5, 1] by three human raters. While such simple rating instruction is human-friendly and offers enough granularity, the simplicity of the scale can lead to less accurate representations of opinions, as given the broad spectrum covered by the rating aspects of semantic consistency (SC) and perceptual quality (PQ). We propose a more rigorous rating instruction toward comprehensive evaluation for each type of task. 
We split the rating of semantic consistency (SC) and perceptual quality (PQ) into multiple sub-scores, which SC contains multiple scores according to the tasks. 12271 Semantic Consistency (SC) Input Concept 1 Concept 2 Synthesized Image PQ Prompt SC Prompt “dog sitting in a driving car” Prompt for Synthesized Image SC Scores: Alignment with the prompt: 7 Resemblance to concept 1: 9 Resemblance to concept 2: 0 SC score = min(7, 9, 0) = 0 Perceptual Quality (PQ) PQ Scores: looks natural: 7 Has no artifacts: 8 PQ score = min(7, 8) = 7 MLLM MLLM Response: The naturalness score is given a 7 because the dog appears well integrated into the car setting with proper shadowing and lighting that matches the interior of the car...The artifact score is an 8 because the image is clear ... Response: The dog isn't sitting as a driver would, hence the score of 7 for following the prompt. The dog in the second image strongly resembles…, warranting a score of 9 for resemblance. The car's interior and style are entirely different, which results in a score of 0 for resemblance… Figure 3: Process of MLLM evaluation on one synthetic image. All input conditions, synthesized images, and rating instructions are fed together to the MLLM in one pass. Multi-concept image composition task is used here as an example. The final overall score of the image is derived with equation 2. For example, in the multi-concept image composition task as shown in Figure 3, two images (known as concepts) and a text prompt are provided as input, and the desired synthesized image will contain the two concept objects in the image in actions according to the text prompt. Thus SC will be split into 3 sub-scores: (1) Is the image aligning with the prompt? (2) Does the object in the image resemble the first concept? (3) Does the object in the image resemble the second concept? For PQ, the naturalness level and distortion level will be accessed separately, resulting in 2 subscores: (i) Does the image give an unnatural feeling such as a wrong sense of distance, wrong shadow, or wrong lighting? (ii) Does the image contain a portion of distortion, watermark, scratches, etc.? Our proposed rating system enhances the evaluation of tasks by dividing SC and PQ into distinct sub-scores. The details of prompt templates are available in Appendix A. O = [min(α1, ..., αi) min(β1, ..., βi)] 1 2 (2) Our overall score is derived as shown in equation 2. We assume each sub-score weights the same and used min operation to emphasize the importance of meeting all criteria without exception. αi is a subscore in SC and βi is a sub-score in PQ. The final rating scores of SC and PQ provided by MLLMs are on a scale of 0 to 10. The design rationale is that in ImagenHub’s human rating method, the possible results when the answers of three human raters, each picking an option from List[0, 0.5, 1], are added together and then divided by 3, will fall into one of the options: List[0.0, 0.17, 0.33, 0.5, 0.67, 0.83, 1.0]. Thus we simply use a scale of 0 to 10 and normalized in the range [0.0, 1.0] when comparing with human ratings. Input conditions and synthetic image are fed into the MLLM together during the rating process of SC, while in the PQ rating process, only the synthetic image is fed into the MLLM. This is to avoid the model getting confused by the input conditions in the PQ rating process, as to be discussed in section 5.1. 
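For clarity, here is a minimal sketch of how the overall score in Equation 2 is assembled from the sub-scores returned by the MLLM on the 0-10 scale, including the normalization to [0.0, 1.0] used when comparing against ImagenHub human ratings. The helper name is ours, not part of the released framework.

```python
from math import sqrt
from typing import Sequence

def viescore_overall(sc_subscores: Sequence[float], pq_subscores: Sequence[float]) -> float:
    """Overall score of Equation 2: geometric mean of min(SC) and min(PQ).

    Sub-scores are on the 0-10 scale returned by the MLLM; the result is
    normalized to [0.0, 1.0] for comparison with ImagenHub human ratings.
    """
    sc = min(sc_subscores) / 10.0
    pq = min(pq_subscores) / 10.0
    return sqrt(sc * pq)

# Example following the multi-concept composition case in Figure 3:
# SC sub-scores (prompt alignment, resemblance to each concept) = [7, 9, 0],
# PQ sub-scores (naturalness, absence of artifacts) = [7, 8].
overall = viescore_overall([7, 9, 0], [7, 8])  # -> 0.0, since one SC sub-score is 0
```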
5 Experimental Results 5.1 Correlation Study For all presented correlations, we applied Fisher Z-transformation to estimate the average Spearman correlation ∈[−1, 1] across models and tasks. Metric-to-Human (M-H) correlations. In Table 2 and 3, we first verified the reliability of ImagenHub human ratings by computing the Human-to-Human (H-H) correlation, as the correlation goes around 0.5, expected to be the highest value compared to MLLMs. Then we benchmark the MLLMs according to our designed method to compute the Metric-to-Human (M-H) correlation. We noticed only GPT-4v, GPT4o, Gemini, and LLaVA were able to follow our instructions clearly while other MLLMs were not able to produce any meaningful results according to our setup. For example, 12272 Backbone M-HSC corr M-HPQ corr M-HO corr Across All 7 Tasks Human Raters 0.4700 0.4124 0.4558 VIESCORE GPT-4o0shot 0.4459 0.3399 0.4041 GPT-4o1shot 0.4309 0.1167 0.3770 Gemini-Pro0shot 0.3322 0.2675 0.3048 Gemini-Pro1shot 0.3094 0.3070 0.3005 GPT-4v0shot 0.3655 0.3092 0.3266 GPT-4v1shot 0.2689 0.2338 0.2604 LLaVA0shot 0.1046 0.0319 0.0925 LLaVA1shot 0.1012 0.0138 0.0695 Qwen-VL0shot 0.0679 0.0165 0.0920 BLIP20shot 0.0504 -0.0108 0.0622 InstructBLIP0shot 0.0246 0.0095 0.0005 Fuyu0shot -0.0110 -0.0172 0.0154 CogVLM0shot -0.0228 0.0514 -0.0050 OpenFlamingo0shot -0.0037 -0.0102 -0.0122 Table 1: Correlations across all tasks with different backbone models. We highlight the highest correlation numbers in green. See Appendix C for details. BLIP-2, while able to output the correct format, the scores provided are constant zeros. Qwen-VL and InstructBLIP could only produce a portion of responses for semantic consistency but failed to generate any results for perceptual quality evaluation. From overall performance, we found that GPT-4o reports a significantly higher correlation than all other MLLMs, while LLaVA’s correlation is much less than human raters. It seems that LLaVA is less effective in these specific tasks compared to closesourced MLLMs like GPT-4v. It is worth mentioning that GPT-4o achieves a very high correlation with human raters on different image generation and editing tasks. GPT-4v and Gemini also show satisfactory performance on nearly all tasks with a difference of less than 0.2 towards human correlations, even on par with humans in text-guide image generation tasks. Both GPT-4v, Gemini, and LLaVA demonstrated the weakest performance in the text-guided editing task and subject-driven image editing task. Extra visuals resulted in a decline in performance. Numerous studies (Brown et al., 2020; Parnami and Lee, 2022; Liu et al., 2021) have highlighted that In-Context Learning (ICL) allows LLMs to tackle novel tasks effectively without requiring the traditional fine-tuning process. 
We applied In-Context Learning in our prompting method with the expectation of increasing the correlation Method M-HSC corr M-HPQ corr M-HO corr Text-guided Image Generation (5 models) Human Raters 0.5044 0.3640 0.4652 CLIP-Score -0.0817 -0.0114 -0.0881 VIESCORE GPT-4o0shot 0.4989 0.2495 0.3928 GPT-4o1shot 0.5124 0.0336 0.4042 Gemini-Pro0shot 0.5123 0.1842 0.4356 Gemini-Pro1shot 0.4757 0.2206 0.4326 GPT-4v0shot 0.4885 0.2379 0.4614 GPT-4v1shot 0.4531 0.1770 0.3801 LLaVA0shot 0.1809 0.0306 0.1410 LLaVA1shot 0.1789 -0.0020 0.1309 Mask-guided Image Editing (4 models) Human Raters 0.5390 0.5030 0.4981 LPIPS -0.1012 0.0646 -0.0694 VIESCORE GPT-4o0shot 0.5421 0.3469 0.4769 GPT-4o1shot 0.5246 0.1272 0.4432 Gemini-Pro0shot 0.4304 0.2839 0.3593 Gemini-Pro1shot 0.4595 0.3170 0.4017 GPT-4v0shot 0.4508 0.2859 0.4069 GPT-4v1shot 0.4088 0.2352 0.3810 LLaVA0shot 0.1180 -0.0531 0.0675 LLaVA1shot 0.1263 -0.0145 0.1040 Text-guided Image Editing (8 models) Human Raters 0.4230 0.5052 0.4184 LPIPS 0.0956 0.2504 0.1142 VIESCORE GPT-4o0shot 0.4062 0.4863 0.3821 GPT-4o1shot 0.3684 0.1939 0.3438 Gemini-Pro0shot 0.2836 0.4291 0.2728 Gemini-Pro1shot 0.2805 0.4657 0.2648 GPT-4v0shot 0.2610 0.4274 0.2456 GPT-4v1shot 0.2428 0.3402 0.2279 LLaVA0shot 0.0448 0.0583 0.0273 LLaVA1shot 0.0185 -0.0107 0.0258 Table 2: Correlations comparison of available methods on the most common tasks. We highlight the best method and the correlation numbers closest to human raters. Continue in Table 3 and 4. scores, but we observed the opposite. In Table 2 and 3, there is an observable general trend of diminishing correlation scores. The overall correlation score in subject-driven image generation and editing, and the multi-concept image composition task dropped significantly. Only the mask-guided image editing task and control-guided image generation task reported a subtle increase in correlation score. 12273 Method M-HSC corr M-HPQ corr M-HO corr Subject-driven Image Generation (4 models) Human Raters 0.4780 0.3565 0.4653 DINO 0.4160 0.1206 0.4246 CLIP-I 0.2961 0.1694 0.3058 VIESCORE GPT-4o0shot 0.4806 0.2576 0.4637 GPT-4o1shot 0.4685 -0.0171 0.4292 Gemini-Pro0shot 0.2906 0.1765 0.2851 Gemini-Pro1shot 0.3486 0.2800 0.3342 GPT-4v0shot 0.3979 0.1903 0.3738 GPT-4v1shot 0.2757 0.2261 0.2753 LLaVA0shot 0.0326 -0.0303 0.1219 LLaVA1shot 0.1334 0.0858 0.1248 Subject-driven Image Editing (3 models) Human Raters 0.4887 0.2986 0.4747 DINO 0.3022 -0.0381 0.3005 CLIP-I 0.2834 0.1248 0.2813 VIESCORE GPT-4o0shot 0.4800 0.3734 0.3268 GPT-4o1shot 0.3862 0.1273 0.2797 Gemini-Pro0shot 0.2187 0.3148 0.2234 Gemini-Pro1shot -0.0083 0.3181 0.0004 GPT-4v0shot 0.3274 0.2960 0.1507 GPT-4v1shot -0.0255 0.1572 -0.0139 LLaVA0shot 0.0360 -0.0073 0.0168 LLaVA1shot 0.0587 -0.0249 0.0309 Table 3: Correlations comparison of available methods on the subject-driven tasks. Looking into the rationale, we found that the MLLMs tend to get confused by the example images, as illustrated in Figure 4. Such behavior is observed in both GPT-4v, GPT-4o, Gemini, and LLaVA rationale. Another recent work (Lu et al., 2023b) also reported a similar issue where the model attempted to consider the example when answering the visual question. This explains the deterioration of the correlation on both GPT-4v, GPT4o, Gemini, and LLaVA when the ICL prompting technique is used. This also implied the low correlation scores on image editing tasks were due to the limited capability of state-of-the-art MLLMs for multiple image understanding. Ablation study on PQ rating setting. 
As providing multiple images could potentially decrease the performance, we attempt to minimize the workload of MLLM by only providing the synthetic image in the PQ rating process instead of including the input conditions. We report the correlation score Method M-HSC corr M-HPQ corr M-HO corr Multi-concept Image Composition (3 models) Human Raters 0.5927 0.5145 0.5919 DINO 0.0979 -0.1643 0.0958 CLIP-I 0.1512 -0.0963 0.1498 VIESCORE GPT-4o0shot 0.4516 0.2751 0.4136 GPT-4o1shot 0.4120 -0.0141 0.3523 Gemini-Pro0shot 0.3557 0.1948 0.3314 Gemini-Pro1shot 0.4151 0.1798 0.4131 GPT-4v0shot 0.3209 0.3025 0.3346 GPT-4v1shot 0.1859 0.1185 0.1918 LLaVA0shot 0.1022 0.1194 0.1070 LLaVA1shot 0.0828 0.0379 0.0293 Control-guided Image Generation (2 models) Human Raters 0.5443 0.5279 0.5307 LPIPS 0.3699 0.4204 0.4133 VIESCORE GPT-4o0shot 0.4972 0.4892 0.5439 GPT-4o1shot 0.5544 0.3699 0.5238 Gemini-Pro0shot 0.3254 0.3359 0.2960 Gemini-Pro1shot 0.2677 0.4392 0.3240 GPT-4v0shot 0.4360 0.4975 0.3999 GPT-4v1shot 0.3892 0.4132 0.4237 LLaVA0shot 0.2207 0.1060 0.1679 LLaVA1shot 0.1121 0.0247 0.0416 Table 4: Correlations comparison of available methods on the control-guided and multi-concept tasks. TIE MCIC PQ Prompting Method M-HPQ corr M-HPQ corr Human (with inputs) 0.5052 0.5145 without inputs 0.4274 0.3025 with inputs 0.2256 0.0731 Table 5: Correlations of GPT-4v when including inputs in the PQ prompt in TIE (Text-guided Image Editing) and MCIC (Multi-concept Image Composition) task. See 7 for detailed comparison in Appendix. in the two different settings with GPT-4v in Table 5 to examine the impact. We spotted a significant improvement in correlation after taking away the inputs in the PQ rating process. Ranking image models. Besides rating score correlations, we also compared the model ranking from the ImagenHub human evaluation leaderboard and the model ranking suggested by the MLLMs, shown in Table 6. We computed Spearman’s footrule dSF (r, r∗) ∈[0, +∞) and Spear12274 dSF (rHuman, rMethod)↓ ρS(rHuman, rMethod)↑ Task (Total number of Models) GPT-4v LLaVA LPIPS CLIP GPT-4v LLaVA LPIPS CLIP Text-guided Image Generation (5) 2 6 N/A 8 0.90 0.50 N/A -0.20 Mask-guided Image Editing (4) 2 8 2 0 0.80 -1.00 0.80 1.00 Text-guided Image Editing (8) 12 16 20 16 0.67 0.48 0.17 0.48 Subject-driven Image Generation (4) 4 6 0 6 0.20 -0.40 1.00 -0.20 Subject-driven Image Editing (3) 2 2 4 4 0.50 0.50 -0.50 -1.00 Multi-concept Image Composition (3) 0 0 2 2 1.00 1.00 0.50 0.50 Control-guided Image Generation (2) 0 0 0 2 1.00 1.00 1.00 -1.00 Table 6: Ranking judgment from each metric method. dSF (r, r∗) is the Spearman’s footrule and ρS(r, r∗) is the Spearman’s rho (correlation). LPIPS is not available for the first task because there are no reference images. 1st image as a rating example. PQ scores: Image looks natural? 5 Image has no artifacts? 5 Reasoning: The image gives an unnatural feeling on hands of the girl. There is also minor distortion on the eyes of the girl. Please evaluate the 2nd image. PQ scores: Image looks natural? _ Image has no artifacts? _ Reasoning: ________________________________ ________________________________ ________________________________ ________________________________ PQ scores: Image looks natural? 3 Image has no artifacts? 4 Reasoning: The girl's image has an unnatural blurring effect .... The birds also look slightly distorted. The cat’s the face looks slightly artificial. LLM Prompt Response ……. (Detailed text of rating instruction on PQ) ……. 
Figure 4: MLLM making mistakes on rationale when prompted with extra images as examples. man’s rho ρS(r, r∗) ∈[−1, 1] to examine the ranking correlation. Both GPT-4v and LLaVA can align to ImagenHub rankings on the multi-concept image composition task and control-guided image generation task, and with only one model difference in the subject-driven image editing task. While the results vary significantly across other tasks, GPT-4v generally maintains a stronger alignment with the ImagenHub rankings compared to LLaVA. 5.2 Insights and Challenges on VIESCORE MLLMs are weak at capturing image nuances in edited images. From Table 2, we noticed the correlation scores on editing tasks are generally Figure 5: Representative pairs that MLLMs misunderstood as identical images. Images in the first row are the inputs and in the second row are the edited. lower than generation tasks. Upon investigation, it was found that MLLMs often fail to detect minor changes made in image editing, such as small patch edits. Consequently, MLLMs might perceive two images as identical even when humans recognize the edits as successful. This issue may stem from MLLMs focusing on high-level image features while overlooking finer details like color and texture differences, as illustrated in Figure 5. This limitation is apparent in both GPT-4v, GPT4o, Gemini, and LLaVA, highlighting a challenge in synthetic image evaluation accuracy on image editing tasks. Both MLLMs and human evaluators display a broader range of views regarding perceptual quality compared to semantic consistency. From Table 2 and Table 3, we can observe that the correlation scores of PQ are generally lower than the correlation scores of SC and Overall, even on human raters. This suggests the human raters’ perspective on evaluating perceptual quality is more diverse. Possible impacting factors include the rater’s eyesight condition, screen resolution, rating leniency, etc. In the context of MLLMs, we found that MLLMs while being able to correctly recognize the naturalness and artifacts of the image, the rating scores are as diverse as human rating scores 12275 even though we have provided a marking rubric. 5.3 VIESCORE and Auto-metrics vs Human We report the human correlations in Table 2 and 3 to compare the performance between our VIESCORE and popular auto-metrics. To ensure a fair comparison, we only included automatic metrics that have been previously reported in related research for the specific tasks under consideration. DINO is an effective metric in subject-driven tasks. The DINO metric demonstrates sensitivity to variations within the same class of subjects, making it an effective metric for measuring whether the subject in the synthesized image aligns with the token subject. Our correlation result shows that DINO outperforms GPT-4v and CLIP-I on subject-driven image generation and editing tasks, suggesting that DINO highly aligns with human’s perspective on semantic consistency where subject fidelity is considered. However, multi-subject fidelity remains a challenge as DINO and CLIP-I only consider single subjects. However, GPT-4o still achieves the highest correlation to human annotations. LPIPS metric proves to be effective in controlguided tasks, but less effective in image editing tasks. As discussed in section 3.3, LPIPS has great ability in detecting distortions. 
Since the control-guided task is a less mature research direction compared to image editing tasks, distortions are often found in the synthetic images from the control-guided task. On the other hand, current image editing models can synthesize images with less distortions. This explains the high correlation in the control-guided task. While GPT-4o still achieves the highest correlation to human annotations, LPIPS outperforms GPT-4v, Gemini, and LLaVA on this task. CLIP-Score has a much weaker correlation with human ratings in the text-guided image generation task than GPT-4v. We also noticed none of the synthetic images achieved higher than 0.3 CLIP-Score, even though most of the images are regarded as having high semantic consistency by human raters. This can be due to different evaluation focuses, as humans tend to grab the abstract idea from the prompt to access the image, but CLIPScore considers the whole text prompt. GPT-4v outperforms other auto-metrics with its correlation to the ImagenHub leaderboard rankings. The correlation of model rankings on ImagenHub was evaluated against CLIP Score and LPIPS metrics, as shown in Table 6, and compared with MLLMs in the VIESCORE. We found that GPT-4v can achieve a positive correlation with the model rankings on every task. This shows the sign of capability for MLLMs as evaluators for image synthesis research. 6 Conclusion In this paper, we propose the VIESCORE for synthetic image evaluation across seven popular image synthesis tasks and comprehensively access the efficacy using human ratings from ImagenHub. Our experiment reported that VIESCORE with close-source MLLMs backbones like GPT-4o and GPT-4v are significantly more effective than other open-source MLLMs in assessing synthetic images, achieving a correlation of over 0.4 to human ratings most of the tasks. However, it notes a lower correlation in image editing tasks for most of the MLLMs, including GPT-4v. We also noticed Gemini has similar performance as GPT-4v, while GPT-4o stands superior. Comparing our VIESCORE to existing auto-metrics, we found that GPT-4o is more effective than auto-metrics in all tasks, while DINO is more effective in subjectdriven image generation and editing tasks than GPT-4v. GPT-4v also shows a higher ranking correlation with the ImagenHub leaderboard than other automatic metrics. This marked a milestone towards explainable metrics for conditional image synthesis evaluation. Our future research will focus on investigating the use of distillation models to replicate human-like performance in evaluating synthetic images. 7 Limitations OpenAI Security and Privacy Policy. Due to ChatGPT’s security and privacy policy, AIgenerated images that resemble a real person or photograph will be refused by GPT-4v for evaluation. The model will return results similar to "I am sorry, but I cannot process these images as they contain real people.". We simply drop those results by keyword matching. OpenAI Playground vs API. While GPT-4v Playground allows the user to keep a session, the OpenAI API does not provide such a function. While we believe using GPT-4v playground might yield better performance, especially in an In-Context learning setting, we can only rely on API due to the large scale of the experiment. 12276 8 Potential Risks Multimodal models can inadvertently perpetuate or amplify biases present in their training data. The interpretation and evaluation of synthetic images depend heavily on context. 
A multimodal model might not fully grasp certain images’ nuances or cultural sensitivities, leading to inappropriate or offensive outputs. 9 Artifacts All datasets and models are publicly accessible for academic use, and the official OpenAI API is available for academic purposes. 10 Computational Experiments All open-source model experiments were conducted on an NVIDIA RTX A6000 GPU. Approximately 500 US dollars were spent on an OpenAI API call for GPT-4v and GPT-4o experiments. 11 Acknowledgement We thank Kai Zhang, Yujie Lu, and Tianle Li for the discussion. We also thank Xueguang Ma and Tianle Li for sharing the quotas of GPT-4v and GPT-4o API calls from OpenAI. References Omri Avrahami, Dani Lischinski, and Ohad Fried. 2022. Blended diffusion for text-driven editing of natural images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18208–18218. Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, and Ludwig Schmidt. 2023. Openflamingo: An open-source framework for training large autoregressive vision-language models. arXiv preprint arXiv:2308.01390. Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. 2023. Qwen-vl: A versatile vision-language model for understanding, localization, text reading, and beyond. arXiv preprint arXiv:2308.12966. Omer Bar-Tal, Dolev Ofri-Amar, Rafail Fridman, Yoni Kasten, and Tali Dekel. 2022. Text2live: Text-driven layered image and video editing. In European Conference on Computer Vision, pages 707–723. Rohan Bavishi, Erich Elsen, Curtis Hawthorne, Maxwell Nye, Augustus Odena, Arushi Somani, and Sa˘gnak Ta¸sırlar. 2023. Introducing our multimodal models. Tim Brooks, Aleksander Holynski, and Alexei A. Efros. 2023. Instructpix2pix: Learning to follow image editing instructions. In CVPR. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, T. J. Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeff Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. ArXiv, abs/2005.14165. Mathilde Caron, Hugo Touvron, Ishan Misra, Herv’e J’egou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. 2021. Emerging properties in self-supervised vision transformers. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 9630– 9640. Wenhu Chen, Hexiang Hu, Yandong Li, Nataniel Rui, Xuhui Jia, Ming-Wei Chang, and William W Cohen. 2023. Subject-driven text-to-image generation via apprenticeship learning. arXiv preprint arXiv:2304.00186. Jaemin Cho, Yushi Hu, Roopal Garg, Peter Anderson, Ranjay Krishna, Jason Baldridge, Mohit Bansal, Jordi Pont-Tuset, and Su Wang. 2023. Davidsonian scene graph: Improving reliability in fine-grained evaluation for text-image generation. arXiv preprint arXiv:2310.18235. Guillaume Couairon, Jakob Verbeek, Holger Schwenk, and Matthieu Cord. 2022. Diffedit: Diffusion-based semantic image editing with mask guidance. In The Eleventh International Conference on Learning Representations. 
Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Albert Li, Pascale Fung, and Steven C. H. Hoi. 2023. Instructblip: Towards general-purpose vision-language models with instruction tuning. ArXiv, abs/2305.06500. deep floyd.ai. 2023. If by deepfloyd lab at stabilityai. Emily L Denton, Soumith Chintala, Rob Fergus, et al. 2015. Deep generative image models using a laplacian pyramid of adversarial networks. Advances in neural information processing systems, 28. Prafulla Dhariwal and Alexander Nichol. 2021. Diffusion models beat gans on image synthesis. In Advances in Neural Information Processing Systems, volume 34, pages 8780–8794. Curran Associates, Inc. 12277 Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Alpacafarm: A simulation framework for methods that learn from human feedback. Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. 2023a. Gptscore: Evaluate as you desire. ArXiv, abs/2302.04166. Stephanie Fu, Netanel Tamir, Shobhita Sundaram, Lucy Chai, Richard Zhang, Tali Dekel, and Phillip Isola. 2023b. Dreamsim: Learning new dimensions of human visual similarity using synthetic data. Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit Haim Bermano, Gal Chechik, and Daniel Cohen-or. 2022. An image is worth one word: Personalizing text-to-image generation using textual inversion. In The Eleventh International Conference on Learning Representations. Ian Goodfellow, Yoshua Bengio, Aaron Courville, and Yoshua Bengio. 2016. Deep learning, volume 1. MIT Press. Jing Gu, Yilin Wang, Nanxuan Zhao, Tsu-Jui Fu, Wei Xiong, Qing Liu, Zhifei Zhang, He Zhang, Jianming Zhang, HyunJoon Jung, et al. 2023. Photoswap: Personalized subject swapping in images. arXiv preprint arXiv:2305.18286. Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. 2021. Clipscore: A reference-free evaluation metric for image captioning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7514–7528. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. 2017. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30. Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems, volume 33, pages 6840–6851. Curran Associates, Inc. Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. 2021. Lora: Low-rank adaptation of large language models. In International Conference on Learning Representations. Yushi Hu, Benlin Liu, Jungo Kasai, Yizhong Wang, Mari Ostendorf, Ranjay Krishna, and Noah A Smith. 2023. Tifa: Accurate and interpretable text-toimage faithfulness evaluation with question answering. arXiv preprint arXiv:2303.11897. Kaiyi Huang, Kaiyue Sun, Enze Xie, Zhenguo Li, and Xihui Liu. 2023. T2i-compbench: A comprehensive benchmark for open-world compositional textto-image generation. ArXiv, abs/2307.06350. Mert Inan, Piyush Sharma, Baber Khalid, Radu Soricut, Matthew Stone, and Malihe Alikhani. 2021. COSMic: A coherence-aware generation metric for image descriptions. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3419– 3430, Punta Cana, Dominican Republic. Association for Computational Linguistics. 
Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. 2017. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1125–1134. Dongfu Jiang, Yishan Li, Ge Zhang, Wenhao Huang, Bill Yuchen Lin, and Wenhu Chen. 2023. Tigerscore: Towards building explainable metric for all text generation tasks. arXiv preprint arXiv:2310.00752. Jin-Hwa Kim, Yunji Kim, Jiyoung Lee, Kang Min Yoo, and Sang-Woo Lee. 2022. Mutual information divergence: A unified metric for multimodal generative models. Advances in Neural Information Processing Systems, 35:35072–35086. Younghyun Kim, Sangwoo Mo, Min-Kyung Kim, Kyungmin Lee, Jaeho Lee, and Jinwoo Shin. 2023. Bias-to-text: Debiasing unknown visual biases through language interpretation. ArXiv. Max Ku, Tianle Li, Kai Zhang, Yujie Lu, Xingyu Fu, Wenwen Zhuang, and Wenhu Chen. 2023. Imagenhub: Standardizing the evaluation of conditional image generation models. arXiv preprint arXiv:2310.01596. Nupur Kumari, Bingliang Zhang, Richard Zhang, Eli Shechtman, and Jun-Yan Zhu. 2023. Multi-concept customization of text-to-image diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1931–1941. Tuomas Kynkäänniemi, Tero Karras, Samuli Laine, Jaakko Lehtinen, and Timo Aila. 2019. Improved precision and recall metric for assessing generative models. Advances in Neural Information Processing Systems, 32. Hwanhee Lee, Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui, and Kyomin Jung. 2020. ViLBERTScore: Evaluating image caption using vision-and-language BERT. In Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems, pages 34–39, Online. Association for Computational Linguistics. Tony Lee, Michihiro Yasunaga, Chenlin Meng, Yifan Mai, Joon Sung Park, Agrim Gupta, Yunzhi Zhang, Deepak Narayanan, Hannah Benita Teufel, Marco Bellagente, et al. 2023. Holistic evaluation of textto-image models. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track. 12278 Dongxu Li, Junnan Li, and Steven CH Hoi. 2023a. Blipdiffusion: Pre-trained subject representation for controllable text-to-image generation and editing. arXiv preprint arXiv:2305.14720. Junlong Li, Shichao Sun, Weizhe Yuan, Run-Ze Fan, Hai Zhao, and Pengfei Liu. 2023b. Generative judge for evaluating alignment. arXiv preprint arXiv:2310.05470. Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. 2022. Blip: Bootstrapping language-image pretraining for unified vision-language understanding and generation. In International Conference on Machine Learning, pages 12888–12900. PMLR. Tianle Li, Max Ku, Cong Wei, and Wenhu Chen. 2023c. Dreamedit: Subject-driven image editing. arXiv preprint arXiv:2306.12624. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2023a. Visual instruction tuning. ArXiv, abs/2304.08485. Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2021. What makes good in-context examples for gpt-3? In Workshop on Knowledge Extraction and Integration for Deep Learning Architectures; Deep Learning Inside Out. Zhiheng Liu, Yifei Zhang, Yujun Shen, Kecheng Zheng, Kai Zhu, Ruili Feng, Yu Liu, Deli Zhao, Jingren Zhou, and Yang Cao. 2023b. Cones 2: Customizable image synthesis with multiple subjects. arXiv preprint arXiv:2305.19327. Lingxiao Lu, Bo Zhang, and Li Niu. 2023a. Dreamcom: Finetuning text-guided inpainting model for image composition. 
ArXiv, abs/2309.15508. Yujie Lu, Xiujun Li, William Yang Wang, and Yejin Choi. 2023b. Vim: Probing multimodal large language models for visual embedded instruction following. ArXiv, abs/2311.17647. Yujie Lu, Xianjun Yang, Xiujun Li, Xin Eric Wang, and William Yang Wang. 2023c. Llmscore: Unveiling the power of large language models in text-to-image synthesis evaluation. arXiv preprint arXiv:2305.11116. Yujie Lu, Xianjun Yang, Xiujun Li, Xin Eric Wang, and William Yang Wang. 2023d. Llmscore: Unveiling the power of large language models in text-to-image synthesis evaluation. ArXiv, abs/2305.11116. Andreas Lugmayr, Martin Danelljan, Andrés Romero, Fisher Yu, Radu Timofte, and Luc Van Gool. 2022. Repaint: Inpainting using denoising diffusion probabilistic models. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11451–11461. Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. 2021. Sdedit: Guided image synthesis and editing with stochastic differential equations. In International Conference on Learning Representations. Ron Mokady, Amir Hertz, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. 2023. Null-text inversion for editing real images using guided diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6038– 6047. OpenAI. 2023. Gpt-4 technical report. openjourney.ai. 2023. Openjourney is an open source stable diffusion fine tuned model on midjourney images. Dong Huk Park, Samaneh Azadi, Xihui Liu, Trevor Darrell, and Anna Rohrbach. 2021. Benchmark for compositional text-to-image synthesis. In Thirtyfifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1). Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yijun Li, Jingwan Lu, and Jun-Yan Zhu. 2023. Zero-shot image-to-image translation. In ACM SIGGRAPH 2023 Conference Proceedings, pages 1–11. Archit Parnami and Minwoo Lee. 2022. Learning from few examples: A summary of approaches to few-shot learning. ArXiv, abs/2203.04291. Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, and Furu Wei. 2023. Kosmos-2: Grounding multimodal large language models to the world. ArXiv, abs/2306.14824. Can Qin, Shu Zhang, Ning Yu, Yihao Feng, Xinyi Yang, Yingbo Zhou, Huan Wang, Juan Carlos Niebles, Caiming Xiong, Silvio Savarese, et al. 2023. Unicontrol: A unified diffusion model for controllable visual generation in the wild. arXiv preprint arXiv:2305.11147. Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical textconditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 1(2):3. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. Highresolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684–10695. Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. 2023. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22500–22510. runwayml. 2023. Stable diffusion inpainting. 12279 Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. 2022. Photorealistic text-to-image diffusion models with deep language understanding. 
Advances in Neural Information Processing Systems, 35:36479–36494. Mehdi SM Sajjadi, Olivier Bachem, Mario Lucic, Olivier Bousquet, and Sylvain Gelly. 2018. Assessing generative models via precision and recall. Advances in neural information processing systems, 31. Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. 2016. Improved techniques for training gans. Advances in neural information processing systems, 29. Ayush Sarkar, Hanlin Mai, Amitabh Mahapatra, Svetlana Lazebnik, David Forsyth, and Anand Bhattad. 2023. Shadows don’t lie and lines can’t bend! generative models don’t know projective geometry...for now. ArXiv, abs/2311.17138. Shelly Sheynin, Adam Polyak, Uriel Singer, Yuval Kirstain, Amit Zohar, Oron Ashual, Devi Parikh, and Yaniv Taigman. 2023. Emu edit: Precise image editing via recognition and generation tasks. arXiv preprint arXiv:2311.10089. stability.ai. 2023. Stable diffusion xl. Gemini Team, Machel Reid, Nikolay Savinov, Denis Teplyashin, Dmitry, Lepikhin, Timothy Lillicrap, Jean baptiste Alayrac, Radu Soricut, Angeliki Lazaridou, Orhan Firat, Julian Schrittwieser, Ioannis Antonoglou, Rohan Anil, Sebastian Borgeaud, Andrew Dai, Katie Millican, Ethan Dyer, Mia Glaese, Thibault Sottiaux, Benjamin Lee, Fabio Viola, Malcolm Reynolds, Yuanzhong Xu, James Molloy, Jilin Chen, Michael Isard, Paul Barham, Tom Hennigan, Ross McIlroy, Melvin Johnson, Johan Schalkwyk, Eli Collins, Eliza Rutherford, Erica Moreira, Kareem Ayoub, Megha Goel, Clemens Meyer, Gregory Thornton, Zhen Yang, Henryk Michalewski, Zaheer Abbas, Nathan Schucher, Ankesh Anand, Richard Ives, James Keeling, Karel Lenc, Salem Haykal, Siamak Shakeri, Pranav Shyam, Aakanksha Chowdhery, Roman Ring, Stephen Spencer, Eren Sezener, Luke Vilnis, Oscar Chang, Nobuyuki Morioka, George Tucker, Ce Zheng, Oliver Woodman, Nithya Attaluri, Tomas Kocisky, Evgenii Eltyshev, Xi Chen, Timothy Chung, Vittorio Selo, Siddhartha Brahma, Petko Georgiev, Ambrose Slone, Zhenkai Zhu, James Lottes, Siyuan Qiao, Ben Caine, Sebastian Riedel, Alex Tomala, Martin Chadwick, Juliette Love, Peter Choy, Sid Mittal, Neil Houlsby, Yunhao Tang, Matthew Lamm, Libin Bai, Qiao Zhang, Luheng He, Yong Cheng, Peter Humphreys, Yujia Li, Sergey Brin, Albin Cassirer, Yingjie Miao, Lukas Zilka, Taylor Tobin, Kelvin Xu, Lev Proleev, Daniel Sohn, Alberto Magni, Lisa Anne Hendricks, Isabel Gao, Santiago Ontanon, Oskar Bunyan, Nathan Byrd, Abhanshu Sharma, Biao Zhang, Mario Pinto, Rishika Sinha, Harsh Mehta, Dawei Jia, Sergi Caelles, Albert Webson, Alex Morris, Becca Roelofs, Yifan Ding, Robin Strudel, Xuehan Xiong, Marvin Ritter, Mostafa Dehghani, Rahma Chaabouni, Abhijit Karmarkar, Guangda Lai, Fabian Mentzer, Bibo Xu, YaGuang Li, Yujing Zhang, Tom Le Paine, Alex Goldin, Behnam Neyshabur, Kate Baumli, Anselm Levskaya, Michael Laskin, Wenhao Jia, Jack W. 
Rae, Kefan Xiao, Antoine He, Skye Giordano, Lakshman Yagati, Jean-Baptiste Lespiau, Paul Natsev, Sanjay Ganapathy, Fangyu Liu, Danilo Martins, Nanxin Chen, Yunhan Xu, Megan Barnes, Rhys May, Arpi Vezer, Junhyuk Oh, Ken Franko, Sophie Bridgers, Ruizhe Zhao, Boxi Wu, Basil Mustafa, Sean Sechrist, Emilio Parisotto, Thanumalayan Sankaranarayana Pillai, Chris Larkin, Chenjie Gu, Christina Sorokin, Maxim Krikun, Alexey Guseynov, Jessica Landon, Romina Datta, Alexander Pritzel, Phoebe Thacker, Fan Yang, Kevin Hui, Anja Hauth, Chih-Kuan Yeh, David Barker, Justin Mao-Jones, Sophia Austin, Hannah Sheahan, Parker Schuh, James Svensson, Rohan Jain, Vinay Ramasesh, Anton Briukhov, DaWoon Chung, Tamara von Glehn, Christina Butterfield, Priya Jhakra, Matthew Wiethoff, Justin Frye, Jordan Grimstad, Beer Changpinyo, Charline Le Lan, Anna Bortsova, Yonghui Wu, Paul Voigtlaender, Tara Sainath, Shane Gu, Charlotte Smith, Will Hawkins, Kris Cao, James Besley, Srivatsan Srinivasan, Mark Omernick, Colin Gaffney, Gabriela Surita, Ryan Burnell, Bogdan Damoc, Junwhan Ahn, Andrew Brock, Mantas Pajarskas, Anastasia Petrushkina, Seb Noury, Lorenzo Blanco, Kevin Swersky, Arun Ahuja, Thi Avrahami, Vedant Misra, Raoul de Liedekerke, Mariko Iinuma, Alex Polozov, Sarah York, George van den Driessche, Paul Michel, Justin Chiu, Rory Blevins, Zach Gleicher, Adrià Recasens, Alban Rrustemi, Elena Gribovskaya, Aurko Roy, Wiktor Gworek, Sébastien M. R. Arnold, Lisa Lee, James Lee-Thorp, Marcello Maggioni, Enrique Piqueras, Kartikeya Badola, Sharad Vikram, Lucas Gonzalez, Anirudh Baddepudi, Evan Senter, Jacob Devlin, James Qin, Michael Azzam, Maja Trebacz, Martin Polacek, Kashyap Krishnakumar, Shuo yiin Chang, Matthew Tung, Ivo Penchev, Rishabh Joshi, Kate Olszewska, Carrie Muir, Mateo Wirth, Ale Jakse Hartman, Josh Newlan, Sheleem Kashem, Vijay Bolina, Elahe Dabir, Joost van Amersfoort, Zafarali Ahmed, James Cobon-Kerr, Aishwarya Kamath, Arnar Mar Hrafnkelsson, Le Hou, Ian Mackinnon, Alexandre Frechette, Eric Noland, Xiance Si, Emanuel Taropa, Dong Li, Phil Crone, Anmol Gulati, Sébastien Cevey, Jonas Adler, Ada Ma, David Silver, Simon Tokumine, Richard Powell, Stephan Lee, Kiran Vodrahalli, Samer Hassan, Diana Mincu, Antoine Yang, Nir Levine, Jenny Brennan, Mingqiu Wang, Sarah Hodkinson, Jeffrey Zhao, Josh Lipschultz, Aedan Pope, Michael B. 
Chang, Cheng Li, Laurent El Shafey, Michela Paganini, Sholto Douglas, Bernd Bohnet, Fabio Pardo, Seth Odoom, Mihaela Rosca, Cicero Nogueira dos Santos, Kedar Soparkar, Arthur Guez, Tom Hudson, Steven Hansen, Chulayuth Asawaroengchai, Ravi Addanki, Tianhe Yu, Wojciech Stokowiec, Mina Khan, Justin Gilmer, Jaehoon Lee, Carrie Grimes Bostock, Keran Rong, Jonathan 12280 Caton, Pedram Pejman, Filip Pavetic, Geoff Brown, Vivek Sharma, Mario Luˇci´c, Rajkumar Samuel, Josip Djolonga, Amol Mandhane, Lars Lowe Sjösund, Elena Buchatskaya, Elspeth White, Natalie Clay, Jiepu Jiang, Hyeontaek Lim, Ross Hemsley, Zeyncep Cankara, Jane Labanowski, Nicola De Cao, David Steiner, Sayed Hadi Hashemi, Jacob Austin, Anita Gergely, Tim Blyth, Joe Stanton, Kaushik Shivakumar, Aditya Siddhant, Anders Andreassen, Carlos Araya, Nikhil Sethi, Rakesh Shivanna, Steven Hand, Ankur Bapna, Ali Khodaei, Antoine Miech, Garrett Tanzer, Andy Swing, Shantanu Thakoor, Lora Aroyo, Zhufeng Pan, Zachary Nado, Jakub Sygnowski, Stephanie Winkler, Dian Yu, Mohammad Saleh, Loren Maggiore, Yamini Bansal, Xavier Garcia, Mehran Kazemi, Piyush Patil, Ishita Dasgupta, Iain Barr, Minh Giang, Thais Kagohara, Ivo Danihelka, Amit Marathe, Vladimir Feinberg, Mohamed Elhawaty, Nimesh Ghelani, Dan Horgan, Helen Miller, Lexi Walker, Richard Tanburn, Mukarram Tariq, Disha Shrivastava, Fei Xia, Qingze Wang, ChungCheng Chiu, Zoe Ashwood, Khuslen Baatarsukh, Sina Samangooei, Raphaël Lopez Kaufman, Fred Alcober, Axel Stjerngren, Paul Komarek, Katerina Tsihlas, Anudhyan Boral, Ramona Comanescu, Jeremy Chen, Ruibo Liu, Chris Welty, Dawn Bloxwich, Charlie Chen, Yanhua Sun, Fangxiaoyu Feng, Matthew Mauger, Xerxes Dotiwalla, Vincent Hellendoorn, Michael Sharman, Ivy Zheng, Krishna Haridasan, Gabe Barth-Maron, Craig Swanson, Dominika Rogozi´nska, Alek Andreev, Paul Kishan Rubenstein, Ruoxin Sang, Dan Hurt, Gamaleldin Elsayed, Renshen Wang, Dave Lacey, Anastasija Ili´c, Yao Zhao, Adam Iwanicki, Alejandro Lince, Alexander Chen, Christina Lyu, Carl Lebsack, Jordan Griffith, Meenu Gaba, Paramjit Sandhu, Phil Chen, Anna Koop, Ravi Rajwar, Soheil Hassas Yeganeh, Solomon Chang, Rui Zhu, Soroush Radpour, Elnaz Davoodi, Ving Ian Lei, Yang Xu, Daniel Toyama, Constant Segal, Martin Wicke, Hanzhao Lin, Anna Bulanova, Adrià Puigdomènech Badia, Nemanja Raki´cevi´c, Pablo Sprechmann, Angelos Filos, Shaobo Hou, Víctor Campos, Nora Kassner, Devendra Sachan, Meire Fortunato, Chimezie Iwuanyanwu, Vitaly Nikolaev, Balaji Lakshminarayanan, Sadegh Jazayeri, Mani Varadarajan, Chetan Tekur, Doug Fritz, Misha Khalman, David Reitter, Kingshuk Dasgupta, Shourya Sarcar, Tina Ornduff, Javier Snaider, Fantine Huot, Johnson Jia, Rupert Kemp, Nejc Trdin, Anitha Vijayakumar, Lucy Kim, Christof Angermueller, Li Lao, Tianqi Liu, Haibin Zhang, David Engel, Somer Greene, Anaïs White, Jessica Austin, Lilly Taylor, Shereen Ashraf, Dangyi Liu, Maria Georgaki, Irene Cai, Yana Kulizhskaya, Sonam Goenka, Brennan Saeta, Ying Xu, Christian Frank, Dario de Cesare, Brona Robenek, Harry Richardson, Mahmoud Alnahlawi, Christopher Yew, Priya Ponnapalli, Marco Tagliasacchi, Alex Korchemniy, Yelin Kim, Dinghua Li, Bill Rosgen, Kyle Levin, Jeremy Wiesner, Praseem Banzal, Praveen Srinivasan, Hongkun Yu, Ça˘glar Ünlü, David Reid, Zora Tung, Daniel Finchelstein, Ravin Kumar, Andre Elisseeff, Jin Huang, Ming Zhang, Ricardo Aguilar, Mai Giménez, Jiawei Xia, Olivier Dousse, Willi Gierke, Damion Yates, Komal Jalan, Lu Li, Eri Latorre-Chimoto, Duc Dung Nguyen, Ken Durden, Praveen Kallakuri, Yaxin Liu, Matthew Johnson, 
Tomy Tsai, Alice Talbert, Jasmine Liu, Alexander Neitz, Chen Elkind, Marco Selvi, Mimi Jasarevic, Livio Baldini Soares, Albert Cui, Pidong Wang, Alek Wenjiao Wang, Xinyu Ye, Krystal Kallarackal, Lucia Loher, Hoi Lam, Josef Broder, Dan HoltmannRice, Nina Martin, Bramandia Ramadhana, Mrinal Shukla, Sujoy Basu, Abhi Mohan, Nick Fernando, Noah Fiedel, Kim Paterson, Hui Li, Ankush Garg, Jane Park, DongHyun Choi, Diane Wu, Sankalp Singh, Zhishuai Zhang, Amir Globerson, Lily Yu, John Carpenter, Félix de Chaumont Quitry, Carey Radebaugh, Chu-Cheng Lin, Alex Tudor, Prakash Shroff, Drew Garmon, Dayou Du, Neera Vats, Han Lu, Shariq Iqbal, Alex Yakubovich, Nilesh Tripuraneni, James Manyika, Haroon Qureshi, Nan Hua, Christel Ngani, Maria Abi Raad, Hannah Forbes, Jeff Stanway, Mukund Sundararajan, Victor Ungureanu, Colton Bishop, Yunjie Li, Balaji Venkatraman, Bo Li, Chloe Thornton, Salvatore Scellato, Nishesh Gupta, Yicheng Wang, Ian Tenney, Xihui Wu, Ashish Shenoy, Gabriel Carvajal, Diana Gage Wright, Ben Bariach, Zhuyun Xiao, Peter Hawkins, Sid Dalmia, Clement Farabet, Pedro Valenzuela, Quan Yuan, Ananth Agarwal, Mia Chen, Wooyeol Kim, Brice Hulse, Nandita Dukkipati, Adam Paszke, Andrew Bolt, Kiam Choo, Jennifer Beattie, Jennifer Prendki, Harsha Vashisht, Rebeca SantamariaFernandez, Luis C. Cobo, Jarek Wilkiewicz, David Madras, Ali Elqursh, Grant Uy, Kevin Ramirez, Matt Harvey, Tyler Liechty, Heiga Zen, Jeff Seibert, Clara Huiyi Hu, Andrey Khorlin, Maigo Le, Asaf Aharoni, Megan Li, Lily Wang, Sandeep Kumar, Norman Casagrande, Jay Hoover, Dalia El Badawy, David Soergel, Denis Vnukov, Matt Miecnikowski, Jiri Simsa, Praveen Kumar, Thibault Sellam, Daniel Vlasic, Samira Daruki, Nir Shabat, John Zhang, Guolong Su, Jiageng Zhang, Jeremiah Liu, Yi Sun, Evan Palmer, Alireza Ghaffarkhah, Xi Xiong, Victor Cotruta, Michael Fink, Lucas Dixon, Ashwin Sreevatsa, Adrian Goedeckemeyer, Alek Dimitriev, Mohsen Jafari, Remi Crocker, Nicholas FitzGerald, Aviral Kumar, Sanjay Ghemawat, Ivan Philips, Frederick Liu, Yannie Liang, Rachel Sterneck, Alena Repina, Marcus Wu, Laura Knight, Marin Georgiev, Hyo Lee, Harry Askham, Abhishek Chakladar, Annie Louis, Carl Crous, Hardie Cate, Dessie Petrova, Michael Quinn, Denese Owusu-Afriyie, Achintya Singhal, Nan Wei, Solomon Kim, Damien Vincent, Milad Nasr, Christopher A. Choquette-Choo, Reiko Tojo, Shawn Lu, Diego de Las Casas, Yuchung Cheng, Tolga Bolukbasi, Katherine Lee, Saaber Fatehi, Rajagopal Ananthanarayanan, Miteyan Patel, Charbel Kaed, Jing Li, Shreyas Rammohan Belle, Zhe Chen, Jaclyn Konzelmann, Siim Põder, Roopal Garg, Vinod Koverkathu, Adam Brown, Chris Dyer, Rosanne Liu, Azade Nova, Jun Xu, Alanna Walton, Alicia Parrish, Mark Epstein, Sara McCarthy, Slav Petrov, Demis Hassabis, Koray Kavukcuoglu, Jeffrey Dean, and Oriol Vinyals. 2024. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. 12281 Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. Llama: Open and efficient foundation language models. ArXiv, abs/2302.13971. Weihan Wang, Qingsong Lv, Wenmeng Yu, Wenyi Hong, Ji Qi, Yan Wang, Junhui Ji, Zhuoyi Yang, Lei Zhao, Xixuan Song, Jiazheng Xu, Bin Xu, Juanzi Li, Yuxiao Dong, Ming Ding, and Jie Tang. 2023. Cogvlm: Visual expert for pretrained language models. ArXiv, abs/2311.03079. Chen Henry Wu and Fernando De la Torre. 2023. 
A latent space of stochastic diffusion models for zeroshot image editing and guidance. In ICCV. Wenda Xu, Danqing Wang, Liangming Pan, Zhenqiao Song, Markus Freitag, William Yang Wang, and Lei Li. 2023. Instructscore: Towards explainable text generation evaluation with automatic feedback. ArXiv, abs/2305.14282. Zhengyuan Yang, Linjie Li, Kevin Lin, Jianfeng Wang, Chung-Ching Lin, Zicheng Liu, and Lijuan Wang. 2023. The dawn of lmms: Preliminary explorations with gpt-4v(ision). Shukang Yin, Chaoyou Fu, Sirui Zhao, Ke Li, Xing Sun, Tong Xu, and Enhong Chen. 2023. A survey on multimodal large language models. ArXiv, abs/2306.13549. Kai Zhang, Lingbo Mo, Wenhu Chen, Huan Sun, and Yu Su. 2023a. Magicbrush: A manually annotated dataset for instruction-guided image editing. NeurIPS dataset and benchmark track. Lvmin Zhang and Maneesh Agrawala. 2023. Adding conditional control to text-to-image diffusion models. arXiv preprint arXiv:2302.05543. Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. 2018. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR. Xinlu Zhang, Yujie Lu, Weizhi Wang, An Yan, Jun Yan, Lianke Qin, Heng Wang, Xifeng Yan, William Yang Wang, and Linda Ruth Petzold. 2023b. Gpt-4v(ision) as a generalist evaluator for vision-language tasks. ArXiv, abs/2311.01361. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2023. Judging llm-as-a-judge with mt-bench and chatbot arena. 12282 A Prompt Templates Prompt Engineering. We found that not all MLLMs can fully understand our prompt to give a desired output format consistently. Thus we required MLLMs to output a JSON format, which is supposed to be capable for most MLLMs. Prompt Design. The prompt is divided into two segments: the ‘context prompt’ and the ‘rating prompt’. The ultimate prompt provided to the model is a combination of these two segments. Context You are a professional digital artist. You will have to evaluate the effectiveness of the AI-generated image(s) based on the given rules. You will have to give your output in this way (Keep your reasoning concise and short.): { "score" : [...], "reasoning" : "..." } PQ Rating Prompt Template (for all tasks) RULES: The image is an AI-generated image. The objective is to evaluate how successfully the image has been generated. On a scale 0 to 10: A score from 0 to 10 will be given based on image naturalness. ( 0 indicates that the scene in the image does not look natural at all or gives an unnatural feeling such as a wrong sense of distance, wrong shadow, or wrong lighting. 10 indicates that the image looks natural. ) A second score from 0 to 10 will rate the image artifacts. ( 0 indicates that the image contains a large portion of distortion, watermarks, scratches, blurred faces, unusual body parts, or subjects not harmonized. 10 indicates the image has no artifacts. ) Put the score in a list such that output score = [naturalness, artifacts] 12283 SC Rating Prompt Template (Text-Guided Image Generation) RULES: The image is an AI-generated image according to the text prompt. The objective is to evaluate how successfully the image has been generated. On a scale 0 to 10: A score from 0 to 10 will be given based on the success in following the prompt. (0 indicates that the AI-generated image does not follow the prompt at all. 10 indicates the AI-generated image follows the prompt perfectly.) 
Put the score in a list such that output score = [score]. Text Prompt: <prompt> SC Rating Prompt Template (Text/Mask-Guided Image Editing) RULES: Two images will be provided: The first being the original AI-generated image and the second being an edited version of the first. The objective is to evaluate how successfully the editing instruction has been executed in the second image. Note that sometimes the two images might look identical due to the failure of the image edit. On scale of 0 to 10: A score from 0 to 10 will be given based on the success of the editing. (0 indicates that the scene in the edited image does not follow the editing instructions at all. 10 indicates that the scene in the edited image follows the editing instruction text perfectly.) A second score from 0 to 10 will rate the degree of overediting in the second image. (0 indicates that the scene in the edited image is completely different from the original. 10 indicates that the edited image can be recognized as a minimally edited yet effective version of the original.) Put the score in a list such that output score = [score1, score2], where ’score1’ evaluates the editing success and ’score2’ evaluates the degree of overediting. Editing instruction: <instruction> SC Rating Prompt Template (Control-Guided Image Generation) RULES: Two images will be provided: The first being a processed image (e.g. Canny edges, openpose, grayscale, etc.) and the second being an AI-generated image using the first image as guidance. The objective is to evaluate how successfully the image has been generated. On scale 0 to 10: A score from 0 to 10 will be given based on the success in following the prompt. (0 indicates that the second image does not follow the prompt at all. 10 indicates the second image follows the prompt perfectly.) A second score from 0 to 10 will rate how well the generated image is following the guidance image. (0 indicates that the second image does not follow the guidance at all. 10 indicates that the second image is following the guidance image.) Put the score in a list such that output score = [score1, score2], where ’score1’ evaluates the prompt and ’score2’ evaluates the guidance. Text Prompt: <prompt> 12284 SC Rating Prompt Template (Subject-Driven Image Generation) RULES: Two images will be provided: The first is a token subject image and the second is an AI-generated image using the first image as guidance. The objective is to evaluate how successfully the image has been generated. On a scale of 0 to 10: A score from 0 to 10 will be given based on the success in following the prompt. (0 indicates that the second image does not follow the prompt at all. 10 indicates the second image follows the prompt perfectly.) A second score from 0 to 10 will rate how well the subject in the generated image resembles the token subject in the first image. (0 indicates that the subject in the second image does not look like the token subject at all. 10 indicates the subject in the second image looks exactly like the token subject.) Put the score in a list such that output score = [score1, score2], where ’score1’ evaluates the prompt and ’score2’ evaluates the resemblance. Text Prompt: <prompt> SC Rating Prompt Template (Subject-Guided Image Editing) RULES: Three images will be provided: The first image is an input image to be edited. The second image is a token subject image. The third image is an AI-edited image from the first image. it should contain a subject that looks like the subject in the second image. 
The objective is to evaluate how successfully the image has been edited. On a scale 0 to 10: A score from 0 to 10 will rate how well the subject in the generated image resembles the token subject in the second image. (0 indicates that the subject in the third image does not look like the token subject at all. 10 indicates the subject in the third image looks exactly like the token subject.) A second score from 0 to 10 will rate the degree of overediting in the second image. (0 indicates that the scene in the edited image is completely different from the first image. 10 indicates that the edited image can be recognized as a minimally edited yet effective version of the original.) Put the score in a list such that output score = [score1, score2], where ’score1’ evaluates the resemblance and ’score2’ evaluates the degree of overediting. Subject: <subject> 12285 B Supplementary Information B.1 Human Correlation Study In our paper content, we only reported the Spearman correlations. Here we included the Pearson and Kendall correlation in Table 8 for comparative analysis of Human-to-Human (H-H) correlation. B.2 Zero-shot vs One-shot on VIE We applied only zero-shot and one-shot experiments in this paper because not even GPT-4v can produce anything with few-shot setting in our context. We report full table of GPT-4v performance in Table 7 for zero-shot vs one-shot results. B.3 Autometrics vs Human Detail results See Table 9, 10, 11, 12 for detail statistics of CLIPScore, LPIPS, DINO, and CLIP-I correlation with human ratings. B.4 ImagenHub Models used • Text-guided Image Generation: Stable Diffusion (SD) (Rombach et al., 2022), SDXL (stability.ai, 2023), DALLE-2 (Ramesh et al., 2022), DeepFloydIF (deep floyd.ai, 2023), OpenJourney (openjourney.ai, 2023). • Mask-guided Image Editing: SD (runwayml, 2023), SDXL (stability.ai, 2023), GLIDE, BlendedDiffusion (Avrahami et al., 2022) • Text-guided Image Editing: MagicBrush (Zhang et al., 2023a), InstructPix2Pix (Brooks et al., 2023), Prompt-to-Prompt (Mokady et al., 2023), CycleDiffusion (Wu and la Torre, 2023), SDEdit (Meng et al., 2021), Text2Live (Bar-Tal et al., 2022), DiffEdit (Couairon et al., 2022), Pix2PixZero (Parmar et al., 2023). • Subject-driven Image Generation: DreamBooth (Ruiz et al., 2023), DreamBooth-Lora (Hu et al., 2021), BLIP-Diffusion (Li et al., 2023a), TextualInversion (Gal et al., 2022). • Subject-driven Image Editing: PhotoSwap (Gu et al., 2023), DreamEdit (Li et al., 2023c), BLIP-Diffusion. • Multi-concept Image Composition: CustomDiffusion (Kumari et al., 2023), DreamBooth, TextualInversion. • Control-guided Image Generation: ControlNet (Zhang and Agrawala, 2023), UniControl (Qin et al., 2023). B.5 ImagenHub Human data information We showed the total human rating data we used for each task in Table 13. 
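For reference, the correlation coefficients compared in B.1–B.3 (Pearson, Spearman, and Kendall) can be computed as in the minimal sketch below; this is our own illustration using SciPy with hypothetical score arrays, not the code used to produce the reported tables.

```python
from scipy import stats

# Hypothetical per-image scores: an automatic metric (or MLLM rating)
# and the corresponding averaged human ratings for the same images.
metric_scores = [6.5, 2.0, 8.0, 4.5, 7.0, 3.0, 9.0, 5.5]
human_ratings = [0.7, 0.2, 0.9, 0.5, 0.6, 0.3, 1.0, 0.4]

pearson, _ = stats.pearsonr(metric_scores, human_ratings)
spearman, _ = stats.spearmanr(metric_scores, human_ratings)
kendall, _ = stats.kendalltau(metric_scores, human_ratings)

print(f"Pearson:  {pearson:.4f}")
print(f"Spearman: {spearman:.4f}")
print(f"Kendall:  {kendall:.4f}")
```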
12286 Backbone: GPT-4v Zero-Shot One-Shot Image Model M-HSC 0shot M-HPQ 0shot M-HO 0shot M-HSC 1shot M-HPQ 1shot M-HO 1shot Text-guided Image Generation DeepFloydIF 0.5182 0.3509 0.5479 0.4272 0.2048 0.3849 Stable Diffusion XL 0.5684 0.2823 0.5301 0.5136 0.1522 0.3735 Dalle-2 0.5046 0.2192 0.4871 0.4469 0.1822 0.5364 OpenJourney 0.4835 0.1624 0.4648 0.4563 0.2730 0.3750 Stable Diffusion 2.1 0.5957 0.1981 0.4658 0.5988 0.0820 0.3311 Mask-guided Image Editing SDXL-Inpainting 0.5461 0.2331 0.4772 0.5308 0.3460 0.5261 SD-Inpainting 0.5607 0.4253 0.544 0.3759 0.3446 0.3969 GLIDE 0.4663 0.2816 0.4499 0.4247 0.1056 0.3536 BlendedDiffusion 0.3695 0.2363 0.2563 0.4054 0.1624 0.3283 Text-guided Image Editing MagicBrush 0.3273 0.3696 0.3395 0.3613 0.5135 0.4727 InstructPix2Pix 0.3094 0.4461 0.3363 0.4423 0.3106 0.3921 Prompt-to-prompt 0.3094 0.3696 0.3395 0.2514 0.2057 0.2068 CycleDiffusion 0.4488 0.6124 0.3927 0.3522 0.3374 0.1578 SDEdit 0.1607 0.3944 0.1570 0.1754 0.3837 0.2814 Text2Live 0.1875 0.4158 0.1964 0.2817 0.2357 0.2753 DiffEdit 0.1803 0.5957 0.0247 0.1761 0.4874 0.1281 Pix2PixZero 0.2144 0.4502 0.2193 -0.0588 0.3609 -0.0588 Subject-driven Image Generation DreamBooth 0.4975 0.2199 0.4787 0.5409 0.1930 0.5848 BLIP-Diffusion 0.3367 0.0663 0.2845 0.1176 0.3402 0.1194 TextualInversion 0.5564 0.2398 0.4795 0.3882 0.0010 0.3035 DreamBooth-Lora 0.2938 0.2448 0.3285 0.0856 0.3860 0.1225 Subject-driven Image Editing PhotoSwap 0.3711 0.1246 0.1598 -0.0782 0.0385 -0.1063 DreamEdit 0.3817 0.4419 0.1580 0.1384 0.3037 0.0954 BLIP-Diffusion 0.2671 0.3488 0.1379 -0.1368 0.1333 -0.0309 Multi-concept Image Composition CustomDiffusion 0.4781 0.431 0.4263 0.5064 0.0194 0.4867 DreamBooth 0.1494 0.2367 0.232 0.0396 0.0633 0.0694 TextualInversion 0.3703 0.269 0.3857 0.0183 0.2745 0.0266 Control-guided Image Generation ControlNet 0.4270 0.4827 0.4753 0.3561 0.4052 0.4055 UniControl 0.5797 0.4173 0.3972 0.4655 0.4737 0.4988 Table 7: Comprehensive study on the Spearman correlation between GPT-4v-to-Human (GPT-4v-H) ratings across various models, in zero-shot (0shot) and one-shot (1shot) settings, across different test categories: Semantic Consistency (SC), Perceptual Quality (PQ), and Overall (O). 
12287 Image Model H-HSC pear H-HPQ pear H-HO pear H-HSC spea H-HPQ spea H-HO spea H-HSC kend H-HPQ kend H-HO kend Text-guided Image Generation DeepFloydIF 0.5933 0.3086 0.5595 0.5635 0.3029 0.5131 0.5360 0.2878 0.4581 Stable Diffusion XL 0.5990 0.4957 0.5945 0.5807 0.4992 0.5896 0.5468 0.4719 0.5289 Dalle-2 0.5208 0.5024 0.4630 0.5019 0.4680 0.4348 0.4654 0.4459 0.3820 OpenJourney 0.5678 0.3853 0.5513 0.5321 0.3600 0.4861 0.5017 0.3442 0.4347 Stable Diffusion 2.1 0.6202 0.3227 0.5397 0.5979 0.2772 0.4962 0.5707 0.2636 0.4557 Mask-guided Image Editing SDXL-Inpainting 0.6550 0.5929 0.6578 0.6574 0.5928 0.6556 0.6160 0.5382 0.6040 SD-Inpainting 0.6606 0.5197 0.5716 0.6590 0.5166 0.5394 0.6222 0.4728 0.5039 GLIDE 0.6253 0.5496 0.6144 0.5894 0.5530 0.5695 0.5573 0.4984 0.5357 BlendedDiffusion 0.5863 0.5873 0.5879 0.5051 0.5511 0.4224 0.4911 0.5346 0.4157 Text-guided Image Editing MagicBrush 0.6217 0.5251 0.6288 0.6219 0.5190 0.6289 0.5740 0.4740 0.5651 InstructPix2Pix 0.6573 0.6158 0.6632 0.6600 0.5955 0.6561 0.6250 0.5502 0.6157 Prompt-to-prompt 0.5954 0.5084 0.5699 0.5880 0.5028 0.5811 0.5611 0.4537 0.5470 CycleDiffusion 0.5908 0.5848 0.6101 0.5482 0.5887 0.5891 0.5228 0.5378 0.5600 SDEdit 0.2303 0.4717 0.1674 0.2657 0.4705 0.1991 0.2618 0.4211 0.1957 Text2Live 0.3167 0.6013 0.2890 0.2675 0.5757 0.1524 0.2648 0.5440 0.1503 DiffEdit 0.2513 0.6331 0.3570 0.3286 0.6214 0.4265 0.3268 0.5924 0.4247 Pix2PixZero 0.4747 0.5763 0.5247 0.3311 0.5770 0.3327 0.3305 0.5299 0.3312 Subject-driven Image Generation DreamBooth 0.6337 0.3988 0.5834 0.6452 0.3871 0.6208 0.6010 0.3787 0.5625 BLIP-Diffusion 0.4970 0.2663 0.4394 0.4458 0.3263 0.4390 0.4090 0.3180 0.3993 TextualInversion 0.5987 0.3219 0.5533 0.6000 0.3351 0.5686 0.5683 0.3078 0.5226 DreamBooth-Lora 0.5014 0.4571 0.4278 0.3903 0.4430 0.3878 0.3831 0.4169 0.3756 Subject-driven Image Editing PhotoSwap 0.4685 0.3213 0.5025 0.4805 0.2961 0.4973 0.4412 0.2768 0.4368 DreamEdit 0.5684 0.2319 0.5520 0.5867 0.2245 0.5460 0.5485 0.2130 0.4892 BLIP-Diffusion 0.5411 0.4086 0.5074 0.5359 0.4033 0.5051 0.5221 0.3779 0.4857 Multi-concept Image Composition CustomDiffusion 0.7257 0.4889 0.7215 0.7256 0.4838 0.7217 0.7101 0.4665 0.6963 DreamBooth 0.6560 0.6583 0.6575 0.6209 0.6423 0.6222 0.6068 0.6228 0.6022 TextualInversion 0.6833 0.6009 0.6799 0.6990 0.5803 0.6980 0.6935 0.5563 0.6898 Control-guided Image Generation ControlNet 0.6166 0.5730 0.5830 0.6144 0.5682 0.5868 0.5585 0.5408 0.5429 UniControl 0.6014 0.6194 0.6131 0.6060 0.6062 0.5954 0.5577 0.5741 0.5533 Table 8: Comparative analysis of Human-to-Human (H-H) correlation ratings across various models. Metrics used include Pearson’s (pear), Spearman’s (spea), and Kendall’s (kend) correlation coefficients, across different test categories: Semantic Consistency (SC), Perceptual Quality (PQ), and Overall (O). 12288 Metric: CLIP-Score Image Model M-HSC corr M-HPQ corr M-HO corr Text-guided Image Generation DeepFloydIF 0.0272 0.1025 0.0332 OpenJourney -0.1628 -0.0875 -0.1907 DALLE2 -0.0946 0.1469 -0.0381 SD -0.0528 -0.0691 -0.0632 SDXL -0.1265 -0.1500 -0.1830 Table 9: CLIP-Score vs Human correlation on Textguided image generation task. 
Metric: LPIPS Image Model M-HSC corr M-HPQ corr M-HO corr Text-guided Image Editing InstructPix2Pix 0.1652 0.4717 0.2045 CycleDiffusion -0.0936 0.3193 -0.0211 MagicBrush 0.2146 0.3722 0.2667 Text2Live -0.0812 0.2906 -0.0787 DiffEdit 0.0943 0.3299 0.1440 Pix2PixZero 0.1379 0.0256 0.1370 Prompt2prompt 0.1918 0.1929 0.1798 SDEdit 0.1381 0.0441 0.0857 Mask-guided Image Editing Glide -0.1098 0.0647 -0.0662 BlendedDiffusion 0.0980 0.1371 0.0598 SDInpaint -0.2447 -0.0749 -0.2110 SDXLInpaint -0.1496 0.1318 -0.0607 Control-guided Image Generation ControlNet 0.3447 0.3916 0.3888 UniControl 0.4319 0.5048 0.4904 Table 10: LPIPS (signs inverted) vs Human correlation on several tasks. Metric: DINO Model M-HSC corr M-HPQ corr M-HO corr Multi-Concept Image Composition TextualInversion 0.0759 -0.2746 0.0754 DreamBooth 0.1027 -0.0761 0.1054 CustomDiffusion 0.1159 -0.1466 0.1074 Subject-Driven Image Generation DreamBoothLora 0.2335 0.0684 0.2535 BLIPDiffusion (Gen) 0.4718 0.0798 0.4751 TextualInversion 0.6508 -0.0169 0.6450 DreamBooth 0.4153 0.3535 0.4396 Subject-Driven Image Editing BLIPDiffusion (Edit) 0.4063 -0.1081 0.4000 DreamEdit 0.1994 -0.0877 0.1878 PhotoSwap 0.3300 0.0814 0.3424 Table 11: DINO vs Human correlation on several tasks. Metric: CLIP-I Image Model M-HSC corr M-HPQ corr M-HO corr Multi-Concept Image Composition TextualInversion 0.1511 -0.1847 0.1523 DreamBooth 0.0741 -0.1166 0.0758 CustomDiffusion 0.2319 0.0116 0.2246 Subject-Driven Image Generation DreamBoothLora 0.2499 0.1801 0.2615 BLIPDiffusion (Gen) 0.2611 0.1031 0.2660 TextualInversion 0.5776 0.0362 0.5775 DreamBooth 0.1324 0.3648 0.1587 Subject-Driven Image Editing BLIPDiffusion (Edit) 0.4202 -0.0798 0.3844 DreamEdit 0.1927 0.1535 0.1955 PhotoSwap 0.2613 0.3028 0.2874 Table 12: CLIP-I vs Human correlation on several tasks. Data amount per model Total Human rating data Task: Text-guided Image Generation 197 2955 Task: Mask-guided Image Editing 179 2148 Task: Text-guided Image Editing 179 4296 Task: Subject-driven Image Generation 150 1800 Task: Subject-driven Image Editing 154 1386 Task: Multi-concept Image Composition 102 918 Task: Control-guided Image Generation 150 900 Sum of 7 tasks 14403 Table 13: Number of human ratings from ImagenHub used in this paper. 12289 C Backbone Performances C.1 Parsing MLLM outputs We tried to parse the output using Regex and modify the format requirement if the MLLM fail to do so. If the output failed to pass our parsing rules, we fill random value as output to penalize the correlation. C.2 Observations of GPT-4o. GPT-4o-2024-05-13 tends to be the best MLLM in this context. It can understand every task instruction in VIESCORE and produce reasonable scores and rationale. C.3 Observations of Gemini-Pro. Gemini-1.5-Pro can understand every task instruction in VIESCORE and produce reasonable scores and rationale, achieving similar performance as GPT-4v. C.4 Observations of GPT-4v. GPT-4-vision-preview tends to be the second best MLLM in this context. It can understand every task instruction in VIESCORE and produce reasonable scores and rationale. C.5 Observations of LLaVA. LLaVA-1.5-7B can also understand every task instruction in VIESCORE and produce reasonable rationale. However, the scores produced tend to be concentrated toward certain numbers. C.6 Observations of Qwen-VL. Qwen-VL-7B does not understand the meaning of delimiter. However, it was able to output a JSONlike dictionary following the instructions on both SC and PQ. The rationale produced is often not reasonable. C.7 Observations of BLIP2. 
BLIP-2 FLAN-T5-XXL often failed to produce results in the format required by the instructions, especially for PQ. It also tends to give a score of 0 for SC. Prompt engineering in our context does not solve the issue. C.8 Observations of InstructBLIP. InstructBLIP-T5-XL shows the same behavior as BLIP-2 FLAN-T5-XXL. C.9 Observations of Fuyu. Fuyu-8B always outputs 0 and fails to follow our instructions. C.10 Observations of CogVLM. CogVLM tends to output numbers that fall outside the range [0, 10] and often fails to follow the required format. Prompt engineering in our context does not solve the issue. C.11 Observations of OpenFlamingo. OpenFlamingo simply prints blank output in our context.
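To make the parsing-and-fallback procedure described in Appendix C.1 concrete, the following is a minimal sketch, under our own assumptions about the raw output, of extracting the required JSON object with a regular expression and filling in random scores when parsing fails; it is not the exact code used in the experiments.

```python
import json
import random
import re

def parse_vie_output(raw_text: str, num_scores: int = 2) -> dict:
    """Extract {"score": [...], "reasoning": "..."} from raw MLLM output.
    If no well-formed object is found, fill in random scores so that
    failed parses penalize the correlation (as described in C.1)."""
    match = re.search(r"\{.*\}", raw_text, re.DOTALL)
    if match:
        try:
            parsed = json.loads(match.group(0))
            scores = parsed.get("score", [])
            if isinstance(scores, list) and len(scores) == num_scores:
                return {"score": [float(s) for s in scores],
                        "reasoning": parsed.get("reasoning", "")}
        except (json.JSONDecodeError, TypeError, ValueError):
            pass
    # Fallback: random values in the 0-10 range penalize the correlation.
    return {"score": [random.uniform(0, 10) for _ in range(num_scores)],
            "reasoning": ""}

print(parse_vie_output('{"score": [7, 9], "reasoning": "looks natural"}'))
print(parse_vie_output("I am sorry, but I cannot process these images."))
```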
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12291–12301 August 11-16, 2024 ©2024 Association for Computational Linguistics Tree Transformer’s Disambiguation Ability of Prepositional Phrase Attachment and Garden Path Effects Lingling Zhou and Suzan Verberne and Gijs Wijnholds Leiden Institute of Advanced Computer Science, Leiden University [email protected] {s.verberne,g.j.wijnholds}@liacs.leidenuniv.nl Abstract This work studies two types of ambiguity in natural language: prepositional phrase (PP) attachment ambiguity, and garden path constructions. Due to the different nature of these ambiguities – one being structural, the other incremental in nature – we pretrain and evaluate the Tree Transformer of Wang et al. (2019), an unsupervised Transformer model that induces tree representations internally. To assess PP attachment ambiguity we inspect the model’s induced parse trees against a newly prepared dataset derived from the PP attachment corpus (Ratnaparkhi et al., 1994). Measuring garden path effects is done by considering surprisal rates of the underlying language model on a number of dedicated test suites, following Futrell et al. (2019). For comparison we evaluate a pretrained supervised BiLSTMbased model trained on constituency parsing as sequence labelling (Gómez-Rodríguez and Vilares, 2018). Results show that the unsupervised Tree Transformer does exhibit garden path effects, but its parsing ability is far inferior to the supervised BiLSTM, and it is not as sensitive to lexical cues as other large LSTM models, suggesting that supervised parsers based on a pre-Transformer architecture may be the better choice in the presence of ambiguity. 1 Introduction One of the core challenges for Natural Language Understanding (NLU) systems is that they must advertently deal with ambiguity; depending on the type of ambiguity, systems may or may not incorporate mechanisms for handling ambiguous input. A typical example of structural ambiguity, i.e. an ambiguity that is not resolved without external context, is that of prepositional phrase attachment. An often used example, with its two possible interpretations, is given below: (a) I saw the man with the telescope (b) I saw the man through the telescope (c) I saw the man that had the telescope Another form of syntactic ambiguity is given by garden path effects (Bever, 1970), where a comprehender is guided toward a locally plausible but ultimately incorrect parse, exemplified in (d) The horse raced past the barn fell. Here, the reader initially interprets the horse to be the subject of ‘raced’, only to revise this assumption after reading the final verb ‘fell’. In this paper, we investigate to what extent Transformer-based models are sensitive to these ambiguities.To that end, we rely on the language modelling and parsing components of a model; the language modelling component allows for measuring surprisal, and thus measuring the presence and sensitivity of garden path effects (Futrell et al., 2019). The parsing component is required in order to measure a model’s capacity to disambiguate cases of PP attachment. Rather than assessing whether models can identify ambiguity, we investigate to what extent models can disambiguate by means of their parsing components. 
Thus, we focus on the Tree Transformer model (Wang et al., 2019), which adds a constrained attention layer to the base Transformer architecture to do unsupervised parsing while training on a language modelling objective. In doing so, the model allows us to evaluate the parser as well as the language model on the two different evaluation tasks. For comparison, we evaluate the parsing as sequence labelling approach of Gómez-Rodríguez and Vilares (2018), where a BiLSTM-based language model is trained in a supervised fashion. As an extra baseline, we compare against an end-to-end setup with LLM prompting (Tunstall et al., 2023). In terms of evaluation data, we rely on two sources: for measuring PP attachment ambiguities, we modernize the existing prepositional phrase attachment corpus (Ratnaparkhi, 1998), using a standard BERT language model, to be more naturalistic and closer to the kind of data our parsers will have observed during training. Second, for measuring garden path effects, we employ existing datasets from previous work (Futrell et al., 2019). The results from our comparative evaluation not only assess the Tree Transformer's proficiency in parsing and language modeling but also provide a deeper understanding of its intrinsic mechanisms when handling complex grammar structures and ambiguity. We summarize our main contributions as follows:
• We convert the PP attachment corpus into naturalistic sentences using BERT.
• We train a Tree Transformer from scratch on the Penn Treebank, and evaluate it on our novel data to quantify the Tree Transformer's performance.
• We evaluate a pre-trained BiLSTM on our novel data to quantify the model's performance.
• We quantify the garden path effect size of the Tree Transformer and its sensitivity to subtle lexical cues, compared with models from other work.
All our code and data are available online at https://github.com/L-innnng/Tree-Transformer-analysis. 2 Background PP Attachment ambiguity Due to the widespread presence of prepositional phrase attachment ambiguity in natural language, it has garnered significant attention from scholars. In early research, structure-based methods have been explored, such as the Right Association (Kimball, 1973) and Minimal Attachment (Frazier, 1978) methods, which, despite their popularity due to their simplicity, exhibit notable shortcomings and perform suboptimally in practical applications. Additionally, statistics-based methods have been employed. Hindle and Rooth (1993) introduced the first corpus-based co-occurrence statistical method, known as "lexical association". To address sparsity issues in these methods, Collins and Brooks (1995) proposed a back-off model, and other work has applied WordNet (Fellbaum, 1998) classes (Stetina and Nagao, 1997; Toutanova et al., 2004). However, these methods are limited due to their specificity, making them cumbersome for practical applications. Especially in the context of neural language modelling, the exploration of PP attachment ambiguity is limited. Garden Path Effects On the other hand, garden path effects have been studied in the context of neural language models: van Schijndel and Linzen (2018a,b) demonstrate garden path effects in LSTM models by simulating human reading times. Surprisal theory (Hale, 2001; Levy, 2008) proposed that observed slowdowns are a result of the unpredictability of each word appearing in a sentence.
Subsequently, the surprisal introduced by grammar-based language models has been shown to be correlated with reading time by Demberg and Keller (2008). Other work has shown that the surprisal in RNNs is a powerful predictor of human reading times (Frank and Bod, 2011; Goodkind and Bicknell, 2018). van Schijndel and Linzen (2018a) demonstrated the ability of RNNs to make reading time predictions comparable to grammar-based language models. In addition to validating the surprisal theory across different models, Futrell et al. (2019) tested multiple LSTM and RNNG models to determine whether they exhibit the garden path effect and observed their levels of surprisal at the point of disambiguation. Their results serve as a baseline in our work. (Un)supervised parsing and language models Much previous research that attempts to incorporate tree structures into neural networks has focused on supervised syntactic parsing, relying on annotated parse trees. Through clever encoding of the parse tree this process can be simplified to sequence labelling, as proposed by Gómez-Rodríguez and Vilares (2018) for a BiLSTM-based model. On the side of unsupervised parsing, researchers have explored various techniques, hoping that models could learn latent tree structures from unlabeled data without explicit syntactic annotations. Among them, Yogatama et al. (2016) cast the problem as a reinforcement learning task. In addition, there have been attempts based on recurrent neural networks, such as PRPN (Shen et al., 2017), On-LSTM (Shen et al., 2018), and Tree-LSTMs (Tai et al., 2015), and variations like URNNG (Kim et al., 2019) and DIORA (Drozdov et al., 2019). More recently, adaptations of the base Transformer architecture (Vaswani et al., 2017) have been proposed for unsupervised parsing, like Transformer Grammars (Sartran et al., 2022) and the Tree Transformer (Wang et al., 2019), which is our focus. 3 Datasets In our experiments we use four datasets for evaluation, which we discuss in order below. 3.1 Prepositional Phrase Attachment The prepositional phrase attachment corpus (Ratnaparkhi et al., 1994) was originally extracted from the Penn Treebank (PTB) (Marcus et al., 1993) and was used to study PP attachment ambiguity. In this corpus, each example comes as a quadruple {v, n1, p, n2} and a label l to indicate the actual attachment of the PP (p, n2), which can be attached to the noun n1 or to the verb v, resulting in different syntactic structures and meanings. Table 1 displays an example from the original corpus.
Category            Word
Verb (v)            provide
Noun (n1)           services
Preposition (p)     for
Noun (n2)           customers
Label (l)           V
Table 1: An original example from the prepositional phrase attachment corpus (Ratnaparkhi et al., 1994).
Generating naturalistic sentences In order to have naturalistic example sentences that are still naturally ambiguous, we convert the examples from the PP attachment corpus into longer phrases in a structured way. We only consider cases from the original corpus that were extracted from the WSJ-test set, because for these cases we can be sure that they will not occur in the PTB train set used for training the parser later on. We use a pretrained BERT model (Devlin et al., 2018) on this set to expand the quadruple to 7-tuples {n0, v, m1, n1, p, m2, n2} to form a complete sentence, leaving the original attachment labels unchanged. This is done as follows. Prior to using the language model, data cleaning is necessary. Gerunds or present participles, past participles, and verbs like "be" that do not conform to the required forms are filtered out. Symbols and numbers that fail to meet the requirements are removed. Subsequently, the process of generating the sentence begins. We then mask the subject position, i.e. "[MASK] v n1 p n2", and take the personal pronoun with the highest score as the subject n0. The following step is to mask the modifier of noun n1, as in "n0 v [MASK] n1 p n2", and the highest-scoring noun, adjective, determiner, or possessive pronoun is used as m1. The modifier m2 of the second noun is obtained in the same way. Throughout this process we use the POS tagger from NLTK to identify words of the correct type, and we specifically use Huggingface's bert-base-uncased implementation for the masked predictions. As a result we generate 1424 sentences used for evaluation. An example conversion of the item in Table 1 is "They provide various services for their customers".
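A minimal sketch of this expansion procedure is given below; it is our own simplified illustration (one mask prediction per slot and a reduced POS filter), not the authors' exact pipeline, and the allowed tag sets are assumptions.

```python
import nltk
from transformers import pipeline

# requires: nltk.download("averaged_perceptron_tagger")
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Simplified filter: noun, adjective, determiner, possessive pronoun.
ALLOWED_MODIFIER_TAGS = {"NN", "JJ", "DT", "PRP$"}

def best_candidate(template: str, allowed_tags: set) -> str:
    """Return the highest-scoring [MASK] filler whose POS tag is allowed."""
    for cand in fill_mask(template):        # candidates are returned by score
        word = cand["token_str"].strip()
        tag = nltk.pos_tag([word])[0][1]
        if tag in allowed_tags:
            return word
    return ""

def expand(v: str, n1: str, p: str, n2: str) -> str:
    """Expand a PP-attachment quadruple {v, n1, p, n2} into a full sentence."""
    n0 = best_candidate(f"[MASK] {v} {n1} {p} {n2}.", {"PRP"})            # subject pronoun
    m1 = best_candidate(f"{n0} {v} [MASK] {n1} {p} {n2}.", ALLOWED_MODIFIER_TAGS)
    m2 = best_candidate(f"{n0} {v} {m1} {n1} {p} [MASK] {n2}.", ALLOWED_MODIFIER_TAGS)
    return f"{n0} {v} {m1} {n1} {p} {m2} {n2}".capitalize()

print(expand("provide", "services", "for", "customers"))
# e.g. "They provide various services for their customers"
```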
3.2 Garden Path Effects In order to investigate models' sensitivity to garden path effects, we rely on three datasets introduced by Futrell et al. (2019): Main-verb/Reduced-relative (MV/RR), NP/Z (Overt Object) and NP/Z (Verb Transitivity). MV/RR Ambiguity This dataset contains 28 examples, each consisting of 4 sentences. The first verb of the sentences is ambiguous: it can be read both as the main verb of the sentence and as a word introducing a reduced relative clause. This ambiguity can persist for an extensive stretch of the following context until the disambiguator appears:
(1) a. The women brought the sandwich from the kitchen fell in the dining room
b. The women given the sandwich from the kitchen fell in the dining room
c. The women who was brought the sandwich from the kitchen fell in the dining room
d. The women who was given the sandwich from the kitchen fell in the dining room
In sentence 1a, the verb "brought" is initially analyzed as part of the main verb phrase, but upon the appearance of the disambiguator "fell", readers' comprehension is severely disrupted and the verb "brought" has to be reanalyzed as part of a relative clause. In contrast, the garden path effect theoretically should be reduced in sentences like 1b: compared to "gave", "given" is more explicit, indicating that it should not be the main verb of the sentence. Furthermore, the garden path effect should be eliminated in sentences like 1c, where the presence of the words "who was" makes the structure clear to the reader. NP/Z Ambiguity This ambiguity arises from a noun phrase which can act either as the direct object of the main verb in the subordinate clause or as the subject of the main clause. There are two datasets with a similar configuration, using overt objects and intransitive verbs to disambiguate, respectively. Additionally, using a comma to mark the end of a clause makes the sentence easier at the disambiguator. Both of them contain 24 examples, each with 4 sentences. Overt Object As shown in sentence 2a, prior to encountering "burst", "shot" is naturally interpreted as a transitive verb with "the woman" as its direct object. However, upon the appearance of "burst", readers realize that "shot" should be considered as an intransitive verb with "the woman" as the subject of the main clause. In sentence 2b, introducing the explicit object "the gun" of the transitive verb effectively reduces or eliminates the ambiguity before the disambiguator appears.
(2) a. As the gangster shot the woman burst into hysterics
b.
As the gangster shot his gun the woman burst into hysterics
c. As the gangster shot, the woman burst into hysterics
d. As the gangster shot his gun, the woman burst into hysterics
Verb Transitivity As shown in sentence 3b, this dataset replaces the transitive verb "shot" with the intransitive verb "laughed" to reduce the garden path effect, by making readers believe that "the woman" is not its object. However, this lexical information about syntactic structure is so subtle that it is not known whether humans are as sensitive to it as the theory predicts.
(3) a. As the gangster shot the woman burst into hysterics
b. As the gangster laughed the woman burst into hysterics
c. As the gangster shot, the woman burst into hysterics
d. As the gangster laughed, the woman burst into hysterics
4 Experimental set-up 4.1 Tree Transformer Model For training and validation of the Tree Transformer we use the standard splits of the Penn Treebank: sections 2 to 21 for training (WSJ-train) and section 23 for testing (WSJ-test). Our trained model is a faithful replication of the model by Wang et al. (2019) except for differences in the preprocessing of the training set. Where Wang et al. (2019) simplified the training data to be composed only of words with certain POS tags, we require punctuation to be included in the training data and so we retain it during training. As for validation, following the evaluation setting established in prior research, the performance of the Tree Transformer is assessed by calculating F1 scores on the WSJ-test, which is processed in the same way as WSJ-train. We report the macro-average F1 score over all the predicted trees against the ground-truth parse trees in the WSJ-test set, using unlabeled bracketing representations. For illustration, we provide the parse tree and bracketed span representation for a converted sentence in Figure 1.
Figure 1: A constituency parse tree with its span representations.
4.2 BiLSTM Model In order to study the difference between the Tree Transformer and a supervised parser, we include the pretrained BiLSTM model BILSTM^{Φ'}_{m=2,e,ch} from Gómez-Rodríguez and Vilares (2018) for an additional comparison. This model utilizes NCRFpp, a sequence labeling framework grounded on bidirectional long short-term memory networks (Yang and Zhang, 2018). It has been trained over the original WSJ-train. For comparison with the Tree Transformer, the preprocessed WSJ-test is used in the test process. The predicted sequence labels are decoded into a parse tree, which is then converted into a span representation to calculate the F1 score in the same way as for the Tree Transformer. 4.3 LLM prompting For a comparison against generative models, we include a prompting baseline with Zephyr 7B β (Tunstall et al., 2023). The prompt forces the model to predict verb or noun attachment, following the prompt included in the Appendix.
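As an illustration of the span-based evaluation described in Sections 4.1 and 4.2, the sketch below computes unlabeled bracketing F1 from predicted and gold constituent-span sets and macro-averages it over sentences; the example spans are hypothetical, since Figure 1 is not reproduced here.

```python
from typing import List, Set, Tuple

Span = Tuple[int, int]  # (start, end) indices of a constituent, end exclusive

def unlabeled_f1(pred: Set[Span], gold: Set[Span]) -> float:
    """Unlabeled bracketing F1 between one predicted and one gold tree,
    both represented as sets of constituent spans."""
    if not pred or not gold:
        return 0.0
    overlap = len(pred & gold)
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def macro_average_f1(pred_trees: List[Set[Span]], gold_trees: List[Set[Span]]) -> float:
    """Macro-average: compute per-sentence F1 and average over sentences."""
    scores = [unlabeled_f1(p, g) for p, g in zip(pred_trees, gold_trees)]
    return sum(scores) / len(scores)

# Hypothetical spans for "They provide various services for their customers"
gold = {(0, 7), (1, 7), (2, 4), (4, 7), (5, 7)}
pred = {(0, 7), (1, 7), (2, 4), (4, 7)}
print(unlabeled_f1(pred, gold))  # precision 1.0, recall 0.8 -> F1 ~0.889
```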
We calculate classification accuracy by considering whether the model returns a parse tree that either corresponds to the ground-truth tree, or is incomplete but still has attached the prepositional phrase to the noun or to the verb in a subtree. In the next section we give a more in-depth analysis of the different cases the models produced, as well as a breakdown of accuracy with respect to different verbs and prepositions. 4.5 Measuring Garden Path Effects Similar to the previous work by Futrell et al. (2019), we investigate the behavior of the Tree Transformer and to what extent it reflects incremental representations of syntactic states. This is done by calculating surprisal of a word wi as its inverse log-probability given a prior hidden state hi−1 of the model: S(wi) = −log p(wi | hi−1) The measure of the garden path effect, for an example like sentence 1 can be quantified by subtracting the surprisal at the disambiguator in sentence 1c from the surprisal at the disambiguator in 1a and offsetting against the same difference in the anticipated less surprising case of 1b minus 1d. We calculate both differences, and compare them with the garden path effects of LSTMs and RNNG in the study by Futrell et al. (2019) on datasets MV/RR as well as NP/Z (verb transitivity). 5 Results We initially validate the retrained Tree Transformer model and the pretrained BiLSTM model on the WSJ-test data, giving the F1 scores in Table 2. Model F1-score Tree Transformer, L=10 49.7 BiLSTM 75.8 Table 2: The F1 scores of Tree Transformer with 10 layers and pre-trained BiLSTM tested on WSJ-test. The F1 score of the Tree Transformer model is aligned with the results reported by Wang et al. (2019), as we stick close to their original setting for preprocessing the data and computing F1 score. However, for the pretrained BiLSTM model, the F1 score deviates from the score reported by GómezRodríguez and Vilares (2018) (90.6). This is due to few factors: first, they calculate the F1 score directly from the predicted labels, rather than the reconstructed span representation. Second, they use the micro- instead of macro-average F1 score. Third, the preprocessing of the data is different. 5.1 PP Attachment Results Given that the PP attachment experiment evaluates parser performance, we first conduct an error analysis on the parse trees produced by the models, before analyzing the classification results. Parsing error analysis An overview of the types of parse structures produced by the two models is given in Table 3. For the Tree Transformer, close to 80% of cases are fully correct parses. In cases of incomplete (Partial) parse trees, we could still include 13 out of 14 cases where an attachment preference is clearly present. These cases, as well as the cases where no attachment decision could be made, are further detailed in Figure 3. We classify the remainder as ‘Incorrect’ in our accuracy results, with the main source of error the incapability of the parser to recognize the prepositional phrase (19.52% of cases). For the BiLSTM model there are significantly fewer parsing errors, with a total of just 13 out of 12295 (a) Noun attachment (b) Verb attachment (c) Undecidable attachment (d) Undecidable attachment Figure 3: Partial parse trees produced by the Tree Transformer model. Tree Transformer BiLSTM Parse Structure Count Perc. Count Perc. 
Correct parse 1120 78.65% 1406 98.74% PP not found 278 19.52% 2 0.14% Partial parse 14 0.98% 5 0.35% Other incorrect 12 0.84% 11 0.77% Table 3: Different parse structures from the Tree Transformer model and the BiLSTM model, with their absolute and relative frequency. We include correct parses, partial parses with decidable attachments in our result. The remaining cases are considered incorrect in the classification results. 1424. There are only 2 cases where the prepositional phrase is not found, significantly lower than for the Tree Transformer. Again we find cases where the model provides an incomplete parse structure in which nevertheless the PP attachment can still be determined. These partial parse trees are given in Figure 4 having a total of 5 cases. Classification results We display the two models’ accuracy in Table 4, offset against the LLM baseline. As already shines through in the parsing Model Accuracy Tree Transformer 47.2% BiLSTM 79.4% Zephyr 62.5% Table 4: Accuracy on pp attachment sentences for Tree Transformer and BiLSTM models. error analysis, we observe significant performance differences between the Tree Transformer model and the BiLSTM model. The fact that the latter model achieves an accuracy of 79.4% may be attributed to its supervised learning approach, where the high cost of training leads to substantial returns. A further breakdown of model performance is given in the confusion matrix in Table 5. While the number of prepositions attached to nouns (868) is (a) Verb attachment (b) Noun attachment Figure 4: Partial parse trees produced by the BiLSTM model. Tree Transf. Prediction V N Incorr. rec. Gold V 299 104 153 54% N 357 373 138 43% pr. 46% 78% BiLSTMpad Gold V 442 110 4 79% N 171 688 9 79% pr. 72% 86% Zephyrpad Gold V 53 503 0 10% N 31 837 0 96% pr. 63% 62% Table 5: Confusion matrix for the attachment accuracy on PP attachment sentences for the Tree Transformer and BiLSTM models. higher than that attached to verbs (556) in the data, the Tree Transformer tends to attach prepositional phrases to verbs (656) rather than nouns (477). Its ability to determine the PP attachment to nouns is particularly weak, and there is a significant gap compared to the gold standard attachments. Ad12296 (a) Data (b) Tree Transformer (c) BiLSTM Figure 5: The proportions of noun attachment, verb attachments or neither/incorrect for the 20 most frequent prepositions in our dataset (a), by Tree Transformer (b) and BiLSTM (c), ordered by overall frequency. ditionally, the number of attachments where the attachment decision is neither a noun nor a verb is not negligible, contributing significantly to the low accuracy. This also indicates that the Tree Transformer is not highly sensitive to the attachment of prepositional phrases. However, while the BiLSTM model also tends to attach prepositional phrases to verbs, this tendency is much less pronounced compared to the Tree Transformer. As shown in Table 5, the number of instances where BiLSTM attaches the pp to a verb is 613, which is higher than the actual attachment count of 556 in the dataset. Conversely, BiLSTM attaches prepositional phrases to nouns 798 times, which is lower than the actual attachment count of 868. Furthermore, the number of attachments that are neither nouns nor verbs is only 13, significantly lower than 291 that the Tree Transformer possesses. As for a comparison against the LLM prompting, we observe a large imbalance with the LLM deciding on noun attachment in most of the cases, leading to low accuracy. 
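The precision and recall figures in Table 5 follow directly from the confusion-matrix counts, and recomputing them is a useful sanity check on the discussion above. The count layout and helper function below are ours; the numbers are copied from Table 5.

```python
# Table 5 counts: per model, rows are gold labels, columns are predictions (V, N, neither).
CONFUSION = {
    "Tree Transformer": {"V": (299, 104, 153), "N": (357, 373, 138)},
    "BiLSTM":           {"V": (442, 110, 4),   "N": (171, 688, 9)},
    "Zephyr":           {"V": (53, 503, 0),    "N": (31, 837, 0)},
}

def per_class_scores(matrix):
    """Per-class precision and recall from a gold-by-prediction count table."""
    pred_v = sum(row[0] for row in matrix.values())   # all cases predicted as verb attachment
    pred_n = sum(row[1] for row in matrix.values())   # all cases predicted as noun attachment
    return {
        "recall_V": matrix["V"][0] / sum(matrix["V"]),
        "recall_N": matrix["N"][1] / sum(matrix["N"]),
        "precision_V": matrix["V"][0] / pred_v,
        "precision_N": matrix["N"][1] / pred_n,
    }

for model, matrix in CONFUSION.items():
    print(model, {k: f"{v:.0%}" for k, v in per_class_scores(matrix).items()})
# Tree Transformer: recall_V 54%, recall_N 43%, precision_V 46%, precision_N 78%
# BiLSTM:           recall_V 79%, recall_N 79%, precision_V 72%, precision_N 86%
# Zephyr:           recall_V 10%, recall_N 96%, precision_V 63%, precision_N 62%
```

The column totals also recover the counts discussed in the text: 656 and 477 predicted verb and noun attachments for the Tree Transformer, and 613 and 798 for the BiLSTM.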
Linguistic Analysis To investigate this imbalance in attachment decisions, we conducted an analysis of the parsing results combined with the distribution of verbs and prepositions in the sentences. This analysis aims to explore whether specific verbs or prepositions contribute to the attachment decision. (a) Data (b) Tree Transformer (c) BiLSTM Figure 6: The proportions of noun attachment, verb attachments or neither/incorrect for the 20 most frequent verbs in our dataset (a), by Tree Transformer (b) and BiLSTM (c), ordered by overall frequency. Figure 5 illustrates the distribution of attachment decisions by preposition, in both the data and the model output. Figure 5(a) illustrates the preference in the dataset, with prepositions ‘of’, ‘about’, ‘between’ and ‘than’ expectedly having a strong preference to attach to the noun. Figure 5(b) shows that Tree Transformer has a proportion trend that is roughly consistent with the data, albeit being skewed towards verb attachment. The prepositions ‘than’ and ‘over’ deviate, with the model having no preference for noun over verb attachment for ‘than’ even though verb attachment barely occurs in the dataset for this preposition; conversely for ‘over’ we find a relatively equal proportion of attachment choice, whereas the model always chooses verb attachment. Figure 5(c) displays the same distribution for the BiLSTM model, indicating that the parsing results are closer to the distribution of the original dataset in terms of different prepositions. However, for certain prepositions, the distribution trend becomes more polarized. The BiLSTM model prefers noun attachment more than proportionate in the dataset for the prepositions ‘of’, ‘for’, ‘about’, ‘between’ and ‘than’, whereas it disproportionately prefers verb attachment for the prepositions ‘during’ and ‘through’. Overall, BiLSTM exhibits a more pronounced attachment tendency difference in the prepositional dimension compared to Tree Transformer. We present the same proportions but broken 12297 (a) Tree Transformer (b) BiLSTM Figure 7: Evaluation metrics (precision, recall and F1 score) for different types of verbs when attachment decision is noun (N) or verb (V). down by choice of verb in Figure 6. In the dataset (Figure 6(a)) we find that inflections of the functional verbs ‘be’ and ‘have’ generally have a higher preference for noun attachment than notional verbs. This difference tends to become more pronounced in the models’ predictions: the Tree Transformer (Figure 6(b)) tends to choose noun attachment for functional verbs, and verb attachment for notional verbs, whereas BiLSTM (Figure 6(c)) displays a similar trend, albeit less extreme. Given this observed dichotomy, we compute separate precision/recall/F1 scores for the aggregate functional verbs and aggregate notional verbs, displayed in Figure 7. The scores in the figure confirm worse performance in the case of functional verbs, indicating the sensitivity of the models to lexical cues in their decision-making. 5.2 Garden Path Effect Results We combined Figure 8 and 9 to give a comparative analysis of the performance of each model in the garden path effect. Figure 8(a) depicts the magnitude of the garden path effect generated by Tree Transformer and verb form ambiguity. Figure 9(a) illustrates the garden path effect size by four models and verb form ambiguity directly taken from Futrell et al. (2019). Upon comparison, it is evident that Tree Transformer, like the other models, exhibits a fundamental garden path effect. 
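The effect sizes compared in this subsection are obtained with the 2x2 subtraction from Section 4.5; the short sketch below spells out that arithmetic. The surprisal accessor is a placeholder (each model exposes incremental next-word probabilities differently), the condition labels follow sentences (3a)-(3d), and the numbers in the final comment are invented purely to illustrate the computation.

```python
import math

def surprisal(model, prefix, word) -> float:
    """-log p(word | prefix), measured at the disambiguating word.
    The probability accessor is hypothetical; the Tree Transformer, the LSTMs,
    and RNNG each expose incremental probabilities in their own way."""
    return -math.log(model.next_word_probability(prefix, word))  # assumed API

def garden_path_effect(s: dict) -> float:
    """Effect size for one 2x2 item such as (3a)-(3d):
       a: ambiguous cue, no comma     b: disambiguating cue, no comma
       c: ambiguous cue, with comma   d: disambiguating cue, with comma
    where s maps each condition to the surprisal at the disambiguator."""
    return (s["a"] - s["c"]) - (s["b"] - s["d"])

# Invented numbers, only to show the subtraction:
# garden_path_effect({"a": 12.1, "c": 6.3, "b": 9.0, "d": 5.9})  ->  approx. 2.7
```

A positive value thus means the ambiguous, comma-less condition is disproportionately surprising at the disambiguator, which is what the bars in Figures 8 and 9 plot, averaged over items.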
The garden path effect in Tree Transformer is similar to that of TinyLSTM, but considerably smaller than in the other models. Additionally, if the model uses the morphological form of the verb as a cue for syntactic structure, instances where the verb has not changed to a passive participle form should exhibit a stronger garden path effect compared to situations where the change has already occurred. This is evident in the figures, where the red bars are higher than the green ones. Tree Transformer, along with two large LSTMs and RNNG, displays this pattern. Despite demonstrating crucial human-like garden path effect disambiguation due to the verb form ambiguity, it is noteworthy that significant garden path effect still persist in these models, even when the verb form is unambiguous like passive-participial verb. Regarding TinyLSTM, it does not exhibit sensitivity to ambiguous verb forms and reduced relative clauses. Figures 8(b) and 9(b) respectively illustrate the garden path effect sizes generated by Tree Transformer and four other models along with those of verb transitivity. Similarly, we observe the presence of the garden path effect in all models. Although smaller in Tree Transformer, even markedly smaller than in the two large LSTMs, it is higher than in TinyLSTM. As for the sensitivity, only the large LSTMs seem to be sensitive to the transitivity of embedded verbs, showing smaller garden path effects for intransitive verbs. Tree Transformer demonstrates sensitivity just below them, but noticeably higher than RNNG and TinyLSTM. Figure 8(c) shows the garden path effect sizes generated by Tree Transformer and presence of an object. In comparison with Figure 8(b), it can be observed that Tree Transformer is much more sensitive to the presence or absence of an object than lexical cues such as verb transitivity. 6 Conclusion In this work, we prepared a novel prepositional phrase attachment ambiguity dataset, suitable for evaluating modern parsers on their capacity to recognize such ambiguities. We evaluated the Tree Transformer model, which combines an unsupervised parsing objective with masked language modelling. As such, we could evaluate both on prepositional phrase attachment ambiguity by analyzing the induced parse trees, as well as compare with prior work on measuring garden path effects through language model surprisal rate (Futrell et al., 2019). In order to align with the datasets for our experiment, we trained a Tree Transformer from scratch 12298 (a) MV/RR (b) NP/Z(verb transitivity) (c) NP/Z(overt object) Figure 8: Average garden path effect size by Tree Transformer and disambiguation lexical clues on (a) MV/RR dataset; (b) NP/Z (verb transitivity) dataset; (c) NP/Z (overt object) dataset. Error bars depict 95% confidence intervals computed based on the standard error of the surprisals after subtracting out the mean surprisal (Masson and Loftus, 2003). (a) MV/RR (b) NP/Z(verb transitivity) Figure 9: Figures are the original ones taken from the paper of Futrell et al. (2019). Average garden path effect by 4 models and disambiguation lexical clues on (a) MV/RR dataset; (b) NP/Z (verb transitivity) dataset. and compared its performance with a supervised BiLSTM-based parser trained on constituent parsing as sequence labelling (Gómez-Rodríguez and Vilares, 2018). Our study reveals that the Tree Transformer model performs suboptimal in disambiguating PP attachments at the sentence level, and less effective than the BiLSTM-based model. 
Primary factors are its poor ability to identify prepositional phrases and a tendency to attach prepositional phrases to verbs. We argue that this is likely to be caused by Tree Transformer’s unsupervised nature. Additionally, the Tree Transformer demonstrates garden path effects across multiple datasets and exhibits varying sensitivity to different subtle lexical cues, generally being more sensitive to the presence of an object than to specific verb form. Overall, the evaluation suggests that an unsupervised model like Tree Transformer may be the lesser choice compared to a supervised model based on a pre-Transformer architecture, when it comes to natural language ambiguities. 7 Limitations There is still room for improvement, and many related aspects warrant further investigation. For example, enriching the diversity and quantity of pp attachment sentences can enhance the depth of evaluation, and the accuracy of results. Improving Tree Transformer’s ability to correctly identify prepositional phrase structures is essential for increased accuracy. Additionally, training Tree Transformer on larger datasets could be attempted to explore differences in performance of garden path effect compared to large LSTMs. 12299 References Thomas G Bever. 1970. The cognitive basis for linguistic structures. Cognition and the development of language. Michael Collins and James Brooks. 1995. Prepositional phrase attachment through a backed-off model. In Third Workshop on Very Large Corpora. Vera Demberg and Frank Keller. 2008. Data from eyetracking corpora as evidence for theories of syntactic processing complexity. Cognition, 109(2):193–210. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. Andrew Drozdov, Patrick Verga, Mohit Yadav, Mohit Iyyer, and Andrew McCallum. 2019. Unsupervised latent tree induction with deep inside-outside recursive auto-encoders. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1129–1141, Minneapolis, Minnesota. Association for Computational Linguistics. Christiane Fellbaum. 1998. WordNet: An electronic lexical database. MIT press. Stefan L Frank and Rens Bod. 2011. Insensitivity of the human sentence-processing system to hierarchical structure. Psychological science, 22(6):829–834. L FRAZIER. 1978. On comprehending sentences: Syntactic parsing strategies. Doctoral dissertation, University of Connecticut. Richard Futrell, Ethan Wilcox, Takashi Morita, Peng Qian, Miguel Ballesteros, and Roger Levy. 2019. Neural language models as psycholinguistic subjects: Representations of syntactic state. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 32–42, Minneapolis, Minnesota. Association for Computational Linguistics. Carlos Gómez-Rodríguez and David Vilares. 2018. Constituent parsing as sequence labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1314–1324, Brussels, Belgium. Association for Computational Linguistics. Adam Goodkind and Klinton Bicknell. 2018. Predictive power of word surprisal for reading times is a linear function of language model quality. 
In Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2018), pages 10–18, Salt Lake City, Utah. Association for Computational Linguistics. John Hale. 2001. A probabilistic Earley parser as a psycholinguistic model. In Second Meeting of the North American Chapter of the Association for Computational Linguistics. Donald Hindle and Mats Rooth. 1993. Structural ambiguity and lexical relations. Computational Linguistics, 19(1):103–120. Yoon Kim, Alexander Rush, Lei Yu, Adhiguna Kuncoro, Chris Dyer, and Gábor Melis. 2019. Unsupervised recurrent neural network grammars. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1105–1117, Minneapolis, Minnesota. Association for Computational Linguistics. John Kimball. 1973. Seven principles of surface structure parsing in natural language. Cognition, 2(1):15– 47. Roger Levy. 2008. Expectation-based syntactic comprehension. Cognition, 106(3):1126–1177. Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Annalen der Physik, 19(2):313–330. Michael EJ Masson and Geoffrey R Loftus. 2003. Using confidence intervals for graphically based data interpretation. Canadian Journal of Experimental Psychology/Revue canadienne de psychologie expérimentale, 57(3):203. Adwait Ratnaparkhi. 1998. Statistical models for unsupervised prepositional phrase attachment. In COLING 1998 Volume 2: The 17th International Conference on Computational Linguistics. Adwait Ratnaparkhi, Jeff Reynar, and Salim Roukos. 1994. A maximum entropy model for prepositional phrase attachment. In Proceedings of the Workshop on Human Language Technology, HLT ’94, page 250–255. Laurent Sartran, Samuel Barrett, Adhiguna Kuncoro, Miloš Stanojevi´c, Phil Blunsom, and Chris Dyer. 2022. Transformer Grammars: Augmenting Transformer Language Models with Syntactic Inductive Biases at Scale. Transactions of the Association for Computational Linguistics, 10:1423–1439. Yikang Shen, Zhouhan Lin, Chin-Wei Huang, and Aaron C. Courville. 2017. Neural language modeling by jointly learning syntax and lexicon. ArXiv, abs/1711.02013. Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron C. Courville. 2018. Ordered neurons: Integrating tree structures into recurrent neural networks. ArXiv, abs/1810.09536. Jiri Stetina and Makoto Nagao. 1997. Corpus based PP attachment ambiguity resolution with a semantic dictionary. In Fifth Workshop on Very Large Corpora. 12300 Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1556– 1566, Beijing, China. Association for Computational Linguistics. Kristina Toutanova, Christopher D. Manning, and Andrew Y. Ng. 2004. Learning random walk models for inducing word dependency distributions. In Proceedings of the Twenty-First International Conference on Machine Learning, ICML ’04, page 103, New York, NY, USA. Association for Computing Machinery. 
Lewis Tunstall, Edward Beeching, Nathan Lambert, Nazneen Rajani, Kashif Rasul, Younes Belkada, Shengyi Huang, Leandro von Werra, Clémentine Fourrier, Nathan Habib, Nathan Sarrazin, Omar Sanseviero, Alexander M. Rush, and Thomas Wolf. 2023. Zephyr: Direct distillation of lm alignment. Marten van Schijndel and Tal Linzen. 2018a. Modeling garden path effects without explicit hierarchical syntax. In 40th Annual Meeting of the Cognitive Science Society: Changing Minds, CogSci 2018, pages 2603–2608. The Cognitive Science Society. Marten van Schijndel and Tal Linzen. 2018b. A neural model of adaptation in reading. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4704–4710, Brussels, Belgium. Association for Computational Linguistics. Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Neural Information Processing Systems. Yau-Shian Wang, Hung yi Lee, and Yun-Nung (Vivian) Chen. 2019. Tree transformer: Integrating tree structures into self-attention. In Conference on Empirical Methods in Natural Language Processing. Jie Yang and Yue Zhang. 2018. NCRF++: An opensource neural sequence labeling toolkit. In Proceedings of ACL 2018, System Demonstrations, pages 74–79, Melbourne, Australia. Association for Computational Linguistics. Dani Yogatama, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Wang Ling. 2016. Learning to compose words into sentences with reinforcement learning. ArXiv, abs/1611.09100. A Appendix Given a sentence containing a prepositional phrase attachment ambiguity, the task is to indicate whether the prepositional phrase attaches to the main verb in the sentence, or to the main noun. —— Sentence: He saw the person with the binoculars. Attachment: Verb Sentence: They understand the cost of living. Attachment: Noun Sentence: He saw the person with the binoculars. Attachment: Noun Sentence: I drove home with my bike. Attachment: Verb Sentence: We prepare a dinner for the family. Attachment: [FILL] Figure 10: The prompt we use to have the LLM predict verb or noun attachment. [FILL] indicates the generation of the model which is constrained to the two possible labels. 12301
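One way to realize the constraint to the two labels is to score both candidate continuations with the language model and keep the more probable one. The sketch below assumes a Hugging Face causal LM; the checkpoint identifier, the prompt handling, and the scoring routine are illustrative rather than a description of the exact decoding used in our experiments.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "HuggingFaceH4/zephyr-7b-beta"   # assumed checkpoint identifier
tok = AutoTokenizer.from_pretrained(MODEL)
lm = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.float16, device_map="auto")

def continuation_logprob(prompt: str, continuation: str) -> float:
    """Sum of log-probabilities the model assigns to `continuation` given `prompt`.
    Assumes the prompt tokenization is a prefix of the full tokenization, which
    typically holds for BPE tokenizers when the continuation starts with a space."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids.to(lm.device)
    full_ids = tok(prompt + continuation, return_tensors="pt").input_ids.to(lm.device)
    with torch.no_grad():
        logprobs = lm(full_ids).logits.log_softmax(dim=-1)
    return sum(logprobs[0, i - 1, full_ids[0, i]].item()
               for i in range(prompt_ids.shape[1], full_ids.shape[1]))

def predict_attachment(prompt_with_fill: str) -> str:
    """Fill [FILL] with whichever of the two labels the model finds more likely."""
    prompt = prompt_with_fill.replace("[FILL]", "").rstrip()
    return max(("Verb", "Noun"),
               key=lambda label: continuation_logprob(prompt, " " + label))
```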
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12302–12319 August 11-16, 2024 ©2024 Association for Computational Linguistics Tree-of-Traversals: A Zero-Shot Reasoning Algorithm for Augmenting Black-box Language Models with Knowledge Graphs Elan Markowitz∗,1,2, Anil Ramakrishna1, Jwala Dhamala1, Ninareh Mehrabi1, Charith Peris1, Rahul Gupta1, Kai-Wei Chang1, and Aram Galstyan1 1Amazon AGI 2University of Southern California Abstract Knowledge graphs (KGs) complement Large Language Models (LLMs) by providing reliable, structured, domain-specific, and up-to-date external knowledge. However, KGs and LLMs are often developed separately and must be integrated after training. We introduce Tree-of-Traversals, a novel zero-shot reasoning algorithm that enables augmentation of black-box LLMs with one or more KGs. The algorithm equips a LLM with actions for interfacing a KG and enables the LLM to perform tree search over possible thoughts and actions to find high confidence reasoning paths. We evaluate on two popular benchmark datasets. Our results show that Tree-ofTraversals significantly improves performance on question answering and KG question answering tasks. Code is available at https: //github.com/amazon-science/ tree-of-traversals 1 Introduction Large language models (LLMs) are used for a range of knowledge-intensive tasks such as information retrieval (Zhu et al., 2023b), summarization (Zhang et al., 2023), and question answering (Tan et al., 2023). Trained on large amounts of textual data, these models learn a wide breadth of information. However, LLMs suffer from several limitations – they produce hallucinated information (Ji et al., 2022; Bang et al., 2023), lack deep domain-specific knowledge (Pan et al., 2023b), and have a static knowledge cutoff when training ends. Knowledge graphs (KGs) naturally complement LLM weaknesses. KGs contain up-to-date information covering general (Vrandei´c and Krötzsch, 2014; Lehmann et al., 2015) and/or domainspecific topics (Abu-Salih, 2020; Liu et al., 2019; Zhu et al., 2017; Choi and Lee, 2019; Farazi et al., ∗Work done while interning at Amazon Corresponding author: [email protected] 2020) in highly structured and interpretable format. Augmenting an LLM’s ability to reason and respond in natural language with an external KG’s up-to-date knowledge presents a path toward a reliable and factual LLM. The rise of powerful LLMs with new capabilities has renewed interest in combining LLMs with KGS. Numerous survey and position papers have recently emphasized their combined potential (Pan et al., 2023a; Zhu et al., 2023a; Yang et al., 2023; Pan et al., 2023b). Existing works augmented LLMs with KGs in multiple ways, such as integration into pretraining (Yasunaga et al., 2022), fine-tuning (Zhang et al., 2022), or later adaptation with subsequently trained components (Lin et al., 2019; Hu et al., 2022). All of these carry some limitations. In particular, training or fine-tuning a large scale LLMs is computationally expensive. In some cases, model weights are unavailable publicly. Finally, the largest KGs require their own servers and cannot be integrated in-memory with LLMs. Additionally, previous works do not consider the case of augmenting with multiple KGs. An algorithm that allows the augmentation of a powerful black-box LLM with any number of internal or external KGs without training from scratch or fine-tuning the model is valuable. 
Such a zeroshot algorithm would enable several innovative use cases such as (1) customers using a black-box LLM API in conjunction with an internal domain-specific KG, (2) integration of personalized KGs into an LLM without the risks associated with training a model with such personal data, (3) integration of deep domain knowledge via an array of API accessible KGs (e.g., IMDb1, MusicBrainz2). We introduce Tree-of-Traversals, a novel algorithm that addresses the above issues by allowing the augmentation of any powerful LLM with arbitrary number of KGs in a zero-shot fashion. It re1IMDb API 2MusicBrainz API 12302 Answer: No one Inception Interstellar v=0.0 I should ExpandKG v=0.9 1 1⃝Initial state seeded with the KG entities from the query. Each subsequent step consists of selecting a tree node (rectangle), sampling thoughts or actions from LLM, performing actions, and evaluating outputs with LLM. ? ? ? Select Inception v=0.8 v=0.7 Select Inception and Interstellar I should ExpandKG v=0.9 2 2⃝Tree-of-Traversals selects which node to search based on the values of the potential options. The “Expand KG” option had higher value than “Answer: No one”. After choosing to Expand KG, the model next selects which entities from the KG to expand. ? Select Inception Select director Select cast v=0.8 v=0.2 v=0.4 3 3⃝After selecting entities, the model next chooses what relation to expand upon. The local knowledge graph is then updated with the requested information. Unfortunately, in this case, the sampled actions yielded low value output states. Answer: Michael Caine ? ? ? Query: What actor played in both Inception and Interstellar? Initialize local KG with entities for Inception and Interstellar Answer: No one Select Inception Select director Select cast Select director Answer: Unknown 1 # expansion order value = 1.0 value = 0.0 Inception Interstellar v=0.8 v=0.2 v=0.4 v=1.0 v=0.7 v=0.7 v=0.0 v=0.0 Select Inception and Interstellar I should ExpandKG v=0.9 Select cast v=0.8 2 3 4 5 ? ? Select cast Select director v=0.7 v=0.7 Select Inception and Interstellar v=0.8 4 4⃝Since the previous selection yielded low-value states, the model backtracks to the best alternative state. Answer: Michael Caine Select cast Answer: Unknown v=1.0 v=0.0 v=0.8 5 5⃝Tree-of-Traversals now has all the information needed for the query and generates two answers. The correct answer is given a high value by the LLM and is returned to the user. Figure 1: An example of how Tree-of-Traversals uses a KG interface for the query, “What actor played in both Inception and Interstellar?”. quires no training, functions with black-box access to the LLM, and works with any API accessible KG. Our contributions are: 1. Tree of traversals: A novel zero-shot algorithm for augmenting any powerful LLM with arbitrary number of KGs and enabling advanced KG reasoning using tree search. 2. Evaluation of Tree-of-Traversals on two question answering tasks: 2WikiMultiHop and QALD-10 and comparison with baselines. 3. Development of a new dataset to test combined reasoning over a general and a domainspecific KG, and evaluation of Tree-ofTraversals on this dataset. We conduct detailed experiments on three models of varied sizes hosted on Amazon Bedrock and present detailed ablation studies. 2 Tree-of-Traversals The Tree-of-Traversals algorithm maintains a local KG subgraph that is repeatedly expanded until it contains all the information required by an LLM to answer the given query. 
At the start, a local KG subgraph is initialized to contain the entities present in the original query. It is then expanded using a tree search algorithm to choose actions and thoughts generated by an LLM in order to obtain relevant knowledge from the KG using a KG interface. The algorithm halts when the LLM is able to answer the original query using the local KG subgraph as context. Tree-of-Traversals consists of three major components. (1) A knowledge graph interface implemented to interact with one or more required KGs. (2) An action state machine (ASM), a finite state machine that defines the feasible space of actions, states, and prompt templates when the LLM interacts with a KG to expand the local KG subgraph. (3) A tree search algorithm, which defines the overall LLM search trajectory, such as best-first search, backtracking upon making a mistake, and the termination condition when an answer is found.

2.1 Knowledge Graph Interface

The knowledge graph interface allows Tree-of-Traversals to interact with one or multiple KGs. Let K = (E, R, T) be a single KG. E is the set of entities, in which each entity consists of an identifier, a label, and an optional description (e.g., Q35332, ‘Christopher Nolan’, ‘British-American filmmaker’). R is the set of relation types, each consisting of an identifier, a label, and an optional inverse label (P57, ‘director’, ‘is director of’). T is the set of edges or facts in the KG, in which each edge is of the form (s, r, o) where s, o ∈ E and r ∈ R, e.g., (‘Inception’, ‘director’, ‘Christopher Nolan’). Tree-of-Traversals can be used for K as long as the following interfaces are implemented.
1. initialize(q) → E0: It takes as input a query q, extracts entities from q, and returns the linked entities E0 ⊂ E, where E0 are the entities from K that are referenced in q.
2. get_relations(Eselected) → Roptions: It takes a set of entities, Eselected, and returns the relation types, Roptions ⊂ R, that Eselected have in K: {r | (s, r, o) ∈ T, s ∈ Eselected}.
3. get_edges(Eselected, r) → Tadded, Eadded: It takes a set of entities and a selected relation type r, and returns all edges with that relation type for source entities in Eselected: {(s, r, o) ∈ T | s ∈ Eselected, o ∈ E}. It also returns the new entities, Eadded, that are reached through Tadded.
This interface is implemented with SPARQL queries when available; otherwise, we use the graph API that is available for the KG. For multiple KGs, each interface is implemented separately.

2.2 Action State Machine (ASM)

One of the challenges with developing a zero-shot LLM algorithm that works with arbitrary KGs is that the LLM does not know what relations are available in the graph or what relations are valid for a given entity. Few-shot or in-context learning approaches can only cover a handful of the possible relation types (e.g., Wikidata has over 11,000 relation types) (Brown et al., 2020). To overcome these issues, we break the task of expanding a local KG subgraph into multiple subtasks. We use a finite state machine with the following actions: Think, Answer, ExpandKG, Select_Entities, and Select_Relation, and states: default, selecting-entities, selecting-relation, and done, as shown in Figure 2. This is named the Action State Machine (ASM) throughout the paper.

Figure 2: Action State Machine (states: default, selecting-entities, selecting-relation, done; actions: THINK, EXPAND_KG, SELECT ENTITIES, SELECT RELATION, ANSWER).

From the default state, Tree-of-Traversals can either Think, Answer, or choose to ExpandKG. After Tree-of-Traversals chooses to ExpandKG, it is first prompted to Select_Entities about which it needs more information (e.g., ‘Inception’ in Figure 1). It is then prompted to Select_Relation from a list of candidate relations provided by the KG interface’s get_relations method. After selecting a relation (e.g., ‘cast member’ in Figure 1), all edges containing one of the selected entities as the source and the selected relation as the relation are added to the local KG subgraph. Tree-of-Traversals is then able to Answer, Think, or ExpandKG again.

Prompt Templates. Each state in the ASM other than the ‘done’ state is associated with a unique prompt template. Prompt templates are filled with information from the local KG subgraph and the KG interface before presenting them to the LLM. A customized prompt for each state allows us to present precise and relevant information, along with specific instructions for that state, simplifying the LLM’s task. For instance, the prompt for ‘selecting-entities’ presents the available options of entities to choose from that are in the local KG subgraph (see Figure 3). Prompt templates for all the states are in Appendix F.1-F.3. The local KG subgraph is represented using a token-efficient YAML format that minimizes repetition for multiple edges on the same entity.

Invoking KG Interfaces.
Outside of initialization, there are two times in which the ASM needs to invoke the KG interface: (1) When constructing the selecting-relation prompt, the algorithm calls get_relations(Eselected) to gather available choices of relations. (2) When executing the transition from selecting-relation to default after having selected a relation type r, the algorithm calls get_edges(Eselected, r) so that it can add the new edges and entities to the local subgraph. 2.3 Tree Search Algorithm Our approach draws inspiration from the Tree-ofThoughts approach (Yao et al., 2023) in which an LLM is given increased reasoning power by allowing generation of multiple thoughts at a time and 12304 Original Query: Who is Bob Dylan’s maternal grandmother? Knowledge Graph Entities: Q392: Bob Dylan - American singer-songwriter Q62519478: Beatrice Stone Knowledge Graph Edges: Bob Dylan: mother: Beatrice Stone Previous Actions: ... Current task: Select entities to expand from [Q392, Q62519478] Selection: Figure 3: Prompt for the ‘selecting-entities’ state of the ASM. The original query and the local KG subgraph are part of the prompt for every action state while the ‘Current task’ differs. The entity options show what entities can be selected based on the current local KG subgraph. If the model selected Q62519478: Beatrice Stone here, then in the next action state (selecting-relation), the dynamic prompt would list the relation types that Beatrice Stone has edges for. building a search tree over the resulting reasoning chains. We extend this approach to augment LLMs with KGs through allowing the generation of actions in addition to thoughts, and building a search tree over them. Challenges arise because Tree-ofThoughts was not designed to incorporate actions and was not designed for knowledge intensive question answering tasks. As a result, we introduce some modifications: incorporation of actions via the ASM, a slightly different search procedure and stopping condition to better handle QA, and a different sampling procedure for improved diversity when doing constrained sampling. Algorithm 1 presents the Tree-of-Traversals tree search algorithm. Given a query q it begins with initialization using initialize(q) as described in Section 2.1. After initialization, it searches by (1) choosing the unexplored node to expand based on the node’s value assigned by the value function (Best First→Depth First as tie-breaker), (2) sampling k actions from the LLM (k is branching factor) using the prompt associated with the selected node’s state in the ASM, (3) for each of the sampled action, applying the transition function, and (4) evaluating the value of the resulting nodes with the LLM value function. The search stops when an answer is found with a node value exceeding the threshold τ. To bound the search space we add two hyper-parameters: max depth of the tree search, after which the algorithm is forced to transition to the done state (i.e., answer the query), and max expansions after which the model halts exploration and returns "Cannot find answer". 
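The selection and stopping rules just described can be summarized in a few lines. The Node fields and the way the tie-break is encoded are our own simplification of the behavior described here (value-guided best-first search, depth-first tie-breaking, an answer threshold τ, and an expansion budget); the default values mirror the settings reported later in Section 3.4. Algorithm 1 below gives the full search loop that these two helpers plug into.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    state: object          # local KG subgraph plus the ASM action state
    value: float = 0.0     # score assigned by the LLM value function
    depth: int = 0
    is_answer: bool = False
    explored: bool = False

def choose_node(tree: list) -> Optional[Node]:
    """Best-first over unexplored nodes; deeper nodes win ties (depth-first tie-break)."""
    candidates = [n for n in tree if not n.explored]
    return max(candidates, key=lambda n: (n.value, n.depth)) if candidates else None

def should_stop(answers: list, expansions: int,
                tau: float = 0.8, max_expansions: int = 20) -> bool:
    """Stop once a sufficiently confident answer exists or the expansion budget is spent."""
    return any(a.value > tau for a in answers) or expansions >= max_expansions
```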
Algorithm 1 Tree-of-Traversals Input: q ←query, k ←branching factor, τ ←answer threshold, P ←LLM, F ←ASM Output: Answer to q 1: procedure TOACTIONS(q) 2: T ←ϕ ▷Empty tree 3: Y ←ϕ ▷Empty answer set 4: s0 ←initialize(q) 5: T.add(s0) 6: while !finished do 7: s ←choose_node(T) 8: ▷Sample k actions from P using the action state and associated prompt of s from F(s) 9: a ←sample_actions(s, k, F, P) 10: for a ∈a do 11: ▷Apply action a to state s according to F 12: s′ ←transition(s, a, F) 13: ▷Evaluate the resulting state s′ using P 14: s′.value ←evaluate(s′, P) 15: T.add(s′) 16: if s′.action_state = done then 17: Y.add(s′) 18: if s′.value > τ then 19: ▷When answer exceeds threshold, stop search 20: finished ←True 21: return argmaxy∈Y y.value Value Function Guidance. Tree-of-Traversals computes the value of a node to determine its utility. This value is created by the LLM using evaluation prompts (step evaluate in Algorithm 1). The value can be between 0 to 1 where 1 indicates highest utility. We use two types of evaluation prompts: one for intermediate states and one for answer states. The prompts include the original query, the local KG subgraph, the trajectory of previous actions, followed by instructions for evaluating the node (see Appendix F.4 and F.5). These values are then used to guide the exploration of the action space. Specifically, choose_node returns the unexplored node with the highest value (Best First). If there are nodes with the same value, Depth First Search is used. Chain-of-Traversals. In some cases we can find the answer to a query with a single sequence of thoughts and actions using the ASM and KG interface. This is equivalent to Tree-of-Traversals with a branching factor of k = 1. We refer to this as Chain-of-Traversals. While useful for compari12305 son, experiments show the benefit of considering multiple branches. 2.4 Tree-of-Traversals with Multiple KGs Augmenting an LLM with more than one KG mainly involves building KG interfaces for each of the added KGs. There are a few other changes to the algorithm. (1) The entities extracted in initialize(q) are matched with each KG interface. (2) When presenting the options of relations during selecting-relation, get_relations(Eselected) is called on each KG interface. (3) When adding a new entity to the local KG subgraph we call an entity linking function for the other KG interfaces. We allow for the entity linking function to be a separate function in the KG interface or to fallback on initialize(o) where o is the text label of the entity that has just been added. This makes allows the use of explicit links between common KGs if available while still functioning without. 3 Experiments We evaluate Tree-of-Traversals using three different models available on Amazon Bedrock3: ClaudeInstant (claude-instant-v1), Llama2 70b (llama2-70b-chat-v1), and Llama2 13b (llama2-13b-chat-v1). AWS Bedrock provides on-demand access through a single API to various Foundation Models, including both open source and black box ones. This is precisely a use case Tree-of-Traversals is designed for. 3.1 Tasks and Datasets. We first evaluate the Tree-of-Traversals algorithm on two common tasks used for evaluating an LLM’s knowledge: 2WikiMultiHop and QALD-10. To allow testing on complex questions requiring knowledge from multiple KGs we create a new dataset that requires knowledge from multiple KGs. 
(Footnote 3: https://aws.amazon.com/bedrock)

2WikiMultiHop Dataset (Ho et al., 2020) was constructed by extracting multi-hop templates from HotPotQA (Yang et al., 2018), combining templates to get complex reasoning questions, generating candidate questions with Wikidata, and then confirming that the entity mentions for each edge appear in the Wikipedia paragraphs. Answers to these questions can be derived from both Wikipedia and Wikidata (Vrandečić and Krötzsch, 2014). We subsample 500 questions from the test set following the approach used in ReAct (Yao et al., 2022), including the sampling seed value of 233.

QALD-10 Dataset (Usbeck et al.) is a multilingual Knowledge Graph Question Answering (KGQA) dataset with 395 questions created by humans, translated into other languages, and then constructed as a SPARQL query over Wikidata (https://github.com/KGQA/QALD-10). These questions are more varied in terms of reasoning structure than the questions in 2WikiMultiHop (e.g., requiring multiple answers or aggregations). We used the English-language questions.

MusicBrainz-x-Wikidata Dataset One novel use-case of Tree-of-Traversals is synthesizing and reasoning over multiple knowledge graph sources. There is no existing dataset that requires synthesizing information from multiple KGs to answer individual questions. Therefore, to test the reasoning ability using multiple KGs, we create a new dataset, MusicBrainz-x-Wikidata, containing 109 questions that require reasoning with information from both MusicBrainz and Wikidata. Unlike Wikidata, which contains general knowledge, MusicBrainz is a deep domain-specific database on the music industry. The vast majority of this information is unlikely to be known by large language models. We construct the MusicBrainz-x-Wikidata dataset with human annotators who were provided with instructions to find reasoning paths between these two KGs, the question types to focus on, and some example questions. The curated questions were checked for ambiguity and sensibility. By design, each question requires extracting information from both knowledge graphs in order to answer it successfully.

Which event was hosted in a venue with a larger maximum capacity, Fearless Tour: Indianapolis or Fearless Tour: Omaha?
1. Get venue for Fearless Tour: Indianapolis → Gainbridge Fieldhouse
2. Get venue for Fearless Tour: Omaha → CHI Health Center Omaha
3. Get maximum capacity of the respective venues → (18165, 18975)
4. Answer: Fearless Tour: Omaha was hosted in a venue with a larger maximum capacity.
Figure 4: Example question from MusicBrainz-x-Wikidata and how Tree-of-Traversals arrives at its final answer. Red indicates entities and relationships belonging to MusicBrainz. Green indicates relationships only in Wikidata. Blue indicates entities linked to both KGs.

2WikiMultiHop (↑)
Algorithm            | Claude-Instant | Llama2-70b   | Llama2-13b
Chain-of-Thought     | 25.2           | 30.4         | 26.8
ReAct                | 5.8            | 30.4         | 6.8
ReAct→CoT            | 27.0 (84.8%)   | 41.6 (41.6%) | 23.6 (71.8%)
FLARe                | 35.0           | 45.4         | 34.8
Chain-of-Traversals  | 42.6           | 45.0         | 31.0
Tree-of-Traversals   | 63.0           | 56.6         | 37.3

QALD-10 (↑)
Algorithm            | Claude-Instant | Llama2-70b   | Llama2-13b
Chain-of-Thought     | 47.4           | 40.0         | 42.6
ReAct                | 3.1            | 23.7         | 8.7
ReAct→CoT            | 47.9 (72.6%)   | 43.4 (44.4%) | 40.8 (79.5%)
FLARe                | 39.2           | 38.8         | 23.1
Chain-of-Traversals  | 55.7           | 56.9         | 39.6
Tree-of-Traversals   | 64.3           | 61.6         | 39.2

Table 1: EM-in on the 2WikiMultiHop and QALD-10 datasets. →CoT indicates falling back to Chain-of-Thought when no answer is given. (%) indicates the fraction of such cases.
In addition to the reasoning types in 2WikiMultiHop, this dataset contains questions that involve aggregations, comparisons of aggregations, qualifications, and complex combinations of these. An example can be seen in Figure 3.1. Detail on instructions, question types, and examples are in Appendix A. 3.2 Metric. We use Exact Match Included (EM-in) as the evaluation metric. EM-in is 1 if the ground truth answer appears with an exact match anywhere in the answer and 0 otherwise. This accounts for the propensity of an LLM to output answers in a sentence with varying syntax. This is a common metric but is often referred to interchangeably with Exact Match (EM) (Sun et al., 2023). When there are multiple answers, we compute average EM-in over all labels. 3.3 Comparison Baselines. We experiment against three related approaches that can work with any black-box LLMs: (1) Chainof-Thought (CoT) prompting (Wei et al., 2022) (2) ReAct (Yao et al., 2022) which iterates between generating thoughts and generating actions for searching and retrieving from a text knowledge base like Wikipedia, and (3) Forward Looking Active Retrieval (FLARe) (Jiang et al., 2023b) which iterates between generating thoughts and then retrieving from a knowledge base to correct inaccuracies. 3.4 Implementation details. For all models, we use a sampling temperature of 0.0 for the LLM when multiple samples are not required and a temperature of 1.0 when diverse samples are required. We test our approach in two settings: (1) with a branching factor of k = 1 termed as Chain-of-Traversals, and (2) with a branching factor of k = 3 termed as Tree-of-Traversals. In both setups, we set maximum depth to 7 which means that upon reaching the default action state beyond depth 7, the only available action is to answer the question. For Tree-of-Traversals, we set the maximum total expansions to 20. The answer threshold τ is set to 0.8 which corresponds to high confidence answers supported by the KG according the evaluate prompt (Appendix F.5). For 2WikiMultiHop and QALD-10, we use Wikidata as the knowledge graph (Vrandei´c and Krötzsch, 2014). For MusicBrainz-x-Wikidata, we use MusicBrainz in addition to Wikidata. We implement the KG interface for Wikidata using Wikidata SPARQL queries5, and implement the KG in5https://query.wikidata.org/ 12307 Llama2-13b Llama2-70b Claude-Instant Model 0 10 20 30 40 50 60 70 80 Accuracy (EM-in) Accuracy for answers w/ value 0.0 Accuracy for answers w/ value 1.0 Figure 5: The EM-in accuracy for answers with model assigned value 0.0 or 1.0 and the corresponding true EM-in score of the answer. This includes all proposed answers, not just the final answer returned by the model. terface for MusicBrainz KG using the MusicBrainz API6. 4 Results and Discussion Table 1 presents the results of our experiments on 2WikiMultiHop and QALD-10. For all the models, Tree-of-Traversals outperforms the baselines on 2WikiMultiHop, setting state-of-the-art results for these tasks in the zero-shot setting. We hypothesize that much of this gain is due to Tree-of-Traversals’ access to the knowledge base via proposed KG interface and its thought-action procedure guided by the ASM. This is evident, as even Chain-ofTraversals, which does not perform a tree traversal (including multiple thoughts/actions, backtracking, and node value computation), significantly outperforms ReAct’s knowledge grounding: 8.7% higher accuracy than ReAct→CoT on 2WikiMultiHop when averaged over all models. 
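All of the accuracy figures above and below are EM-in scores, so it is worth being concrete about what the metric does: substring containment rather than strict string equality, averaged over the gold labels when a question has several answers. The sketch below is illustrative; in particular, the normalization (simple lowercasing here) is our assumption, since the definition in Section 3.2 does not fix it.

```python
def em_in(prediction: str, labels: list[str]) -> float:
    """Exact Match Included: 1 per label if the label string appears anywhere
    in the prediction, 0 otherwise, averaged over all labels."""
    pred = prediction.lower()                       # normalization is an assumption
    return sum(label.lower() in pred for label in labels) / len(labels)

# A sentence-style answer still scores 1.0, which is the point of EM-in:
assert em_in("The answer is Michael Caine.", ["Michael Caine"]) == 1.0
```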
Compared to Chain-of-Traversals, Tree-of-Traversals further improves performance. It gives on average a 12.8% absolute accuracy increase for 2WikiMultiHop and 4.3% absolute improvement for QALD-10. We note that the better performing the model is, the more it stands to gain from Tree-of-Traversals as noted by the difference between Llama-70b and Llama-13b. On MusicBrainz-x-Wikidata (Table 2), which contains challenging reasoning questions requiring access to two KGs, there is an average of 37.4% relative improvement from Tree-of-Traversals over Chain-of-Traversals as shown in Table 2. Both Chain-of-traversals and Tree-of-Traversals outper6https://musicbrainz.org/doc/ MusicBrainz_API Llama2-13b Llama2-70b Claude-Instant Model 20 25 30 35 40 45 50 Accuracy (EM-in) 123 cases 91 cases 73 cases w/o backtracking w/ backtracking Figure 6: Comparison of results with and without backtracking on 2WikiMultiHop questions that needed backtracking. There would be a major degradation in performance if backtracking was not allowed. form Chain-of-Thoughts on this dataset. We only compare with Chain-of-Thoughts as other retrieval methods have low coverage over the MusicBrainz knowledge base. 4.1 Effect of the Value Function. Tree-of-Traversals relies on signal from the value function to pick the final tree trajectory. If the values were arbitrary, then Tree-of-Traversals would not do any better than Chain-of-Traversals. Figure 5 shows that there is a meaningful signal from the value function for all models. There is an average performance difference of 31.0% between the accuracy for answers valued at 1.0 vs 0.0. This represents an average relative improvement of 83.2% when selecting answers valued at 1.0 over those valued at 0.0. See Appendix C for individual profiles of each model’s value function. 4.2 Effect of Backtracking. To determine the effect of backtracking in Treeof-Traversals, we ask the counter-factual of how would the model perform if it could not backtrack (Figure 6). We limit the analysis to 2WikiMultiHop questions in which Tree-of-Traversals generates answers on a subtree, and the model eventually backtracks on that subtree (i.e., the cases where we have a counter-factual). As a result, these questions are generally more challenging than the overall distribution of questions. In these cases, we compare the result if the highest-valued answer was taken from the first searched subtree compared to the ultimate answer after backtracking. We find that the ability to backtrack gives Tree-of-Traversals a significant accuracy increase ranging from 4.1% to 12.3%. 12308 Music-x-Wiki (↑) Algorithm Claude-Instant Llama2-70b Llama2-13b Chain-of-Thoughts 11.9 10.1 13.8 Chain-of-Traversals 22.0 21.9 14.7 Tree-of-Traversals 40.4 23.4 16.5 Table 2: EM-in on MusicBrainz-x-Wikidata dataset. 4.3 Performance on MusicBrainz-x-Wikidata. For MusicBrainz-x-Wikidata dataset, we only compare against Chain-of-Thought since the implementations for other baseline algorithms do not have access to similar music-specific knowledge bases. Despite this, we observed some interesting results. Chain-of-Thought does far worse on MusicBrainzx-Wikidata than on the more general datasets based exclusively on Wikipedia/Wikidata (10.1%-13.8% vs. 25.2%-47.4%). This could be because LLMs are trained on vast amount of general knowledge such as that found in Wikipedia, but they are not trained with as much data from specific domains such as music. 
Besides the presence of KG interface and ASM with Tree-of-Traversals, this could be an additional reason why Tree-of-Traversals does 2.2 times as well as Chain-of-Thought on MusicBrainz-x-Wikidata. This demonstrates the importance of augmenting LLMs with domainspecific KGs and/or multiple KGs which the proposed Tree-of-Traversals is capable of. 4.4 Analysis of Baselines. ReAct underperforms Chain-of-Thought in most cases. This phenomenon was seen in the original ReAct paper which noted that their approach significantly reduced hallucinations despite lower accuracy (Yao et al., 2022). Therefore, falling back on Chain-of-Thought results in an accuracy improvements for for Llama2-70b and Claude-Instant. We note that claude-instant vastly underperforms Llama2-70b on ReAct. The primary reason for this is claude-instant refuses to "search" people and apologizes after performing invalid actions. Despite this, ReAct→CoT still improves over CoT on both 2WikiMultiHop and QALD-10. FLARe improves model performance on 2WikiMultiHop but not on QALD-10. This may be due to better overlap between the text knowledge base for 2WikiMultiHop. 5 Related Works Knowledge Base Question Answering There is a large history of work that has looked into answering questions using a knowledge graph (Wu et al., 2019; Lan et al., 2021). The approaches can broadly be broken into those that attempt to parse the question into a logical form (Berant and Liang, 2014; Luo et al., 2018; Zhu et al., 2022), and information retrieval approaches (Bordes et al., 2015; Chen et al., 2019) Knowledge Enhancement. Recently, researchers have looked into enhancing pretrained language models and LLMs using knowledge graphs. Many works have focused on incorporating knowledge graph data into LLMs through training, often with architectural changes to the model (Zhang et al., 2019; Wang et al., 2019; Peters et al., 2019; Yamada et al., 2020; He et al., 2021). Some of these have found success using specialized graph encoder layers (Yasunaga et al., 2021; Sun et al., 2021). Some have sought to mix this process with pretraining of the language model (Yasunaga et al., 2022). The limitations of including KGs during LLM training are: the models are incapable of incorporating KG updates without retraining, are not able to change KG source (e.g., a domain-specific one), and may have low explainability resulting from learning knowledge in model weights rather than explicit retrieval. In addition, these methods add complexity to the training process, increase cost, and do not work with LLM without access to model weights. As a result, scaling these methods has often been seen as too risky. Other methods treat a KG as a more complete source of truth and use the LLM to generate structured queries for the KG. Tianle et al. (2023) and Choudhary and Reddy (2023) teach the LLM to generate a logical KG query which then returns matching entities from the KG. These methods share similarities with the semantic parsing based 12309 approaches mentioned earlier. However, by returning a logical query for the KG, these methods lose the reasoning and commonsense abilities of the LLM. Some approaches have looked at injecting KG or knowledge base data into LLM prompts. Many methods do a single round of retrieval based on the query (Lewis et al., 2020; Li et al., 2023) using a retrieval mechanism, such as dense passage retrieval (Karpukhin et al., 2020). 
These methods cannot answer more complex multi-hop questions as the initial retrieval is unlikely to contain the secondary or tertiary information that will ultimately be required. Later methods have made the retrieval process iterative. E.g. FLARe (Jiang et al., 2023b) uses prediction of the upcoming sentence to retrieve relevant documents and regenerates until the sentence contains high-confidence tokens. In Wang et al. (2023) multi-round QA format is used for multi-round retrieval. ReAct (Yao et al., 2022) does multiple rounds of thoughts and actions to query a text-based knowledge base. Jiang et al. (2023a) repeatedly interfaces with a KG. In other parallel works, (Sun et al., 2023; Wen et al., 2023) explore methods for building KG context for LLMs using path and neighborhood based search methods. The above methods do not incorporate tree search or forms of advanced reasoning beyond the LLM’s innate ability. They cannot explore multiple reasoning paths or solutions and cannot backtrack if making a mistake. This results in lesser capabilities. Additionally, none of the above methods explore reasoning over multiple KGs. Multiple Knowledge Base QA. Some works studied QA on questions derived from differing domains. These works learn to pick between different domain-specific QA models, each trained on a single knowledge base (Puerto et al., 2021; Geigle et al., 2021; Puerto et al., 2023). Besides requiring training, such systems cannot answer questions requiring the synthesis of information from different domains (MusicBrainz-x-Wikidata). 6 Conclusion Tree-of-Traversals is a powerful algorithm that enables LLMs to utilize KG reasoning with zero examples, no schema dependency, and no training. We demonstrate its efficacy experimentally on multiple LLMs and datasets. We hope Tree-ofTraversals continues to be developed and see value in studying integration with personalized user KGs as well as with other domain-specific datasets. 7 Limitations Tree-of-Traversals is slower than simpler retrieval alternatives. Improving the value function would reduce the number of LLM and KG API calls by avoiding incorrect paths along the tree to explore. Other engineering solutions, such as hosting graph servers, could be implemented to accelerate LLM and KG access. In terms of the token cost, the KG text representation is token efficient compared to many retrieval methods using unstructured knowledge bases as the latter tend to maximize usage of the LLM context window. However, compared to non-retrieval baselines, there is significant increase in token usage cost. The types of KG questions that can be answered are limited by the context window, search depth, and LLM’s reasoning ability. For instance, a very large aggregation (‘How many mountains have elevation above 3500 meters?’) would require more entities than what fits in the LLM context. Future work could look at adding other actions to the ASM, such as aggregations, that could distill the local KG into more relevant information. EM-in is also an imperfect metric. False negatives can arise from discrepancies in date and number formatting, text formatting, and aliasing. False positives can arise in cases in which the model does not definitively answer but includes the correct answer in its response. 8 Ethical Impact We do not anticipate Tree-of-Traversals to introduce new areas of risk but it may have unstudied effects on existing risks of LLMs or KGs. We highlight the following areas. (i) Tree-ofTraversals gives new capabilities to LLMs after training. 
While positive in terms of accuracy, we have not evaluated its effect on wider safety metrics. (ii) Our evaluation is limited to only English versions of KGs and datasets. Tree-of-Traversals should be evaluated in other languages to ensure consistent and fair experience. (iii) We have not performed analysis under misleading or deceptive knowledge graphs. Using a publicly modifiable knowledge graph does come with the risk that information could be deceptively changed. In terms of positive ethical impact, making this research public democratizes access to knowledgeaugmented LLMs as this method does not require 12310 a large training investment or owning the language model to build and customize. References Bilal Abu-Salih. 2020. Domain-specific knowledge graphs: A survey. ArXiv, abs/2011.00235. Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, Quyet V. Do, Yan Xu, and Pascale Fung. 2023. A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity. ArXiv, abs/2302.04023. Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing. In Annual Meeting of the Association for Computational Linguistics. Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. ArXiv, abs/1506.02075. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, T. J. Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeff Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. ArXiv, abs/2005.14165. Zi-Yuan Chen, Chih-Hung Chang, Yi-Pei Chen, Jijnasa Nayak, and Lun-Wei Ku. 2019. Uhop: An unrestricted-hop relation extraction framework for knowledge-based question answering. ArXiv, abs/1904.01246. Wonjun Choi and Hyunju Lee. 2019. Inference of biomedical relations among chemicals, genes, diseases, and symptoms using knowledge representation learning. IEEE Access, 7:179373–179384. Nurendra Choudhary and Chandan K. Reddy. 2023. Complex logical reasoning over knowledge graphs using large language models. ArXiv, abs/2305.01157. Feroz Farazi, Maurin Salamanca, Sebastian Mosbach, Jethro Akroyd, Andreas Eibeck, Leonardus Kevin Aditya, Arkadiusz Chadzynski, Kang Pan, Xiaochi Zhou, Shaocong Zhang, Mei Qi Lim, and Markus Kraft. 2020. Knowledge graph approach to combustion chemistry and interoperability. ACS Omega, 5:18342 – 18348. Gregor Geigle, Nils Reimers, Andreas Ruckl’e, and Iryna Gurevych. 2021. Tweac: Transformer with extendable qa agent classifiers. ArXiv, abs/2104.07081. Gaole He, Yunshi Lan, Jing Jiang, Wayne Xin Zhao, and Ji rong Wen. 2021. Improving multi-hop knowledge base question answering by learning intermediate supervision signals. Proceedings of the 14th ACM International Conference on Web Search and Data Mining. Xanh Ho, Anh-Khoa Duong Nguyen, Saku Sugawara, and Akiko Aizawa. 2020. Constructing a multihop QA dataset for comprehensive evaluation of reasoning steps. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6609–6625, Barcelona, Spain (Online). International Committee on Computational Linguistics. 
Ziniu Hu, Yichong Xu, W. Yu, Shuohang Wang, Ziyi Yang, Chenguang Zhu, Kai-Wei Chang, and Yizhou Sun. 2022. Empowering language models with knowledge graph reasoning for open-domain question answering. ArXiv, abs/2211.08380. Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Yejin Bang, Wenliang Dai, Andrea Madotto, and Pascale Fung. 2022. Survey of hallucination in natural language generation. ACM Computing Surveys, 55:1 – 38. Jinhao Jiang, Kun Zhou, Zican Dong, Keming Ye, Wayne Xin Zhao, and Ji rong Wen. 2023a. Structgpt: A general framework for large language model to reason over structured data. ArXiv, abs/2305.09645. Zhengbao Jiang, Frank F. Xu, Luyu Gao, Zhiqing Sun, Li-Yu Daisy Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023b. Active retrieval augmented generation. ArXiv, abs/2305.06983. Vladimir Karpukhin, Barlas O˘guz, Sewon Min, Patrick Lewis, Ledell Yu Wu, Sergey Edunov, Danqi Chen, and Wen tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Conference on Empirical Methods in Natural Language Processing. Yunshi Lan, Gaole He, Jinhao Jiang, Jing Jiang, Wayne Xin Zhao, and Ji rong Wen. 2021. A survey on complex knowledge base question answering: Methods, challenges and solutions. ArXiv, abs/2105.11644. Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, S. Auer, and Christian Bizer. 2015. Dbpedia - a large-scale, multilingual knowledge base extracted from wikipedia. Semantic Web, 6:167–195. Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Kuttler, Mike Lewis, Wen tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledgeintensive nlp tasks. ArXiv, abs/2005.11401. 12311 Shiyang Li, Yifan Gao, Hao Jiang, Qingyu Yin, Zheng Li, Xifeng Yan, Chao Zhang, and Bing Yin. 2023. Graph reasoning for question answering with triplet retrieval. In Annual Meeting of the Association for Computational Linguistics. Bill Yuchen Lin, Xinyue Chen, Jamin Chen, and Xiang Ren. 2019. Kagnet: Knowledge-aware graph networks for commonsense reasoning. ArXiv, abs/1909.02151. Yang Liu, Qingguo Zeng, Joaquín B. Ordieres Meré, and Huanrui Yang. 2019. Anticipating stock market of the renowned companies: A knowledge graph approach. Complex., 2019:9202457:1–9202457:15. Kangqi Luo, Fengli Lin, Xusheng Luo, and Kenny Q. Zhu. 2018. Knowledge base question answering via encoding of complex query graphs. In Conference on Empirical Methods in Natural Language Processing. Jeff Z. Pan, Simon Razniewski, Jan-Christoph Kalo, Sneha Singhania, Jiaoyan Chen, Stefan Dietze, Hajira Jabeen, Janna Omeliyanenko, Wen Zhang, Matteo Lissandrini, Russa Biswas, Gerard de Melo, Angela Bonifati, Edlira Vakaj, Mauro Dragoni, and Damien Graux. 2023a. Large language models and knowledge graphs: Opportunities and challenges. ArXiv, abs/2308.06374. Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, and Xindong Wu. 2023b. Unifying large language models and knowledge graphs: A roadmap. ArXiv, abs/2306.08302. Matthew E. Peters, Mark Neumann, IV RobertL.Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. 2019. Knowledge enhanced contextual word representations. In Conference on Empirical Methods in Natural Language Processing. 
Haritz Puerto, Tim Baumgärtner, Rachneet Sachdeva, Haishuo Fang, Haotian Zhang, Sewin Tariverdian, Kexin Wang, and Iryna Gurevych. 2023. Ukp-square v3: A platform for multi-agent qa research. In Annual Meeting of the Association for Computational Linguistics. Haritz Puerto, Gözde Gül ¸Sahin, and Iryna Gurevych. 2021. Metaqa: Combining expert agents for multiskill question answering. ArXiv, abs/2112.01922. Jiashuo Sun, Chengjin Xu, Lumingyuan Tang, Sai Wang, Chen Lin, Yeyun Gong, Heung yeung Shum, and Jian Guo. 2023. Think-on-graph: Deep and responsible reasoning of large language model with knowledge graph. ArXiv, abs/2307.07697. Yu Sun, Shuohuan Wang, Shikun Feng, Siyu Ding, Chao Pang, Junyuan Shang, Jiaxiang Liu, Xuyi Chen, Yanbin Zhao, Yuxiang Lu, Weixin Liu, Zhihua Wu, Weibao Gong, Jianzhong Liang, Zhizhou Shang, Peng Sun, Wei Liu, Ouyang Xuan, Dianhai Yu, Hao Tian, Hua Wu, and Haifeng Wang. 2021. Ernie 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation. ArXiv, abs/2107.02137. Yiming Tan, Dehai Min, Y. Li, Wenbo Li, Na Hu, Yongrui Chen, and Guilin Qi. 2023. Can chatgpt replace traditional kbqa models? an in-depth analysis of the question answering performance of the gpt llm family. Li Tianle, Xueguang Ma, Alex Zhuang, Yu Gu, Yu Su, and Wenhu Chen. 2023. Few-shot in-context learning on knowledge base question answering. In Annual Meeting of the Association for Computational Linguistics. Ricardo Usbeck, Xi Yan, Aleksandr Perevalov, Longquan Jiang, Julius Schulz, Angelie Kraft, Cedric Möller, Junbo Huang, Jan Reineke, AxelCyrille Ngonga Ngomo, et al. Qald-10—the 10th challenge on question answering over linked data. Denny Vrandei´c and Markus Krötzsch. 2014. Wikidata: a free collaborative knowledgebase. Commun. ACM, 57:78–85. Keheng Wang, Feiyu Duan, Sirui Wang, Peiguang Li, Yunsen Xian, Chuantao Yin, Wenge Rong, and Zhang Xiong. 2023. Knowledge-driven cot: Exploring faithful reasoning in llms for knowledge-intensive question answering. ArXiv, abs/2308.13259. Xiaozhi Wang, Tianyu Gao, Zhaocheng Zhu, Zhiyuan Liu, Juan-Zi Li, and Jian Tang. 2019. Kepler: A unified model for knowledge embedding and pretrained language representation. Transactions of the Association for Computational Linguistics, 9:176– 194. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Huai hsin Chi, F. Xia, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. ArXiv, abs/2201.11903. Yilin Wen, Zifeng Wang, and Jimeng Sun. 2023. Mindmap: Knowledge graph prompting sparks graph of thoughts in large language models. ArXiv, abs/2308.09729. Peiyun Wu, Xiaowang Zhang, and Zhiyong Feng. 2019. A survey of question answering over knowledge base. In China Conference on Knowledge Graph and Semantic Computing. Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, and Yuji Matsumoto. 2020. Luke: Deep contextualized entity representations with entity-aware self-attention. In Conference on Empirical Methods in Natural Language Processing. Lin F. Yang, Hongyang Chen, Zhao Li, Xiao Ding, and Xindong Wu. 2023. Chatgpt is not enough: Enhancing large language models with knowledge graphs for fact-aware language modeling. ArXiv, abs/2306.11489. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. Hotpotqa: A dataset 12312 for diverse, explainable multi-hop question answering. In Conference on Empirical Methods in Natural Language Processing. 
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. 2023. Tree of thoughts: Deliberate problem solving with large language models. ArXiv, abs/2305.10601. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. 2022. React: Synergizing reasoning and acting in language models. ArXiv, abs/2210.03629. Michihiro Yasunaga, Antoine Bosselut, Hongyu Ren, Xikun Zhang, Christopher D. Manning, Percy Liang, and Jure Leskovec. 2022. Deep bidirectional language-knowledge graph pretraining. ArXiv, abs/2210.09338. Michihiro Yasunaga, Hongyu Ren, Antoine Bosselut, Percy Liang, and Jure Leskovec. 2021. Qa-gnn: Reasoning with language models and knowledge graphs for question answering. In North American Chapter of the Association for Computational Linguistics. Haopeng Zhang, Xiao Liu, and Jiawei Zhang. 2023. Extractive summarization via chatgpt for faithful summary generation. ArXiv, abs/2304.04193. Xikun Zhang, Antoine Bosselut, Michihiro Yasunaga, Hongyu Ren, Percy Liang, Christopher D. Manning, and Jure Leskovec. 2022. Greaselm: Graph reasoning enhanced language models for question answering. ArXiv, abs/2201.08860. Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. Ernie: Enhanced language representation with informative entities. In Annual Meeting of the Association for Computational Linguistics. Yueqin Zhu, Wenwen Zhou, Yang Xu, Ji Liu, and Yongjie Tan. 2017. Intelligent learning for knowledge graph towards geological data. Sci. Program., 2017:5072427:1–5072427:13. Yuqi Zhu, Xiaohan Wang, Jing Chen, Shuofei Qiao, Yixin Ou, Yunzhi Yao, Shumin Deng, Huajun Chen, and Ningyu Zhang. 2023a. Llms for knowledge graph construction and reasoning: Recent capabilities and future opportunities. ArXiv, abs/2305.13168. Yutao Zhu, Huaying Yuan, Shuting Wang, Jiongnan Liu, Wenhan Liu, Chenlong Deng, Zhicheng Dou, and Ji rong Wen. 2023b. Large language models for information retrieval: A survey. ArXiv, abs/2308.07107. Zhaocheng Zhu, Mikhail Galkin, Zuobai Zhang, and Jian Tang. 2022. Neural-symbolic models for logical queries on knowledge graphs. In International Conference on Machine Learning, pages 27454–27478. PMLR. A MusicBrainz x Wiki Additional information on the creation of the MusicBrainz x Wiki dataset. A.1 Creation steps The dataset was created by annotators in a semiautomatically supported manner. We developed an automated tool to support the following process: • Find linking entities that are present in both MusicBrainz and Wikidata of various types (artist, label, place, event, etc.) • Find related entities that could be used in a question to perfectly identify the entities above (e.g., The place owned by the Detroit Tigers →Tiger Stadium) • Find related entities that could be used in a question to ambiguously identify a linking entity, so that a qualifier could be added to disambiguate the linking entity. • Count the number of edges in Wikidata or MusicBrainz for each relationship type an entity has. We use these tools to create an initial set of multihop questions falling into specific reasoning categories. We then have human annotators check each question for sensibility (grammatically correct and can be definitively understood) and ambiguity (one right answer in the KGs). If possible, the question is reworded or fixed to not contain ambiguities. We are then left with a final set of 109 questions. 
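The first step of the tooling described in A.1 (finding linking entities present in both MusicBrainz and Wikidata) can be approximated with a single SPARQL query against the public Wikidata endpoint. The sketch below is our own illustration, not the authors' tool: it assumes P434 is the Wikidata property for MusicBrainz artist IDs and restricts to human artists for brevity.

```python
import requests

WIKIDATA_SPARQL = "https://query.wikidata.org/sparql"

# Hypothetical sketch: find Wikidata artists that also exist in MusicBrainz,
# using the MusicBrainz artist ID property (assumed to be P434).
QUERY = """
SELECT ?item ?itemLabel ?mbid WHERE {
  ?item wdt:P434 ?mbid .            # item has a MusicBrainz artist ID
  ?item wdt:P31 wd:Q5 .             # restrict to humans for this sketch
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 20
"""

def find_linking_artists():
    """Return (QID, label, MusicBrainz ID) triples for candidate linking entities."""
    resp = requests.get(
        WIKIDATA_SPARQL,
        params={"query": QUERY, "format": "json"},
        headers={"User-Agent": "kg-linking-sketch/0.1"},
    )
    resp.raise_for_status()
    rows = resp.json()["results"]["bindings"]
    return [
        (row["item"]["value"].rsplit("/", 1)[-1],
         row["itemLabel"]["value"],
         row["mbid"]["value"])
        for row in rows
    ]

if __name__ == "__main__":
    for qid, label, mbid in find_linking_artists():
        print(qid, label, mbid)
```

A similar query per entity type (places, events, labels) would produce the candidate pool that the annotators then filtered for sensibility and ambiguity.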
A.2 Composition of Questions The composition of questions for MusicBrainz x Wiki can be found in Table 3.

Table 3: Composition of questions for MusicBrainz x Wikidata.
Bridge (30): Direct multi-hop questions. Example: The father of Alexander Newley was the vocal arranger for what song?
Bridge Comparison (7): Comparison of values from two multi-hop reasoning chains. Example: Which venue was opened more recently, the venue for The Eras Tour: Seattle (night 1) or the venue for Fearless Tour: Seattle?
Bridge Aggregation (42): Count of the number of entities satisfying a multi-hop condition (aggregation only over the second hop). Example: The place which is the filming location of Michelangelo Buonarroti, has how many songs recorded there?
Bridge Aggregation Comparison (7): Comparison of counts of entities between two different aggregation conditions. Example: Who composed more songs, Marie Daulne's mother or Qubilah Shabazz's godparent?
Qualification Bridge (10): Multi-hop question in which one of the edges needs to be disambiguated with a qualifying statement. Example: The filming location of Scream Awards that was built in 1929, has what recording engineered there?
Qualification Bridge Aggregation (9): Count of the number of entities satisfying a qualification bridge style condition. Example: The sibling of Teppo Ruohonen that was born in 1949, has composed how many songs?
Qualification Bridge Aggregation Comparison (1): Comparison of qualification bridge aggregation type questions. Example: Which of the children of Isabel Preysler that are singers has released more albums?
Double Qualification (3): Multi-hop question in which each hop requires a disambiguating qualification edge. Example: The child of Guus Belinfante that was a singer, has a vocal feature on what recording by Doe Maar?

A.3 Annotators The annotators consisted of approximately 10 research scientists and engineers, all English speaking and based in the United States. B Implementation Details B.1 Diversity Oversampling One problem with Tree-of-Thoughts is that when there are constrained options to choose from, the LLM becomes repetitive and fails to produce diverse outputs (Yao et al., 2023). Their solution used a "propose prompt" with additional instructions for proposing multiple distinct thoughts. However, this adds significant complexity to the prompt for purposes that are only tangential to the task being completed. A simpler approach we employ is to generate diverse actions by oversampling. Specifically, if the branching factor is k, we sample 2*k possible actions from the LLM and then extract the first k unique ones to use. We only employ this for selecting-entities and selecting-relation, as the action choices are limited and diversity is required. This change does not noticeably affect the computation cost of Tree-of-Traversals, as multiple actions are sampled for the same prompt and the average prompt input size (100s-1000s of tokens) is generally much larger than the average generation length for these actions (<20 tokens). Latency remains similar as multiple actions are sampled in parallel. With this, we consistently generate diverse options when selecting-entities or selecting-relation.
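A minimal sketch of the oversampling procedure described above is given below; `sample_action` is a hypothetical stand-in for a single LLM sampling call, and the parallel sampling mirrors the claim that latency remains similar.

```python
from concurrent.futures import ThreadPoolExecutor

def diverse_actions(sample_action, prompt, k=3):
    """Sample 2*k candidate actions in parallel and keep the first k unique ones.

    `sample_action` is a stand-in for an LLM call that returns one action string
    for the given prompt (e.g., a selected entity or relation).
    """
    with ThreadPoolExecutor(max_workers=2 * k) as pool:
        candidates = list(pool.map(lambda _: sample_action(prompt), range(2 * k)))

    unique = []
    for action in candidates:
        if action not in unique:
            unique.append(action)
        if len(unique) == k:
            break
    return unique  # may contain fewer than k actions if sampling was repetitive
```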
B.2 Hyperparameters Tree-of-Traversals does not have a huge number of hyperparameters. Most of the hyperparameters were chosen analytically, though some were chosen based on preliminary experiments. We share some information on choosing them. τ was chosen analytically in conjunction with the prompt for evaluate on the done state (Appendix F.5). It was chosen to be 0.8 so that Tree-of-Traversals only stops when it is confident that the answer is supported by the KG. The temperature was chosen to be 1.0 to promote diversity. If implementing with other models which can have temperature >1.0, we recommend hyperparameter tuning, or using a sample prompt of each type and analytically choosing the temperature such that diverse but sensible actions are output. The branching factor k = 3 was chosen based on small-scale experimentation. Larger k makes the model spend longer searching, taking more time and increasing expenses. Smaller k did not generate as much diversity in options. It is worth noting that even with k = 3 and diversity sampling, sometimes the actual number of unique outputs is less than 3. The max depth was chosen analytically to be minimal while enabling answering of most questions. Some questions in QALD-10 and MusicBrainz-x-Wikidata would require a greater depth to answer. The maximum expansions cutoff was chosen purely to ensure the model does not hang on a single question too long. The actual value is somewhat arbitrary and has minimal impact on accuracy. B.3 KG Interface We present some additional details of the KG interface. The initialize function is a three-step process. First, an LLM call and prompt is used to extract named entities. Second, for each extracted entity, candidates are searched for using the KG API. Finally, another LLM call is used to match the extracted entity to the best candidate. get_relations and get_edges are implemented exclusively using the respective KG API. For Wikidata, these are each one or two SPARQL queries (forward and reverse edges). For MusicBrainz, they each require multiple API calls as the endpoints are separated for different entity types; e.g., there is a separate endpoint for Artists and Recordings.
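As an illustration of the Wikidata side of this interface, the sketch below fetches forward and reverse edges of an entity with one SPARQL query each against the public endpoint. It is a simplified stand-in for the paper's get_edges, under the assumption that only truthy (wdt:) statements are needed and without label resolution.

```python
import requests

WIKIDATA_SPARQL = "https://query.wikidata.org/sparql"
DIRECT = "http://www.wikidata.org/prop/direct/"

def _run(query):
    """Run a SPARQL query against the public Wikidata endpoint and return bindings."""
    resp = requests.get(
        WIKIDATA_SPARQL,
        params={"query": query, "format": "json"},
        headers={"User-Agent": "kg-interface-sketch/0.1"},
    )
    resp.raise_for_status()
    return resp.json()["results"]["bindings"]

def get_edges(qid, limit=200):
    """Fetch forward and reverse truthy edges of a Wikidata entity (one query each)."""
    forward = _run(f"""
        SELECT ?p ?o WHERE {{
          wd:{qid} ?p ?o .
          FILTER(STRSTARTS(STR(?p), "{DIRECT}"))
        }} LIMIT {limit}""")
    reverse = _run(f"""
        SELECT ?s ?p WHERE {{
          ?s ?p wd:{qid} .
          FILTER(STRSTARTS(STR(?p), "{DIRECT}"))
        }} LIMIT {limit}""")
    fwd = [(qid, b["p"]["value"][len(DIRECT):], b["o"]["value"]) for b in forward]
    rev = [(b["s"]["value"].rsplit("/", 1)[-1], b["p"]["value"][len(DIRECT):], qid)
           for b in reverse]
    return fwd + rev
```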
C Value Function Profiles We profile each value model's value function to assess its characteristics. Figure 7 shows the individual answer value vs. accuracy profiles for each model on 2WikiMultiHop. All models have a positive correlation between the answer value and the answer's accuracy. However, as is to be expected, there are differences, with Llama2-13b having the lowest correlation and Claude-Instant having the highest correlation. Figure 7 only looks at the value function response for final answer states, as those are the only ones with a directly associated accuracy score. While accurately evaluating answer states is more important, we also want to see some useful signal from the values for intermediary states. Figure 8 shows the average value for nodes on the search path to the answer, separated by whether the answer scores as correct or not. We would expect actions that ultimately lead to a correct answer to have higher values than those that lead to incorrect ones. We clearly see this, with the correct distribution being further right than the incorrect distribution. This is least pronounced for Llama2-13b, which is to be expected. We also note that the distributions' shape differs between the Llama2 models and Claude-Instant. These results indicate that larger, stronger models will likely see more improvement in the value function, and thus more utility from Tree-of-Traversals. D Prompt Engineering Some prompt re-engineering was required when switching between models for both our methods and ReAct. This engineering was limited primarily to formatting of responses. For example, Llama2 would start responses with "So the next action should be" while we required the action to start with "THINK", "EXPAND_KG" or "ANSWER". Instead we incorporated "So the next action should be:" into the prompt so that the output began with the chosen action. For selecting-entities we added "You have selected the following entities to expand:". For selecting-property we added "I suggest selecting property:". These issues could also be solved by changing the parsing of the presented prompts. E Fallback to Chain-of-Thought (→CoT) We fall back to Chain-of-Thought for the ReAct baseline when either no answer is provided after depth 7 or when one of the following phrases appears in the answer: 'determine', 'unable', 'cannot', 'unknown', 'unsure', or 'not possible'. While these certainly could appear in intentional answers, we find that they occur consistently and exclusively in non-answer style responses. [Figure 7: EM-in accuracy vs. value function for all Tree-of-Traversals answers on 2Wiki; panels (a) Llama2-13b (corr = 0.10, p = 9.5e-05), (b) Llama2-70b (corr = 0.22, p = 1.3e-21), (c) Claude-Instant (corr = 0.45, p = 1.4e-74). Points are jittered for visibility; trendline indicates correlation and p-value.] [Figure 8: The average value function of intermediary steps along the path to each answer, separated by whether the answer would be scored as correct or incorrect, for (a) Llama2-13b, (b) Llama2-70b, and (c) Claude-Instant. For each answer proposed by the model, the scores along the path are averaged, and the resulting distribution is plotted. Data is from 2WikiMultiHop results.] F Prompts The following pages contain the prompts used in Tree-of-Traversals. The dynamic components that change based on the query, KG state, and action history are shown in color. F.1 Prompt for default action state Original Query: Who is Bob Dylan's maternal grandmother? Knowledge Graph Entities: Q392: Bob Dylan - American singer-songwriter Q62519478: Beatrice Stone Knowledge Graph Edges: Bob Dylan: mother: Beatrice Stone Previous Actions: EXPAND KG: I should search for the mother of Bob Dylan SELECT ENTITIES: Q392 SELECT RELATION: P25 - has mother You are a superintelligent AI equipped with the ability to search a knowledge graph for definitive, up-to-date answers. Your task is to interface with the knowledge graph in order to answer the above query. You will be able to expand the knowledge graph until you have found the answer. Think in detail before acting or answering. Available actions: 'THINK' - Generate relevant thoughts to solving the problem. This could include recalling well known facts from memory. e.g., THINK: I should search for the movies directed by... THINK: I know that Biden is American, therefore... THINK: I see that John Cena is in both The Independent and Vacation Friends, therefore... 'EXPAND_KG' - Search for edges of entities in the knowledge graph using an external API.
This is a useful function for getting to the correct answer. e.g., EXPAND_KG: I should search for the country of origin of Jonathon Taylor ’ANSWER’ - Generate the final answer once the problem is solved. Just state the best answer, do not output a full sentence. e.g., ANSWER: No ANSWER: Harry Potter ANSWER: [Harry Potter, Ron Weasley, Hermione Granger] F.2 Prompt for selecting-entities action state Original Query: Who is Bob Dylan’s maternal grandmother? Knowledge Graph Entities: Q392: Bob Dylan - American singer-songwriter Q62519478: Beatrice Stone Knowledge Graph Edges: Bob Dylan: mother: Beatrice Stone Previous Actions: EXPAND KG: I should search for the mother of Bob Dylan SELECT ENTITIES: Q392 SELECT RELATION: P25 - has mother EXPAND KG: I should search for the mother of Beatrice Stone Current task EXPAND_KG takes two parameters. The first is the entity or entity group to get more information about. Select which entity or entities from the KG to expand. Provide the QIDs. Options include [Q392, Q62519478] SELECT ENTITIES: 12317 F.3 Prompt for selecting-relation action state Original Query: Who is Bob Dylan’s maternal grandmother? Knowledge Graph Entities: Q392: Bob Dylan - American singer-songwriter Q62519478: Beatrice Stone Knowledge Graph Edges: Bob Dylan: mother: Beatrice Stone Previous Actions: EXPAND KG: I should search for the mother of Bob Dylan SELECT ENTITIES: Q392 SELECT RELATION: P25 - has mother EXPAND KG: I should search for the mother of Beatrice Stone SELECT ENTITIES: Q62519478 Your current task is to select the property (PID) to expand along for the selected entities. The selected entities are: [Q62519478] The options of properties to choose from are: P31 - has instance of P21 - has sex of gender P27 - has country of citizenship P735 - has given name ... Select exactly one property (PID e.g., P10) from those listed above SELECT PROPERTY: F.4 General Prompt for evaluate Original Query: Who is Bob Dylan’s maternal grandmother? Knowledge Graph Entities: Q392: Bob Dylan - American singer-songwriter Q62519478: Beatrice Stone Knowledge Graph Edges: Bob Dylan: mother: Beatrice Stone Previous Actions: EXPAND KG: I should search for the mother of Bob Dylan SELECT ENTITIES: Q392 SELECT RELATION: P25 - has mother EXPAND KG: I should search for the mother of Beatrice Stone SELECT ENTITIES: Q62519478 Your current task is to evaluate the above knowledge graph and action history. Based on the original query, the current knowledge graph, and the action history, give the likelihood that the model will correctly answer the question. If the most recent action provided information towards the goal and followed the preceding thought, give a high score. If the last action was unhelpful, give a low score. The output should be a number between 0 and 1 with one decimal. Do not output anything else. RATING [0.0-1.0]: 12318 F.5 Prompt for evaluate on answer (done) Original Query: Who is Bob Dylan’s maternal grandmother? Knowledge Graph Entities: Q392: Bob Dylan - American singer-songwriter Q62519478: Beatrice Stone Q62519478: Florence Sara Stone Knowledge Graph Edges: Bob Dylan: mother: Beatrice Stone Beatrice Stone: mother: Florence Sara Stone Provided answer: The answer is Florence Sara Stone. Your task is to score the correctness of the provided answer based on the original query, and the knowledge graph. Give a pessimistic score from 0.0 to 1.0 on how likely the answer is to be correct. 
0.0 if definitely wrong
0.0 if unable to answer based on the knowledge graph
0.5 if unsure
0.7 for probably correct but not confirmed in knowledge graph
1.0 for definitely correct and confirmed in knowledge graph.
Give reasoning to get to the correct answer. Then provide a score. e.g., Reasoning... So the score for the provided answer should be...
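The rating requested by the evaluate prompts has to be parsed out of the model's free-form reply before it can be compared against the threshold τ = 0.8 from Appendix B.2. The snippet below is a hypothetical sketch of such parsing (take the last number in [0, 1] that appears in the response), not the exact implementation used in the paper.

```python
import re

TAU = 0.8  # answer-acceptance threshold from Appendix B.2

def parse_value(response: str) -> float:
    """Extract the last number in [0, 1] from the model's evaluation response."""
    numbers = re.findall(r"(?<!\d)(?:0(?:\.\d+)?|1(?:\.0+)?)(?!\d)", response)
    if not numbers:
        return 0.0  # treat unparseable responses as unusable
    return float(numbers[-1])

def accept_answer(response: str, tau: float = TAU) -> bool:
    """Accept an ANSWER node only if the evaluated score reaches the threshold."""
    return parse_value(response) >= tau

# Example:
# accept_answer("Reasoning... confirmed in the KG. So the score should be 1.0") -> True
```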
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12320–12332 August 11-16, 2024 ©2024 Association for Computational Linguistics Structured Tree Alignment for Evaluation of (Speech) Constituency Parsing Freda Shi Kevin Gimpel Karen Livescu Toyota Technological Institute at Chicago 6045 S. Kenwood Ave, Chicago, IL, USA, 60637 {freda,kgimpel,klivescu}@ttic.edu Abstract We present the structured average intersectionover-union ratio (STRUCT-IOU), a similarity metric between constituency parse trees motivated by the problem of evaluating speech parsers. STRUCT-IOU enables comparison between a constituency parse tree (over automatically recognized spoken word boundaries) with the ground-truth parse (over written words). To compute the metric, we project the groundtruth parse tree to the speech domain by forced alignment, align the projected ground-truth constituents with the predicted ones under certain structured constraints, and calculate the average IOU score across all aligned constituent pairs. STRUCT-IOU takes word boundaries into account and overcomes the challenge that the predicted words and ground truth may not have perfect one-to-one correspondence. Extending to the evaluation of text constituency parsing, we demonstrate that STRUCT-IOU can address token-mismatch issues, and shows higher tolerance to syntactically plausible parses than PARSEVAL (Black et al., 1991).1 1 Introduction Automatic constituency parsing of written text (Marcus et al., 1993, inter alia) and speech transcriptions (Godfrey and Holliman, 1993, inter alia), as representative tasks of automatic syntactic analysis, have been widely explored in the past few decades. Appropriate evaluation metrics have facilitated the comparison and benchmarking of different approaches: the PARSEVAL F1 score (Black et al., 1991; Sekine and Collins, 1997) has served as a reliable measure of text parsing across scenarios; for speech transcription parsing, the SPARSEVAL (Roark et al., 2006) metric extends PARSEVAL and accounts for speech recognition errors by allowing word-level editing with a cost. 1We open-source the code of STRUCT-IOU at https: //github.com/ExplorerFreda/struct-iou. NP PRP Your NN turn Forced Alignment NP (2.56–3.01) PRP (2.56–2.72) Your NN (2.72–3.01) turn (a) Ground-truth speech parse tree (right), obtained by forced alignment between the ground-truth text parse tree (left, top) and the spoken utterance (left, bottom). VP (2.55–3.01) VBP (2.55–2.56) NP (2.56-3.01) PRP (2.56–2.72) NN (2.72–3.01) (STRUCT-IOU = 0.75) NP (2.51–3.10) PRP (2.51–2.70) NN (2.70–3.10) (STRUCT-IOU = 0.81) (b) Predicted tree with good word boundaries and an errorful tree structure (left), or that with errorful word boundaries and a perfect tree structure (right). Figure 1: Illustration of how STRUCT-IOU (§§ 4.1 and 4.2) evaluates textless speech constituency parsing. Best viewed in color, where nodes with the same color are aligned. Numbers in parentheses are the starting and ending times of the corresponding spans (in seconds). Recent work (Lai et al., 2023; Tseng et al., 2023) has proposed a new task of textless speech constituency parsing. 
In contrast to earlier work that parses manually labeled (Charniak and Johnson, 2001, inter alia) or automatic (Kahn and Ostendorf, 2012, inter alia) speech transcriptions, these models construct constituency parse trees over automatically recognized spoken word boundaries, where each word is represented with a time range of the spoken utterance, without using any form of text. To evaluate these textless models, we need a metric that compares the predicted tree (over spoken word boundaries) with the manually labeled ground-truth tree (over written words) and faithfully reflects the parsing quality. Since the automatically recognized word boundaries may be imperfect, the metric should also reflect the changes 12320 in parsing quality due to word boundary errors. To the best of our knowledge, none of the existing metrics meets these requirements, as they are all designed to compare parse trees over discrete word sequences, instead of continuous time ranges. Motivated by the need for textless speech parsing evaluation, in this paper, we introduce the structured average intersection-over-union ratio (STRUCT-IOU; Figure 1), a metric that compares two parse trees over time ranges. We relax the definition of segment trees (Bentley, 1977) to represent speech constituency parse trees, where each node is associated with an interval that represents the time range of the corresponding spoken word or constituent. To obtain the “ground-truth” speech parse trees, we use the forced alignment algorithm (McAuliffe et al., 2017), a supervised and highly accurate method that aligns written words to time ranges of the corresponding spoken utterance, to project the ground-truth text parses onto the time domain. STRUCT-IOU is calculated by aligning the same-label nodes in the predicted and ground-truth parse trees, following structured constraints that preserve parent-child relations. The calculation of STRUCT-IOU can be formulated as an optimization problem (§4.1) with a polynomial-time solution (§4.2) in terms of the number of tree nodes. Although STRUCT-IOU is designed to evaluate speech parsing, it is also applicable to text parsing evaluation. We analyze STRUCT-IOU for both purposes: in speech parsing evaluation, STRUCTIOU robustly takes into account both the structure information and word boundaries; in text parsing evaluation, while maintaining a high correlation with the PARSEVAL F1 score, STRUCT-IOU shows a higher tolerance to potential syntactic ambiguity. 2 Related Work Text constituency parsing and evaluation. In the past decades, there has been much effort in building and improving constituency parsing models (Collins and Koo, 2005; Charniak and Johnson, 2005; McClosky et al., 2006; Durrett and Klein, 2015; Cross and Huang, 2016; Dyer et al., 2016; Choe and Charniak, 2016; Stern et al., 2017; Kitaev and Klein, 2018, inter alia). PARSEVAL (Black et al., 1991) has been the standard evaluation metric for constituency parsing in most scenarios, which takes the ground truth and predicted trees and calculates the harmonic mean of precision and recall of labeled spans. For morphologically rich languages, TEDEVAL (Tsarfaty et al., 2012) extends PARSEVAL to accept multiple morphological analyses over sequences of words. A few alternative approaches have been pursued to address the potential mismatch in words and sentences (Calder, 1997; Jo et al., 2024). 
All these metrics are designed to evaluate parses over discrete word sequences, and cannot be easily extended to evaluate speech parses over continuous time ranges. Although our metric, STRUCT-IOU, is designed to evaluate speech constituency parsing, it can be easily extended for text parsing evaluation, reflecting a different aspect from existing metrics (§5.2). Speech constituency parsing and its evaluation. Work on conversational speech parsing has focused on addressing the unique challenges posed by speech, including speech recognition errors (Kahn and Ostendorf, 2012; Marin and Ostendorf, 2014), unclear sentence boundaries (Kahn et al., 2004), disfluencies (Jamshid Lou and Johnson, 2020; Kahn et al., 2005; Lease and Johnson, 2006), as well as integrating prosodic features into the parsing systems (Tran et al., 2018; Tran and Ostendorf, 2021). On the evaluation side, the closest work to ours is SPARSEVAL (Roark et al., 2006), which extends PARSEVAL to account for speech recognition errors by allowing for word-level insertion, deletion, and substitution. In contrast, our metric STRUCT-IOU applies to the cases where no speech recognizer is applied or available. Other structured evaluation metrics for parsing. There have been evaluation metrics of abstract meaning representations (AMRs; Cai and Knight, 2013), where two AMR graphs are matched by solving an NP-complete integer linear programming problem. While our work shares the spirit with theirs, we focus on the evaluation of speech constituency parsing over continuous word boundaries. There also exists a polynomial-time exact solution to our optimization problem. 3 Preliminaries We use real-valued open intervals to represent speech spans for simplicity, although most of the following definitions and conclusions can be easily extended to closed intervals and half-open intervals. Proof of each corollary and proposition can be found in Appendix A. 12321 T1: A B C D E T2: F G H A ↔G and E ↔H: conflicted (violating rule 10.1) C ↔F and A ↔H: conflicted (violating rules 10.2 and 10.3) B ↔H and C ↔F: conflicted (violating rule 10.4) A ↔F and C ↔H: not conflicted Figure 2: Examples of conflicted and non-conflicted node matchings (Definition 10). 3.1 Open Interval Operations Definition 1. The length of a real-valued open interval I = (a, b), where a < b, is |I| = b −a. Definition 2. The intersection size of open intervals I1 and I2 is I(I1, I2) = ( 0 if I1 ∩I2 = ∅ |I1 ∩I2| otherwise. Definition 3. The union size of open intervals I1 and I2 is U(I1, I2) = |I1| + |I2| −I(I1, I2). Definition 4. The intersection over union (IOU) ratio between open intervals I1 and I2 is IOU(I1, I2) = I(I1, I2) U(I1, I2). Throughout this paper, we will use IOU as the similarity metric between two intervals. 3.2 Relaxed Segment Trees We relax the definition of a segment tree (Bentley, 1977) as follows to represent parse trees. Definition 5. A node n of a relaxed segment tree is a triple n = ⟨In, Cn, ℓn⟩, where ℓn refers to the label of the node, and 1. In = (sn, en) is an open interval (i.e., segment) associated with the node n, where sn < en; 2. Cn is a finite set of disjoint children nodes of n: for any p, q ∈Cn(p ̸= q), Ip ∩Iq = ∅. Cn = ∅if and only if n is a terminal node; 3. For a nonterminal node n, sn = minp∈Cn sp, and en = maxp∈Cn ep. Corollary 5.1. For nodes p, n, if p ∈Cn, then Ip ⊆In. Definition 6. Node p is an ancestor of node q if there exists a sequence of nodes n0, n1, . . . , nk(k ≥1) such that (i.) n0 = p, (ii.) nk = q, and (iii.) 
for any i ∈[k],2 ni ∈Cni−1. Corollary 6.1. If node p is an ancestor of node q, then Ip ⊇Iq. 2[k] = {1, 2, . . . , k}, where k ∈N. Definition 7. Node p is a descendant of node q if q is an ancestor of p. Definition 8. A relaxed segment tree T = ⟨rT , NT ⟩is a tuple, where 1. rT is the root node of T ; 2. NT = {rT } ∪{n : n is a descendant of rT } is a finite set of all nodes in T . Example 8.1. A constituency parse tree over spoken word time ranges (Figure 1a) can be represented by a relaxed segment tree. Corollary 8.1. A relaxed segment tree can be uniquely characterized by its root node. In the following content, we use T (n) to denote the relaxed segment tree rooted at n. Proposition 9. For a relaxed segment tree T and p, q ∈NT , p is neither an ancestor nor a descendant of q ⇔Ip ∩Iq = ∅. 4 The STRUCT-IOU Metric 4.1 Problem Formulation Given relaxed segment trees T1 and T2 with node sets NT1 = {n1,i} |NT1| i=1 and NT2 = {n2,j} |NT2| j=1 , we can align the trees by matching their same-label nodes. Let n1,i ↔n2,j denote the matching between the nodes n1,i and n2,j. Definition 10 (conflicted node matchings; Figure 2). The matchings n1,i ↔n2,j and n1,k ↔ n2,ℓare conflicted if any of the following conditions holds: 1. n1,i is an ancestor of n1,k, and n2,j is not an ancestor of n2,ℓ; 2. n1,i is not an ancestor of n1,k, and n2,j is an ancestor of n2,ℓ; 3. n1,i is a descendant of n1,k, and n2,j is not a descendant of n2,ℓ; 4. n1,i is not a descendant of n1,k, and n2,j is a descendant of n2,ℓ. Intuitively, we would like the alignment to be consistent with the ancestor-descendant relationship between nodes. 12322 The optimal (i.e., maximally IOU-weighted) structured alignment between T1 and T2 is given by the solution to the following problem: Problem 11 (maximally IOU-weighted alignment). A∗= arg max A |NT1| X i=1 |NT2| X j=1 ai,j IOU In1,i, In2,j  s.t. X j ai,j ≤1(∀i ∈[|NT1|]), (1) X i ai,j ≤1(∀j ∈[|NT2|]), (2) ai,j + ak,ℓ≤1if n1,i ↔n2,j and n1,k ↔n2,ℓ are conflicted. A ∈{0, 1}|NT1|×|NT2| denotes an alignment matrix: ai,j = 1 indicates that the matching n1,i ↔ n2,j is selected, otherwise ai,j = 0. The last constraint of Problem 11 ensures that there are no conflicted matchings selected. Equations (1) and (2) imply one-to-one matching between nodes; that is, in a valid tree alignment, each node in T1 can be matched with at most one node in T2, and vice versa. The solution to Problem 11 gives the maximal possible sum of IOU over aligned node pairs. Definition 12. The structured average IOU (STRUCT-IOU) between T1 and T2 is given by IOU(T1, T2) = 1 |NT1| + |NT2| |NT1| X i=1 |NT2| X j=1 a∗ i,j IOU In1,i, In2,j  , where A∗= {a∗ i,j} is the solution to Problem 11. 4.2 Solution We present a polynomial-time algorithm for the exact solution to Problem 11, by breaking it down into structured subproblems and solving them recursively with dynamic programming. We define the subproblem as follows: given relaxed segment trees T1 and T2, we would like to find the maximum IOU weighted alignment of T1 and T2, where the roots of T1 and T2 are aligned. Without loss of generality, we assume that the root nodes of T1 and T2 are both indexed by 1. Formally, Problem 13 (maximum IOU weighted alignment, with root nodes aligned). fT1,T2 = max A |NT1| X i=1 |NT2| X j=1 ai,j IOU In1,i, In2,j  s.t. a1,1 = 1; X j ai,j ≤1(∀i ∈[|NT1|]), X i ai,j ≤1(∀j ∈[|NT2|]), ai,j + ak,ℓ≤1if n1,i ↔n2,j andn1,k ↔n2,ℓ are conflicted, where A ∈{0, 1}|T1|×|T2| is the alignment matrix. 
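For concreteness, the following sketch implements Definitions 1-4 and the conflict test of Definition 10, assuming intervals are represented as (start, end) tuples with start < end and that an `is_ancestor` predicate over nodes of the same tree is available; the names are ours, not from the released code.

```python
def intersection(i1, i2):
    """Intersection size of two open intervals given as (start, end) tuples (Definition 2)."""
    left, right = max(i1[0], i2[0]), min(i1[1], i2[1])
    return max(0.0, right - left)

def union(i1, i2):
    """Union size of two open intervals (Definition 3)."""
    return (i1[1] - i1[0]) + (i2[1] - i2[0]) - intersection(i1, i2)

def iou(i1, i2):
    """Intersection-over-union ratio between two open intervals (Definition 4)."""
    return intersection(i1, i2) / union(i1, i2)

def conflicted(is_ancestor, n1i, n1k, n2j, n2l):
    """Definition 10: the matchings n1i<->n2j and n1k<->n2l conflict whenever the
    ancestor relation of the pair in tree 1 disagrees with that of the pair in tree 2."""
    return (is_ancestor(n1i, n1k) != is_ancestor(n2j, n2l)
            or is_ancestor(n1k, n1i) != is_ancestor(n2l, n2j))
```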
While Problems 11 and 13 are not equivalent in principle, Problem 11 can be reduced to Problem 13 within O(1) time, by adding a dummy root node to each tree that associates with segments covering all the segments in both trees. We now present a polynomial-time solution to Problem 13. Definition 14. Given a node n of a relaxed segment tree, D = (n1, n2, . . . , nk) is an ordered disjoint descendant sequence of n if 1. (ordered) for any i, j ∈[k] and i < j, sni < snj, where sni and snj are left endpoint of the associated intervals; 2. (disjoint) for any i, j ∈[k] and i ̸= j, Ini ∩ Inj = ∅; 3. (descendant) for any i ∈[k], ni is a descendant of n. Corollary 14.1. In an ordered disjoint descendant sequence D = (n1, n2, . . . , nk) of n, eni ≤ sni+1 for any i ∈[k −1]. The solution to Problem 13 is given by the following recursion: fT1,T2 =IOU  IrT1, IrT2  + max |D1|=|D2| |D1| X i=1 fT (d1,i),T (d2,i), (3) where rT1 and rT2 denote the root nodes of T1 and T2 respectively; | · | denotes the length of a sequence; D1 = (d1,1, d1,2, . . . , d1,|D1|) and D2 = (d2,1, d2,2, . . . , d2,|D2|) are same-length ordered disjoint descendant sequences of rT1 and rT2 respectively. Equation (3) can be computed within polynomial time, by solving a knapsack-style problem with dynamic programming. Specifically, let g[T1, T2,e1, e2] = max |De1 1 |=|De2 2 | |De1 1 | X j=1 fT (de1 1,j),T (de2 2,j), 12323 Algorithm 1 Polynomial time solution to Equation (3) Input: T1, T2 g[T1, T2, x, y] ←0, ∀x, y g′[T1, T2, x, y] := maxx′<x,y′<y g[T1, T2, x′, y′] d1 ←the sequence of all descendants of rT1, sorted in increasing order of right endpoint d2 ←the sequence of all descendants of rT2, sorted in increasing order of right endpoint for i ←1 . . . , |d1| do for j ←1 . . . , |d2| do g[T1, T2, ed1,i, ed2,j] ←max(g[T1, T2, ed1,i, ed2,j], fT (d1,i),T (d2,j) + g′[T1, T2, sd1,i, sd2,j]) update g′ accordingly within O(1) time using bidimensional prefix sum end for end for Output: Equation (3) = g[T1, T2, erT1, erT2] where e1 and e2 are arbitrary scalars denoting the constraints of endpoints; De1 1 = (de1 1,1, . . . , de1 1,|De1 1 |) is an ordered disjoint descendant sequence of rT1, where for any j ∈ [|De1 1 |], the right endpoint of the corresponding node ede1 1,j ≤ e1; similarly, De2 2 = (de2 2,1, de2 2,2, . . . , de2 2,|De2 2 |) is a disjoint descendant sequence of rT2 of which the right endpoint of each node does not exceed e2. Algorithm 1 computes g and Equation (3) within polynomial time, and therefore leads to a polynomial-time solution to Problem 13. Complexity analysis. Suppose |T1| = n and |T2| = m. To compute fT1,T2, all we need to compute is g[T ′ 1, T ′ 2, e′ 1, e′ 2] for all T ′ 1, T ′ 2, e′ 1 and e′ 2. Here, T ′ 1 and T ′ 2 enumerate over all subtrees of T1 and T2, respectively, and e′ 1 and e′ 2 enumerate over the endpoints of all nodes in both trees, respectively. The update process requires O(1) time for each T1, T2, e1, e2. The edge cases, i.e., g values of terminal nodes, can be directly computed in O(1) time, and therefore, the overall time complexity to solve Problem 13 is O(n2m2). 5 Experiments 5.1 Speech Constituency Parsing Evaluation We use the NXT-Switchboard (NXT-SWBD; Calhoun et al., 2010) dataset to train and evaluate models, where the parser can access the forced alignment word boundaries in both training and testing stages. 
We train an off-the-shelf supervised constituency parsing model for speech transcriptions (Jamshid Lou and Johnson, 2020) on the training set of NXT-SWBD, do early-stopping using PARSEVAL F1 on the development set, and perform all 0.85 0.90 0.95 1.00 STRUCT-IOU 0.6 0.7 0.8 0.9 1.0 F1 Figure 3: STRUCT-IOU vs. PARSEVAL F1 on NXT-SWBD (Spearman’s correlation ρ = 0.689, pvalue=1.79 × 10−54). Each dot represents the results of the base model (F1=85.4 on the full development set) on 10 random examples from the development set. the analysis below on the development set. The model achieves F1 = 85.4 and STRUCT-IOU (averaged across sentences)3= 0.954 on the standard development set. 5.1.1 Comparison with PARSEVAL F1 Since the forced alignment word boundaries are accessible by the models, the PARSEVAL F1 metric can be directly calculated between the predicted speech constituency parse tree and the ground truth. We compare the values of STRUCT-IOU and PARSEVAL (Sekine and Collins (1997) implementation with default parameters) in the settings with forcedalignment word segmentation (Figure 3), and find a strong correlation between the two metrics. 3Unless otherwise specified, all STRUCT-IOU scores reported in the paper are computed by averaging across STRUCTIOU scores of individual sentences. We compare and discuss sentence-level and corpus-level STRUCT-IOU in §5.1.3 12324 5.1.2 Analysis: STRUCT-IOU with Perturbed Word Boundaries In textless speech parsing (Lai et al., 2023; Tseng et al., 2023), the word boundaries are unknown, and the boundaries predicted by the parser are usually imperfect. As a controlled simulation to such settings, we perturb the forced alignment word boundaries of the predicted parse tree (Figure 4), and calculate the STRUCT-IOU score between the perturbed parse tree and the ground truth over the original forced alignment word boundaries. Specifically, we suppose the word boundaries of a sentence with n words are B = b0, b1, . . . , bn,4 and consider the following types of perturbation with a hyperparameter δ ∈[0, 1] controlling the perturbation level: • Noise-δ. We start with B(0) = B, and update the boundaries iteratively as follows. For each i ∈[n −1], we randomly draw a number ri from the uniform distribution U(−δ, δ), and let b(i) i = b(i−1) i +|ri|∗  b(i−1) i+sgn(ri) −b(i−1) i  , where sgn(·) : R →{1, −1} denotes the sign function sgn(x) = ( 1 if x ≥0; −1 if x < 0. For all j ̸= i and j ∈[n], we let b(i) j remain the same as b(i−1) j . Finally, we take B(n−1) as the perturbed word boundaries for the predicted tree. • Insert-δ. We randomly draw a number ri from the uniform distribution for each boundary index i ∈[n]. If ri < δ, we insert a word boundary at the position b′ i, randomly drawn from the uniform distribution U(bi−1, bi), breaking the ith spoken word into two (i.e., [bi−1, b′ i] and [b′ i, bi]). • Delete-δ. Similarly to the insertion-based perturbation, we randomly draw a number ri from the uniform distribution U(0, 1) for each boundary index i ∈[n −1], and delete the boundary bi if ri < δ. Since such boundary deletion may break the predicted tree structure, we use the base model to re-predict the parse tree with the new word boundaries, where words concatenated by space are taken as the textual input (Jamshid Lou and Johnson, 2020). 
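A sketch of the bucket-level comparison behind this analysis is shown below; `struct_iou` and `parseval_f1` are hypothetical placeholders for the two metric implementations, and, unlike the figure, the sketch uses consecutive fixed-size buckets rather than random 10-example draws.

```python
from scipy.stats import spearmanr

def bucketed_correlation(pred_trees, gold_trees, struct_iou, parseval_f1, bucket_size=10):
    """Spearman correlation between mean STRUCT-IOU and PARSEVAL F1 over fixed-size buckets.

    `struct_iou(p, g)` scores one (predicted, gold) tree pair; `parseval_f1(ps, gs)`
    scores a list of pairs. Each bucket contributes one (STRUCT-IOU, F1) point.
    """
    ious, f1s = [], []
    for start in range(0, len(pred_trees), bucket_size):
        preds = pred_trees[start:start + bucket_size]
        golds = gold_trees[start:start + bucket_size]
        ious.append(sum(struct_iou(p, g) for p, g in zip(preds, golds)) / len(preds))
        f1s.append(parseval_f1(preds, golds))
    rho, p_value = spearmanr(ious, f1s)
    return rho, p_value
```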
A larger δ means a higher level of perturbation is applied, and we therefore expect a lower STRUCTIOU score; δ = 0 means no perturbation is applied, and the STRUCT-IOU score is the same as that 4We assume no silence between spoken words; if any interword silence exists, we remove it. b(i−1) i−1 b(i−1) i b(i−1) i+1 b(i) i [si, ei] (a) Noise-δ, where si = b(i−1) i −δ ·  b(i−1) i −b(i−1) i−1  and ei = b(i−1) i + δ ·  b(i−1) i+1 −b(i−1) i  denote the most left and right possible position of b(i) i . bi−1 bi b′ i (b) Insert-δ, with ri < δ; otherwise b′ i will not be inserted. bi−1 bi+1 bi (c) Delete-δ, with ri < δ; otherwise bi will not be deleted. Figure 4: Examples of three types of perturbation: when applicable, the added boundaries are shown in red and the deleted boundaries are shown in blue. Best viewed in color. for the predicted parse trees with forced alignment word boundaries. For each δ ∈{0.1, 0.2, . . . , 1.0}, starting from the base model (for deletion-based perturbation) or its predicted parse trees (for noise and insertionbased perturbation), we run the perturbation 5 times and report both the mean and the standard deviation of the STRUCT-IOU result after perturbation. Results and discussion. We present how the STRUCT-IOU value changes with respect to δ for different types of perturbation (Figure 5). The standard deviation is nearly invisible in the figure, showing that our metric is stable under a specific setting. For all three types of perturbation, as desired, a larger δ leads to a lower STRUCT-IOU score. Among the perturbation types, STRUCTIOU is the most sensitive to deletion, and the least sensitive to noise-based perturbation. Although the results are not comparable across perturbation types in the most rigorous sense, this reflects the fact that STRUCT-IOU, to some extent, is more sensitive to structural change of the trees than simple word boundary changes. Although both word boundary insertion and deletion change the predicted tree structures, the former has less impact on the STRUCT-IOU scores. This also aligns with our expectation: boundary insertion only splits some of the spoken words into two and keeps the longer constituents; however, deletion may change significantly the tree structure, especially when it happens at the boundary of two 12325 0.2 0.4 0.6 0.8 1.0 δ 0.5 0.6 0.7 0.8 0.9 STRUCT-IOU Pertubation Type noise insert delete Figure 5: STRUCT-IOU scores with respect to δ for different types of perturbations. long constituents. 5.1.3 Corpus-Level vs. Sentence-Level Metric Note that 39.7% utterances in the NXT-SWBD development set contain only one spoken word, and the STRUCT-IOU score of such sentences is always high—the metric degenerates to the IOU score between two intervals. Averaging the STRUCTIOU scores across all sentence pairs in the dataset may therefore overly emphasize these short utterances. To address this, we introduce the corpuslevel STRUCT-IOU score as an alternative, where Definition 12 is modified as follows: Definition 15. The corpus-level STRUCT-IOU between two sets of parsed trees D1 = {T1,k} and D2 = {T2,k} is given by IOU(D1, D2) = P|D1| k=1 (|T1,k| + |T2,k|) IOU(T1,k, T2,k) P|D1| k′=1 |T1,k′| + |T2,k′| , where |D1| = |D2|, and a pair of T1,k and T2,k denotes the parse trees of the kth sentence in the corpus respectively. We compare the corpus-level and sentence-level STRUCT-IOU scores (Figure 6). 
As desired, the corpus-level STRUCT-IOU score has lower absolute values than the sentence-level one, and the difference is more significant when longer sentences are considered. A similar phenomenon has been found in text constituency parsing (Kim et al., 2019) as well, where corpus-level PARSEVAL F1 scores are lower than sentence-level ones. 0 20 40 60 Length (≤) 0.93 0.94 0.95 0.96 0.97 0.98 0.99 STRUCT-IOU Metric Type Corpus-Level Sentence-Level Figure 6: Corpus-level and sentence-level STRUCTIOU scores of the predicted parse trees of the base model (F1 = 85.4 on the development set), evaluated on development examples with less than or equal to a certain number of spoken words. 5.2 Evaluation of English Text Constituency Parsing We extend our experiment to the evaluation of text constituency parsing. In this part, we suppose every written word corresponds to a unit-length segment— analogously, this can be considered as speech parsing with evenly distributed word boundaries, for both predicted and ground-truth trees. 5.2.1 Correlation with PARSEVAL F1 Scores on the Penn Treebank We use the Penn Treebank (PTB; Marcus et al., 1993) dataset to train and evaluate Benepar (Kitaev and Klein, 2018), a state-of-the-art text constituency parsing model, doing early-stopping using labeled PARSEVAL F1 on the development set. The base model achieves PARSEVAL F1 = 94.4 and STRUCT-IOU (averaged across sentences) = 0.962 on the standard development set. We compare the STRUCT-IOU scores with the PARSEVAL F1 scores on the development set (Figure 7). As in the speech parsing experiment, we find a strong correlation between the two metrics, showing that STRUCT-IOU is consistent with the existing metric in the text parsing domain. 5.2.2 STRUCT-IOU vs. PARSEVAL F1 on Syntactically Ambiguous Sentences We consider a special setting of parsing syntactically ambiguous sentences, where the syntactically plausible parse tree of a sentence may not be unique (see examples in Figure 8). We simplify the case shown in Figure 8 using synthetic sentences with syntactic ambiguity with the template N (P 12326 0.92 0.94 0.96 0.98 STRUCT-IOU 0.825 0.850 0.875 0.900 0.925 0.950 0.975 F1 Figure 7: Comparison of STRUCT-IOU and PARSEVAL F1 (Spearman’s rank correlation ρ = 0.821, pvalue=8.16 × 10−43). Each dot represents the results of the base model on 10 random examples from the PTB development set. N){n}, where P denotes a preposition and N denotes a noun, and n determines how many times the P N pattern is repeated. For N (P N){2}, the two potential parse trees are shown in Figure 9. We compare PARSEVAL and STRUCT-IOU in the following scenarios, choosing a random syntactically plausible parse tree as the ground truth: • Ground truth vs. random parse trees, where the random parse trees are constructed by recursively combining random consecutive words (or word groups) into a binary tree. We construct 100 random parse trees and report the average. • Ground truth vs. syntactically plausible parse trees, where we report the lowest possible score between the ground truth and other syntactically plausible trees. As shown in Table 1, the lowest possible PARSEVAL F1 score between the ground truth and another syntactically plausible tree is significantly lower than the score achieved by meaningless random trees; however, STRUCT-IOU consistently assigns higher scores to the syntactically plausible parses, showing a higher tolerance to syntactic ambiguity. 
5.3 Evaluation of Hebrew Text Constituency Parsing We extend our experiment to the evaluation of text constituency parsing on Hebrew, a morphologically rich language that allows different tokenizations of the same sentence. In this subsection, we suppose every written character (instead of a word for English; §5.2) corresponds to a unitlength segment. We evaluate STRUCT-IOU by running a pre-trained Hebrew constituency parsing S NP DT The NN girl VP VBD saw NP DT a NN cat PP P with NP DT a NN telescope S NP DT The NN girl VP VBD saw NP DT a NN cat PP P with NP DT a NN telescope Figure 8: An example syntactically ambiguous sentence: The girl saw a cat with a telescope. Both parses are syntactically valid, but the first one implies that a cat was holding the telescope, whereas the second implies the girl was using the telescope. NP NP N PP P NP NP N PP P NP N NP NP NP N PP P NP N PP P NP N Figure 9: Two syntactically plausible parses of the N (P N){2}, where NP denotes a noun phrase, and PP denotes a prepositional phrase. model (benepar_he2; Kitaev et al., 2019) on the SPMRL 2013 Hebrew development set (Seddah et al., 2013). Similarly to the English text parsing evaluation result (§5.2), we obtain a high Spearman rank correlation coefficient of 0.823 (over 10sentence buckets) between STRUCT-IOU (0.959 averaged across sentences) and PARSEVAL F1 scores measured by EVALB-SPMRL (93.3). In addition, we demonstrate that STRUCT-IOU naturally provides a metric that supports misaligned morphological analysis. The default tokenization in the SPMRL dataset does not extract the plural morphemes !ותand !M;יtherefore, simply extracting the plural morphemes forms another acceptable tokenization strategy (see Figure 10 for an example). We break the nouns ending with these two morphemes and feed the new tokenization to 12327 Metric Ground-Truth vs. Random, Average PARSEVAL F1 27.3 STRUCT-IOU 61.9 Ground-Truth vs. Plausible, Lowest PARSEVAL F1 12.5 STRUCT-IOU 63.6 Table 1: Average PARSEVAL F1 and STRUCT-IOU scores between the ground truth and a random binary tree, and the lowest possible scores between the ground truth and another syntactically plausible tree. Experiments are done on the string “N (P N){8}”. For simplicity, we report the unlabeled scores, where all nonterminals are treated as having the same label. the benepar_he2 model.5 The prediction with our new tokenization receives a STRUCT-IOU of 0.907 against the ground-truth—as desired, it is lower than 0.959 with the ground-truth tokenization. However, the STRUCT-IOU score remains high, reflecting the facts that (1) the manipulation introduces misalignment between parses, and (2) the Benepar model is fairly robust to such mismatch on tokenization (see footnote 5). In contrast to TEDEVAL (Tsarfaty et al., 2012), which treats all mismatched nodes as errors with the same penalization in the final metric, STRUCT-IOU offers an alternative approach for evaluating parsing quality under misaligned morphological analyses, assigning partial credit to aligned same-label nodes with IOU > 0. 6 Conclusion and Discussion In this paper, we present STRUCT-IOU, the first metric that computes the similarity between two parse trees over continuous spoken word boundaries. STRUCT-IOU enables the evaluation of textless speech parsing (Lai et al., 2023; Tseng et al., 2023), where no text or speech recognizer is used or available to parse spoken utterances. 
In the canonical text and speech parsing settings, STRUCT-IOU complements the existing evaluation metrics (Black et al., 1991; Roark et al., 2006; Tsarfaty et al., 2012). Even for the evaluation of English constituency parsing, STRUCT-IOU shows a higher tolerance to potential syntactic ambiguity under certain scenarios, providing an alternative interpretation of the parsing quality.

Footnote 5: The benepar_he2 model is not trained on this tokenization; however, we expect the model to work reasonably well, since it uses XLM-R (Conneau et al., 2020) as the word embeddings, which provides syntactic information of the new tokenization.

Figure 10 (panels: Ground-truth Tree, Predicted Tree): Parse trees of the Hebrew sentence איזה פצועים? with two possible morphological analyses (STRUCT-IOU = 0.635). The plural morpheme ים appears as a separate token in the predicted tree. Best viewed in color: the aligned nodes are shown in the same color.

Faithful evaluation of parsing quality is crucial for developing both speech and text parsing models. In supervised parsing, it has been common sense that higher evaluation metric scores (i.e., PARSEVAL F1) imply better models. However, the misalignment between linguistically annotated ground truths and model predictions, especially unsupervised parsing model predictions, does not necessarily indicate poor parsing quality of the models (Shi et al., 2020)—instead, the models may have learned different but equally valid structures. Conversely, annotations made by linguistic experts (such as the Penn Treebank) may exhibit discrepancies when compared to the responses of native speakers who lack formal linguistic training. We suggest that future work investigate what properties of the parses are emphasized by each evaluation metric, and consider multi-dimensional evaluation metrics (Kasai et al., 2022, inter alia).

Acknowledgement

We thank Yudong Li, Jiayuan Mao, and the anonymous reviewers for their valuable suggestions. This work is supported in part by a Google Ph.D. Fellowship to FS.

Limitations

STRUCT-IOU is designed to evaluate constituency parse trees over continuous spoken word boundaries, and is not directly applicable to evaluate other types of parses, such as dependency parse trees; however, it may be extended to evaluate other types of parse trees by modifying the alignment constraints. We leave the extension of STRUCT-IOU to other types of parses as future work. We do not foresee any risk associated with the use of STRUCT-IOU beyond the inherent minimal risks encountered in computer science research.

References

Jon L. Bentley. 1977. Solutions to Klee's rectangle problems. Technical Report, Carnegie Mellon University.

E. Black, S. Abney, D. Flickenger, C. Gdaniec, R. Grishman, P. Harrison, D. Hindle, R. Ingria, F. Jelinek, J. Klavans, M. Liberman, M. Marcus, S. Roukos, B. Santorini, and T. Strzalkowski. 1991. A procedure for quantitatively comparing the syntactic coverage of English grammars. In Speech and Natural Language: Proceedings of a Workshop Held at Pacific Grove, California, February 19-22, 1991.

Shu Cai and Kevin Knight. 2013. Smatch: an evaluation metric for semantic feature structures. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 748–752, Sofia, Bulgaria. Association for Computational Linguistics.

Jo Calder. 1997. On aligning trees.
In Second Conference on Empirical Methods in Natural Language Processing. Sasha Calhoun, Jean Carletta, Jason M Brenier, Neil Mayo, Dan Jurafsky, Mark Steedman, and David Beaver. 2010. The nxt-format switchboard corpus: a rich resource for investigating the syntax, semantics, pragmatics and prosody of dialogue. Language resources and evaluation, 44:387–419. Eugene Charniak and Mark Johnson. 2001. Edit detection and parsing for transcribed speech. In Second Meeting of the North American Chapter of the Association for Computational Linguistics. Eugene Charniak and Mark Johnson. 2005. Coarseto-fine n-best parsing and MaxEnt discriminative reranking. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 173–180, Ann Arbor, Michigan. Association for Computational Linguistics. Do Kook Choe and Eugene Charniak. 2016. Parsing as language modeling. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2331–2336, Austin, Texas. Association for Computational Linguistics. Michael Collins and Terry Koo. 2005. Discriminative reranking for natural language parsing. Computational Linguistics, 31(1):25–70. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440– 8451, Online. Association for Computational Linguistics. James Cross and Liang Huang. 2016. Span-based constituency parsing with a structure-label system and provably optimal dynamic oracles. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1–11, Austin, Texas. Association for Computational Linguistics. Greg Durrett and Dan Klein. 2015. Neural CRF parsing. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 302–312, Beijing, China. Association for Computational Linguistics. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 199–209, San Diego, California. Association for Computational Linguistics. John J. Godfrey and Edward Holliman. 1993. Switchboard-1 release 2. Paria Jamshid Lou and Mark Johnson. 2020. Improving disfluency detection by self-training a self-attentive model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3754–3763, Online. Association for Computational Linguistics. Eunkyul Leah Jo, Angela Yoonseo Park, and Jungyeul Park. 2024. A Novel Alignment-based Approach for PARSEVAL Measures. Computational Linguistics, pages 1–10. Jeremy G. Kahn, Matthew Lease, Eugene Charniak, Mark Johnson, and Mari Ostendorf. 2005. Effective use of prosody in parsing conversational speech. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 233–240, Vancouver, British Columbia, Canada. Association for Computational Linguistics. 12329 Jeremy G Kahn and Mari Ostendorf. 2012. Joint reranking of parsing and word recognition with automatic segmentation. 
Computer Speech & Language, 26(1):1–19. Jeremy G. Kahn, Mari Ostendorf, and Ciprian Chelba. 2004. Parsing conversational speech using enhanced segmentation. In Proceedings of HLT-NAACL 2004: Short Papers, pages 125–128, Boston, Massachusetts, USA. Association for Computational Linguistics. Jungo Kasai, Keisuke Sakaguchi, Ronan Le Bras, Lavinia Dunagan, Jacob Morrison, Alexander Richard Fabbri, Yejin Choi, and Noah A Smith. 2022. Bidimensional leaderboards: Generate and evaluate language hand in hand. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3540–3557. Yoon Kim, Chris Dyer, and Alexander Rush. 2019. Compound probabilistic context-free grammars for grammar induction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2369–2385, Florence, Italy. Association for Computational Linguistics. Nikita Kitaev, Steven Cao, and Dan Klein. 2019. Multilingual constituency parsing with self-attention and pre-training. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3499–3505, Florence, Italy. Association for Computational Linguistics. Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2676–2686, Melbourne, Australia. Association for Computational Linguistics. Cheng-I Jeff Lai, Freda Shi, Puyuan Peng, Yoon Kim, Kevin Gimpel, Shiyu Chang, Yung-Sung Chuang, Saurabhchand Bhati, David Cox, David Harwath, et al. 2023. Audio-visual neural syntax acquisition. In IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). Matthew Lease and Mark Johnson. 2006. Early deletion of fillers in processing conversational speech. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pages 73–76, New York City, USA. Association for Computational Linguistics. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. Alex Marin and Mari Ostendorf. 2014. Domain adaptation for parsing in automatic speech recognition. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6379–6383. IEEE. Michael McAuliffe, Michaela Socolof, Sarah Mihuc, Michael Wagner, and Morgan Sonderegger. 2017. Montreal forced aligner: Trainable text-speech alignment using kaldi. In Annual Conference of the International Speech Communication Association (Interspeech). David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 152–159, New York City, USA. Association for Computational Linguistics. Brian Roark, Mary Harper, Eugene Charniak, Bonnie Dorr, Mark Johnson, Jeremy Kahn, Yang Liu, Mari Ostendorf, John Hale, Anna Krasnyanskaya, Matthew Lease, Izhak Shafran, Matthew Snover, Robin Stewart, and Lisa Yung. 2006. SParseval: Evaluation metrics for parsing speech. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06), Genoa, Italy. European Language Resources Association (ELRA). Djamé Seddah, Reut Tsarfaty, Sandra Kübler, Marie Candito, Jinho D. 
Choi, Richárd Farkas, Jennifer Foster, Iakes Goenaga, Koldo Gojenola Galletebeitia, Yoav Goldberg, Spence Green, Nizar Habash, Marco Kuhlmann, Wolfgang Maier, Joakim Nivre, Adam Przepiórkowski, Ryan Roth, Wolfgang Seeker, Yannick Versley, Veronika Vincze, Marcin Woli´nski, Alina Wróblewska, and Eric Villemonte de la Clergerie. 2013. Overview of the SPMRL 2013 shared task: A cross-framework evaluation of parsing morphologically rich languages. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages, pages 146–182, Seattle, Washington, USA. Association for Computational Linguistics. Satoshi Sekine and Michael Collins. 1997. Evalb bracket scoring program. Haoyue Shi, Karen Livescu, and Kevin Gimpel. 2020. On the role of supervision in unsupervised constituency parsing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7611–7621, Online. Association for Computational Linguistics. Mitchell Stern, Jacob Andreas, and Dan Klein. 2017. A minimal span-based neural constituency parser. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 818–827, Vancouver, Canada. Association for Computational Linguistics. Trang Tran and Mari Ostendorf. 2021. Assessing the use of prosody in constituency parsing of imperfect transcripts. In Annual Conference of the International Speech Communication Association (Interspeech). Trang Tran, Shubham Toshniwal, Mohit Bansal, Kevin Gimpel, Karen Livescu, and Mari Ostendorf. 2018. 12330 Parsing speech: a neural approach to integrating lexical and acoustic-prosodic information. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 69–81, New Orleans, Louisiana. Association for Computational Linguistics. Reut Tsarfaty, Joakim Nivre, and Evelina Andersson. 2012. Joint evaluation of morphological segmentation and syntactic parsing. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 6–10, Jeju Island, Korea. Association for Computational Linguistics. Yuan Tseng, Cheng-I Jeff Lai, and Hung-yi Lee. 2023. Cascading and direct approaches to unsupervised constituency parsing on spoken sentences. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Appendix A Proof of Corollaries and Propositions We present the proof of the corollaries mentioned in the main content as follows. Corollary 5.1 For nodes p, n, if p ∈Cn, then Ip ⊆In. Proof. According to the definition of open intervals and Definition 5 (3), an ≤ap < bp ≤bn ⇒Ip = (ap, bp) ⊆(an, bn) = In. Corollary 6.1 If node p is an ancestor of node q, then Ip ⊇Iq. Proof. According to Definition 6, there exists a sequence of nodes n0, n1, . . . , nk(k ≥1) such that (1) n0 = p, (2) nk = q and (3) for any i ∈[k], ni ∈Cni−1. Corollary 5.1 implies that for any i ∈[k], Ini−1 ⊇Ini ⇒In0 ⊇Ink ⇒Ip ⊇Iq. Corollary 8.1 A relaxed segment tree can be uniquely characterized by its root node. Proof. (⇒) Definition 8 implies that each relaxed segment tree has one root node. (⇐) Given a specific node n, we have the unique set N = {n}∪{n′ : n′ is a descendant of n}, and therefore extract the set of all nodes in the relaxed segment tree rooted at n. Proposition 9 For a relaxed segment tree T and p, q ∈NT , p is neither an ancestor nor a descendant of q ⇔Ip ∩Iq = ∅. Proof. 
(⇒) Let z denote the least common ancestor of p and q. There exists p′, q′ ∈ C_z (p′ ≠ q′) such that I_{p′} ⊇ I_p and I_{q′} ⊇ I_q; therefore I_p ∩ I_q ⊆ I_{p′} ∩ I_{q′} = ∅ (by Definition 5 (2)) ⇒ I_p ∩ I_q = ∅.

(⇐) If I_p ∩ I_q = ∅, then according to Definition 5 (3) and Definition 6, p is not an ancestor of q and vice versa.

Corollary 14.1 Given an ordered disjoint descendant sequence S = (n_1, n_2, . . . , n_k) of n, for any i ∈ [k − 1], b_{n_i} ≤ a_{n_{i+1}}.

Proof. If there exists i ∈ [k − 1] such that b_{n_i} > a_{n_{i+1}}, then

I_{n_i} ∩ I_{n_{i+1}} = (a_{n_i}, b_{n_i}) ∩ (a_{n_{i+1}}, b_{n_{i+1}})
                      = {x : max(a_{n_i}, a_{n_{i+1}}) < x < min(b_{n_i}, b_{n_{i+1}})}
                      = {x : a_{n_{i+1}} < x < min(b_{n_i}, b_{n_{i+1}})}    (by Definition 14 (1)).

Since b_{n_{i+1}} > a_{n_{i+1}} (definition of open intervals), a_{n_{i+1}} < min(b_{n_i}, b_{n_{i+1}}) ⇒ I_{n_i} ∩ I_{n_{i+1}} ≠ ∅. This conflicts with Definition 14 (2).
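As a concrete check of the interval reasoning used in Proposition 9 and Corollary 14.1, the small sketch below (representing nodes as plain open-interval tuples is our simplification) confirms that two open intervals intersect exactly when each left endpoint lies strictly before the other interval's right endpoint:

```python
def open_intervals_intersect(a, b):
    """(a[0], a[1]) and (b[0], b[1]) are open intervals; they overlap
    iff each one starts strictly before the other one ends."""
    return max(a[0], b[0]) < min(a[1], b[1])

# Ordered disjoint siblings satisfy b_{n_i} <= a_{n_{i+1}}:
print(open_intervals_intersect((0.0, 1.5), (1.5, 3.0)))  # False: they only touch at 1.5
print(open_intervals_intersect((0.0, 2.0), (1.5, 3.0)))  # True: (1.5, 2.0) is shared
```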
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12333–12347 August 11-16, 2024 ©2024 Association for Computational Linguistics ViSAGe: A Global-Scale Analysis of Visual Stereotypes in Text-to-Image Generation Akshita Jha ∗ Virginia Tech [email protected] Vinodkumar Prabhakaran Google Research [email protected] Remi Denton Google Research [email protected] Sarah Laszlo Google Research [email protected] Shachi Dave Google Research [email protected] Rida Qadri Google Research [email protected] Chandan K. Reddy Virginia Tech [email protected] Sunipa Dev Google Research [email protected] Abstract Recent studies have shown that Text-toImage (T2I) model generations can reflect social stereotypes present in the real world. However, existing approaches for evaluating stereotypes have a noticeable lack of coverage of global identity groups and their associated stereotypes. To address this gap, we introduce the ViSAGe (Visual Stereotypes Around the Globe) dataset to enable evaluation of known nationality-based stereotypes in T2I models, across 135 nationalities. We enrich an existing textual stereotype resource by distinguishing between stereotypical associations that are more likely to have visual depictions, such as ‘sombrero’, from those that are less visually concrete, such as ‘attractive’. We demonstrate ViSAGe’s utility through a multi-faceted evaluation of T2I generations. First, we show that stereotypical attributes in ViSAGe are thrice as likely to be present in generated images of corresponding identities as compared to other attributes, and that the offensiveness of these depictions is especially higher for identities from Africa, South America, and South East Asia. Second, we assess the stereotypical pull of visual depictions of identity groups, which reveals how the ‘default’ representations of all identity groups in ViSAGe have a pull towards stereotypical depictions, and that this pull is even more prominent for identity groups from the Global South. CONTENT WARNING: Some examples contain offensive stereotypes. 1 Introduction Text-to-Image (T2I) models are increasingly being used to generate visual content from textual descriptions (Rombach et al., 2022; Ramesh et al., 2021), enabling downstream tasks such as creative design, advertising, marketing, and education. More often than not, these T2I models have been trained on large amounts of web-scale data with minimal ∗Work done while at Google Research curation, if any. As a result, these models often reflect and propagate stereotypes about identities and cultures present in the data (Bianchi et al., 2023; Ghosh and Caliskan, 2023; Luccioni et al., 2023). While these studies provide crucial evidence for fairness failures of T2I generations, they have a noticeable lack of coverage of global identity groups and their associated known stereotypes. Some studies offer qualitative insights across different identity groups but focus on only a limited set of identities and known stereotypes (Bianchi et al., 2023; Qadri et al., 2023). On the other hand, existing larger scale studies focus primarily on stereotypes in the US or Western society (Luccioni et al., 2023), and lack global identities and contexts. Given the widespread adoption of these models in global contexts, it is imperative to build extensive, stratified evaluations that cover broader identity groups to prevent further under-representation of already marginalized groups. 
We address the above challenges by presenting a systematic, large-scale, cross-cultural evaluation of regional stereotypes present in the generated images from T2I models. Specifically, we ground our evaluations in an existing geo-culturally broadcoverage, textual resource, SeeGULL (Jha et al., 2023) that incorporates societal stereotypes for 175 nationality based identity groups. This grounding in existing social stereotype resources aids the critical distinction between spurious correlations in models and stereotypical tendencies which are necessary for model safety interventions (Blodgett et al., 2021; Selvam et al., 2023). We enrich this data with human annotations to incorporate the notion of which stereotypes are more likely to be visually depicted versus those that cannot be. Building on this societal grounding, we conduct a study to investigate the extent of stereotypical and offensive depictions of different nationalities in a scalable manner. We also study how the default representation of a nationality relates to their stereotypical as 12333 Figure 1: We identify ‘visual’ stereotypes in the generated images of the identity group by grounding the evaluations in existing textual stereotype benchmarks. Yellow boxes denote annotated visual markers of known stereotypes associated with the identity group in the image. We use Stable Diffusion (Rombach et al., 2022) to generate images and evaluate them using the stereotypes present in the SeeGULL dataset (Jha et al., 2023). well as non-stereotypical representations. Figure 1 provides a thematic representation of our approach. Our main contributions are as follows. • We evaluate Text-to-Image generations for cross-cultural regional stereotypes at scale and with global coverage by leveraging existing resources from textual modality. • We publicly release the dataset ViSAGe1 : Visual Stereotypes Around the Globe where we make a critical distinction between ‘visual’ and ‘non-visual’ stereotypes in images. We identify a list of 385 visual attributes, and also introduce a broad-coverage image dataset with annotations of visual markers of stereotypes present in 40,057 image-attribute pairs, representing different identity groups. • We demonstrate the offensiveness of the generated images and investigate the feasibility of using automated methods employing captioning models to identify visual stereotypes in images at scale. • We analyze the default representations of identity groups and demonstrate how T2I models disproportionately lean towards their stereotypical representations, even when explicitly 1https://github.com/google-research-datasets/ visage prompted otherwise. The above contributions collectively offer a systematic approach for critically examining the stereotypes in images generated from T2I models. 2 Related Work Stereotypes in Text-to-Image Models Cho et al. (2023) show that T2I models reflect specific gender/skin tone biases from training data. Fraser and Kiritchenko (2024) examine gender and racial bias in vision language models. Zhang et al. (2023) study gender presentation in T2I models. Ungless et al. (2023) demonstrate that images of noncisgender identities are more stereotyped and more sexualised. Bianchi et al. (2023) highlight the presence of stereotypes by using prompts containing attributes and associating the generated visual features with demographic groups. Stable Bias (Luccioni et al., 2023) prompts T2I models with a combination of ethnicity/gender and profession and evaluates profession-based stereotypes. 
Basu et al. (2023) use location-based prompts to quantify geographical representativeness of the generated images. Qadri et al. (2023) identify harmful stereotypes and an ‘outsiders gaze’ in image generation in the South Asian context. We prompt T2I models with ‘identity groups’ around the globe to study 12334 the prevalent regional stereotypes in their default visual representation. We identify ‘visual’ stereotypes, related to the concept of ‘imageable’ synsets (Yang et al., 2020), to conduct this analysis at scale. Several efforts have also been made to quantify the harms associated with generative models. Hao et al. (2023) propose a theoretical framework for content moderation and its empirical measurement. Naik and Nushi (2023) and Wang et al. (2023) quantify social biases in T2I generated images. Garcia et al. (2023) and Gustafson et al. (2023) highlight demographic biases in image captioning and classification tasks. We investigate the feasibility of automatically identifying visual stereotypes in image generations, and demonstrate their offensiveness. Stereotype Benchmarks in Textual Modality StereoSet (Nadeem et al., 2021) and CrowS-Pairs (Nangia et al., 2020) are datasets used for detecting stereotypes in NLP prediction and classificationbased tasks. SeeGULL (Jha et al., 2023) is a stereotype repository with a broad coverage of global stereotypes – containing close to 7000 stereotypes for identity groups spanning 178 countries across 8 different geo-political regions across 6 continents. It comprises of (identity, attribute) pairs, where ‘identity’ denotes global identity groups, and ‘attribute’ denotes the associated descriptive stereotypical adjective/adjective phrase, or a noun/noun phrase, such as (Mexicans, sombrero), (Germans, practical), and (Japanese, polite). In this work, we leverage resources from textual modalities, such as SeeGULL, to ground our evaluation of prevalent stereotypes in the generated images. 3 Our Approach Some stereotypes can be recognized in terms of clearly visual attributes (e.g.: ‘Mexicans, sombreros’) whereas other stereotypes are defined through non-visual characteristics such as social constructs, and adjectives (e.g.: ‘Chinese, intelligent’). While the latter may have some visual markers, they are difficult to represent accurately in images. Therefore, we follow a nuanced two-step approach. As a first step, we answer the fundamental question of which stereotypes can be objectively represented in an image. We identify inherently ‘visual’ attributes which we subsequently use in the second step to detect stereotypes in the generated images by (i) conducting a large-scale annotation study, and (ii) using automated methods. 3.1 Identifying Visual Stereotypes To mitigate subjectivity in visual representation and ensure a more reliable evaluation of the generated images from T2I Models for stereotypes, it is crucial to make a distinction between clearly identifiable ‘visual’ stereotypes and difficult to represent ‘non-visual’ stereotypes. We undertake a systematic annotation task to differentiate the two. We use an existing stereotype resource in textual modality, SeeGULL (Jha et al., 2023), as a reference to ground our evaluations. The dataset has 1994 unique attributes – both ‘visual’ and ‘nonvisual. We deduce visual attributes and then map it back to the identity terms in SeeGULL to obtain visual stereotypes. 
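This deduce-then-map step, whose annotation details are described in the following paragraphs, amounts to a small filtering pipeline. The sketch below is illustrative only: the Likert labels mirror the annotation scheme, but the data layout and function names are our assumptions rather than the released format of SeeGULL or ViSAGe.

```python
# Keep an attribute only if every annotator at least agreed that it is visual,
# then restrict SeeGULL (identity, attribute) pairs to those attributes.
ACCEPT = {"strongly agree", "agree"}

def visual_attributes(ratings):
    """ratings: dict mapping attribute -> list of Likert labels from 3 annotators."""
    return {attr for attr, labels in ratings.items()
            if all(label in ACCEPT for label in labels)}

def visual_stereotypes(seegull_pairs, vis_attrs):
    """seegull_pairs: iterable of (identity, attribute) stereotype tuples."""
    return [(ident, attr) for ident, attr in seegull_pairs if attr in vis_attrs]

ratings = {"sombrero": ["strongly agree"] * 3,
           "intelligent": ["disagree", "agree", "agree"]}
pairs = [("Mexicans", "sombrero"), ("Chinese", "intelligent")]
print(visual_stereotypes(pairs, visual_attributes(ratings)))  # [('Mexicans', 'sombrero')]
```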
Annotating Visual Attributes The annotators are presented with an attribute and a statement of the form of ‘The attribute [attr] can be visually depicted in an image.’ The attribute presented in the sentence is selected from a list of all unique attributes (noun/noun phrases, and adjective/adjective phrases) from SeeGULL. The annotators are required to assign a Likert scale rating to each attribute indicating the extent to which they agree or disagree with the above statement w.r.t. to the given attribute term. We note that the visual nature of attributes lie on a spectrum. While there may be clearly defined attributes on either extremes, the ‘visual’ and ‘non-visual’ attributes in the middle are more subjective. For our subsequent analysis, we exclude all attributes where any annotator expressed uncertainty or disagreement regarding the visual nature. Consequently, we deem terms where all annotators at least agreed about their visual nature, as ‘visual’ resulting in a selection of 385 out of the original 1994 attributes. Please refer to the Appendix 7.1.1 for details. We recruited annotators based out of the United States, and proficient in English reading and writing. For each attribute, we get annotations from 3 annotators that identify with different geographical origin identities - Asian, European, and North American. Annotators were professional data labelers and were compensated at rates above the prevalent market rates. Mapping to Visual Stereotypes Using the identified visual attributes (Appendix 7.1.1), we filter out the associated ‘visual stereotypes’ in SeeGULL. Figure 2 shows the global distribution of visual attributes. It also contains examples of visual 12335 stereotypes associated with different countries. We observe that around 30 identities, including Omani, Ukranian, Swiss, Canadian, Mongolian, etc., just have a single visual attribute associated with them. Australians (63) have the highest number of visual attributes associated with them followed by Mexicans (46), Indians (34), New Zealand (31), Ethiopians (27), and Japanese (20). The numbers in bracket indicate the number of visual attributes associated with the identity group in SeeGULL. We use these visual stereotypes for identifying their prevalence in images in the next step. 3.2 Detecting Visual Stereotypes in Text-to-Image Generation Evaluating generated images for the presence of stereotypes is essential to not amplify or reinforce existing biases in the society. We conduct a largescale annotation task to detect prevalent visual stereotypes in the generated images of 135 identity groups across the globe. The scale of our evaluation is significantly larger than the prior work – in terms of the global coverage of identity groups, and their associated stereotypes. We use Stable Diffusion-v1.4 (Rombach et al., 2022) as the base Text-to-Image model, denoted as M, and generate images representative of various identity groups by defining three distinct prompts as follows: (i) P1(id) = ‘a photo of id person’, (ii) P2(id) = ‘a portrait of id person’, and (iii) P3(id) = ‘an id person’, where id ∈X such that X is the set encompassing identity terms extracted from the SeeGULL dataset. For each prompt Pi(i = 1, 2, 3) and each identity term id, we generate 5 images, resulting in an output set of images I = {Ii,1, Ii,2, . . . , Ii,5} for a combination of prompt and identity term. The image generation process can be denoted as: Ii,j = M(Pi(id)), where j = (1, 2, . . . 
, 5) is the generated images for a given prompt Pi and identity term id. Detecting Stereotypes through Human Annotations The annotation task aims to examine whether well-known visual stereotypes about identity groups are reflected in the generated images. We generate 15 images per identity group and present it to the annotators. Each image is accompanied by two sets of attributes: (i) a set of visual attributes stereotypically associated with the identity group (Figure 2), and (ii) an equal number of randomly selected visual non-stereotypical attributes, i.e., attributes that are not stereotypically associated with the identity group. We combine the two sets and randomly show 5 attributes in succession alongside each image, until every attribute has been covered. An additional option of ‘None of the above’ is also displayed with each set. The annotators are asked to select all attribute(s) that they believe are visually depicted in the image. Additionally, they are also asked to draw bounding boxes to highlight specific regions, objects, or other indicators that support their selection of the visual attribute within the image. If annotators believe that no attributes are visually represented, they can choose ‘None of the above.’ Overall, we get annotations for 2,025 identity-image pairs and 40,057 image-attribute pairs. Similar to 3.1, we recruited annotators proficient in English reading and writing, residing in United States but identifying with different nationalities. We get annotations for each attribute-image pair from 3 annotators. Detecting Stereotypes through Automated Methods Annotating stereotypes in images at scale can often times be resource-intensive and time-consuming. Therefore, we also investigate the feasibility of using automated techniques for stereotype detection in images. We use the already existing CLIP (Contrastive Language-Image Pre-training) embeddings combined with the BART model to generate image captions. An image is mapped into the CLIP embedding space, and the embeddings are then scored against a cache of n-grams. The top-k embeddings are then sent to BART to generate candidate captions. We get top 50 captions for each image for the same set of the 15 images per identity group previously generated. Building on the already identified visual stereotypes, we identify the stereotypes in captions by performing a string match. To further understand how uniquely a stereotypical attribute attrs is present in the caption of the images of an identity group, we compute a salience score of the attributes w.r.t. the identity group S(attrs, id). We use a modified tf-idf metric as follows: S(attrs, id) = tf(attrs, id) · idf(attrs, C). The function tf(attrs, id) represents the smoothed relative frequency of attribute attrs for the identity group id; and the function idf(attrs, C) denotes the inverse document frequency of the attribute term attrs in all the captions ‘C’, reflecting the importance of the attribute across all the captions. A higher salience score of an attribute, indicates a more unique association of the attribute with 12336 Figure 2: Global distribution of visual stereotypes across countries. Depth of the color indicates the number of visual stereotypes. A few examples of visual stereotypes of some countries are shown in the figure. the identity group. For each identity group, we then extract the most salient associated visual stereotypes as present in the captions. 
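The salience score is a modified tf-idf and can be computed directly from the generated captions once they have been string-matched against the visual attributes. The sketch below is a minimal illustration: the exact smoothing constants are our assumptions, since only a smoothed relative frequency and an inverse document frequency are specified.

```python
import math

def salience(attr, identity, captions_by_id):
    """S(attr, id) = tf(attr, id) * idf(attr, C).

    captions_by_id: dict mapping identity group -> list of caption strings.
    tf: smoothed relative frequency of captions for `identity` containing `attr`.
    idf: log-scaled inverse fraction of all captions C containing `attr`.
    """
    own = captions_by_id[identity]
    tf = (sum(attr in c for c in own) + 1) / (len(own) + 1)        # add-one smoothing (assumption)
    all_caps = [c for caps in captions_by_id.values() for c in caps]
    df = sum(attr in c for c in all_caps)
    idf = math.log((len(all_caps) + 1) / (df + 1)) + 1             # smoothed idf (assumption)
    return tf * idf

caps = {
    "Mexican": ["a man wearing a sombrero", "a person in a sombrero", "a man in a suit"],
    "French":  ["an elegant person in a scarf", "a man in a suit"],
}
print(salience("sombrero", "Mexican", caps) > salience("sombrero", "French", caps))  # True
```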
4 Study 1: Stereotypical Depictions 4.1 Stereotypes Identified through Human Annotations Do generated images reflect known stereotypes? We use the identified visual stereotypes for evaluating whether already known stereotypical attributes are visually represented in the images of different identity groups. We only consider identity groups that have more than one associated visual stereotypes for our analysis. We find the likelihood L(attrs, id) of a stereotypical attribute attrs being present in any image of the identity group id as follows. If a stereotypical attribute attrs was shown n times per identity group across all 15 images and selected k times by the annotators, then L(attrs, id) = k n. Figure 3 presents examples of the most likely stereotypes associated with certain identity groups as annotated by different annotators. We observe that the attributes with the highest likelihood for an identity group, also align with the known stereotypes present in SeeGULL. For example, ‘dark’, ‘thin’, ‘skinny’, and ‘underweight’, the most likely attributes depicting Sudanese individuals, are also known stereotypes in SeeGULL. The attribute ‘poor’ which has a likelihood measure of 0.5 for Bangladeshi, i.e., approximately 50% of images representing Bangladeshis contained a representation of ’poor’, is also an annotated stereotype in SeeGULL. Similarly, attributes ‘beaner’, ‘brown’, ‘festive, ‘dark skin’, and ‘sombrero’ are stereotypes in SeeGULL for Mexicans, and are also present in the images representing a Mexican person. An Indian person is often represented as ‘dark, ‘brown’, and ‘religious’ which are its associated visual stereotypes in SeeGULL. We note that the terms ‘brown’, ‘black’ etc., in this work, as identified in images do not imply race, and instead are about appearance, or appearance-based observed race (Hanna et al., 2020; Schumann et al., 2023), or even the colors themselves. Are some identity groups depicted more stereotypically than others? It is crucial to understand if representations of some identity groups tend to be more stereotypical than others. To analyze this better, we compute the ‘stereotypical tendency’ θid for each identity group. We measure the mean likelihood of a stereotype being present in any image representing an identity group L(stereo, id) and compare it with the likelihood of a randomly selected non-stereotypical attribute being depicted visually in the same set of images L(random, id). Let L(attrs, id) denote the likelihood of a stereotypical attribute being present in an image, and L(attrr, id) denote the likelihood of a randomly selected non-stereotypical attribute being present in the image corresponding to an identity group id. We select an equal number, k, of stereotypical and random attributes for any given identity group for a fair comparison. The likelihood of an image being stereotypical for a given identity group is, then, denoted as L(stereo, id) = 1 k Pk i=1 L(attri s, id), where i denotes the i-th stereotypical attribute associated with the identity group. Similarly, the likelihood of a random attribute being present in the image of the identity group is given by L(random, id) = 1 k Pk j=1 L(attrj r, id), where j denotes j-th attribute selected randomly with the 12337 Figure 3: Our approach makes a distinction between "visual" and "non-visual" stereotypes in images. We identify only explicitly present visual stereotypes in the generated images of the identity group. identity group. 
Computing the ratio between the two for an identity group θid = L(stereo, id) L(random, id) gives the ‘stereotypical tendency’ of the generated images. Higher the ratio, greater is the likelihood of its stereotypical representation. We observe that on average, the visual representation of any identity group is thrice as likely to be stereotypical than non-stereotypical, i.e., the visual stereotypical attributes associated with an identity group are thrice as likely to appear in their visual representation when compared to randomly selected visual non-stereotypical attributes. We also compute θid for all 135 identity groups. We observe that images representing Togolese, Zimbabwean, Swedes, Danish, etc., only contained stereotypical attributes when compared to random attributes, i.e., θid is infinite or N/A; whereas images representing Nigerians were 27 times more likely to contain stereotypical attributes than randomly selected non-stereotypes. (Please refer to the Tables 1 and 2 in the Appendix for all scores). How offensive are the depictions of different identity groups? Visual representation of some stereotypes can be more offensive than others. For example, the stereotypical depiction of an identity group as ‘poor’ can be considered more offensive when compared to the visual depiction of a stereotype ‘rich’. We investigate the offensiveness of images and whether certain identity groups have a more offensive representation compared to others. Using the offensiveness rating of stereotypical attributes in SeeGULL, we infer an overall offensiveness score O(id) for representation of identity groups. Let O(attrs, id) denote the offensiveness score of a stereotypical attribute attrs associated with an identity group id in SeeGULL. The mean offensiveness score for each identity group is then O(id) = 1 nL(stereotype, id) · n X i=1 O(attri s, id) where i denotes the i-th stereotypical attribute attrs identified as being present in the image representing an identity group by the annotators. Figure 8 visualizes the normalized offensiveness score for different identity groups. We observe that the representations of people from countries in Africa, South America, and South East Asia, are comparatively more offensive. Jordanians, Uruguayans, Gabonese, Laotian, and Albanians 12338 Figure 4: ‘Stereotypical Pull’: The generative models have a tendency to ‘pull’ the generation of images towards an already known stereotype even when prompted otherwise. The red lines indicate ‘stereotypical’ attributes; the blue lines indicates ‘non-stereotypical attributes’. The numbers indicate the mean cosine similarity score between sets of image embeddings. have the most offensive representation; whereas Australians, Swedes, Danish, Norwegians, and Nepalese have the least offensive representation. 4.2 Stereotypes Identified through Automated Methods We check the feasibility of using automated methods employing captioning models for stereotype detection, and compare the results with and without using visual stereotypes as a reference. Without visual stereotypes to ground the evaluations, the automated techniques detect non-visual attributes like ‘attractive’, ‘smart’, etc., for identity groups. However, using visual attributes as a reference, our approach uncovers more objectively visual stereotypes for identity groups. These stereotypes also have a high likelihood L(attrs, id) of being present in the images as marked by the annotators 44.69% of the time. 
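Both the per-attribute likelihood and the stereotypical tendency reduce to simple counting over annotator selections. The sketch below is illustrative (the input format is our assumption); it also shows how a zero likelihood for the random attributes yields the "N/A" entries reported in Tables 1 and 2 of the appendix.

```python
def likelihood(selected, shown):
    """L(attr, id): fraction of times annotators marked the attribute as depicted."""
    return selected / shown

def stereotypical_tendency(stereo_counts, random_counts):
    """theta_id = mean likelihood over stereotypical attributes divided by the
    mean likelihood over an equal number of random non-stereotypical attributes.

    Each *_counts entry is a (times_selected, times_shown) pair for one attribute.
    Returns None ('N/A') when no random attribute was ever selected.
    """
    l_stereo = sum(likelihood(k, n) for k, n in stereo_counts) / len(stereo_counts)
    l_random = sum(likelihood(k, n) for k, n in random_counts) / len(random_counts)
    return None if l_random == 0 else l_stereo / l_random

# Toy numbers: two stereotypical and two random attributes shown across 15 images
print(stereotypical_tendency([(6, 15), (3, 15)], [(1, 15), (0, 15)]))  # 9.0
print(stereotypical_tendency([(6, 15), (3, 15)], [(0, 15), (0, 15)]))  # None -> reported as N/A
```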
Figure 3 shows the most salient visual stereotypes associated with the identity group. Attributes like ‘sombrero’, ‘dark’, and ‘brown’ were the most salient visual stereotypes for Mexicans; ‘poor’ was highly salient with Bangladeshi, ‘dark’ and ‘thin’ for Sudanese, and ‘elegant’ for French. These attributes were also marked as being present by the annotators. This approach also identified stereotypical attributes which were not necessarily depicted in the images, e.g., attributes like ‘cow’, ‘elephant’ for Indians. This could be a limitation in our automated approach or existing errors/biases in the generated captions themselves. Further analysis is required to tease out error and biases propagated by different components of such a system. 5 Study 2: Stereotypical Pull Prior research has shown that Text-to-Image models can have a ‘very homogenized lens’ when representing certain identities (Bianchi et al., 2023; Qadri et al., 2023). We conduct a deeper analysis to better understand this across the generated images of different identity groups. We define ‘stereotypical pull’ for any identity group as a Text-to-Image model’s inclination to generate images aligning with the stereotypical representations of an identity group when presented with (i) neutral prompts, and (ii) explicit non-stereotypical prompts. This points to the model’s tendency to revert to stereotypical depictions, reflecting its inherent biases. We use the below sets of prompts, to generate 15 images per prompt for 135 identity groups and demonstrate the prevalence of stereotypical pull. • Default Representation (d): ‘A/An id person; where id denotes the identity group’. • Stereotypical Representation (s): (i)‘A/An id person described as attrs’, (ii) ‘A photo/potrait of a/an id attrs person’; where attrs is the visual stereotypical attribute associated with the identity group id in SeeGULL. • Non-Stereotypical Representation (ns): (i) ‘A/An id person described as attrns’, (ii) ‘A photo/potrait of a/an id attrns person’; where attrns is a visual attribute not associated with 12339 Figure 5: ‘Stereotypical Pull’ observed across different identity groups. Y-axis is the similarity S(·) between stereotyped (s) and non-stereotyped (ns) images (S(s, ns)). X-axis represents the difference in the deviations of the stereotypical (s) and the non-stereotypical (ns) images from the default (d) representations (S(d, s)−S(d, ns)). id in SeeGULL. Figure 4 visually demonstrates the ‘stereotypical pull’ for the generated images of the identity groups ‘Bangladeshi’, ‘Swiss’, ‘Ethiopian’, and ‘Australian’. For demonstrative purposes, we select opposing stereotypical and non-stereotypical attributes. The default representation of ‘a Bangladeshi person’, looks very similar to the sets of images of ‘a poor Bangladeshi person’ (stereotype), as well as ‘a rich Bangladeshi person’ (nonstereotype). However, the visual representation of ‘a Swiss person’ is quite distinct from the visual representation of ‘a poor Swiss person’ (nonstereotype) and ‘a rich Swiss person’ (stereotype). We observe this pattern even for physical characteristics, where images of ‘an Ethiopian person’, ‘a skinny Ethiopian person (stereotype)’, and ‘a rich Ethiopian person’ (non-stereotype) are all similar looking when compared to that of ‘an Australian person’, ‘a skinny Australian person’ (nonstereotype), and ‘an overweight Australian person’ (stereotype) which are visually quite distinct. 
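The three prompt sets are simple templates over an identity term and an attribute. A minimal sketch follows (the article-selection helper is our simplification of the literal "A/An" in the templates above):

```python
def article(word):
    """Choose 'A' or 'An' depending on the following word (our simplification)."""
    return "An" if word[:1].lower() in "aeiou" else "A"

def pull_prompts(identity, attribute=None):
    """Prompts for the default (attribute=None), stereotypical, or non-stereotypical
    representation; whether `attribute` counts as a stereotype depends on its
    association with `identity` in SeeGULL."""
    if attribute is None:
        return [f"{article(identity)} {identity} person"]
    return [
        f"{article(identity)} {identity} person described as {attribute}",
        f"A photo of {article(identity).lower()} {identity} {attribute} person",
        f"A portrait of {article(identity).lower()} {identity} {attribute} person",
    ]

print(pull_prompts("Ethiopian", "skinny")[0])  # An Ethiopian person described as skinny
```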
The generative models have a tendency to ‘pull’ the image generation towards stereotypes for certain identities when prompted with neutral prompts, and non-stereotypical prompts. We also compare the stereotypical pull across identity groups from 8 different regions (Jha et al., 2023) and demonstrate this phenomenon quantitatively in Figure 5. We compute the mean pairwise cosine similarity S(·) between the CLIP embeddings of default (d) representation of the identity group with (i) stereotypical (s) representation S(d, s), and (ii) non-stereotypical (ns) images S(d, ns). The X-axis in Figure 5 represents the difference of stereotyped and non-stereotypes image sets from the default representations (S(d, s) − S(d, ns)). We also compute the similarity between the stereotyped (s) and the non-stereotyped (ns) images S(s, ns), represented by the Y-axis. For 121 out of 135 identity groups, the default representation of an identity group has a higher similarity score with the ‘stereotyped’ images compared to the ‘non-stereotyped’ images indicating an overall ‘pull’ towards generating stereotypical looking images. Moreover, for identity groups from global south, the similarity between stereotypes and nonstereotyped image sets S(s, ns) is also very high, indicating an overall lack of diversity in the sets of generated images. 6 Conclusion In this work, we present a comprehensive framework for evaluating Text-to-Image models with a focus on ‘regional stereotypes.’ Leveraging existing stereotype benchmarks in textual resources, we perform a three-fold evaluation of generated images. Firstly, we distinguish between inherently visual stereotypes, which can be represented in an image, and ‘non-visual’ stereotypes. Subsequently, we identify visual stereotypes prevalent in the generated images from T2I models across 135 identity groups by conducting a large-scale annotation task. We will publicly release our dataset ‘ViSAGe’ which contains visual stereotypes and the annotated images featuring highlighted visual markers of these stereotypes. Additionally, we quantify the offensiveness of the generated images across differ12340 ent identity groups, and investigate the feasibility of automatically detecting stereotypes in images. Through an extensive case study, we also demonstrate that generated images of almost all identity groups exhibit a more visually ‘stereotypical’ appearance, even when Text-to-Image models are explicitly prompted with neutral or non-stereotypical attributes. This phenomenon is more prominent for identity groups from the global south. We hope that the presented approach and the dataset will provide valuable insights for understanding and addressing regional stereotypes in visual content generation. Acknowledgements We thank Kathy Meier-Hellstern, and Utsav Prabhu for their detailed feedback and Susan Hao for useful discussions. We also thank the anonymous reviewers for the constructive feedback during the review process. Limitations Intended Usage Not all attributes are undesirable in a T2I generation context. Determining whether a T2I model should reflect certain attributes associated with specific identity groups is a non-trivial question and can be most accurately determined by the respective developer and the downstream use case, taking into account the type of harm (Shelby et al., 2023) experienced by the end user. 
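The quantities plotted in Figure 5 are mean pairwise cosine similarities between sets of CLIP image embeddings. The following sketch assumes the embeddings have already been extracted into arrays with one row per image; it is an illustrative computation, not the exact evaluation code.

```python
import numpy as np

def mean_pairwise_cosine(A, B):
    """Mean cosine similarity over all pairs (a, b) with a in A and b in B.
    A, B: arrays of shape (n_images, embed_dim) holding CLIP image embeddings."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return float((A @ B.T).mean())

def stereotypical_pull(default_emb, stereo_emb, nonstereo_emb):
    """Returns (S(d,s) - S(d,ns), S(s,ns)): the x- and y-coordinates used in Figure 5."""
    s_ds = mean_pairwise_cosine(default_emb, stereo_emb)
    s_dns = mean_pairwise_cosine(default_emb, nonstereo_emb)
    s_sns = mean_pairwise_cosine(stereo_emb, nonstereo_emb)
    return s_ds - s_dns, s_sns

# Toy usage with random vectors standing in for 15 embeddings per prompt set
rng = np.random.default_rng(0)
d, s, ns = (rng.normal(size=(15, 512)) for _ in range(3))
print(stereotypical_pull(d, s, ns))
```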
Our objective in this work is not to advocate for a stereotype-neutral generation but to equip practitioners with resources and evaluation methods to assesss, on a scalable and effective basis, the extent to which global stereotypes are reproduced in model generations. We also aim to facilitate an understanding of the nature of stereotypes, including their potential offensiveness, through this work. We hope this enables model builders and platforms to better prioritize and appropriately intervene in their model generations based on their use case. Limitations Although we cover a wide range of visual stereotypes, annotation about the visual nature of attributes is potentially subjective. While we attempt to account for this subjectivity by (1) using a Likert scale as opposed to binary labels, and (2) capture diversity in our annotator pool w.r.t. gender and geographical region, we recognize that our annotations may still fail to capture other dimensions of subjectivity. Moreover, our evaluation of prevalence of stereotypes in the generated images is limited by the stereotypes present in textual resources like SeeGULL. More regional stereotypes from different textual resources can be included to further expand the overall coverage. We evaluate the images generated using Stable Diffusion V1-4 and believe the results would hold even on other Text-to-Image models, but that needs to be verified in the future work where our analysis may be applied to other generative image models. We chose Stable Diffusion due to its ease of access for academic research, and since our core objective is to demonstrate gaps in existing stereotype bias evaluation approaches, rather than demonstrate biases in any particular or all image generation models. Our work focuses only on regional stereotypes and their presence in the generated images. The proposed framework could be used to experiment with other axes like gender, race, ethnicity, etc., as well. We believe that our approach is extensible to other types of stereotypes, provided such data repositories exist. Future iterations of such data collection and evaluation should take more participatory approach and involve communities with lived experiences on the harms of bias in society. Ethical Considerations We evaluate stereotypes in the generated visual depictions of people, based on their identity associated with a geographical location. We acknowledge that this is a complex notion of identity and overlaps with other facets of a person’s identity, such as race, which is not a focus of this work. We also emphasize that this work does not cover all possible visual depictions of stereotypes and thus, it’s methods or dataset should not be used as a standard to deem model generations free of stereotypes. This is an initial step towards a systematic, global-scale evaluations of T2I models for stereotypical depictions, and we hope for future work to build on this to attain more coverage of potential stereotypical and harmful depictions in generations. References Abhipsa Basu, R Venkatesh Babu, and Danish Pruthi. 2023. Inspecting the geographical representativeness of images from text-to-image models. arXiv preprint arXiv:2305.11080. Federico Bianchi, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Myra Cheng, Debora Nozza, Tatsunori Hashimoto, Dan Jurafsky, James Zou, and 12341 Aylin Caliskan. 2023. Easily accessible text-toimage generation amplifies demographic stereotypes at large scale. FAccT ’23, page 1493–1504, New York, NY, USA. 
Association for Computing Machinery. Su Lin Blodgett, Gilsinia Lopez, Alexandra Olteanu, Robert Sim, and Hanna Wallach. 2021. Stereotyping Norwegian salmon: An inventory of pitfalls in fairness benchmark datasets. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1004–1015, Online. Association for Computational Linguistics. Jaemin Cho, Abhay Zala, and Mohit Bansal. 2023. Dall-eval: Probing the reasoning skills and social biases of text-to-image generation models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3043–3054. Kathleen Fraser and Svetlana Kiritchenko. 2024. Examining gender and racial bias in large vision– language models using a novel dataset of parallel images. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 690–713, St. Julian’s, Malta. Association for Computational Linguistics. Noa Garcia, Yusuke Hirota, Yankun Wu, and Yuta Nakashima. 2023. Uncurated image-text datasets: Shedding light on demographic bias. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6957–6966. Sourojit Ghosh and Aylin Caliskan. 2023. ‘person’ == light-skinned, western man, and sexualization of women of color: Stereotypes in stable diffusion. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 6971–6985, Singapore. Association for Computational Linguistics. Laura Gustafson, Chloe Rolland, Nikhila Ravi, Quentin Duval, Aaron Adcock, Cheng-Yang Fu, Melissa Hall, and Candace Ross. 2023. Facet: Fairness in computer vision evaluation benchmark. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 20370–20382. Alex Hanna, Emily Denton, Andrew Smart, and Jamila Smith-Loud. 2020. Towards a critical race methodology in algorithmic fairness. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAT* ’20, page 501–512, New York, NY, USA. Association for Computing Machinery. Susan Hao, Piyush Kumar, Sarah Laszlo, Shivani Poddar, Bhaktipriya Radharapu, and Renee Shelby. 2023. Safety and fairness for content moderation in generative models. arXiv preprint arXiv:2306.06135. Akshita Jha, Aida Mostafazadeh Davani, Chandan K Reddy, Shachi Dave, Vinodkumar Prabhakaran, and Sunipa Dev. 2023. SeeGULL: A stereotype benchmark with broad geo-cultural coverage leveraging generative models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9851– 9870, Toronto, Canada. Association for Computational Linguistics. Sasha Luccioni, Christopher Akiki, Margaret Mitchell, and Yacine Jernite. 2023. Stable bias: Evaluating societal representations in diffusion models. In Thirtyseventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track. Moin Nadeem, Anna Bethke, and Siva Reddy. 2021. Stereoset: Measuring stereotypical bias in pretrained language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5356–5371. Ranjita Naik and Besmira Nushi. 2023. Social biases through the text-to-image generation lens. arXiv preprint arXiv:2304.06034. 
Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel Bowman. 2020. Crows-pairs: A challenge dataset for measuring social biases in masked language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1953–1967. Vinodkumar Prabhakaran, Aida Mostafazadeh Davani, and Mark Díaz. 2021. On releasing annotator-level labels and information in datasets. In Proceedings of the Joint 15th Linguistic Annotation Workshop (LAW) and 3rd Designing Meaning Representations (DMR) Workshop, pages 133–138. Rida Qadri, Renee Shelby, Cynthia L Bennett, and Emily Denton. 2023. Ai’s regimes of representation: A community-centered study of text-to-image models in south asia. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, pages 506–517. Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021. Zero-shot text-to-image generation. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. Highresolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684–10695. Candice Schumann, Gbolahan O Olanubi, Auriel Wright, Ellis Monk Jr, Courtney Heldreth, and Susanna Ricco. 2023. Consensus and subjectivity of skin tone annotation for ml fairness. arXiv preprint arXiv:2305.09073. 12342 Nikil Selvam, Sunipa Dev, Daniel Khashabi, Tushar Khot, and Kai-Wei Chang. 2023. The tail wagging the dog: Dataset construction biases of social bias benchmarks. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1373– 1386, Toronto, Canada. Association for Computational Linguistics. Renee Shelby, Shalaleh Rismani, Kathryn Henne, AJung Moon, Negar Rostamzadeh, Paul Nicholas, N’Mah Yilla-Akbari, Jess Gallegos, Andrew Smart, Emilio Garcia, et al. 2023. Sociotechnical harms of algorithmic systems: Scoping a taxonomy for harm reduction. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, pages 723– 741. Eddie Ungless, Bjorn Ross, and Anne Lauscher. 2023. Stereotypes and smut: The (mis)representation of non-cisgender identities by text-to-image models. In Findings of the Association for Computational Linguistics: ACL 2023, pages 7919–7942, Toronto, Canada. Association for Computational Linguistics. Jialu Wang, Xinyue Gabby Liu, Zonglin Di, Yang Liu, and Xin Eric Wang. 2023. T2iat: Measuring valence and stereotypical biases in text-to-image generation. arXiv preprint arXiv:2306.00905. Kaiyu Yang, Klint Qinami, Li Fei-Fei, Jia Deng, and Olga Russakovsky. 2020. Towards fairer datasets: Filtering and balancing the distribution of the people subtree in the imagenet hierarchy. In Proceedings of the 2020 conference on fairness, accountability, and transparency, pages 547–558. Yanzhe Zhang, Lu Jiang, Greg Turk, and Diyi Yang. 2023. Auditing gender presentation differences in text-to-image models. arXiv preprint arXiv:2302.03675. 7 Appendix 7.1 Annotations All annotations were procured through a partner vendor who handled the recruitment, obtained informed consent, and provided clean, anonymous ratings within each task. Annotators were paid above prevalent market rates, respecting minimum wage laws. They were recruited such that every data point was annotated by at least one non-male identifying person. Annotators were also diverse in region of residence. 
7.1.1 Annotating Visual Attributes The Likert Scale labels for the annotation task were as follows: • Strongly Agree: When the attribute can be explicitly identified within an image, like objects, colors, or similar visual elements (e.g., ‘hat,’ ‘sombrero,’ ‘short’). • Agree: When the attribute can be deduced from visual cues, albeit not explicitly depicted in the image (e.g., ‘fashionable,’ ‘poor,’ ‘impoverished’). • Disagree: When the attribute is challenging to detect visually but may be inferred from visual cues in specific contexts. • Strongly Disagree: When the attributes cannot be inferred visually, either explicitly or through visual cues, such as ‘kind,’ ‘talkative,’ ‘warmhearted,’ and the like. • Unsure: When annotators are uncertain about the attribute’s visual nature. We assessed the visual nature of 1994 attributes present in the SeeGULL dataset. Recognizing the potential limitations associated with majority voting (Prabhakaran et al., 2021), we explored different thresholds to determine the degree of visual nature of an attribute. Figure 6 presents one such consensus plot for the ‘visual nature’ of attributes. X-axis demonstrates the consensus labels, and Yaxis indicates the percentage of attributes for which at least 2 out of 3 annotators reached a consensus. We observe that 20.41% of the attributes had a ‘strongly agree’ consensus rating from at least 2 annotators, suggesting that these attributes were perceived as extremely visual. Some examples of strongly visual attributes include ‘blonde haired’, ‘champagne’, ‘dark skin’, ‘elephant’, and ‘fat’. For 22.16% of the attributes, annotators ‘agreed’ that they were somewhat visual. Attributes like ‘rich, ‘poor’, ‘snake charmer’, and ‘swarthy’ were included in this category. A roughly equal portion, comprising of 20.26% of the attributes were considered non-visual where majority of the annotators ‘disgareed’ about their visual nature. Attributes like ‘professional’, ‘unemployable’, ‘carefree’, ’abusive’, and ‘calm’ were present in this category. Approximately 7.52% of the attributes were deemed extremely non-visual as indicated by the ’strongly disagree’ label. Some examples include ‘good sense of humor’, ‘just’, and ‘unintelligent’; and annotators were unsure regarding the visual nature of 0.2% of the attributes. For our subsequent analysis, we exclude all attributes where any annotator expressed uncertainty, disagreement, or strong disagreement regarding the visual nature. Consequently, our ‘visual attributes’ dataset consists solely of attributes for which all annotators either agreed or strongly agreed upon their visual nature, resulting in a selection of 385 12343 Figure 6: Consensus of the perceived ’visual nature’ of attributes for different values on a Likert Scale ranging from ’Strongly Agree’ to ’Strongly Disagree’. Figure 7: Example of an annotated data point. The annotators do not see the identity group associated with the image but we release this information in the annotated dataset. out of the original 1994 attributes. We categorize these select attributes as ‘visual’ for our study. 7.1.2 Detecting Visual Stereotypes Figure 7 presents an example of an annotated data point. We release the image, the identity group, the annotated attribute, and the coordinates of the attribute in the image along with a unique annotator ID identifying the annotator of the given imageattribute pair. We also release the data card 2 with more details. 
7.2 Additional Results for Stereotypical Tendency of Identity Groups We compute θid for all 135 identity groups. TableS 1 and 2 presents the θid and the likelihood of images being stereotypical over non-stereotypical for all identity groups. 2https://github.com/google-research-datasets/visage Figure 8: Offensiveness of the generated images across different countries. The depth of the color increases with the offensive nature of the images. 7.3 Offensiveness of Visual Stereotypes Figure 8 visualizes the normalized offensiveness score for different identity groups. 7.4 Additional Results for Stereotype Pull • S(d, s) : Similarity between the default (d) representation of the identity group with the stereotypical (s) representation. • S(d, ns): Similarity between the default (d) representation of the images with the nonstereotypical (ns) images. • S(s, ns): Similarity between the stereotyped (s) and the non-stereotyped (ns) images. Tables 3, 4 and 5 present the mean cosine similarity between the sets of images for all identity groups across all stereotyped and non-stereotyped attributes. For 121 out of 135 identity groups, the default representation of an identity group has a higher similarity score with the ‘stereotyped’ images compared to the ‘non-stereotyped’ images indicating an overall ‘pull’ towards generating stereotypical looking images. Moreover, for identity groups from global south the mean similarity score across all three measures is constantly higher indicating an overall lack of diversity in their representations across the three sets. 12344 Identity L(stereo, id) L(random, id) θid Togolese 0.35 0.00 N/A Zimbabwe 0.32 0.00 N/A Malian 0.29 0.00 N/A Guyanese 0.16 0.00 N/A Sierra Leonean 0.15 0.00 N/A Guatemalan 0.14 0.00 N/A Kosovar 0.12 0.00 N/A Iraq 0.11 0.00 N/A Sweden 0.10 0.00 N/A Denmark 0.10 0.00 N/A South Sudanese 0.09 0.00 N/A Gabonese 0.05 0.00 N/A Mauritanian 0.03 0.00 N/A Greece 0.03 0.00 N/A Kuwaiti 0.03 0.00 N/A Jordanian 0.02 0.00 N/A Bhutan 0.02 0.00 N/A Moroccan 0.01 0.00 N/A Ecuadorian 0.01 0.00 N/A Thailand 0.01 0.00 N/A Liberian 0.33 0.00 75.00 Panamanian 0.24 0.00 54.37 Lebanese 0.19 0.00 51.54 Mauritian 0.18 0.00 36.83 Sudanese 0.38 0.01 27.36 Nigerian 0.08 0.00 27.30 Libyan 0.24 0.01 25.54 Egypt 0.13 0.01 24.83 Laos 0.19 0.01 21.56 Myanmar 0.20 0.01 16.00 Kenya 0.17 0.01 13.56 Indian 0.15 0.01 12.08 Djiboutian 0.24 0.02 10.95 Ireland 0.06 0.01 9.59 Norwegian 0.09 0.01 9.10 Chadian 0.24 0.03 8.96 Saudi Arabian 0.17 0.02 8.57 Guinean 0.15 0.02 7.93 Ghanaian 0.28 0.04 7.90 Cambodian 0.22 0.03 7.67 Bangladesh 0.26 0.03 7.38 Britain 0.16 0.02 6.75 Ethiopia 0.17 0.03 6.56 United Kingdom 0.08 0.01 6.25 Nepali 0.32 0.05 5.94 Malaysian 0.15 0.03 5.92 Philippines 0.19 0.03 5.86 Italy 0.02 0.00 5.77 Rwandan 0.13 0.02 5.33 Georgian 0.07 0.01 5.00 Zambian 0.20 0.05 4.24 Table 1: Likelihood of the representation of an identity group being stereotypical based on the ‘stereotypical tendency’ θid for the default representation of the identity group (id). 
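The snippet below sketches one way the scores in Tables 1 and 2 could be reproduced from the two likelihood columns. It assumes, purely for illustration, that θid is the ratio of L(stereo, id) to L(random, id), reported as N/A when the denominator is zero; this reading is inferred from the table layout rather than stated here, so treat it as an assumption.

```python
def stereotypical_tendency(l_stereo: float, l_random: float):
    """Hypothetical reconstruction: ratio of the likelihood of the default
    representation matching stereotypical images over random images.
    Returns None (reported as N/A) when the denominator is zero."""
    if l_random == 0.0:
        return None
    return l_stereo / l_random

# One row taken from Table 1 (Togolese -> N/A) and one generic example.
rows = [("Togolese", 0.35, 0.00),
        ("identity-x", 0.20, 0.05)]   # hypothetical values -> 4.0
for identity, l_stereo, l_random in rows:
    theta = stereotypical_tendency(l_stereo, l_random)
    print(identity, "N/A" if theta is None else round(theta, 2))
```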
Identity L(stereo, id) L(random, id) θid Bolivian 0.14 0.04 3.82 Somalis 0.24 0.07 3.71 Uruguayan 0.17 0.05 3.66 Senegalese 0.07 0.02 3.63 Sri Lanka 0.11 0.03 3.54 Hondurans 0.19 0.06 3.17 New Zealand 0.08 0.03 3.14 North Korea 0.16 0.06 2.87 England 0.06 0.02 2.67 Albanian 0.04 0.02 2.63 China 0.11 0.04 2.54 Nicaraguan 0.28 0.12 2.28 Ugandan 0.18 0.08 2.25 Brazil 0.13 0.06 2.22 Mozambican 0.12 0.05 2.21 Russia 0.07 0.03 2.19 Mexico 0.20 0.09 2.17 Vietnam 0.15 0.08 2.02 Japan 0.12 0.06 2.00 United States 0.08 0.04 1.89 Pakistani 0.17 0.09 1.88 Congolese 0.10 0.05 1.87 Australian 0.06 0.03 1.82 Indonesian 0.18 0.10 1.75 Cameroonian 0.06 0.03 1.71 Afghanistan 0.07 0.04 1.67 Romanian 0.03 0.02 1.57 Palestinian 0.10 0.06 1.53 Tanzanian 0.15 0.10 1.47 Peru 0.14 0.10 1.45 Algerian 0.02 0.01 1.38 South African 0.04 0.03 1.28 Belgium 0.02 0.01 1.25 Germany 0.02 0.01 1.20 Iran 0.12 0.10 1.16 Argentine 0.07 0.07 1.01 Gambian 0.07 0.07 0.96 Singapore 0.03 0.03 0.94 Angolan 0.08 0.08 0.93 Israel 0.04 0.05 0.89 France 0.06 0.07 0.89 Yemen 0.11 0.15 0.74 Syrian 0.13 0.19 0.71 Turkey 0.05 0.07 0.69 Nepal 0.03 0.04 0.62 Equatorial Guinean 0.03 0.07 0.50 Colombia 0.01 0.02 0.45 Venezuela 0.06 0.14 0.39 Eritrean 0.04 0.09 0.39 Barundi 0.00 0.01 0.21 Chilean 0.03 0.15 0.19 Comorans 0.02 0.09 0.18 Spain 0.02 0.14 0.12 Wales 0.00 0.06 0.00 Table 2: Likelihood of the representation of an identity group being stereotypical based on the ‘stereotypical tendency’ θid for the default representation of the identity group (id). 12345 Identity Group S(d, s) S(d, ns) S(s, ns) Mean Sim South Sudanese 0.9824 0.964 0.9609 0.9691 Nigerien 0.9807 0.961 0.9608 0.9675 North Korean 0.9702 0.968 0.956 0.9648 Djiboutian 0.9719 0.9627 0.9596 0.9647 Myanmar 0.9720 0.9603 0.9551 0.9624 Bolivian 0.9688 0.9598 0.9587 0.9624 Mauritanian 0.9736 0.9575 0.9555 0.9622 Malawian 0.9837 0.9514 0.9491 0.9614 Bhutanese 0.9786 0.9546 0.9494 0.9609 Mongolian 0.9719 0.9554 0.9543 0.9605 Table 3: ‘Stereotypical Pull’ observed across generated images of sample identity groups. The default representations of images are more similar to their ‘stereotypical’ representations. Moreover, identity groups from global south consistently have an overall higher mean similarity score S(·) across their default (d), stereotypical (s), and non-stereotypical (ns) attributes indicating a ‘pull’ towards generating stereotypical images. 
Identity Group S(d, s) S(d, ns) S(s, ns) Mean Sim Gabonese 0.9716 0.9583 0.9484 0.9594 Ugandan 0.9721 0.9532 0.9522 0.9592 Malian 0.9672 0.9597 0.9505 0.9591 Rwandan 0.9775 0.9512 0.9468 0.9585 Central African 0.9769 0.9472 0.9474 0.9571 Mozambican 0.9733 0.9471 0.9462 0.9555 Saudi Arabian 0.9747 0.948 0.9427 0.9551 Nepalese 0.971 0.9527 0.9409 0.9549 Ethiopian 0.9632 0.9508 0.9453 0.9531 Nepali 0.9721 0.9498 0.937 0.953 Chadian 0.9581 0.9453 0.9479 0.9504 Guinean 0.9559 0.9429 0.9509 0.9499 Sudanese 0.9659 0.9441 0.9396 0.9499 Equatorial Guinean 0.9718 0.9376 0.9378 0.9491 Senegalese 0.9496 0.952 0.9452 0.9489 Tanzanian 0.9649 0.9433 0.9385 0.9489 Botswana 0.9719 0.9376 0.9368 0.9487 Nicaraguan 0.9673 0.9412 0.9309 0.9465 Cambodian 0.9676 0.9377 0.9331 0.9461 Zambian 0.9645 0.9383 0.9344 0.9457 Liberian 0.961 0.932 0.9417 0.9449 Congolese 0.9592 0.9377 0.9368 0.9446 Angolan 0.9588 0.9344 0.937 0.9434 Togolese 0.9611 0.9338 0.9351 0.9434 Eritrean 0.9595 0.9359 0.9342 0.9432 Sierra Leonean 0.9542 0.9408 0.9277 0.9409 Guatemalan 0.9481 0.9437 0.9274 0.9397 Laos 0.9607 0.9313 0.9268 0.9396 Egyptian 0.9592 0.9366 0.917 0.9376 Yemeni 0.952 0.9361 0.9246 0.9375 Omani 0.9618 0.9329 0.9161 0.9369 Afghans 0.9579 0.9333 0.9189 0.9367 Ecuadorian 0.9633 0.9249 0.9218 0.9367 Guyanese 0.9646 0.9287 0.9164 0.9366 Cameroonian 0.9463 0.9331 0.9286 0.936 Gambian 0.9293 0.9506 0.9229 0.9343 Seychellois 0.9565 0.9258 0.9167 0.933 Zimbabwean 0.9487 0.927 0.9184 0.9314 Paraguayan 0.97 0.9159 0.9069 0.9309 Bangladeshi 0.9503 0.9238 0.9099 0.928 Emiratis 0.9532 0.9326 0.8949 0.9269 Salvadoran 0.9463 0.9132 0.9212 0.9269 Kenyan 0.9387 0.9298 0.9113 0.9266 Ghanaian 0.9471 0.9197 0.9115 0.9261 Sri Lankan 0.9498 0.9247 0.9023 0.9256 Costa Rican 0.9554 0.914 0.8994 0.9229 Chinese 0.9526 0.9121 0.8967 0.9205 Moroccan 0.9156 0.9388 0.9068 0.9204 Iranian 0.9623 0.9054 0.8921 0.92 Panamanian 0.9439 0.9096 0.9002 0.9179 Indian 0.9332 0.9217 0.8986 0.9178 Kuwaiti 0.9558 0.9075 0.8883 0.9172 Ivorians 0.9685 0.8862 0.8955 0.9167 Georgian 0.9456 0.9019 0.9016 0.9164 Indonesian 0.9404 0.9182 0.8901 0.9163 Vietnamese 0.9301 0.9093 0.9033 0.9142 Palestinian 0.9353 0.9067 0.8935 0.9118 Libyan 0.9419 0.8963 0.8968 0.9117 Peruvian 0.9162 0.9207 0.8919 0.9096 Table 4: Stereotypical Pull: The default representations of 121 out of 135 identity groups are more visually similar to their ‘stereotyped representations’. However, the representations of identity groups from global south are more similar across both ‘stereotyped’ and ‘nonstereotyped’ representations. 
12346 Identity Group S(d, s) S(d, ns) S(s, ns) Mean Sim Nigerian 0.9349 0.9021 0.8907 0.9092 Iraqi 0.9329 0.9014 0.8918 0.9087 Venezuelan 0.9363 0.9056 0.8816 0.9078 Thai 0.9269 0.9081 0.8878 0.9076 Jordanian 0.9408 0.8903 0.882 0.9044 South African 0.9312 0.8963 0.8851 0.9042 Colombian 0.9212 0.9038 0.8847 0.9033 Somalis 0.9207 0.8971 0.8893 0.9024 Tunisian 0.9385 0.8837 0.8756 0.8992 Syrian 0.9316 0.8723 0.8891 0.8977 Pakistani 0.9198 0.8991 0.8733 0.8974 Malaysian 0.9489 0.8773 0.8601 0.8954 Singapore 0.9435 0.8726 0.8659 0.894 Philippine 0.9092 0.9085 0.8638 0.8938 Hondurans 0.9505 0.8709 0.8572 0.8929 Greeks 0.9238 0.89 0.8605 0.8914 Polish 0.932 0.8804 0.8553 0.8892 Chilean 0.9241 0.8854 0.8565 0.8887 Taiwanese 0.9041 0.8894 0.8706 0.888 Israeli 0.9172 0.8789 0.8656 0.8872 Uruguayan 0.9057 0.8723 0.8827 0.8869 Beninois 0.9588 0.8649 0.8354 0.8864 Algerian 0.8747 0.8896 0.8942 0.8862 Ukrainian 0.8977 0.8819 0.8756 0.8851 Mauritian 0.8907 0.8815 0.8828 0.885 Barundi 0.9206 0.8625 0.8681 0.8838 Mexican 0.9021 0.8926 0.8558 0.8835 Japanese 0.882 0.9109 0.8552 0.8827 Netherlanders 0.9234 0.86 0.8524 0.8786 Kosovar 0.9252 0.8679 0.8409 0.878 Danish 0.9241 0.8551 0.8536 0.8776 Russian 0.9074 0.8804 0.84 0.8759 Lithuanian 0.8924 0.8589 0.8569 0.8694 Romanian 0.9021 0.8786 0.8217 0.8675 Albanian 0.8668 0.8633 0.8684 0.8662 Canadian 0.9444 0.8129 0.8355 0.8643 Bulgarian 0.87 0.8578 0.8624 0.8634 Argentine 0.8789 0.875 0.8283 0.8607 Brazilian 0.8728 0.8635 0.8415 0.8593 Serbian 0.8896 0.8512 0.8346 0.8585 Portuguese 0.8854 0.8526 0.8371 0.8584 Belgian 0.89 0.8459 0.8316 0.8558 Austrian 0.8875 0.8563 0.8123 0.852 Norwegian 0.8669 0.8534 0.8349 0.8517 English 0.8828 0.8533 0.8172 0.8511 Italian 0.8671 0.8598 0.8226 0.8498 Turks 0.8744 0.8567 0.8132 0.8481 Swiss 0.8826 0.8344 0.8267 0.8479 French 0.8836 0.8515 0.8062 0.8471 Macedonian 0.8432 0.8721 0.8258 0.847 Spanish 0.8591 0.8521 0.8186 0.8433 Comorans 0.8712 0.8433 0.8118 0.8421 Croatian 0.8505 0.8693 0.8021 0.8406 United States 0.8798 0.8404 0.8004 0.8402 Irish 0.8619 0.8425 0.8068 0.837 Lebanese 0.8314 0.854 0.8164 0.834 Andorran 0.845 0.8184 0.8379 0.8338 Australian 0.8694 0.8341 0.7928 0.8321 German 0.8274 0.8373 0.8088 0.8245 New Zealand 0.8601 0.8219 0.7864 0.8228 British 0.8423 0.7948 0.8262 0.8211 Swedes 0.8428 0.8585 0.7604 0.8206 Luxembourg 0.781 0.8102 0.806 0.7991 Welsh 0.7808 0.8475 0.7326 0.787 United Kingdom 0.7981 0.7563 0.778 0.7775 Finns 0.7197 0.829 0.7021 0.7503 Table 5: Stereotypical Pull: The default representations of 121 out of 135 identity groups are more visually similar to their ‘stereotyped representations’. However, the representations of identity groups from global south are more similar across both ‘stereotyped’ and ‘nonstereotyped’ representations. 12347
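As a companion to Tables 3, 4 and 5, the following sketch shows how the three scores S(d, s), S(d, ns), and S(s, ns) could be computed as mean pairwise cosine similarities between sets of image embeddings for a single identity group. The embedding source and array shapes are placeholders; only the aggregation and the ‘pull’ comparison follow the description in Section 7.4.

```python
import numpy as np

def mean_cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Mean pairwise cosine similarity between two sets of image embeddings,
    given as arrays of shape (n_images, dim)."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return float((a @ b.T).mean())

# Placeholder embeddings for one identity group: default prompt images (d),
# images generated with stereotyped attributes (s), and images generated with
# non-stereotyped attributes (ns).
rng = np.random.default_rng(0)
d, s, ns = (rng.normal(size=(16, 512)) for _ in range(3))

scores = {
    "S(d, s)":  mean_cosine_similarity(d, s),
    "S(d, ns)": mean_cosine_similarity(d, ns),
    "S(s, ns)": mean_cosine_similarity(s, ns),
}
# A 'stereotypical pull' is indicated when S(d, s) > S(d, ns).
print(scores, scores["S(d, s)"] > scores["S(d, ns)"])
```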
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12348–12364, August 11-16, 2024 ©2024 Association for Computational Linguistics

Transferable and Efficient Non-Factual Content Detection via Probe Training with Offline Consistency Checking
Xiaokang Zhang1†‡, Zijun Yao2†, Jing Zhang1*, Kaifeng Yun2, Jifan Yu2, Juanzi Li2*, Jie Tang2
1School of Information, Renmin University of China, Beijing, China, 2Department of Computer Science and Technology, Tsinghua University, Beijing, China
{zhang2718,zhang-jing}@ruc.edu.cn, {yaozj20, ykf21}@mails.tsinghua.edu.cn, {yujifan,lijuanzi,jietang}@tsinghua.edu.cn
†Equal Contribution. ‡Work was done when interned at Zhipu AI. *Corresponding author.

Abstract
Detecting non-factual content is a longstanding goal to increase the trustworthiness of large language model (LLM) generations. Current factuality probes, trained using human-annotated labels, exhibit limited transferability to out-of-distribution content, while online self-consistency checking imposes an extensive computational burden due to the necessity of generating multiple outputs. This paper proposes PINOSE, which trains a probing model on offline self-consistency checking results, thereby circumventing the need for human-annotated data and achieving transferability across diverse data distributions. As the consistency check process is offline, PINOSE avoids the computational burden that online consistency verification incurs by generating multiple responses. Additionally, it examines various aspects of internal states prior to response decoding, contributing to more effective detection of factual inaccuracies. Experiment results on both factuality detection and question answering benchmarks show that PINOSE outperforms existing factuality detection methods. Our code and datasets are publicly available in this GitHub repository.

1 Introduction
Large language models (LLMs), after pre-training on massive corpora (Brown et al., 2020; Touvron et al., 2023a; Jiang et al., 2023), show a surprising ability to generate knowledgeable content (Sun et al., 2023; Yu et al., 2022). Although this ability facilitates a wide range of applications, such as question answering (QA) (Abdallah et al., 2023; Liu et al., 2022; Li et al., 2022) and information retrieval (Mao et al., 2021; Ma et al., 2023), the propensity of LLMs to occasionally produce non-factual knowledge (Lin et al., 2022; Wang et al., 2023a) potentially hinders the practical utilization of generated content. Thus, it is necessary to detect whether LLMs generate non-factual content.
Previous studies offer evidence that the internal representation vectors in LLMs determine whether they produce factual answers to the input question (Azaria and Mitchell, 2023a; Kadavath et al., 2022; Zou et al., 2023). Specifically, the entailed factual behavior is extracted from the feed-forward layer activations of tokens before the generated content using linear probes (Alain and Bengio, 2017; Belinkov, 2022). However, their construction relies on the labor-intensive process of annotating natural language questions, as well as labeling LLMs’ outputs with factuality annotations, a factor that limits their applicability to questions and responses with unseen distributions. To avoid the annotation process, the most recent studies detect non-factual content via online self-consistency checking (Wang et al., 2022).
They assume that if LLMs give contradictory responses to the same prompt, the model is more likely to be hallucinating when it gives that answer (Elazar et al., 2021). In this way, detecting non-factual content is reduced to mutual-entailment analysis among multiple generations, which is usually realized with natural language inference (NLI) models (Kuhn et al., 2023; Manakul et al., 2023a) or heuristic comparison of hidden representation similarity (Anonymous, 2023). However, self-consistency checking introduces extensive computational overhead to sample multiple responses. In addition, due to the lack of a training process, these methods are less robust than previous factuality probes.
Given these limitations of existing methods, we propose PINOSE, a method to predict non-factual responses from LLMs. The main idea of PINOSE is to construct a probing model that learns from offline self-consistency checking. It aims to present two core advantages over existing methods:
Transferability. Compared with existing probing methods, PINOSE eliminates human annotation for training data. This is achieved with bootstrapped natural language questions and pseudo factuality labels generated through an offline consistency checking mechanism. Moreover, as PINOSE does not rely on specific training data, it transfers effortlessly across different data distributions.
Efficiency and Effectiveness. Compared with online consistency checking, PINOSE avoids the computational burden associated with multiple generations during inference, thus enhancing time efficiency. Additionally, by analyzing the continuous internal representations of LLMs rather than discrete tokens in the response, PINOSE gains access to a broader spectrum of information, enhancing its prediction effectiveness.
We conduct comprehensive experiments on established factuality detection benchmarks and variations of QA datasets. Our results reveal several key findings: (1) PINOSE outperforms supervised probing-based baselines by 7.7–14.6 AUC across QA datasets, despite being trained without annotated labels. (2) Moreover, PINOSE achieves significant performance improvements (3–7 AUC) compared to unsupervised consistency checking baselines, while also demonstrating superior time efficiency. (3) Additionally, the dataset generated via offline self-consistency checking shows promise for transferring to probe various LLMs.

2 Preliminaries
This study concentrates on identifying non-factual content using decoder-only LLMs. It begins by formally defining the task and then elaborates on decoder-only LLMs and the construction of probes for these models. Additionally, it discusses the distinctions between online and offline self-consistency checking.
Task Definition. Formally, given a question q = ⟨q1, q2, . . . , qn⟩ composed of n tokens, and its corresponding response r = ⟨r1, r2, . . . , rm⟩ consisting of m tokens, non-factual content detection aims to assign a binary judgment f ∈ {True, False}, determining the factual correctness of r. For example, given the question “What is the capital city of China?”, “Beijing” serves as a response with a True judgement, while “Shanghai” is classified as False. Without loss of generality, we allow q to be null (i.e., q = ∅), in which case the task evaluates the factuality of the standalone assertion. To detect whether LLMs generate non-factual content, in our setting, the response r is usually sampled from an LLM.
In this case, we expect the model to assess the factuality of content generated by itself, without the need for another LLM.
Decoder-only LLMs. Decoder-only LLMs comprise a stack of Transformer decoder layers (Vaswani et al., 2017). After embedding tokens into the hidden representation H(0), each layer manipulates the hidden representation of the previous layer as follows1:
H(l) = FFN(Attn(H(l−1))),
where Attn(·) is the attention mechanism and FFN(·) is the feed-forward network composed of two consecutive affine transformations and activation functions. Thus, the intermediate representations of decoder-only LLMs are usually extracted from the output of the FFN(·) operation. In the remainder of this paper, we denote the hidden representation of the i-th token extracted from layer l as H(l)[i].
Language Model Probing. A probing method extracts implicit information from the intermediate representation. It is usually implemented as a simple classification model that maps the hidden representation of a certain token to discrete classes. The probe in PINOSE is a two-layer feed-forward network with binary classification outputs:
Probe(H(l)[i]) = σ2(W2 σ1(W1 H(l)[i] + b1) + b2),   (1)
where σ1 is the Sigmoid function, σ2 is any nonlinear function, and W, b are trainable parameters. Probe(H(l)[i]) is the probability of True.
Consistency Checking. Consistency checking requires LLMs to generate multiple responses to the same question and utilizes their semantic consistency to judge whether the generations are correct. Previous methods for non-factual content detection perform online self-consistency checking, where LLMs need to generate extensive responses to a single question in order to obtain factuality labels. Our method falls into the offline consistency checking category, where consistency checking is solely used to generate labels for training the probe. During online checking, LLMs only need to produce a single response and obtain the factuality label from the probe.
1We omit residual connection (He et al., 2016) and layer normalization (Ba et al., 2016) for simplicity.

3 Methodology
The construction of PINOSE involves three main stages: (1) In the data preparation stage, we bootstrap natural language questions and generate multiple responses, which together serve as model inputs; (2) In the offline consistency checking stage, we employ a peer review mechanism to generate a pseudo factuality label for each response; (3) In the probe construction stage, these pseudo factuality labels are used to train a language model probe. Figure 1 illustrates the overall process.

3.1 Stage 1: Data Preparation
In alignment with the requirements of the non-factual content detection task, the supporting data consists of three elements: natural language questions q, their corresponding responses r, and factuality labels f. This stage concentrates on generating large-scale data containing the initial two elements (i.e., q and r).
Question Bootstrapping. PINOSE leverages natural language questions to prompt LLMs to generate responses for consistency checking. However, natural language questions are not always available across all domains. Furthermore, both the diversity and quantity of questions can significantly impact the quality of the prepared data. Therefore, we aim to enable LLMs to bootstrap questions with minimal human involvement. Fortunately, as Honovich et al. (2023) point out, high-performing language models show significant capacity in question generation.
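To make Equation 1 concrete, below is a minimal PyTorch sketch of such a probe. The hidden size of 4096 matches Llama2-7B, while the intermediate width and the choice of σ2 as a sigmoid are illustrative assumptions; the paper only requires a two-layer feed-forward network whose output is the probability of True.

```python
import torch
import torch.nn as nn

class Probe(nn.Module):
    """Two-layer feed-forward probe over a frozen LLM hidden state H^(l)[i]
    (Equation 1). Sizes are illustrative; 4096 matches Llama2-7B."""
    def __init__(self, hidden_size: int = 4096, probe_size: int = 256):
        super().__init__()
        self.w1 = nn.Linear(hidden_size, probe_size)   # W1, b1
        self.sigma1 = nn.Sigmoid()                     # sigma_1
        self.w2 = nn.Linear(probe_size, 1)             # W2, b2
        self.sigma2 = nn.Sigmoid()                     # sigma_2 -> P(True)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.sigma2(self.w2(self.sigma1(self.w1(h)))).squeeze(-1)

probe = Probe()
h = torch.randn(8, 4096)              # a batch of hidden states H^(l)[i]
p_true = probe(h)                     # probability that each response is factual
loss = nn.functional.binary_cross_entropy(p_true, torch.ones(8))  # toy labels
```

During training, the binary cross-entropy above would be computed against the pseudo factuality labels produced in Stage 2, while the LLM itself stays frozen.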
Inspired by these findings on question generation, we manually annotate a set of seed questions and employ them as demonstrations for LLMs to generate a large volume of questions via in-context learning (ICL) (Brown et al., 2020). To enhance diversity in generation, we broaden the scope of seed questions by incorporating the generated ones and sample diverse combinations from the seed questions for subsequent generation. The detailed prompt for question generation is provided in Figure 4 of Appendix A.5.
Diverse Response Generation. We use the previously generated questions as input for LLMs to generate multiple responses for subsequent consistency checking. We design two strategies to encourage the diversity of multiple responses to the same input question. (1) From the perspective of decoding, we adjust the decoding strategy by applying a vanilla sampling method with a relatively high sampling temperature (t = 1). (2) From the perspective of model input, we instruct LLMs to answer a question using a variety of prompts (as shown in Figure 5 in Appendix A.5). The outcome of this stage is a dataset containing questions paired with multiple responses, designated as {(q, {ri})}, where the number of responses k = |{ri}| serves as a hyperparameter that determines the quantity of responses per question for the subsequent consistency check.
Figure 1: The overall architecture of PINOSE.

3.2 Stage 2: Offline Consistency Checking
We engage LLMs in the offline consistency check process via a peer review mechanism. First, we gather reviews by asking LLMs to determine for each response whether it is consistent with the other responses. Then, we enrich reviews by sampling multiple consistency judgements while varying model inputs. Finally, we integrate reviews to form the pseudo-factuality label for each response and filter out low-quality responses.
Review Gathering. Formally, consistency reviewing involves asking an LLM to evaluate whether the response ri to the question q is semantically consistent with the other responses (i.e., rj, j ≠ i). If ri has an equivalent meaning to the other responses, it is considered factual. To ensure unambiguous judgment, we require the LLM to make pairwise comparisons with the other k − 1 responses. For each comparison, it must output one of three labels: “Consistent”, “Neutral”, or “Non-Consistent”. To achieve this, we specify the output format with in-context demonstrations and prompt instructions (as shown in Figure 6 in Appendix A.5).
Review Enrichment. To enhance the diversity of reviews, we introduce variability in the input provided to the LLM during consistency assessments. Recognizing the significant impact of demonstrations on LLM judgments in ICL (Wang et al., 2023b), we utilize a range of diverse demonstration combinations for ICL to elicit varied reviews from the LLM for each pairwise comparison. Diverse demonstrations facilitate the collection of multiple reviews, each potentially providing a unique perspective. In total, we gather N rounds of reviews for each pairwise comparison, where N is a hyperparameter.
Integration and Filtering. We integrate the N reviews for each pairwise comparison, and subsequently integrate the k − 1 pairwise comparisons for each response, through the same majority voting mechanism. The voting works as follows.
We first consider a Neutral consistency judgement as an abstention from voting. Then, to guarantee the quality of the final dataset, we exclude controversial judgements where no single label (Consistent, Neutral, Non-Consistent) receives over 50% of the votes. This step ensures that only the most widely agreed-upon judgements are retained for analysis. Finally, we assign the factuality label True (False) to responses that are predominantly considered consistent (non-consistent) with the others. This stage outputs the dataset with the full set of elements for consistency checking, i.e., {(q, {(ri, fi)})}.

3.3 Stage 3: Probe Construction
PINOSE predicts the factuality of responses via a probing model, as defined in Equation 1. To be more specific, PINOSE feeds the response together with the question, both formatted according to the template outlined in Figure 7 in Appendix A.5, into the LLM for detection. Subsequently, the hidden representation of the last token of the response at the middle layer of the LLM is employed as the input for the probing model. We train the probing model to maximize the probability of the factuality label while freezing all the parameters of the LLM. Formally, the construction process of the probe optimizes the following cross-entropy loss:
loss = − Σ(q, rj, fj) log Probe(H(l)[i]) · 1(fj = True),
where 1(·) is the indicator function, i is the index of the last input token, and l represents half the layer number of the LLM to be detected.

3.4 Discussion
We discuss the rationality of PINOSE and the involvement of LLMs in implementing PINOSE.
The reason why PINOSE successfully detects non-factual responses comes from both the model and the data perspective. (1) Model property. LLMs are well-calibrated after massive pre-training (Kadavath et al., 2022; Zhu et al., 2023). This indicates that LLMs tend to assign lower probability to non-factual responses, while preserving relatively high probability for factual generations. This calibration property guarantees the feasibility of distilling a factuality detection dataset from offline consistency checking. It also suggests that the internal states of LLMs track whether they are producing factual content, which PINOSE tries to uncover with a probing model. (2) Data quality. Offline consistency checking gathers diverse instances of inconsistency between responses from LLMs, potentially enhancing the quality of training data for the probing model. Consequently, it enables the model to address a broader range of inconsistency scenarios compared to online consistency checking. Moreover, as the data collection process is fully automated, the dataset can be significantly larger than existing training data for factuality probes. The feasibility of this principle is also widely verified in distant supervision (Quirk and Poon, 2017).
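The integration and filtering step of Section 3.2 can be summarized in a short sketch. The data layout is hypothetical, and the handling of edge cases is one possible reading of the procedure: Neutral reviews abstain, comparisons without a label above 50% are discarded, and a response receives a True/False label only when a majority of its k − 1 comparisons agree.

```python
from collections import Counter

def integrate_reviews(reviews):
    """Collapse N reviews of one pairwise comparison into a single judgement.
    Returns 'Consistent', 'Non-Consistent', or None if no label wins >50%
    or the winning label is Neutral (abstention)."""
    label, count = Counter(reviews).most_common(1)[0]
    if count <= len(reviews) / 2 or label == "Neutral":
        return None
    return label

def pseudo_factuality_label(comparisons):
    """comparisons: list of k-1 review lists, one per other response.
    Returns True/False, or None when the response should be filtered out."""
    votes = [integrate_reviews(r) for r in comparisons]
    votes = [v for v in votes if v is not None]     # drop abstentions
    if not votes:
        return None
    label, count = Counter(votes).most_common(1)[0]
    if count <= len(votes) / 2:
        return None                                  # no clear majority: filter out
    return label == "Consistent"

# Example: one response compared against k-1 = 2 others, N = 3 reviews each.
print(pseudo_factuality_label([
    ["Consistent", "Consistent", "Neutral"],
    ["Consistent", "Non-Consistent", "Consistent"],
]))  # -> True
```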
To implement PINOSE, the LLM is invoked multiple times during the construction process, including data preparation, peer reviewing in consistency checking, and finally non-factual detection. For a coherent implementation, we employ the same LLM for detecting the factuality of responses as the one used for generation and consistency checking. This implementation strategy aligns with our setting, where no third-party LLM is available, and it also enhances the transferability of our method.

4 Experiment
We conduct experiments to examine the performance of PINOSE by comparing it to baseline methods for factuality detection. Additionally, we assess its transferability and efficiency.

4.1 Experiment Setup
4.1.1 Datasets
The datasets include both a factuality detection benchmark and variations of QA datasets. We introduce the purpose of incorporating each dataset and its data specifications. Detailed statistics are shown in Table 1.
True-False NQ TriviaQA WebQ
#Train 5,000 N/A N/A N/A
#Test 1,000 1,000 1,000 1,000
%True 40.5 46.6 49.5 58.3
Table 1: Data statistics. #Train and #Test are the number of instances in the training data and test set, respectively. %True is the ratio of True labels in the test set. N/A indicates the absence of factuality labels associated with question responses, although the questions in these datasets are present.
Benchmark. We follow previous research in using the True-False benchmark (Azaria and Mitchell, 2023b). True-False provides statements generated by LLMs along with corresponding factuality labels examined by humans. It does not include questions for each statement (i.e., q = ∅). True-False comes with both a training set and a test set.
Variation of QA. Given that probing statements, such as those in the True-False dataset, is less practical compared to examining responses to questions from LLMs, for practical evaluation we establish test sets based on existing QA datasets: Natural Questions (NQ) (Kwiatkowski et al., 2019), TriviaQA (Joshi et al., 2017), and WebQ (Berant et al., 2013). In particular, we sample 1,000 questions from each dataset and employ Llama2-7B (Touvron et al., 2023b) to generate responses accordingly. The factuality label for each response is annotated by human annotators by comparing it with the original ground truth of each question.

4.1.2 Baselines
We compare PINOSE against probing-based and consistency-checking-based methods. Additionally, to ensure a comprehensive comparison, we implement heuristic confidence-based methods as baselines.
• Probing Based: SAPLMA (Azaria and Mitchell, 2023a) utilizes a feed-forward neural network for factuality detection, trained on the True-False training data. RepE (Zou et al., 2023) conducts principal component analysis (PCA) on the internal representations of the True-False training data. It selects a factual direction vector using the factuality labels. During testing, RepE computes the dot product between the internal representation of the given response and the factual direction vector.
• Consistency Checking Based: We compare against SelfCheckGPT (Manakul et al., 2023b), which performs factuality detection via online self-consistency checking. We implement two variants. SelfCheckGPT-NLI (SCGPT-NLI) uses a BERT-based NLI model (Williams et al., 2018) for consistency checking, while SelfCheckGPT-Prompt (SCGPT-PT) elicits the LLM itself to evaluate the consistency between two responses via a prompt. The prompt used for SCGPT-PT is shown in Figure 8 in the Appendix.
• Confidence Based: We also utilize model confidence as an indicator for factuality detection. Perplexity-AVE (PPL-AVE) and PerplexityMax (PPL-MAX) (Kadavath et al., 2022; Azaria and Mitchell, 2023a; Zou et al., 2023) quantify the average and maximum token-level probabilities of statements within each test set generated by the evaluated LLM. It-is-True (Azaria and Mitchell, 2023a) compares the probabilities between sentences “It is true that q||r.” and “It is false that q||r.”, where || denotes concatenation. It is worth noting that probing based baselines rely on training data. We thus implement them using the training dataset from True-False. Besides, as SCGPT-PT and SCGPT-NLI needs input questions to generate multiple responses, it is infeasible to test on True-False, where we mark their results as “N/A” in Table 2. 4.1.3 Evaluation Metrics We follow conventions (Azaria and Mitchell, 2023b) in factuality detection, employing the area 12352 True-False NQ TriviaQA WebQ AUC ACC AUC ACC AUC ACC AUC ACC RepE 63.3 61.9 62.6 60.8 67.5 64.3 65.5 63.5 SAPLMA 90.2 83.7 69.7 67.8 69.3 64.5 75.6 69.5 SCGPT-PT N/A N/A 72.3 70.3 76.4 69.9 72.5 72.5 SCGPT-NLI N/A N/A 77.4 71.0 76.9 72.1 77.4 72.3 PPL-AVE 67.1 64.1 61.8 59.2 61.7 58.5 63.7 63.3 PPL-MAX 60.7 61.7 54.2 55.0 57.6 57.0 56.4 60.0 It-is-True 54.8 60.1 63.0 59.5 61.7 59.1 75.6 71.0 PINOSE 86.6 82.1 80.4 72.5 83.9 73.8 83.3 76.0 Table 2: Overall performance over four test sets. under the receiver operating characteristic curve (AUC) and accuracy (ACC) as evaluation metrics. 4.1.4 Implementation Details To implement PINOSE, we uniformly use Llama27B for data preparation, consistency checking, and factuality detection. For hyperparameters, we set the number of sampled responses N to 9 and the round of peer review k to 7. For fair comparison, we also allow SelfCheckGPT to generate N = 9 responses for consistency checking. The training dataset for PINOSE consists of 20, 000 constructed triplets {(q, ri, fi)}. The threshold for calculating accuracy is determined by selecting the value that yields the highest accuracy among 100 validation instances partitioned from the test sets. 4.2 Main Results Table 2 presents the AUC and ACC scores for all the compared methods across four test sets. Meanwhile, Table 3 provides insights into the average detection time required by SCGPT-NLI and our PINOSE for each instance. In general, PINOSE outperforms probing-based methods across all QA variation dataset, despite being trained without annotated labels. Additionally, PINOSE exhibits superior performance compared to consistency checking methods and is also more efficient. Some detailed findings include: Leveraging factuality labels substantially improves factuality detection accuracy. This is evidenced by PINOSE ’s superior performance trained on factuality labels, compared to confidence-based methods such as PPL and It-is-True. Limitations of annotated labels on model transferability. Despite being trained using annotated labels, probing-based methods like RepE and SAPLMA lag behind a large margin compared to PINOSE on the three QA variation test sets. NQ TriviaQA WebQ SCGPT-NLI 2.530 2.210 2.050 PINOSE 0.024 0.023 0.024 Table 3: Average time required for detecting each instance (in seconds). This disparity arises because the two baselines are trained on True-False’s training data, which consists of statements rather than question and responses. 
This difference in input distribution significantly limits the transferability of these models to out-of-distribution datasets. In contrast, PINOSE is trained on a diverse range of questions, leading to superior performance across the QA datasets. The slight lag observed in True-False from SAPLMA is also attributed to the model’s training on questions. However, probing the factuality of responses to questions is more practical than evaluating statements given by True-False. Therefore, training on datasets guided questions is reasonable. Self-consistency correlates well with factuality. Despite SCGPT lacking supervision from factuality labels, it surpasses supervised probingbased baselines, suggesting a strong correlation between its self-consistency principle and factuality. Furthermore, PINOSE, also adhering to the selfconsistency principle, outperforms SCGPT. This is due to PINOSE being exposed to numerous instances with diverse inconsistencies between responses, unlike SCGPT, which focuses solely on responses related to the given question. Moreover, PINOSE evaluates the consistency of internal representations rather than discrete output responses like SCGPT, allowing it access to a wider range of information, thereby enhancing its predictive accuracy. PINOSE’s detection time is significantly shorter than SCGPT, as shown in Table 3. This is because PINOSE relies on offline consistency checking, incorporating consistency characteristics into internal representations during training. As a result, its online inference depends solely on internal representations, eliminating the need for multiple online inferences like those performed by SCGPT. 4.3 Cross-model Evaluation To implement PINOSE, LLMs are invoked multiple times during the construction process, including data preparation, peer reviewing in consistency 12353 # Data Preparation Consistency Checking Factuality detection AUC ACC 1 Llama2-7B Llama2-7B Llama2-7B 80.4 72.5 2 Llama2-7B Llama2-7B Llama2-13B 81.1 73.2 3 Llama2-7B Llama2-7B Mistral-7B 81.3 73.1 4 Llama2-7B Llama2-13B Llama2-13B 81.7 73.5 5 Llama2-7B Mistral-7B Mistral-7B 81.4 73.3 Table 4: Cross-model evaluation performance on NQ. We explore different combinations of LLMs across each of the three stages to evaluate their effectiveness. checking, and finally non-factual detection. By default, we employ the same LLM for all stages, leveraging an LLM’s calibration property. Additionally, we explore whether training data generated by one LLM can effectively train a probe to detect the factuality of content generated by other LLMs. To study this, we use Llama2-7B for data preparation but vary the detection target to Llama2-13B and Mistral-7B. We further switch the LLM for consistency checking to Llama2-13B and Mistral-7B, consistent with the model to be detected. It’s important to note that for a given LLM to be detected, the probe needs to be aligned with it, specifically in terms of probe’s input, which consists of the internal representation of the response along with the question, that must be generated by the LLM. The crucial findings, as presented in Table 4, include: More powerful LLMs brings better detection performance. Comparing group 2 and 3 with group 1, where the training data remains consistent (created by Llama2-7B), probes built based on more powerful LLMs demonstrate higher performance, attributed to the enhanced representational capacity of these models. Generated data facilitates probing across various LLMs. 
Switching the LLM for consistency checking to match the LLM being detected results in comparable performances between groups 4 and 2, as well as between groups 5 and 3, respectively. This indicates that we can generate the training data, comprising (question, response, factuality label) triplets, once, regardless of the LLMs being probed, and utilize them uniformly to train probes for any LLM. 4.4 Ablation Studies on Data Preparation We examine the impact of question distribution in the data preparation stage with two variants: • PINOSE with self questions: Assumes a scenario where the training dataset consists of questions from the same distribution as the test questions. We utilize questions from the training data that belong to the same dataset as the test set. • PINOSE with external questions: Considers a scenario where questions from the same distribution as the test questions are unavailable. We utilize questions from the training data that belong to a different dataset from the test set. We evaluate these variants across three QA test sets, maintaining a consistent number of training questions (1, 000 per set) for the first variant. For the second variant, we collect 5, 000 training questions from the remaining two datasets for each target test set. Additionally, we vary the number of generated questions within the range of [1K, 2K, 3K, 4K, 5K, 10K] to assess its impact. We maintain fairness in our evaluation approach by applying an identical labeling methodology, specifically offline consistency checking, to each dataset. Figure 2(a)(b)(c) presents the performance of these variants on the three QA test sets. We find: Training on questions of the same distribution as the test set yields significantly better results than a different distribution. Despite having more external questions (5, 000 vs 1, 000), the second variant still lags behind the first. Training on generated questions could enhance transferability of the probe. Across the three test sets, training with fewer than 5, 000 generated questions (approximately 3, 000+, 2, 000+, 1, 000+ questions on NQ, TriviaQA, and WebQA, respectively) can achieve performance comparable to using 5, 000 external questions. Additionally, training with approximately 6, 000, 10, 000, and 3, 000 generated questions on these three datasets respectively could outperform training using 1, 000 self-questions. These results indicate that generated questions offer greater diversity, facilitating the transferability of the probe across different test sets, despite the diverse distributions observed among the three test sets. 4.5 Ablation Studies on Consistency Checking We investigate the impact of two hyperparameters, k (the number of responses) and N (the number of review rounds per response), in the consistency checking stage. Figure 2(d) displays the detection performance on NQ with varying values of k from 1 to 9 with interval 2 and different values of N (1, 3, 5, 7). 12354 (a) NQ (b) TriviaQA (c) WebQA (d) N&k on NQ Figure 2: Effects of question generation and the number of reviews and responses. We assess three question distributions for factual detection training data: “self questions” (1, 000 questions from the training data within the same question distribution.), “external questions” (5, 000 questions from a different dataset), and our proposed approach, “generated questions” (without relying on available questions). 
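Section 4.6 below ablates which internal representation feeds the probe. As a reference point, here is a sketch of how the default feature, the middle-layer hidden state of the last input token, could be extracted with a HuggingFace-style interface; the checkpoint name and prompt template are placeholders rather than the exact ones used in the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"   # placeholder checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def probe_feature(question: str, response: str, layer: int = 16) -> torch.Tensor:
    """Middle-layer hidden state of the last token of (question, response)."""
    text = f"Question: {question}\nAnswer: {response}"   # simplified template
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, output_hidden_states=True)
    # hidden_states[0] is the embedding output; index 16 is the middle of 32 layers.
    return outputs.hidden_states[layer][0, -1]            # shape: (hidden_size,)

feature = probe_feature("What is the capital city of China?", "Beijing")
```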
Subfigures (a)-(c) demonstrate the effects of different question distributions on various test sets, while subfigure (d) presents the effects of various k (the number of responses) and N (the round of reviews per response) on NQ. Model AUC ACC first-layer 52.0 53.3 middle-layer 80.4 72.5 last-layer 76.4 70.3 average token 77.1 70.9 last token 80.4 72.5 Table 5: Evaluating probe construction using internal representations from different layers, either averaged or using the last token. It’s worth noting that N = 1 corresponds to the review strategy in SCGPT. Remarkably, the best performance is achieved with N = 7, indicating that multiple inferences from an LLM, each guided by different demonstrations acting as instructions, contribute to more robust and confident review outcomes akin to opinions from multiple reviewers. The figure also illustrates that the performance exhibits a smooth increase as more responses are used, also suggesting that multiple responses could result in more confident consistency checking. 4.6 Ablation Studies on Probe Construction We investigate feature selection at the probe construction stage, exploring the use of internal representations from the last (32nd), middle(16th), and first layers of Llama2-7B. Additionally, we experiment with averaging representations of all tokens within a layer or using only the last token. The default configuration includes the middle-layer representation and the last token in a layer. The results, as depicted in Table 5, indicate that the middlelayer representation and the last token are optimal choices within our setting. 5 Related work Factuality detection for LLM generated content mainly falls into two categories: consistency-based and probing-based. Consistency-based methods detect non-factual generations by comparing model generated content with other information. Among these methods, the most widely adopted assumption is that, LLMs usually fail to give consistent responses to the same prompt when generating multiple times (Elazar et al., 2021; Mündler et al., 2023; Pacchiardi et al., 2023), thus motivating a series method to detect non-factual content (Manakul et al., 2023b; Cohen et al., 2023; Azaria and Mitchell, 2023a) or reduce non-factual generations (Dhuliawala et al., 2023; Kuhn et al., 2023). Another thread of works evaluate model self-consistency via model’s confidence to the generated content (Kadavath et al., 2022; Azaria and Mitchell, 2023a; Zou et al., 2023). Except for self-consistency checking, there are also attempts that use consistency between model generated content and external information as factuality indicator (Wang et al., 2023c; Gao et al., 2023; Chern et al., 2023). Probing-based methods possesses the belief that the hidden representation entails certain property of generated content and can be extracted via a light weight model (Alain and Bengio, 2017; Gurnee and Tegmark, 2023). Probing whether LLMs are producing factual content is proved to be feasible (Kadavath et al., 2022), thus motivating researchers to develop more accurate probes (Azaria and Mitchell, 2023a; Zou et al., 2023; Chen et al., 2023). Comparing with these works, which rely on annotated training data, PINOSE provides a method that dis12355 till consistency patterns from LLMs into a probe. 6 Conclusion This paper presents PINOSE, a probing method for non-factual content detection that learns from offline consistency checking. 
PINOSE achieves good transferability among different distributed datasets as its does not rely on manually annotated data. It also avoids the computational burden for online consistency checking. In the future, PINOSE potentially paves way to build more faithful LLMs. Limitations The limitations of this work are as follows: (1) Data Preparation: PINOSE employs an offline consistency checking method to automatically generate factuality labels for training a probe model. Although the probe model efficiently infers the factuality label of a response from an LLM in a single pass, the offline data preparation stage requires a large amount of data. This involves multiple inferences of the LLM for generating questions, responses, and reviews, resulting in high offline construction costs. Fortunately, as online usage increases, the amortized cost of offline construction decreases. (2) Open-sourced LLMs: PINOSE is limited to detecting factuality errors in opensourced LLMs because it requires the internal representation of the input response along with the question as features input to the probing model for detection. (3) Factuality Error: PINOSE is constrained to detecting the factuality errosr in responses to questions or statements. Other aspects of errors, such as logical error, require further investigation. Ethical Considerations We discuss the ethical considerations and broader impact of this work in this section: (1) Intellectual Property: The datasets employed in this study, comprising True-False, NQ, TriviaQA, and WebQ, are widely accessible and established resources designed to facilitate extensive research in artificial intelligence and natural language processing (NLP). We are confident that these resources have been adequately de-identified and anonymized. (2) Data Annotation: We recruit 10 annotators from commercial data production companies to label factual accuracy for three QA test sets, seed questions for question generation, and consistent/inconsistent response pairs for consistency checking. Annotators are compensated fairly based on agreed-upon working hours and rates. Prior to annotation, annotators are briefed on the data’s processing and usage, which is formalized in the data production contract. (3) Intended Use: The proposed PINOSE is utilized for detecting non-factual content generated by LLMs. (4) Misuse Risks: There is a risk that PINOSE could be exploited for adversarial learning, potentially enabling LLMs to generate more implicit non-factual content that is more challenging to detect. (5) Potential Risk Control: The trained PINOSE is made publicly available to the open-source community, which may help mitigate the risks associated with its potential misuse for adversarial learning. Acknowledgments This work is supported by the National Key Research & Develop Plan (2023YFF0725100), the National Natural Science Foundation of China (62322214), Zhipu Project Fund, and a grant from the Institute for Guo Qiang, Tsinghua University (2019GQB0003). References Abdelrahman Abdallah, Bhawna Piryani, and Adam Jatowt. 2023. Exploring the state of the art in legal qa systems. Journal of Big Data, 10(1). Guillaume Alain and Yoshua Bengio. 2017. Understanding intermediate layers using linear classifier probes. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Workshop Track Proceedings. Anonymous. 2023. INSIDE: LLMs’ internal states retain the power of hallucination detection. 
In Submitted to The Twelfth International Conference on Learning Representations. Under review. Amos Azaria and Tom Mitchell. 2023a. The internal state of an LLM knows when it’s lying. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 967–976. Amos Azaria and Tom Mitchell. 2023b. The internal state of an LLM knows when it’s lying. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 967–976. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. 2016. Layer normalization. ArXiv preprint, abs/1607.06450. Yonatan Belinkov. 2022. Probing classifiers: Promises, shortcomings, and advances. Computational Linguistics, pages 207–219. 12356 Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Yuyan Chen, Qiang Fu, Yichen Yuan, Zhihao Wen, Ge Fan, Dayiheng Liu, Dongmei Zhang, Zhixu Li, and Yanghua Xiao. 2023. Hallucination detection: Robustly discerning reliable answers in large language models. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, pages 245–255. I Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu, et al. 2023. Factool: Factuality detection in generative ai–a tool augmented framework for multi-task and multi-domain scenarios. ArXiv preprint, abs/2307.13528. Roi Cohen, May Hamri, Mor Geva, and Amir Globerson. 2023. LM vs LM: Detecting factual errors via cross examination. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12621–12640. Shehzaad Dhuliawala, Mojtaba Komeili, Jing Xu, Roberta Raileanu, Xian Li, Asli Celikyilmaz, and Jason Weston. 2023. Chain-of-verification reduces hallucination in large language models. ArXiv preprint, abs/2309.11495. Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, and Yoav Goldberg. 2021. Measuring and improving consistency in pretrained language models. Transactions of the Association for Computational Linguistics, 9:1012–1031. Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, et al. 2023. Rarr: Researching and revising what language models say, using language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 16477–16508. Wes Gurnee and Max Tegmark. 2023. Language models represent space and time. ArXiv preprint, abs/2310.02207. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. 
In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 770–778. Or Honovich, Thomas Scialom, Omer Levy, and Timo Schick. 2023. Unnatural instructions: Tuning language models with (almost) no human labor. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14409–14428. Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral 7b. ArXiv preprint, abs/2310.06825. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611. Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, Scott Johnston, Sheer El-Showk, Andy Jones, Nelson Elhage, Tristan Hume, Anna Chen, Yuntao Bai, Sam Bowman, Stanislav Fort, Deep Ganguli, Danny Hernandez, Josh Jacobson, Jackson Kernion, Shauna Kravec, Liane Lovitt, Kamal Ndousse, Catherine Olsson, Sam Ringer, Dario Amodei, Tom Brown, Jack Clark, Nicholas Joseph, Ben Mann, Sam McCandlish, Chris Olah, and Jared Kaplan. 2022. Language models (mostly) know what they know. Lorenz Kuhn, Yarin Gal, and Sebastian Farquhar. 2023. Semantic uncertainty: Linguistic invariances for uncertainty estimation in natural language generation. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452–466. Junlong Li, Zhuosheng Zhang, and Hai Zhao. 2022. Self-prompting large language models for opendomain qa. ArXiv preprint, abs/2212.08635. 12357 Stephanie Lin, Jacob Hilton, and Owain Evans. 2022. TruthfulQA: Measuring how models mimic human falsehoods. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3214–3252. Jiacheng Liu, Alisa Liu, Ximing Lu, Sean Welleck, Peter West, Ronan Le Bras, Yejin Choi, and Hannaneh Hajishirzi. 2022. Generated knowledge prompting for commonsense reasoning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3154–3169. Xinbei Ma, Yeyun Gong, Pengcheng He, Hai Zhao, and Nan Duan. 2023. Query rewriting in retrievalaugmented large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 5303–5315. Potsawee Manakul, Adian Liusie, and Mark Gales. 2023a. SelfCheckGPT: Zero-resource black-box hallucination detection for generative large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9004–9017. Potsawee Manakul, Adian Liusie, and Mark Gales. 2023b. 
SelfCheckGPT: Zero-resource black-box hallucination detection for generative large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9004–9017. Yuning Mao, Pengcheng He, Xiaodong Liu, Yelong Shen, Jianfeng Gao, Jiawei Han, and Weizhu Chen. 2021. Generation-augmented retrieval for opendomain question answering. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4089–4100. Niels Mündler, Jingxuan He, Slobodan Jenko, and Martin Vechev. 2023. Self-contradictory hallucinations of large language models: Evaluation, detection and mitigation. ArXiv preprint, abs/2305.15852. Lorenzo Pacchiardi, Alex J Chan, Sören Mindermann, Ilan Moscovitz, Alexa Y Pan, Yarin Gal, Owain Evans, and Jan Brauner. 2023. How to catch an ai liar: Lie detection in black-box llms by asking unrelated questions. ArXiv preprint, abs/2309.15840. Chris Quirk and Hoifung Poon. 2017. Distant supervision for relation extraction beyond the sentence boundary. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 1171–1182. Zhiqing Sun, Xuezhi Wang, Yi Tay, Yiming Yang, and Denny Zhou. 2023. Recitation-augmented language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023a. Llama: Open and efficient foundation language models. ArXiv preprint, abs/2302.13971. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian CantonFerrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023b. Llama 2: Open foundation and finetuned chat models. ArXiv preprint, abs/2307.09288. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998–6008. Cunxiang Wang, Xiaoze Liu, Yuanhao Yue, Xiangru Tang, Tianhang Zhang, Cheng Jiayang, Yunzhi Yao, Wenyang Gao, Xuming Hu, Zehan Qi, et al. 2023a. Survey on factuality in large language models: Knowledge, retrieval and domain-specificity. ArXiv preprint, abs/2310.07521. 
Lean Wang, Lei Li, Damai Dai, Deli Chen, Hao Zhou, Fandong Meng, Jie Zhou, and Xu Sun. 2023b. Label words are anchors: An information flow perspective for understanding in-context learning. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9840–9855. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. ArXiv preprint, abs/2203.11171. Yuxia Wang, Revanth Gangi Reddy, Zain Muhammad Mujahid, Arnav Arora, Aleksandr Rubashevskii, Jiahui Geng, Osama Mohammed Afzal, Liangming Pan, Nadav Borenstein, Aditya Pillai, et al. 2023c. Factcheck-gpt: End-to-end fine-grained documentlevel fact-checking and correction of llm output. ArXiv preprint, abs/2311.09000. 12358 Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, and Meng Jiang. 2022. Generate rather than retrieve: Large language models are strong context generators. ArXiv preprint, abs/2209.10063. Chiwei Zhu, Benfeng Xu, Quan Wang, Yongdong Zhang, and Zhendong Mao. 2023. On the calibration of large language models and alignment. In Findings of the Association for Computational Linguistics: EMNLP 2023. Andy Zou, Long Phan, Sarah Chen, James Campbell, Phillip Guo, Richard Ren, Alexander Pan, Xuwang Yin, Mantas Mazeika, Ann-Kathrin Dombrowski, et al. 2023. Representation engineering: A topdown approach to ai transparency. ArXiv preprint, abs/2310.01405. A Appendix A.1 Perplexity-based Baseline . . . . . 12 A.2 Cross-model Evaluation on TriviaQA and WebQ . . . . . . . . . . 12 A.3 Evaluation of Different Layers . . 13 A.4 Training Dataset Samples . . . . . 13 A.5 Employed Prompts . . . . . . . . 13 A.1 Perplexity-based Baseline The perplexity-based baseline is formulated as: PPL(AVE) = −1 J X j log pij, (2) PPL(MAX) = max j (−log pij), (3) where i denotes the i-th statement, and j denotes the j-th token in the i-th statement. J is the number of tokens in the i-th statement. pij denotes the probability of the j-th token in the i-th statement generated by the LLM. PPL(AVE) measures the average likelihood of all tokens, while PPL(MAX) measures the likelihood of the least likely token in the statement. A.2 Cross-model Evaluation on TriviaQA and WebQ We present the cross-model evaluation results on TriviaQA in Table 6 and on WebQ in Table 7 to explore whether training data generated by one LLM # Data Preparation Consistency Checking Factuality detection AUC ACC 1 Llama2-7B Llama2-7B Llama2-7B 83.9 73.8 2 Llama2-7B Llama2-7B Llama2-13B 83.8 77.0 3 Llama2-7B Llama2-7B mistral-7B 86.8 76.8 4 Llama2-7B Llama2-13B Llama2-13B 83.7 77.1 5 Llama2-7B mistral-7B mistral-7B 86.3 76.9 Table 6: Cross-model evaluation performance on TriviaQA. # Data Preparation Consistency Checking Factuality detection AUC ACC 1 Llama2-7B Llama2-7B Llama2-7B 83.3 76.0 2 Llama2-7B Llama2-7B Llama2-13B 83.4 76.3 3 Llama2-7B Llama2-7B mistral-7B 83.1 76.5 4 Llama2-7B Llama2-13B Llama2-13B 83.4 76.4 5 Llama2-7B mistral-7B mistral-7B 83.1 76.6 Table 7: Cross-model evaluation performance on WebQ. 
can effectively train a probe to detect the factuality of other LLMs. The settings are the same as those presented for NQ in Table 4. In addition to the default setting where we employ the same LLM for all stages of data preparation, consistency checking, and factuality detection, we also investigate two other settings. The first setting involves using Llama2-7B for data preparation but varying the detection target to Llama2-13B and Mistral-7B. The second setting involves further switching the LLM for consistency checking to Llama2-13B and Mistral-7B, consistent with the LLM to be detected. Tables 6 and 7 present observations: for the first setting, when the detection target is changed to more powerful LLMs, the detection performance increases, indicating that more powerful LLMs can improve detection performance. Additionally, for the second setting, when changing the LLM for Figure 3: AUC obtained using the internal representations of different layers at the probe construction stage. 12359 consistency checking to the same more powerful LLMs, the detection performance remains almost unchanged. This suggests that we can generate training data for questions, responses, and reviews once, regardless of the LLMs being probed, and uniformly employ them to train probes for any LLMs. A.3 Evaluation of Different Layers We vary the internal representations obtained per layer from the 1st to the last layer (32nd) of Llama27B, use them to construct probes respectively, and show the evaluated AUC on the three QA test sets in Figure 3. The results demonstrate that the detection performance of PINOSE generally increases and then decreases with the increase in the number of layers. Typically, the best performance is achieved at the middle layer. A.4 Training Dataset Samples As shown in Table 8, the extensive internal knowledge of Large Language Models (LLMs) enables them to generate datasets encompassing a broad array of topics. This diversity enhances the effectiveness of training downstream detectors. A.5 Employed Prompts We introduce the prompts used at different steps of our proposed PINOSE, and also other baseline methods that need prompts. Prompt for Question Generation in PINOSE is shown in Figure 4, where the seed questions are randomly sampled from an initial set of questions annotated by humans, which is then expanded by the newly generated questions. Prompt for Response Generation in PINOSE is shown in Figure 5, where five instructions for generating responses are presented. We randomly select an instruction from this set each time to encourage diverse response generation. Prompt for Consistency Checking in PINOSE is depicted in Figure 6. It instructs LLMs to determine whether two responses are “Consistent”, “Neutral”, or “Non-Consistent” given a question. For each judgment, three demonstrations are randomly selected from a set of 16 consistency judgment pairs. These diverse demonstrations facilitate the collection of multiple judgments (reviews) for each comparison between two responses, potentially offering unique perspectives. Prompt for Probe Construction in PINOSE is depicted in Figure 7, where a question and the answer generated by the LLM under detection are organized as input for the LLM to obtain their internal representation. This representation then serves as input for the probe to predict its factual label. Prompt for SelfCheckGPT-Prompt is displayed in Figure 8, identical to the one presented in (Manakul et al., 2023b). 
SelfCheckGPT-Prompt facilitates consistency checking by presenting a sentence and its context to LLM, enabling it to judge whether the context adequately supports the sentence. 12360 Table 8: Sample of LLM Generated Training Dataset Question Response Label what composer wrote the music for the lord of the rings film trilogy? The music for the Lord of the Rings film trilogy was composed by Howard Shore. True when did charles dickens publish "a christmas carol"? Charles Dickens published "A Christmas Carol" in 1843. True who sings the song with every beat of my heart? Lindsay Lohan sings with every beat of my heart. False what was the peak unemployment rate during the us great depression? 47.7% in 1933. False what is the latest operating system (os) for apple watch? The latest operating system for Apple Watch is watchOS 7. False Prompt for Question Generation in PINOSE. Please ask some objective questions of similar difficulty to [Seed Questions]. ### [Seed Questions] 1. which part of earth is covered with water? 2. what is the military equivalent of a gs-14? 3. who provided the voice for the geico insurance company gecko? 4. who played the father in sound of music? 5. fugees killing me softly with his song original? 6. Figure 4: Prompt for question generation in PINOSE. Five seed questions from NQ are provided and the blank following item 6 is the new question that encourages LLMs to generate. 12361 Prompt 1 for Response Generation in PINOSE. ### Question where is taurus the bull in the night sky ### Answer Prompt 2 for Response Generation in PINOSE. ### Instruction Answer the following question. ### Question where is taurus the bull in the night sky ### Answer Prompt 3 for Response Generation in PINOSE. ### Instruction Give a helpful answer. ### Question where is taurus the bull in the night sky ### Answer Prompt 4 for Response Generation in PINOSE. ### Instruction Generate a brief response in just one sentence. ### Question where is taurus the bull in the night sky ### Answer Prompt 5 for Response Generation in PINOSE. ### Instruction Compose a concise answer within a single sentence. ### Question where is taurus the bull in the night sky ### Answer Figure 5: Prompt for response generation in PINOSE. Five different instructions are randomly employed to elicit diverse responses. 12362 Prompt for Consistency Checking in PINOSE. Assess the connection between the two responses to the initial query, taking into account the potential scenarios of Endorsement, Contradiction, and Impartiality. ### Input - **Question:** where is vina del mar located in chile - **First Response:** Vina del Mar is located in the Valparaíso Region of Chile, approximately 120 kilometers west of Santiago. - **Second Response:** Viña del Mar is a city located on the central coast of Chile. It is part of the Valparaíso Region and is situated about 120 kilometers (75 miles) northwest of Santiago, the capital of Chile. ### Output Judgement: Endorsement @Reason@: The two responses provide consistent information about the location of Viña del Mar in the Valparaíso Region of Chile, approximately 120 kilometers west/northwest of Santiago. The details in both responses align, endorsing the accuracy of the information. ### Input - **Question:** where is taurus the bull in the night sky - **First Response:** Taurus the Bull is located in the southeastern part of the sky, near the constellation Orion and the celestial equator. 
- **Second Response:** Taurus the Bull is located in the eastern part of the night sky, stretching from the constellation Orion to the constellation Gemini. ### Output Judgement: Contradiction @Reason@: The two responses provide different information regarding the location of Taurus the Bull in the night sky. ### Input - **Question:** who is the old man in waiting on a woman - **First Response:** The old man in the waiting room is Mr. Johnson’s father. - **Second Response:** The old man in the picture is likely the grandfather or great-grandfather of the woman he is waiting on, as he appears to be elderly and has a gentle expression on his face. ### Output Judgement: Impartiality @Reason@: There is no explicit contradiction between the two responses, and they may collectively provide a more detailed and comprehensive answer to the question. The overall tone is impartial, as the information in the responses is neither conflicting nor mutually supportive. ### Input - **Question:** where is vina del mar located in chile - **First Response:** Vina del Mar is located in the Valparaíso Region of Chile, approximately 120 kilometers west of Santiago. - **Second Response:** Viña del Mar is a city located on the central coast of Chile. It is part of the Valparaíso Region and is situated about 120 kilometers (75 miles) northwest of Santiago, the capital of Chile. ### Output Judgement: Figure 6: Prompt for Consistency Checking in PINOSE. Three demonstrations illustrate judgments of endorsement, contradiction, and impartiality, respectively. 12363 Prompt for Probe Construction in PINOSE. ### Instruction Compose a concise answer within a single sentence. ### Question where is taurus the bull in the night sky ### Answer Taurus the Bull is located in the southeastern part of the sky, near the constellation Orion and the celestial equator. Figure 7: Prompt for Probe Construction in PINOSE. The LLM under detection receives the question and its generated answer to obtain the corresponding internal representation. This representation serves as the input for training the probe. Prompt for SelfCheckGPT with Prompt. Context: Delhi was made the capital of India for the first time by the British East India Company in 1858, when the British assume control of the Indian subcontinent following the Indian Rebellion of 1857. Sentence: Delhi was first made the capital of India by the Mughal emperor Shah Jahan in the 17th century. Is the sentence supported by the context above? Answer Yes or No. Answer: Figure 8: Prompt for SelfCheckGPT-Prompt. A sentence and its context are provided to enable LLMs to determine whether the sentence is supported by the context. 12364
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12365–12379 August 11-16, 2024 ©2024 Association for Computational Linguistics What Do Language Models Learn in Context? The Structured Task Hypothesis. Jiaoda Li* Yifan Hou* Mrinmaya Sachan Ryan Cotterell {jiaoda.li, yifan.hou, mrinmaya.sachan, ryan.cotterell}@inf.ethz.ch Abstract Large language models (LLMs) exhibit an intriguing ability to learn a novel task from incontext examples presented in a demonstration, termed in-context learning (ICL). Understandably, a swath of research has been dedicated to uncovering the theories underpinning ICL. One popular hypothesis explains ICL by task selection. LLMs identify the task based on the demonstration and generalize it to the prompt. Another popular hypothesis is that ICL is a form of meta-learning, i.e., the models learn a learning algorithm at pre-training time and apply it to the demonstration. Finally, a third hypothesis argues that LLMs use the demonstration to select a composition of tasks learned during pre-training to perform ICL. In this paper, we empirically explore these three hypotheses that explain LLMs’ ability to learn in context with a suite of experiments derived from common text classification tasks. We invalidate the first two hypotheses with counterexamples and provide evidence in support of the last hypothesis. Our results suggest an LLM could learn a novel task in context via composing tasks learned during pre-training. https://github.com/eth-lre/LLM_ ICL 1 Introduction In-context learning (ICL) is a learning paradigm where a pre-trained large language model (LLM) learns to perform a certain task by extrapolating beyond a demonstration of the task in the form of example prompt–response pairs, given to the model as input. In-context learning does not require an update to the model’s parameters (Radford et al., 2019; Brown et al., 2020). Conditioned on the demonstration, the LLM is then tasked with generating responses to additional related prompts. Pre-trained large language models have exhibited an impressive ability to learn in context across various domains, e.g., code generation (Chen et al., 2021), education (Kasneci et al., 2023), and * Equal contribution Figure 1: The illustration of three hypotheses. medicine (Thirunavukarasu et al., 2023). However, there is still no consensus on when or how ICL works. We taxonomize existing candidate theories into three competing hypotheses (Fig. 1), which we summarize below. Hypothesis 1 (Informal; Task Selection). During pre-training, an LLM learns a set of tasks T . At inference time, the LLM identifies the task τ ∈T given the user-provided demonstration of the task and generalizes to the prompt. Under Hypothesis 1, the demonstration merely allows the model to recognize a task, and no actual learning takes place. Min et al. (2022) offers empirical support for Hypothesis 1; they show that randomly shuffling the responses in the demonstration hardly has any effect on ICL performance, suggesting the demonstration of the task only serves to enable the LLM to look up a task. In other words, Hypothesis 1 asserts that no learning takes place during ICL. 
Some authors (Xie et al., 2022; Wang et al., 2023; Wies et al., 2023) have also argued for Hypothesis 1 from a theoretical angle, contending that if an LLM is pre-trained on a corpus that is generated from a mixture model over tasks, it will be able to infer the task that generated the demonstrations to be able to generalize to a prompt not in the demonstration. The next hypothesis revolves around meta-learning (Schmidhuber, 1987; Thrun and Pratt, 1998; Vilalta and Drissi, 2001; Finn et al., 2017).

Hypothesis 2 (Informal; Meta-Learning). During pre-training, an LLM learns certain learning algorithms. During ICL, the LLM learns a task τ directly from the demonstration using one of the learned learning algorithms.

Hypothesis 2 suggests that the pre-training stage prepares the parameters in an LLM in such a way that various learning algorithms, e.g., gradient descent and least squares regression, can be implicitly deployed during ICL (Von Oswald et al., 2023; Akyürek et al., 2023; Dai et al., 2023) to learn a new task from a demonstration. However, the setting assumed in the above-cited theoretical development is usually over-simplified, e.g., the assumption that the attention mechanism is linear. Moreover, the empirical evidence for Hypothesis 2 is mostly derived from experiments on small transformers trained from scratch on synthetic data (Garg et al., 2022; Raventos et al., 2023; Akyürek et al., 2024). The final hypothesis may be viewed as a mixture of Hypothesis 1 and Hypothesis 2.

Hypothesis 3 (Informal; Structured Task Selection). During pre-training, an LLM learns a set of tasks T. At inference time, the LLM uses the demonstration to compose a sequence of learned tasks τ 1, τ 2, . . . ∈ T and uses this composition for prediction. The composition itself may result in a novel task not seen during pre-training.

Hypothesis 3 extends Hypothesis 1 by allowing ICL to not only index into the pre-learned task set T, but also compose tasks to obtain a novel task τ = τ 1 ◦ τ 2 ◦ · · · ∉ T (the definition of task composition ◦ will be given in § 2.2). Hahn and Goyal (2023) lay the theoretical groundwork for Hypothesis 3; under their framework, they argue that the compositional structure of linguistic pre-training data gives rise to the task composition ability of ICL.

In this paper, we conduct a comprehensive battery of experiments to examine the above-stated three hypotheses. Given a demonstration of a task, i.e., a sequence of prompt–response pairs from which an LLM can successfully learn the task in context, we create a response-altered (RA) task by altering the responses such that the new task is unlikely to have occurred during pre-training. Our results confirm that LLMs can learn such an RA task in context, which rejects Hypothesis 1. We then create prompt-altered (PA) tasks, which instead alter the prompts. If Hypothesis 2 is true, this modification should not affect the performance of ICL. However, we find that the LLMs yield a substantially worse performance on PA tasks than on RA tasks, which contradicts Hypothesis 2. Lastly, in support of Hypothesis 3, we identify a sequence of simple tasks that the LLMs can compose to obtain unobserved RA tasks. It offers a possible explanation for ICL's ability to perform novel tasks.

2 Preliminaries

2.1 Language Models

A language model p is a distribution over Σ∗, where Σ is an alphabet. The elements of Σ are tokens. A string w = w1 · · · wN of length N is a finite sequence of tokens wn ∈ Σ.
Most modern LLMs are defined in an autoregressive manner, i.e.,

$$p(\boldsymbol{w}) \;=\; p(\texttt{EOS} \mid \boldsymbol{w}) \prod_{n=1}^{N} p(w_n \mid \boldsymbol{w}_{<n}), \tag{1}$$

and each local conditional distribution p(· | w<n) is defined over Σ̄ := Σ ∪ {EOS}. We denote w<n := w1 · · · wn−1 and w<1 := ε. Note that not all models defined as in (1) are distributions over Σ∗. However, in the context of our paper, we assume p is.

2.2 A Restatement of the Hypotheses

In this section, we offer a more formal version of each of the three hypotheses discussed in § 1. Note that our treatment is decidedly not a formalization. Nevertheless, we do find it useful to build up some level of formal notation to discuss the three hypotheses more precisely; we save a true formalization for future work. We start with a concrete definition of a task. In this paper, a task τ ∈ 𝒯 is taken to be a pair of language models ⟨πτ, ρτ⟩, where both πτ and ρτ are distributions over Σ∗. We interpret πτ as a distribution over prompts for task τ and ρτ as a distribution over responses conditioned on a prompt. Let 𝒯 be a countable set of tasks. A demonstration of a task τ ∈ 𝒯 is an interwoven sequence of prompt–response pairs. We denote a demonstration of length L as d = p1 r1 ♮ · · · ♮ pL rL, where ♮ ∈ Σ is a distinguished delimiter. We assume pℓ ∼ πτ(·) and rℓ ∼ ρτ(· | pℓ) for 1 ≤ ℓ ≤ L. With T ⊂ 𝒯, we denote the set of tasks observed at pre-training time, i.e., those tasks for which demonstrations p1 r1 ♮ · · · ♮ pL rL of the task appear in the pre-training data.

At inference time, given a demonstration d = p1 r1 · · · pL rL, we say that a language model p has (approximately) learned a task if

$$p(r \mid d \,\natural\, p) \;\approx\; \sum_{\tau \in \mathcal{T}} \rho_{\tau}(r \mid p)\, p(\tau \mid d \,\natural\, p), \tag{2}$$

and p(τ | d♮p) is sufficiently low entropy for large L. In words, (2) says that the task to be performed by the in-context learning may fruitfully be viewed as a latent variable (Xie et al., 2022). And, moreover, when the task-selection distribution p(τ | d♮p) is low entropy for large enough L, i.e., when we observe a large enough demonstration, the sum is dominated by a single summand. This means the language model has succeeded at identifying the task with high probability. Xie et al. (2022) present a more detailed theoretical framework to explore this scenario in a more precise manner.

Figure 2: Illustrations of a sentiment classification task τ (a: the example prompt "cold movie" is paired with the response "negative"), a response-altered (RA) task τ RA (b: the original response "negative" is replaced with a random token, "bar"), and a prompt-altered (PA) task τ PA (c: the original prompt "cold movie" is transformed into random text, "lorem ipsum").

We now restate the hypotheses in our notation.

Hypothesis 1 (Task Selection). The task-selection distribution p(τ | d♮p) is only well-calibrated for tasks in T, i.e., the finite set of tasks observed at pre-training time.

Hypothesis 2 (Meta-Learning). The task-selection distribution p(τ | d♮p) generalizes to some tasks in 𝒯 \ T, i.e., tasks not observed at pre-training time.
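To make the notation above concrete before moving on, here is a minimal sketch (not the authors' code) of how a demonstration d = p1 r1 ♮ · · · ♮ pL rL can be assembled and how the score p(r | d♮p) in (2) can be read off an autoregressive LM. It assumes a HuggingFace-style causal LM and tokenizer; the function names and the use of a newline as a stand-in for the delimiter ♮ are our own illustrative choices.

```python
import torch

def build_demonstration(pairs, delimiter="\n"):
    """Join prompt-response pairs into d = p1 r1 # ... # pL rL
    (the newline stands in for the delimiter token)."""
    return delimiter.join(p + r for p, r in pairs)

@torch.no_grad()
def response_log_prob(model, tokenizer, context, response):
    """Approximate log p(response | context) under an autoregressive LM,
    i.e. the sum of the per-token conditionals in Eq. (1) restricted to the
    response tokens (tokenization boundary effects are ignored in this sketch)."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    full_ids = tokenizer(context + response, return_tensors="pt").input_ids
    log_probs = model(full_ids).logits.log_softmax(dim=-1)
    total = 0.0
    # The token at position `pos` is predicted from the logits at `pos - 1`.
    for pos in range(ctx_ids.shape[1], full_ids.shape[1]):
        total += log_probs[0, pos - 1, full_ids[0, pos]].item()
    return total
```

Taking the argmax of this score over a set of candidate responses is, in spirit, the classification rule used in the experiments that follow.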
To explain our third hypothesis, let T be a set of primitive tasks. We define the notion of task composition as follows. Given two tasks τ 1 = ⟨ρτ1, πτ1⟩ and τ 2 = ⟨ρτ2, πτ2⟩, we define

$$\tau_1 \circ \tau_2 \;\overset{\text{def}}{=}\; \langle \rho_{\tau_1 \circ \tau_2}, \pi_{\tau_2} \rangle, \tag{3}$$

where we further define

$$\rho_{\tau_1 \circ \tau_2}(r \mid p) \;\overset{\text{def}}{=}\; \sum_{\widetilde{r} \in \Sigma^*} \rho_{\tau_1}(r \mid \widetilde{r})\, \rho_{\tau_2}(\widetilde{r} \mid p). \tag{4}$$

It is easy to see that ◦ is associative (see App. B). Then, consider the semigroup (T∗, ◦), where ◦ is as defined above; a semigroup is a set endowed with an associative operator, under which the set is closed. In other words, any composition of primitive tasks τ 1 ◦ · · · ◦ τ K ∈ T∗ results in a new task. This semigroup structure encodes a primitive (non-hierarchical) notion of task composition. We then take 𝒯 = T∗. Finally, let T be the set of observed primitive tasks, and consider (T∗, ◦), a subsemigroup of the full semigroup of primitive-task compositions. Note that T∗ is larger than T, as it includes tasks not observed during pre-training, but whose composite primitive tasks were. With this notation, we now present the third hypothesis.

Hypothesis 3 (Structured Task Selection). The task-selection distribution p(τ | d) is only well-calibrated for (T∗, ◦), i.e., compositions of primitive tasks observed at pre-training time.

3 Testing Hypothesis 1

According to Hypothesis 1, ICL selects the task it needs to perform from the demonstration. However, it is only able to select among the finite set of tasks observed at pre-training time, denoted as T. It follows from Hypothesis 1, then, that if a novel task that has never been seen during pre-training is presented to a pre-trained model as a demonstration, ICL should not be able to perform it. We construct such a novel task as follows. We first create a string-to-string function g: Σ∗ → Σ∗. Note that such a function g is a special case of a response distribution that places probability 1 on a specific output string for every input string. Then, given a task ⟨πτ, ρτ⟩, we obtain a new task τ RA by applying g to the responses of τ, i.e.,

$$\pi_{\tau^{\text{RA}}}(p) \;\overset{\text{def}}{=}\; \pi_{\tau}(p) \tag{5a}$$
$$\rho_{\tau^{\text{RA}}}(r \mid p) \;\overset{\text{def}}{=}\; \rho_{\tau}\!\big(g^{-1}(r) \mid p\big). \tag{5b}$$

Note that the definition in (5) is no more than task composition, i.e., ⟨g, •⟩ ◦ ⟨ρ, π⟩, where • is a stand-in for an arbitrary prompt distribution. We define τ g := ⟨g, •⟩ for the remainder of the paper, i.e., a task induced by the string-to-string function g with a stand-in prompt distribution; we also write τ RA = τ g ◦ τ. The new task τ RA is almost certainly not observed in the pre-training data, i.e., τ RA ∉ T. We call τ RA a response-altered task (RA) and the ICL setting with RA tasks τ RA-ICL. This setting resembles the semantically unrelated label ICL setting of Wei et al. (2023) and the abstract formalization of Pan et al. (2023). We examine the following logical consequence of Hypothesis 1.

Prediction 1. If Hypothesis 1 is true, then an LLM's performance in the τ RA-ICL setting should be similar to random guessing.

Figure 3: Performance of vanilla ICL and τ RA-ICL on the 3 datasets ((a) CR, (b) SST-2, (c) AG News) with different demonstration lengths L (x-axis; y-axis: F1-Macro; Chance shown as a baseline). LLaMA2-70B is used. The LLM is able to learn RA tasks as L grows.

3.1 Experimental Setup

Tasks. As shown in Tab. 3 (App. C.1), we select 3 commonly used text classification datasets for ICL: Customer Reviews (CR; Hu and Liu, 2004), Stanford Sentiment Treebank with binary sentiments (SST-2; Socher et al., 2013), and AG News (Zhang et al., 2015); see App. D for the license details.

Experimental Setup.
Each text classification dataset contains a set of pairs {(x(ℓ), y(ℓ))}ℓ=1,…,L, where x(ℓ) ∈ Σ∗ is an input string and y(ℓ) ∈ Y is x(ℓ)'s classification label drawn from Y, a finite, task-dependent label set. To encode a classification problem as ICL, we map each element of Y to a string in Σ∗ by means of a function o: Y → Σ∗. Additionally, we convert the input x into a prompt p through a templating function t: Σ∗ → Σ∗. The exact templating function we use for each dataset is listed in Tab. 3. Then, we construct a delimiter-separated demonstration of size L: d = t(x(1)) o(y(1)) ♮ · · · ♮ t(x(L)) o(y(L)). To perform classification on a test prompt t(x), we select the highest-probability class as follows:

$$y^\star \;=\; \operatorname*{argmax}_{\widetilde{y} \in Y} \; p\big(o(\widetilde{y}) \mid d \,\natural\, t(x)\big). \tag{6}$$

Settings. We consider three settings: (1) Chance: random guessing uniformly across different classes (all the datasets are largely class-balanced); (2) Vanilla ICL: the standard ICL setting; (3) τ RA-ICL: the responses r in the demonstrations are replaced by g(r); the prompts p are left unchanged.

Implementation Details. We conduct experiments on a publicly available LLM: LLaMA2 (Touvron et al., 2023) with three sizes: 7B, 13B, and 70B. Each experiment is repeated 20 times with different random seeds and the average F1-Macro score is reported. In each experiment, we construct a distinct test set consisting of 256 prompt–response pairs, and for each element of this test set, we sample L prompt–response pairs from the training set, which serve as its demonstration.

3.2 Results

ICL Example Number. We present results on LLaMA2-70B with varying demonstration lengths (labeled as L) in Fig. 3. The τ RA-ICL performance is near chance when the number of prompt–response pairs in the demonstration is small, but it quickly grows to above 80% as L increases and matches the performance of vanilla ICL on SST-2 and AG News when L is large.

Model Size. In Fig. 4, we report ICL performance (L = 32) for models of different sizes. The performance of both vanilla ICL and τ RA-ICL generally improves as the model size increases, but even the smallest model (LLaMA2-7B) yields a performance well above chance in the τ RA-ICL setting.

Figure 4: Average performance of vanilla ICL and τ RA-ICL across 3 datasets (CR, SST-2, AG News) for model sizes 7B, 13B, and 70B (y-axis: F1-Macro; Chance shown as a baseline). Demonstration length L = 32. LLaMA2-70B yields the best performance but LLaMA2-7B is not far behind.

Summary. These results contradict Prediction 1 and demonstrate that an LLM can learn RA tasks τ RA in context, which are highly unlikely to belong to the set of observed tasks T during pre-training. These experiments speak against Hypothesis 1.

4 Testing Hypothesis 2

We have shown in the previous section that an LLM can learn a novel task in context, but it is still unclear what type of tasks can be learned in context. If we accept Hypothesis 2 as true, certain learning algorithms are learned by the pre-trained LLM, so learning from a demonstration in context happens on the fly at inference time. This hypothesis implies that there need not be knowledge of the prompted task in the pre-training data. And indeed, on this view, the role of the pre-training is merely to prepare the parameters in the LLM in such a way that the LLM architecture encodes various learning algorithms.
For instance, some authors (Von Oswald et al., 2023; Akyürek et al., 2023; Dai et al., 2023) argue that training a linear model with gradient descent can be encoded as in-context learning under certain simplifying assumptions. This leads to the following prediction.

Prediction 2. If Hypothesis 2 is true, then ICL should behave similarly to a model trained with a certain learning algorithm, e.g., gradient descent.

A caveat of Prediction 2 is, of course, that we do not know a priori which learning algorithm the LLMs learn at pre-training time.

4.1 Experiment 1

We first propose a prompt-altered (PA) task, where instead of transforming the responses as τ RA does, we transform the prompts, i.e., we create a string-to-string function h: Σ∗ → Σ∗ and apply it to the prompts of τ. This results in the following novel task τ PA = ⟨πτPA, ρτPA⟩, whose distributions are defined as

$$\pi_{\tau^{\text{PA}}}(p) \;=\; \pi_{\tau}\big(h^{-1}(p)\big) \tag{7a}$$
$$\rho_{\tau^{\text{PA}}}(r \mid p) \;=\; \rho_{\tau}\big(r \mid h^{-1}(p)\big). \tag{7b}$$

The ICL setting with PA tasks is named τ PA-ICL. We choose an h such that the PA task can be learned given the demonstration. If the RA task τ RA in the τ RA-ICL setting is indeed learned through a meta-learning-esque procedure, τ PA should be just as easily learnable in the τ PA-ICL setting because τ RA and τ PA are essentially the same task. And, moreover, performing both tasks in context should exhibit similar performance to a logistic regression classifier if the LLM encodes the ability to learn a linear model implicitly in its parameters.

Figure 5: Performance of the various settings (Chance, Vanilla ICL, τ RA-ICL, τ PA-ICL, τ PA-LR) across the 3 text classification tasks, as a function of L (y-axis: F1-Macro). LLaMA2-70B is used. τ PA-ICL performs worse than τ PA-LR and chance.

Table 1: Means and variances of the performance of τ g-Linear and τ g-ICL. γp is the Pearson correlation coefficient between τ g-Linear and τ g-ICL and γs is Spearman's rank correlation coefficient. In the parentheses are p-values. No significant correlation is observed.

Dataset | τ g-Linear (F1-Macro %) | τ g-ICL (F1-Macro %) | γp | γs
CR / SST-2 | 100.0 ± 0.0 | 92.7 ± 15.2 | N.A. | N.A.
AG News | 100.0 ± 0.0 | 93.8 ± 10.8 | N.A. | N.A.
DBPedia | 99.9 ± 0.8 | 59.8 ± 19.7 | −0.02 (0.59) | 0.01 (0.83)

4.1.1 Experimental Setup

Our experimental setup follows § 3.1. However, we introduce two more settings in addition to the ones we consider in § 3.1:

• τ PA-ICL: As defined in (7), the prompts p (including the template tokens such as "Review" and "Sentiment") in the demonstrations are replaced with h(p). The responses r are left unchanged.

• τ PA-LR: To make sure a PA task τ PA is learnable from a demonstration, we fit a logistic regression (LR) model as a baseline. Because a variable-length string cannot be easily fed into a logistic regressor, we first tokenize a string using the LLaMA2 tokenizer and then convert it into bag-of-words (BoW) representations (more accurately, a bag of tokens). The classifier is then trained using a BoW representation of exactly the same L prompt–response pairs as used in a demonstration of τ PA-ICL. We use this baseline to gauge the performance of a model with minimum learning ability.

4.1.2 Results

The average performance of LLaMA2-70B across the 3 datasets is shown in Fig. 5. More detailed results can be found in App. C.3.

τ PA-LR. We find that the logistic regressor is able to learn the tasks to some degree, achieving an F1-Macro score of around 50%. In contrast, a random guesser achieves an F1-Macro score of 41%.
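For concreteness, the τ PA-LR baseline described above — bag-of-tokens features over the same L demonstration pairs, fed to a logistic regression — could be sketched roughly as follows. The use of scikit-learn and the helper names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def bag_of_tokens(token_ids, vocab_size):
    """Count-based bag-of-tokens vector for one tokenized string."""
    vec = np.zeros(vocab_size, dtype=np.float32)
    for tid in token_ids:
        vec[tid] += 1.0
    return vec

def tau_pa_lr_baseline(tokenizer, demo_prompts, demo_labels, test_prompts):
    """Fit a logistic regression on the L demonstration pairs (altered prompts
    as inputs, unchanged responses as labels) and predict labels for test prompts.
    `tokenizer` is assumed to expose .encode() and .vocab_size, as the HuggingFace
    LLaMA2 tokenizer does."""
    V = tokenizer.vocab_size
    X_train = np.stack([bag_of_tokens(tokenizer.encode(p), V) for p in demo_prompts])
    X_test = np.stack([bag_of_tokens(tokenizer.encode(p), V) for p in test_prompts])
    clf = LogisticRegression(max_iter=1000).fit(X_train, demo_labels)
    return clf.predict(X_test)
```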
This experiment shows the task is indeed learnable to a certain extent given the demonstration as training data. And, as expected, the performance improves steadily as L increases.

τ PA-ICL. However, in the τ PA-ICL setting, the LLM always performs near or below chance regardless of the size of the demonstration. Indeed, the performance does not improve even when the demonstration length L reaches the maximum number of tokens allowed for LLaMA2 (Fig. 9).

τ RA-ICL. In stark contrast to the τ PA-ICL setting, as shown in Fig. 5, the LLM has an average score above 80% in the τ RA-ICL setting.

Summary. The enormous performance gap between τ RA-ICL and τ PA-ICL does not concord with Prediction 2, and thereby gives us evidence against Hypothesis 2. Specifically, our results imply that even though the RA tasks τ RA are novel, they are not learned with some learning algorithm on the fly at inference time. This is in line with the observations of Kossen et al. (2024) and Shen et al. (2024).

4.2 Experiment 2

In our second experiment, we focus on one specific theoretical claim—specifically, that LLMs may implicitly learn a linear regression using gradient descent during ICL (Von Oswald et al., 2023; Akyürek et al., 2023; Dai et al., 2023).

4.2.1 Experimental Setup

Tasks. In addition to CR, SST-2, and AG News, we also consider DBpedia (Zhang et al., 2015), which has many more classes (Tab. 3).

Experimental Setup. We consider the task τ g induced by the string-to-string function g, as introduced in § 3. Given string–label pairs {(x(ℓ), y(ℓ))}ℓ=1,…,L, we construct a demonstration for task τ g as follows: o(y(1)) g(o(y(1))) ♮ · · · ♮ o(y(L)) g(o(y(L))), i.e., pℓ = o(y(ℓ)) and rℓ = g(o(y(ℓ))). In this formulation, both prompts and responses consist of a single token, i.e., p, r ∈ Σ. We denote the set of distinct responses as R ⊂ Σ. The single-token construction allows us to model the task using linear regression. The embedding layer (0th layer) of an LLM is a matrix E ∈ R^{|Σ|×D}, where D is the dimensionality of the embeddings. We retrieve the row vector pℓ := E_{pℓ,:} ∈ R^{1×D} for each prompt pℓ, where E_{pℓ,:} denotes the row of E that corresponds to the token pℓ. We vertically stack the row vectors to create the embedding matrix P ∈ R^{L×D} of the prompts in the demonstration (the rows of P are linearly independent). Also, let R ∈ R^{L×D} be a similarly constructed embedding matrix of the responses, i.e., each response rℓ is embedded as a row vector rℓ ∈ R^{1×D}. Additionally, for a test prompt–response pair ⟨p, r⟩ = ⟨o(y), g(o(y))⟩, we embed the prompt p as a row vector p ∈ R^{1×D}. Thus, learning τ g is reduced to performing a multiple linear regression, i.e., learning a parameter matrix W⋆ ∈ R^{D×D} that minimizes the following (non-strictly) convex objective:

$$\mathbf{W}^\star \in \operatorname*{argmin}_{\mathbf{W}} \; \lVert \mathbf{R} - \mathbf{P}\mathbf{W} \rVert^2. \tag{8}$$

Moreover, the data are constructed such that the test prompt–response pair ⟨p, r⟩ has appeared at least once in {⟨pℓ, rℓ⟩}ℓ=1,…,L, so the task does not require generalization at all.

Settings. We compare the following two settings:

• τ g-ICL: We construct a demonstration d = p1 r1 ♮ · · · ♮ pL rL and perform classification as follows:

$$r^\star \;=\; \operatorname*{argmax}_{\widetilde{r} \in R} \; p(\widetilde{r} \mid d \,\natural\, p). \tag{9}$$

The classification is correct if r⋆ = r.

• τ g-Linear: We train a linear regression with gradient descent. At inference time, we use a minimizer W⋆ to compute the predicted embedding vector r⋆ for r as follows:

$$\mathbf{r}^\star \;=\; \mathbf{p}\mathbf{W}^\star \in \mathbb{R}^{1 \times D}. \tag{10}$$

In order to evaluate its performance against ICL, we utilize the transposed embedding layer E⊤ ∈ R^{D×|Σ|} to project r⋆ into R^{1×|Σ|} and perform classification as follows:

$$r^\star \;=\; \operatorname*{argmax}_{\widetilde{r} \in R} \; \big(\mathbf{r}^\star \mathbf{E}^\top\big)_{\widetilde{r}}, \tag{11}$$

where (r⋆E⊤)r̃ denotes the entry of the vector r⋆E⊤ that corresponds to the token r̃. We judge classification correct if r⋆ = r. Note that the embedding layer E is taken directly from the LLM and not trained with the linear model.
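A minimal NumPy sketch of the τ g-Linear procedure just described is given below. The closed-form least-squares solve stands in for the gradient-descent training reported in the implementation details, and the function name, argument layout, and the assumption that the embedding matrix E is available as a NumPy array are our own illustrative choices.

```python
import numpy as np

def tau_g_linear_baseline(E, prompt_ids, response_ids, test_prompt_id, label_token_ids):
    """Sketch of the tau_g-Linear baseline.

    E:               (|Sigma|, D) embedding matrix taken from the LLM (not trained further)
    prompt_ids:      length-L list of single-token prompt ids, p_l = o(y_l)
    response_ids:    length-L list of single-token response ids, r_l = g(o(y_l))
    test_prompt_id:  single-token test prompt p
    label_token_ids: the candidate response set R (token ids)
    """
    P = E[prompt_ids]    # (L, D) stacked prompt embeddings
    R = E[response_ids]  # (L, D) stacked response embeddings

    # Eq. (8): W* minimizing ||R - P W||^2; a least-squares solve is used here
    # in place of the 80-epoch gradient-descent training.
    W_star, *_ = np.linalg.lstsq(P, R, rcond=None)   # (D, D)

    # Eq. (10): predicted response embedding for the test prompt.
    r_star_vec = E[test_prompt_id] @ W_star          # (D,)

    # Eq. (11): project back through E^T and take the argmax over the
    # candidate responses only.
    scores = E[label_token_ids] @ r_star_vec         # (|R|,)
    return label_token_ids[int(np.argmax(scores))]
```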
Figure 6: Comparison of the performance of τ RA-ICL (y-axis, F1-Macro) against τ g-ICL (x-axis, F1-Macro) on (a) CR (γp: 0.34, γs: 0.35), (b) SST-2 (γp: 0.32, γs: 0.43), and (c) AG News (γp: 0.42, γs: 0.51). The dots represent the mean values, and the error bars represent standard deviations. The dashed horizontal line represents the performance of random guessing (i.e., Chance). The Pearson correlation coefficients γp and Spearman correlation coefficients γs are also reported. Significant correlations (p-value < 0.01) are observed.

Implementation Details. For all the following experiments in § 4.2 and § 5, we experiment on the largest model (LLaMA2-70B) and set the demonstration length L = 32. We randomly sample 500 functions g for τ g. Each mapping g is constructed by randomly selecting a token from Σ as the image g(o(y)) of each o(y). We also train τ g-Linear with 32 examples, for 80 epochs, which corresponds to the 80 layers of LLaMA2-70B. We choose a learning rate of 1000, which we find yields the best performance. The correlation between the F1-Macro scores of τ g-ICL and τ g-Linear is computed.

4.2.2 Results

As shown in Tab. 1, τ g-Linear can learn most of the functions perfectly. On the other hand, τ g-ICL has a much lower average and higher variance. In the same table, we also list the correlation between the performance of τ g-Linear and τ g-ICL. In contrast to Prediction 2, there does not exist any significant correlation, which again gives us evidence against Hypothesis 2. It is worth mentioning that the high variance of τ g-ICL's performance also goes against the claim of Olsson et al. (2022) that there exists a special kind of heads, called induction heads, that copy any abstract pattern.

5 Testing Hypothesis 3

Because neither Hypothesis 1 nor Hypothesis 2 matches our experimental findings, we now turn to Hypothesis 3 to explain the empirical facts.

5.1 Experiment 1

We first examine the following prediction.

Prediction 3. Consider τ RA = τ g ◦ τ. Then, an LLM's ability to learn τ RA in context correlates with its ability to learn τ g in context.

We verify it by comparing the performance of τ RA-ICL and τ g-ICL.

5.1.1 Experimental Setup

The experimental setups of τ RA-ICL and τ g-ICL follow those of § 3.1 and § 4.2, respectively. The correlation between the F1-Macro scores of τ RA-ICL and τ g-ICL is computed.

5.1.2 Results

For each dataset in CR, SST-2, and AG News, we bucket the 500 data points for the 500 functions into 8 bins to better visualize the results. The first group contains all functions g with an F1-Macro of 100% on τ g-ICL. The rest of the data points are put into 8 − 1 = 7 bins, evenly distributed and ordered by increasing τ g performance. The mean and the standard deviation of τ RA-ICL's performance in each group are reported in Fig. 6.
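The correlation analysis described next — Pearson's γp and Spearman's γs over the 500 per-function F1-Macro scores — amounts to something like the following sketch; the array names are placeholders for the per-function results.

```python
import numpy as np
from scipy import stats

def correlate_icl_scores(f1_tau_g_icl, f1_tau_ra_icl):
    """Correlate per-function F1-Macro scores of tau_g-ICL and tau_RA-ICL
    (one score per sampled function g)."""
    x = np.asarray(f1_tau_g_icl, dtype=float)
    y = np.asarray(f1_tau_ra_icl, dtype=float)
    gamma_p, p_pearson = stats.pearsonr(x, y)    # Pearson correlation + p-value
    gamma_s, p_spearman = stats.spearmanr(x, y)  # Spearman rank correlation + p-value
    return {"gamma_p": gamma_p, "p_pearson": p_pearson,
            "gamma_s": gamma_s, "p_spearman": p_spearman}
```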
We compute the Pearson correlation coefficient (denoted as γp) and Spearman’s rank correlation coefficient (denoted as γs) between the τ g-ICL scores and the τ RA-ICL scores of all the data points. There exists a modest 12371 positive correlation with 0.34 ≤γp ≤0.42 and 0.35 ≤γs ≤0.51, but it is statistically significant with p-value smaller than 0.01 under Student’s t-test. We take it as evidence supporting that τ RA is learned via composing τ g and τ. 5.2 Experiment 2 Next, rather than creating string-to-string functions g randomly, we construct hand-crafted functions that are intuitive and likely to have been in the pretraining data. We call such functions natural. One example of such a natural function is gsyn, where the prompts are mapped to synonyms. We compare ICL performance on these hand-crafted functions against that on random functions. If Hypothesis 3 were true, τ g-ICL should have a significantly higher performance on such natural functions. 5.2.1 Experimental Setup We consider three types of natural functions: (1) synonym: Each prompt o(y) is mapped to one of its synonyms, (2) antonym: Each prompt o(y) is mapped to one of its antonyms, and (3) keyword: Each prompt o(y) is mapped to a keyword in its genre. Synonyms and antonyms are selected using PyMultiDictionary library.7 Keywords are obtained for each genre by querying GPT-4 (OpenAI, 2023). In contrast to random, we cannot create a large number of natural functions as easily. Therefore, we manually choose a candidate set of 10 possible synonyms (resp. antonyms and keywords) for each prompt. Thus, we can create a natural function by sampling a synonym (resp. antonym and keyword), from the candidate sets for every input. We create 500 such functions. 5.2.2 Results We plot the mean F1 Macro scores as well as the standard deviations of τ g-ICL across different functions in Tab. 2. We observe that the LLM can learn synonym and keyword in context almost perfectly. The function antonym appears to be more difficult to learn, but still clearly much easier than random, which is most evident in DBPedia where the LLM has an F1-Macro score of 84.5% on antonym and only 59.8% on random. We also perform a onesided Welch’s t-test between each of antonym, synonym, keyword and random. The t-values are all greater than 2 and the p-values are all smaller 7https://github.com/ppizarror/ PyMultiDictionary. The library aggregates information from educalingo.com, synonym.com, and WordNet (Miller, 1994) Dataset Mapping F1-Macro (%) t-test t-value p-value random 93.2 ± 13.6 N.A. N.A. CR / antonym 97.0 ± 8.5 4.35 < 0.01 SST-2 synonym 100.0 ± 0.1 10.71 < 0.01 keyword 100.0 ± 0.0 10.75 < 0.01 AG News random 93.8 ± 10.8 N.A. N.A. antonym 99.9 ± 0.3 12.59 < 0.01 synonym 100.0 ± 0.0 12.72 < 0.01 keyword 100.0 ± 0.0 12.73 < 0.01 DBPedia random 59.8 ± 19.7 N.A. N.A. antonym 84.5 ± 20.5 19.42 < 0.01 synonym 95.8 ± 1.7 40.67 < 0.01 keyword 93.3 ± 4.9 36.90 < 0.01 Table 2: The performance of τ g-ICL with different types of functions g. One-sided t-tests are performed between the natural functions (antonym, synonym, keyword and the random functions. The LLM learns the natural functions significantly better. than 0.01. In other words, the natural functions are indeed significantly easier to learn in context, which is in line with our prediction. 5.3 Experiment 3 For Hypothesis 3 to be a good hypothesis, however, we need to additionally show why seemingly arbitrary tasks τ g would likely be elements of T ∗. We offer a tentative explanation. 
While we cannot show how to construct an arbitrary string-to-string function g out of natural functions, we can exhibit compositions of natural functions that appear arbitrary. We do so by composing natural functions, e.g., gsyn, repeatedly as follows gm syn(·) = gsyn(gsyn(· · · gsyn(·))) | {z } ×m . (12) We call gm syn the mth power of gsyn. 5.3.1 Experiment Setup The powers of the synonym functions are created the same way as in § 5.2.1. Because the candidate sets of synonyms are small (of size 10), to demonstrate how it is possible to create seemingly arbitrary functions, we adversarially sample functions such that for every p, gm syn(p) ∈ Sm(p) \ Sm−1(p), where Sm(p) denotes the set of all the mth order synonyms of p, i.e., we only keep those that are not synonyms of lower orders. As an example of this strategy, we may have the following mapping: “company” is synonymous with “firm”, which is synonymous with “association” (as a noun) and “adamant” (as an adjective). Thus, we would map “adamant” to “company”, a seemingly unrelated word. 12372 Figure 7: Synonyms of “positive” and “negative”. The words between concentric circles represent elements of the candidate sets of high-order synonyms. As the order gets higher, the synonyms become less related to the seed word. 0 2 4 6 0.8 0.9 1 γp: -0.27 (p-value < 0.01) γs: -0.31 (p-value < 0.01) Order of synonym τ g-ICL (F1-Macro) (a) CR / SST-2 0 2 4 6 0.9 0.95 1 1.05 γp: -0.23 (p-value < 0.01) γs: -0.22 (p-value < 0.01) Order of synonym (b) AG News 0 2 4 6 0.7 0.8 0.9 1 γp: -0.56 (p-value < 0.01) γs: -0.71 (p-value < 0.01) Order of synonym (c) DBpedia Figure 8: The performance of LLaMA2-70B learning synonym of different orders. The performance in general decreases as the order increases. 5.3.2 Results We visualize a small subset of the synonyms of the two o(y) of CR/SST in Fig. 7. The concentric circles are the candidate sets, with radii representing their synonym orders. As can be seen, taking a natural function to higher power results in functions that appear as if they were arbitrary string-to-string mappings, e.g., “positive” →“tang” “negative” →“or”. Yet, as our results demonstrate, ICL is still able to learn these functions well—above 90% F1-score— as shown in Fig. 8b. The same also holds true for the AG News and DBPedia text classification datasets. With a large task alphabet T, we expect an LLM to be able to learn many seemingly arbitrary string-to-string functions. More evidence demonstrating that these functions g are learned via composition comes from the noticeable performance decrease as the task composition becomes more complex. As shown in Fig. 8, there exists a negative correlation (p-value < 0.01) between the performance of τ g-ICL and the order of synonym. 6 Conclusion In this paper, we examine the ability of pre-trained LLMs to learn various tasks in context. We find that LLMs can learn text classification tasks with corrupted responses (τ RA-ICL), but when the prompts are corrupted in the same manner (τ PA-ICL), ICL performs significantly worse than a logistic regression model. A closer look suggests the tasks in the τ RA-ICL may have been learned via composing primitive tasks learned during pre-training. Overall, our paper provides insights into the nature, abilities, and limitations of ICL. Acknowledgements This publication was made possible by an ETH AI Center doctoral fellowship to Jiaoda Li. Yifan Hou is supported by the Swiss Data Science Center PhD Grant (P22-05). 
12373 Limitations One limitation is that we only experiment on one family of LLMs—LLaMA2. Our results might not hold on larger models, e.g., GPT-4. Another limitation is that our formalization is not complete and is, sadly, a bit more vibes-based than we would have liked. We hope to achieve a more concrete formalization in future work. Ethical Considerations The datasets and pre-trained LLMs that we use are all publicly available. Our paper focuses on model interpretation, and we thereby do not foresee any ethical issues originating from this work. References Ekin Akyürek, Dale Schuurmans, Jacob Andreas, Tengyu Ma, and Denny Zhou. 2023. What learning algorithm is in-context learning? investigations with linear models. In International Conference on Learning Representations. Ekin Akyürek, Bailin Wang, Yoon Kim, and Jacob Andreas. 2024. In-context language learning: Architectures and algorithms. Computing Research Repository, arXiv:2401.12973. Version 2. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Pondé de Oliveira Pinto, Jared Kaplan, Harrison Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating large language models trained on code. Computing Research Repository, arXiv:2107.03374. Damai Dai, Yutao Sun, Li Dong, Yaru Hao, Shuming Ma, Zhifang Sui, and Furu Wei. 2023. Why can GPT learn in-context? language models secretly perform gradient descent as meta-optimizers. In Findings of the Association for Computational Linguistics: ACL 2023, pages 4005–4019, Toronto, Canada. Association for Computational Linguistics. Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 1126–1135. PMLR. Shivam Garg, Dimitris Tsipras, Percy S Liang, and Gregory Valiant. 2022. What can transformers learn incontext? A case study of simple function classes. In Advances in Neural Information Processing Systems, volume 35, pages 30583–30598. Michael Hahn and Navin Goyal. 2023. A theory of emergent in-context learning as implicit structure induction. 
Computing Research Repository, arXiv:2303.07971. Version 1. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Seattle, Washington, USA, August 22-25, 2004, pages 168–177. ACM. Enkelejda Kasneci, Kathrin Seßler, Stefan Küchemann, Maria Bannert, Daryna Dementieva, Frank Fischer, Urs Gasser, Georg Groh, Stephan Günnemann, Eyke Hüllermeier, et al. 2023. ChatGPT for good? On opportunities and challenges of large language models for education. Learning and individual differences, 103:102274. Jannik Kossen, Yarin Gal, and Tom Rainforth. 2024. Incontext learning learns label relationships but is not conventional learning. In International Conference on Learning Representations. George A. Miller. 1994. WordNet: A lexical database for English. In Human Language Technology: Proceedings of a Workshop held at Plainsboro, New Jersey, March 8-11, 1994. Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the role of demonstrations: What makes in-context learning work? In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11048–11064, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, Tom Henighan, Ben Mann, 12374 Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, Dawn Drain, Deep Ganguli, Zac Hatfield-Dodds, Danny Hernandez, Scott Johnston, Andy Jones, Jackson Kernion, Liane Lovitt, Kamal Ndousse, Dario Amodei, Tom Brown, Jack Clark, Jared Kaplan, Sam McCandlish, and Chris Olah. 2022. In-context learning and induction heads. Computing Research Repository, arXiv:2209.11895. OpenAI. 2023. GPT-4 technical report. Computing Research Repository, arXiv:2303.08774. Jane Pan, Tianyu Gao, Howard Chen, and Danqi Chen. 2023. What in-context learning “learns” in-context: Disentangling task recognition and task learning. In Findings of the Association for Computational Linguistics: ACL 2023, pages 8298–8319, Toronto, Canada. Association for Computational Linguistics. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Github Repository. Allan Raventos, Mansheej Paul, Feng Chen, and Surya Ganguli. 2023. Pretraining task diversity and the emergence of non-bayesian in-context learning for regression. In Thirty-seventh Conference on Neural Information Processing Systems. Jürgen Schmidhuber. 1987. Evolutionary principles in self-referential learning, or on learning how to learn: The meta-meta-... hook. Ph.D. thesis, Technische Universität München. Lingfeng Shen, Aayush Mishra, and Daniel Khashabi. 2024. Do pretrained transformers learn in-context by gradient descent? Computing Research Repository, arXiv:2310.08540. Version 5. Aaditya K Singh, Stephanie C.Y. Chan, Ted Moskovitz, Erin Grant, Andrew M Saxe, and Felix Hill. 2023. The transient nature of emergent in-context learning in transformers. In Thirty-seventh Conference on Neural Information Processing Systems. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Seattle, Washington, USA. 
Association for Computational Linguistics. Arun James Thirunavukarasu, Darren Shu Jeng Ting, Kabilan Elangovan, Laura Gutierrez, Ting Fang Tan, and Daniel Shu Wei Ting. 2023. Large language models in medicine. Nature medicine, 29(8):1930– 1940. Sebastian Thrun and Lorien Pratt. 1998. Learning to Learn: Introduction and Overview, pages 3–17. Springer US, Boston, MA. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and finetuned chat models. Computing Research Repository, arXiv:2307.09288. Version 2. Ricardo Vilalta and Youssef Drissi. 2001. A perspective view and survey of meta-learning. Artificial Intelligence Review, 18. Johannes Von Oswald, Eyvind Niklasson, Ettore Randazzo, Joao Sacramento, Alexander Mordvintsev, Andrey Zhmoginov, and Max Vladymyrov. 2023. Transformers learn in-context by gradient descent. In Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pages 35151–35174. PMLR. Xinyi Wang, Wanrong Zhu, Michael Saxon, Mark Steyvers, and William Yang Wang. 2023. Large language models are implicitly topic models: Explaining and finding good demonstrations for in-context learning. In Workshop on Efficient Systems for Foundation Models @ ICML2023. Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. 2022. Emergent abilities of large language models. Transactions on Machine Learning Research. Survey Certification. Jerry Wei, Jason Wei, Yi Tay, Dustin Tran, Albert Webson, Yifeng Lu, Xinyun Chen, Hanxiao Liu, Da Huang, Denny Zhou, and Tengyu Ma. 2023. Larger language models do in-context learning differently. Computing Research Repository, arXiv:2303.03846. Version 2. Noam Wies, Yoav Levine, and Amnon Shashua. 2023. The learnability of in-context learning. In Thirtyseventh Conference on Neural Information Processing Systems. 12375 Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. 
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics.

Sang Michael Xie, Aditi Raghunathan, Percy Liang, and Tengyu Ma. 2022. An explanation of in-context learning as implicit Bayesian inference. In International Conference on Learning Representations.

Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 649–657.

A Related Work

Many papers attempt to provide a theoretical grounding for ICL's emergence, hypothesizing that ICL emerges from pre-training on documents drawn from a mixture of latent concepts (Xie et al., 2022; Wang et al., 2023; Wies et al., 2023) or a compositional attribute grammar (Hahn and Goyal, 2023). After pre-training, the LLM can recognize the concept or generative process that the prompt is sampled from and use it for next-token prediction. Another stream of research endeavors to show how LLMs can learn new tasks in context. Von Oswald et al. (2023); Akyürek et al. (2023); Dai et al. (2023) show by construction that the transformer can implicitly implement various learning algorithms, such as gradient descent and least-squares regression. Nevertheless, there is a gap between the settings assumed in these papers and the ones used in practice, e.g., the attention mechanism needs to be linear, or only linear regression problems are considered. Empirically, it has been shown that transformers can learn various function classes in context, e.g., linear regression (Raventos et al., 2023), multi-layer perceptrons and decision trees (Garg et al., 2022), and regular languages (Akyürek et al., 2024), but these models are trained from scratch on synthetic data derived from functions in the same class, which does not conform with how LLMs are pre-trained in practice, and it has been argued that their behavior may not extrapolate to models of larger size (Wei et al., 2022) or longer training time (Singh et al., 2023). We therefore focus on LLMs that are pre-trained on large natural-text corpora and used in practice. Most related to our work are Kossen et al. (2024) and Shen et al. (2024), which also find that ICL behaves differently from models trained with conventional learning algorithms such as gradient descent. Our work is distinct in investigating whether ICL can learn via task composition.

B Associativity of Task Composition

Proposition 1. Task composition is associative, i.e., (\tau_1 \circ \tau_2) \circ \tau_3 = \tau_1 \circ (\tau_2 \circ \tau_3).

Proof. The proof follows by simple manipulation:

\rho_{\tau_1 \circ (\tau_2 \circ \tau_3)}(r \mid p)
  = \sum_{\tilde{r}_1 \in \Sigma^*} \rho_{\tau_1}(r \mid \tilde{r}_1) \Big( \sum_{\tilde{r}_2 \in \Sigma^*} \rho_{\tau_2}(\tilde{r}_1 \mid \tilde{r}_2)\, \rho_{\tau_3}(\tilde{r}_2 \mid p) \Big)
  = \sum_{\tilde{r}_2 \in \Sigma^*} \Big( \sum_{\tilde{r}_1 \in \Sigma^*} \rho_{\tau_1}(r \mid \tilde{r}_1)\, \rho_{\tau_2}(\tilde{r}_1 \mid \tilde{r}_2) \Big) \rho_{\tau_3}(\tilde{r}_2 \mid p)
  = \rho_{(\tau_1 \circ \tau_2) \circ \tau_3}(r \mid p).   (13)

\pi_{\tau_1 \circ (\tau_2 \circ \tau_3)} = \pi_{\tau_2 \circ \tau_3} = \pi_{\tau_3} = \pi_{(\tau_1 \circ \tau_2) \circ \tau_3}   (14)

∎

C Supplementary Results

C.1 Dataset Details

The prompt templating functions t(x) and the response mappings o(y) for CR, SST-2, and AG News are in Tab. 3. The altered responses are tokens from the Lorem Ipsum generator.

C.2 (First-Order) Synonym

The 10 candidate synonyms for each prompt are in Tab. 4.

C.3 Detailed Results

We provide detailed results on CR, SST-2, and AG News in Fig. 9. We report LLaMA2 with three model sizes.
The results on LLaMA2-7B and LLaMA2-13B are consistent with those on LLaMA2-70B that we report in the main text. 12377 Dataset Prompt Response Template τ τ RA CR Review: x \n positive, negative por, Ne Sentiment: SST-2 Review: x \n positive, negative por, Ne Sentiment: AG News News: x \n word, sports, business, science Mag, Am, Num, Lab News type: Table 3: Dataset information, templates, and responses used for τ and τ RA. Dataset Prompt Synonyms CR / SST-2 positive good, bright, happy, cheer, benefit, fortune, helpful, joy, help, favorite negative bad, dark, dire, sorrow, harm, down, dim, bitter, sad, blue AG News world earth, planet, universe, sphere, creation, domain, environment, habitat, society, system sports game, play, exercise, competition, activity, challenge, contest, match, training, racing business trade, commerce, industry, company, market, operation, firm, establishment, production, organization science research, technology, knowledge, experiment, investigation, theory, discovery, analysis, discipline, learning DBpedia company firm, business, startup, establishment, operator, producer, chain, brand, office, Agency transport vehicle, car, train, bus, plane, ship, tram, cab, Metro, carriage player runner, footballer, basketball, race, box, golf, cycle, sky, board, sail politics government, policy, state, nation, election, party, assembly, council, republic, leader artist painter, writer, composer, singer, actor, designer, director, producer, poet, photograph animal creature, pet, bird, fish, insect, species, habitat, conservation, wild, migration school university, college, prep, primary, secondary, high, middle, grammar, technical, night plant flower, tree, Fern, grass, leaf, bud, root, branch, seed, growth village Township, settlement, community, district, parish, cluster, region, municipality, neighborhood, rural book novel, volume, text, manual, guide, reference, edition, journal, cover, series nature terrain, forest, mountain, river, sea, lake, ocean, beach, desert, garden album record, release, compilation, single, track, score, collection, edition, session, live building structure, house, stad, tower, hall, temple, palace, castle, fort, shed film movie, picture, cinema, feature, animation, drama, comedy, western, mystery, horror Table 4: The first-order synonyms used for constructing gsyn. D Experimental Setup We implement our experiments on A100-80G. Each experiment takes around 1-2 GPU hours. For LLaMA2, We use the implementation and pre-trained weights provided by HuggingFace (Wolf et al., 2020). CR (Hu and Liu, 2004) and SST (Socher et al., 2013) are under the CC-BY license. AG News (Zhang et al., 2015) and DBpedia (Zhang et al., 2015) are under the BSD-3-Clause license. We use these datasets consistently with their intended use. 12378 0 50 100 0 0.2 0.4 0.6 0.8 1 L F1-Macro (a) 7B on CR 0 50 100 0 0.2 0.4 0.6 0.8 1 L (b) 7B on SST-2 0 20 40 0 0.2 0.4 0.6 0.8 1 L Chance Vanilla ICL τ RA-ICL τ PA-ICL τ PA-LR (c) 7B on AG News 0 50 100 0 0.2 0.4 0.6 0.8 1 L F1-Macro (d) 13B on CR 0 50 100 0 0.2 0.4 0.6 0.8 1 L (e) 13B on SST-2 0 20 40 0 0.2 0.4 0.6 0.8 1 L Chance Vanilla ICL τ RA-ICL τ PA-ICL τ PA-LR (f) 13B on AG News 0 50 100 0 0.2 0.4 0.6 0.8 1 L F1-Macro (g) 70B on CR 0 50 100 0 0.2 0.4 0.6 0.8 1 L (h) 70B on SST-2 0 20 40 0 0.2 0.4 0.6 0.8 1 L Chance Vanilla ICL τ RA-ICL τ PA-ICL τ PA-LR (i) 70B on AG News Figure 9: ICL performance with different demonstration lengths L. 12379
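As a concrete illustration of the evaluation setup described in Appendix D, the following is a minimal sketch of in-context classification with a HuggingFace LLaMA2 checkpoint. The model identifier, prompt template, label set, and scoring-by-first-label-token shortcut are illustrative assumptions, not the exact configuration used in the experiments above.

# Minimal sketch of in-context classification with a HuggingFace LLaMA-2 checkpoint.
# Model name, template, and label set are illustrative; they are not the paper's exact setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint; any causal LM works here

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.float16, device_map="auto")
model.eval()

def build_prompt(demonstrations, query):
    """Concatenate L in-context demonstrations followed by the unanswered query."""
    blocks = [f"Review: {x}\nSentiment: {y}" for x, y in demonstrations]
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

@torch.no_grad()
def classify(demonstrations, query, labels=("positive", "negative")):
    """Score each candidate label by the logit of its first token at the next position."""
    prompt = build_prompt(demonstrations, query)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    logits = model(**inputs).logits[0, -1]          # next-token distribution
    label_ids = [tokenizer(f" {l}", add_special_tokens=False).input_ids[0] for l in labels]
    scores = {l: logits[i].item() for l, i in zip(labels, label_ids)}
    return max(scores, key=scores.get)

demos = [("great battery life and screen", "positive"),
         ("the camera broke after a week", "negative")]
print(classify(demos, "works exactly as advertised"))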
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1229–1248 August 11-16, 2024 ©2024 Association for Computational Linguistics BeamAggR: Beam Aggregation Reasoning over Multi-source Knowledge for Multi-hop Question Answering Zheng Chu1, Jingchang Chen1, Qianglong Chen2*, Haotian Wang1 Kun Zhu1, Xiyuan Du1, Weijiang Yu3, Ming Liu1,4*, Bing Qin1,4 1Harbin Institute of Technology, Harbin, China 2Zhejiang University 3Sun Yat-sen University 4Peng Cheng Laboratory {zchu,jcchen,mliu}@ir.hit.edu.cn, [email protected] Abstract Large language models (LLMs) have demonstrated strong reasoning capabilities. Nevertheless, they still suffer from factual errors when tackling knowledge-intensive tasks. Retrievalaugmented reasoning represents a promising approach. However, significant challenges still persist, including inaccurate and insufficient retrieval for complex questions, as well as difficulty in integrating multi-source knowledge. To address this, we propose Beam Aggregation Reasoning (BeamAggR), a reasoning framework for knowledge-intensive multi-hop QA. BeamAggR explores and prioritizes promising answers at each hop of question. Concretely, we parse the complex questions into trees, which include atom and composite questions, followed by bottom-up reasoning. For atomic questions, the LLM conducts reasoning on multi-source knowledge to get answer candidates. For composite questions, the LLM combines beam candidates, explores multiple reasoning paths through probabilistic aggregation, and prioritizes the most promising trajectory. Extensive experiments on four opendomain multi-hop reasoning datasets show that our method significantly outperforms SOTA methods by 8.5%. Furthermore, our analysis reveals that BeamAggR elicits better knowledge collaboration and answer aggregation. 1 Introduction Large language models have showcased impressive performance across various NLP tasks (OpenAI, 2023; Touvron et al., 2023). Furthermore, chain-ofthought (CoT) prompting further enhances the reasoning capabilities of LLMs (Wei et al., 2022; Kojima et al., 2022; Chu et al., 2023). However, when the question surpasses the knowledge boundaries of LLMs, it leads to factual errors, also known as hallucination. Retrieval-augmented generation (RAG) assists LLMs by retrieving supporting knowledge * Corresponding Authors: Ming Liu, Qianglong Chen Candidates Final answer Multi-source reasoning Best reasoning trajectory Beam aggregation … Q1 A1 | Q2 A2 | Q3 A3 | Q4 A4 | afinal 2. Traverse question trees in post-order and solve sub-questions bottom-up. 1. Decompose complex questions into question trees. Figure 1: A brief overview of our method. Complex questions are decomposed into trees (top). Multi-source beam aggregation reasoning is conducted to find the best reasoning trajectory. (bottom) to alleviate factual errors, thereby drawing significant attention within the research community (Gao et al., 2023; Huang et al., 2023). Early attempts adopt a retrieve-then-read framework for one-time retrieval (Lazaridou et al., 2022; Borgeaud et al., 2022; Izacard et al., 2023; Zhang et al., 2023b). They use the original complex question as the retrieval query and achieve satisfactory performance in single-hop questions (Joshi et al., 2017; Kwiatkowski et al., 2019). 
Nonetheless, when confronted with complex multi-hop questions, one-time retrieval suffers from the issues of inaccurate and irrelevant retrieval, which greatly impair their performance in multi-hop reasoning. Recent work introduces iterative multi-round retrieval (Khattab et al., 2022; Press et al., 2023; Trivedi et al., 2023; Jiang et al., 2023b; Shao et al., 2023). They use the content generated by LLMs for retrieval and, in turn, use the newly retrieved content for reasoning. Through the iterative alternation between retrieval-augmented reasoning and reasoning-augmented retrieval, the retrieval is substantially improved. Meanwhile, a portion of research decomposes complex questions into simple sub-questions, employing sub-question retrieval to 1229 obtain more precise information (Zhou et al., 2023; Cao et al., 2023; Park et al., 2023; Su et al., 2023). However, there are still significant issues with these methods. Iterative retrieval is tough to achieve precise retrieval aligned with the model’s reasoning. Sub-question retrieval grapples with the challenge of accurately aggregating answers, which causes cascading errors. Besides, when it comes to the open-domain setting, relying solely on knowledge from a single source proves inadequate for complex questions. Introducing multisource knowledge may encounter knowledge conflicts, rendering efficient collaboration challenging, thus impeding its applications. To address the aforementioned challenges, our research focuses on the following question: How can models adaptively select and integrate knowledge from different sources during the reasoning, while reducing cascading errors in subquestions aggregation? Building upon this motivation, we propose Beam Aggregation Reasoning (BeamAggR) framework for knowledge-intensive multi-hop reasoning. Concretely, our method consists of three modules. (i) question decomposition: We begin by leveraging LLMs to decompose complex questions and convert them into question trees. The root node contains the original complex question, leaf nodes contain atomic sub-questions, and intermediate nodes contain composite questions that require compositional (or comparative) reasoning to obtain answers. Afterward, we traverse the question tree in post-order, employing bottom-up reasoning. (ii) complementary multi-source reasoning: For atomic questions, we conduct multisource reasoning, followed by fine-grained answer aggregation, thus fostering knowledge collaboration. The aggregated answers are then normalized into a probability distribution, serving as candidates for beam aggregation. (iii) beam aggregation: For composite questions, we enumerate the combinations of their dependent sub-questions and conduct reasoning. Finally, the reasoning results are probabilistically aggregated, and the most promising predictions are selected. In summary, our method explores reasoning trajectories at each hop of questions and prioritizes paths with higher likelihoods, thereby bringing reasoning insight and reducing cascading errors. Besides, the complementary multi-source reasoning helps alleviate knowledge omission and conflict. We evaluate our method on four open-domain multi-hop reasoning datasets: HotpotQA (Yang et al., 2018), 2WikiMQA (Ho et al., 2020), MuSiQue (Trivedi et al., 2022), and Bamboogle (Press et al., 2023). The experiments are conducted using GPT-3.5-turbo (Ouyang et al., 2022) and Mistral (Jiang et al., 2023a). 
Experimental results show that our method significantly outperforms the baselines on four datasets, with an average improvement of 8.5% compared to the previous state-of-the-art method, demonstrating its superiority. Furthermore, thorough analysis reveals the superiority of our approach to knowledge collaboration and answer aggregation. Our contributions can be summarized as follows: • We introduce BeamAggR, a framework for opendomain multi-hop reasoning, which outperforms the state-of-the-art methods. • BeamAggR dynamically integrates multi-source knowledge in fine granularity during reasoning, fostering knowledge collaboration. • BeamAggR broadens the scope of reasoning with beam combination and optimizes reasoning trajectories, mitigating cascading errors. 2 Related Work 2.1 Reasoning with Large Language Model Wei et al. (2022) prompts LLMs to generate reasoning process before final answers, which is known as chain-of-thought (CoT) prompting. Since then, CoT prompting has been widely applied to enhance the reasoning capabilities of LLMs. Some work also designs instructions or clustering demonstrations for zero-shot reasoning (Kojima et al., 2022; Zhang et al., 2023c). Additionally, self-ensemble has been proven through extensive experiments to be an effective approach to improve performance. Wang et al. (2023) uses probabilistic sampling for multiple reasoning traces, while Qin et al. (2023) diversifies reasoning paths by using multiple languages CoTs. To address complex questions, Zhou et al. (2023); Dua et al. (2022) decompose them into sub-questions and solve them progressively, while Yao et al. (2023) models reasoning procedures as BFS or DFS search on reasoning trees. In contrast, our method employs divide-andconquer strategy, breaking down complex questions into trees and addressing them through bottom-up aggregation reasoning. 2.2 Retrieval-augmented Reasoning Knowledge-sensitive tasks may induce factual hallucinations in LLMs, thus necessitating external 1230 Q: When did the city where Souvenir's performer was born become the capitol of the state Knowles was from? Q(1): Who is Souvenir's performer? (#1) Q(3): Where Knowles was from? (#3) Q(2): Where was #1 born? (#2) Q(4): When did #2 become the capitol of #3? (final answer) Q: Who is Souvenir's performer? A: Paul, Kumi, Paul … A: James, Martin, Martin … A: Gasol, Paul, Paul … A: Paul, Paul, Martin … a1: Paul p1: 0.73 a2: Martin p2: 0.27 Close Book Parametric Knowledge Web Search Wikipedia Retrieval Aggregation & Normalization (b) Multi-source Reasoning Complex Question (a) Question Decomposition LLM Question Tree Q(1) 𝒂𝟏 (𝟏): Paul 𝒑𝟏 (𝟏): 0.73 𝒂𝟐 (𝟏): Martin 𝒑𝟐 (𝟏): 0.27 Q(3) 𝒂𝟏 (𝟑): Texas 𝒑𝟏 (𝟑): 0.66 𝒂𝟐 (𝟑): Oklahoma 𝒑𝟑 (𝟑): 0.34 𝑸𝟏 (𝟐) 𝑸𝟐 (𝟐) 𝒂𝟏 (𝟐): Austin 𝒑𝟏 (𝟐): 0.86 𝒂𝟐 (𝟐): Houston 𝒑𝟐 (𝟐): 0.14 𝑸(𝟐)×𝑸(𝟑) →(𝑨(𝟐)×𝑨(𝟑), 𝒑(𝟐)×𝒑(𝟑)) (Austin, Texas), 𝒑𝟏𝟏 (𝟒): 0.57 (Houston, Texas), 𝒑𝟐𝟏 (𝟒): 0.09 … 𝑸%% (&) 𝑸𝟏' (&) 𝑸𝟏( (&) …… 𝒂𝟏 (𝟒): 1839 𝒑𝟏 (𝟒): 0.93 𝒂𝟐 (𝟒): 1952 𝒑𝟐 (𝟒): 0.07 afinal = argmax(p(4)) = 1839 √ (c) Beam Aggregation Reasoning 𝑸%% (&) : When did Austin become the capitol of Texas? Beam Combination Probabilistic Answer Aggregation Bottom-up Reasoning Question Decomposition Multi-source Reasoning ❄ Figure 2: An overview of BeamAggR. 
(a) Question decomposition: decompose complex questions into trees and address them bottom-up (b) Multi-source reasoning: reason from diverse knowledge sources and normalize answers into a probability distribution (c) Beam aggregation reasoning: explore based on children’s predictions, probabilistic aggregate answers and select the most promising reasoning trajectory. retrieval for retrieval-augmented reasoning. Early work uses one-time retrieval, but they struggle to gather all the necessary knowledge to answer complex questions, resulting in knowledge omissions (Borgeaud et al., 2022; Lazaridou et al., 2022; Izacard et al., 2023). To address this, iterative retrieval is proposed. DSP (Khattab et al., 2022) engages in iterative interactions between retriever and reader through programmatically defined procedures. SelfAsk (Press et al., 2023) iteratively decomposes questions and solves them through the Google search. IRCoT (Trivedi et al., 2023) uses each reasoning step as a query for retrieval until obtaining the final answer. Similarly, ITER-RETGEN (Shao et al., 2023) conducts iterative retrievals by concatenating the output from the previous round with the original question. FLARE (Jiang et al., 2023b) introduces a lookahead mechanism, dynamically controlling the timing of retrieval based on reasoning confidence. Beam Retrieval (Zhang et al., 2023a) introduces an end-to-end framework designed to retrieve relevant paragraphs at each hop of questions through beam search. Meanwhile, some efforts achieve more precise retrieval by decomposing the problem into QDMR format (Wolfson et al., 2020). Cao et al. (2023) decomposes the problem into a tree, while Park et al. (2023) constructs a reasoning graph. Compared to their approach, our method integrates diverse knowledge via complementary multisource reasoning. Besides, it explores reasoning trajectories at each hop of questions and prioritizes the most promising path, thereby eliciting better answer aggregation and reducing cascading errors. 3 Beam Aggregation Reasoning Beam Aggregation Reasoning decomposes complex questions into trees and conducts bottom-up reasoning. Throughout the bottom-up reasoning, it intricately amalgamates diverse knowledge sources, thereby mitigating the dearth of knowledge. Furthermore, beam aggregation probabilistically consolidates answers and selects reasoning pathways, consequently diminishing cascading errors. Firstly, we decompose complex questions, then perform bottom-up multi-source knowledge reasoning in a post-order traversal manner, and employ beam aggregation at intermediate nodes until reaching the root node to obtain the final answer. The procedure is depicted in Algorithm 1, with an overview of the methodology illustrated in Figure 2. We will introduce question decomposition in § 3.1, multi-source reasoning in § 3.2 and beam aggregation in § 3.3. Detailed definitions of notations in the algorithm and formulas can be found in Table 7. Task definition is given in Appendix A.1. 
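To make the tree representation and the bottom-up procedure concrete, the following is an illustrative Python rendering of the same control flow as Algorithm 1 below. The Node class and the stubs (multi_source_answers, fill_placeholders, the proportional normalize) stand in for the LLM- and retrieval-backed components of §3.2-§3.3; they are assumptions for readability, not the released implementation, and a fuller sketch of the voting stub appears after §3.2.

# Illustrative rendering of BeamAggR's bottom-up control flow (cf. Algorithm 1).
# The stubs below stand in for LLM prompting and retrieval; they are not the authors' code.
from dataclasses import dataclass, field
from itertools import product
from typing import Dict, List

@dataclass
class Node:
    question: str                                                 # may contain "#i" placeholders
    children: List["Node"] = field(default_factory=list)
    candidates: Dict[str, float] = field(default_factory=dict)    # answer -> probability

def multi_source_answers(question: str) -> Dict[str, float]:
    # Stub: in the paper this queries four knowledge sources and tallies votes
    # (a fuller sketch is given after Section 3.2).
    return {"candidate answer A": 3, "candidate answer B": 1}

def fill_placeholders(question: str, answers: List[str]) -> str:
    # Stub: substitute placeholder tags with the chosen child answers
    # (position-based here for brevity; the paper maps tags to sub-question ids).
    for i, ans in enumerate(answers, start=1):
        question = question.replace(f"#{i}", ans)
    return question

def normalize(votes: Dict[str, float]) -> Dict[str, float]:
    # Proportional normalization; the paper uses a temperature-scaled softmax (Eq. 2).
    total = sum(votes.values())
    return {a: f / total for a, f in votes.items()}

def top_k(dist: Dict[str, float], k: int) -> Dict[str, float]:
    return dict(sorted(dist.items(), key=lambda kv: -kv[1])[:k])

def beam_aggregation(node: Node, k: int = 2) -> str:
    """Post-order traversal: solve leaves first, then aggregate at composite nodes."""
    for child in node.children:
        beam_aggregation(child, k)

    if not node.children:                            # atomic question (leaf)
        node.candidates = top_k(normalize(multi_source_answers(node.question)), k)
    else:                                            # composite question
        marginal: Dict[str, float] = {}
        child_beams = [list(c.candidates.items()) for c in node.children]
        for combo in product(*child_beams):          # Cartesian product of child candidates (Eq. 3)
            filled = fill_placeholders(node.question, [a for a, _ in combo])
            prior = 1.0
            for _, p in combo:                       # weight of this combination
                prior *= p
            for ans, p in normalize(multi_source_answers(filled)).items():
                marginal[ans] = marginal.get(ans, 0.0) + p * prior   # marginalize (Eq. 4)
        node.candidates = top_k(marginal, k)
    return max(node.candidates, key=node.candidates.get)

root = Node("When did #1 become the capital of #2?",
            children=[Node("Where was the performer born?"),
                      Node("Which state was Knowles from?")])
print(beam_aggregation(root, k=2))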
Algorithm 1 Beam Aggregation Reasoning

Require: Complex multi-hop question Q
Require: Multi-source knowledge retriever R
Require: Large language model LLM
Require: Candidate size in aggregation k
1: Q_decomp ← LLM(Q)
2: for N^(i) in PostOrderTraverse(Q_decomp) do
3:   if N^(i) is a leaf node then
4:     â ← LLM(q^(i), R)
5:     a^(i), p^(i) ← Vote(â)[1:k]
6:   else
7:     initialize â, p̂ as empty lists
8:     for c′, p′ in CartProd(sons(N^(i))) do
9:       q′ ← MaskFill(q^(i), c′)
10:      a_t, p_t ← Vote(LLM(q′, c′, R))
11:      â ← Concat(â, a_t)
12:      p̂ ← Concat(p̂, p_t · p′)
13:    end for
14:    a^(i), p^(i) ← Aggr(â, p̂)[1:k]
15:  end if
16: end for
17: return a_root[1]

3.1 Question Decomposition

Multi-hop questions entail complex structures, such as bridge, comparison, composition, and their combinations. To address this, we parse complex questions into trees that express the reasoning structure. As shown in Figure 2(a), the complex question Q is decomposed into a tree comprising four simpler sub-questions, Q^(1) to Q^(4), with (compositional) dependencies among them. The root node represents the original complex question, the leaf nodes represent atomic sub-questions, and the intermediate nodes require compositional (or comparative) reasoning to obtain answers. Following Cao et al. (2023) and Su et al. (2023), we adopt #i as the placeholder for intermediate questions, enabling us to replace incomplete questions with solved sub-questions as we traverse to that node. Specifically, the decomposed question is represented in QDMR format (Wolfson et al., 2020). Afterward, we tackle the complex question in a bottom-up fashion, following a post-order traversal sequence. The tree is represented as a post-order traversal node sequence, Q_decomp = {N^(1), N^(2), ...}, with each node containing a question, candidate answers, and the associated probabilities, N^(i) = {q^(i), a^(i), p^(i)}. It is noteworthy that the model may produce structurally incorrect decompositions, which require post-filtering to ensure validity. We present the formal definitions in Table 7(b).

3.2 Complementary Multi-source Reasoning

To avoid hallucination caused by a lack of knowledge, we use four reasoning strategies combined with answer normalization to fuse information from diverse knowledge sources, as shown in Figure 2(b). The knowledge sources include implicit internal knowledge, explicit internal knowledge, Wikipedia knowledge, and web search knowledge:

a_s = \mathrm{LLM}(q, K_s)   (1)

where s \in \{\text{closebook}, \text{parametric}, \text{wiki}, \text{serp}\} is the reasoning strategy and K_s is the retrieved knowledge.

Internal Knowledge Reasoning. For implicit knowledge reasoning, we prompt the model with chain-of-thought demonstrations to perform closed-book reasoning. There are also studies suggesting that the model's parametric knowledge can serve as a retrieval source (Yu et al., 2023; Zhang et al., 2023b). We first prompt the model to generate parametric knowledge relevant to the question, and then employ it for explicit knowledge reasoning.

External Knowledge Reasoning. We utilize Wikipedia and search engines as external retrieval for external knowledge reasoning. For Wikipedia, we employ BM25 to conduct sparse retrieval over the full Wikipedia dumps. For search engines, we call the Google Search API and use the organic results as the retrieval content. After retrieval, we employ few-shot prompts to conduct reasoning over the retrieved documents.

Answer Normalization. After completing the four sets of independent knowledge reasoning, we merge them through voting to achieve knowledge fusion.
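As a concrete illustration of this voting step, together with the temperature-scaled normalization formalized in Eq. (2) just below, a minimal sketch follows. It expands the multi_source_answers stub from the earlier rendering; ask_llm, bm25_retrieve, and google_search are assumed callables, and the prompt strings, sample count, and temperature default are illustrative rather than the released prompts or configuration.

# Expanded sketch of the multi-source voting: four reasoning strategies plus answer tallying.
# ask_llm, bm25_retrieve, and google_search are assumed callables for LLM prompting,
# Wikipedia BM25 retrieval, and SERP retrieval; prompt strings are illustrative, not the paper's.
import math
from collections import Counter
from typing import Callable, Dict, List

def multi_source_votes(question: str,
                       ask_llm: Callable[[str], str],
                       bm25_retrieve: Callable[[str], List[str]],
                       google_search: Callable[[str], List[str]],
                       n_samples: int = 5) -> Counter:
    """Tally answers from the four strategies of Eq. (1): closebook, parametric, wiki, serp."""
    votes: Counter = Counter()
    for _ in range(n_samples):                                    # self-consistency sampling
        votes[ask_llm(f"Answer step by step: {question}")] += 1   # closebook CoT
        background = ask_llm(f"Write background knowledge about: {question}")
        votes[ask_llm(f"Knowledge: {background}\nQuestion: {question}")] += 1   # parametric
        wiki_ctx = "\n".join(bm25_retrieve(question))
        votes[ask_llm(f"Context: {wiki_ctx}\nQuestion: {question}")] += 1       # wiki
        serp_ctx = "\n".join(google_search(question))
        votes[ask_llm(f"Context: {serp_ctx}\nQuestion: {question}")] += 1       # serp
    return votes

def softmax_normalize(votes: Counter, tau: float = 3.0, k: int = 2) -> Dict[str, float]:
    """Temperature-scaled softmax over the top-k vote frequencies (cf. Eq. 2)."""
    top = votes.most_common(k)
    z = sum(math.exp(f / tau) for _, f in top)
    return {ans: math.exp(f / tau) / z for ans, f in top}

# Shape check with trivial stubs:
votes = multi_source_votes("Who is Souvenir's performer?",
                           ask_llm=lambda p: "Paul",
                           bm25_retrieve=lambda q: ["(retrieved Wikipedia passage)"],
                           google_search=lambda q: ["(retrieved search snippet)"])
print(softmax_normalize(votes))   # e.g. {'Paul': 1.0}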
Following that, we normalize the answer-frequency pairs into probability distributions for subsequent beam aggregation, as shown in Eq. (2):

p_i = \frac{\exp(f_i / \tau)}{\sum_{j=1}^{k} \exp(f_j / \tau)}   (2)

where f is the vote frequency and \tau is the temperature.

3.3 Beam Aggregation

In beam aggregation, we conduct reasoning over combinations of candidate answers inherited from sub-questions to expand the reasoning breadth and exploration space, as shown in Figure 2(c). We then select reasoning paths by maximizing the marginal probability distribution of predictions.

Beam Combination. At intermediate nodes, we need to conduct reasoning based on the answers derived from previous sub-questions, and each sub-question is associated with a set of answers and probabilities, termed candidates (or beams). An intermediate question may depend on multiple sub-questions, so we enumerate combinations among the candidates. Specifically, we compute the Cartesian product of the candidates, as shown in Eq. (3). To prevent combinatorial explosion, we restrict the exploration space with the beam size. Afterward, for each combination, we substitute the placeholders in the question with the candidate answers and conduct multi-source reasoning. Taking a composition node with two sub-questions (x) and (y) as an example:

\{ \langle a_i^{(x)}, a_j^{(y)} \rangle,\; p_i^{(x)} p_j^{(y)} \mid i, j = 1, 2, \dots, k \}   (3)

where (x), (y) denote sub-questions, k is the candidate size, a is an answer, and p is its associated probability. As illustrated in Figure 2(c), Q^(4) relies on Q^(2) and Q^(3). We calculate the Cartesian product of the candidates of Q^(2) and Q^(3) and perform substitution to derive new questions. For example, Q^(4)_{11} is obtained by substituting #2 with a_1^(2) and #3 with a_1^(3), yielding "When did Austin become the capital of Texas?", with P(Q^(4)_{11}) = p_1^(2) p_1^(3). Subsequently, we conduct multi-source reasoning on all substituted sub-questions.

Probabilistic Answer Aggregation. Beam combination explores numerous combinations of sub-questions, yielding multiple sets of answers and probabilities. Next, we aggregate these answers according to their probabilities to determine the optimal reasoning path. This can be formalized as an argmax that maximizes the marginal probability of each prediction:

P(y) = \sum_{q_i \in Q} P(y \mid x = q_i) \cdot P(q_i)   (4)

where Q is the original intermediate question, q_i is a substituted question, P(y | x = q_i) is the answer distribution of q_i, and P(q_i) is the weight of q_i. After probabilistic answer aggregation, we keep the top-k answers a and their probabilities p as candidates. The aggregated candidates continue propagating bottom-up, participating in subsequent beam aggregations until reaching the root node, where we take the answer with the highest probability as the final answer.

4 Experimental Setup

4.1 Datasets

We evaluate BeamAggR on four open-domain multi-hop reasoning datasets. HotpotQA (Yang et al., 2018), 2WikiMQA (Ho et al., 2020), and Bamboogle (Press et al., 2023) consist of two-hop questions, and MuSiQue (Trivedi et al., 2022) contains questions with 2 to 4 hops. For HotpotQA, 2WikiMQA, and MuSiQue, we use the development and test sets provided by IRCoT (Trivedi et al., 2023); both are extracted from the original development sets and consist of 100 and 500 instances, respectively. For Bamboogle, we use all 125 instances as the test set, without a separate development set, and use the same hyperparameters as for 2WikiMQA. The evaluation metric is token-level F1.
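Since token-level F1 drives all of the comparisons that follow, a minimal reference implementation is sketched below. The text normalization follows the common SQuAD-style convention (lowercasing, punctuation and article stripping) and is an assumption rather than the paper's exact evaluation script.

# Minimal sketch of token-level F1 between a prediction and a gold answer.
# Normalization follows the common SQuAD-style convention (assumed, not the paper's exact script).
import re
import string
from collections import Counter

def normalize_text(s: str) -> str:
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def token_f1(prediction: str, gold: str) -> float:
    pred_tokens = normalize_text(prediction).split()
    gold_tokens = normalize_text(gold).split()
    if not pred_tokens or not gold_tokens:
        return float(pred_tokens == gold_tokens)
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def max_f1_over_aliases(prediction: str, golds: list) -> float:
    """When entity aliases are available, score against the best-matching alias (cf. Appendix A.2)."""
    return max(token_f1(prediction, g) for g in golds)

print(max_f1_over_aliases("Austin, Texas", ["Austin", "City of Austin"]))  # ~0.67 against "Austin"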
On all four datasets, we conduct experiments in the open-domain setting, employing the entire Wikipedia dumps and web search for retrieval. 4.2 Implementation Details In the main experiment, we use GPT-3.5-turbo as the backbone. Since OpenAI has deprecated GPT3.5 (text-davinci series), we reproduce the baselines with GPT-3.5-turbo for a fair comparison1. Please refer to Appendix A.2 for more details. 4.3 Baselines Standard Prompting (Brown et al., 2020) directly generates the final answer with few-shot demonstrations. We use a 20-shot demonstration. Chain-of-Thought Prompting (Wei et al., 2022) generates reasoning steps before the final answer. We use a 20-shot demonstration. One-time Retrieval involves using the original question as the query, concatenating sparse retrieval and Google search results into the prompt to guide the model to perform CoT reasoning. Self-Ask (Press et al., 2023) adopts an iterative approach to decompose complex questions. It generates sub-questions iteratively based on existing reasoning, and then retrievals and answers them until reaching the final answer. IRCoT (Trivedi et al., 2023) interleaves retrieval-augmented reasoning and reasoningaugmented retrieval until the retrieval information is sufficient to answer the question. 1https://platform.openai.com/docs/deprecations 1233 Methods HotpotQA MuSiQue 2WikiMQA Bamboogle Overall Bridge Comp. Overall 2hop 3hop 4hop Overall Bridge Infer. Comp. B.C. Overall Close-book Reasoning SP 38.9 37.5 45.3 15.6 16.4 16.2 12.6 33.9 13.9 23.9 53.3 57.0 27.8 CoT 46.5 44.6 55.5 24.7 30.2 22.5 13.2 42.3 25.7 25.1 58.0 68.5 53.6 Retrieval-augmented Reasoning OneR ♢ 55.3 52.9 66.5 16.4 22.1 10.6 10.4 42.9 24.3 28.7 75.7 51.2 46.8 Self-Ask ♠ 49.4 45.3 68.6 16.2 24.4 8.8 7.5 46.9 31.6 40.5 71.5 52.6 51.9 IRCoT ♠ 56.2 53.4 69.6 24.9 31.4 19.2 16.4 56.8 44.2 22.7 89.0 69.4 55.0 FLARE ♠ 56.1 54.2 64.4 31.9 40.9 27.1 15.0 60.1 46.2 54.5 81.4 66.3 58.1 ProbTree ♣ 60.4 59.2 65.9 32.9 41.2 30.9 14.4 67.9 49.8 66.4 81.7 91.1 66.6 Ours 62.9(Ò2.5) 60.5 74.2 36.9(Ò4.0)43.3 36.1 20.5 71.6(Ò3.7) 55.1 64.9 89.9 90.4 74.8(Ò8.2) Table 1: Experimental results on four open-domain multi-hop reasoning datasets: HotpotQA, MuSiQue, 2WikiMQA and Bamboogle. Best and second results are highlighted by bold and underline. The evaluation metric is F1. All experiments are done with gpt-3.5-turbo-instruct through in-context learning. All baselines are instantiated with both sparse retriever and search engine. ♢: One-time retrieval. ♠: Iterative retrieval. ♣: Sub-question retrieval. FLARE (Jiang et al., 2023b) dynamically adjusts the retrieval timing based on the confidence of reasoning and conducts retrieval based on upcoming reasoning sentences. ProbTree (Cao et al., 2023) parses the question into a tree, employing logprobs-based sub-question aggregation to obtain the final answer. 5 Experimental Results 5.1 Main Results The experimental results on four multi-hop reasoning datasets are presented in Table 1. We observe that one-time retrieval can improve performance in HotpotQA, 2WikiMQA and Bamboogle. However, when facing the more challenging MuSiQue, onetime retrieval even impairs performance (24.7 Ñ 16.4), which indicates that inaccurate retrieval and knowledge conflicts may exacerbate hallucination. The iterative retrieval approach has made notable progress compared to one-time retrieval by offering more comprehensive knowledge. 
Additionally, methods based on sub-question retrieval have more explicit queries, which leads to a further improvement in retrieval accuracy, serving as the best-performing baseline method. As shown in Table 1, our method demonstrates superiority over all baselines. It outperforms the previous state-of-the-art, ProbTree, on all four datasets: HotpotQA (+2.5), MuSiQue (+4.0), 2WikiMQA (+3.7), and Bamboogle (+8.2). We attribute the improvement to three aspects: (i) Question decomposition leads to more accurate retrieval. Compared to iterative retrieval based on 2hop 3hop 4hop 0 10 20 30 40 Avg. F1 -33.7% -63.3% FLARE 2hop 3hop 4hop -25.0% -65.0% ProbTree 2hop 3hop 4hop -16.6% -52.7% Ours Figure 3: Performance gap with different reasoning steps. We adopt the original split in MuSiQue and report the average f1 score in each subset. As the number of reasoning steps escalates, the model’s performance declines. Our method exhibits a slower performance decline as reasoning steps increase, indicating its ability to effectively alleviate cascading errors. generation content, sub-questions serve as clearer retrieval queries. (ii) Complementary reasoning facilitates collaboration between diverse knowledge sources. Our method dynamically selects and leverages knowledge from multiple sources with fine granularity, thereby mitigating knowledge conflicts and omissions. (iii) Probability-based beam aggregation mitigates cascading errors and optimizes the reasoning trajectories through consistency. We observe significant improvements, particularly for Comp questions. Comparison questions involve interactions among sub-questions. With the aid of beam aggregation, the model can enumerate various combinations during reasoning, thereby encompassing a broader spectrum of possibilities (65.9 Ñ 74.2, 81.7 Ñ 89.9). Additionally, our method is proficient at addressing complex questions (3/4-hop), as shown in Figure 3. Our method exhibits a slower performance decline as reason1234 Methods HotpotQA MuSiQue 2WikiMQA Bamboogle Close-book Reasoning SP 23.1 4.3 8.5 11.0 CoT 31.3 16.1 34.2 33.0 Retrieval-augmented Reasoning OneR 43.6 11.4 36.3 28.1 FLARE 45.7 20.1 39.6 41.3 ProbTree 50.4 27.0 59.9 61.1 Ours 55.2 32.3 63.2 74.0 Ours (GPT-3.5) 62.9 36.9 71.6 74.8 Table 2: Experimental results on Mistral-7B. Our method still outperforms all baselines and is compatible with GPT-3.5-turbo, suggesting its generalizability. ing depth increases (8.4% in 3-hop and 12.3% in 4-hop), indicating its ability to precisely aggregate answers and effectively alleviate cascading errors. 5.2 Results on Open-source Model To prove the generalizability of our method to various models, we also conduct experiments on opensource models. We select Mistral-7B (Jiang et al., 2023a), the state-of-the-art LLM among similar scales. As shown in Table 2, Beam Aggregation significantly outperforms previous SOTA on all four datasets, demonstrating its model-agnostic nature and effectiveness. Furthermore, it is comparable to GPT-3.5-turbo on the Bamboogle dataset. 5.3 Ablation Study Effect of Multi-source Reasoning We conduct two sets of ablations: removing a single knowledge source and removing a type of knowledge, as shown in Table 3(a). Removing any knowledge source results in a certain performance decrease, suggesting that each type of knowledge contributes to reasoning (a.1-a.4). 
Furthermore, the declining trends (4.1% internal knowledge and 9.7% external knowledge) indicate that external knowledge makes a greater contribution compared to internal knowledge. Using only internal knowledge for reasoning results in a severe performance drop (a.5). Nevertheless, our method still outperforms chain-of-thought reasoning, which reflects the effectiveness of aggregation reasoning. Completely removing internal knowledge leads to a substantial decline, as shown in (a.6). This suggests that internal knowledge can complement external retrieval, proving the effectiveness of our method in knowledge collaboration. Effect of Beam Aggregation To validate the effectiveness of beam aggregation, we first employ Setting 2WikiMQA MuSiQue Beam Aggregation 71.6 36.9 (a) Multi-Source Knowledge 1. w/o closebook 69.2 35.1 2. w/o parameter 68.3 35.6 3. w/o wikipedia 65.8 33.1 4. w/o web search 63.4 32.7 5. internal only 52.0 27.3 6. external only 68.5 33.4 (b) Beam Aggregation 1. polarization aggregation 65.9 30.9 2. w/o probability 67.8 35.4 3. greedy aggregation 70.1 36.2 Table 3: Ablation results on multi-source knowledge reasoning and beam aggregation. Internal only: conduct internal reasoning only. External only: conduct external reasoning only. W/o probability: do not distinguish the weights of aggregated answers. Polarization aggregation: aggregate answers with logprobs. log-prob-based aggregation, which can only select a single knowledge source. As shown in Table 3(b), this leads to a notable decline, suggesting that coarse-grained polarized aggregation struggles to effectively integrate knowledge, thus underscoring the superiority of our beam aggregation. Probabilistic aggregation enables the distinction of the importance of answers. To investigate its effect, we conduct an ablation where the aggregated answers are treated with equal weights, as shown in (b.2). The performance drops by 5% and 4% on two datasets, suggesting that prioritizing reasoning trajectories can elicit better reasoning. Finally, we investigate the effect of candidate size. Larger candidate sizes result in broader reasoning breadth, but they also increase reasoning overhead. When it is set to 1, greedy aggregation strategy is employed. It is a cost-efficient variant of beam aggregation reasoning. As shown in (b.3), it causes a slight performance decrease, indicating that some erroneous reasoning may be corrected as the reasoning process progresses with the help of a broader scope of reasoning. We conduct a detailed analysis of reasoning performance and overheads in section 6.3. 6 Analysis 6.1 BeamAggR Facilitates Knowledge Collaboration To investigate the impact of Beam Aggregation on knowledge collaboration, we conduct a prelimi1235 0.0 0.5 1.0 1.5 2.0 2.5 Density Logprobs w = 0.167 Aggregation Internal Unified External Internal Unified External Figure 4: Distribution of knowledge integration in reasoning. Unified represents the integration of multisource knowledge in reasoning, while the two ends represent reliance on single-source knowledge. The bars represent original discrete distributions, and the curve is the kernel density estimate (KDE). 22.1% 22.4% 26.3% 29.2% HotpotQA 24.0% 12.2% 25.1% 38.6% MuSiQue 13.6% 19.2% 34.0% 33.3% 2WikiMQA 27.4% 32.7% 11.4% 28.6% Bamboogle Close Book Parameter Wikipedia SERP Close Book Parameter Wikipedia SERP Close Book Parameter Wikipedia SERP Close Book Parameter Wikipedia SERP Figure 5: The contribution of each reasoning strategy to the final answer in percentage. 
HotpotQA is balanced, while MuSiQue and 2WikiMQA leans to external knowledge, and Bamboogle favors internal knowledge. nary experiment. We track the model’s utilization of knowledge from different sources during the reasoning process, as depicted in Figure 4. The polarity aggregation method based on log-probs excessively relies on reasoning on a single source of knowledge. In more than 2/3 of cases, it can only utilize knowledge from a single source. In contrast, our approach facilitates better knowledge collaboration in reasoning. In summary, our beam aggregation can effectively employ knowledge from multiple sources at a finer granularity. Furthermore, we carry out a more detailed examination of the proportions of each type of knowledge in reasoning. We track the contribution of different knowledge sources to the final answer, as illustrated in Figure 5. Disparities in reasoning contributions across various datasets are observed. 2WikiMQA and MuSiQue tend to favor external reasoning, Bamboogle leans towards internal reasoning, and HotpotQA strikes a balance among different types of knowledge. It is noteworthy that, without manual adjustment of the weight of knowledge, Beam Aggregation can adaptively adjust its proportions during the reasoning process. This indicates that our method can effectively integrate multi-source knowledge. 0 10 20 Ours Diversity y = j^aj 0.4 0.6 0.8 1.0 Consistency y = ^p0 0.0 0.2 0.4 0.6 Uncertainty y = ¡ §(pilogpi) Leaf Mid Root 0 2 4 ProbTree Leaf Mid Root 0.4 0.6 0.8 1.0 Leaf Mid Root 0.0 0.2 0.4 0.6 Figure 6: The tendency of candidate answers distribution from leaf node to root. We define three metrics to measure the distribution. Diversity: The number of distinct answers. Consistency: The proportion of the majority answer. Uncertainty: The information entropy of answer distribution. 0 10000 20000 30000 Avg. token 40 50 60 Avg. F1 ↖ better efficiency Beam Aggregation Ours Ours (GD) Previous SOTA OneR IRCoT FLARE ProbTree Figure 7: Pareto chart for token consumption and performance (F1). The upper-left quadrant indicates higher efficiency. Results are averaged over four datasets. 6.2 Trends of Answer Aggregation in Bottom-up Reasoning We conduct a detailed analysis of the trends in answer aggregation throughout the reasoning process. To assess the characteristics of the answer distribution during the bottom-up reasoning process, we define three metrics: diversity, consistency, and uncertainty. The definition and tendency are shown in Figure 6. It can be observed that through the reasoning process, the diversity of answers improves, indicating its exploration of a wider range of possibilities. In contrast, consistency and uncertainty continue to decrease as the reasoning depth increases. This suggests that our method is capable of filtering out reasoning from erroneous during the bottom-up aggregation, dynamically choosing appropriate knowledge sources and reasoning trajectories. Conversely, methods based on log-probs aggregation are affected by inaccurate aggregation and cascading errors, thus unable to achieve these, highlighting the superiority of our method. 1236 6.3 Analysis of Reasoning Cost Retrieval-augmented generation often involves frequent invocation of LLMs, resulting in significant computational overhead. We compare the performance and overhead of five RAG methods. As illustrated in Figure 7, previous methods exacerbate reasoning overhead while improving performance. 
In contrast, our method not only surpasses the previous SOTA in performance but also incurs lower overhead. Moreover, our method can further reduce reasoning overhead through greedy aggregation. In summary, Beam Aggregation is Pareto efficient in balancing performance and reasoning overhead. Detail statistics can be found in Table 5. 7 Conclusion This paper introduces BeamAggR for knowledgeintensive multi-hop reasoning. BeamAggR utilizes a divide-and-conquer strategy, breaking down complex questions into a tree structure and conducting reasoning in a bottom-up fashion. At each step of reasoning, BeamAggR builds upon previous candidates to identify the most likely reasoning path. Furthermore, it incorporates multi-source reasoning to enhance knowledge collaboration. Overall, BeamAggR facilitates multi-hop reasoning through precise sub-question retrieval, efficient knowledge collaboration, accurate answer aggregation, and broad exploration of reasoning trajectories. Extensive experiments on four open-domain multi-hop reasoning datasets demonstrate its effectiveness. Limitations Our method involves beam combination across multiple candidates and self-consistency in each reasoning step, thereby increasing reasoning overhead. This overhead can be mitigated through greedy aggregation reasoning. The effectiveness of our framework is contingent upon the accurate decomposition of questions, which may pose significant challenges for large language models. Currently, we only use internal knowledge and unstructured external knowledge for reasoning. In future work, structured external knowledge, such as knowledge bases, can be integrated into reasoning to provide more comprehensive knowledge repositories (Li et al., 2024). Acknowledgements The research in this article is supported by the National Key Research and Development Project (2021YFF0901602), the National Science Foundation of China (U22B2059, 62276083), and Shenzhen Foundational Research Funding (JCYJ20200109113441941), Major Key Project of PCL (PCL2021A06). Ming Liu and Qianglong Chen are the corresponding authors. References Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego de Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack W. Rae, Erich Elsen, and Laurent Sifre. 2022. Improving language models by retrieving from trillions of tokens. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 2206–2240. PMLR. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. 
Shulin Cao, Jiajie Zhang, Jiaxin Shi, Xin Lv, Zijun Yao, Qi Tian, Lei Hou, and Juanzi Li. 2023. Probabilistic tree-of-thought reasoning for answering knowledgeintensive complex questions. In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 12541– 12560. Association for Computational Linguistics. Zheng Chu, Jingchang Chen, Qianglong Chen, Weijiang Yu, Tao He, Haotian Wang, Weihua Peng, Ming Liu, Bing Qin, and Ting Liu. 2023. A survey of chain of thought reasoning: Advances, frontiers and future. CoRR, abs/2309.15402. Dheeru Dua, Shivanshu Gupta, Sameer Singh, and Matt Gardner. 2022. Successive prompting for decomposing complex questions. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 1251– 1265. Association for Computational Linguistics. 1237 Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, Qianyu Guo, Meng Wang, and Haofen Wang. 2023. Retrievalaugmented generation for large language models: A survey. CoRR, abs/2312.10997. Xanh Ho, Anh-Khoa Duong Nguyen, Saku Sugawara, and Akiko Aizawa. 2020. Constructing A multi-hop QA dataset for comprehensive evaluation of reasoning steps. In Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 6609–6625. International Committee on Computational Linguistics. Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, and Ting Liu. 2023. A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions. CoRR, abs/2311.05232. Gautier Izacard, Patrick S. H. Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2023. Atlas: Few-shot learning with retrieval augmented language models. J. Mach. Learn. Res., 24:251:1–251:43. Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de Las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023a. Mistral 7b. CoRR, abs/2310.06825. Zhengbao Jiang, Frank F. Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023b. Active retrieval augmented generation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 7969–7992. Association for Computational Linguistics. Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1601–1611. Association for Computational Linguistics. Omar Khattab, Keshav Santhanam, Xiang Lisa Li, David Hall, Percy Liang, Christopher Potts, and Matei Zaharia. 2022. Demonstrate-search-predict: Composing retrieval and language models for knowledge-intensive NLP. CoRR, abs/2212.14024. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. 
Large language models are zero-shot reasoners. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur P. Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. Trans. Assoc. Comput. Linguistics, 7:452– 466. Angeliki Lazaridou, Elena Gribovskaya, Wojciech Stokowiec, and Nikolai Grigorev. 2022. Internetaugmented language models through few-shot prompting for open-domain question answering. CoRR, abs/2203.05115. Xingxuan Li, Ruochen Zhao, Yew Ken Chia, Bosheng Ding, Shafiq Joty, Soujanya Poria, and Lidong Bing. 2024. Chain-of-knowledge: Grounding large language models via dynamic knowledge adapting over heterogeneous sources. In The Twelfth International Conference on Learning Representations. Sewon Min, Victor Zhong, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2019. Multi-hop reading comprehension through question decomposition and rescoring. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 6097–6109. Association for Computational Linguistics. OpenAI. 2023. GPT-4 technical report. CoRR, abs/2303.08774. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022. Jinyoung Park, Ameen Patel, Omar Zia Khan, Hyunwoo J. Kim, and Joo-Kyung Kim. 2023. Graphguided reasoning for multi-hop question answering in large language models. CoRR, abs/2311.09762. Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, and Mike Lewis. 2023. Measuring and narrowing the compositionality gap in language models. In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 5687–5711. Association for Computational Linguistics. 1238 Libo Qin, Qiguang Chen, Fuxuan Wei, Shijue Huang, and Wanxiang Che. 2023. Cross-lingual prompting: Improving zero-shot chain-of-thought reasoning across languages. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 610, 2023, pages 2695–2709. Association for Computational Linguistics. Stephen E. Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond. Found. Trends Inf. Retr., 3(4):333–389. Zhihong Shao, Yeyun Gong, Yelong Shen, Minlie Huang, Nan Duan, and Weizhu Chen. 2023. Enhancing retrieval-augmented large language models with iterative retrieval-generation synergy. In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 9248–9274. Association for Computational Linguistics. Xin Su, Tiep Le, Steven Bethard, and Phillip Howard. 2023. 
Semi-structured chain-of-thought: Integrating multiple sources of knowledge for improved language model reasoning. CoRR, abs/2311.08505. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian CantonFerrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and finetuned chat models. CoRR, abs/2307.09288. Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2022. Musique: Multihop questions via single-hop question composition. Trans. Assoc. Comput. Linguistics, 10:539–554. Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2023. Interleaving retrieval with chain-of-thought reasoning for knowledgeintensive multi-step questions. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 10014–10037. Association for Computational Linguistics. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V. Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022. Tomer Wolfson, Mor Geva, Ankit Gupta, Yoav Goldberg, Matt Gardner, Daniel Deutch, and Jonathan Berant. 2020. Break it down: A question understanding benchmark. Trans. Assoc. Comput. Linguistics, 8:183–198. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 2369–2380. Association for Computational Linguistics. Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. 2023. Tree of thoughts: Deliberate problem solving with large language models. CoRR, abs/2305.10601. Wenhao Yu, Dan Iter, Shuohang Wang, Yichong Xu, Mingxuan Ju, Soumya Sanyal, Chenguang Zhu, Michael Zeng, and Meng Jiang. 2023. 
Generate rather than retrieve: Large language models are strong context generators. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net. Jiahao Zhang, Haiyang Zhang, Dongmei Zhang, Yong Liu, and Shen Huang. 2023a. Beam retrieval: General end-to-end retrieval for multi-hop question answering. CoRR, abs/2308.08973. Zhebin Zhang, Xinyu Zhang, Yuanhang Ren, Saijiang Shi, Meng Han, Yongkang Wu, Ruofei Lai, and Zhao Cao. 2023b. IAG: induction-augmented generation framework for answering reasoning questions. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 1–14. Association for Computational Linguistics. Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. 2023c. Automatic chain of thought prompting in large language models. In The Eleventh International Conference on Learning Representations, 1239 ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net. Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc V. Le, and Ed H. Chi. 2023. Least-to-most prompting enables complex reasoning in large language models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net. A Appendix A.1 Task Defination This paper focuses on open-domain multi-hop question-answering tasks. The task’s input is a multi-hop question Q that requires multi-step reasoning to solve, with each step of reasoning necessitating specific knowledge. Given Q as the query, the retriever R retrieves relative documents D “ tdiu|D| i“1. The retrieved documents D are then fed as context into the model for knowledge reasoning. y “ LMpQ, Dq. Our method employs LLMs with few-shot prompts and external retrieval to address the tasks. A.2 Implementation Details of BeamAggR Retrieval Setup We use the October 2017 Wikipedia dumps2 as the retrieval corpus, employing BM25 (Robertson and Zaragoza, 2009) implemented by Elasticseach as the sparse retriever. For the web search, we use the Google SERP provided by Serper3. For SERP results, we consider the top-3 organic results as well as the answer box (if available), resulting in 3 or 4 retrieval documents. We use wikipedia4 package to access raw Wikipedia content, and perform fuzzy matching to extract snippets in the document with fuzzywuzzy5 package. To ensure experimental fairness, we incorporate Google search results as additional knowledge for the baseline methods. Hyperparameters Our main experiments are conducted using gpt-3.5-turbo-instruct provided by Azure OpenAI 12-01-preview version. The experiments on open-source models are conducted using Mistral-7B (Jiang et al., 2023a). Table 4 shows detailed hyperparameters. 2https://hotpotqa.github.io/wiki-readme.html 3https://serper.dev/ 4https://pypi.org/project/wikipedia/ 5https://pypi.org/project/fuzzywuzzy/ Evaluation We use token-level F1 as evaluation metric. Besides, if entity aliases are available, we calculate the maximum F1 score between the prediction and each ground truth alias. Prompts For datasets except for Bamboogle, we adopt the question decomposition results provided by Cao et al. (2023). We convert their decomposition into our format and filter out those with invalid decomposition formats, regenerating them as necessary. 
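Complementing the retrieval setup described earlier in this appendix (BM25 over the Wikipedia dump via Elasticsearch, and Google SERP via Serper), a minimal sketch of the two external retrievers follows; the index name, field names, and client/endpoint usage are assumptions rather than the released configuration.

# Illustrative sketch of the two external retrievers described in Appendix A.2.
# Index name, field names, and endpoint usage are assumptions, not the released configuration.
import requests
from elasticsearch import Elasticsearch  # 8.x-style client assumed

es = Elasticsearch("http://localhost:9200")

def bm25_retrieve(query: str, k: int = 5) -> list:
    """BM25 sparse retrieval over a Wikipedia paragraph index (field name 'text' is assumed)."""
    resp = es.search(index="wikipedia-2017", query={"match": {"text": query}}, size=k)
    return [hit["_source"]["text"] for hit in resp["hits"]["hits"]]

def google_search(query: str, api_key: str, k: int = 3) -> list:
    """Organic results (plus answer box, if any) from the Serper Google Search endpoint."""
    resp = requests.post(
        "https://google.serper.dev/search",
        headers={"X-API-KEY": api_key, "Content-Type": "application/json"},
        json={"q": query},
        timeout=30,
    ).json()
    docs = [o.get("snippet", "") for o in resp.get("organic", [])[:k]]
    if "answerBox" in resp:
        docs.append(resp["answerBox"].get("snippet", ""))
    return docs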
Prompts. For datasets other than Bamboogle, we adopt the question decomposition results provided by Cao et al. (2023). We convert their decompositions into our format, filter out those with invalid decomposition formats, and regenerate them as necessary. For open-source models, which are limited by their capabilities, reasoning is conducted based on the question decomposition provided by GPT-3.5.

Hyperparameters | Values
# Retrieval doc | 5
# Retrieval serp | 3 or 4
# Parametric knowledge | 1
# Closebook prompt | 24
# Parametric prompt | 5
# Wikipedia prompt | 3
# SERP prompt | 5
# Question Decomp. prompt | 24
Beam size | 2
Temperature | 3
Self-consistency (τ = 0.7) | 5
Table 4: Hyperparameters of BeamAggR.

A.3 Details of Dataset
We provide statistics of the datasets, along with examples of each question type, as shown in Table 6. HotpotQA consists of 2-hop Bridge and Comparison questions. Bridge: inferring through the bridge entity to complete the 2-hop question. Comparison: comparing two entities from the same category, including yes/no questions. MuSiQue composites one-hop questions through bridging and composition into multi-hop complex questions, covering six categories of 2- to 4-hop questions. Bridge: composing two single-hop questions into a 2-hop question through a bridge entity. Composition: a question is connected to two sub-questions via two bridge entities, constituting a 3-hop question. Bridge and Composition: combining 2-hop bridge questions and 3-hop composition questions in different orders yields 4-hop composition/bridge questions. 2WikiMQA includes four types of questions: bridge, inference, comparison, and bridge-comparison. Bridge and comparison questions are similar to those in HotpotQA. Inference questions are similar to bridge questions, except they use inference relationships instead of bridge entities, such as "grandfather of". Bridge-comparison: comparing entities from two bridge sub-questions. Bamboogle consists of 2-hop bridge questions, which are similar to 2-hop (bridge) questions in MuSiQue. All questions in Bamboogle cannot be directly answered through search engines.

A.4 Details of Analysis
Distribution of knowledge integration. In order to facilitate a fair comparison between log-probs and probabilistic aggregation in the preliminary study, we limit our statistical analysis to the proportion of internal and external knowledge contained within the top-1 answers to atomic questions. This ensures that the outcome of the statistics is not affected by the types of decomposition (iterative or sub-question retrieval) or by different reasoning paths. The experimental results are averaged over 500 entries each from the HotpotQA and MuSiQue datasets. We plot the discrete distributions (histplot) and the kernel density estimate (kdeplot) in Figure 4. The y-axis is probability density; for the discrete distributions, the density is the percentage divided by the width of the bar. In this figure, the bar width w is set to 1/6 ≈ 0.167.
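Such a combined histogram/KDE figure can be produced with standard plotting libraries. The sketch below is our own illustration (not the authors' plotting code), with placeholder data standing in for the measured knowledge proportions:

```python
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

# Hypothetical data: per-question proportion of top-1 answers drawn from
# internal (parametric) knowledge, collected over the sampled questions.
internal_ratio = np.random.beta(2, 2, size=1000)  # placeholder values only

fig, ax = plt.subplots()
# Discrete distribution with bar width 1/6, normalized to probability density.
sns.histplot(internal_ratio, bins=6, binrange=(0, 1), stat="density", ax=ax)
# Smoothed kernel density estimate over the same data.
sns.kdeplot(internal_ratio, clip=(0, 1), ax=ax)
ax.set_xlabel("Proportion of internal knowledge in top-1 answers")
ax.set_ylabel("Probability density")
plt.show()
```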
Contribution of each knowledge source. Beginning with the final answer (the top-1 answer of the root node), we trace the reasoning path from top to bottom, counting the knowledge sources that participated in the voting for sub-question answers along this path. Finally, we compute the percentage of knowledge sources within the entire path. To ascertain that this combination of knowledge aggregation led to the correct answer, we omit erroneous samples from our analysis.

Tendency of candidate answer distribution. In the reasoning process, errors can accumulate, leading to cascading errors. Through multiple reasoning paths and probabilistic aggregation, Beam Aggregation can gradually reduce this uncertainty. To illustrate this, we classify reasoning hops into three types: leaf, mid, and root. We statistically analyze the candidate answer distribution at each hop and use diversity, consistency, and uncertainty to measure this distribution. Figure 6 shows a boxplot of these statistics. We also connect the median values of each hop with a line to show the trend more clearly. The analysis is conducted on the MuSiQue dataset, which has a clearer reasoning hop structure.

Token consumption. We compare the efficiency of one-time retrieval (OneR), iterative retrieval (IRCoT, FLARE), and sub-question retrieval (ProbTree, Ours). We use a Pareto chart to visualize the token consumption of each method, as shown in Figure 7. The token consumption per instance and the performance are averaged over the four datasets. The upper left indicates better efficiency (less token consumption and higher performance). Detailed token consumption for each dataset is shown in Table 5. We detail the method for calculating token consumption below. We measure the computational cost by evaluating the average token consumption per question. Specifically, the cost of each instance includes prompt tokens (demonstrations, question, retrieved documents) and completion tokens (reasoning traces, answer). For retrieval-augmented reasoning, the cost of a single API call is approximately 4000 tokens, whereas for non-retrieval reasoning the cost is less than 1000 tokens.

Methods | HotpotQA #token / F1 | MuSiQue #token / F1 | 2WikiMQA #token / F1 | Bamboogle #token / F1
One-Time Retrieval
OneR | 4053 / 55.3 | 3941 / 16.4 | 3892 / 42.9 | 4082 / 46.8
Iterative Retrieval
IRCoT | 13550 / 56.2 | 16877 / 24.9 | 13273 / 56.8 | 10347 / 55.0
FLARE | 17793 / 56.1 | 19180 / 31.9 | 16651 / 60.1 | 13358 / 58.1
Sub-question Retrieval
ProbTree | 25607 / 60.4 | 46431 / 32.9 | 39249 / 67.9 | 30004 / 66.6
Ours | 23720 / 62.9 | 36336 / 36.9 | 30178 / 71.6 | 26276 / 74.8
Ours (GA) | 17887 / 62.3 | 22522 / 36.2 | 21703 / 70.1 | 17507 / 74.0
Table 5: Detailed token cost per instance and performance on the four datasets. GA: greedy aggregation (§6.3).

A.5 Prompts
We provide manually annotated demonstrations on the 2WikiMQA dataset for reference. Our prompts are derived from IRCoT (Trivedi et al., 2023; https://github.com/StonyBrookNLP/ircot) and ProbTree (Cao et al., 2023; https://github.com/THU-KEG/ProbTree), to which we have made some modifications. The question decomposition is carried out using 24-shot demonstrations (Figure 10). For implicit internal knowledge, closed-book reasoning is conducted using 20-shot chain-of-thought demonstrations (Figure 11). For explicit internal knowledge, we first have the LLM generate parametric knowledge (Figure 12), followed by knowledge reasoning using 5-shot demonstrations (Figure 13). For external knowledge reasoning based on web search and Wikipedia, we use 5-shot and 3-shot prompts, respectively (Figures 15 and 14).

A.6 Formal Definition of Notations
We describe the formal definitions of the notations used in the algorithm pseudocode and formulas of this paper, as shown in Table 7.

Question Type | #Examples | Question Example
(a) HotpotQA
Bridge | 412 | Which team does the player named 2015 Diamond Head Classic's MVP play for?
Comparison | 88 | Did LostAlone and Guster have the same number of members?
(b) MuSiQue
2hop (Bridge) | 254 | Who succeeded the first President of Namibia?
3hop1 (Bridge) | 122 | What currency is used where Billy Giles died?
3hop2 (Composition) | 32 | When was the first establishment that McDonaldization is named after, open in the country Horndean is located?
4hop1 (Bridge) | 51 | When did Napoleon occupy the city where the mother of the woman who brought Louis XVI style to the court died?
4hop2 (Bridge + Composition) | 16 | How many Germans live in the colonial holding in Aruba's continent that was governed by Prazeres's country?
4hop3 (Composition + Bridge) | 25 | When did the people who captured Malakoff come to the region where Philipsburg is located?
(c) 2WikiMQA
Comparison | 119 | Who was born first, Albert Einstein or Abraham Lincoln?
Inference | 79 | Who is the maternal grandfather of Abraham Lincoln?
Bridge | 197 | Who is the founder of the company that distributed La La Land film?
Bridge-comparison | 105 | Which movie has the director born first, La La Land or Tenet?
(d) Bamboogle
2hop (Bridge) | 125 | When did the last king from Britain's House of Hanover die?
Table 6: Statistics and examples of each question type in HotpotQA, MuSiQue, 2WikiMQA and Bamboogle. It should be noted that some instances in these datasets trigger the content filter of the Azure OpenAI API, where the model refuses to respond; we have therefore omitted these filtered samples when calculating metrics.

Symbol | Dim. / Contains | Description
(a) Inputs
Q | — | Complex knowledge-intensive multi-hop question.
R(...) | — | Multi-source knowledge retriever.
LLM(...) | — | Large language model prompted with few-shot demonstrations. LLM(..., R) indicates that the reasoning is augmented by the retriever.
k | — | Hyper-parameter: candidate size in beam aggregation.
(b) Question Tree
Q_decomp | {N^(1), N^(2), ...} | Decomposed question tree, containing tree nodes. The root node N_root represents the original question, and the children of a node, sons(N^(i)), are sub-questions.
N^(i) | {q^(i), a^(i), p^(i)} | Tree node i, containing the sub-question, answers and probabilities.
q^(i) | R | Sub-question for node i, obtained in the decompose step. The question may contain placeholder tokens (e.g. #1) which need to be replaced by the answers of sub-questions (sons) during reasoning.
a^(i), p^(i) | R^k, R^k | Top-k candidate answers and corresponding probabilities of node i, obtained in the bottom-up reasoning step.
(c) Variables in the Reasoning Step
â, p̂ | R^?, R^? | Intermediate results for answers and corresponding probabilities, which need to be further voted on or aggregated.
a_t, p_t | R^?, R^? | Same as above.
c′, p′ | R^|sons|, R | One combination of sub-question answers and the corresponding probability. A combination is one element of the Cartesian product of the children's answers, c′ ∈ ∏_{i∈sons} a^(i) = a^(sons_1) × a^(sons_2) × ..., and p′ is the product of the children's probabilities for this combination.
(d) Functions in Pseudocode
PostOrderTraverse(Q) | [N] → [N] | Returns a new node sequence in post-order (child_0 < child_1 < ... < root).
CartProd(sons) | [N] → [[a], p] | Cartesian product of the answers and the associated probabilities of the children. For instance, if a node has two children x, y, this function returns the set {⟨a_i^(x), a_j^(y)⟩, p_i^(x) p_j^(y) | i, j = 1, 2, ..., k}.
MaskFill(q, c) | q, [a] → q | Replaces the placeholder token (#i) in the sub-question q with the candidate answer c_i.
Vote(a) | [a] → [a], [p] | Deduplicates answers and obtains a probability distribution based on their frequency. Returns are arranged in descending order of probability.
Aggr(a, p) | [a], [p] → [a], [p] | Returns the unique answers and their normalized probabilities, arranged in descending order of probability.
Table 7: The formal definition of notations used in the algorithms and formulas within this paper.

Question: The fourth largest city in Germany was originally called what?
Decomposition: Q1. What is the fourth largest city in Germany? Q2. What was #1 originally called?
Q1: What is the fourth largest city in Germany?
closebook answer: [Frankfurt, 3], [Cologne, 2] parametric answer: [Cologne, 5] document answer: [Regensburg, 3] serp answer: [Darmstadt, 5] > aggregated answer: [Cologne, 0.6607], [Darmstadt, 0.3392] Q2-1: What was [Cologne] originally called? Q2-2: What was [Darmstadt] originally called? > aggregated answer: [Colonia Claudia Ara Agrippinensium, 0.5229], [Colonia Agrippina, 0.1378], [Darmundestat, 0.2988], [the Grand Duchy of Hesse, 0.0404] Q: The fourth largest city in Germany was originally called what? aggregated answer: [Colonia Claudia Ara Agrippinensium, 0.6363], [Darmundestat, 0.3636] > final answer: Colonia Claudia Ara Agrippinensium ✓ ground truth: Colonia Claudia Ara Agrippinensium Reasoning Example on Bamboogle Figure 8: An example of reasoning on Bamboogle dataset (a) MuSiQue Question: Who played who sang is she really going out with him in the who influenced Beyonce movie? Decomposition: Q1. Which movie was influenced by the Who? Q2. Who sang ’Is She Really Going Out With Him’? Q3. Who played #2 in #1? (b) HotpotQA Question: What German state was Karl Julius Perleb born in? Decomposition: Q1. Where was Karl Julius Perleb born? Q2. What German state is #1 in? (c) 2WikiMQA Question: Which film has the director who was born later, Money On The Street or She-Devils On Wheels? Decomposition: Q1. Who is the director of film Money On The Street? Q2. When was #1 born? Q3. Who is the director of film She-Devils On Wheels? Q4. When was #3 born? Q5. Which film has the director who was born later, Money On The Street or She-Devils On Wheels? (#2, #4) (d) Bamboogle Question: Who built the fastest air-breathing manned aircraft? Decomposition: Q1. What is the name of the fastest air-breathing manned aircraft? Q2. Who built #1? Question Decomposition Examples Figure 9: Examples of question decomposition 1245 Question: Who is the performer of Live at this studio that employs the person who coined the term theatre of the absurd? Decompose: "Who is the performer of Live at this studio that employs the person who coined the term theatre of the absurd?": ["Where did the person who coined the term the theatre of the absurd work?", "Who is the performer at the Live at the #1 event?"], "Where did the person who coined the term the theatre of the absurd work?": ["Who coined the term the theatre of the absurd", "Where is #1 worked?"] Question: Who is the general treasurer of the state where Israel Arnold House is located? Decompose: "Who is the general treasurer of the state where Israel Arnold House is located?": ["What state is Israel Arnold House located?", "Who is the general treasurer of #1?"] Question: What weekly publication in the place of death of George Townsend is issued by the employer of the Yale labor historian who advised younger historians? Decompose: "What weekly publication in the place of death of George Townsend is issued by the employer of the Yale labor historian who advised younger historians?": ["Where the Yale labor historian who advised younger labor historians works?", "Where did George Townsend die?", "What weekly publication in #2 is issued by #1?"], "Where the Yale labor historian who advised younger labor historians works?": ["Which Yale labor historian advised other younger labor historians?", "Where #1 works?"] Question: When was the SEC championship game between the winner of the most national titles in NCAA football and Georgia? 
Decompose: "When was the SEC championship game between the winner of the most national titles in NCAA football and Georgia?": ["Who has the most national titles in NCAA football?", "When was the SEC championship game between #1 and georgia?"] Question: Who sings Never Say Never with the performer of As Long as You Love Me? Decompose: "Who sings Never Say Never with the performer of As Long as You Love Me?": ["Who is the performer of As Long as You Love Me?", "Who sings Never Say Never with #1?"] ...... Demonstrations of Question Decomposition Figure 10: Demonstrations of question decomposition. (24 shot) Question: When did the director of film Hypocrite (Film) die? Answer: The film Hypocrite was directed by Miguel Morayta. Miguel Morayta died on 19 June 2013. So the answer is **19 June 2013**. Question: Do director of film Coolie No. 1 (1995 Film) and director of film The Sensational Trial have the same nationality? Answer: Coolie No. 1 (1995 film) was directed by David Dhawan. The Sensational Trial was directed by Karl Freund. David Dhawan’s nationality is India. Karl Freund’s nationality is Germany. Thus, they do not have the same nationality. So the answer is **no**. Question: Are both Kurram Garhi and Trojkrsti located in the same country? Answer: Kurram Garhi is located in the country of Pakistan. Trojkrsti is located in the country of Republic of Macedonia. Thus, they are not in the same country. So the answer is **no**. Question: Who was born first out of Martin Hodge and Ivania Martinich? Answer: Martin Hodge was born on 4 February 1959. Ivania Martinich was born on 25 July 1995. Thus, Martin Hodge was born first. So the answer is **Martin Hodge**. Question: Which album was released more recently, If I Have to Stand Alone or Answering Machine Music? Answer: If I Have to Stand Alone was published in the year 1991. Answering Machine Music was released in the year 1999. Thus, of the two, the album to release more recently is Answering Machine Music. So the answer is **Answering Machine Music**. ...... Demonstrations of Close-book Reasoning Figure 11: Demonstrations of close-book reasoning on 2WikiMQA dataset. (20 shot) 1246 Provide the necessary background knowledge to answer the given question. Question: {} Knowledge: Instruction of Parametric Knowledge Generation Figure 12: Instruction of parametric knowledge generation. (zeroshot) Given a question and the relevant documents, answer the question and explain why. If you are unsure, answer Unknown. #1 Document: Kurram Garhi Kurram Garhi is a town located in the Kurram District of Khyber Pakhtunkhwa, Pakistan. It is situated on the bank of Kurram River and is approximately 12 kilometers away from the city of Parachinar, the district’s headquarters. The word "Kurram" is derived from the Sanskrit word "Kramar," which means "a place to live." It is believed that Kurram Garhi was named by the Hindu king, Raja Karanpal, who ruled the area in the 10th century. The town has a rich history and has been a significant strategic location throughout the centuries. It has been a part of various empires and has seen many battles between different rulers. In the 18th century, Kurram Garhi was under the control of the Mughal Empire, and later it became a part of the Durrani Empire. Question: Which country is Kurram Garhi located in? Answer: Kurram Garhi is located in the country of Pakistan. So the answer is **Pakistan**. #1 Document: Monte Galbiga is a mountain located in the province of Como, Lombardy, Italy. 
It has an elevation of 1,690 meters (5,545 feet) above sea level. The mountain is also known as the "Balcone d’Italia" (Balcony of Italy) due to its panoramic views of Lake Como and the surrounding mountains. The name "Galbiga" is derived from the Lombard word "galb" which means "height". The mountain is a popular destination for hikers and offers various trails and viewpoints. It is also a popular spot for paragliding and hang gliding. In addition to its natural beauty, Monte Galbiga also has historical significance. During World War II, it was used as a strategic observation point by the Italian army. Remains of fortifications and bunkers can still be found on the mountain. Question: In which country is the mountain Monte Galbiga located? Answer: The mountain Monte Galbiga is located in Italy. So the answer is **Italy**. ...... Demonstrations of Explicit Parametric Reasoning Figure 13: Demonstrations of explicit parametric reasoning on 2WikiMQA dataset. (5 shot) Given a question and the relevant Wikipedia text, answer the question and explain why. If you are unsure, answer Unknown. #1 Wikipedia Title: Hypocrite (film) Text: Hypocrite (Spanish: Hipócrita..!) is a 1949 Mexican thriller film directed by Miguel Morayta ... #2 Wikipedia Title: When the Legends Die Text: When The Legends Die is a 1963 novel, by Hal Borland, and a DeLuxe Color film released ... #3 Wikipedia Title: Who Is the Guilty? Text: Who is the Guilty? ( sometimes" Who is to Blame?") is a 1925 Georgian silent film ... #4 Wikipedia Title: Miguel Morayta Text: Miguel Morayta( 15 August 1907 – 19 June 2013) was a Spanish film director and screenwriter ... #5 Wikipedia Title: Joselito vagabundo Text: Joselito vagabundo(" Joselito Vagabond") is a 1966 Mexican film. It stars Sara García and is directed by ... Question: When did the director of film Hypocrite (Film) die? Answer: The film Hypocrite was directed by Miguel Morayta. Miguel Morayta died on 19 June 2013. So the answer is **19 June 2013**. ...... Demonstrations of Open-book Reasoning (Wikipedia) Figure 14: Demonstrations of open-book reasoning (wikipedia) on 2WikiMQA dataset. (3 shot) 1247 Please answer the question based on the snippet from Google search and provide an explanation. If you are unsure, answer Unknown. #1 Wikipedia Title: Shah dynasty Snippet: Dravya Shah was the youngest son of Yasho Brahma Shah, Raja (King) of Lamjung and grandson of Kulamandan Shah Khad, Raja (King) of Kaski ... #2 Wikipedia Title: List of monarchs of Nepal Snippet: The monarchs of Nepal were members of the Shah dynasty who ruled over the Kingdom of Nepal from 1743 to its dissolution in 2008 ... #3 Wikipedia Title: Krishna Shah Snippet:Krishna Shah (10 May 1938 – 13 October 2013) was an Indian-American/Gujarati film and theatre director, screenwriter, playwright, producer, and production/distribution executive ... Question: Who is the child of Krishna Shah (Nepalese Royal)? Answer: Krishna Shah was the father of Rudra Shah. So the answer is **Rudra Shah**. #1 Wikipedia Title: Kurram Garhi Hydropower Plant Snippet: Kurram Garhi Hydropower Plant (KGHPP) is a small, low-head, run-of-the-river hydroelectric power generation station of 4.0 megawatt generation capacity ... #2 Wikipedia Title: Kurram District Snippet: Kurram District is a district in the Kohat Division of the Khyber Pakhtunkhwa province of Pakistan.The name Kurram comes from the river Kwarma ... 
#3 Wikipedia Title: Kurrama River Snippet: The Kurrama River, or Kurram River, originates from the watershed of Spin Ghar region in the Paktia province of Afghanistan and the Kurram District of Pakistan. ... #4 Answerbox Title: Kurram Garhi Snippet: Kurram Garhi is a small village located near the city of Bannu, which is the part of Khyber Pakhtunkhwa province of Pakistan. Question: When did the director of film Hypocrite (Film) die? Answer: The film Hypocrite was directed by Miguel Morayta. Miguel Morayta died on 19 June 2013. So the answer is **19 June 2013**. ...... Demonstrations of Open-book Reasoning (SERP) Figure 15: Demonstrations of open-book reasoning (SERP) on 2WikiMQA dataset. (5 shot)
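To make the Vote and Aggr operations defined in Table 7 (and used in the Figure 8 walkthrough) concrete, the following is a minimal sketch in our own notation. It uses a simple frequency-based vote over a flat pool of sampled answers and does not reproduce the paper's source weighting or top-k pruning, so its outputs will not match the probabilities shown in Figure 8 exactly.

```python
from collections import Counter

def vote(answers):
    """Vote (Table 7): deduplicate answers and turn their frequencies into a
    probability distribution, sorted by descending probability."""
    counts = Counter(answers)
    total = sum(counts.values())
    ranked = counts.most_common()
    return [a for a, c in ranked], [c / total for a, c in ranked]

def aggr(answers, probs):
    """Aggr (Table 7): merge duplicate answers, sum their probabilities,
    normalize, and sort by descending probability."""
    merged = {}
    for a, p in zip(answers, probs):
        merged[a] = merged.get(a, 0.0) + p
    total = sum(merged.values())
    ranked = sorted(merged.items(), key=lambda kv: kv[1], reverse=True)
    return [a for a, p in ranked], [p / total for a, p in ranked]

# Simplified illustration of the Q1 step in Figure 8: answers proposed by the
# four knowledge sources are pooled and voted into a candidate distribution.
answers = (["Frankfurt"] * 3 + ["Cologne"] * 2   # closed-book samples
           + ["Cologne"] * 5                      # parametric samples
           + ["Regensburg"] * 3                   # Wikipedia samples
           + ["Darmstadt"] * 5)                   # SERP samples
cities, city_probs = vote(answers)
print(list(zip(cities, city_probs)))
```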
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12380–12403 August 11-16, 2024 ©2024 Association for Computational Linguistics

Agent LUMOS: Unified and Modular Training for Open-Source Language Agents
Da Yin♡* Faeze Brahman♠ Abhilasha Ravichander♠ Khyathi Chandu♠ Kai-Wei Chang♡ Yejin Choi♠♢ Bill Yuchen Lin♠
♠Allen Institute for AI ♡UCLA ♢University of Washington
https://allenai.github.io/lumos/
[email protected] [email protected]
* Work was done during Da's internship at AI2. Code: https://github.com/allenai/lumos. Models & Data: https://huggingface.co/ai2lumos

Abstract
Closed-source agents suffer from several issues such as a lack of affordability, transparency, and reproducibility, particularly on complex interactive tasks. This motivates the development of open-source alternatives. We introduce LUMOS, one of the first frameworks for training open-source LLM-based agents. LUMOS features a learnable, unified and modular architecture with a planning module that learns high-level subgoal generation, and a grounding module trained to translate these into actions using various tools in the execution module. The design allows for modular upgrades and wider applicability to diverse interactive tasks. To foster generalizable agent learning, we collect large-scale, unified, and high-quality training annotations derived from diverse ground-truth reasoning rationales across various complex interactive tasks. On 9 datasets, LUMOS exhibits several key advantages: (1) LUMOS outperforms multiple larger open-source agents on the held-out datasets (unused for training) for each task type. LUMOS even surpasses GPT agents on QA and web tasks; (2) LUMOS outperforms open-source agents produced by chain-of-thoughts and unmodularized integrated training; and (3) LUMOS effectively generalizes to unseen tasks, outperforming 33B-scale agents and domain-specific agents. Code and data will be released.

[Figure 1 schematic: closed LLMs (GPT-3.5/4) as agent controllers — expensive, black-box, not reproducible, less controllable — versus LUMOS, a unified, modular and open training framework based on LLAMA-2-7B/13B (affordable, transparent, easy to update, task-general, reproducible), applied to complex QA, math, web browsing, text games, interactive coding, and multimodal tasks.]
Figure 1: LUMOS is a unified, modular and open-source agent training framework that enables effective cross-task generalization while being easy to update. It also has advantages over closed-source agents in terms of affordability, transparency and reproducibility.

1 Introduction
Language agents execute actions and interact with external tools or environments, in service of a goal. They have evolved into crucial elements of AI systems targeted at solving complex interactive tasks. These tasks often require agents to perform long-horizon planning and interactive reasoning, and can range from QA (Yang et al., 2018; Geva et al., 2021), to web tasks (Deng et al., 2023; Zhou et al., 2023), math (Cobbe et al., 2021), and multimodal reasoning (Schwenk et al., 2022; Lu et al., 2022). Prior agent frameworks (Yao et al., 2022b; Shinn et al., 2023; Lin et al., 2023; Lu et al., 2023; Liu et al., 2023c) have primarily relied on closed-source large language model (LLM) APIs such as GPT-4 and ChatGPT (OpenAI, 2023a, 2022).
Though powerful, they can be prohibitively expensive, particularly for tasks with long contexts such as web tasks (which involve encoding long HTML code). Furthermore, the lack of transparency in closed-source LLMs hinders scientific understanding of their architectures and effectiveness, and provides limited reproducibility and controllability over their behavior. We argue that over-reliance on closed-source LLM-based agents is not conducive to the growth of research on language agents.

In this paper, we propose LUMOS, a generalizable Language agent framework via Unified, Modular, and Open Source training. LUMOS employs a unified and modular architecture broadly applicable to complex interactive tasks: a planning module, a grounding module, and an execution module. The planning module learns to decompose diverse complex tasks into a sequence of high-level subgoals. The grounding module is trained to communicate with the planning module and translate its generated subgoals into actions that can be executed through a collection of tools in the execution module. The LUMOS design allows for easy module upgrades to enhance new task planning, novel action grounding, and the addition of new tools, without impacting the others.

[Figure 2 schematic: LUMOS-OnePass (LUMOS-O) feeds the task description and action interfaces through planning, grounding, and execution in one pass (all subgoals → all actions → execution results), while LUMOS-Iterative (LUMOS-I) plans the next subgoal at each iteration from the previous subgoals and execution results. Two LUMOS-I examples are shown: a multimodal A-OKVQA task ("The device in her hand is from which country?", grounded to VQA and QA actions executed by LLAVA and an LLM, yielding Nintendo → Japan) and a Mind2Web web task (finding flights from Honolulu to NYC with a $1,300 budget for premium economy, grounded to Type and Click actions executed in a browser).]
Figure 2: Overall framework of LUMOS. LUMOS is trained with 56K high-quality training annotations. We propose two agent training and inference formulations, LUMOS-O (§2.2) and LUMOS-I (§2.3). LUMOS-O is an efficient formulation that enables one-pass inference; LUMOS-I is an adaptive formulation that helps agents flexibly plan based on execution feedback. We showcase two LUMOS-I running examples on A-OKVQA and Mind2Web.

To tackle the tasks through the agent modules, we propose two interaction formulations for implementing the language agents, LUMOS-OnePass (LUMOS-O) and LUMOS-Iterative (LUMOS-I). Outlined in Fig. 2, LUMOS-O is an efficient formulation that generates all the subgoals and actions through a single inference call, accomplishing the task in a one-pass manner. LUMOS-I is an iterative formulation that generates one subgoal at a time based on its previous execution results and environment updates, thereby enabling an adaptive agent. In addition, LUMOS utilizes a unified data format that encompasses multiple task types, thereby enabling the proposed agent framework to conveniently support a range of interactive tasks. These include, but are not limited to: question answering, mathematics, coding, web browsing, multimodal reasoning, and text games.
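As a purely illustrative sketch of what such a unified, task-agnostic training instance might look like, consider the record below. The field names are our own assumptions for exposition and are not LUMOS's actual schema; the paper's conversational annotation format is described in §3.2.

```python
# Hypothetical unified record for one training task (field names assumed,
# not the released data format). The subgoals, actions, and results follow
# the paper's running A-OKVQA example.
unified_example = {
    "task_type": "multimodal",              # qa / web / math / multimodal
    "task": "The device in her hand is from which country? "
            "<detailed image caption appended for multimodal tasks>",
    "subgoals": [
        "Subgoal 1: Identify the brand of the device in her hand.",
        "Subgoal 2: Answer the country of the device brand.",
    ],
    "actions": [
        ["R1 = VQA([IMG], Question: What's the brand of the device in her hand?)"],
        ["R2 = QA([R1], Question: What's the country of the device brand?)"],
    ],
    "execution_results": ["Nintendo", "Japan"],
}
```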
To obtain high-quality annotations for training LUMOS, we leverage the ground-truth rationales in existing benchmarks across various task types, and convert them into a unified format (§3). This conversion is achieved with the aid of strong LLMs, ensuring that the converted annotations adhere to a universally applicable format consistent with our modular design. Our proposed annotation conversion method results in around 56K multi-task multi-domain agent training annotations, one of the largest open-source resources for agent fine-tuning. The training annotations could serve as a resource for universally enhancing any open-source LLMs with agent capabilities. Our evaluation demonstrates that LUMOS provides improved or comparable performance with GPT-based or larger open-source agents across various complex interactive tasks that are commonly used for agent evaluation, including QA, web, math, and multimodal tasks. We summarize our contributions and results as follows: General Agent Framework with High-Quality Data. We introduce an open-source agent learning framework that trains LLMs with unified data, aimed at unifying complex interactive tasks and enhancing generalization on unseen tasks with new environments and actions. We hope our framework and annotations can facilitate future research in developing open-source language agents. Competitive Performance Across Tasks and Agent Training Formulations. LUMOS outperforms a great number of open-source agents on the LUMOS held-out datasets unused in LUMOS training data across the four training task types. LUMOS even surpasses GPT-based agents in web and QA tasks. Specifically, LUMOS shows a 5.0% enhancement over GPT-4 on Mind2Web, and 4.1% and 3.5% LLM accuracy1 improvement on HotpotQA over the ReAct and ReWOO agents fully based on GPT-3.5-turbo, respectively. Furthermore, we observe that LUMOS training formulations outperform other potential agent training methods, such as chain-of-thoughts and integrated training, which instruct a single module to both plan and ground. Cross-Task Generalization. We evaluate LUMOS on two unseen tasks, WebShop (Yao et al., 2022a), a text game for online shopping, and InterCodeSQL (Yang et al., 2023), an interactive code generation task. LUMOS even surpasses 30Bscale agents, especially by nearly 20 reward points on WebShop. LUMOS also delivers a consistent 1A metric defined in Xu et al. (2023) to identify the semantic equivalence between predictions and gold answers. 12381 reward improvement over domain-specific agents. This suggests that LUMOS can generalize across tasks, hinting at potential benefits for a wide spectrum of language agent applications. 2 LUMOS: A Modular Open-Source LLM-Based Agent Framework We introduce the overall design and two formulations for developing agents within this framework. 2.1 LUMOS Agent Architecture For various complex interactive tasks, a common solution would include: (1) decomposing the task into a series of subgoals, (2) converting subgoals into concrete actions, (3) executing those actions. This process corresponds to the planning, grounding, and execution modules in our framework.  Planning Module (PM). This module is designed to dissect a complex task into a series of high-level subgoals, expressed in natural language. For example, a multimodal question such as “The device in her hand is from which country?” necessitates two subgoals: (1) Identify the brand of the device in her hand; (2) Answer the country of the device brand. 
The devised subgoals assist in breaking down a complex task into lowlevel actions in an interpretable and tool-agnostic manner. The planning module is designed for easy debugging and learning new task planning, without affecting other modules. - Grounding Module (GM). This module transforms the high-level subgoals produced by the PM into low-level executable actions. For instance, the GM translates the subgoal, “Query the living period of Lowell Sherman,” into one or more actions, such as KnowledgeQuery(Lowell Sherman) and QA([R2], Query:“What is the living period of Lowell Sherman?”). Here, R2 refers to the previously retrieved knowledge that may be helpful in answering the query. The grounding module can be easily customized to learn new actions without impacting the planning module. { Execution Module (EM). The Execution Module (EM) is a program that implements the actions generated by the grounding module and gets execution results. It deploys a variety of off-the-shelf tools, including APIs, neural models, and virtual simulators. For instance, the execution module could call the Wikipedia or Google Search APIs to accomplish the KnowledgeQuery action. The main characteristic of the LUMOS framework is the interaction among the three modules. We propose two formulations promoting the communication: LUMOS-OnePass (LUMOS-O) and LUMOS-Iterative (LUMOS-I). 2.2 LUMOS-OnePass (LUMOS-O) The LUMOS-OnePass (LUMOS-O) formulation is an efficient method that generates all subgoals and grounded actions at once (efficiency study in App. E). As depicted in Fig. 2, this formulation uses the planning module to generate all n subgoals in a single inference call. We then pass all the generated subgoals to the grounding module, which translates them into a sequence of m low-level actions. Note that in addition to the task description and subgoals, we also provide action interfaces I to the grounding module as inputs. These action interfaces (e.g., “VQA(Image_Context, Query): Given the image context, answer the query.”) define the functionalities of actions and their valid arguments, guiding the grounding module to produce executable actions. Lastly, for example, the grounding module can produce all the corresponding actions, from VQA([IMG], Question: What’s the brand of the device in her hand?) to the final one QA(..., Question: What’s the country of ...?). Formally, the overall planning and grounding process of LUMOS-O is illustrated in Fig. 2. In the planning phase, the task description T is input into the planning module. This generates an output series of subgoals, expressed as S = πplan(T) = {s1, ..., sn}, where πplan is the parameters of trained planning module. Grounded actions are obtained via A = πground(T, I, S), with reliance on the task description, action interfaces I = {i1, ..., ik}, and the generated subgoals S. πground represents the parameters of the grounding module. We take the last execution result en as the final inference result for the given task. 2.3 LUMOS-Iterative (LUMOS-I) LUMOS-Iterative (LUMOS-I) is a formulation that generates one subgoal and its corresponding executable actions in each iteration. When generating the t-th subgoal, the planning module requires the previous planned subgoals and the execution results of their grounded actions as input. The execution results assist the planning module to be aware of the environmental change and decide next subgoal according to the up-to-date environments. 
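A minimal sketch of the two interaction formulations may help fix ideas. The function names and interfaces below are our own illustrative assumptions standing in for the trained planning module, grounding module, and tool-based execution module; they are not the released implementation, and the convention of returning None when planning should stop is likewise assumed.

```python
# Sketch of the LUMOS-O and LUMOS-I inference procedures (Sections 2.2-2.3).

def lumos_onepass(task, action_interfaces, plan, ground, execute):
    """LUMOS-O: plan all subgoals, ground all actions, then execute them."""
    subgoals = plan(task)                                # S = pi_plan(T)
    actions = ground(task, action_interfaces, subgoals)  # A = pi_ground(T, I, S)
    result = None
    for action in actions:
        result = execute(action)
    return result                                        # last execution result e_n

def lumos_iterative(task, action_interfaces, plan, ground, execute, max_steps=10):
    """LUMOS-I: plan one subgoal at a time, conditioning on execution feedback."""
    history = []           # [(subgoal, execution result), ...]
    grounded_so_far = []   # [(subgoal, actions), ...]
    result = None
    for _ in range(max_steps):
        subgoal = plan(task, history)                    # s_t = pi_plan(T, s_1, e_1, ...)
        if subgoal is None:                              # assumed stop signal from the planner
            break
        actions = ground(task, action_interfaces, grounded_so_far, subgoal)
        for action in actions:                           # one subgoal may need several actions
            result = execute(action)
        history.append((subgoal, result))
        grounded_so_far.append((subgoal, actions))
    return result
```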
12382 Take the VQA question “The device in her hand is from which country?” in Fig. 2 as an example. In the first iteration, the planning module will produce “Subgoal 1: Identify the brand of the device in her hand”. This subgoal is passed to the grounding module to generate the query actions, and obtain the executed results Nin tendo. The planning module then takes Nintendo along with the prior planning context as input to generate the next subgoal “Subgoal 2: Answer the country of Nintendo”. Planning upon the latest execution results would mitigate the risk of introducing a non-existent object in the environment or a random entity during the reasoning process (a case study in App. E). We demonstrate a single iteration of planning and grounding process of LUMOS-I in Fig. 2. To plan the t-th subgoal, we input the 1) task description T, 2) prior subgoals {s1, ..., st−1}, and 3) their executed results {e1, ..., et−1} into the planning module. We concatenate them in the order of T, s1, e1, ..., st−1, et−1 where the most recent subgoals and their results are placed in the end, as they have higher influence for planning t-th subgoal. The output would be the tth subgoal, st = πplan(T, s1, e1, ..., st−1, et−1). After then, the t-th subgoal will be directly incorporated into grounding module together with the prior grounding history and action interface I to generate the corresponding actions, At = πground(T, I, s1, A1, ..., st−1, At−1, st). Note that At is an executable action list, as the high-level subgoal might be decomposed into multiple low-level actions. We finally put the low-level actions At into execution module. The final execution result et can be sent back for planning (t + 1)-th subgoal. 3 Learning to Plan & Ground with Open-Source LLMs To guide planning and grounding modules to generate subgoals and valid low-level actions under our specified action interfaces, we fine-tune the two modules to produce the expected outputs. Training the modules requires the supervisions consisting of high-quality tasks, subgoals, and low-level actions. To equip smaller LLMs with instruction-following ability, prior works leverage methods such as Self-Instruct (Wang et al., 2023b) to synthesize training tasks and inputs, and directly generate ground-truth task outputs based on its created tasks. However, these methods are not suitable for generating high-quality annotations for training agents. For example, given a web browsing task in Mind2Web, GPT-4 only achieves around 20% step success rate (Liu et al., 2023b) when completing the task. Relying on such methods to generate complex interactive task annotations may degrade the annotation quality. Instead of creating annotations with LLMs from scratch, we exploit LLMs as a “style transfer” tool to convert ground-truth reasoning steps in existing benchmarks into the expected format in LUMOS formulations. There are a considerable number of the datasets annotated with either human-written solutions or structured action sequences2. For example, PRM800K (Lightman et al., 2023) is a math dataset containing the natural language solutions interleaved with formulas; StrategyQA (Geva et al., 2021) are a QA dataset with decomposed questions, supporting facts, and relevant Wikipedia paragraph indices; Mind2Web includes ground-truth action sequences, etc. They provide LLMs with fundamental information that sufficiently contributes to the annotation conversion. 
Next, we introduce 1) how we prompt LLMs to obtain the subgoal and action supervisions for training modules; 2) how to organize the subgoals and actions into the conversational forms aligning with LUMOS formulations; 3) how we train the modules with the final annotations. 3.1 Annotation Conversion Prompts To help LLMs better follow the annotation conversion instructions, we add 4-/5-shot examples in conversion prompts (see App. I for prompt details). We discuss the important elements in these in-context examples. The notations of the converted annotations have hat over letters. Action Interfaces. Action interfaces define the available actions that LLMs could ground to. Table 8 shows a comprehensive list of action definitions and their implementations. Ground-Truth Intermediate Reasoning Steps. We provide LLMs with ground-truth intermediate reasoning steps in existing benchmarks. With these as references, LLMs are able to summarize highlevel subgoals and synthesize corresponding actions according to the given action interfaces. 2More available resources for future extension and the discussion about the scalability of our annotation conversion methods are described in App. C. 12383 Subgoals and Corresponding Actions. When converting ground-truth reasoning steps into our expected annotations, we provide LLMs with examples about how to distill the high-level subgoals from the reasoning steps and map them into corresponding actions. In the in-context examples, we manually decompose a complex task into highlevel subgoals according to the context of groundtruth reasoning steps. Under each high-level subgoal, we write down multiple corresponding actions that help to accomplish the subgoal (shown in App. I). Given the exemplar subgoals and actions in the prompt, LLMs would emulate to generate subgoals and their paired actions when performing the conversion for new tasks. As the executed results of prior subgoals might be useful in future action implementation, we interlink the grounded actions in the in-context examples to allow context-dependent execution. One example of the interlinked actions is R1 = KnowledgeQuery(Zombies); R2 = ParagraphRetrieve(R1, Query: What color skin are zombies typically depicted with?). The agent could first find the zombie knowledge page (R1). Written in interlinked style, the ParagraphRetrieve action is able to receive the knowledge about zombies R1 as the context, and performs query-based retrieval. Intermediate Executed Results of Subgoals. The intermediate executed results ˆE play an important role in increasing LUMOS’s adaptability to environmental changes. Some datasets (e.g., GSM8K) offer execution results in their reasoning steps, i.e., the computation results of formulas. For the datasets without any execution results, their reasoning steps actually contain the relevant clues for the execution results. We take an example in StrategyQA. Though the answer of the annotated decomposed question “What color skin are zombies typically depicted with?” is not directly provided, the annotation contains a related fact “Zombies are often depicted as green in pallor.” that mentions the answer “green”. Thus, for each in-context example, we concatenate the relevant documents as well as our manually captured execution results in the conversion prompts. When converting new samples into LUMOS annotations, LLMs would automatically extract the executed results from the given documents. 
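To illustrate how interlinked action strings of this kind can be executed, the following is a minimal sketch. The regex, the naive ';' and ',' splitting (which would break on arguments containing those characters), and the toy tool registry are illustrative assumptions rather than LUMOS's actual execution module.

```python
import re

# Parses and executes strings such as
#   "R1 = KnowledgeQuery(Zombies); R2 = ParagraphRetrieve(R1, Query: ...?)"
ACTION_PATTERN = re.compile(r"(R\d+)\s*=\s*(\w+)\((.*)\)")

def run_interlinked(action_str, tools):
    """Execute a ';'-separated action string, resolving R<i> references."""
    results = {}
    for step in action_str.split(";"):
        match = ACTION_PATTERN.match(step.strip())
        if match is None:
            continue
        name, tool, raw_args = match.groups()
        args = [a.strip() for a in raw_args.split(",")]
        # Replace references like "R1" with previously executed results.
        args = [results.get(a, a) for a in args]
        results[name] = tools[tool](*args)
    return results

# Toy tools standing in for the real execution module.
tools = {
    "KnowledgeQuery": lambda topic: f"<wiki page about {topic}>",
    "ParagraphRetrieve": lambda page, query: f"paragraph from {page} for '{query}'",
}
print(run_interlinked(
    "R1 = KnowledgeQuery(Zombies); "
    "R2 = ParagraphRetrieve(R1, Query: What color skin are zombies typically depicted with?)",
    tools,
))
```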
After prompting LLMs with the conversion prompts, we can acquire the key elements of the training annotations, including the subgoals Ŝ, their corresponding actions Â, and the execution results Ê.

3.2 Organizing Conversational Annotations
Finally, to build the interaction between the planning and grounding modules, we organize the annotations into a conversational format.
Conversational Planning Module Annotation. As shown in App. A's Fig. 3a, we first play the user role to provide the task T̂ in the user prompt. For LUMOS-O, all the subgoals Ŝ are the planning module's final outputs. LUMOS-I requires a multi-turn conversational style. From Fig. 3a, the planning module appends the first ground-truth subgoal ŝ1 with index "Subgoal 1" as the first response supervision. We then put Subgoal 1's executed result ê1 with the prefix "The executed result for Subgoal 1 is" as the second user input. For the remaining turns, we act as the user, provide the execution result ê_{t−1} of the last subgoal ŝ_{t−1} to the planning module, and ask whether the planning should be stopped. The response supervisions cover whether the planning should be terminated; if not, the response should contain a new gold subgoal ŝ_t.
Conversational Grounding Module Annotation. As shown in App. A's Fig. 3b, we also first provide the task T̂ and the action interfaces Î to the grounding module in the first user turn. For LUMOS-O, we feed all the converted subgoal annotations Ŝ in the user prompt, and all the action annotations Â serve as the response to this prompt. For LUMOS-I, we input the current gold subgoal ŝ_t with the prefix "Subgoal to be grounded:". Its response would be ŝ_t's corresponding actions Â_t.

3.3 Training with Converted Annotations
As the LUMOS annotations are conversational, we formulate them as {x1, y1, ..., xi, yi, ..., xn, yn}, where xi is the i-th user prompt and yi indicates its ground-truth response. Following Wang et al. (2023a), during training we feed each entire multi-turn annotation into a decoder-only model while calculating the decoding loss only on the tokens of the ground-truth responses Y = {y1, ..., yi, ..., yn}. We apply binary masking on the user prompt tokens to prevent computing loss on them. The final loss function is L = −Σ_j log p_π(t_j | t_<j) · 1(t_j ∈ Y), where t_j denotes the j-th input token and 1(·) is the Boolean indicator function.

4 Experiments
We begin with the details of our experimental setup, including annotation conversion, module training, and the tools used in the execution module. We then evaluate LUMOS by: 1) comparing LUMOS with existing open-source LLM agents and GPT-based agents, 2) contrasting LUMOS against other agent training methods, 3) demonstrating LUMOS's generalizability on two unseen tasks involving new environments and actions, and 4) assessing LUMOS annotation quality.

4.1 Experimental Setup
Data Collection. Utilizing the conversion prompts discussed in §3.1, we employ GPT-4 (Achiam et al., 2023) versions of 8/13/2023 and 9/13/2023, and the GPT-4V (OpenAI, 2023b) version of 1/24/2023, to perform annotation conversion on the ground-truth reasoning steps in existing benchmarks. App. B provides the data sources used for annotation conversion. These include datasets for math, QA, web, and multimodal tasks. To help LUMOS be aware of the visual inputs in multimodal tasks, we append a detailed image caption generated by LLAVA-1.5-7B (Liu et al., 2023a) to the task description in the train and test sets.
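As a concrete companion to the training objective in §3.3 above, the response-only loss masking can be sketched as follows. This is a minimal illustration following common Hugging Face conventions (label positions set to -100 are ignored by the cross-entropy loss), not the authors' training code.

```python
import torch

IGNORE_INDEX = -100  # Hugging Face convention: these positions contribute no loss.

def build_labels(turns, tokenizer):
    """Concatenate a multi-turn example and mask out user-prompt tokens.

    `turns` is a list of (role, text) pairs; the loss is computed only on
    assistant (ground-truth response) tokens, mirroring the objective in §3.3.
    """
    input_ids, labels = [], []
    for role, text in turns:
        ids = tokenizer(text, add_special_tokens=False)["input_ids"]
        input_ids.extend(ids)
        if role == "assistant":
            labels.extend(ids)                          # supervise response tokens
        else:
            labels.extend([IGNORE_INDEX] * len(ids))    # binary mask on user tokens
    return torch.tensor(input_ids), torch.tensor(labels)
```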
After filtering out the ones that contain mismatched parentheses, invalid execution outputs or excessively lengthy outputs, we obtain 55,382 and 55,499 annotations for training the planning and grounding modules, respectively. Training and Action Interfaces. We adopt LLAMA-2-7B and LLAMA-2-13B (Touvron et al., 2023a) as the base models for both the planning and grounding modules. Details regarding the training process can be found in App. D. For solving interactive tasks, we integrate commonly used actions for each task into the pre-defined action interfaces. Details of supported executable actions are included in App. G. 4.2 Training Task Performance We evaluate LUMOS across an array of complex interactive tasks - QA, web, math and multimodal tasks. The evaluation mainly follows the settings established by AgentBench (Liu et al., 2023b) and ReWOO (Xu et al., 2023) (see App. H). For each task type, excluding web tasks, we include a heldout dataset to assess the model’s generalizability across the domains within the same task category. The performance is displayed in Tab. 1. Note that in Tab. 1, task-specific agents LUMOS-IX are trained using task-specific data belonging to task type X (e.g., Web, Math, QA, MM). LUMOS-IAll represents the agent after unified training with the combination of four task type annotations. More evaluation details are shown in App. H. LUMOS vs. Open-Source Agents. Overall, we find that LUMOS consistently outperforms various open-source agents across the seven datasets. Though the base models of some compared agents are 2-10× larger than LUMOS, LUMOS significantly excels in performance. Specifically, 7B LUMOS-I achieves 24.5% and 14.1% step success rate improvements over WizardLM-30B and AgentLM-70B on Mind2Web. The effectiveness of LUMOS is particularly manifested on the held-out datasets which are unused in LUMOS training data. Our observations reveal that LUMOS outperforms many baseline opensource agents across all held-out datasets. Notably, even though Orca-Platypus-13B (Lee et al., 2023) has been trained on a math corpus that includes GSM8K, its performance still 8.6% lower than LUMOS-O on SVAMP (Patel et al., 2021). Moreover, despite ReWOO and FiReAct being fine-tuned with in-domain HotpotQA annotations, LUMOS-I, without any fine-tuning on HotpotQA, still presents an impressive improvement. A similar trend can be observed on ScienceQA. We compare AutoAct-7B (Qiao et al., 2024), an open-source agent specifically trained on ScienceQA. LUMOS achieves 67.3% on the entire test set of ScienceQA, while AutoAct-7B’s performance is only 53.3%. LUMOS vs. GPT-based Agents. Although LUMOS is built on LLAMA-2-7B/13B, LUMOS-I delivers superior performance by 5.0-8.7% over GPT4 on Mind2Web. We also notice a 3.5-7.8% increase in LLM accuracy over the GPT-based ReWOO on the HotpotQA dataset when employing GPT-3.5-turbo as the implementation of the QA tool to ensure fair comparisons. 
Agents | Web Task: Mind2Web
GPT/API-based Agents
GPT-3.5-turbo | 15.7
GPT-4 | 22.6
Open-source Agents
Baichuan-13B-chat | 2.3
WizardLM-30B | 3.1
Koala-13B | 6.0
AgentLM-70B | 13.5⋆
LUMOS-IWeb | 27.6⋆
LUMOS-IWeb-13B | 31.3⋆
(a) Web performance in step-wise success rate.

Agents | Math Tasks: GSM8K / SVAMP
Open-source Agents
AgentLM-13B | 32.4
Code-Llama (PoT)-13B | 36.1 / 60.0
Platypus-30B | 37.8 / 51.7
ReWOO-open | ≈38
Orca-Platypus-13B | 38.4⋆ / 56.9
Alpaca-7B | ≈39
Galactica-30B | 41.7 / 41.6
LUMOS-OMath | 50.5⋆ / 65.5
LUMOS-IMath | 47.1⋆ / 63.6
LUMOS-OMath-13B | 55.4⋆ / 69.3
(b) Math performance in accuracy.

Agents | Multimodal Tasks: A-OKVQA / ScienceQA (IMG)
GPT/API-based Agents
GPT-3 + GT Caption | 45.4
Open-source Agents
ClipCap (VL) | 44.0⋆
KRISP (VL) | 51.9⋆
GPV-2 (VL) | 60.3⋆
MiniGPT-4-13B (VL) | 67.2 / 42.8
LLAVA-1.5-7B (VL) | 57.6
LUMOS-OMM | 70.1⋆ / 56.9
LUMOS-IMM | 71.3⋆ / 58.4
LUMOS-IMM-13B | 72.4⋆ / 58.2
(c) Multimodal performance in accuracy.

Agents | Agent Model | QA Tool | StrategyQA | HotpotQA (LLM Acc. / EM)
GPT/API-based Agents
GPT-3.5-CoT | GPT-3.5-turbo | GPT-3.5-turbo | 56.0 | 37.8 / 22.4
ReAcT | GPT-3.5-turbo | GPT-3.5-turbo | 64.6 | 40.8 / 32.4
ReWOO | GPT-3.5-turbo | GPT-3.5-turbo | 66.6 | 42.4 / 30.4
Open-source Agents
ReWOO-open | LLAMA-7B | GPT-3.5-turbo | ≈56 | ≈37 / –⋆
AgentLM | LLAMA-2-7B | LLAMA-2-7B | – | – / 22.3
FiReAct | LLAMA-2-7B | LLAMA-2-7B | – | – / 26.2⋆
FiReAct | CodeLLAMA-34B | CodeLLAMA-34B | – | – / 27.8⋆
LUMOS-OQA | LLAMA-2-7B | GPT-3.5-turbo | 60.6⋆ | 39.2 / 24.9
LUMOS-IQA | LLAMA-2-7B | LLAMA-2-7B | 58.3⋆ | 37.3 / 23.5
LUMOS-IQA | LLAMA-2-7B | GPT-3.5-turbo | 65.7⋆ | 45.9 / 29.4
LUMOS-IQA | LLAMA-2-7B | GPT-4 | 72.4⋆ | 56.8 / 36.0
LUMOS-IQA | LLAMA-2-13B | GPT-3.5-turbo | 65.3⋆ | 50.2 / 31.4
LUMOS-IQA | LLAMA-2-13B | GPT-4 | 76.7⋆ | 57.4 / 36.3
(d) QA performance. The evaluation metric is accuracy for StrategyQA, and LLM accuracy / Exact Match (EM) for HotpotQA.

Agents | Unseen Tasks: WebShop / InterCodeSQL
Baichuan-13B-chat | 5.7
Koala-13B | 6.0
WizardLM-30B | 10.6
Vicuna-v1.1-13B | 12.6
ChatGLM2 | 19.4
Vicuna-v1.3-33B | 23.9 / 6.7
Vicuna-v1.5-13B | 41.7 / 4.8
OpenChat-v3.2-13B | 46.9
Claude-instant | 49.7
LUMOS-IWeb-13B | 46.2 / 4.2
LUMOS-IMath-13B | 45.7 / 5.8
LUMOS-IQA-13B | 47.3 / 3.5
LUMOS-IMM-13B | 43.8 / 4.0
LUMOS-IAll-13B | 50.3 / 7.3
(e) Unseen tasks, WebShop and InterCodeSQL. The metrics are average reward and success rate, respectively.

Table 1: Overall performance of language agents on diverse complex interactive tasks. The tasks highlighted in red and blue are the held-in/held-out datasets for the trained task types. ⋆ indicates that the results are obtained after fine-tuning on the task's training set. We adopt the multiple-choice setting for A-OKVQA. IMG denotes the subset with image inputs in the ScienceQA test set. GPT-3 in Tab. 1c indicates text-davinci-002.

4.3 Agent Formulation Comparison
We train models using the same base model and data, but with different training methods. Chain-of-Thoughts (CoT) Training: for a given task T, the agent learns to produce both the chain-of-thoughts solution and the final answer directly. Integrated Agent Training: for a given task T, the agent learns to generate all the subgoals and actions using the same model. The execution module remains the same. This training paradigm is adopted in ReWOO-open, FiReAct and AgentLM. From Tab. 3, both LUMOS-I and LUMOS-O outperform CoT Training.3 They also exceed the integrated formulation, which relies on a single module to perform both planning and grounding. This highlights the importance of disentangling the subgoal planning and action grounding skills during agent training.
3 We do not implement CoT training on web tasks, as updates to the environment (e.g., changes to the HTML) are necessary intermediaries for planning subsequent actions.
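To make the contrast between the three training formulations more tangible, the sketch below shows how the supervision target might differ for the paper's running A-OKVQA example. The serializations are illustrative only; the exact target strings used for the baselines are not given in the paper, and the CoT rationale wording is our own.

```python
# Illustrative supervision targets for the formulations compared in Section 4.3.
task = "The device in her hand is from which country?"

cot_target = (
    "The device in her hand is a Nintendo device. Nintendo is a Japanese "
    "company. So the answer is Japan."
)  # CoT training: one model emits a free-form rationale plus the final answer.

integrated_target = (
    "Subgoal 1: Identify the brand of the device in her hand. "
    "R1 = VQA([IMG], Question: What's the brand of the device in her hand?); "
    "Subgoal 2: Answer the country of the device brand. "
    "R2 = QA([R1], Question: What's the country of the device brand?)"
)  # Integrated training: one model interleaves subgoals and grounded actions.

modular_targets = {
    "planning": "Subgoal 1: Identify the brand of the device in her hand.",
    "grounding": "R1 = VQA([IMG], Question: What's the brand of the device in her hand?)",
}  # LUMOS: the planning and grounding modules receive separate supervision.
```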
4.4 LUMOS Generalizability Evaluation
Since LUMOS employs a unified format to represent complex interactive tasks, we envision that after being trained with the combined annotations across the four training task types (i.e., unified training), LUMOS may adapt more effectively when faced with an unseen task type. To examine the generalization ability of LUMOS, we evaluate it on the unseen tasks WebShop (Yao et al., 2022a) and InterCodeSQL. WebShop resembles a text game,4 with its shopping environment and action space differing considerably from those covered in the training annotations of LUMOS. InterCodeSQL (Yang et al., 2023) is an interactive code generation task that requires the agent to generate SQL code based on external databases and involves unseen SQL commands. To make LUMOS adapt to these unseen tasks, we supplement 2-/3-shot examples in the module inputs, enabling them to learn how to generate subgoals and ground to new sets of available actions (see App. J).
4 WebShop utilizes four actions in its training annotations: Search, FeatureRetrieve, Pick, and Click. The argument of Click is a shopping item, differing from that of Click in Mind2Web, which includes an HTML element description.

As outlined in Tab. 1e, LUMOS-I achieves a 3–7 point higher average reward than domain-specific agents on WebShop. It also significantly outperforms larger agents such as WizardLM-30B and Vicuna-v1.3-33B (Chiang et al., 2023). We observe a similar trend on InterCodeSQL. This suggests that unified training enables agents to generalize better to unseen tasks and novel actions. We provide more unified training results in App. F to show that unified training also boosts performance on most training task types.

4.5 Further Analysis on Training Annotations
We aim to address three questions pertinent to quality and format decisions. Q1: How good are our converted training annotations? Q2: Would adopting low-level subgoals be more effective than using high-level subgoals? Q3: Would incorporating LUMOS annotations with general instruction-tuning data achieve comparable performance on instruction-following benchmarks?
Assessment of Annotation Quality. We assess the quality of our annotations by training models with them and evaluating the agents' performance. We compare them with the ReWOO-open data, constructed from HotpotQA and TriviaQA (Joshi et al., 2017) using the Self-Instruct method. For a fair comparison, we train the base model of ReWOO-open, LLAMA-7B, using the LUMOS annotations. We also adopt the integrated training formulation and sample 2,000 training examples to keep the same training settings as ReWOO-open. Given that the ReWOO-open data relies exclusively on QA benchmarks, we primarily focus on QA task evaluation. As shown in Tab. 2, the LUMOS data yields an improvement over the ReWOO-open data on StrategyQA and HotpotQA. Note that even though the ReWOO-open data are based on HotpotQA, they still underperform LUMOS on HotpotQA.

Training Data | StrategyQA | HotpotQA
Downstream Perf. of Training Different Data w/ LLAMA
ReWOO-open Data | ≈57 | ≈37
LUMOS-IQA Data | 58.3 | 38.1
Perf. Using High- & Low-Level Subgoal Annots. w/ LLAMA-2
LUMOS-IQA w/ Low-Level Subgoals | 63.3 | 44.3
LUMOS-IQA Data | 65.7 | 45.9
Table 2: Comparison between the 7B-sized agents trained with different annotations.

Low-Level Subgoal vs. High-Level Subgoal.
As described in §2, we ask LLMs to generate high-level subgoals corresponding to one or many low-level actions. An alternative annotation could be one where each subgoal corresponds solely to one low-level action, i.e., the subgoal is "low-level". We direct LLMs to create low-level subgoals by modifying the annotation conversion prompt to fit the format where a subgoal is strictly linked to one action. Tab. 2 reveals a drop after replacing high-level subgoals with low-level ones on both QA datasets. This result hence reaffirms the appropriateness of our initial subgoal design.
Effect on General Instruction-Following Benchmark. As LUMOS adopts the conversational style discussed in Sec. 3, any task that can be formalized as a conversation can be accommodated. Therefore, LUMOS supports all instruction-tuning tasks as well as complex interactive tasks. We conduct an experiment to show the effectiveness of LUMOS on instruction-tuning tasks. By integrating the general instruction-tuning training data, Alpaca, with the unified training annotations of LUMOS, we created an expanded training dataset. To guide LUMOS towards generating straightforward responses, we appended the phrase "Respond to me directly" to each input instance from Alpaca, instructing the model to avoid elaborate reasoning processes for the general instruction-tuning tasks. Tested on the SuperNaturalInstruction benchmark (Wang et al., 2022), LUMOS achieved a 39.3 ROUGE-L score, which is close to that of LLAMA-2-7B trained on Alpaca alone (39.8), suggesting the possibility of developing agent reasoning ability and general instruction-following ability simultaneously.

Agents | Web Task: Mind2Web
Integrated Training | 25.3
LUMOS-IWeb | 27.6
(a) Web task.
Agents | Math Tasks: GSM8K / SVAMP
CoT Training | 40.4 / 52.2
Integrated Training | 45.5 / 61.7
LUMOS-OMath | 50.5 / 65.5
LUMOS-IMath | 47.1 / 63.6
(b) Math tasks.
Agents | QA Tasks: StrategyQA / HotpotQA
CoT Training | 58.3 / 22.1
Integrated Training | 62.3 / 39.6
LUMOS-OQA | 60.6 / 39.2
LUMOS-IQA | 65.7 / 45.9
(c) QA tasks.
Agents | Multimodal Tasks: A-OKVQA / ScienceQA
CoT Training | 68.3 / 57.3
Integrated Training | 70.9 / 57.6
LUMOS-OMM | 70.1 / 56.9
LUMOS-IMM | 71.3 / 58.4
(d) Multimodal tasks.
Table 3: Comparison among different formulations of training language agents. The metric for HotpotQA is LLM accuracy (%). All the experiments are based on LLAMA-2-7B.

5 Related Work
LLM Agents. Language agents have shown potential in solving diverse complex interactive tasks. ReAct (Yao et al., 2022b) introduced a prompting method that shaped LLMs as language agents and grounded them in external environments. Subsequently, several methods (Shen et al., 2023; Lu et al., 2023; Xu et al., 2023; Lin et al., 2023; Liu et al., 2023c) aimed at improving agent performance and increasing their applicability in diverse scenarios. These agents mainly rely on closed-source LLMs, without addressing the affordability, reproducibility, and transparency issues that arise on complex interactive tasks.
Improving Small Models for Building Agents. Recent works have utilized larger models to generate training data for fine-tuning smaller models (Bosselut et al., 2019; West et al., 2022; Wang et al., 2023b; Hsieh et al., 2023; Brahman et al., 2023) to enable them to follow instructions and perform chain-of-thoughts reasoning. We also observe contemporaneous efforts ReWOO-open (Xu et al., 2023), FireAct (Chen et al., 2023), AgentLM (Zeng et al., 2023), and AutoAct (Qiao et al., 2024), focusing on training agents on smaller LLMs.
5 Related Work

LLM Agents. Language agents have shown potential in solving diverse complex interactive tasks. ReAct (Yao et al., 2022b) introduced a prompting method that shaped LLMs as language agents and grounded them in external environments. Subsequently, several methods (Shen et al., 2023; Lu et al., 2023; Xu et al., 2023; Lin et al., 2023; Liu et al., 2023c) aimed at improving agent performance and increasing their applicability in diverse scenarios. These agents mainly rely on closed-source LLMs, without addressing the affordability, reproducibility, and transparency issues that arise for complex interactive tasks.

Improving Small Models for Building Agents. Recent works have utilized larger models to generate training data for fine-tuning smaller models (Bosselut et al., 2019; West et al., 2022; Wang et al., 2023b; Hsieh et al., 2023; Brahman et al., 2023) to enable them to follow instructions and perform chain-of-thought reasoning. We also observe contemporaneous efforts, ReWOO-open (Xu et al., 2023), FireAct (Chen et al., 2023), AgentLM (Zeng et al., 2023), and AutoAct (Qiao et al., 2024), focusing on training agents on smaller LLMs. Unlike FireAct and AutoAct, our work delves into a more in-depth analysis, aiming to discover a unified task representation that enables agents to generalize effectively across unseen interactive tasks. In contrast to ReWOO-open and AgentLM, we further examine proper training formulations and study multiple strategies for creating large-scale, high-quality datasets for agent training. We demonstrate LUMOS's superior performance in §4.

6 Conclusion

We introduce LUMOS, an open-source, generalizable language agent training framework. We propose two formulations, LUMOS-I and LUMOS-O, which promote collaboration among agent modules to solve complex tasks. For module training data, we use LLMs to transform reasoning steps in existing benchmarks into a unified format applicable within the LUMOS framework. LUMOS outperforms a variety of open-source agents across the 9 datasets. It performs even better than GPT agents on QA and web tasks. LUMOS also outperforms alternative agent training formulations and exhibits superior generalization on two unseen interactive tasks.

Limitations

Covered Training Task Types. Currently, LUMOS is trained using annotations for four specific types of complex interactive tasks, which may still limit its generalization capabilities for novel tasks. To address this, we aim to enrich the training data for LUMOS by incorporating a wider variety of task types. As outlined in §3 and App. C, a substantial array of benchmarks already exists, providing ground-truth reasoning steps that could serve as a foundation for expanding LUMOS's annotations. By broadening the scope of annotations, we not only enhance the language agents but also offer a valuable resource for practitioners looking to develop their own models.

Backtracking and Replanning Ability. In situations where language agents encounter invalid execution outcomes or navigate erroneous solution pathways, it is crucial for them to possess the capacity for self-diagnosis and for replanning their reasoning processes. The current LUMOS lacks these sophisticated self-corrective features. Future versions should be designed with advanced mechanisms that enable the agents to recognize and rectify their planning errors.

Open-Source Tool Replacement. For part of our QA experiments, we employ GPT models to address decomposed sub-questions. This is designed for a fair comparison with the agents that also use GPT models as QA tools, as elaborated in §4.2. Our future strategy involves transitioning to fully open-source QA frameworks that leverage models such as LLAMA-2-70B, aiming to establish a completely open-source framework.

Acknowledgement

This work was funded in part by the DARPA MCS program through NIWC Pacific (N66001-19-2-4031), DARPA ECOLE (#HR00112390060), and the Allen Institute for AI. We thank Mosaic team members and the anonymous reviewers for the helpful discussions.

References

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774.

Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: Commonsense transformers for automatic knowledge graph construction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4762–4779, Florence, Italy. Association for Computational Linguistics.
Faeze Brahman, Chandra Bhagavatula, Valentina Pyatkin, Jena D Hwang, Xiang Lorraine Li, Hirona J Arai, Soumya Sanyal, Keisuke Sakaguchi, Xiang Ren, and Yejin Choi. 2023. Plasma: Making small 12388 language models better procedural knowledge models for (counterfactual) planning. arXiv preprint arXiv:2305.19472. Baian Chen, Chang Shu, Ehsan Shareghi, Nigel Collier, Karthik Narasimhan, and Shunyu Yao. 2023. Fireact: Toward language agent fine-tuning. arXiv preprint arXiv:2310.05915. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An opensource chatbot impressing gpt-4 with 90%* chatgpt quality. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168. Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, and Yu Su. 2023. Mind2web: Towards a generalist agent for the web. arXiv preprint arXiv:2306.06070. Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021. Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies. Transactions of the Association for Computational Linguistics, 9:346– 361. Yu Gu, Sue Kase, Michelle Vanni, Brian Sadler, Percy Liang, Xifeng Yan, and Yu Su. 2021. Beyond iid: three levels of generalization for question answering on knowledge bases. In Proceedings of the Web Conference 2021, pages 3477–3488. Cheng-Yu Hsieh, Chun-Liang Li, Chih-kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alex Ratner, Ranjay Krishna, Chen-Yu Lee, and Tomas Pfister. 2023. Distilling step-by-step! outperforming larger language models with less training data and smaller model sizes. In Findings of the Association for Computational Linguistics: ACL 2023, pages 8003–8017, Toronto, Canada. Association for Computational Linguistics. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611, Vancouver, Canada. Association for Computational Linguistics. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics. Ariel N. Lee, Cole J. Hunter, Nataniel Ruiz, Bleys Goodson, Wing Lian, Guan Wang, Eugene Pentland, Austin Cook, Chanvichet Vong, and "Teknium". 2023. Openorcaplatypus: Llama2-13b model instruct-tuned on filtered openorcav1 gpt-4 dataset and merged with divergent stem and logic dataset model. https://huggingface.co/Open-Orca/ OpenOrca-Platypus2-13B. Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2023. Let’s verify step by step. 
arXiv preprint arXiv:2305.20050. Bill Yuchen Lin, Yicheng Fu, Karina Yang, Prithviraj Ammanabrolu, Faeze Brahman, Shiyu Huang, Chandra Bhagavatula, Yejin Choi, and Xiang Ren. 2023. Swiftsage: A generative agent with fast and slow thinking for complex interactive tasks. ArXiv preprint, abs/2305.17390. Xi Victoria Lin, Chenglong Wang, Luke Zettlemoyer, and Michael D Ernst. 2018. Nl2bash: A corpus and semantic parser for natural language interface to the linux operating system. arXiv preprint arXiv:1802.08979. Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. 2023a. Improved baselines with visual instruction tuning. arXiv preprint arXiv:2310.03744. Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, et al. 2023b. Agentbench: Evaluating llms as agents. arXiv preprint arXiv:2308.03688. Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, et al. 2023c. Bolaa: Benchmarking and orchestrating llm-augmented autonomous agents. arXiv preprint arXiv:2308.05960. Pan Lu, Swaroop Mishra, Tony Xia, Liang Qiu, KaiWei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. 2022. Learn to explain: Multimodal reasoning via thought chains for science question answering. In The 36th Conference on Neural Information Processing Systems (NeurIPS). Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, KaiWei Chang, Ying Nian Wu, Song-Chun Zhu, and Jianfeng Gao. 2023. Chameleon: Plug-and-play compositional reasoning with large language models. arXiv preprint arXiv:2304.09842. 12389 Xing Han Lù, Zdenˇek Kasner, and Siva Reddy. 2024. Weblinx: Real-world website navigation with multiturn dialogue. Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su. 2020. A diverse corpus for evaluating and developing English math word problem solvers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 975–984, Online. Association for Computational Linguistics. Swaroop Mishra, Matthew Finlayson, Pan Lu, Leonard Tang, Sean Welleck, Chitta Baral, Tanmay Rajpurohit, Oyvind Tafjord, Ashish Sabharwal, Peter Clark, and Ashwin Kalyan. 2022. LILA: A Unified Benchmark for Mathematical Reasoning. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 5807–5832, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. OpenAI. 2022. ChatGPT. OpenAI. 2023a. Gpt-4 technical report. ArXiv, abs/2303.08774. OpenAI. 2023b. Gpt-4v(ision) system card. Arkil Patel, Satwik Bhattamishra, and Navin Goyal. 2021. Are NLP models really able to solve simple math word problems? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2080–2094, Online. Association for Computational Linguistics. Shuofei Qiao, Ningyu Zhang, Runnan Fang, Yujie Luo, Wangchunshu Zhou, Yuchen Eleanor Jiang, Chengfei Lv, and Huajun Chen. 2024. Autoact: Automatic agent learning from scratch via self-planning. arXiv preprint arXiv:2401.05268. Dustin Schwenk, Apoorv Khandelwal, Christopher Clark, Kenneth Marino, and Roozbeh Mottaghi. 2022. A-okvqa: A benchmark for visual question answering using world knowledge. In European Conference on Computer Vision, pages 146–162. Springer. Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023. 
Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv preprint arXiv:2303.17580. Noah Shinn, Beck Labash, and Ashwin Gopinath. 2023. Reflexion: an autonomous agent with dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366. Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Cote, Yonatan Bisk, Adam Trischler, and Matthew Hausknecht. 2020. Alfworld: Aligning text and embodied environments for interactive learning. In International Conference on Learning Representations. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023a. Llama: Open and efficient foundation language models. ArXiv preprint, abs/2302.13971. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2022. MuSiQue: Multihop questions via single-hop question composition. Transactions of the Association for Computational Linguistics, 10:539–554. Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A Smith, Iz Beltagy, et al. 2023a. How far can camels go? exploring the state of instruction tuning on open resources. arXiv preprint arXiv:2306.04751. Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2023b. Self-instruct: Aligning language models with self-generated instructions. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13484–13508, Toronto, Canada. Association for Computational Linguistics. Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Atharva Naik, Arjun Ashok, Arut Selvan Dhanasekaran, Anjana Arunkumar, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Kuntal Kumar Pal, Maitreya Patel, Mehrad Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Savan Doshi, Shailaja Keyur Sampat, Siddhartha Mishra, Sujan Reddy A, Sumanta Patro, Tanay Dixit, and Xudong Shen. 2022. Super-NaturalInstructions: Generalization via declarative instructions on 1600+ NLP tasks. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 5085–5109, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Johannes Welbl, Nelson F. Liu, and Matt Gardner. 2017. Crowdsourcing Multiple Choice Science Questions. In Proceedings of the 3rd Workshop on Noisy Usergenerated Text, pages 94–106, Copenhagen, Denmark. Association for Computational Linguistics. Peter West, Chandra Bhagavatula, Jack Hessel, Jena Hwang, Liwei Jiang, Ronan Le Bras, Ximing Lu, Sean Welleck, and Yejin Choi. 2022. Symbolic 12390 knowledge distillation: from general language models to commonsense models. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4602–4625, Seattle, United States. Association for Computational Linguistics. 
Jian Xie, Kai Zhang, Jiangjie Chen, Tinghui Zhu, Renze Lou, Yuandong Tian, Yanghua Xiao, and Yu Su. 2024. Travelplanner: A benchmark for realworld planning with language agents. arXiv preprint arXiv:2402.01622. Binfeng Xu, Zhiyuan Peng, Bowen Lei, Subhabrata Mukherjee, Yuchen Liu, and Dongkuan Xu. 2023. Rewoo: Decoupling reasoning from observations for efficient augmented language models. arXiv preprint arXiv:2305.18323. John Yang, Akshara Prabhakar, Karthik Narasimhan, and Shunyu Yao. 2023. Intercode: Standardizing and benchmarking interactive coding with execution feedback. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369–2380, Brussels, Belgium. Association for Computational Linguistics. Shunyu Yao, Howard Chen, John Yang, and Karthik Narasimhan. 2022a. Webshop: Towards scalable real-world web interaction with grounded language agents. Advances in Neural Information Processing Systems, 35:20744–20757. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R Narasimhan, and Yuan Cao. 2022b. React: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations. Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. From recognition to cognition: Visual commonsense reasoning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6720–6731. Aohan Zeng, Mingdao Liu, Rui Lu, Bowen Wang, Xiao Liu, Yuxiao Dong, and Jie Tang. 2023. Agenttuning: Enabling generalized agent abilities for llms. arXiv preprint arXiv:2310.12823. Shuyan Zhou, Frank F Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Yonatan Bisk, Daniel Fried, Uri Alon, et al. 2023. Webarena: A realistic web environment for building autonomous agents. arXiv preprint arXiv:2307.13854. 12391 Appendix A Illustration of Annotation Organization As discussed in §3.2, we organize the ground-truth subgoals and actions converted by GPT-4 into the conversational format that boosts the interaction between modules. We show the final conversation format in Fig. 3. B Statistics of Converted Training Annotations As discussed in §4.1, the data sources for constructing training annotations cover a broad range of complex interactive tasks. Tab. 4 shows the benchmarks leveraged for annotation conversion, along with the task type information. To train agents like LUMOS-IMath mentioned in Tab. 1b, we need to leverage the annotations converted from 19778 data specific to math domain. For training a unified agent such as LUMOS-I, we would use the annotations transformed from all the listed data as training set. We calculate the average turn numbers in each task’s converted training annotations. The average numbers are 4.75, 3.75, 8.25 and 3.92 for Math, QA, Web and Multimodal tasks, respectively. C Available Resources for LUMOS Training Data Extension As discussed in §3, we seek the existing datasets with ground-truth intermediate reasoning steps to synthesize LUMOS training annotations from four complex interactive task categories, math, QA, web and multimodal tasks. The methodology for converting annotations as described is not limited to the four task types previously mentioned. 
Any training datasets that include gold reasoning rationales are suitable for the construction of annotations using the LUMOS method. We present an exhaustive list of datasets that span a variety of task types in Tab. 5. For example, the AlfWorld dataset requires actions that have not been encountered in existing LUMOS annotations, such as open, take, move, and others; the TravelPlanner dataset requires actions like CitySearch, FlightSearch, which are equally unseen in the existing training set as well. This approach could significantly enhance the scalability of LUMOS annotations, thereby augmenting the method’s capability to adapt to novel environments and acquire proficiency in executing new actions. D Details of Training Modules We describe additional details about our training experiments. In all of our experiments, we implement training over two epochs with a learning rate of 2 × 10−5 and a batch size 128. We set the maximum sequence length to 1024. We also apply linear warmup for 3% of the total training steps to adjust the learning rate. All the training experiments are implemented with 2 NVIDIA 80GB A100 GPUs or 4 NVIDIA 48GB A6000 GPUs. E Efficiency Study on LUMOS and Efficiency-Performance Trade-Off We compute the inference time for LUMOS-O and LUMOS-I across 100 instances on GSM8K and HotpotQA, respectively. The experiments are run with 2 NVIDIA A6000 48GB GPUs with inference batch size 16. As depicted in Tab. 6, we find that LUMOS-O is much more efficient than LUMOS-I on both datasets. LUMOS-O completes its inference in a single round, whereas LUMOS-I necessitates multiple rounds of inference until it autonomously concludes its planning. The iterative planning and grounding in LUMOS-I contribute to a higher time cost for solving individual instances. However, this property leads to better LUMOS-I’s capacity to generate appropriate subgoals based on the current external environment compared to LUMOS-O. For example, when tackling a complex question “What government position was held by the woman who portrayed Corliss Archer in the film Kiss and Tell?”, LUMOS-I can first identify the woman who portrayed Corliss Archer in Kiss and Tell, Shirley Temple, then ask the government position she held. However, though LUMOS-O can first ask who the woman is as well, without the interaction with external knowledge in Wikipedia, it will then generate a subgoal which inquires the government position held by Madonna, a random entity irrelevant with the original question. Hence, LUMOS-O is a more efficient solution, but not as effective as LUMOS-I due to the lack of the adaptability to external environments. We also notice that in Tab. 1b, LUMOS-O achieves superior performance on math tasks. Consider a mathematical problem such as, “James decides to run 3 sprints 3 times a week. He runs 60 meters each sprint. How many total meters does he run a week?” Even if the agent calculated how many times James sprints 12392 Task Types Datasets # Source Data # Total # Final Converted for Planning # Final Converted for Grounding Math PRM800K (Lightman et al., 2023) 10000 19778 19386 19471 GSM8K (Cobbe et al., 2021) 7473 ASDiv (Miao et al., 2020) 2305 QA Musique (Trivedi et al., 2022) 17632 19409 19048 19080 StrategyQA (Geva et al., 2021) 1777 Web Mind2Web (Deng et al., 2023) 1009 1009 1007 1007 Multimodal A-OKVQA (Schwenk et al., 2022) 17056 17056 15941 15941 Table 4: Statistics of data sources for conversion and the number of final successfully converted annotations for each task type. 
Datasets Task Types # Total WebLINX (Lù et al., 2024) Web Browsing 2337 TravelPlanner (Xie et al., 2024) Travel Planning 225 SciQ (Welbl et al., 2017) Question Answering 11679 GrailQA (Gu et al., 2021) Knowledge Graph Reasoning 44337 NL2Bash (Lin et al., 2018) Interactive Coding 8090 AlfWorld (Shridhar et al., 2020) Embodied Task 3553 VCR (Zellers et al., 2019) Multimodal Reasoning 212923 LILA (Mishra et al., 2022) Math 93670 Table 5: Statistics of potential data sources for LUMOS annotation extension. Agents GSM8K HotpotQA LUMOS-O 102s 556s LUMOS-I 851s 1007s Table 6: The time cost of LUMOS-O and LUMOS-I when performing inference 100 instances from GSM8K and HotpotQA datasets. a week, which is 3 × 3 = 9, the mere number 9 does not affect the next subgoal generation. This is because no matter the result is 9 or 10000, the next high-level subgoal to calculate the total meters remains the same for solving the question. Therefore, the high environment adaptability of LUMOS-I may not be a good supplement on math tasks. F More Unified Training Results Agents Web QA Multimodal Math LUMOS-IX-13B 31.3 65.3/50.2/31.4 72.4/58.2 51.9/66.3 LUMOS-IAll-13B 31.9 66.7/51.0/31.6 72.8/58.2 50.5/65.8 Table 7: Unified training performance on trained task types. We aggregate the performance on held-in/heldout datasets for each trained task type with the symbol ‘/’. As discussed in the footnote 2 of §4.4, LUMOS-O is not applicable to web tasks. Thus, we only conduct unified training for LUMOS-I as it can be universally applied to any task types. We evaluate the performance of LUMOS-I-13B after the unified training that leverages combined annotations from four distinct training task types. We observe that the unified training enhances performance across a majority of tasks, including web, QA, and multimodal tasks. The decline in performance for mathematical tasks is marginal, which is only 0.7% and 1.4%. The unified task representation may enable LUMOS agents to uncover intrinsic links among these complex interactive tasks, thereby yielding additional advantages for certain training tasks. G Execution Tools Associated with Action Interfaces For each available action defined in the action interfaces, there is at least one corresponding backend execution tool to facilitate the implementation of the concrete actions. Displayed in Tab. 8a, for QA tasks, we rely on Wikipedia and Google search APIs to locate relevant entity knowledge. Besides, we leverage a semantic matching model dpr-reader-multiset-base5 used in Dense Passage Retrieval (DPR) (Karpukhin et al., 2020) to capture paragraphs according to the query. Following ReWOO (Xu et al., 2023), we also include GPT-series model as a simple QA tool to answer the query based on our retrieved knowledge or previously interaction history. Demonstrated in Tab. 8b, for web tasks, the actions are real mouse and keyboard operations including typing, clicking and selecting HTML tags. To locate the relevant HTML tags to be operated, following AgentBench evaluation, we use a DeBERTa model6 that ranks and retrieves the tags according to the current action. As shown in Tab. 8c, for math tasks, the main execution tool is WolframAlpha API 7 as it can per5https://huggingface.co/facebook/ dpr-reader-multiset-base. 6https://huggingface.co/osunlp/MindAct_ CandidateGeneration_deberta-v3-base. 7https://www.wolframalpha.com/. 12393 Subgoal 1: Query the living period of Lowell Sherman. 
Action 1-1: R1 = KnowledgeQuery(Lowell Sherman) Action 1-2: R2 = ParagraphRetrieval(R1, Query: …) Action 1-3: R3 = QA([R2], Query: …) = 46 years Subgoal 2: Query the living period of Jonathan Kaplan. Action 2-1: R4 = KnowledgeQuery(Jonathan Kaplan) Action 2-2: R5 = ParagraphRetrieval(R4, Query: …) Action 2-3: R6 = QA([R5], Query: …) = 75 years Subgoal 3: Compare their living periods. Action 3-1: R7 = Calculator(R3 > R6) … <|user|> Please provide a reasonable subgoal-based plan to solve the given task. Task: Who lives longer, Lowell Sherman or Jonathan Kaplan? <|assistant|> Subgoal 1: Query the living period of Lowell Sherman. <|user|> The execution result of Subgoal 1 is 46 years. Should we stop planning? <|assistant|> No, I will keep planning. Subgoal 2: Query the living period of Jonathan Kaplan. <|user|> The execution result of Subgoal 2 is 75 years. Should we stop planning? … Organize subgoals & execution results into conversation format Final planning module annotation We find relevant facts: Lowell Sherman passed away at the age of 46. Jonathan Kaplan passed away at the age of 75. We need to answer these questions: … … 3) Is #2 greater than #1? … … LLM-based annotation conversion Ground-truth reasoning steps in existing datasets (a) Final planning module annotation organized from the converted subgoals & execution results. Subgoal 1: Query the living period of Lowell Sherman. Action 1-1: R1 = KnowledgeQuery(Lowell Sherman) Action 1-2: R2 = ParagraphRetrieval(R1, Query: …) Action 1-3: R3 = QA([R2], Query: …) = 46 years Subgoal 2: Query the living period of Jonathan Kaplan. Action 2-1: R4 = KnowledgeQuery(Jonathan Kaplan) Action 2-2: R5 = ParagraphRetrieval(R4, Query: …) Action 2-3: R6 = QA([R5], Query: …) = 75 years Subgoal 3: Compare their living periods. Action 3-1: R7 = Calculator(R3 > R6) … <|user|> Please ground the given subgoal to corresponding executable actions for solving the given task. [Action Interfaces] Task: Who lives longer, Lowell Sherman or Jonathan Kaplan? Subgoal to be grounded: Subgoal 1: Query the living period of Lowell Sherman. <|assistant|> R1 = KnowledgeQuery(Lowell Sherman); R2 = ParagraphRetrieval(R1, Query: …); R3 = QA([R2], Query: …) <|user|> Subgoal to be grounded: Subgoal 2: Query the living period of Jonathan Kaplan. <|assistant|> R4 = KnowledgeQuery(Jonathan Kaplan); … … Organize subgoals & low-level actions into conversation format Final grounding module annotation (b) Final grounding module annotation organized from the converted subgoals & actions. Figure 3: Process of converting converted subgoals, actions, and executions into the final conversational training annotations for LUMOS-I formulation. form a large collection of mathematical functions such as calculating formulas and solving equations. For complex math operations such as sorting, we leverage OpenAI Codex (Chen et al., 2021) to generate a short code snippet for the execution. For multimodal tasks, as illustrated in Tab. 8d, both Visual Question Answering (VQA) and Question Answering (QA) tools are considered. The employed VQA model is LLAVA-1.5-7B (Liu et al., 2023a), while the utilized QA model is LLAMA-213B-chat (Touvron et al., 2023b). For the unseen task WebShop, the actions include Search, FeatureRetrieve, Pick, and Click. The implementation of Search and Click relies on the embedded implementation already provided in official WebShop virtual environment8. 
FeatureRetrieve and Pick are based on dpr-reader-multiset-base, which helps to select the most relevant items and their item features according to the query. For the unseen task InterCodeSQL, the action interfaces include all the possible commands and functions provided in SQL programming language. H Details of Performance Evaluation Metrics. Here we mainly discuss the special metrics adopted to evaluate the agent performance. For HotpotQA, instead of merely using strict exact matching, we follow Xu et al. (2023) to also 8https://github.com/princeton-nlp/WebShop. 12394 Task Type Action Types Function Descriptions Tools QA KnowledgeQuery(Entity) -> Knowledge Query the entity knowledge Wikipedia, Google Search ParagraphRetrieval(Knowledge, Query) -> Paragraphs Retrieve relevant paragraphs according to the query dpr-reader-multiset-base QA(Context, Query) -> Answer Answer the query based on the given context GPT-series/open LLMs Calculator(Expression) -> Value Calculate given math expressions WolframAlpha (a) Actions used in complex QA tasks. Task Type Action Types Function Descriptions Implementation Web Click(Env, Query) -> Tag Locate the tag to be clicked according to the query HTML Simulator Type(Env, Query, Text) -> Tag, Text Locate the relevant tag according to the query and output the typed text Select(Env, Query, Text) -> Tag, Text Locate the relevant tag according to the query and output the selected option (b) Actions used in web tasks. Task Type Action Types Function Descriptions Implementation Math Calculator(Expression) -> Value Calculate given math expressions WolframAlpha SetEquation(Expression) -> Equation Set equations based on given expressions SolveEquation(Equation) -> Solutions Solve the set equations Define(Variable) -> Variable Define a variable SolveInequality(Inequality) -> Solutions Solve the given inequality Code(Function_Description) -> Code Generate codes for math functions gpt-3.5-turbo Count(List) -> Number Count the element number in a list Python (c) Actions used in math tasks. Task Type Action Types Function Descriptions Implementation Multimodal QA(Context, Query) -> Answer Answer the query based on the given context LLAMA-2-13B-chat VQA(Image_Context, Query) -> Answer Answer the query based on the given image context LLAVA-1.5-7B (d) Actions used in multimodal tasks. Table 8: Action interfaces and execution module implementations for complex interactive tasks. use GPT-4 as an evaluator to judge whether the predicted answer shares the same semantics with the gold answer. We call this metric as LLM accuracy, frequently mentioned in §4. For Mind2Web, we adopt the same metric step success rate used for AgentBench evaluation. A step is deemed successful solely when both the chosen HTML tag and predicted action type exactly match the gold action. For WebShop, we leverage the reward utilized in both AgentBench and original WebShop paper, which quantify the similarity between gold and predicted products with regard to the product titles and selected attributes. Evaluation Data. Following Xu et al. (2023), we only evaluate 300 and 1000 randomly selected examples from StrategyQA and HotpotQA evaluation set, respectively. The results reported in Tab. 1d are the average performance on three different sets of sampled data. Regarding Mind2Web, we only evaluate on the “cross-domain” test set that AgentBench utilizes for evaluation. For WebShop, we evaluate the first 500 instances from the entire test set as AgentBench used to do. 
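To illustrate the LLM-accuracy metric described above, the following minimal sketch builds a judge prompt that asks a strong model whether a predicted answer is semantically equivalent to the gold answer. The prompt wording and the query_judge stub are illustrative assumptions, not the exact evaluation script used in this work.

# Minimal sketch of LLM accuracy: an LLM judge decides whether a predicted
# answer shares the same semantics as the gold answer. The prompt wording and
# the judge call are placeholders, not the exact evaluation setup.

def build_judge_prompt(question: str, gold: str, predicted: str) -> str:
    return ("Decide whether the predicted answer has the same meaning as the gold "
            "answer for the given question. Reply with 'yes' or 'no'.\n"
            f"Question: {question}\nGold answer: {gold}\nPredicted answer: {predicted}")

def query_judge(prompt: str) -> str:
    # Placeholder for a call to the judge model (e.g., GPT-4).
    raise NotImplementedError

def llm_accuracy(examples: list[dict]) -> float:
    """examples: [{"question": ..., "gold": ..., "prediction": ...}, ...]"""
    if not examples:
        return 0.0
    correct = 0
    for ex in examples:
        verdict = query_judge(build_judge_prompt(ex["question"], ex["gold"], ex["prediction"]))
        correct += int(verdict.strip().lower().startswith("yes"))
    return correct / len(examples)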
We leverage the entire evaluation set of the other testing benchmarks for assessment. 12395 I In-Context Examples in Conversion Prompts As discussed in §3.1, in-context examples are helpful to instruct LLMs to generate annotations in our expected format. For each training task types, we showcase one in-context example to help readers better understand how the prompting conversion method works and the format of our expected annotations. We highlight subgoals, their actions and execution results with yellow , red and blue, respectively. I.1 In-Context Example For Obtaining Math Task Annotations Please convert natural language plans into a series of subgoals and their corresponding actions that lead to the successful implementation with respect to the given instructions. Please use ‘R[number]’ to represent the intermediate results for each subgoal, without generating any exact values. Please also use functions to represent the corresponding actions. For the actions, they must be one of ‘Calculator’, ‘SetEquation’, ‘SolveEquation’, ‘SolveInequality’, ‘Count’, ‘Code’, and ‘Define’. Example 1: Task: Peter goes to the store to buy a soda. The soda costs $.25 an ounch. He brought $2 with him and leaves with $.50. How many ounces of soda did he buy? Natural language plan: He spend $1.5 on soda because 2 - .5 = 1.5 He bought 6 ounces of soda because 1.5 / .25 = 6 Subgoal-based plan: Subgoal 1: Calculate how much the soda costs in total. Action 1-1: R1 = Calculator(2 - 0.5) = 1.5 Subgoal 2: Calculate the ounces of soda the price per ounch. Action 2-1: R2 = Calculator(R1 / 0.25) = 6 I.2 In-Context Example For Obtaining Complex QA Task Annotations Please convert natural language plans into a series of subgoals and their corresponding actions that lead to the successful implementation with respect to the given instructions. Please use ‘R[number]’ to represent the intermediate results for each subgoal, without generating any exact values. Please also use functions to represent the corresponding actions. For the actions, they must be one of one of ‘KnowledgeQuery’, ‘ParagraphRetrieve’, ‘QA’, ‘Calculator’ and ‘Code’. Example 1: Task: Are more people today related to Genghis Khan than Julius Caesar? Natural language plan: We find relevant facts: Julius Caesar had three children. Genghis Khan had sixteen children. Modern geneticists have determined that out of every 200 men today has DNA that can be traced to Genghis Khan. We need to answer these questions: 1. How many kids did Julius Caesar have? (Can be answered based on paragraph ‘Julius Caesar-75’) 2. How many kids did Genghis Khan have? (Can be answered based on paragraph ‘Genghis Khan-17’) 3. Is #2 greater than #1? Based on these evidences and decomposed questions, the answer is True. Subgoal-based plan: Subgoal 1: Obtain the number of the kids that Julius Caesar had. Action 1-1: R1 = KnowledgeQuery(Julius Caesar) = WikipediaPage(Julius Caesar) Action 1-2: R2 = ParagraphRetrieve(R1, Query: How many kids did Julius Caesar have?) = Paragraph(Julius Caesar-75). Action 1-3: R3 = QA([R2], Question: How many kids did Julius Caesar have?) = 3. Subgoal 2: Obtain the number of the kids that Genghis Khan had. Action 2-1: R4 = KnowledgeQuery(Genghis Khan) = WikipediaPage(Genghis Khan). Action 2-2: R5 = ParagraphRetrieve(R4, Query: How many kids did Genghis Khan have?) = Paragraph(Genghis Khan-17). Action 2-3: R6 = QA([R5], Question: How many kids did Genghis Khan have?) = 16. Subgoal 3: Determine if Genghis Khan had more kids. 
Action 3-1: R7 = Calculator(R6 > R3) = True 12396 I.3 In-Context Example For Obtaining Web Task Annotations Since the data source for converting annotations, Mind2Web, already provides the ground-truth execution results after each action, as discussed in §3.1, we do not ask LLMs to capture each action’s execution results. Therefore, there are no parts highlighted with blue in the in-context example. Please convert natural language plans into a series of subgoals and their corresponding actions that lead to the successful implementation with respect to the given instructions. Please use ‘R[number]’ to represent the intermediate results for each subgoal, without generating any exact values. Please also use functions to represent the corresponding actions. For the actions, they must be one of they must be one of ‘TYPE’, ‘CLICK’, and ‘SELECT’. Example 1: Task: Find a Ricky Kej track to listen and share which has been added in the last year and is between 2 to 10 minutes. Natural language plan: [searchbox] Search −→TYPE: Ricky Kej; [link] Search for “Ricky Kej” −→CLICK; [link] Tracks −→ CLICK; [link] Added any time −→CLICK; [link] Past year −→SELECT; [link] Any length −→CLICK; [link] 2-10 min −→CLICK; [link] To listen to −→CLICK; [link] To share −→CLICK Subgoal-based plan: Subgoal 1: Type Ricky Kej to search his songs. Action 1-1: R1 = TYPE(Env, QUERY: Type Ricky Kej to search his songs, TEXT: Ricky Kej) Subgoal 2: Click on the option to search for Ricky Rej. Action 2-1: R2 = CLICK(R1, QUERY: Click on the option to search for Ricky Rej) Subgoal 3: Choose tracks as the search category. Action 3-1: R3 = CLICK(R2, QUERY: Choose tracks as the search category) Subgoal 4: Find the region to adjust the added time of our interested track. Action 4-1: R4 = CLICK(R3, QUERY: Find the region to adjust the added time of our interested track) Subgoal 5: Choose the last year as the added date. Action 5-1: R5 = SELECT(R4, QUERY: Choose the last year as the added date, TEXT: Past year) Subgoal 6: Find the region to adjust the track length of our interested track. Action 6-1: R6 = CLICK(R5, QUERY: Find the region to adjust the track length of our interested track) Subgoal 7: Choose 2 to 10 minutes as the track length. Action 7-1: R7 = CLICK(R6, QUERY: Choose 2 to 10 minutes as the track length) Subgoal 8: Listen to our searched track. Action 8-1: R8 = CLICK(R7, QUERY: Listen to our searched track) Subgoal 9: Share our searched track. Action 9-1: R9 = CLICK(R8, QUERY: Share our searched track) 12397 I.4 In-Context Example For Obtaining Multimodal Task Annotations Please convert natural language plans into a series of subgoals, their corresponding actions that lead to the successful implementation with respect to the given instructions. When generating the actions, please also attach the action’s results contained in the natural language plans. Please use ’R[number]’ to represent the execution results for each action. Please also use functions to represent the corresponding actions. For the actions, they must be one of the available actions, ’QA’, ’VQA’. Example 1: Task: If the cameraman were driving what do they have to do from this position? There’re some choices: A. turn left, B. drive straight, C. reverse course, D. turn right. Natural language plan: The would have to turn right because the lane has right turn arrows painted on it. The arrow on the street indicates that this lane can only go in one direction at the intersection. The sign on the road says to turn right. 
Overall, the final answer is ’turn right’. Subgoal-based plan: Subgoal 1: Describe the shape of the sign on the road lane the cameraman is in from the image. Action 1-1: R1 = VQA([IMG], Question: What’s the sign on the road lane?) = There’s a right turn arrow on the road. Subgoal 2: Answer which lane the cameraman is in. Action 2-1: R2 = QA([R1], Question: Which lane is the cameraman in?) = The cameraman is in right turn lane. Subgoal 3: Answer what the cameraman has to do from his position. Action 3-1: R3 = QA([R1, R2], Question: What does the cameraman have to do from his position?) = turn right. Subgoal 4: Select the choice from the answer choices. Action 4-1: R4 = VQA([IMG], Question: If the cameraman were driving what do they have to do from this position? We can also see that There’s a right turn arrow on the road, The cameraman is in right turn lane, turn right. There’re some choices: A. turn left, B. drive straight, C. reverse course, D. turn right. You must choose one from ’A’, ’B’, ’C’ or ’D’. Output only one letter!) = D 12398 J In-Context Example for Adapting Modules to Unseen Task — WebShop As discussed in §4.4, the in-context examples are used for helping modules understand how to plan and ground to novel available actions. Mentioned in App. G, WebShop involves four unseen actions in training annotations: Search, FeatureRetrieve, Pick, and Click. The definition of Click is different from the one defined for web task. In WebShop task, the argument of Click is the item index. However, Click’s input arguments in Mind2Web include an HTML environment description and a specific query regarding the goal of click action. We showcase one of the two in-context WebShop-related examples used for the inference of planning and grounding modules, respectively. J.1 In-Context WebShop-Related Example for Planning Module <|user|> Please provide a reasonable subgoal-based plan to solve the given task. Task: Find me slim fit, loose fit women’s tops, tees & blouses with long sleeve, short sleeve with color: a01#black, and size: large, and price lower than 50.00 dollars <|assistant|> Subgoal 1: Search two relevant items and the product names according to the instruction. <|user|> The execution result for Subgoal 1 is b09s3bn15c - Mens Linen Shirt,Men’s Striped Shirts Casual Short Sleeve Button Down Shirts Regular Fit Hawaiian Shirts Beach Tees Tops ** b094q7b3ss - Women Cold Shoulder Tops, Summer Butterfly Print Shirts Fashion Casual Short Sleeve Plus-Size Tunic Top Tee and Blouse. <|assistant|> Subgoal 2: Select the most relevant features of item b09s3bn15c. <|user|> The execution result for Subgoal 2 is short, casual, shoulder. <|assistant|> Subgoal 3: Select the most relevant features of item b094q7b3ss. <|user|> The execution result for Subgoal 3 is black, large, x-large. <|assistant|> Subgoal 4: Pick up the most related one from the two relevant items according to the product names and their features. <|user|> The execution result for Subgoal 4 is b094q7b3ss. <|assistant|> Subgoal 5: Buy this item by selecting its relevant features. <|user|> The execution result for Subgoal 5 is Done. 12399 J.2 In-Context WebShop-Related Example for Grounding Module <|user|> Please ground the given subgoal to corresponding executable actions for solving the given task. The grounded actions must be the one in available action list. The available action list is ‘Search’, ‘FeatureRetrieve’, ‘Pick’ and ‘Click’. 
Search(Query): Search related items according to the Query; FeatureRetrieve(Feature_list, Query): Select the most relevant features from Feature_list according to Query; Pick(Item_names, Item_features, Query): Pick the most relevant item from Item_names according to Query, Item_names and Item_features; Click(Item): Click the Item to check more information. Task: Find me slim fit, loose fit women’s tops, tees & blouses with long sleeve, short sleeve with color: a01#black, and size: large, and price lower than 50.00 dollars Subgoal to be grounded: Subgoal 1: Search two relevant items and the product names according to the instruction. <|assistant|> R1 = Search(Find me slim fit, loose fit women’s tops, tees & blouses with long sleeve, short sleeve with color: a01#black, and size: large, and price lower than 50.00 dollars) <|user|> Subgoal to be grounded: Subgoal 2: Select the most relevant features of item b09s3bn15c. <|assistant|> R2 = Click(b09s3bn15c); R3 = FeatureRetrieve(R2, Query: Find me slim fit, loose fit women’s tops, tees & blouses with long sleeve, short sleeve with color: a01#black, and size: large, and price lower than 50.00 dollars) <|user|> Subgoal to be grounded: Subgoal 3: Select the most relevant features of item b094q7b3ss. <|assistant|> R4 = Click(< prev); R5 = Click(b094q7b3ss); R6 = FeatureRetrieve(R5, Query: Find me slim fit, loose fit women’s tops, tees & blouses with long sleeve, short sleeve with color: a01#black, and size: large, and price lower than 50.00 dollars) <|user|> Subgoal to be grounded: Subgoal 4: Pick up the most related one from the two relevant items according to the product names and their features. <|assistant|> R7 = Pick(Item_names: R1, Item_features: [R3, R6], Query: Find me slim fit, loose fit women’s tops, tees & blouses with long sleeve, short sleeve with color: a01#black, and size: large, and price lower than 50.00 dollars) <|user|> Subgoal to be grounded: Subgoal 5: Buy this item by selecting its relevant features. <|assistant|> R8 = Click(< prev); R9 = Click(R7); R10 = Click(black); R11 = Click(large); R12 = Click(x-large); R13 = Click(buy now) 12400 K In-Context Example for Adapting Modules to Unseen Task — InterCodeSQL Similar to §J, as InterCodeSQL is another unseen task, we showcase one of the three in-context InterCodeSQL-related examples used for the inference of planning and grounding modules, respectively. K.1 In-Context InterCodeSQL-Related Example for Planning Module <|user|> Please provide a reasonable subgoal-based plan to solve the given task. Task: What are the names of poker players, ordered ascending by the number of final tables they have made? <|assistant|> Subgoal 1: Find all the related tables. <|user|> The execution result for Subgoal 1 is Results: Table players has Name (type: TEXT), People_ID (type: INT); Table poker_player has Final_Table_Made (type: INT), People_ID (type: INT).; Subgoal 2: Write the selection objects: the names of the people who are also poker players. <|assistant|> Subgoal 2: Write the selection objects: the names of the people who are also poker players. <|user|> The execution result for Subgoal 2 is T1.Name FROM people AS T1 JOIN poker_player AS T2 ON T1.People_ID = T2.People_ID. <|assistant|> Subgoal 3: Write the selection condition: sort all the poker players by the number of final tables they have made in ascending order. <|user|> The execution result for Subgoal 3 is ORDER BY T2.Final_Table_Made. 
<|assistant|> Subgoal 4: Finish the SQL query to the names of poker players, ordered ascending by the number of final tables they have made. <|user|> The execution result for Subgoal 4 is SELECT T1.Name FROM people AS T1 JOIN poker_player AS T2 ON T1.People_ID = T2.People_ID ORDER BY T2.Final_Table_Made. 12401 K.2 In-Context InterCodeSQL-Related Example for Grounding Module <|user|> Please ground the given subgoal to corresponding executable actions for solving the given task. The grounded actions must be the one in available action list. Task: What are the names of poker players, ordered ascending by the number of final tables they have made? Subgoal to be grounded: Subgoal 1: Find all the related tables. <|assistant|> R1 = SHOW TABLES <|user|> Subgoal to be grounded: Subgoal 2: Write the selection objects: the names of the people who are also poker players. <|assistant|> R2 = T1.Name FROM people AS T1 JOIN poker_player AS T2 ON T1.People_ID = T2.People_ID <|user|> Subgoal to be grounded: Subgoal 3: Write the selection condition: sort all the poker players by the number of final tables they have made in ascending order. <|assistant|> R3 = ORDER BY T2.Final_Table_Made <|user|> Subgoal to be grounded: Subgoal 4: Finish the SQL query to the names of poker players, ordered ascending by the number of final tables they have made. <|assistant|> R4 = SELECT T1.Name FROM people AS T1 JOIN poker_player AS T2 ON T1.People_ID = T2.People_ID ORDER BY T2.Final_Table_Made 12402 L Multimodal Task Case Study We provide two cases in the multimodal datasets, A-OKVQA and ScienceQA, to show the effectiveness of LUMOS. For the A-OKVQA case, LUMOS first identifies the device brand with the aid of VQA tool. After LUMOS planning module knows that the device is a Nintendo Wii controller, the module would generate the next subgoal “Answer the country of origin for the Nintendo Wii controller”. Finally, LUMOS selects the closest option from the given answer choices. For the ScienceQA case, LUMOS identifies the highlighted area on the map, which includes Middle East area. Then the planning and grounding modules would leverage QA tool to answer which continent Middle East is located at, and then pick up the correct answer “Asia”. Question: The company producing the device in her hand is from what country? There’re some choices: A. … … C. Japan, D. … The image shows the following contents: The image features a young woman sitting on a couch, … … playing a video game, possibly on a Nintendo Wii console. The couch she is sitting on is … … Lumos-IMM Planning and Grounding Results: Subgoal 1: Identify the brand of the device in her hand. Action: R1 = VQA([IMG], Question: What brand is the device in her hand?) = The device in her hand is a Nintendo Wii controller. Subgoal 2: Answer the country of origin for the Nintendo Wii controller. Action: R2 = QA([R1], Question: What is the country of origin for the Nintendo Wii controller?) = The country of origin for the Nintendo Wii controller is Japan. Subgoal 3: Select the choice from the four answer choices. Action: R3 = VQA([IMG], Question: The company producing the device in her hand is from what country? The image features … We can also see that The device in her hand is a Nintendo Wii controller, The country of origin for the Nintendo Wii controller is Japan. There’re some choices: A. … … C. Japan, D. … You must choose one from ‘A’, ‘B’, ‘C’ or ‘D’. Output only one letter!) = C. 
Planning module output Grounding module output Execution module output Figure 4: LUMOS-IMM case study on A-OKVQA. Question: Which continent is highlighted? There’re some choices: … … D. Asia. The image features a large, colorful map of the world, with various countries and continents depicted. The map is filled with green and blue colors, giving it a vibrant and lively appearance. … … Lumos-IMM Planning and Grounding Results: Subgoal 1: Identify the highlighted area on the map. Action: R1 = VQA([IMG], Question: Which area is highlighted on the map?) = The area highlighted on the map includes the Middle East, … . Subgoal 2: Answer which continent the highlighted area is in. Action: R2 = QA([R1], Question: Which continent is the Middle East in?) = The Middle East is located in the continent of Asia. Subgoal 3: Select the choice from the answer choices. Action: R3 = VQA([IMG], Question: Which continent is highlighted? The image features … We can also see that The area highlighted on the map includes the Middle East, …, The Middle East is located in the continent of Asia. There’re some choices: … … D. Asia. You must choose one from ‘A’, ‘B’, ‘C’ or ‘D’. Output only one letter!) = D. Planning module output Grounding module output Execution module output Figure 5: LUMOS-IMM case study on ScienceQA. 12403
Investigating Cultural Alignment of Large Language Models

Badr AlKhamissi, EPFL, [email protected]
Muhammad ElNokrashy, Microsoft Egypt, [email protected]
Mai AlKhamissi, Anthropology, Princeton University, [email protected]
Mona Diab, LTI, Carnegie Mellon University, [email protected]

Abstract

The intricate relationship between language and culture has long been a subject of exploration within the realm of linguistic anthropology. Large Language Models (LLMs), promoted as repositories of collective human knowledge, raise a pivotal question: do these models genuinely encapsulate the diverse knowledge adopted by different cultures? Our study reveals that these models demonstrate greater cultural alignment along two dimensions: first, when prompted with the dominant language of a specific culture, and second, when pretrained with a refined mixture of languages employed by that culture. We quantify cultural alignment by simulating sociological surveys, comparing model responses to those of actual survey participants as references. Specifically, we replicate a survey conducted in various regions of Egypt and the United States by prompting LLMs with different pretraining data mixtures, in both Arabic and English, with the personas of the real respondents and the survey questions. Further analysis reveals that misalignment becomes more pronounced for underrepresented personas and for culturally sensitive topics, such as those probing social values. Finally, we introduce Anthropological Prompting, a novel method leveraging anthropological reasoning to enhance cultural alignment. Our study emphasizes the necessity for a more balanced multilingual pretraining dataset to better represent the diversity of human experience and the plurality of different cultures, with many implications for the topic of cross-lingual transfer.¹

¹Our code and data are available at https://github.com/bkhmsi/cultural-trends.git

1 Introduction

Large Language Models (LLMs) such as ChatGPT have garnered widespread utilization globally, engaging millions of users. Users interacting with these models across multiple languages have observed a noteworthy phenomenon: prompting with different languages may elicit different responses to similar queries (Lin et al., 2022; Shen et al., 2024). From our observations, one reason for the difference between the responses is that they tend to reflect the culturally specific views commonly expressed by the people who use the same language as the prompt. Here, we hypothesize that the root cause of this phenomenon lies in the training data, which encodes different and at times conflicting "knowledge" across different languages.²

[Figure 1: Our framework for measuring the cultural alignment of LLM knowledge/output and ground-truth cultural data collected through survey responses.]

Culture is a complicated term, and defining it stands at the core of anthropological inquiry. Hundreds of definitions exist in the literature, covering different aspects of interest (Kroeber and Kluckhohn, 1952).
In this paper, we consider culture as a multi-faceted inquiry that demonstrates substantial diversity among human communities, encompassing worldviews and belief systems. Through this lens, we aim to measure the cultural alignment of Large Language Models (LLMs) by simulating existing surveys that have been carried out by sociologists in specific populations. We utilize the responses from actual survey participants as our reference or gold standard. We then measure the similarity between the model's answer, when prompted with the participant's "persona", and the actual survey answer. The term "persona" in this context refers to an explicit description of a survey participant, encompassing various traits of interest such as social class, education level, and age (see Section 4.3 for a detailed description). This is done for various LLMs trained and prompted under different configurations. We use this similarity as a proxy for the degree of a model's knowledge of a particular culture. This enables us to assess the LLMs' capacity to capture the diversity not only of a specific country but also among individuals within that country.

We focus on a survey conducted in two countries: Egypt (EG) and the United States of America (US). It covers a diverse demographic set within each country, with questions spanning various themes that include topics of social, cultural, material, governmental, ethical, and economic significance. This work primarily explores the impact of the language used for prompting and of the language composition of the pretraining data on a model's cultural alignment as defined above. We consider two languages for prompting: English and Arabic, as they are the primary languages used in the surveys. Specifically, we consider four pretrained LLMs: GPT-3.5³, also known as ChatGPT, and three 13B-parameter instruction-tuned models. The multilingual mT0-XXL (Muennighoff et al., 2023) is trained on a variety of languages; LLaMA-2-13B-Chat (Touvron et al., 2023) is trained primarily on English data; and AceGPT-13B-Chat (Huang et al., 2023) is a model finetuned from LLaMA-2-13B-Chat with a focus on Arabic.

Our contributions include highlighting the significant role of language in the perceived, functional cultural alignment of model responses, which is affected by both (1) the language in the pretraining data and (2) that of the prompt. Further analysis shows that (3) models capture the variance of certain demographics more than others, with the gap increasing for underrepresented groups. Finally, (4) we propose Anthropological Prompting as a method to enhance cultural alignment in LLMs.

²In this work, we advocate for the term "Cultural Trends" instead of "Biases." This choice is deliberate, as the term "bias" outside a mathematical context often carries a negative connotation, a problematic default position. The use of Cultural Trends emphasizes that a model reflecting a particular cultural inclination does not inherently imply danger or stereotyping. Instead, it signifies alignment with the views of a specific population, highlighting cultural significance.
³GPT-3.5 is gpt-3.5-turbo-1106 throughout this work.

2 Research Questions

Prompting Language and Cultural Alignment: We hypothesize that employing the native language of a specific culture will yield greater cultural alignment compared to using a foreign language. For instance, prompting an LLM in Arabic may achieve higher alignment to a survey conducted in Egypt than prompting it in English.
Pretraining Data Composition: We hypothesize that, for a fixed model size, pretraining models with a higher proportion of data from a specific culture will lead to an increased alignment with the results of surveys conducted in that culture. For instance, a 13B Arabic monolingual model is expected to exhibit higher alignment than a 13B English model for a survey conducted in Egypt. Personas and Cultural Topics: We anticipate that misalignment will increase for personas from digitally underrepresented backgrounds. For instance, alignment in both Arabic and English tests are expected to be lower for a working-class persona in Aswan (a city in the south of Egypt) compared to an upper-middle-class persona in Cairo (Egypt’s capital and its most populous city). Further, we hypothesize that misalignment will increase for uncommon cultural topics. Finetuning Models to Induce Cross-Lingual Knowledge Transfer: We gauge the effect of cross-lingual transfer for models predominantly pretrained on one language but finetuned on another. To answer this question, we use the LLaMA-2-Chat-13B model (trained primarily on an English corpus) (Touvron et al., 2023) and the AceGPT-Chat-13B model (a LLaMA-2-Chat-13B model further finetuned on a corpus of Arabic and English data) (Huang et al., 2023). 3 Anthropological Preliminaries The concept of culture undergoes continual transformation, encompassing various elements that evolve with time as well as geographical and historical context. Many definitions of culture are traced back to Tylor (1871) wherein culture constitutes an integrated body of knowledge, belief, art, morals, law, custom, and any other capabilities 12405 Imagine you are a {marital_status} {sex} from {region}, {country}.  You are {age} years of age and completed {education} education level. You consider yourself part of the {social_class}. Answer the following question from this perspective. Others will read what you choose; your goal is to convince them it was chosen from the perspective of the persona described above. Select exactly one option. Do not include any extra commentary. Answer by typing the number corresponding to your chosen answer. Question: {question} Options: {numbered_options} LLM ≈ Survey Response Model Response Figure 2: Template used when querying models in English. (Left) The model is first instructed to respond under a specific persona along the demographic parameters highlighted in red. (Right) The rest of the prompt instructs the model to follow the perspective of the persona closely, respond in a specific format (only the index of the answer), and avoid any extraneous commentary. and habits expressed by members of a society. In that sense, any reflection of such aspects of life in written records can be considered a cultural trend expressed by that text. A model which expresses views in some aspect of life which is aligned with a group of people is culturally aligned with them in that scenario. An alternative perspective shows culture as patterns of behavior. These patterns, how they are chosen and valued, and their meaning, manifest in different forms, such as linguistic records. In that sense, culture observes behavior through history, and affects it through population dynamics (Kroeber and Kluckhohn, 1952). The cultural expression of agential members within a society, including artificial agents such as LLMs, thus affects and is affected by the behavior and recording of ideas by fellow members. 
Models learn effectively from humans and equally impart their learnings upon other humans, distributing their internalized cultural ideas in the process (Clifford et al., 2020). 3.1 Working Assumptions Given this anthropological backdrop, we describe some modeling assumptions we have adopted and the motivation behind them. Language →Culture We assume that a language can be used as proxy for its dominant culture. Although some languages are used by multiple cultures, contemporary consideration of such languages tends to emphasize a particular culture among their diverse user base (compare the significance given to French output from France and from the Senegal). Prompting with dialects specific to a certain population can help alleviate that concern. Culture →Language We assume that, most often, a culture will produce linguistic records and communications in one dominant language. Although, contrary to the expectation that all communications would be in the main or ostensibly official language, we know that this may not be the case. For example, individuals in Egypt may express their opinions online in English rather than in their native language for a variety of reasons. 4 Experimental Setup 4.1 World Values Survey (WVS) The WVS project gathers responses to an array of questions on matters of social, cultural, material, governmental, ethical, and economic importance, as a rough categorization, all from demographically-controlled population samples around the world (Haerpfer et al., 2020). The latest edition (WVS-7) was conducted between 2017 and 2021. It includes some region-specific modules in addition to the globally-applied categories. WVS-7 has 259 questions and was designed to include indicators towards multiple United Nations Sustainable Development Goals. The survey is set up as a questionnaire provided to select samples from the general population. The questions in the survey are localized to the native or dominant regional languages. In this work, we select 30 questions that encompass diverse themes. The chosen questions are intentionally not straightforward, allowing for a degree of potential cultural variation in responses. For every question, we create four linguistic variations (i.e. paraphrases) by providing ChatGPT with a short description of the question along with 12406 Dimension Possible Values Region Cairo, Alexandria, etc. Sex Male, Female Age Number Social Class Upper, Working, etc. Education Level Higher, Middle, Lower Marital Status Married, Single, etc. Table 1: The demographic dimensions used when prompting the model to emulate a certain survey respondent. Region is country-specific. More information in Appendix D. the anticipated answer options from participants. The questions are translated into Arabic using machine translation, followed by manual editing by native Arabic speakers to ensure preservation of the intended meaning. More details about the generation process, including examples, are available in Appendix G. 4.2 Survey Participants The WVS-7 survey conducted in Egypt and the United States comprised 1,200 and 2,596 participants respectively, representing diverse backgrounds. In this work, we only consider 6 demographic dimensions when prompting the LLMs. Table 1 shows the dimensions along with some possible values they can take. In addition, the left part of Figure 2 shows the template used to prompt the model in English with a specific persona. In the context of this paper, the term persona denotes a singular instance of this six-dimensional tuple. 
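To make the persona notion concrete, the following is a minimal sketch of how such a six-dimensional tuple could be rendered into the English template shown in Figure 2; the Persona dataclass, the build_prompt helper, and the answer options in the example are illustrative assumptions rather than the authors' released code.

```python
# Minimal sketch: filling a six-dimensional persona tuple into the Figure 2 template.
# The dataclass, helper, and example options are assumptions, not the authors' code.
from dataclasses import dataclass

TEMPLATE = (
    "Imagine you are a {marital_status} {sex} from {region}, {country}. "
    "You are {age} years of age and completed {education} education level. "
    "You consider yourself part of the {social_class}. "
    "Answer the following question from this perspective. Others will read what you choose; "
    "your goal is to convince them it was chosen from the perspective of the persona described above. "
    "Select exactly one option. Do not include any extra commentary. "
    "Answer by typing the number corresponding to your chosen answer.\n"
    "Question: {question}\nOptions:\n{numbered_options}"
)

@dataclass
class Persona:
    region: str
    sex: str
    age: int
    social_class: str
    education: str
    marital_status: str

def build_prompt(persona: Persona, country: str, question: str, options: list[str]) -> str:
    # Options are numbered "(1) ... (2) ..." as in the paper's prompt examples.
    numbered = "\n".join(f"({i}) {opt}" for i, opt in enumerate(options, start=1))
    return TEMPLATE.format(
        marital_status=persona.marital_status, sex=persona.sex, region=persona.region,
        country=country, age=persona.age, education=persona.education,
        social_class=persona.social_class, question=question, numbered_options=numbered,
    )

# Example: the persona mirrors Figure 6; the answer options here are hypothetical.
p = Persona("Fayoum", "male", 46, "lower middle class", "higher", "married")
print(build_prompt(
    p, "Egypt",
    "How important do you think it is to have neighbors who are immigrants/foreign workers?",
    ["Very important", "Somewhat important", "Not important", "Don't know"],
))
```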
Filtering Participants In our survey simulations, we filtered the participants to have an equal distribution across both countries along the demographic dimensions (except Region since it is countryspecific). We selected participants such that for each person interviewed in Egypt we have a corresponding person who comes from exactly the same demographics from the US with the exception of the location. This resulted in 303 unique personas for each country. The distribution of the survey respondents from each country, including examples of some personas, can be found in Appendix D. 4.3 Personas: Role-Playing for LLMs To guide a language model with instructionfollowing support in order to respond like a specific subject from a particular demographic,4 we utilize personas (Joshi et al., 2023). A persona is a description of a person which covers as many traits as deemed important to be controlled for in the context of an interaction or study. Accordingly, we query the model by a prompt that specifies the values for each demographic dimension of interest. The prompt is generated from a single template and is written in ordinary prose. Figure 2 shows the template used when querying the models in English. It comprises three parts: the first specifies to the model the persona it must emulate along the 6 demographic dimensions discussed in Section 4.2. The second instructs the model to follow the perspective of the persona closely, respond in a specific format (only the index of the answer), and avoid any extraneous commentary. The last part is the question followed by a list of numbered options that the model must choose from. 4.4 Pretrained Large Language Models Table 6 lists the models used in this work along with their corresponding number of parameters and pretraining language mixtures. In particular, we opt for instruction-tuned models as they can be assessed in a zero-shot manner by adhering to the provided instructions (Zhang et al., 2023). The largest model in our selection is GPT-3.5, primarily trained on English data; although, it has showcased competitive performance on Arabic NLP benchmarks (Alyafeai et al., 2023; Khondaker et al., 2023). The three other models are selected to be of the same size (13B parameters) for fair comparison: (1) mT0-XXL (Muennighoff et al., 2023) trained with a more balanced mixture of languages, is expected to exhibit a reduced impact of Anglocentric responses; (2) LLaMA-2-13B-Chat5 (Touvron et al., 2023) trained primarily on English data but is capable of responding to Arabic prompts; (3) AceGPT-13B-Chat (Huang et al., 2023) is a model finetuned on a mixture of Arabic and English data. It achieved state-of-the-art results on the Arabic Cultural and Value Alignment Dataset among opensource Arabic LLMs through localized training. 4.5 Computing Cultural Alignment The survey simulations involve prompting each model with a specific persona, followed by an instruction and a question (refer to Figure 2). Each 4 A subject is a person participating in the survey. 5 For brevity, we omit 13B from LLaMA-2-13B-Chat and AceGPT-13B-Chat in future references. 
question is independently prompted four times for each persona using the generated linguistic variations. Subsequently, we sample five responses for each question variant using a temperature of 0.7.6 The model’s response for a particular persona and question variant is determined by computing a majority vote over the sampled responses. Following this, we assess a model’s cultural alignment by comparing its responses for each persona separately with the original subject’s response in one of the two surveys. This comparison is conducted in two ways: either directly comparing the responses (Hard metric) or considering the responses while taking into account the order of the options for ordinal questions (Soft metric). We exclude instances where two subjects belonging to similar persona from both the Egypt and US surveys provided identical answers for a given question. This exclusion ensures a more accurate assessment of each model’s capability in discerning the differences between the two cultures. 6 This was empirically set.

Hard Metric is the plain accuracy, which compares model answers to the survey responses for a given persona. Formally, the final cultural alignment is $H_{f,c} = \text{mean}_{q,p}\{\mathbb{1}(\hat{y} = y)\}$, where $\hat{y}_{q,p}$ denotes the model’s response after the majority vote for a question prompt $q$ and persona $p$, while $y_{q,p}$ is the ground-truth response of a specific subject with persona $p$, all from culture $c$.

Soft Metric $S_{f,c}$ is a relaxed version of the hard metric which awards partial points in questions with an ordinal scale. However, if the question provides categorical options only or the subject in the survey responded with a “don’t know” (regardless of the scale), the metric defaults to plain accuracy. First we calculate the error per $q, p$:

\varepsilon_{f,c}(q, p) =
  \begin{cases}
    \dfrac{|\hat{y} - y|_{q,p}}{|q| - 1} & \text{if } \mathrm{IsOrd}(q, p), \\
    \mathbb{1}(\hat{y} \neq y)_{q,p} & \text{otherwise,}
  \end{cases}
  \qquad (1)

where $\mathrm{IsOrd}(q, p)$ refers to questions with ordinal answers where the survey subject $p$ did not pick “don’t know”, and $|q|$ is the count of options in $q$. The soft alignment score for a model $f$ is the average over all queries and personas per model and culture: $S_{f,c} = \text{mean}_{q,p}\{1 - \varepsilon_{f,c}(q, p)\}$.

Model        | Egypt English | Egypt Arabic  | Ar-En | US English    | US Arabic     | En-Ar
GPT-3.5      | 47.08 / 23.42 | 50.15 / 28.56 |  3.07 | 65.95 / 40.22 | 63.77 / 38.36 |  2.18
AceGPT-Chat  | 46.15 / 28.83 | 49.49 / 30.60 |  3.34 | 54.55 / 29.94 | 51.12 / 25.45 |  3.43
LLaMA-2-Chat | 47.95 / 25.61 | 44.67 / 23.34 | -3.28 | 63.90 / 37.40 | 62.29 / 36.03 |  1.61
mT0-XXL      | 45.16 / 28.75 | 46.69 / 27.10 |  1.53 | 53.20 / 28.30 | 57.75 / 34.51 | -4.55
Table 2: Cultural alignment against both Egyptian and United States survey responses using Soft / Hard similarity metrics for each model as a function of the prompting language. Underlined is the optimal prompting language for each model and survey. The third column in each block shows the difference in soft alignment between the country’s dominant language and the other language. Refer to Appendix A for results without excluding responses where equivalent personas in both surveys answered similarly.

4.6 Anthropological Prompting
Inspired by long-term ethnographic fieldwork—which stands as the primary research method within the discipline of cultural anthropology—we introduce a novel prompting method to improve cultural alignment for LLMs, Anthropological Prompting. The objective of engaging in extended ethnographic fieldwork is to establish meaningful connections with interlocutors, facilitating the ability to produce critical and in-depth analyses of both the subjects and the topics under study.
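To make the hard and soft alignment metrics of Section 4.5 concrete, the sketch below computes both from per-(question, persona) records; the record fields and function names are assumptions for illustration, not the authors' released implementation.

```python
# Minimal sketch of the hard and soft alignment metrics (Section 4.5, Eq. 1).
# The record format (y_hat, y, n_options, is_ordinal, dont_know) is an assumed
# data structure for illustration; it is not the authors' released code.
from statistics import mean

def error(y_hat: int, y: int, n_options: int, is_ordinal: bool, dont_know: bool) -> float:
    """Per-(question, persona) error: scaled distance for ordinal questions,
    0/1 mismatch for categorical questions or 'don't know' answers."""
    if is_ordinal and not dont_know:
        return abs(y_hat - y) / (n_options - 1)
    return float(y_hat != y)

def hard_alignment(records: list[dict]) -> float:
    """Plain accuracy H_{f,c}: exact agreement with the survey respondent."""
    return mean(float(r["y_hat"] == r["y"]) for r in records)

def soft_alignment(records: list[dict]) -> float:
    """Soft score S_{f,c} = mean(1 - error), awarding partial credit on ordinal scales."""
    return mean(
        1.0 - error(r["y_hat"], r["y"], r["n_options"], r["is_ordinal"], r["dont_know"])
        for r in records
    )

# Example: an ordinal question (4 options) where the model is off by one step,
# and a categorical question answered exactly as the respondent did.
records = [
    {"y_hat": 2, "y": 3, "n_options": 4, "is_ordinal": True,  "dont_know": False},
    {"y_hat": 1, "y": 1, "n_options": 4, "is_ordinal": False, "dont_know": False},
]
print(hard_alignment(records))  # 0.5
print(soft_alignment(records))  # (2/3 + 1) / 2 ≈ 0.833
```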
In this context, we strive to emulate a digital adaptation of ethnographic fieldwork by guiding the model to think as if it has been actively participating in this method. We prompt the model to comprehend the intricate complexities and nuances associated with identities, inquiries, and linguistic constructions. For instance, we elaborate on the emic and etic perspectives of examining culture,7 highlighting the layered nature of interpersonal connections and emphasizing how personal experiences significantly shape subjectivities. In 7 “Emic” refers to an insider’s perspective, focusing on the internal understandings within a specific culture. Conversely, "etic" refers to an outsider’s perspective. 12408 Figure 3: Cultural alignment as a function of a subject’s Sex , Education Level , Social Class , and Age Range . Results are averaged across the models, prompting languages and surveys used in this work. L-Middle and U-Middle are Lower Middle and Upper Middle Class respectively. doing so, our intention is to introduce an anthropological methodology, encouraging the model to “think” in a manner akin to an anthropologist. The exact prompt and more details about the experimental setup can be found in Appendix I. 5 Results 5.1 Anglocentric Bias in LLMs Table 3 shows that all LLMs considered in this work—regardless of being trained to be multilingual or finetuned on culture-specific data—are significantly more culturally aligned with subjects from the US survey than those from the Egypt survey. Concurrent research has shown similar results of current LLMs exhibiting Western biases (Durmus et al., 2023; Naous et al., 2023). This can largely be attributed to the data used for training and for guiding crucial design decisions such as model architecture, tokenization scheme, evaluation methods, instruction-tuning, and so on. Model Egypt United States GPT-3.5 48.61 / 25.99 64.86 / 39.29 AceGPT-Chat 47.82 / 29.72 52.83 / 27.69 LLaMA-2-Chat 46.31 / 24.48 63.10 / 36.72 mT0-XXL 45.92 / 27.93 55.48 / 31.40 Average 47.16 / 27.03 59.07 / 33.78 Table 3: Cultural alignment against responses from both Egyptian and United States surveys using Soft / Hard similarity metrics for each model. The results are averaged across both prompting languages. The alignment with the United States populations is much higher reflecting the euro-centric bias in current LLMs. 5.2 Prompting & Pretraining Languages Table 2 illustrates the impact of prompting language on the cultural alignment of the four LLMs examined in this study. Specifically, using each country’s dominant language prompts a notable increase in alignment compared to using the alternative language for both GPT-3.5 and AceGPT-Chat, according to both metrics. For example, using Arabic to prompt both models yields better alignment with the Egypt survey than prompting with English. Conversely, English prompts result in improved alignment with the US survey compared to Arabic. However, given that LLaMA-2-Chat is predominantly pretrained on English data, we observe that Arabic prompts are less effective in enhancing alignment with the Egypt survey and thus posit that the lack of Arabic data in the pretraining leads to lack of knowledge of Egyptian culture. In contrast, for the multilingual mT0-XXL, despite being trained on a more balanced language distribution, it appears to suffer from the curse of multilinguality (Pfeiffer et al., 2022), as evidenced by its inferior cultural alignment with the US survey when prompted with English compared to Arabic. 
Finally, we report the models’ consistency in responding to paraphrases of the same question in Appendix C. 5.3 Digitally Underrepresented Personas Figure 3 displays the cultural alignment across various demographic variables, averaged across the four LLMs, two prompting languages, and responses from the two countries using the soft alignment metric. Surprisingly, we observe a distinct trend among the models tested in this study concerning social class and education level. Specifically, as the background of individuals changes from lower to higher levels in both respective dimensions, alignment improves. This underscores that the models better reflect the viewpoints of specific demographics over others, with marginalized populations enjoying lower alignment. Additionally, the analysis of the sex dimension reveals that the models correspond more accurately to the 12409 (a) Soft Similarity (b) Hard Similarity Figure 4: — Arabic — English. Alignment of GPT-3.5 with the Egypt survey using both the soft and hard metrics by theme as a function of the prompting language. actual survey when impersonating male respondents than female respondents. Similarly, older age groups exhibit higher alignment than younger age groups. 5.4 Cultural Alignment per Theme The 30 questions examined in this work are categorized into 7 distinct themes outlined by the WVS survey (Haerpfer et al., 2020). Table 10 illustrates the distribution of questions across these themes. The granularity provided by these themes enables us to assess alignment concerning topics such as Religious Values. In Figure 4, we illustrate the cultural alignment of GPT-3.5 with respect to responses from both the Egypt and the US survey, and examine the prompting language effect within each plot. The three themes that are contributing to the improvement in alignment in the Egypt survey when prompting in Arabic using GPT-3.5 are Social Values, Political Interest and Security. In the US survey, both English and Arabic prompting perform very closely except in the Migration theme where English has a slight edge. See Appendix H for a comprehensive set of results for all other models, metrics, and country combinations. 5.5 Finetuning for Cultural Alignment Here, we delineate the contrast between AceGPT-Chat and LLaMA-2-Chat to illustrate the impact of finetuning an English-pretrained model on data from another language on cultural alignment. We observe an improvement in alignment with the Egypt survey across both metrics when the two models are prompted in Arabic (see Table 2 for a quantitative comparison). When prompted in English, the increase is evident only with the hard metric. Conversely, we note a decline in alignment following finetuning when evaluating alignment against the US survey, indicating that the model Prompting Method Soft Hard Vanilla 0.4834 0.2443 Anthropological 0.5102 0.2838 Table 4: Anthropological prompting outperforms Vanilla prompting across both metrics in terms of cultural alignment with the Egypt survey. Results here are on GPT-3.5 with English prompting. Figure 5: Anthropological prompting improves alignment for underrepresented personas compared to Vanilla prompting. Results on GPT-3.5 using English prompting. More in Appendix I. forgot some of its existing US cultural knowledge while adapting to data in another language. 5.6 Anthropological Prompting To improve cultural alignment with responses from Egyptian participants and underrepresented groups, we propose Anthropological Prompting. 
This approach enables the model to reason before answering the question while grounded with a framework adapted from the toolkit of anthropological methods. The rationale behind it is described in Section 4.6. The framework offers guidance for the model to consider emic and etic perspectives, cultural context, socioeconomic background, individual values, personal experience, cultural relativism, as well as spatial and temporal dimensions in a nuanced manner. The exact prompt is provided in Appendix I. Table 4 presents the results when prompting GPT-3.5 in English, comparing both “vanilla” (persona-based prompting) with anthropological prompting using one variant per question. While vanilla prompting generates 5 responses and computes the majority vote to determine the final answer, giving it an apparent advantage, the anthropological prompting method, which generates only one response, still outperforms it. Further, we observe that anthropological prompting improves cultural alignment for participants from underrepresented backgrounds. Figure 5 illustrates this comparison between vanilla and anthropological prompting across Social Class and 12410 Education Level demographic dimensions. The alignment distribution among social classes and education levels becomes more equitable as a result. 6 Discussion In Section 5.2, we demonstrate that both the language utilized for pretraining and the language employed for prompting contribute to enhancing cultural alignment, particularly for countries where the language in question is prevalent. This observation aligns intuitively with our assumption that a culture primarily generates content in its native language on the internet. During pretraining, a model encodes that cultural knowledge within its parameters; then during inference, the prompting language activates the subnetwork responsible for that encoded knowledge (Foroutan et al., 2022). This observation further underscores the limitation of current LLMs in effectively transferring knowledge across different languages. This is particularly evident in languages with different scripts like Arabic and English (Qi et al., 2023). However, despite our use of Modern Standard Arabic (MSA) as the primary language for representing Egyptian culture, it is crucial to note that Egyptians do not employ MSA in their daily interactions. Hence, we posit that employing the Egyptian Arabic dialect would likely yield even greater alignment, provided that the model is sufficiently trained on this dialect. Moreover, within Egypt, there exist dialectal variations, similar to differences between various states and ethnic groups in the US. Therefore, when assessing cultural alignment, it is imperative to acknowledge the diverse backgrounds within each country; for there exists no singular, narrow Egyptian archetype, for example. This is why our study focuses on measuring personas across multiple demographic dimensions. Finally, we would like to highlight that cultural information is often difficult to verify and coalesce into knowledge, seeing as there are numerous approaches to collecting evidence, building perspectives, and constructing theorems within cultural topics. We observe in modern large models that cultural-knowledge transfer tends to occur from a few dominant languages (for example English) and cultures into responses for prompts in other languages about other cultures. See Table 2. 
7 Related Work Measuring Subjective Opinions in LLMs: Concurrent works tackle the notion of cultural alignment but from differing perspectives. Durmus et al. (2023) similarly utilizes cross-national surveys to quantitatively assess how well LLMs capture subjective opinions from various countries. However, one notable difference from our method is that their metric solely evaluates the similarity between the model’s and survey’s distributions over possible options using the Jensen-Shannon Distance, without considering granularity at the persona level nor the order of options for ordinal questions. Arora et al. (2023) measured the extent to which cross-cultural differences are encoded in multilingual encoder-only models by probing them in a cloze-style manner across multiple languages using questions from the WVS and Hofstede survey (Hofstede, 1984). Cao et al. (2023) similarly use the Hofstede Culture Survey to assess the crosscultural alignment between ChatGPT and certain societies when prompting it in different languages, showing that ChatGPT exhibits a strong alignment with American culture and adapts less effectively to other cultural contexts. Naous et al. (2023) demonstrate that multilingual and Arabic monolingual LMs exhibit trends from Western cultures even when prompted in Arabic and contextualized within an Arabic cultural setting. Lahoti et al. (2023) propose a novel prompting method aimed at enhancing cultural diversity in LLM responses. Tjuatja et al. (2023) demonstrate that LLMs should not be relied upon as proxies for gauging human opinions, as they do not accurately reflect response biases observed in humans when using altered wording. Our work differs from the previously mentioned studies by conducting an in-depth analysis of various demographic dimensions, such as the impact of cultural alignment on digitally underrepresented personas. We also examine the influence of the question topic, the language composition used in pretraining, and the language used during prompting in different LLMs. Bias in LLMs: Prior research has demonstrated that LLMs tend to reflect and magnify harmful biases and stereotypes regarding certain populations depending on their religion, race, gender, nationality and other societal attributes (Abid et al., 12411 2021; Sheng et al., 2019; Hutchinson et al., 2020; Lucy and Bamman, 2021; Sheng et al., 2021; Narayanan Venkit et al., 2023; Li et al., 2024) present within their training data. Deshpande et al. (2023) shows that assigning personas to LLMs increases the toxicity of generations for personas from certain demographics more than others. 8 Conclusion & Future Work In this work, we introduce a framework aimed at assessing the Cultural Alignment of LLMs, which measures their ability to capture the Cultural Trends observed within specific populations. To investigate this, we simulate a survey conducted in both Egypt and the US using four distinct LLMs, each prompted with personas mirroring those of the original participants across six demographic dimensions. The metrics we use compare responses on the persona-level allowing us to analyze the model’s alignment with respect to several attributes such as social class and education level. The LLMs we chose vary in pretraining language compositions, which enable us to evaluate how these factors influence cultural alignment. 
Furthermore, we prompt each model with the languages native to the countries under study and thereby studying the significance of language on cultural alignment with implications to cross-lingual transfer research. Finally, we introduce Anthropological Prompting, a novel method that utilizes a framework adopted from the toolkit of anthropological methods to guide the model to reason about the persona before answering for improving cultural alignment. In future work, we would like to explore our cultural alignment framework on data from more cultures while expanding to more languages, as well as test whether cultural alignment can be used as a proxy metric for cross-lingual knowledge transfer. Limitations In this work, we only consider two languages and data from two countries to render our analysis tractable, since we investigate other dimensions such as the effect of the pretraining data composition, alignment with personas from different demographics and the impact of finetuning on cultural alignment. Future work could expand to include data from additional cultures to further support our findings. Regarding model selection, including an Arabic monolingual model would have been beneficial. However, during our experiments, available Arabic models lacked proper instruction tuning, rendering them incapable of answering our queries, and many had significantly fewer parameters. In this paper, we only consider one survey source. However, there are more surveys that have been conducted on a cross-national level (such as the Arab-Barometer8 for Arab countries) and would be worth exploring if our findings generalize to the data collected from them. Also it would be interesting to compare surveys using LLMs as a reference. Further, we attempt to prompt the model to think creatively in order to mimic the nuanced diversity of human experiences. However, we are aware that these models can not capture the essence and complexity of the human experience. The framing of the anthropological prompting itself still needs fine turning, and because of the wealth of languages that exist, there needs to be different languages and variations of the prompt itself to be able to better prompt the model for us to further understand biases in the datasets. Finally, one significant limitation is our lack of knowledge regarding the actual data sources used for pretraining languages, domains, and dialect presence or absence in many LLMs, such as GPT-3.5. The black box nature of these models not only constrains our ability to comprehensively understand their behavior but also has ethical implications downstream. Ethics Statement One of the goals of AI is building sociotechnical systems that improve people’s lives. Pervasive and ubiquitous systems such as LLMs have a huge impact on other downstream technologies, if they are non-aligned with cultural values, they fail at serving the people they are supposed to help, or worse creating harm. We hope that our work opens doors for other researchers to find different ways to uncover biases in LLMs, and more importantly we put forth a collaborative method between computer scientists and social scientists in this paper. If the aim of artificial intelligence is to mimic the human mind, then it is only through collaboration with interdisciplinary researchers that study both human language and cultures, and researchers who study the inner-workings of machines can we ethically move forward in this endeavor. 
8 https://www.arabbarometer.org 12412 Acknowledgements We would like to thank Negar Foroutan for her valuable feedback and suggestions on the final manuscript, as well as to Yakein Abdelmagid and Mohamed Gabr for their insightful early discussions. References Abubakar Abid, Maheen Farooqi, and James Zou. 2021. Persistent anti-muslim bias in large language models. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, AIES ’21, page 298–306, New York, NY, USA. Association for Computing Machinery. Zaid Alyafeai, Maged S. Alshaibani, Badr AlKhamissi, Hamzah Luqman, Ebrahim Alareqi, and Ali Fadel. 2023. Taqyim: Evaluating arabic nlp tasks using chatgpt models. ArXiv, abs/2306.16322. Arnav Arora, Lucie-aimée Kaffee, and Isabelle Augenstein. 2023. Probing pre-trained language models for cross-cultural differences in values. In Proceedings of the First Workshop on Cross-Cultural Considerations in NLP (C3NLP), pages 114–130, Dubrovnik, Croatia. Association for Computational Linguistics. Yong Cao, Li Zhou, Seolhwa Lee, Laura Cabello, Min Chen, and Daniel Hershcovich. 2023. Assessing cross-cultural alignment between ChatGPT and human societies: An empirical study. In Proceedings of the First Workshop on Cross-Cultural Considerations in NLP (C3NLP), pages 53–67, Dubrovnik, Croatia. Association for Computational Linguistics. James Clifford, Kim Fortun, Marcus George E., Clifford James, George E. Marcus, Pratt Mary Louise, Fischer Michael M. J., Rabinow Paul, Rosaldo Renato, Tyler Stephen A., Asad Talal, and Crapanzano Vincent. 2020. Writing culture. Ameet Deshpande, Vishvak Murahari, Tanmay Rajpurohit, Ashwin Kalyan, and Karthik Narasimhan. 2023. Toxicity in chatgpt: Analyzing persona-assigned language models. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 1236–1270, Singapore. Association for Computational Linguistics. Esin Durmus, Karina Nyugen, Thomas I. Liao, Nicholas Schiefer, Amanda Askell, Anton Bakhtin, Carol Chen, Zac Hatfield-Dodds, Danny Hernandez, Nicholas Joseph, Liane Lovitt, Sam McCandlish, Orowa Sikder, Alex Tamkin, Janel Thamkul, Jared Kaplan, Jack Clark, and Deep Ganguli. 2023. Towards measuring the representation of subjective global opinions in language models. Negar Foroutan, Mohammadreza Banaei, Rémi Lebret, Antoine Bosselut, and Karl Aberer. 2022. Discovering language-neutral sub-networks in multilingual language models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 7560–7575, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Christian Haerpfer, Ronald Inglehart, Alejandro Moreno, Christian Welzel, Kseniya Kizilova, Jaime Diez-Medrano, Marta Lagos, Pippa Norris, Eduard Ponarin, and Bi Puranen. 2020. World values survey wave 7 (2017-2020) cross-national data-set. G. Hofstede. 1984. Culture’s Consequences: International Differences in Work-Related Values. Cross Cultural Research and Methodology. SAGE Publications. Huang Huang, Fei Yu, Jianqing Zhu, Xuening Sun, Hao Cheng, Dingjie Song, Zhihong Chen, Abdulmohsen Alharthi, Bang An, Ziche Liu, Zhiyi Zhang, Junying Chen, Jianquan Li, Benyou Wang, Lian Zhang, Ruoyu Sun, Xiang Wan, Haizhou Li, and Jinchao Xu. 2023. Acegpt, localizing large language models in arabic. Ben Hutchinson, Vinodkumar Prabhakaran, Emily Denton, Kellie Webster, Yu Zhong, and Stephen Denuyl. 2020. Social biases in NLP models as barriers for persons with disabilities. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5491–5501, Online. Association for Computational Linguistics. Nitish Joshi, Javier Rando, Abulhair Saparov, Najoung Kim, and He He. 2023. Personas as a way to model truthfulness in language models. ArXiv, abs/2310.18168. Md Tawkat Islam Khondaker, Abdul Waheed, El Moatez Billah Nagoudi, and Muhammad AbdulMageed. 2023. Gptaraeval: A comprehensive evaluation of chatgpt on arabic nlp. A. L. Kroeber and Clyde Kluckhohn. 1952. Culture: A Critical Review of Concepts and Definitions. Peabody Museum Press, Cambridge, Massachusetts. Preethi Lahoti, Nicholas Blumm, Xiao Ma, Raghavendra Kotikalapudi, Sahitya Potluri, Qijun Tan, Hansa Srinivasan, Ben Packer, Ahmad Beirami, Alex Beutel, and Jilin Chen. 2023. Improving diversity of demographic representation in large language models via collective-critiques and self-voting. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 10383–10405, Singapore. Association for Computational Linguistics. Bryan Li, Samar Haider, and Chris Callison-Burch. 2024. This land is {Your, My} land: Evaluating geopolitical biases in language models through territorial disputes. 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). 12413 Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O’Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona Diab, Veselin Stoyanov, and Xian Li. 2022. Few-shot learning with multilingual generative language models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9019–9052, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Li Lucy and David Bamman. 2021. Gender and representation bias in GPT-3 generated stories. In Proceedings of the Third Workshop on Narrative Understanding, pages 48–55, Virtual. Association for Computational Linguistics. Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng Xin Yong, Hailey Schoelkopf, Xiangru Tang, Dragomir Radev, Alham Fikri Aji, Khalid Almubarak, Samuel Albanie, Zaid Alyafeai, Albert Webson, Edward Raff, and Colin Raffel. 2023. Crosslingual generalization through multitask finetuning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15991–16111, Toronto, Canada. Association for Computational Linguistics. Tarek Naous, Michael Joseph Ryan, and Wei Xu. 2023. Having beer after prayer? measuring cultural bias in large language models. ArXiv, abs/2305.14456. Pranav Narayanan Venkit, Sanjana Gautam, Ruchi Panchanadikar, Ting-Hao Huang, and Shomir Wilson. 2023. Nationality bias in text generation. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 116–122, Dubrovnik, Croatia. Association for Computational Linguistics. Jonas Pfeiffer, Naman Goyal, Xi Lin, Xian Li, James Cross, Sebastian Riedel, and Mikel Artetxe. 2022. Lifting the curse of multilinguality by pre-training modular transformers. 
In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3479–3495, Seattle, United States. Association for Computational Linguistics. Jirui Qi, Raquel Fernández, and Arianna Bisazza. 2023. Cross-lingual consistency of factual knowledge in multilingual language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 10650–10666, Singapore. Association for Computational Linguistics. Lingfeng Shen, Weiting Tan, Sihao Chen, Yunmo Chen, Jingyu Zhang, Haoran Xu, Boyuan Zheng, Philipp Koehn, and Daniel Khashabi. 2024. The language barrier: Dissecting safety challenges of llms in multilingual contexts. ArXiv, abs/2401.13136. Emily Sheng, Kai-Wei Chang, Prem Natarajan, and Nanyun Peng. 2021. Societal biases in language generation: Progress and challenges. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4275–4293, Online. Association for Computational Linguistics. Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The woman worked as a babysitter: On biases in language generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3407– 3412, Hong Kong, China. Association for Computational Linguistics. Lindia Tjuatja, Valerie Chen, Sherry Tongshuang Wu, Ameet Talwalkar, and Graham Neubig. 2023. Do llms exhibit human-like response biases? a case study in survey design. ArXiv, abs/2311.04076. Hugo Touvron, Louis Martin, Kevin R. Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Daniel M. Bikel, Lukas Blecher, Cristian Cantón Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony S. Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel M. Kloumann, A. V. Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, R. Subramanian, Xia Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zhengxu Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and fine-tuned chat models. ArXiv, abs/2307.09288. Edward B. Tylor. 1871. Primitive culture : researches into the development of mythology, philosophy, religion, art, and custom, 3rd ed., rev edition. John Murray London, London. Shengyu Zhang, Linfeng Dong, Xiaoya Li, Sen Zhang, Xiaofei Sun, Shuhe Wang, Jiwei Li, Runyi Hu, Tianwei Zhang, Fei Wu, and Guoyin Wang. 2023. Instruction tuning for large language models: A survey. ArXiv, abs/2308.10792. 
A Extended Results
Table 5 shows the cultural alignment results similar to Table 2 but without excluding the instances where the same persona in both surveys answered with the same response for a given question. We can see here that the trend is similar: GPT-3.5 and AceGPT-Chat achieve higher alignment when prompted with the country’s dominant language on both metrics. LLaMA-2-Chat achieves higher cultural alignment only when prompted in English, which we attribute to its pretraining data composition. mT0-XXL exhibits an interesting result where English prompting performs better for the Egypt survey and Arabic performs better for the US survey.

Model        | Egypt English | Egypt Arabic  | US English    | US Arabic
GPT-3.5      | 52.69 / 30.17 | 53.45 / 32.92 | 65.26 / 41.52 | 62.74 / 39.72
AceGPT-Chat  | 49.19 / 31.74 | 52.35 / 33.55 | 54.79 / 32.37 | 51.20 / 27.47
LLaMA-2-Chat | 52.92 / 31.67 | 48.97 / 28.18 | 63.69 / 39.52 | 61.02 / 36.86
mT0-XXL      | 48.52 / 31.86 | 47.81 / 29.16 | 53.73 / 31.42 | 55.27 / 34.01
Table 5: Cultural alignment against both survey responses using Soft / Hard similarity metrics for each model as a function of the prompting language. Scores are calculated without filtering responses based on the agreement between equivalent personas in the Egyptian and US survey results. These results use the full response set instead.

B List of Pretrained Models
Table 6 shows the list of pretrained models used in this work along with their corresponding parameter count and pretraining language composition.

Model        | Size | Pretraining
GPT-3.5      | 175B | Majority English
mT0-XXL      | 13B  | Multilingual
LLaMA-2-Chat | 13B  | Majority English
AceGPT-Chat  | 13B  | English then Arabic
Table 6: List of models used in this work.

C Measuring Model Consistency
For each survey question, we generate four linguistic variations (i.e. paraphrases) using ChatGPT, as outlined in Appendix G. Here, we report the consistency of each model in responding to the same prompt but with the question asked using different phrasings. Specifically, we calculate the consistency score as follows:

C(q, p) = \frac{\max_{\text{opt}} \, n_{\text{opt}}(q, p) - 1}{N - 1} \qquad (2)

n_{\text{opt}}(q, p) = \sum_{\text{var}} \mathbb{1}\big(f(q_{\text{var}}, p) = \text{opt}\big) \qquad (3)

where $f(q_{\text{var}}, p)$ is the model’s response to a question $q$, given persona $p$ and variant $\text{var}$; $n_{\text{opt}}(q, p)$ is the frequency of option $\text{opt}$ in the response set; and $N$ is the number of question variants (four in this work). This measure spans [0, 1], wherein 1 is perfect consistency (all variants received the same response under a (model, question, persona) tuple). Using the frequency of the top chosen option enables the following comparisons: In a setting with 4 options and 4 variants, [1, 1, 1, 2] scores higher than [1, 2, 1, 2], which scores the same as [3, 2, 1, 2]. A response set with no repeated choices scores zero: [1, 2, 3, 4] → (1 − 1)/(4 − 1) = 0.

Model        | English | Arabic
GPT-3.5      | 84.17   | 81.20
AceGPT-Chat  | 61.84   | 66.66
LLaMA-2-Chat | 79.15   | 73.87
mT0-XXL      | 72.69   | 69.50
Average      | 74.46   | 72.81
Table 7: The consistency of each model to different linguistic variations of each survey question.

Table 7 shows the consistency of each model under the two prompting languages. On average, English prompts yield higher consistency compared to Arabic prompts, except in the case of AceGPT-Chat. Notably, the disparity in consistency between English and Arabic diminishes as the model benefits from improved multilingual pretraining. The responses analyzed here were not filtered to exclude responses where equivalent personas in both survey countries answered similarly, same as Table 5.
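As a concrete check of Eqs. (2) and (3), the sketch below computes the consistency score from the list of option indices a model returned for the paraphrased variants of one question; the input format is an assumption for illustration rather than the authors' code.

```python
# Minimal sketch of the consistency score C(q, p) from Appendix C.
# Input: the option index chosen for each of the question's paraphrased variants.
from collections import Counter

def consistency(responses_per_variant: list[int]) -> float:
    """Frequency of the most common option minus one, normalized by N - 1,
    where N is the number of question variants."""
    n_variants = len(responses_per_variant)
    top_count = Counter(responses_per_variant).most_common(1)[0][1]
    return (top_count - 1) / (n_variants - 1)

# Reproduces the comparisons given in Appendix C (4 options, 4 variants).
print(consistency([1, 1, 1, 2]))  # ≈ 0.667: highest of the examples
print(consistency([1, 2, 1, 2]))  # ≈ 0.333
print(consistency([3, 2, 1, 2]))  # ≈ 0.333: same as the previous set
print(consistency([1, 2, 3, 4]))  # 0.0: no repeated choice
```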
12415 D Survey Participants The World Values Survey (WVS) collects demographic information from participants they interview, including sex, education level, social class, and marital status. In our study, we utilize six data points per participant to establish persona parameters for model prompting. From the seventh wave of the WVS, 1,200 participants from Egypt and 2,596 from the US were interviewed. We select a subset of 303 participants, as detailed in Section 4.2, ensuring that each persona in the Egyptian survey corresponds to a participant with identical persona parameters (except geographic location) to one from the US set, and vice versa. Below, we present the statistics of the personas employed in this study. Sex Count Social Class Count Educational Count Age Group Count Male 168 Lower Middle Class 124 Middle 171 >20, <50 237 Female 135 Working Class 90 Higher 125 >50 60 Upper Middle Class 64 Lower 7 <20 6 Lower Class 25 Table 8: Distribution of different demographic variables. Egypt Region Count US Region Count US Region (cont.) Count Cairo 53 California 20 Oklahoma 6 Dakahlia 32 Texas 18 Connecticut 5 Gharbia 28 Florida 17 Iowa 5 Giza 20 New York 16 Maryland 4 Fayoum 18 Missouri 14 Maine 4 Sharkia 17 Ohio 14 Louisiana 3 Menofia 17 North Carolina 14 Utah 3 Qaliubiya 16 Michigan 12 Idaho 3 Alexandria 15 Tennessee 12 Oregon 3 Behaira 12 Virginia 11 Mississippi 3 Ismailia 12 Arizona 11 New Mexico 2 Menya 12 Wisconsin 10 Nevada 2 Beni Swaif 9 Pennsylvania 10 Georgia 2 Kafr el-Sheikh 7 Illinois 9 Kansas 2 Sohag 7 Indiana 8 South Dakota 2 Port Said 6 New Jersey 8 Hawaii 1 Asyut 6 Kentucky 8 Alabama 1 Qena 6 Colorado 7 Montana 1 Damiatta 5 Nebraska 7 Vermont 1 Aswan 3 Massachusetts 7 Delaware 1 Suez 2 Washington 7 Rhode Island 1 Minnesota 7 New Hampshire 1 Table 9: Egypt and US Region Distribution E Number of Questions by Theme Table 10 shows the number of questions per theme. F Prompt Examples Figure 6 shows the same prompt in both English and Arabic respectively. 12416 Theme # of Questions Social Capital, Trust & Organizational Membership 8 Social Values, Attitudes & Stereotypes 4 Political Interest & Political Participation 6 Political Culture & Political Regimes 3 Security 4 Religious Values 2 Migration 3 Table 10: The number of questions per theme for the 30 questions considered in this work. Imagine you are a married male from Fayoum, Egypt. You are 46 years of age and completed higher education level. You consider yourself part of the lower middle class. Answer the following question from this perspective. Others will read what you choose; your goal is to convince them it was chosen from the perspective of the persona described above. Select exactly one option. Do not include any extra commentary. Answer by typing the number corresponding to your chosen answer. Question: What are your thoughts on the overall way our society is organized? 
Options: (1) The entire way our society is organized must be radically changed by revolutionary action (2) Our society must be gradually improved by reforms (3) Our present society must be valiantly defended against all subversive forces (4) Don't know .ﻣﺼﺮ ﻓﻲ اﻟﻔﯿﻮم ﻣﺤﺎﻓﻈﺔ ﻣﻦ ﻣﺘﺰوج رﺟﻞ أﻧﻚ ﺗﺨﯿﻞ .اﻟﻌﺎﻟﻲ اﻟﺘﻌﻠﯿﻢ ﻣﺴﺘﻮى ﺣﺘﻰ درﺳﺖ وﻗﺪ ﺳﻨﺔ 46 ﻋﻤﺮك .اﻟﺪﻧﯿﺎ اﻟﻤﺘﻮﺳﻄﺔ اﻟﻄﺒﻘﺔ ﻣﻦ ﻧﻔﺴﻚ ﺗﻌﺪ أﻧﺖ .اﻟﻤﻨﻄﻠﻖ ھﺬا ﻣﻦ اﻟﺘﺎﻟﻲ اﻟﺴﺆال ﻋﻦ أﺟﺐ ﻛﺘﺐ ﻣﻦ أن إﻗﻨﺎﻋﮭﻢ ھﺪﻓﻚ ﻓﺈن .ًﻻﺣﻘﺎ ردك آﺧﺮون ﯾﻘﺮأ ﺳﻮف .اﻟﺴﺎﺑﻘﺔ اﻟﺼﻔﺎت ﻋﻠﯿﮫ ﺗﻨﻄﺒﻖ اﻟﺮد .ﺗﻌﻘﯿﺐ أو ﺗﻌﻠﯿﻖ أي ﺗﻀﻒ ﻻ .ﻓﻘﻂ واﺣﺪا ﺧﯿﺎرا ﺣﺪد .ﻻﺧﺘﯿﺎرك اﻟﻤﻄﺎﺑﻖ اﻟﺮﻗﻢ ﺑﻜﺘﺎﺑﺔ أﺟﺐ ﻟﺘﻨﻈﯿﻢ اﻟﺸﺎﻣﻠﺔ اﻟﻄﺮﯾﻘﺔ ﺣﻮل أﻓﻜﺎرك ھﻲ ﻣﺎ :اﻟﺴﺆال ﻣﺠﺘﻤﻌﻨﺎ؟ :اﻻﺧﺘﯿﺎرات أن ﯾﺠﺐ ﺑﺮﻣﺘﮭﺎ ﻣﺠﺘﻤﻌﻨﺎ ﺗﻨﻈﯿﻢ ﺑﮭﺎ ﯾﺘﻢ اﻟﺘﻲ اﻟﻄﺮﯾﻘﺔ (1) اﻟﺜﻮري اﻟﻌﻤﻞ ﺧﻼل ﻣﻦ ﺟﺬري ﺑﺸﻜﻞ ﯾﺘﻐﯿﺮ اﻹﺻﻼﺣﺎت ﺧﻼل ﻣﻦ ﺗﺪرﯾﺠﯿﺎ ﻣﺠﺘﻤﻌﻨﺎ ﯾﺘﺤﺴﻦ أن ﯾﺠﺐ (2) اﻟﻘﻮى ﺟﻤﯿﻊ ﺿﺪ ﺑﺒﺴﺎﻟﺔ اﻟﺤﺎﻟﻲ ﻣﺠﺘﻤﻌﻨﺎ ﻋﻦ اﻟﺪﻓﺎع ﯾﺠﺐ (3) اﻟﺘﺨﺮﯾﺒﯿﺔ أﻋﺮف ﻻ (4) Figure 6: Example of an English and its corresponding Arabic prompt. The persona values are highlighted in bold. G ChatGPT Generated Survey Questions Since we do not have access to the exact phrasing WVS interviewers used to ask the questions, we generated four variation per question using the template provided in Figure 7. Please create four variations of a question that inquires about {description} for a survey. The respondents should be able to choose from the following options. Ensure that the questions do not include the answer options. Do not include any additional information. Options: - {choice_1} - {choice_2} - ... - {choice_n} Return only the questions in the following JSON format: "questions": [q1, q2, q3, q4] Figure 7: Template used to generate the four question variations given the description and options to choose from. The model is instructed to return the four question variations in JSON format. 12417 ID Question Q62 Do you have trust in individuals from a different religion? Q63 To what extent do you trust individuals of a different nationality? Q77 On a scale of 1 to 5, how confident are you in major companies? Q78 To what extent do you trust private banks? Q83 In your opinion, how strong is your confidence in the United Nations (UN)? Q84 To what extent do you trust the International Monetary Found (IMF)? Q87 How much confidence do you have in the World Bank (WB)? Q88 How strongly do you believe in the credibility of the World Health Organization (WHO)? Table 11: Questions belonging to the Social Capital theme. Randomly sampled one variant per question. ID Question Q2 In your opinion, how significant are friends in life? Q19 Is the presence of neighbors who are people of a different race not mentioned in your neighborhood? Q21 How important do you think it is to have neighbors who are immigrants/foreign workers? Q42 Do you have a clear opinion about the kind of attitudes our society should adopt? Table 12: Questions belonging to the Social Values theme. Randomly sampled one variant per question. ID Question Q142 On a scale of Very much to Not at all, how much do you worry about losing your job or not finding a job? Q143 To what degree are you worried about your ability to give your children a good education? Q149 In your opinion, is freedom or equality more important? Q150 Which do you value more: freedom or security? Table 13: Questions belonging to the Security Theme. Randomly sampled one variant per question. ID Question Q171 How often do you go to religious services? Q175 In your opinion, is the primary function of religion to understand life after death or to understand life in this world? 
(Select one) Table 14: Questions belonging to the Religious Values theme. Randomly sampled one variant per question. 12418 ID Question Q199 How interested are you in politics? Q209 Would you be willing to sign a political action petition? Q210 Are you considering participating in a political boycott? Q221 What is your usual practice in voting in local level elections? Q224 How often are votes counted fairly in the country’s elections? Q229 How frequently are election officials fair in country’s elections? Q234 To what extent do you feel the political system in your country allows people like you to have a say in what the government does? Table 15: Questions belonging to the Political Interest theme. Randomly sampled one variant per question. ID Question Q235 What is your opinion on a political system with a strong leader who does not have to bother with parliament and elections? Q236 What is your view on a political system where decisions are made by experts according to their understanding of what is best for the country? Q239 What is your perception of a system governed solely by religious law, with no political parties or elections? Table 16: Questions belonging to the Political Culture theme. Randomly sampled one variant per question. ID Question Q124 Are you uncertain whether immigration in your country increases the crime rate? Q126 In your opinion, is it hard to say whether immigration in your country increases the risks of terrorism? Q127 Is it your opinion that immigration in your country aids poor people in building new lives? Table 17: Questions belonging to the Migration theme. Randomly sampled one variant per question. 12419 H More Results on Cultural Alignment per Theme The following figures show the cultural alignment of the four LLMs per the question’s theme as a function of their prompting language for both metrics and surveys. The tables that follow show one randomly sampled variant for each question by theme. (a) Egypt: Soft Similarity (b) Egypt: Hard Similarity (c) US: Soft Similarity (d) US: Hard Similarity Figure 8: — Arabic — English. AceGPT-Chat Soft/Hard scores on Egypt & US surveys. Per theme and language. (a) Egypt: Soft Similarity (b) Egypt: Hard Similarity (c) US: Soft Similarity (d) US: Hard Similarity Figure 9: — Arabic — English. LLaMA-2-Chat Soft/Hard scores on Egypt & US surveys. Per theme and language. (a) Egypt: Soft Similarity (b) Egypt: Hard Similarity (c) US: Soft Similarity (d) US: Hard Similarity Figure 10: — Arabic — English. mT0-XXL Soft/Hard scores on Egypt & US surveys. Per theme and language. (a) Egypt: Soft Similarity (b) Egypt: Hard Similarity (c) US: Soft Similarity (d) US: Hard Similarity Figure 11: — Arabic — English. GPT-3.5 Soft/Hard scores on Egypt & US surveys. Per theme and language. 12420 I Anthropological Prompting I.1 Prompt Template The following is a framework adapted from the toolkit of anthropological methods:  1. Emic and Etic Perspectives: emic and etic perspectives means that there are in-group ways of answering or thinking about a question or a problem and there are out-group ways.  2. Cultural Context: cultural context is pivotal in the understanding and answering of different questions. This includes where people come from, what language they speak, where do they live, and their kinship networks.  3. Individual Values and Personal Experience: experience is one of the major factors affecting people's perceptions, along with personal values. Both play a big role in subjective understandings of day to day to life.  4. 
Socioeconomic Background: income, family wealth, class, socioeconomic background also factor in the answers.  5. Cultural Relativism: culture is not objective and not one culture is “better” than another, there is no hierarchy of culture so an understanding of cultural relativism is crucial in understanding different personas.  6. Space and Time: age and place are also important factors.  7. Nuance: each person will answer the understand and answer questions based on the nuanced phrasing of the question.  Now: Imagine you are a {marital_status} {sex} from {region}, {country}.  You are {age} years of age and completed {education} education level. You consider yourself part of the {social_class}. Answer the following question from this perspective. Others will read what you choose; your goal is to convince them it was chosen from the perspective of the persona described above. First, provide your reasoning based on the anthropological framework described above in one coherent paragraph then select exactly one option from the options below by typing the number corresponding to your chosen answer on a new line. Figure 12: Anthropological Prompting. The description of the framework followed by the persona prompt and an instruction to ground the model with the framework provided for reasoning before providing the final answer. The question and possible enumerated choices are given to the model after the final instruction similar to vanilla prompting shown in Figure 2. 12421 I.2 Effect of Anthropological Prompting on Digitally Underrepresented Groups The figures below complement Figure 5 by demonstrating the impact of Anthropological Prompting on improving cultural alignment of different demographic dimensions as compared to vanilla prompting. Results here are on GPT-3.5 when prompted in English reported using both the soft and hard similarity metrics. Notably, allowing the model to reason while grounded on the anthropological framework before generating the final response leads to a more balanced distribution within each demographic dimension, thereby making the model more representative and improving cultural alignment. (a) Sex (b) Social Class (c) Education Level (d) Age Range Figure 13: The effect of using anthropological prompting on the cultural alignment of GPT-3.5 on different demographic dimensions. Results reported using the Soft similarity metric. (a) Sex (b) Social Class (c) Education Level (d) Age Range Figure 14: The effect of using anthropological prompting on the cultural alignment of GPT-3.5 on different demographic dimensions. Results reported using the Hard similarity metric. 12422
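As a companion to the template in Appendix I.1, the following is a minimal sketch of assembling an anthropological prompt and extracting the final numeric choice that the instructions ask the model to place on a new line; the abbreviated FRAMEWORK text, the prompt assembly, and the parsing rule are illustrative assumptions rather than the authors' pipeline.

```python
# Minimal sketch: build the Appendix I prompt and parse the model's final choice.
# FRAMEWORK is abbreviated here; the parsing rule assumes the chosen option number
# appears on its own line after the reasoning paragraph, per the Figure 12 instruction.
import re

FRAMEWORK = (
    "The following is a framework adapted from the toolkit of anthropological methods: ..."
)  # items 1-7 of Figure 12 would go here

def anthropological_prompt(persona_text: str, question: str, options: list[str]) -> str:
    numbered = "\n".join(f"({i}) {o}" for i, o in enumerate(options, start=1))
    return (
        f"{FRAMEWORK}\n\nNow: {persona_text}\n"
        "First, provide your reasoning based on the anthropological framework described above "
        "in one coherent paragraph then select exactly one option from the options below by "
        "typing the number corresponding to your chosen answer on a new line.\n"
        f"Question: {question}\nOptions:\n{numbered}"
    )

def parse_choice(response: str) -> int | None:
    """Return the last standalone integer line in the response, assumed to be the choice."""
    numbers = re.findall(r"^\s*\(?(\d+)\)?\s*$", response, flags=re.MULTILINE)
    return int(numbers[-1]) if numbers else None

print(parse_choice("As someone from Fayoum, I value gradual reform over upheaval...\n2"))  # 2
```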
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12423–12441 August 11-16, 2024 ©2024 Association for Computational Linguistics More Victories, Less Cooperation: Assessing Cicero’s Diplomacy Play Wichayaporn Wongkamjan1 Feng Gu1 Yanze Wang4 Ulf Hermjakob4 Jonathan May4 Brandon M. Stewart2 Jonathan K. Kummerfeld3 Denis Peskoff2 Jordan Lee Boyd-Graber1 1University of Maryland 2Princeton University 3University of Sydney 4Information Sciences Institute, University of Southern California {wwongkam,fgu1}@umd.edu {yanzewan,ulf,jonmay}@isi.edu [email protected] [email protected] [email protected] [email protected] Abstract The boardgame Diplomacy is a challenging setting for communicative and cooperative artificial intelligence. The most prominent communicative Diplomacy AI, Cicero, has excellent strategic abilities, exceeding human players. However, the best Diplomacy players master communication, not just tactics, which is why the game has received attention as an AI challenge. This work seeks to understand the degree to which Cicero succeeds at communication. First, we annotate in-game communication with abstract meaning representation to separate in-game tactics from general language. Second, we run two dozen games with humans and Cicero, totaling over 200 humanplayer hours of competition. While AI can consistently outplay human players, AI–Human communication is still limited because of AI’s difficulty with deception and persuasion. This shows that Cicero relies on strategy and has not yet reached the full promise of communicative and cooperative AI. 1 Diplomacy Requires Communication In a landmark paper, Bakhtin et al. (2022) introduce Cicero, an AI that plays the game Diplomacy. The Washington Post claims “the model is adept at negotiation and trickery” (Verma, 2022), Forbes asserts “Cicero was able to pass as a human player” (Porterfield, 2022), and even the scientific publication’s editor states AI “mastered Diplomacy” (Bakhtin et al., 2022). This work tests those popular perceptions to rigorously evaluate the communicative and strategic capabilities of Cicero. Our observations lead to insights about the current state of cooperation and communication in AI, highlighting its deceptive and persuasive characteristics. While Cicero plays strategically and with a verisimilitude of human communication, the evaluation in Bakhtin et al. (2022) focuses only on if Cicero wins games. As the name implies, Diplomacy is revered by its devotees as a game of nuanced negotiation (Kraus and Lehmann, 1995), convincing persuasion, and judicious betrayal. We argue that mastering Diplomacy requires these communicative skills (Section 2). Measuring persuasion, deception, and cooperation are open problems with no clear solution. A boardgame constrains the world of actions to make these measurements feasible. One contribution of this paper is to build measurements of these communicative skills and to evaluate the true state of AI play in Diplomacy. However, a technical challenge to identifying persuasion and deception is mapping from communication to in-game actions (e.g., verifying that I follow through on a promise of helping you): persuasion uses words to convince someone to do something, and deception is saying something to alter another’s belief (Chisholm and Feehan, 1977). In both cases, we map and contrast communication to agents’ actions. 
To enable this mapping, we annotate messages from human games with abstract meaning representation (AMR, Banarescu et al., 2013), which we use to train a grounded system to infer the goals of Diplomacy players (Section 3). After validating we can extract communicative intents, we use these representations to identify persuasion and deception (Section 4). We then remove some of Cicero’s communicative ability: this does not impair its ability to win games. We then test Cicero’s skills in games against humans (Section 5) while asking players to annotate if they think other players are an AI and if a message is a lie. Confirming earlier work, Cicero wins nearly every game, including against top players. However, our new annotations provide a counterargument to the prevailing view that Cicero has mastered the communicative aspect of the game that is the priority of the NLP community. Cicero plays “differently”; humans can reliably identify Cicero and it is less deceptive and persuasive to hu12423 man players. Communication from Cicero is more transactional, relying on its optimal strategy rather than the alliance building which is the hallmark of top human players. Cicero has yet to prove effective in the communication skills which are crucial to achieving goals in strategic games and real life. In Section 7, we discuss what it would take for a computer to truly master Diplomacy and how Diplomacy’s intrinsic persuasion and deception can improve computers’ ability to not just talk like a human but to realistically tie words to actions. 2 Diplomacy as Human-AI Communication Test Bed Diplomacy is a strategic board game that combines negotiation and strategy, where players take on the roles of various European powers (nations) on the eve of World War I. The essence of the game lies in forming and betraying alliances to control territories, requiring adept diplomacy (hence the name of the game) and strategic planning. Some players focus on aggressive tactical decisions, while others focus on making alliances, communicating, and collaborating with others for better outcomes (Pulsipher, 1982). The goal of the game is to capture territory, board regions called supply centers: once you capture enough of these supply centers, you win the game. The charm and challenge of Diplomacy messages is that players are free to talk about anything, either strategy-related or not. Most messages in Diplomacy are about (1) past, current, and future turns, (2) alliance negotiations, and (3) acquiring and sharing third-party information such as, “Did Turkey talk to you?”, “I can’t believe you attacked Russia in SEV.” A few messages are (4) non-Diplomacy conversation, e.g. ‘How are you today?” Communication in Diplomacy is attractive to academic researchers because it can be linguistically and cognitively complex, but is grounded in a constrained world with well-defined states and dynamics.1 The Diplomacy AI Development Environment (Rose and Norman, 2002, DAIDE) is a structured syntax for AI agents to play Diplomacy: create alliances, suggest moves, etc. Several agents have used this stripped-down communica1Diplomacy without communication, sometimes called “gunboat” has been studied with rule-based, RL, and other approaches (as we detail in more depth in Appendix A). We limit gunboat discussion here as our focus is on the communicative aspect of the game. tive environment: Albert (van Hal, 2009), SillyNegoBot (Polberg et al., 2011), DipBlue (Ferreira et al., 2015), inter alia. 
Unlike these agents, Cicero (Bakhtin et al., 2022) uses a large language model to enable free-form English communication, enabling "normal" play with humans. Cicero excelled in an online Diplomacy league, scoring over twice the average of human participants and ranking in the top 10% among those who played multiple games. In the original work, it is unclear if Cicero's success is due to its use of natural language or its strategic model. In our Human–Cicero studies, we ask that participants annotate messages they perceive as deceptive, allowing us to more carefully study the communicative aspects of the game (more details are in Section 5).

2.1 Diplomacy without Communication is not Diplomacy

Having discussed the basics of Diplomacy, we now turn to what makes the game unique. Because the game is relatively balanced between seven players at the start, players need to form alliances if they hope to gain an advantage. However, these alliances should be mutually beneficial; from a player's perspective, they need to advocate for cooperation that benefits themselves. This requires effective persuasion (Cialdini, 2000): making appeals to scarcity, reciprocity (Kramár et al., 2022), unity, or shared norms. This is a communicative task which involves social and emotional skill: picking the right moves and convincing other players to help them. However, the ultimate goal of Diplomacy is for individual players to win the game. This means that alliances will fall apart, leading to deception (Peskov et al., 2020) as part of a betrayal (Niculae et al., 2015). Because a player might benefit from a victim thinking that they are working together, a betrayer often sets up the tactical conditions for a betrayal while obfuscating their goals through cleverly composed deceptive messages (even if not outright lies). Because deception and betrayal are communicative acts, both necessary for mastering and enjoying Diplomacy and grounded in the state of the game, the next sections develop tools to detect when they happen. This will allow us to measure whether AI agents like Cicero have mastered both tactics and communication.

Figure 1 (diagram of the England–Germany exchange, with messages such as "You can steal STP from Russia if you're in SWE next turn. I will support you there." and "Sure, I'm moving to SWE.", alongside initial orders, extracted moves, and final orders): Our goal is to detect when players use persuasion and deception and compare human players to Cicero. First, we retrieve initial orders (left), then extract moves from natural language communication (middle) through AMR (Section 3), and later detect deception and persuasion (Section 4) conditioned on initial intents and final orders (right). We show two possibilities: (top) Germany breaks its commitment to England by moving to Norway instead of Sweden, and (bottom) England successfully persuades Germany if Germany moves its unit to Sweden as England suggests and this move is not in Germany's initial orders.

3 Grounding Messages Into Intentions

Consider this in-game statement made by England to Germany about a specific move-set (glosses added to locations): You can steal STP [St.
Petersburg] from Russia if you’re in SWE [Sweden] next turn. I will support you there. We want to be able to tell if the speaker is lying (e.g., they’re going to do something else instead of what they claim they’re going to do) and if the speaker has convinced the recipient to alter their actions. This is necessary to measure how effectively Cicero communicates in the game. While we know the intended actions of players when they submit their moves, we need to see how those moves match up to their communications in the discussion period before they submit moves (Figure 1). We use AMR to build a machinereadable representation of the intent of actions in their communications. We are not starting from scratch: DAIDE (Section 2) provides a set of predicates (ally, move, etc.) critical to Diplomacy communications. We thus focus on annotating these predicates that encode actions, allowing us to understand the communicative intent of messages, where speakers could say they will do something and follow through, or say they will do something and not follow through. Because not all information needed for annotation is in the raw message text, we further show human annotators who wrote the text (e.g., France, Germany), seasons (e.g., Spring 1901), and the current game state. This information is necessary to annotate “You can steal STP from Russia if you are in SWE this turn. I will support you there” in the earlier example so that the annotators can assume what unit would support and what unit would move into Sweden. In this case, England’s fleet in Norway supports a German fleet in Skagerrak to move into Sweden. 3.1 Annotation Like any specialized domain, Diplomacy has its unique vocabulary. Taking the above statement as an example, we extend the AMR vocabulary to include not only abbreviations, such as “SWE” for Sweden, but also verbs like “threaten” and “demilitarize” (to set up a demilitarized zone), as well as to describe actions like gaining, holding, or losing provinces, especially supply centers (“SC”), which are equivalent to points and integral to winning the game. In contrast to standard AMR annotation, where every sentence is fully annotated, Diplomacy AMR annotation sometimes involves only partial or no annotation for certain utterances, depending on their relevance to gameplay strategies like forming alliances or making moves, exemplified by AMR concepts such as ally-01, move-01, and attack-01 (the full extended vocabulary introduction in Appendix B.1). In a preliminary annotation phase, we have Diplomacy experts annotate sentences from Peskov 12425 et al. (2020) to train human annotators and refine the Diplomacy Appendix of the AMR Annotation Dictionary. We annotate 8,878 utterances (ranging from a word to several sentences). 4,412 of those utterances are annotated as empty AMRs (e.g. for “Lemme think about your idea”) indicating no in-game move intent. 598 of the annotated AMRs contain full information extracted from messages of Diplomacy games. The remaining 3,868 AMRs contain 3,306 utterances with underspecified information such as units with missing type, location (Figure A1), and nationality (Figure A2), as well as 562 agreements with a missing object. Many utterances contain underspecified information, as Diplomacy players often communicate with messages that lack specific details (which are implicit and can be inferred from the game state). The annotated AMR corpus is further used for training our English-to-AMR parser to extract communicative intent information from utterances. 
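The move-level predicates that the annotation targets (DAIDE-style actions such as moves and supports, grounded in specific units and provinces) can be thought of as small structured records that the AMR layer ultimately maps onto. The sketch below is illustrative only: the class and field names are ours, not DAIDE's actual syntax or the authors' code, and it simply encodes the England–Germany example above.

from dataclasses import dataclass

# Hypothetical, simplified stand-ins for DAIDE-style action predicates.
# Field values follow standard Diplomacy order shorthand (F = fleet, A = army).

@dataclass(frozen=True)
class Move:
    unit: str         # e.g. "F SKA"  (German fleet in Skagerrak)
    destination: str  # e.g. "SWE"    (Sweden)

@dataclass(frozen=True)
class Support:
    unit: str         # e.g. "F NWY"  (English fleet in Norway)
    supported: Move   # the move being supported

# England's "I will support you there" grounds to:
german_move = Move(unit="F SKA", destination="SWE")
english_support = Support(unit="F NWY", supported=german_move)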
3.2 Training a Parser to Detect Intentions We use a sequence-to-sequence model (Jascob, 2023) fine-tuned on the AMR 3.0 dataset (Knight et al., 2020) as the baseline model to detect communicative intent in new conversations. Following other AMR work, we report parsing accuracy via the widely-used SMATCH score (Cai and Knight, 2013). We divide the annotated corpus into 5,054 training, 1,415 validation and 2,354 test sentences, where each sentence is in the train / validation / test folds, split by game (Peskov et al., 2020). We improve the model, starting from a baseline version without fine-tuning with SMATCH of 22.8. Our domain-tuned model using Diplomacy-AMR improves SMATCH by 39.1, to 61.9. Adding data augmentation into the model (e.g., knowing the sender of a message is England and the recipient is Germany) improves SMATCH to 64.6. Adding separate encodings for this information further improves SMATCH by 0.8 (65.4). Additionally, we apply data processing to replace (1) pronouns with country names and (2) provinces in abbreviations with full names, which increases SMATCH to 66.6 (More parser details in Appendix B.2). This parser enables us to evaluate the role that communication has in Cicero’s capabilities. 3.3 Does Cicero need to Talk to Itself to Win? The first question is whether communication matters in games with other Cicero agents (deferring the question of competition with humans to later sections). We have Cicero variants with different levels of communication abilities—ranging from “gunboat” without any messages (Appendix A) to full Natural Language capabilities—play each other and evaluate the results. For the seven Cicero variants in each game, we randomly select three to have communicative ability; the remaining four play communication-less “gunboat.” The selected three communicative powers have the same communication level. We define a set of communication levels, from more communicative to less communicative: • NATURAL LANGUAGE: the Cicero agent of Bakhtin et al. (2022) with full natural language • AMR: only messages about game actions (i.e. those that are parsed by AMR) go through, allowing the agents to coordinate game actions (Appendix B.1) • RANDOM: a random message from a corpus of previous Diplomacy games is sent,2 mimicking form without content. Cicero plays 180 games with itself; 60 games for each communication level. In each game, we stop after 14 movement turns with 10 minutes of communication for each turn. We randomly select which power the agents are assigned to, so power distribution is balanced. We measure performance by the number of supply centers (and thus how well the agent played the game, Appendix B.3). Consistent with our hypothesis that performance is driven by tactics, the gains Cicero gets from communication is substantially smaller than the gains from playing a stronger power (Figure 2): Playing as France (FRA) yields an expected 2.8 additional supply centers (2.0–3.6 95% interval) compared to the median power Russia (RUS). In contrast, the best language condition AMR only yielded an expected 0.2 additional supply centers (-0.5–0.9 95% interval). In other words, the effect of choosing the best power over the median power is 14 times larger than the best communication strategy. This is consistent with prior findings (Sharp, 1978) that France is the easiest power to play and our other findings that Cicero’s communicative ability plays no clear role in its win rate. 
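For readers who want to reproduce the extraction step, the parser of Section 3.2 can be approximated with the amrlib library that it builds on (Jascob, 2023). The sketch below is ours, not the authors' released code: the model directory is a placeholder for a Diplomacy-fine-tuned checkpoint, and prepending sender/recipient context only loosely mirrors the data-augmentation idea, not the exact input format used in the paper.

# pip install amrlib   (a sentence-to-graph model must also be downloaded or fine-tuned)
import amrlib

# Placeholder path: a sequence-to-sequence parser fine-tuned on the Diplomacy-AMR corpus.
stog = amrlib.load_stog_model(model_dir="model_parse_diplomacy_amr")

# Prepending speaker/recipient context loosely mirrors the paper's data augmentation.
messages = [
    "sender: England recipient: Germany | You can steal STP from Russia "
    "if you're in SWE next turn. I will support you there.",
]

graphs = stog.parse_sents(messages)  # one Penman-serialized AMR string per message
for g in graphs:
    print(g)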
2 We match the assigned power of the sender and receiver and the year, which makes the message slightly more convincing (but unlikely to be consistent with the game state).

Figure 2: Power assignment is strongly predictive of Cicero performance as measured by supply center gains. Coefficients (with 95% confidence intervals) from a linear regression with RANDOM Message/Russia as a baseline show that the effect of changing language systems is trivial compared to the effect of changing powers.

To better understand what Cicero is using communication for, we build on our AMR representations to capture intent in the next section.

4 Promises Made, Promises Kept, and Finding Dirty Lies

In the example from the previous section (Figure 1), England says that it would support Germany's move into Sweden from the Skagerrak Sea, while Germany agrees with the proposal from England. What does it mean for these to be deceptive or persuasive? The geopolitical definition of deception is to manipulate adversaries' perceptions to gain strategic advantage (Daniel and Herbig, 1982), e.g., Germany alters England's belief so that England would do an action that benefits Germany (i.e., Germany has a better chance of ending up with a higher score). However, evaluating deception is challenging because it requires estimating the differences in England's beliefs before and after Germany deceives England. Therefore, we break down a broader, amorphous concept into easier-to-handle concepts and leave broader deception to future work.

The first subconcept is breaking of a commitment (Kramár et al., 2022): someone saying they will do something and not following through. In the example, if Germany commits to moving to Sweden but later attacks England in Norway, this will be detected as a broken commitment. The second subconcept is lying: human players in games with Cicero annotate their messages as either truthful or deceptive. We build on Peskov et al. (2020), who define deception to players thusly: "Typically, when [someone] lies [they] say what [they] know to be false in an attempt to deceive the listener." Again, deception is broader than lying (and there are top Diplomacy players who intentionally deceive while never outright saying anything untrue), and our definition of deception is slightly broader than Peskov et al. (2020). In our more permissive annotations, human players mark interactions that are broken commitments, lies about other players, hedging about the state of alliances, and anything else that they feel is deceptive. Despite this ontological uncertainty, for convenience, we refer to the process of humans annotating messages as "human lie annotations" for consistency with Peskov et al. (2020).

To recap: a broken commitment can be a lie and an example of deception.3 Likewise, not all lies are broken commitments, but both breaking a commitment and lying are facets of deception. While we cannot capture all deception—because it is based on internal state—it is important to capture as much as we can to measure how important it is to playing Diplomacy well. Specifically, Cicero's ability (or inability) to deceive or persuade has never been empirically measured, so we build on the AMR parser of the previous section to detect broken commitment and persuasion. As we discuss in Section 3.2, this is not perfect, but it has good coverage when players discuss their intentions. First, we parse English messages to AMR structures, for which we define actions that the speaker intends to do, e.g.
we can extract Germany's communicative intent (F SKA - SWE) when Germany agrees with England that they will move to Sweden (middle, Figure 1). We also define orders players submit before any communication as initial intents and final orders as orders that players submit when the turn ends. Using initial intents, communicative intent, and final orders, we can now define broken commitment and persuasion. For a broken commitment, we say that Germany violates a commitment with England if Germany verbally agrees to move to Sweden but actually attacks England in Norway (Deception by Germany, Figure 1). Breaking a commitment may result when an intent changes but is not communicated. For example, Germany agrees with England to move to Sweden but instead moves to Denmark to defend against France without informing England. Although this may not involve deceptive intent, we still consider it deception because it alters the listener's beliefs and affects decision-making. For instance, England might decide to support Germany based on their agreement. For persuasion, England's request is considered persuasive if Germany moves to Sweden, as England suggests, instead of Germany's original plan to move to the North Sea (Persuasion by England, Figure 1). We describe each of these more formally in this section.

3 Technically, not all broken commitments are lies: it could be an honest mistake. It's also possible that a player says that they "typed in their order wrong, sorry!" which is a lie to cover up (Daniel and Herbig, 1982) a broken commitment as part of a broader deception strategy.

4.1 Broken commitment

We define broken commitment in Diplomacy when a player i commits to doing an action a^{i \to j}_{msg} and does not do it. In other words, given a set of final orders A^{i}_{final} from player i, if a^{i \to j}_{msg} \notin A^{i}_{final}, then this is a broken commitment, i.e.,

BC(a^{i \to j}_{msg}, A^{i}_{final}) =
  \begin{cases}
    1, & \text{if } a^{i \to j}_{msg} \notin A^{i}_{final} \\
    0, & \text{otherwise}
  \end{cases}
(1)

Note that a player i agreeing to player j's proposal to do action a^{i \to j}_{msg} is equivalent to directly committing to doing that action.

4.2 Persuasion

Broken commitment is in some ways easier to detect than persuasion, as we are only comparing a spoken intent to a final action. Persuasion is more difficult because we must discover initial intents, then compare them to communication and to final moves. Because we want to be able to measure persuasion for both humans and for Cicero, we need comparable representations of initial intents for both. Thankfully, Cicero's architecture uses a conditional language model (Bakhtin et al., 2022, Equation S2, section D.2) that generates its natural language messages given a set of moves (e.g., France internally decides it will do F MAO - POR, A BUR - MAR, and A MAR - PIE) and then its messages reflect those intents. We directly use this set of intents from Cicero as initial intents in the persuasion detection.
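Once message intents, initial intents, and final orders are represented as comparable order strings, Equation 1 above and the persuasion test formalized as Equation 2 below reduce to set-membership checks. A minimal sketch of that logic follows; the function and variable names are ours, and orders are assumed to be normalized strings such as "F SKA - SWE".

def broken_commitment(a_msg: str, A_final_i: set[str]) -> int:
    """Equation 1: player i committed to a_msg in a message to j,
    but a_msg is absent from i's final orders."""
    return int(a_msg not in A_final_i)

def persuaded(a_msg: str, A_intent_j: set[str], A_final_j: set[str]) -> int:
    """Equation 2: j carries out the action suggested by i even though
    it was not among j's initial intents."""
    return int(a_msg in A_final_j and a_msg not in A_intent_j)

# Scenario from Figure 1: England proposes that Germany move F SKA - SWE.
suggested = "F SKA - SWE"
germany_intent = {"F SKA - NTH"}        # Germany's plan before talking (North Sea)
germany_final_top = {"F SKA - NWY"}     # scenario 1: Germany moves to Norway instead
germany_final_bottom = {"F SKA - SWE"}  # scenario 2: Germany follows the suggestion

print(broken_commitment(suggested, germany_final_top))              # 1: commitment broken
print(persuaded(suggested, germany_intent, germany_final_bottom))   # 1: England persuaded Germany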
For humans, we explicitly ask all players to provide their planned moves (i.e., the same information that Cicero uses in its internal representation) before the negotiation turn begins (Section 5). In other words, we ask humans to directly input their intent, unlike Cicero, where we log its computational intent.

Table 1: Overall statistics of the Diplomacy dataset that we collect across 24 Human–Cicero games, including (1) the number of human players and the number of times Cicero plays, (2) total messages sent by humans and Cicero, (3) lie annotations, where humans send lies and perceive lies, and (4) total initial intents from Cicero and humans.

                      Cicero   Human   Total
  Players                 99      69     168
  Messages             20270    7395   27665
    annotated as lie       –     318     318
    perceived as lie       –    1167    1167
  Intents               2632    1328    3960

Persuasion happens when player i talks to player j, suggests an action a^{i \to j}_{msg}, and then player j makes a set of final orders A^{j}_{final} that is different from their initial intents A^{j}_{intent}. In other words, player j is persuaded by player i if they commit an action suggested by player i, a^{i \to j}_{msg} \in A^{j}_{final}, that was not player j's initial intent, a^{i \to j}_{msg} \notin A^{j}_{intent}. We define persuasion

Per(A^{j}_{intent}, a^{i \to j}_{msg}, A^{j}_{final}) =
  \begin{cases}
    1, & \text{if } a^{i \to j}_{msg} \in A^{j}_{final} \text{ and } a^{i \to j}_{msg} \notin A^{j}_{intent} \\
    0, & \text{otherwise}
  \end{cases}
(2)

5 Comparing Cicero to Humans

Cicero has strong strategic abilities and is relatively cooperative towards other players (Section 3.3), but it is unclear whether Cicero can achieve human-level gameplay in both tactics and communication. Having defined the aspects of communication that we argue are important for mastering Diplomacy, we want to investigate communication and cooperation between Cicero and humans. Specifically, we want to answer:
1. Can Cicero persuade humans?
2. How deceptive is Cicero compared to humans?
3. Can Cicero pass as a human?

We adapt the game engine created by Paquette et al. (2019) and introduce additional measures to the interface to help us answer these questions. To measure if human players are persuaded, we record their moves before communication starts (A^{i}_{intent} in Equation 2). Following Peskov et al. (2020), humans annotate every message that they receive or send: they annotate each outgoing message for whether it is a lie (truth/lie/neutral options), and they annotate each incoming message for whether they perceive it as a lie (truth/lie options). While Bakhtin et al. (2022) asked ex post facto if any opponents were a computer, we inform players before play that there is a computer and we ask human players their guess of the humanity of each opposing power. There are two to four human players per game, totaling 69 over all 24 games,4 and Cicero fills any remaining slots. Games typically finish after fourteen movement turns, where each movement turn is limited to a brisk ten minutes. The game setup differs from Meta's Cicero study: players in this study know a priori that they are playing a bot. In total, we collect 27,665 messages from communication between humans and Cicero (Table 1).

Table 2: Cicero strategically plays Diplomacy better than humans: humans end with fewer supply centers than Cicero when playing the same power assignments. We calculate the number of supply centers at the end of the game, averaged over human players and over Cicero.

           AUS   ENG   FRA   GER   ITA   RUS   TUR
  Human    1.0   2.4   6.7   4.7   3.9   3.3   1.1
  Cicero   7.9   3.8   7.7   6.3   4.1   5.5   6.9

Cicero nearly always wins.
Of twenty-four games, Cicero won twenty (84%), which strongly suggests that Cicero has super-human strategy. On average, Cicero has more supply centers than human players by the end of the game (Table 2). Humans are about as good as Cicero when playing powers that require careful coordination of actions, such as Italy, which needs to manage both fleets and armies. However, when playing powers that require less coordination, such as Austria with its limited coastline, the gap in supply center counts between human players and Cicero is larger (see breakdown by power in Appendix Figure A5); England is the only power where Cicero's average supply center count does not increase.

Human players can reliably (but not perfectly) identify the bot. We calculate the average F-score of identification by turn (Figure 3). By the end of the first movement turn, human players have an average F-score of 0.58, which keeps increasing until the end of the game. At game end, the average F-score is 0.81. Even for players in their first game against Cicero, the average F-score reaches 0.77. Players who previously played against Cicero at least once are better at identifying it. This suggests that Cicero can no longer pass as human once humans are aware of the possible existence of such agents.

4 We recruit players from Diplomacy forums and we pay at least $70 per game, which lasts approximately three hours. We do not collect demographic information.

Figure 3 (x-axis: game turn; y-axis: F1 score): Returning players (those who previously played against Cicero at least once) are better at correctly identifying other players as Cicero compared to first-time players. F-scores are presented for first-timers, returning players, and the average for all players, with smoothing via local regression (Cleveland, 1979).

5.1 Lies annotation

This section analyzes players' deliberate lies in sent messages and perceived lies in received messages. Because Cicero sends more messages than humans, we normalize perceived lies by the number of messages that humans receive from Cicero and humans (6,960 and 2,276), while we normalize results of deliberate lies by the number of total messages that humans send.

Humans feel that Cicero lies more often. Humans perceive 14.4% of the 6,960 messages they receive from Cicero as lies (which is 1,005 messages, Figure A3). In contrast, they perceive only 7.1% of the messages from other humans as lies (which is 162 out of 2,276 messages). In the survey (detailed in Appendix D), players also think humans communicate more transparently than Cicero. However, humans are not good at detecting lies. Within the 2,276 Human–Human messages, humans correctly identify only five lies (0.2%), suggesting a small overlap between actual lies and perceived lies.

Figure 4: Though Cicero was perceived to lie more, we detect more broken commitments from humans. Each bar is the broken commitment rate (Equation 1) labeled by sender and receiver: Cicero breaks commitment with Cicero, Cicero breaks commitment with Human, Human breaks commitment with Cicero, and Human breaks commitment with Human. Error bars represent ± one standard deviation over twenty-four games.

Humans return the favor by saying they lie to Cicero more often.
Of the 7,395 messages that humans sent, 273 are purposeful lies to Cicero (3.7%), while only forty-five are lies to other human players (0.6%). This reflects that humans strategically lie more often to Cicero while believing that Cicero does not hold grudges.

5.2 Detection

After validating our automatic metrics, we compare human and computer deception and persuasion.

Our broken commitment and persuasion detection is relatively effective. To ensure that our detection is good enough, we sample around 4,800 messages for an accuracy study (Table A1). Broken commitment detection has a precision of 0.51 and a recall of 0.71. Our precision is lower than our expectation due to errors in parsing complex English into AMR and a definition that only detects commitments at the move level (Appendix C). The broken commitment detector can only fire when a move in a message, a^{i \to j}_{msg}, and the final orders, A^{i}_{final}, are not aligned. There are examples it cannot detect, e.g., an agreement to an alliance (Table 3) or a long conversation before committing a deception (Table A9). Accuracy for persuasion is better; precision rises to 0.81, and recall to 0.72.

Table 3: The broken commitment detector (BC(·)) has a limitation: it cannot capture deception in an alliance agreement, as when Italy (human) deceives Turkey (Cicero).
  Sender   Message
  Turkey   Hey Italy! I think the I/T is the strongest alliance in the game, would you be interested in working together
  Italy    Of course! As long as you don't build too many fleets, I'm open to working with you against austria!

Broken commitments are inconsistent with the perceived lie annotations. Humans break commitments more frequently than Cicero (Figure 4): humans break commitments with Cicero 1.2% of the time (63 out of 5,151 Human–Cicero messages) and do so with other human players 1.5% of the time (35 out of 2,276 Human–Human messages). On the other hand, Cicero breaks commitments at a lower, consistent rate, deceiving humans 0.76% of the time and Cicero 0.57% of the time (53 out of 6,960 messages and 77 out of 13,319 messages, respectively).

Humans are more persuasive. For persuasion to happen, we need first an attempt, initiated by a sender, and then success when the receiver adopts the suggestion. Both humans and Cicero try to persuade at the same rate on a per-message basis5 (around 8% of the time, per Figure 5). The success rate of human persuasion is 21.1% at persuading other humans and 8.6% at persuading Cicero. Cicero is less persuasive; its success rate is only 10.9% in persuading humans and 7.0% in persuading other bots.

5 Although, because Cicero communicates more overall, humans attempt more times per game.

Figure 5: Humans outpace Cicero in persuasive effectiveness. Humans have a higher persuasion success rate, which we measure by comparing the number of successful persuasions (yellow, left) to the total number of persuasion attempts (red, right). We analyze success rates across four groups: Cicero persuades Cicero, Cicero persuades Human, Human persuades Cicero, and Human persuades Human. Error bars represent the ± one standard deviation range from the aggregate of interactions in 24 games.

In summary, humans are more deceptive and more persuasive than Cicero. Detection is possible, but defining a sequence of conversations as persuasion or deception is still difficult. Our reported numbers are low because both humans and Cicero engage in extensive back-and-forth discussions before making moves that can be definitively classified as persuasion or deception.

6 Related Work

Large language models are becoming ubiquitous in many tasks: fact-checking (Lee et al., 2020, 2021) and text generation (Devlin et al., 2019; Brown et al., 2020; Touvron et al., 2023), including coding (Roziere et al., 2023). All of these tasks require users to trust models' outputs. However, models are not always reliable; they could produce hallucinations or conflict with established facts (Ji et al., 2023; Zhang et al., 2023; Si et al., 2024; Yao et al., 2023). To mitigate this, their outputs often need to be verified against datasets (Thorne et al., 2018; Wadden et al., 2020; Schuster et al., 2021; Guo et al., 2022). Studies have used adversarial examples to expose weaknesses and to raise awareness (Eisenschlos et al., 2021; Schulhoff et al., 2023; Liu et al., 2023; Lucas et al., 2023). To address the issue of unreliability, controllable LMs have been proposed, either by adding steps to inject facts for better reasoning (Adolphs et al., 2022) or by prompting techniques, such as chain-of-thought prompting, to enhance reasoning abilities (Wei et al., 2022; Wu et al., 2022). Moreover, some studies focus on AI-generated misinformation (Zhou et al., 2023) or probe models to understand their internal states when an LLM utters truthful or false information (Azaria and Mitchell, 2023; Li et al., 2024).

Deception and persuasion are studied within social contexts. Huang and Wang (2023)'s meta-analysis concludes that AI can match humans in persuasion, and Deck (2023) attributes some of the success to the ability to generate "bullshit", which is part of applications in marketing and public relations (Hallahan et al., 2007). Part of what makes games like Diplomacy appealing as an object of study is the ongoing race between humans and computers in games (Kim et al., 2018); initial work on the language of Diplomacy (Niculae et al., 2015) unlocked follow-on work both in Diplomacy's agreements (Kramár et al., 2022) and in other games such as "The Resistance: Avalon" (Light et al., 2023; Xu et al., 2023; Stepputtis et al., 2023; Lan et al., 2023) and "Mafia" (Ibraheem et al., 2022).

7 Conclusion and Future Work

Our research confirms that Cicero can win most games of Diplomacy, but has not mastered the nuances of communication and persuasion. Truly mastering the game requires systems that (a) can maintain consistency between their communication and actions, (b) can communicate at a variety of levels, including tactics, strategy, and alliances, and (c) can use communication as a tool of persuasion, deception, and negotiation. Diplomacy remains an attractive testbed for communication and strategic research. It offers the ability to build more comprehensive systems that understand relationship dynamics, can engage in realistic but hypothetical conversations, and can be robust to the deceptions of others. Because these are places where humans still outpace AI, it also offers synergies for developing human–computer collaboration. And while these tasks are important within the silly game of Diplomacy, they can help solve long-standing AI problems: helping users deal with LLM-generated deception, collaborating with users on grounded planning, and understanding human norms of reciprocity, cooperation, and communication. This will help AI not just be fun for negotiation in board games but safer and more trustworthy when we negotiate everyday problems.
12431 Limitations To gain a clearer understanding of cooperation and deception between human and Cicero, we need to experiment with different game setting and turn duration. For example, inexperienced players might be overwhelmed by the amount of communications in early movement turns; prolonging the turns to 15 minutes might improve communication quality. Furthermore, this study collects only 24 blitz games of human playing against Cicero. The power distribution of participants is imbalanced: the most frequent power—France—has 14 appearances, whereas the most underrepresented power, England, has only five. Class imbalance (?) could potentially impact the feature weights in our regression model for player performance. Since the AMR parser does not always predict correct intentions, this has an effect on our precision and recall of deception and persuasion detection protocol. Our detection cannot cover such long conversations that humans have; we limit detection to only checking back to the previous message, and this makes our detection miss cooperation, deception, and persuasion when humans and Cicero discuss the plan. Ethical Considerations We recruited players from Diplomacy forums, including Diplomacy Discord and reddit. We paid them over $70 per three-hour game and did not collect demographic information. Procedures in our study involving human subjects received IRB approval and are compliant with ACL Code of Ethics. Human participants are aware of the purpose of the study and are free to withdraw at any time. There are no potential risks or discomforts from participating. We obtained consent from all participants. Researching how artificial intelligence (AI) can deceive and persuade helps us understand its capabilities. This investigation reveals that AI can execute complex tasks effectively. However, it is important to note that these abilities do not significantly risk society. Acknowledgements We thank Meta for granting access to over 40,000 games played on the online platform webdiplomacy.net and for open sourcing Cicero. This commitment to open science allowed this independent reproduction of Cicero’s juggernaut abilities but also let us have some fun. We especially thank to Mike Lewis for offering valuable insights into Cicero’s communication. Our thanks also go to Tess Wood for training AMR annotators, Sarah Mosher for their EnglishAMR annotations, and Isabella Feng for her exploration of LLM-based AMR parsing. We thank Kartik Shenoy, Alex Hedges, Sander Schulhoff, Richard Zhu, Konstantine Kahadze, and Niruth Savin Bogahawatta for setting up DAIDE baselines. We also thank the small community of researchers looking at communication and deception in Diplomacy for their feedback, commentary, and inspiration: Michael Czajkowski for discussing the nuances of detecting persuasion; Stephen DownesMartin for teaching us that deception is far more than lies; Karthik Narasimhan and Runzhe Yang for their insights into lie detection and stance; and Larry Birnbaum and Matt Speck for discussions on mapping DAIDE and English. And thanks to Justin Drake, Niall Gaffney, and the members of TACC for setting up environments for computer–computer games and making sure that we had GPUs ready when players were ready to play. Finally, sincere thanks to the member of the Diplomacy community who took the time to play against Cicero in this unconventional setting. This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Agreement Nos. 
HR00112290056 and HR00112490374. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsors. References Leonard Adolphs, Kurt Shuster, Jack Urbanek, Arthur Szlam, and Jason Weston. 2022. Reason first, then respond: Modular Generation for Knowledge-infused Dialogue. Findings of the Association for Computational Linguistics: EMNLP 2022. Thomas Anthony, Tom Eccles, Andrea Tacchetti, János Kramár, Ian Gemp, Thomas Hudson, Nicolas Porcel, Marc Lanctot, Julien Pérolat, Richard Everett, et al. 2020. Learning to Play No-Press Diplomacy with Best Response Policy Iteration. Proceedings of Advances in Neural Information Processing Systems. Amos Azaria and Tom Mitchell. 2023. The Internal State of an LLM Knows When It’s Lying. Findings of the Association for Computational Linguistics: EMNLP 2023. Anton Bakhtin, Noam Brown, Emily Dinan, Gabriele Farina, Colin Flaherty, Daniel Fried, Andrew Goff, 12432 Jonathan Gray, Hengyuan Hu, Athul Paul Jacob, Mojtaba Komeili, Karthik Konath, Minae Kwon, Adam Lerer, Mike Lewis, Alexander H. Miller, Sasha Mitts, Adithya Renduchintala, Stephen Roller, Dirk Rowe, Weiyan Shi, Joe Spisak, Alexander Wei, David Wu, Hugh Zhang, and Markus Zijlstra. 2022. Humanlevel play in the game of Diplomacy by combining language models with strategic reasoning. Science, 378(6624):1067–1074. Anton Bakhtin, David Wu, Adam Lerer, and Noam Brown. 2021. No-Press Diplomacy from Scratch. Proceedings of Advances in Neural Information Processing Systems. Anton Bakhtin, David J Wu, Adam Lerer, Jonathan Gray, Athul Paul Jacob, Gabriele Farina, Alexander H Miller, and Noam Brown. 2023. Mastering the Game of No-Press Diplomacy via HumanRegularized Reinforcement Learning and Planning. In The Eleventh International Conference on Learning Representations. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for Sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse. Association for Computational Linguistics. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language Models are Few-Shot Learners. Proceedings of Advances in Neural Information Processing Systems. Shu Cai and Kevin Knight. 2013. Smatch: an Evaluation Metric for Semantic Feature Structures. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Roderick M. Chisholm and Thomas D. Feehan. 1977. The Intent to Deceive. The Journal of Philosophy. Robert B. Cialdini. 2000. Influence: Science and Practice (4th Edition). Allyn & Bacon. William S. Cleveland. 1979. Robust Locally Weighted Regression and Smoothing Scatterplots. Journal of the American Statistical Association, 74(368):829– 836. Donald C. Daniel and Katherine L. Herbig. 1982. Propositions on military deception. Journal of Strategic Studies, 5(1):155–177. Oliver Deck. 2023. Bullshit, pragmatic deception, and natural language processing. Dialogue & Discourse, 14(1):56–87. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Association for Computational Linguistics. Julian Eisenschlos, Bhuwan Dhingra, Jannis Bulian, Benjamin Börschinger, and Jordan Boyd-Graber. 2021. Fool Me Twice: Entailment from Wikipedia Gamification. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics. André Ferreira, Henrique Lopes Cardoso, and Luís Paulo Reis. 2015. Strategic Negotiation and Trust in Diplomacy–the DipBlue Approach. In Transactions on Computational Collective Intelligence XX, pages 179–200. Springer International Publishing. Jonathan Gray, Adam Lerer, Anton Bakhtin, and Noam Brown. 2021. Human-Level Performance in NoPress Diplomacy via Equilibrium Search. Proceedings of the International Conference on Learning Representations. Zhijiang Guo, Michael Schlichtkrull, and Andreas Vlachos. 2022. A Survey on Automated Fact-Checking. Transactions of the Association for Computational Linguistics, 10:178–206. Kirk Hallahan, Derina Holtzhausen, Betteke Van Ruler, Dejan Verˇciˇc, and Krishnamurthy Sriramesh. 2007. Defining Strategic Communication. International Journal of Strategic Communication, 1(1):3–35. Guanxiong Huang and Sai Wang. 2023. Is artificial intelligence more persuasive than humans? a metaanalysis. Journal of Communication, 73(6):552–562. Samee Ibraheem, Gaoyue Zhou, and John DeNero. 2022. Putting the Con in Context: Identifying Deceptive Actors in the Game of Mafia. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics. Brad Jascob. 2023. amrlib: A python library that makes amr parsing, generation and visualization simple. https://github.com/bjascob/amrlib. Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of Hallucination in Natural Language Generation. ACM Computing Surveys, 55(12). Man-Je Kim, Kyung-Joong Kim, Seungjun Kim, and Anind K Dey. 2018. Performance Evaluation Gaps in a Real-Time Strategy Game Between Human and Artificial Intelligence Players. IEEE Access, 6:13575– 13586. 12433 Kevin Knight, Bianca Badarau, Laura Baranescu, Claire Bonial, Madalina Bardocz, Kira Griffitt, Ulf Hermjakob, Daniel Marcu, Martha Palmer, Tim O’Gorman, and Nathan Schneider. 2020. Abstract Meaning Representation (AMR) Annotation Release 3.0. Linguistic Data Consortium. János Kramár, Tom Eccles, Ian Gemp, Andrea Tacchetti, Kevin R McKee, Mateusz Malinowski, Thore Graepel, and Yoram Bachrach. 2022. Negotiation and honesty in artificial intelligence methods for the board game of Diplomacy. Nature Communications, 13(1):7214. Sarit Kraus and Daniel Lehmann. 1995. Designing and building a negotiating automated agent. Computational Intelligence, 11(1):132–171. Yihuai Lan, Zhiqiang Hu, Lei Wang, Yang Wang, Deheng Ye, Peilin Zhao, Ee-Peng Lim, Hui Xiong, and Hao Wang. 2023. LLM-based agent society investigation: Collaboration and confrontation in Avalon gameplay. arXiv preprint arXiv:2310.14985. Nayeon Lee, Yejin Bang, Andrea Madotto, Madian Khabsa, and Pascale Fung. 2021. Towards Few-shot Fact-Checking via Perplexity. pages 1971–1981. Nayeon Lee, Belinda Z. 
Li, Sinong Wang, Wen-tau Yih, Hao Ma, and Madian Khabsa. 2020. Language Models as Fact Checkers? In Proceedings of the Third Workshop on Fact Extraction and VERification (FEVER), pages 36–41. Association for Computational Linguistics. Kenneth Li, Oam Patel, Fernanda Viégas, Hanspeter Pfister, and Martin Wattenberg. 2024. InferenceTime Intervention: Eliciting Truthful Answers from a Language Model. Proceedings of Advances in Neural Information Processing Systems, 36. Jonathan Light, Min Cai, Sheng Shen, and Ziniu Hu. 2023. AvalonBench: Evaluating LLMs Playing the Game of Avalon. Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, and Yang Liu. 2023. Prompt Injection attack against LLM-integrated Applications. arXiv preprint arXiv:2306.05499. Jason Lucas, Adaku Uchendu, Michiharu Yamashita, Jooyoung Lee, Shaurya Rohatgi, and Dongwon Lee. 2023. Fighting Fire with Fire: The Dual Role of LLMs in Crafting and Detecting Elusive Disinformation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. 2016. Asynchronous Methods for Deep Reinforcement Learning. In Proceedings of the International Conference of Machine Learning. PMLR. Vlad Niculae, Srijan Kumar, Jordan Boyd-Graber, and Cristian Danescu-Niculescu-Mizil. 2015. Linguistic harbingers of betrayal: A case study on an online strategy game. In Association for Computational Linguistics. Philip Paquette, Yuchen Lu, Seton Steven Bocco, Max Smith, Satya Ortiz-Gagné, Jonathan K. Kummerfeld, Joelle Pineau, Satinder Singh, and Aaron C Courville. 2019. No-press diplomacy: Modeling multi-agent gameplay. In Proceedings of Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc. Denis Peskov, Benny Cheng, Ahmed Elgohary, Joe Barrow, Cristian Danescu-Niculescu-Mizil, and Jordan Boyd-Graber. 2020. It takes two to lie: One to lie, and one to listen. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. Sylwia Polberg, Marcin Paprzycki, and Maria Ganzha. 2011. Developing intelligent bots for the Diplomacy game. In 2011 Federated Conference on Computer Science and Information Systems (FedCSIS), pages 589–596. IEEE. Carlie Porterfield. 2022. Meta’s AI Gamer Beat Humans in Diplomacy, Using Strategy and Negotiation. Forbes. Lewis Pulsipher. 1982. The art of negotiation in diplomacy. https://pulsiphergames.com/ diplomacy/ArtofNegotiation.htm. Accessed: 2024-02-11. Andrew Rose and David Norman. 2002. Diplomacy artificial intelligence development environment. http://www.daide.org.uk/index.html. Accessed: 2024-01-13. Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. 2023. Code Llama: Open Foundation Models for Code. arXiv preprint arXiv:2308.12950. Sander Schulhoff, Jeremy Pinto, Anaum Khan, LouisFrançois Bouchard, Chenglei Si, Svetlina Anati, Valen Tagliabue, Anson Liu Kost, Christopher Carnahan, and Jordan Boyd-Graber. 2023. Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs through a Global Scale Prompt Hacking Competition. Tal Schuster, Adam Fisch, and Regina Barzilay. 2021. Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence. 
In North American Chapter of the Association for Computational Linguistics. Richard Sharp. 1978. The Game of Diplomacy. A. Barker. Chenglei Si, Navita Goyal, Sherry Tongshuang Wu, Chen Zhao, Shi Feng, Hal Daumé III, and Jordan Boyd-Graber. 2024. Large Language Models Help 12434 Humans Verify Truthfulness–Except When They Are Convincingly Wrong. In Conference of the North American Chapter of the Association for Computational Linguistics. Simon Stepputtis, Joseph Campbell, Yaqi Xie, Zhengyang Qi, Wenxin Sharon Zhang, Ruiyi Wang, Sanketh Rangreji, Michael Lewis, and Katia Sycara. 2023. Long-Horizon Dialogue Understanding for Role Identification in the Game of Avalon with Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2023. Association for Computational Linguistics. James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a Large-scale Dataset for Fact Extraction and VERification. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. LLaMA: Open and Efficient Foundation Language Models. arXiv preprint arXiv:2302.13971. Jason van Hal. 2009. Albert. https://sites.google. com/site/diplomacyai/albert. Accessed: 202401-13. Pranshu Verma. 2022. Meta’s new AI is skilled at a ruthless, power-seeking game. The Washington Post. David Wadden, Shanchuan Lin, Kyle Lo, Lucy Lu Wang, Madeleine van Zuylen, Arman Cohan, and Hannaneh Hajishirzi. 2020. Fact or Fiction: Verifying Scientific Claims. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. Advances in Neural Information Processing Systems, 35:24824– 24837. Tongshuang Wu, Michael Terry, and Carrie Jun Cai. 2022. AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, CHI ’22. Association for Computing Machinery. Zelai Xu, Chao Yu, Fei Fang, Yu Wang, and Yi Wu. 2023. Language Agents with Reinforcement Learning for Strategic Play in the Werewolf Game. arXiv preprint arXiv:2310.18940. Jia-Yu Yao, Kun-Peng Ning, Zhen-Hui Liu, Mu-Nan Ning, and Li Yuan. 2023. LLM Lies: Hallucinations are not bugs, but features as adversarial examples. arXiv preprint arXiv:2310.01469. Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, et al. 2023. Siren’s Song in the AI Ocean: a Survey on Hallucination in Large Language Models. arXiv preprint arXiv:2309.01219. Jiawei Zhou, Yixuan Zhang, Qianni Luo, Andrea G Parker, and Munmun De Choudhury. 2023. Synthetic Lies: Understanding AI-Generated Misinformation and Evaluating Algorithmic and Human Solutions. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, CHI ’23. Association for Computing Machinery. 12435 A Background: Gunboat Diplomacy Paquette et al. (2019) develop a Diplomacy interface and was the first to publish an agent trained by human data and trained through self-play using reinforcement learning with Advantage ActorCritic (A2C) (Mnih et al., 2016). 
DeepMind employed policy iteration in their reinforcement learning training (Anthony et al., 2020), whereas Meta utilized a combination of regret matching, equilibrium search, and deep Nash value iteration (Gray et al., 2021; Bakhtin et al., 2021). The most recent advancement is by Meta (Bakhtin et al., 2023), regularizing the agent’s policy with human policy data. This strategic enhancement culminates in the development of Cicero (Bakhtin et al., 2022). B AMR B.1 AMR Annotation Building on general AMR annotation guidelines, we established additional Diplomacy-specific AMR annotation guidelines, including what and how to annotate. Unlike general AMR annotations, where all sentences are fully annotated, in Diplomacy AMR annotations, some utterances are only partially (or even not at all) annotated, based on the degree of usefulness for Diplomacy. 3,306 of 8,878 human-annotated utterances contain partial information with underspecified units, locations, and countries. As Diplomacy players often communicate sentences that lack full details about the game state which they can infer from a visualized map. This directly shows in AMR with the missing object. We provide examples of underspecified utterances, missing unit location (Figure A1) and missing unit country (Figure A2). (m / move-01 :ARG1 (u / unit :mod (c2 / country :name (n2 / name :op1 “Austria”))) :ARG2 (p2 / province :name (n3 / name :op1 “Brest”))) Figure A1: Parsing from English to AMR can have underspecified utterances. The English text is from Austria talking to Italy, “Let’s work on our plan, I’m moving to Brest”. We show an AMR with missing unit location referencing from English text. Our Diplomacy Appendix of the AMR Annotation Dictionary lists AMR concepts (e.g. betray-01), their related English terms (e.g. (m / move-01 :ARG1 (u / unit :location (p2 / province :name (n / name :op1 “Romania”))) :ARG2 (p3 / province :name (n3 / name :op1 “Bulgaria”))) Figure A2: AMR being underspecified in unit country where it parses from English text, “just bumping Bulgaria from Romania” betray, stab, traitor, treason), annotation examples, any corresponding DAIDE code, and notes. AMR concepts with DAIDE equivalents include ally-01, build-01, move-01, and transport-01. We analyzed player messages for additional concepts of high Diplomacy communication value, and extended the Diplomacy AMR vocabulary (compared to DAIDE) by including concepts such as attack-01, betray-01, defend-01, expect-01, fear-01, have-03, lie-08, possible-01, prevent-01, tell-01, threaten-01, and warn-01, as well as roles such as :purpose and :condition. This allows annotators to easily mark sentences, e.g. “Russia is planning to take you out as soon as possible.” would use the concept attack-01. We also extended AMR guidelines to cover gaining/holding/losing provinces, especially support centers. The general AMR Editor includes a Checker that performs a battery of tests to ensure well-formed and consistent AMRs. We extended the Checker for Diplomacy AMRs, e.g. to ensure that for a build-01, the location is an argument of build-01 itself, rather than an argument of the army or fleet being built. AMR covers more Diplomacy content than DAIDE, not only due to additional concepts such as betray-01, but also because arguments are syntactically optional. Unlike DAIDE with its rigid positional argument structure, AMR can thus represent underspecified information such as units with missing type, location or nationality; or agreements with a missing object. 
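Because the annotated graphs are stored in Penman notation (as in Figures A1 and A2), move predicates can be read off mechanically with the penman library; missing pieces (unit location, nationality) would then be filled in from the game state. The following is an illustrative sketch of ours, not the authors' tooling, applied to the Figure A1 example.

# pip install penman
import penman

# Underspecified AMR from Figure A1: "Let's work on our plan, I'm moving to Brest."
amr = """
(m / move-01
   :ARG1 (u / unit
            :mod (c2 / country :name (n2 / name :op1 "Austria")))
   :ARG2 (p2 / province :name (n3 / name :op1 "Brest")))
"""

g = penman.decode(amr)
concept = {src: tgt for src, _, tgt in g.instances()}   # variable -> concept
children = {}                                           # variable -> {role: child variable}
for src, role, tgt in g.edges():
    children.setdefault(src, {})[role] = tgt
attrs = {}                                              # variable -> {role: constant}
for src, role, tgt in g.attributes():
    attrs.setdefault(src, {})[role] = str(tgt).strip('"')

def name_of(var):
    """Follow :name -> :op1 to recover a province or country name, if present."""
    name_var = children.get(var, {}).get(':name') if var else None
    return attrs.get(name_var, {}).get(':op1') if name_var else None

for var, c in concept.items():
    if c == 'move-01':
        unit_var = children.get(var, {}).get(':ARG1')
        dest_var = children.get(var, {}).get(':ARG2')
        country_var = children.get(unit_var, {}).get(':mod') if unit_var else None
        loc_var = children.get(unit_var, {}).get(':location') if unit_var else None
        print({
            'action': 'move',
            'country': name_of(country_var),   # "Austria"
            'unit_location': name_of(loc_var), # None: location underspecified in Figure A1
            'destination': name_of(dest_var),  # "Brest"
        })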
AMR can also accommodate additional arguments compared to DAIDE, for example the source and target of a proposal. Because not all information needed for annotation is available in the raw text, we offer annotators access to dialog partners (speaker, recipient), season (e.g. Spring 1901) and a map with current deployments (as available). 12436 Human-Cicero Human-Human 0 0.03 0.06 0.09 Perception of Deception Figure A3: Perception of deception rate by human annotation. Human-Cicero Human-Human 0 0.01 0.02 0.03 Deception Rate Figure A4: Deception rate by human self-annotation. B.2 AMR Parser While stylistic aspects play an indispensable role in sustaining engagement and interest among participants, factual information is more vital for informed decision-making. Detecting deception and persuasion in communications requires checking the relationship between message information and initial/final moves of a particular power. To address the nuanced challenge of distinguishing meaningful and informative content from stylistic dialogues in Diplomacy, our focus herein is on developing a sophisticated pipeline for information extraction using AMR from text. We utilize a state-of-the-art Sequence-to-Sequence model from the Huggingface transformers library, fine-tuned with the AMR 3.0 dataset, for baseline semantic extraction. This approach facilitates the processing of AMR through amrlib, a Python module tailored for such tasks. The efficacy of our AMR parsers is assessed using the SMATCH score, the gold standard for evaluating AMR accuracy. We divided the annotated Diplomacy-AMR corpus into 5054 training, 1415 validation and 2354 test sentences and used similar parameters except for increasing the number of epochs from 16 to 32. When fine-tuning our model for Diplomacy game communications, we shifted from the overly broad AMR 3.0 vocabulary to the tailored Diplomacy-AMR corpus introduced above, reducing irrelevant content and focusing on gamespecific nuances. This strategic adjustment, alongside removing the original dataset to minimize bias, significantly improved our model’s relevance and increased the SMATCH score from 22.8 to 61.9. We further enhanced accuracy through Data Augmentation, adding context to dialogues to aid the model’s understanding of pronouns and strategic details, leading to a SMATCH score improvement from 61.9 to 64.6. Incorporating specific tokens for sender and recipient identities refined this approach, yielding additional gains from 64.6 to 65.4 in parsing accuracy. By replacing (1) pronouns with country names and (2) some provinces in abbreviations with full names, we increases the SMATCH score to 66.6. B.3 Assessing the Role of Communication in Cicero vs. Cicero Games We conduct 180 computer-computer games with 60 games for each communication level (Natural Language, Random Messages and AMR Information) and collected data to build a corpus for CiceroCicero Games. This corpus comprises instances of games where we record the power and communication assignments and the final scores (e.g. ’Game1’: ’AUS 0, ENG 0, FRA 4, GER 10, ITA 5, RUS 6, TUR 9. (FRA GER TUR)’ with the three powers shown in parentheses being identified as communicative). The communication strategies are randomly assigned to powers. We regress the number of end-of-game supply centers on a dummy variable for the powers played (using Russia—the average player—as the baseline) and the communication strategy (using random messages as the baseline). We plot the coefficients with classical 95% confidence intervals. 
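For readers who want the shape of this regression, a minimal sketch with statsmodels follows; the file name and dataframe columns (power, comm, centers) are assumptions for illustration, not the paper's released analysis code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format results table: one row per (game, power) with the
# end-of-game supply-center count, the power played, and its communication strategy.
df = pd.read_csv("cicero_cicero_games.csv")  # assumed columns: game, power, comm, centers

# Regress centers on power dummies (Russia as baseline) and communication-strategy
# dummies (random messages as baseline), mirroring the setup described above.
model = smf.ols(
    "centers ~ C(power, Treatment(reference='RUS'))"
    " + C(comm, Treatment(reference='random'))",
    data=df,
).fit()

print(model.summary())       # coefficient estimates
print(model.conf_int(0.05))  # classical 95% confidence intervals
```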
The effects of power selection are substantially larger than those of the different communication strategies, none of which are significantly different from random messages at the p < .05 level.

Table A1: Total of 4,792 messages (from Human/Cicero to Human/Cicero), comparing whether expert humans see a message as a lie (rows) and whether our detection flags it as a broken commitment (columns).
Expert \ Detection | TRUE | FALSE
TRUE | 20 | 8
FALSE | 19 | 4745

Table A2: Total of 1,611 human send-out messages, comparing human lie annotation (rows) and expert hand labeling (columns).
Lie Annotation \ Expert | TRUE | FALSE
TRUE | 3 | 72
FALSE | 13 | 1523

B.4 Future experiments: AMR information Cicero
We have evidence that Cicero's wins rest on its strategic rather than its communication abilities (Section 3.3). To study this further, we want to downgrade Cicero's communication and collect more human-Cicero games to see whether Cicero wins at the same rate (previously 84% against humans). We will conduct 5–10 games using the same setup as in the Human-Cicero games (Section 5). The only difference will be Cicero's communication, which we will limit from natural language to AMR information that mostly captures move intent.

C Deception detection limitations
We discuss deception detection further here to state its errors and limitations. Since our precision for deception detection is quite low (Section 5.2), we expand on the detection's limitations and compare it to human (deliberate) lies as follows: 1. what our detection is likely to miss when humans lie, 2. what our detection mistakenly flags as deception, and 3. what humans annotate as Truth even though it is a broken commitment that our detection identifies correctly.

Humans often lie about relationships. Detecting broken commitments at the relationship level is not possible for our detection (Table 3 and Table A4). This is a limitation of our deception definition, which focuses on moves. Though it is possible to extract the relationships among players to find conflicts in the messages, we avoid doing so because relationship modeling is another topic to study in more detail. At this stage of our work, we cannot train a model that predicts relationships from game states, dialogue, and moves without first collecting human data. Therefore, we keep the relationship tracking collected from human players for a future study.

Table A3: Total of 1,868 human-received messages, comparing whether humans perceived a message as a lie (rows) and whether human experts see it as a lie (columns).
Perceived Lie Annotation \ Expert | TRUE | FALSE
TRUE | 5 | 284
FALSE | 7 | 1572

Table A4: The broken commitment detector (BC(·)) cannot detect deception in an alliance agreement when Austria (human) deceives England.
Austria: That's an interesting opening. Was the bounce in EC planned?
Austria: Do you think Germany will work with you against France?
England: Yeah it would be great if we team up

AMR limits broken commitment detection precision. Some messages are parsed incorrectly, which can make it look as if a commitment has been broken (Table A5). This causes the detection to falsely flag truthful messages as deceptive (increasing false positives and thus decreasing precision). Another limitation we observed is when one accepts a proposal with only a short answer, e.g. "Yes, I agree." or "Sure.", but does not follow through on the commitment. Our AMR parser sometimes hallucinates and extracts invalid moves, which can be mistakenly detected as breaking a commitment.
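To make the shape of this check concrete, the following is a minimal sketch of a broken-commitment comparison over moves extracted from messages; the data structure, field names, and matching rule are illustrative assumptions, not the paper's actual BC(·) implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Move:
    power: str      # e.g. "FRANCE"
    unit_loc: str   # province the unit starts in, e.g. "PAR"
    target: str     # province the unit is claimed/ordered to move to, e.g. "BUR"

def broken_commitment(promised_moves: set[Move], executed_moves: set[Move]) -> bool:
    """Return True if any move the sender committed to in a message
    was not actually executed in the final orders for that turn."""
    return any(m not in executed_moves for m in promised_moves)

# Hypothetical usage: France promises to move Paris to Picardy but goes to Burgundy instead.
promised = {Move("FRANCE", "PAR", "PIC")}   # move stated in the message (e.g. extracted via AMR)
executed = {Move("FRANCE", "PAR", "BUR")}   # move taken from the game engine
print(broken_commitment(promised, executed))  # True -> flagged as a broken commitment
```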
Human lie annotation is not always correct. It is true that we have human annotations, and they can be seen as ground truth. However, we sampled annotations from four games and compared them to expert labeling (lies in Table A2 and perceived lies in Table A3). This shows that humans are not good at predicting lies, and sometimes they are honest at the time of writing but later decide to break their word. There are examples where humans commit to an action but do not follow through, even though they initially annotated the message as truthful (Table A6 and Table A7).

Table A5: Italy agrees on the condition that the Turkey unit move out of Serbia; however, our AMR parser captures Italy's sentence as "I will move to Serbia," which is invalid and makes our detection flag deception when Italy does not move to Serbia.
Turkey: If you retreat from Serbia into Budapest, then I'm in
Italy: I will do that if Serbia gets dislodged

Table A6: France (human) annotated "Yes, we can do that." as Truth, which contradicts the final move where France moves to Burgundy. This is captured as a broken commitment by the BC(·) function.
Germany: Also, can we keep Burgundy clear?
France: Yes, we can do that. Are you moving to Helgoland?

D Survey Details
The survey consists of 5-point Likert scale questions and free-form text questions. The questions are designed to measure the human players' perception of Cicero's communication and their experience playing with Cicero. We also included questions to measure the players' experience with Diplomacy and their general impression of Cicero for qualitative analysis. Overall, players believe that humans communicate more transparently and are more strategically cooperative. Survey results are shown in Table A8.

Table A7: England (human) annotated "Yeah I am holding London" as Truth, which contradicts the final move where an army in London moves to Edinburgh. This is captured as a broken commitment by the BC(·) function.
Germany: I am going to try to move to English Channel
England: Sure
Germany: It might help you hold London
England: Yeah I am holding London

Table A8: Statements in the survey and their respective responses. A larger number on the Likert scale indicates more agreement.
Statement | 1 | 2 | 3 | 4 | 5 | Num. Responses
I am really good at Diplomacy. | 0 | 8.3 | 25 | 41.7 | 9 | 25
I am able to identify all AIs. | 9.5 | 23.8 | 38.1 | 16.7 | 11.9 | 42
I enjoy talking with the AIs. | 14.3 | 38.1 | 33.3 | 7.1 | 7.1 | 42
I was able to make plans with other players. | 7.1 | 23.8 | 35.7 | 14.3 | 19 | 42
I was able to make plans with the AIs. | 21.4 | 31 | 19 | 19 | 9.5 | 42
Human players communicated transparently. | 7.1 | 14.3 | 33.3 | 35.7 | 9.5 | 42
AI players communicated transparently. | 11.9 | 26.2 | 45.2 | 9.5 | 7.1 | 42

Figure A5: The average human player loses to Cicero. Humans lose supply centers as the game progresses unless playing as France, whereas Cicero's supply-center count rises except when playing as England. Cicero makes better strategic decisions.
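For readers reproducing this kind of figure, a rough plotting sketch follows; the dataframe layout (columns turn, power, player_type, num_centers) and the toy values are assumptions, not the paper's analysis code.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical long-format game log: one row per (turn, power) with the controller type.
df = pd.DataFrame({
    "turn":        [1, 2, 3, 1, 2, 3],
    "power":       ["FRA", "FRA", "FRA", "FRA", "FRA", "FRA"],
    "player_type": ["human", "human", "human", "cicero", "cicero", "cicero"],
    "num_centers": [5, 5, 6, 5, 6, 7],
})

# Average supply-center count per turn, separately for human- and Cicero-controlled powers.
avg = df.groupby(["player_type", "turn"])["num_centers"].mean().unstack("player_type")

ax = avg.plot(marker="o")
ax.set_xlabel("Turn")
ax.set_ylabel("Num. centers")
ax.set_title("Supply centers over time (illustrative data)")
plt.show()
```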
Table A9: A conversation between France and Germany. They agree to DMZ (demilitarize) their borders, e.g., Ruhr, and to cooperate on moves. However, Germany is deceptive and would rather move into Ruhr in this turn.
Germany: This worked out great!
Germany: Can we please talk about our moves a bit? It's very hard to coordinate with silence.
France: Absolutely! I'm all ears! What do you want to do now!
France: Any designs on Norway? I think you could get it this turn. I'm gonna go against England, as you see. Let's work together on England.
France: Come on mate, let's do better this turn and coordinate! What are your plans? I'm with you.
France: Alright, let's use the additional time! What do we do?
Germany: I'm moving Sweden to Norway.
Germany: Can we also start DMZing our border?
France: Nice, with support from Hel that should work out.
France: I'm not gonna move out of Belgium but I'll certainly not move any further either. I'm in against England. Can't fight both of you that's for sure.
Germany: You should probably move Marseilles -> Spain.
France: Thank you! England might bring a fleet down? Good thought. Thank you!
2024
672
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12442–12462 August 11-16, 2024 ©2024 Association for Computational Linguistics
VOICECRAFT: Zero-Shot Speech Editing and Text-to-Speech in the Wild
Puyuan Peng1 Po-Yao Huang2 Shang-Wen Li2 Abdelrahman Mohamed3 David Harwath1
1The University of Texas at Austin 2FAIR, Meta 3Rembrand
[email protected]
Abstract
We introduce VOICECRAFT, a token infilling neural codec language model that achieves state-of-the-art performance on both speech editing and zero-shot text-to-speech (TTS) on audiobooks, internet videos, and podcasts1. VOICECRAFT employs a Transformer decoder architecture and introduces a token rearrangement procedure that combines causal masking and delayed stacking to enable generation within an existing sequence. On speech editing tasks, VOICECRAFT produces edited speech that is nearly indistinguishable from unedited recordings in terms of naturalness, as evaluated by humans; for zero-shot TTS, our model outperforms prior SotA models including VALL-E and the popular commercial model XTTS v2. Crucially, the models are evaluated on challenging and realistic datasets that consist of diverse accents, speaking styles, recording conditions, and background noise and music, and our model performs consistently well compared to other models and real recordings. In particular, for speech editing evaluation, we introduce a high quality, challenging, and realistic dataset named REALEDIT. We encourage readers to listen to the demos at https://jasonppy.github.io/VoiceCraft_web.
1Data, code, and model weights are available at https://github.com/jasonppy/VoiceCraft.
1 Introduction
We introduce VOICECRAFT, a Transformer-based neural codec language model (NCLM) that performs infilling generation of neural speech codec tokens autoregressively, conditioned on bidirectional context. VOICECRAFT achieves state-of-the-art (SotA) performance on both speech editing (shown in Fig. 1) and zero-shot TTS. Our method is based on a two-step token rearrangement procedure that consists of a causal masking step and a delayed stacking step. The causal masking technique is inspired by the success of the causal masked multimodal model in joint text-image modeling (Aghajanyan et al., 2022), and our proposed technique works for speech codec sequences, which enables autoregressive generation with bidirectional context. In addition, we further integrate causal masking with delayed stacking (Kharitonov et al., 2021a; Copet et al., 2023) as our proposed token rearrangement procedure, to ensure efficient multi-codebook modeling.
Figure 1: Speech editing with VOICECRAFT (original: "I found this um incredible model"; target/edited: "I found the amazing VoiceCraft model"). Human listeners prefer VOICECRAFT edited speech over the original real recording 48% of the time (original preferred 52%) in a side-by-side naturalness comparison (details in §5.3).
To evaluate speech editing, we manually crafted a first-of-its-kind, realistic, and challenging dataset named REALEDIT. REALEDIT consists of 310 real world speech editing examples, with waveforms sourced from audiobooks (Zen et al., 2019), YouTube videos (Chen et al., 2021a), and Spotify podcasts (Clifton et al., 2020), and duration ranging from 5 seconds to 12 seconds.
To create the target transcripts, the transcripts of the source speech are edited in such a way that the edited transcripts remain grammatically correct and are semantically coherent. The dataset is designed to cover a wide range of editing scenarios, including insertion, deletion, substitution, and multi-span editing, with the length of the edited text ranging from 1 word to 16 words. Compared to commonly used speech synthesis evaluation datasets that only contain audiobooks such as VCTK (Yamagishi et al., 2019), LJSpeech (Ito and Johnson, 2017), and LibriTTS (Zen et al., 2019), REALEDIT is 1 12442 more challenging in that the recordings have diverse content, accents, speaking styles, recording conditions, and background sounds. We believe that the realism and diversity of REALEDIT makes it a reliable indicator of the practicality of speech editing models in the real world. In the subjective human listening tests, VOICECRAFT significantly outperforms prior SotA speech editing model on REALEDIT. Importantly, the edited speech produced by VOICECRAFT is nearly indistinguishable from the original unedited recording in terms of naturalness. We found that VOICECRAFT generalizes well to zero-shot TTS without any finetuning, achieving SotA performance on a dataset comprised of audiobooks and YouTube videos, outperforming strong baselines including reproduced VALL-E (Wang et al., 2023a) and the popular commercial model XTTS v2 (COQUI, 2023). In summary, our contributions are: 1. We introduce VOICECRAFT, a neural codec language model for speech editing that generates synthesized speech that is nearly indistinguishable from in-the-wild recordings according to human listeners. We also release the code and model weights for VOICECRAFT. 2. We show that VOICECRAFT generalizes well to zero-shot TTS without finetuning. 3. We release a high quality, challenging, and realistic speech editing evaluation dataset REALEDIT. 2 Related Work Neural codec langauge models (NCLM) and zero-shot TTS. Tokenizing speech signals into sequences of learnable, discrete units and then training a language model on the resulting unit sequences was initially proposed in the context of textless NLP (Hsu et al., 2021; Lakhotia et al., 2021; Kharitonov et al., 2021b; Nguyen et al., 2022), where the goal is to perform NLP tasks directly on spoken utterances without the need to first transcribe the speech into text. Recently, NCLMs that operates on tokens from Residual vector quantization (RVQ)-based models (Zeghidour et al., 2021; Defossez et al., 2022) attract increased attention due to its high quality generation. For example, AudioLM (Borsos et al., 2022a) exhibits strong performance on long-term coherent speech continuation. Zero-shot TTS is a task where a model needs to synthesize speech in a target voice which was unseen during training, given only the target transcript and a short reference recording of the target voice. Framing zero-shot TTS as transcript-conditioned speech continuation, VALLE (Wang et al., 2023a) and Spear-TTS (Kharitonov et al., 2023) are the first applications of NCLMs on this task, significantly outperforming non-NCLM approaches. Zhang et al. (2023) extends VALL-E to cross-lingual TTS. Guo et al. (2022); Yang et al. (2023); Liu et al. (2023); Ji et al. (2023); Lyth and King (2024) adapt NCLMs style-controlled speech synthesis. Song et al. (2024); Du et al. (2024b) enhance phoneme alignment in NCLMs to reduce error. Wang et al. 
(2023b) proposes a unified NCLM for both speech generation and recognition tasks. Borsos et al. (2023) proposes an efficient parallel decoding method. Jiang et al. (2024) proposes disentangled timbre and prosody modeling, where the latter is modeled with a NCLM. NCLMs have also been successfully applied to other audio domains. Kreuk et al. (2022) applies NCLM to sound effects generation, and Agostinelli et al. (2023); Donahue et al. (2023); Garcia et al. (2023); Copet et al. (2023) use NCLMs for music generation. Speech editing. This task requires a model to alter words or phrases within an utterance to match a target transcript, but the regions of the original speech not targeted for editing must remain unchanged (see Fig. 1 for an example). Early methods achieve text-guided speech insertion and substitution by combining a single speaker TTS model and a voice conversion model to generate desired speech segment, which is then concatenated with unedited part (Jin et al., 2017). Since the generation is not conditioned on the unedited part of the speech, the result sounds unnatural due to prosody mismatch and boundary artifacts (Morrison et al., 2021). More recent speech editing models have attempted to condition their generation on surrounding speech context. Tan et al. (2021) uses two unidirectional LSTM models with bidirectional fusion. Wang et al. (2022); Bai et al. (2022); Borsos et al. (2022b) uses the masked reconstruction objective with Convolutional or Transformer models to further improve contextualization. FluentSpeech (Jiang et al., 2023b) is a diffusion-based speech editing model that achieves SotA performance on speech editing on LibriTTS and VCTK. The research community starts to investigate the possibility of having a unified model for both zeroshot TTS and speech editing. Yin et al. (2022); Jiang et al. (2023a) propose modular models for 2 12443 the two tasks, while our model is end-to-end. Concurrent work SpeechX (Wang et al., 2023c) adapt VALL-E by prompt tuning for a range of tasks including speech editing and zero-shot TTS, but no human evaluation is conducted in their paper. Concurrent work UniCATS (Du et al., 2024a) is a diffusion-based modular model for the two tasks. However their model is only evaluated on masked speech reconstruction of span length less than 2 seconds, while our model is evaluated on as much as 16 words editing. Voicebox (Le et al., 2023) is a recent flow matching based model capable of a wide range of tasks including speech editing and zeroshot TTS. However the speech editing capability is not evaluated in their paper, and only shown in their demo page. We therefore compare our model’s editing results with Voicebox’s on our demo page using on the same examples from their demo page. 3 Method VOICECRAFT casts both sequence infilling (for speech editing) and continuation (for zero-shot TTS) as a simple left-to-right language modeling by rearranging neural codec’s output tokens. The rearrangement involves two steps: (1) causal masking (§3.1) to enable autoregressive continuation/infilling with bidirectional context and (2) delayed stacking (§3.2) to ensure efficient multi-codebook modeling. VOICECRAFT employs decoder-only Transformers and is trained with an autoregressive sequence prediction (§3.3). We introduce the inference setup for speech editing and zero-shot TTS in §3.4. 3.1 Rearrangement Step 1: Causal Masking As shown on the left hand side of Fig. 
2, given a continuous speech waveform as input, we first use Encodec (Defossez et al., 2022) to quantize it into a T by K codec matrix X, where T is the number of temporal frames, and K is the number of RVQ codebooks. X can be written as (X1, · · · , XT ), where Xt is a vector of length K representing the codes from different codebooks at time step t, and we assume that code from codebook k models the residual from codebook k −1. During training, our goal is to randomly mask some span of tokens (Xt0, . . . , Xt1), and then autoregressively predict these masked tokens conditioned on all of the unmasked tokens. This is a problem when t1 < T, because we cannot condition on future outputs when performing autoregressive generation. We need to modify the masking on X so that it is causal, by moving the span to be masked to the end of the sequence, so that when infilling these tokens the model can condition on both past and future unmasked tokens (Aghajanyan et al., 2022; Donahue et al., 2020; Bavarian et al., 2022). The procedure outlined above can be trivially extended to multiple masked spans by simply moving all masked spans to the end of the sequence. The number of spans to be masked n is sampled from Poison(λ), and then for each span, we sample a span length l ∼Uniform(1, L). Finally, we randomly select the locations of the spans within X under the constraint that they do not overlap with each other. The selected n spans are then replaced with mask tokens 〈M1〉, · · · , 〈Mn〉. The original tokens within these masked spans are moved to the end of the sequence X, with each span preceded by its corresponding mask token. Consider this example: let X = (X1, . . . , X6) and imagine we wish to mask a single span from X2 to X4. The original sequence X is rearranged into Y = (Y1; 〈M1〉; Y2; 〈M1〉; Y3; ), where Y1 = (X1), Y2 = (X5, X6), and Y3 = (X2, X3, X4). We call Y1 and Y2 the unmasked spans, and Y3 the masked span. An end of span or EOS token is added to the end of each masked span (in this example at the end of Y3), and an end of utterance or EOU token is added to the end of the utterance (i.e. Y2). For simplicity, we do not explicitly denote these special tokens and assume they are part of the spans. 3.2 Rearrangement Step 2: Delayed Stacking After the causal masking token rearrangement, each timestep of the rearranged matrix Y is vector of K tokens. Copet et al. (2023) observed that when performing autoregressive generation over stacked RVQ tokens, it is advantageous to apply a delay pattern so that the prediction of codebook k at time t can be conditioned on the prediction of codebook k −1 from the same timestep. We take a similar approach which we describe here. Assume a span Ys is of shape Ls × K. Applying the delay pattern rearranges it into Zs = (Zs,0, Zs,1, · · · , Zs,Ls+K−1), where Zs,t, t ∈[Ls + K −1] is defined as2: Zs,t = (Ys,t,1, Ys,t+1,2, · · · , Ys,t−K+1,K) (1) where Ys,t−k+1,k denotes the token located at coordinate (t −k + 1, k) in matrix Ys, i.e. the kth codebook entry at the (t −k + 1)th timestep. 
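To make the two rearrangement steps concrete, here is a small numpy sketch of the example above (X1..X6 with the span X2..X4 masked). The integer stand-ins for the special tokens and the per-span handling of EOS/EOU are illustrative simplifications, not the released implementation.

```python
import numpy as np

T, K = 6, 4
X = np.arange(1, T + 1).repeat(K).reshape(T, K)  # toy codec matrix: frame t holds value t in all K codebooks
MASK1, EOS, EOU, EMPTY = -1, -2, -3, -4          # stand-ins for <M1>, end-of-span, end-of-utterance, [empty]

def delay(span):
    """Step 2: delayed stacking -- codebook k is shifted down by k-1 steps,
    out-of-range cells are filled with the [empty] token (cf. Eq. 1)."""
    L, K = span.shape
    Z = np.full((L + K - 1, K), EMPTY)
    for k in range(K):
        Z[k:k + L, k] = span[:, k]
    return Z

# Step 1: causal masking on the example from the text -- mask X2..X4 and move them to the
# end, preceded by <M1>; EOS closes the masked span, EOU closes the utterance.
mask_row = np.full((1, K), MASK1)
Y1 = X[:1]                                     # (X1)
Y2 = np.vstack([X[4:], np.full((1, K), EOU)])  # (X5, X6, EOU)
Y3 = np.vstack([X[1:4], np.full((1, K), EOS)]) # (X2, X3, X4, EOS)

# Mask tokens sit between spans and are not delayed; each span is delayed independently.
Z = np.vstack([delay(Y1), mask_row, delay(Y2), mask_row, delay(Y3)])
print(Z.shape)  # (4 + 1 + 6 + 1 + 7, 4) = (19, 4)
print(Z)
```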
2 [N] represents the integer set {0, 1, · · · , N}.
Figure 2: An example of the token rearrangement procedure and modeling framework. The rearrangement procedure involves two steps: (1) Causal masking, where masked spans are replaced with mask tokens and moved to the end, and (2) Delayed stacking, where tokens are shifted in the time dimension based on their codebook index.
To make sure that for all t ∈ [Ls + K − 1], Zs,t contains K valid tokens, we introduce a special learnable [empty] token and define Y_{s,t−k+1,k} ≜ [empty] for all t with t < k or t − k + 1 > Ls. Note that the mask tokens are not part of any span and are not changed during delayed stacking. We define the resulting matrix of delayed stacking as Z = (Z1, ⟨M1⟩, Z2, ⟨M1⟩, · · · , ⟨M_{(S−1)/2}⟩, ZS) (assuming Y consists of S spans). See the diagram for Z in Fig. 2 for an illustration.
3.3 Modeling
As shown in the right hand side of Fig. 2, we use a Transformer decoder to model Z autoregressively, conditioned on the transcript of the speech W. Therefore, the input to the decoder is [W; Z], where ";" denotes concatenation. At timestep t of span s in codec matrix Z, the model predicts all K tokens of Zs,t simultaneously, by using K MLP heads to project the transformer's final hidden state to K sets of logits, one for each of the K codebooks. Note that the prediction is conditioned on the transcript W and on all tokens in Z before Zs,t, denoted as Hs,t. Mathematically, the Transformer decoder models the factorized conditional distribution of Z:
P_θ(Z|W) = ∏_s ∏_t P_θ(Z_{s,t} | W, H_{s,t})    (2)
         = ∏_s ∏_t ∏_{k=1}^{K} P_θ(Z_{s,t,k} | W, H_{s,t})    (3)
where θ represents the parameters of the model. Equation 2 is the autoregressive factorization across time, while Equation 3 is the factorization across codebooks given an independence assumption: given W and H_{s,t}, the K RVQ codes in Z_{s,t} are assumed to be independent of each other. We argue in Appendix D that this assumption is mild. With the token level probability formulation in Equation 3, we derive the training loss as the negative log likelihood L(θ) = −log P_θ(Z|W) = ∑_{k=1}^{K} L_k(θ). Empirically, we found that weighting the first residual codebooks more than the latter codebooks leads to better performance, and therefore our final loss is L(θ) = ∑_{k=1}^{K} α_k L_k(θ), where (α_k)_{k=1}^{K} are tunable hyperparameters. Note that we follow Aghajanyan et al. (2022) and calculate the prediction loss on all tokens (not just the tokens in the masked spans), except for mask tokens and [empty] tokens.
3.4 Inference
Speech Editing. The setting for speech editing is the following: we have a speech recording R and its transcript W, and we want the model to modify only the relevant spans of R so that it matches the target transcript W′. We assume that W′ is an edited version of W, where some words have been inserted, substituted, or deleted. This task is almost exactly the same as the training task, with two differences: 1) during training, the input transcript is simply the transcript of the original recording W, while during inference it is the modified transcript W′; 2) during training, the spans to be masked (i.e. edited) are chosen randomly. During inference, we select them by comparing the original transcript and the target transcript to identify the words that should be masked out, and then use the word level forced alignment of the original transcript to identify the codec token spans that correspond to these
We assume that W ′ is an edited version of W, where some words have been inserted, substituted, or deleted. This task is almost exactly the same as the training task, with two differences: 1) during training, the input transcript is simply the transcript of the original recording W, while during inference it is a modified transcript W ′ 2) during training, the spans to be masked (i.e. edited) are chosen randomly. During inference, we select them by comparing the original transcript and the target transcript to identify the words that should be masked out, and then use the word level forced alignment of the original transcript to identify the codec token spans that correspond to these 4 12445 Table 1: Examples of the speech editing dataset REALEDIT. More examples are shown in table 8. Edit Types Original Edited deletion I wrote the title of the course many years ago, ah, when I created this course. I wrote the title when I created this course. insertion And we’re at this point. And we’re all extremely excited at this point. substitution, substitution See why it’s extremely valuable to it’s kind of like it’s kind of like having a wall hack to watch a demo. See why it’s extremely important right? it’s kind of like having a rough time to watch a demo. words to be masked. To ensure a smooth transition between the edited speech and the unedited speech, the neighboring words surrounding the span to be edited also need to be slightly modified in order to model co-articulation effects. Therefore, we specify a small margin hyperparameter ϵ, and extend the mask span length by ϵ on both the left and right sides3. During autoregressive generation, we feed the model the target transcript with all unmasked spans, with mask tokens inserted in the locations where the edits should take place. We then have the model autoregressively continue this sequence, whereby it fills in the masked spans. The generated codec tokens are then spliced back into their correct location in the utterance, and we map the complete codec token sequence back to a waveform using the Encodec decoder network. Zero-shot TTS. As we previously noted, zeroshot TTS for our model is straightforward because it simply corresponds to performing an insertion edit at the end of the original utterance. In this case, the model is provided a voice prompt with its transcription, as well as the target transcript of the speech to be generated. The three inputs are concatenated together and fed to the model, after which it generates the codec sequence of the target transcript autoregressively. 4 REALEDIT: a realistic and challenging speech editing dataset To support as realistic an evaluation as possible, we constructed a first-of-its-kind dataset of 310 manually-crafted speech editing examples. Each example consists of a tuple: (original audio, original transcript, edited transcript). The dataset contains 100 utterances from LibriTTS (dev-clean and dev-other) (Zen et al., 2019), 100 utterances from YouTube (from Gigaspeech testset) (Chen et al., 2021a) and 110 utterances from the Spotify Pod3for substitution and deletion, the spans that are to be masked are just those words that are different from the target plus the margin; for insertion, the spans are just left and right margin spanning from the middle of the two words where the insertion happens cast dataset (Clifton et al., 2020). We manually checked the utterances for accuracy, then had native English speakers revise them to create edited transcripts. 
For each utterance, we determine the type of modification using predefined probability distributions of editing type, number of disjoint spans to be edited, and editing span length. Specifically, we study the following categories: 1) number of edited spans: 1 or 2; 2) type of edits: insertion, deletion and substitution; 3) editing span length: short (1-2 words), medium (3-6 words), long (712 words). Crucially, a edited transcript must be grammatically correct and semantically coherent. Examples of the dataset are shown in table 1 and 8, and statistics are shown in table 2, Table 2: Dataset statistics for speech editing evaluation. Note that for 2-span editing, each example is edited using 2 of the 3 edit types. length type Insert. Delet. Substi. Total 1-2 words (1 span) 8 17 38 63 3-6 words (1 span) 22 24 79 125 7-12 words (1 span) 15 11 56 82 1 span total 45 52 173 270 2 spans total 13 13 54 40 5 Experiments 5.1 Setup Data. Gigaspeech training set (Chen et al., 2021a) is used as the training data, which contains 9k hours of audiobooks, podcasts, and YouTube videos at 16kHz audio sampling rate. Audio files that shorter than 2 seconds are dropped. For ablation studies, we use the masked reconstruction task, and a 1000utterance random subset of Gigaspeech validation set as the testing utterances (detailed in §C). For speech editing evaluation, we use the proposed REALEDIT dataset. For zero-shot TTS evaluation, we constructed a 250 prompt-transcript paired dataset from LibriTTS (Zen et al., 2019) and the YouTube portion of the Gigaspeech test set, with half of the examples drawn from each dataset. The length of each voice prompt is kept as close as pos5 12446 sible to 3 seconds long, with the constraint applied that we only cut the audio between complete words. The transcript is a concatenation of the transcript of the voice prompt and the target transcript. The target transcripts are chosen from different utterances spoken by the same speaker as the prompt, and range from 8 to 40 words in length. We only select utterances with a WER lower than 15% by Whisper medium.en (Radford et al., 2022). Model. Encodec (Defossez et al., 2022) is used as the speech tokenizer, which has 4 RVQ codebooks each with vocabulary size of 2048, and a codec framerate of 50Hz on 16kHz recordings. (see §C for detailed config). To choose the number of spans to mask in training, we use a Poison(1) distribution truncated to a minimum of 1 and maximum of 3. Span lengths are sampled from Uniform(1, 600) i.e. the masked speech can be as long as 12 seconds. At each time step, the embeddings of codes from different codebooks are summed (Wang et al., 2023a), then added by sinusoidal positional encoding (Vaswani et al., 2017), before being fed to the transformer. Text transcripts are phonemized based on the IPA phoneset using the toolkit provided by Bernard and Titeux (2021). Our main VOICECRAFT model has 16 transformers layer with hidden/FFN dimensions of 2048/8192, and 12 attention heads. The output of the last layers are fed to four separate 2-layer MLP modules to get prediction logits. Our Main model has 830M parameters and codebook weight hyperparameters α is set to be (5, 1, 0.5, 0.1). Ablations on model sizes and codebook weights are shown in §5.2. Training and inference. The training of the Encodec model largely follows the setting in Copet et al. (2023), detailed in §C. 
To train VOICECRAFT, we used the ScaledAdam optimizer and Eden Scheduler proposed in (Yao et al., 2024) with a base learning rate of 0.05, batch size of 400k frames (i.e. 133.2 minutes), and total training step of 50k with gradient accumulation. The training of the 830M VOICECRAFT model took about 2 weeks on 4 NVIDIA A40 GPUs. More details can be found in §C. We compare the performance of ScaledAdam and AdamW in §A.1. For inference, we use Nucleus sampling (Holtzman et al., 2020) with p = 0.8 and a temperature of 1 for all experiments. Due to the stochasticity of autoregressive generation, via manual inspection we found that while most of the time the model produces natural sounding speech, it sometimes produces excessively long silence or drags out certain sounds. Table 3: Effect of scaling model sizes and codebook re-weighting. Lower is better for all metrics. Params Weights WER MCD F0 Energy 120M (1,1,1,1) 10.18 8.75 78.49 3.22 120M (5,1,0.5,0.1) 7.75 8.31 87.74 3.54 430M (1,1,1,1) 7.87 8.22 70.05 3.17 430M (5,1,0.5,0.1) 7.30 8.13 73.41 3.19 830M (5,1,0.5,0.1) 6.68 8.05 67.81 3.12 We found that happens when the codec token generation gets stuck in a repeating loop. To resolve it, we use a simple heuristic: for each input utterance we generate several different output utterances and throw away the longest outputs. Specifically for speech editing, we run inference 10 times with different margin parameters, stepping ϵ up from 0.05 to 0.14 in 0.01 increments. The 4 longest outputs are discarded, and then we randomly select one sample from the remaining 6 outputs. For zero-shot TTS, we reduce the probability of generating the same token in consecutive timesteps in proportion to how many times that token was consecutively generated in the immediately preceding timesteps. In addition, we generate 5 samples with different random seeds, and select the shortest for TTS evaluation. The sample selection process is completely automatic and unsupervised (i.e. no human intervention or ASR scoring). Baselines. For speech editing, we compare VOICECRAFT with the diffusion-based model FluentSpeech (Jiang et al., 2023b) which is the current open-source SotA model for speech editing. Since the original FluentSpeech model is trained on LibriTTS, for a fair comparison, we took the official GitHub repo and trained the model on Gigaspeech. Please find more details in §C. For zeroshot TTS, we compare our VOICECRAFT with VALL-E (Wang et al., 2023a), XTTS v2 (COQUI, 2023), YourTTS (Casanova et al., 2021), and FluentSpeech. Since the original VALL-E is not open-sourced, we use the code from the popular open-source implementation by Li (2023), and also trained the model on Gigaspeech. XTTS v2 is a popular commercial zero-shot TTS model4 trained on a mixture of publicly available data and webcrawled data, although the exact data sources are unknown. YourTTS is trained on VCTK, LibriTTS, and also French and Portugese corpora. Metrics. For ablation studies, since ground truth waveform is avaliable, in addition to WER (using Whisper medium.en as the ASR model), we 4The GitHub repo hosting XTTS v2 has 26k stars by Jan 2024. 6 12447 Table 4: Performance comparison on speech editing. 
Model | WER | Intelligibility MOS (LibriTTS / YouTube / Spotify / Total) | Naturalness MOS (LibriTTS / YouTube / Spotify / Total)
FluentSpeech | 4.5 | 3.89±0.09 / 4.08±0.08 / 3.95±0.08 / 3.97±0.05 | 3.42±0.10 / 4.07±0.10 / 3.93±0.10 / 3.81±0.06
VOICECRAFT | 6.1 | 4.05±0.08 / 4.14±0.07 / 4.12±0.07 / 4.11±0.05 | 3.68±0.10 / 4.25±0.09 / 4.16±0.08 / 4.03±0.05
Original | 5.4 | 4.22±0.07 / 4.30±0.07 / 4.16±0.08 / 4.22±0.05 | 3.84±0.09 / 4.35±0.08 / 4.29±0.08 / 4.17±0.05
use mel-cepstral distortion (MCD), F0 distance (F0), and energy distance (Energy). These are all objective metrics and their definitions are detailed in §C. For speech editing and zero-shot TTS evaluation, we use a combination of objective and subjective metrics. For the objective metrics, we used WER and speaker similarity (SIM) following prior works (Wang et al., 2023a; Kharitonov et al., 2023). SIM is calculated using the WavLM-TDCNN (Chen et al., 2021b). WER and SIM are calculated on all 310 utterances in REALEDIT, and 250 utterances in the zero-shot TTS dataset. For our subjective evaluation, we used the Amazon Mechanical Turk platform to conduct human listening tests. For speech editing, the outputs of our model on all 310 utterances from REALEDIT are evaluated by Turkers in terms of naturalness and intelligibility, and we use a 5-point Likert scale where 1 means poor and 5 means excellent. We also performed side-by-side A/B testing of VOICECRAFT's output against the original (non-edited) speech, as well as the edited speech produced by FluentSpeech. In both cases, Turkers were asked to determine which utterance sounds more natural. The Turkers can choose either one of the two, or indicate that they are equally natural. Each evaluation received 5 ratings from 5 different Turkers. For zero-shot TTS, we randomly sampled 80 utterances (40 from LibriTTS and 40 from YouTube) from the original evaluation set, and asked Turkers to rate the naturalness, intelligibility, and speaker similarity of the generated speech to the reference prompt on a 5-point Likert scale. Each evaluation received 10 ratings. For all evaluations except the side-by-side comparison, Mean Opinion Scores (MOS) with 95% confidence intervals are reported. For the side-by-side comparison, we report the percentage of the time one model is preferred over the other. 64 and 59 Turkers participated in the speech editing and TTS evaluations, respectively. Please refer to §E for instructions and participant descriptions.
Figure 3: Breakdown of side-by-side human preference on naturalness comparing VOICECRAFT edited speech and the original speech, grouped by edit type (left) and edit span length (right).
Table 5: Side-by-side naturalness comparison of VOICECRAFT (VCR) vs. Original (Orig.) and FluentSpeech (FS).
Comparison | VCR better | Tie | VCR worse
VOICECRAFT v. FS | 56.1% | 19.7% | 24.1%
VOICECRAFT v. Orig. | 40.3% | 16.2% | 43.6%
5.2 Ablations
In Table 3, we see that larger model sizes lead to better performance across all metrics. In addition, we see a bigger gap between the bigger models, indicating the potential of further scaling model (and possibly training data) sizes.
For the impact of codebook re-weighting, and we see that weighting earlier codebook heavier leads to better performance on intelligibility related metrics WER and MCD, while worse performance on prosody related metrics F0 and Energy5. We choose weight (5, 1, 0.5, 0.1) in our final 830M model because anecdotally, we found that VOICECRAFT is stronger in prosody compared to intelligibility (similar properties about NCLMs are also found in (Jiang et al., 2023a; Song et al., 2024; Du et al., 2024b)) 5.3 Speech Editing Results Table 4 shows the results of speech editing evaluation in terms of WER, and human preference 5This can be regarded as a probing results that shows the properties of different codebooks in RVQ models. Since this is not the focus of our work, we do not conduct further experiment on this direction. 7 12448 Table 6: On the zero-shot TTS task, comparing VOICECRAFT with other models. Intelligibility MOS Naturalness MOS Speaker Similarity MOS Model WER SIM Libri. YouTube Total Libri. YouTube Total Libri. YouTube Total YourTTS 6.6 0.41 3.28±0.11 3.01±0.12 3.14±0.08 2.99±0.12 2.59±0.12 2.79±0.08 3.10±0.12 2.49±0.12 2.79±0.09 FluentSpeech 3.5 0.47 3.70±0.11 3.65±0.12 3.67±0.08 3.34±0.11 3.43±0.12 3.38±0.08 4.10±0.09 3.92±0.11 4.01±0.07 VALL-E 7.1 0.50 4.05±0.09 3.94±0.10 4.00±0.07 3.85±0.10 3.86±0.10 3.86±0.07 4.12±0.10 4.02±0.10 4.07±0.07 XTTS v2 3.6 0.47 4.29±0.09 3.97±0.10 4.13±0.07 4.02±0.09 3.90±0.10 3.96±0.07 3.64±0.12 3.25±0.12 3.44±0.08 VOICECRAFT 4.5 0.55 4.38±0.08 4.08±0.10 4.23±0.06 4.16±0.08 4.18±0.09 4.17±0.06 4.35±0.08 4.33±0.09 4.34±0.06 Ground Truth 3.8 0.76 4.37±0.08 4.42±0.08 4.39±0.06 4.32±0.08 4.64±0.06 4.48±0.05 4.26±0.10 4.62±0.08 4.44±0.06 on intelligibility and naturalness. Our VOICECRAFToutperforms FluentSpeech on both intelligibility and naturalness MOS across different sources. Interestingly, FluentSpeech achieves a WER lower than the original recording (4.5 v.s. 5.4), although its intelligibility MOS (3.97) is worse than both VOICECRAFT (4.11) and original recording (4.22). This suggests that ASR model and human judgement diverge on FluentSpeech’s intelligibility. Anecdotally, we observe that FluentSpeech tends to produce dull and sometimes robotic speech 6, and we hypothesize that this type of speech tends be more easily recognized by ASR, but is less intelligible to human ears. We notice this same phenomenon in our results on zero-shot TTS. Human listeners rate LibriTTS’s naturalness lower than YouTube and Spotify on original speech (results on TTS is consistent with this). This suggests that to better evaluate speech synthesis in general, the research community should consider evaluating on other speech domains besides audiobooks as is commonly done. Table 5 presents side-by-side utterance naturalness comparison of VOICECRAFT vs. FluentSpeech and VOICECRAFT vs. the original, unedited speech. We observe that VOICECRAFT is preferred over FluentSpeech 56.1% of the time, with an additional 19.7% of the time the two are tied. This means that 75.9% of the time, human listeners’ think VOICECRAFT produces equal or more natural speech than FluentSpeech. Impressively, human listeners judge the edited speech produced by VOICECRAFT to be equally or more natural than the original unedited speech 56.4% of the time. Fig. 4 shows the breakdown of the side-by-side comparisons by edit type and edit span length. 
We see that compared to the original speech, VOICECRAFT performs consistently well across different edit types, but human listeners think its outputs are slightly less natural with longer edit span(s). 6please refer to our demo page for examples 5.4 Zero-Shot TTS Results Table 6 shows both objective and subjective evaluation on zero-shot TTS. We observe that VOICECRAFT achieves the best results in both automatic speaker similarity metric SIM, and all human evaluation metrics. In particular, VOICECRAFT is only slightly worse than ground truth in terms of intelligibility MOS (4.23 v.s. 4.39), and speaker similarity MOS (4.34 v.s. 4.44). The gap on naturalness is larger between VOICECRAFT and ground truth (4.17 v.s. 4.48), especially on YouTube utterances, which highlights the challenges of zero-shot TTS on noisy, in-the-wild data. The commercial model XTTS v2 comes second in terms of intelligibility and naturalness, and second to last on speaker similarity MOS. VALL-E achieves the second best on both automatic metric SIM and subjective metric speaker similarity MOS. Similarly to the speech editing results, ground truth YouTube utterances receive higher MOS scores than ground truth LibriTTS utterances in Table 6, which again suggests that we should consider using more diverse data for future speech synthesis model evaluation. Lastly, we again observe that FluentSpeech achieves lower WER than the ground truth, but receives much lower ratings in terms of intelligibility MOS from human listeners, indicating that WER could be misleading in evaluating intelligibility of speech synthesis systems7. 6 Conclusion We introduce a neural codec language model VOICECRAFT that achieves state-of-the-art performance on speech editing and zero-shot TTS on inthe-wild data. The key lies in an innovative token rearrangement procedure which enables efficient and effective autoregressive codec generation with bidirectional context. In addition, we introduce a first-of-its-kind high quality, challenging, and re7we also tried Whisper Large-v3, it gets WER of 4.1 for ground truth, and 2.7 for FluentSpeech. 8 12449 alistic speech editing dataset REALEDIT, which we believe can reliably measure the practicality of speech editing models. 7 Limitations Given the advancement of made by VOICECRAFT, there are still limitations. First and foremost is the long silence and scratching sound that occasionally occur during generation. Although in this work, we overcome it with sampling multiple utterances and selecting the shorter ones, more elegant and efficient methods are needed. Another important aspect is AI safety, how can we watermark and detect synthesized speech? While watermarking and deepfake detection has attracted increasing attention in the research community, and remarkable progress has been made (Zhang et al., 2020; Yamagishi et al., 2021; Chen et al., 2023; Roman et al., 2024), more advanced models such as VOICECRAFT presents new opportunities and challenges to safety research. To facilitate speech synthesis and AI safety research, we fully open source our codebase and model weights. 8 Ethical Implications The speech synthesis model VOICECRAFT introduced in this work has both positive and negative implications. On the positive side, VOICECRAFT holds the promise of significant benefits across several domains. 
For individuals with speech impairments or who have lost the use of their voice, VOICECRAFT could be transformative, enabling these individuals new ways to communicate with ease and clarity that were previously not possible. Content creators, whether they work in education, video production, or podcasting, could leverage VOICECRAFT to streamline their editing processes, making it easier to produce high-quality content without the need to re-record takes when they contain a small mistake. Furthermore, VOICECRAFT’s ability to handle diverse accents without compromising on quality opens up new possibilities for creating synthetic data. This could, in turn, enhance speech recognition systems, such as Voicebox (Le et al., 2023), by providing them with a richer and more varied dataset to learn from, thereby improving their accuracy and accessibility to users worldwide. However, the potential negative impacts of VOICECRAFT cannot be overlooked. One of the primary concerns is the model’s potential to exacerbate existing biases, particularly those related to ethnicity. If not carefully monitored and corrected, these biases could lead to unequal performance across different groups, perpetuating and possibly even worsening existing disparities. Moreover, the ease with which voices can be cloned raises serious concerns about misuse, including impersonation and fraud. The ability to replicate someone’s voice with only a few seconds of reference audio could be exploited to commit crimes or spread misinformation, posing significant ethical and security challenges. As such, while the benefits of VOICECRAFT are clear and substantial, it is imperative to approach its deployment with caution, ensuring that measures are in place to mitigate these risks and protect against potential misuse. Despite the concerns regarding impersonation and fraud associated with VOICECRAFT, there are compelling reasons to advocate for its release. Foremost among these is the opportunity it presents for the broader research community and technology developers to better understand and mitigate these negative impacts. By making these methods open source, we can catalyze the development of more robust countermeasures against the misuse of voice cloning technologies. This collaborative approach allows for the rapid identification of vulnerabilities and the exploration of innovative strategies to address them. Moreover, the authors of this work fully committed to advancing the field responsibly. We are actively working on pioneering deepfake detection and watermarking algorithms specifically designed for synthetic speech. By doing so, we not only acknowledge the potential risks associated with our technology but also take concrete steps to ensure its ethical use. This dual approach of open collaboration and dedicated research into safeguarding mechanisms reflects our commitment to fostering a technological ecosystem where the benefits of voice cloning can be realized while minimizing its potential for harm. 9 Acknowledgements We thank Ziyue Jiang for providing guidance in running inference with and scaling up the FluentSpeech model. We thank students at SALT Lab of UT Austin for helpful discussions. This work is supported in part by the National Science Foundation under Grant No. 2238605. 9 12450 References Armen Aghajanyan, Po-Yao (Bernie) Huang, Candace Ross, Vladimir Karpukhin, Hu Xu, Naman Goyal, Dmytro Okhonko, Mandar Joshi, Gargi Ghosh, Mike Lewis, and Luke Zettlemoyer. 2022. 
Cm3: A causal masked multimodal model of the internet. ArXiv, abs/2201.07520. Andrea Agostinelli, Timo I. Denk, Zalán Borsos, Jesse Engel, Mauro Verzetti, Antoine Caillon, Qingqing Huang, Aren Jansen, Adam Roberts, Marco Tagliasacchi, Matthew Sharifi, Neil Zeghidour, and Christian Havnø Frank. 2023. Musiclm: Generating music from text. ArXiv, abs/2301.11325. He Bai, Renjie Zheng, Junkun Chen, Xintong Li, Mingbo Ma, and Liang Huang. 2022. A3t: Alignment-aware acoustic and text pretraining for speech synthesis and editing. In International Conference on Machine Learning. Mohammad Bavarian, Heewoo Jun, Nikolas A. Tezak, John Schulman, Christine McLeavey, Jerry Tworek, and Mark Chen. 2022. Efficient training of language models to fill in the middle. ArXiv, abs/2207.14255. Mathieu Bernard and Hadrien Titeux. 2021. Phonemizer: Text to phones transcription for multiple languages in python. Journal of Open Source Software, 6(68):3958. Zalán Borsos, Raphaël Marinier, Damien Vincent, Eugene Kharitonov, Olivier Pietquin, Matthew Sharifi, Dominik Roblek, Olivier Teboul, David Grangier, Marco Tagliasacchi, and Neil Zeghidour. 2022a. Audiolm: A language modeling approach to audio generation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 31:2523–2533. Zalán Borsos, Matthew Sharifi, and Marco Tagliasacchi. 2022b. Speechpainter: Text-conditioned speech inpainting. In Interspeech. Zalán Borsos, Matthew Sharifi, Damien Vincent, Eugene Kharitonov, Neil Zeghidour, and Marco Tagliasacchi. 2023. Soundstorm: Efficient parallel audio generation. ArXiv, abs/2305.09636. Edresson Casanova, Julian Weber, Christopher Dane Shulby, Arnaldo Cândido Júnior, Eren Gölge, and Moacir Antonelli Ponti. 2021. Yourtts: Towards zero-shot multi-speaker tts and zero-shot voice conversion for everyone. In International Conference on Machine Learning. Guang Chen, Yu Wu, Shujie Liu, Tao Liu, Xiaoyong Du, and Furu Wei. 2023. Wavmark: Watermarking for audio generation. ArXiv, abs/2308.12770. Guoguo Chen, Shuzhou Chai, Guan-Bo Wang, Jiayu Du, Weiqiang Zhang, Chao Weng, Dan Su, Daniel Povey, Jan Trmal, Junbo Zhang, Mingjie Jin, Sanjeev Khudanpur, Shinji Watanabe, Shuaijiang Zhao, Wei Zou, Xiangang Li, Xuchen Yao, Yongqing Wang, Yujun Wang, Zhao You, and Zhiyong Yan. 2021a. Gigaspeech: An evolving, multi-domain asr corpus with 10,000 hours of transcribed audio. In Proc. Interspeech 2021. Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Micheal Zeng, and Furu Wei. 2021b. Wavlm: Large-scale self-supervised pre-training for full stack speech processing. IEEE Journal of Selected Topics in Signal Processing, 16:1505–1518. Ann Clifton, Sravana Reddy, Yongze Yu, Aasish Pappu, Rezvaneh Rezapour, Hamed Bonab, Maria Eskevich, Gareth Jones, Jussi Karlgren, Ben Carterette, and Rosie Jones. 2020. 100,000 podcasts: A spoken English document corpus. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5903–5917, Barcelona, Spain (Online). International Committee on Computational Linguistics. Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, and Alexandre Defossez. 2023. Simple and controllable music generation. ArXiv, abs/2306.05284. COQUI. 2023. Xtts v2. https://huggingface.co/ coqui/XTTS-v2. Alexandre Defossez, Jade Copet, Gabriel Synnaeve, and Yossi Adi. 2022. High fidelity neural audio compression. ArXiv, abs/2210.13438. 
Chris Donahue, Antoine Caillon, Adam Roberts, Ethan Manilow, Philippe Esling, Andrea Agostinelli, Mauro Verzetti, Ian Simon, Olivier Pietquin, Neil Zeghidour, and Jesse Engel. 2023. Singsong: Generating musical accompaniments from singing. ArXiv, abs/2301.12662. Chris Donahue, Mina Lee, and Percy Liang. 2020. Enabling language models to fill in the blanks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2492– 2501, Online. Association for Computational Linguistics. Chenpeng Du, Yiwei Guo, Feiyu Shen, Zhijun Liu, Zheng Liang, Xie Chen, Shuai Wang, Hui Zhang, and K. Yu. 2024a. Unicats: A unified context-aware textto-speech framework with contextual vq-diffusion and vocoding. In AAAI. Chenpeng Du, Yiwei Guo, Hankun Wang, Yifan Yang, Zhikang Niu, Shuai Wang, Hui Zhang, Xie Chen, and Kai Yu. 2024b. Vall-t: Decoder-only generative transducer for robust and decoding-controllable textto-speech. Hugo Flores Garcia, Prem Seetharaman, Rithesh Kumar, and Bryan Pardo. 2023. Vampnet: Music generation via masked acoustic token modeling. ArXiv, abs/2307.04686. 10 12451 Zhifang Guo, Yichong Leng, Yihan Wu, Sheng Zhao, and Xuejiao Tan. 2022. Prompttts: Controllable text-to-speech with text descriptions. ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In International Conference on Learning Representations. Wei-Ning Hsu, David Harwath, Tyler Miller, Christopher Song, and James R. Glass. 2021. Text-free image-to-speech synthesis using learned segmental units. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021. Keith Ito and Linda Johnson. 2017. The lj speech dataset. https://keithito.com/ LJ-Speech-Dataset/. Shengpeng Ji, Jia li Zuo, Minghui Fang, Ziyue Jiang, Feiyang Chen, Xinyu Duan, Baoxing Huai, and Zhou Zhao. 2023. Textrolspeech: A text style control speech corpus with codec language text-to-speech models. ArXiv, abs/2308.14430. Ziyue Jiang, Jinglin Liu, Yi Ren, Jinzheng He, Zhenhui Ye, Shengpeng Ji, Qian Yang, Chen Zhang, Pengfei Wei, Chunfeng Wang, Xiang Yin, Zejun MA, and Zhou Zhao. 2024. Mega-TTS 2: Boosting prompting mechanisms for zero-shot speech synthesis. In The Twelfth International Conference on Learning Representations. Ziyue Jiang, Yi Ren, Zhe Ye, Jinglin Liu, Chen Zhang, Qiang Yang, Shengpeng Ji, Rongjie Huang, Chunfeng Wang, Xiang Yin, Zejun Ma, and Zhou Zhao. 2023a. Mega-tts: Zero-shot text-to-speech at scale with intrinsic inductive bias. ArXiv, abs/2306.03509. Ziyue Jiang, Qiang Yang, Jia li Zuo, Zhe Ye, Rongjie Huang, Yixiang Ren, and Zhou Zhao. 2023b. Fluentspeech: Stutter-oriented automatic speech editing with context-aware diffusion models. In Annual Meeting of the Association for Computational Linguistics. Zeyu Jin, Gautham J. Mysore, Stephen DiVerdi, Jingwan Lu, and Adam Finkelstein. 2017. Voco: textbased insertion and replacement in audio narration. In International Conference on Computer Graphics and Interactive Techniques. Eugene Kharitonov, Ann Lee, Adam Polyak, Yossi Adi, Jade Copet, Kushal Lakhotia, Tu Nguyen, Morgane Rivière, Abdel rahman Mohamed, Emmanuel Dupoux, and Wei-Ning Hsu. 2021a. Text-free prosody-aware generative spoken language modeling. ArXiv, abs/2109.03264. 
Eugene Kharitonov, Ann Lee, Adam Polyak, Yossi Adi, Jade Copet, Kushal Lakhotia, Tu Nguyen, Morgane Rivière, Abdel rahman Mohamed, Emmanuel Dupoux, and Wei-Ning Hsu. 2021b. Text-free prosody-aware generative spoken language modeling. ArXiv, abs/2109.03264. Eugene Kharitonov, Damien Vincent, Zalán Borsos, Raphaël Marinier, Sertan Girgin, Olivier Pietquin, Matthew Sharifi, Marco Tagliasacchi, and Neil Zeghidour. 2023. Speak, read and prompt: High-fidelity text-to-speech with minimal supervision. Transactions of the Association for Computational Linguistics, 11:1703–1718. Jaehyeon Kim, Jungil Kong, and Juhee Son. 2021. Conditional variational autoencoder with adversarial learning for end-to-end text-to-speech. ArXiv, abs/2106.06103. Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980. Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae. 2020. Hifi-gan: Generative adversarial networks for efficient and high fidelity speech synthesis. ArXiv, abs/2010.05646. Felix Kreuk, Gabriel Synnaeve, Adam Polyak, Uriel Singer, Alexandre Defossez, Jade Copet, Devi Parikh, Yaniv Taigman, and Yossi Adi. 2022. Audiogen: Textually guided audio generation. ArXiv, abs/2209.15352. Robert F. Kubichek. 1993. Mel-cepstral distance measure for objective speech quality assessment. Proceedings of IEEE Pacific Rim Conference on Communications Computers and Signal Processing, 1:125– 128 vol.1. Kushal Lakhotia, Evgeny Kharitonov, Wei-Ning Hsu, Yossi Adi, Adam Polyak, Benjamin Bolte, Tu Nguyen, Jade Copet, Alexei Baevski, Adel Ben Mohamed, and Emmanuel Dupoux. 2021. On generative spoken language modeling from raw audio. Transactions of the Association for Computational Linguistics, 9:1336–1354. Matt Le, Apoorv Vyas, Bowen Shi, Brian Karrer, Leda Sari, Rashel Moritz, Mary Williamson, Vimal Manohar, Yossi Adi, Jay Mahadeokar, and WeiNing Hsu. 2023. Voicebox: Text-guided multilingual universal speech generation at scale. ArXiv, abs/2306.15687. Feiteng Li. 2023. An unofficial pytorch implementation of vall-e. https://github.com/lifeiteng/ vall-e. Guanghou Liu, Yongmao Zhang, Yinjiao Lei, Yunlin Chen, Rui Wang, Zhifei Li, and Linfu Xie. 2023. Promptstyle: Controllable style transfer for text-tospeech with natural language descriptions. ArXiv, abs/2305.19522. Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. In International Conference on Learning Representations. 11 12452 Daniel Lyth and Simon King. 2024. Natural language guidance of high-fidelity text-to-speech with synthetic annotations. ArXiv, abs/2402.01912. Matthias Mauch and Simon Dixon. 2014. Pyin: A fundamental frequency estimator using probabilistic threshold distributions. 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 659–663. Brian McFee, Colin Raffel, Dawen Liang, Daniel P. W. Ellis, Matt McVicar, Eric Battenberg, and Oriol Nieto. 2015. librosa: Audio and music signal analysis in python. In SciPy. Max Morrison, Lucas Rencker, Zeyu Jin, Nicholas J. Bryan, Juan Pablo Cáceres, and Bryan Pardo. 2021. Context-aware prosody correction for text-based speech editing. ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7038–7042. Tu Nguyen, Eugene Kharitonov, Jade Copet, Yossi Adi, Wei-Ning Hsu, Ali Mamdouh Elkahky, Paden Tomasello, Robin Algayres, Benoît Sagot, Abdel rahman Mohamed, and Emmanuel Dupoux. 2022. Generative spoken dialogue language modeling. 
Transactions of the Association for Computational Linguistics, 11:250–266. Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2022. Robust speech recognition via large-scale weak supervision. ArXiv, abs/2212.04356. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Robin San Roman, Pierre Fernandez, Alexandre Defossez, Teddy Furon, Tuan Tran, and Hady ElSahar. 2024. Proactive detection of voice cloning with localized watermarking. ArXiv, abs/2401.17264. Yakun Song, Zhuo Chen, Xiaofei Wang, Ziyang Ma, and Xie Chen. 2024. Ella-v: Stable neural codec language modeling with alignment-guided sequence reordering. Daxin Tan, Liqun Deng, Yu Ting Yeung, Xin Jiang, Xiao Chen, and Tan Lee. 2021. Editspeech: A text based speech editing system using partial inference and bidirectional fusion. 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 626–633. Aäron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. 2017. Neural discrete representation learning. ArXiv, abs/1711.00937. Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Neural Information Processing Systems. Chengyi Wang, Sanyuan Chen, Yu Wu, Zi-Hua Zhang, Long Zhou, Shujie Liu, Zhuo Chen, Yanqing Liu, Huaming Wang, Jinyu Li, Lei He, Sheng Zhao, and Furu Wei. 2023a. Neural codec language models are zero-shot text to speech synthesizers. ArXiv, abs/2301.02111. Tao Wang, Jiangyan Yi, Liqun Deng, Ruibo Fu, Jianhua Tao, and Zhengqi Wen. 2022. Context-aware mask prediction network for end-to-end text-based speech editing. ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6082–6086. Tianrui Wang, Long Zhou, Zi-Hua Zhang, Yu Wu, Shujie Liu, Yashesh Gaur, Zhuo Chen, Jinyu Li, and Furu Wei. 2023b. Viola: Unified codec language models for speech recognition, synthesis, and translation. ArXiv, abs/2305.16107. Xiaofei Wang, Manthan Thakker, Zhuo Chen, Naoyuki Kanda, Sefik Emre Eskimez, Sanyuan Chen, Min Tang, Shujie Liu, Jinyu Li, and Takuya Yoshioka. 2023c. Speechx: Neural codec language model as a versatile speech transformer. ArXiv, abs/2308.06873. Junichi Yamagishi, Christophe Veaux, and Kirsten MacDonald. 2019. Cstr vctk corpus: English multispeaker corpus for cstr voice cloning toolkit (version 0.92). Junichi Yamagishi, Xin Wang, Massimiliano Todisco, Md. Sahidullah, Jose Patino, Andreas Nautsch, Xuechen Liu, Kong-Aik Lee, Tomi H. Kinnunen, Nicholas W. D. Evans, and Héctor Delgado. 2021. Asvspoof 2021: accelerating progress in spoofed and deepfake speech detection. ArXiv, abs/2109.00537. Dongchao Yang, Songxiang Liu, Rongjie Huang, Guangzhi Lei, Chao Weng, Helen M. Meng, and Dong Yu. 2023. Instructtts: Modelling expressive tts in discrete latent space with natural language style prompt. ArXiv, abs/2301.13662. Zengwei Yao, Liyong Guo, Xiaoyu Yang, Wei Kang, Fangjun Kuang, Yifan Yang, Zengrui Jin, Long Lin, and Daniel Povey. 2024. Zipformer: A faster and better encoder for automatic speech recognition. In ICLR. Dacheng Yin, Chuanxin Tang, Yanqing Liu, Xiaoqiang Wang, Zhiyuan Zhao, Yucheng Zhao, Zhiwei Xiong, Sheng Zhao, and Chong Luo. 2022. Retrievertts: Modeling decomposed factors for text-based speech insertion. In Interspeech. Neil Zeghidour, Alejandro Luebs, Ahmed Omran, Jan Skoglund, and Marco Tagliasacchi. 2021. 
Soundstream: An end-to-end neural audio codec. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 30:495–507. Heiga Zen, Viet-Trung Dang, Robert A. J. Clark, Yu Zhang, Ron J. Weiss, Ye Jia, Z. Chen, and Yonghui Wu. 2019. Libritts: A corpus derived from librispeech for text-to-speech. In Interspeech. You Zhang, Fei Jiang, and Zhiyao Duan. 2020. One-class learning towards synthetic voice spoofing detection. IEEE Signal Processing Letters, 28:937–941. Zi-Hua Zhang, Long Zhou, Chengyi Wang, Sanyuan Chen, Yu Wu, Shujie Liu, Zhuo Chen, Yanqing Liu, Huaming Wang, Jinyu Li, Lei He, Sheng Zhao, and Furu Wei. 2023. Speak foreign languages with your own voice: Cross-lingual neural codec language modeling. ArXiv, abs/2303.03926.

A Additional Experiments

A.1 Comparing ScaledAdam and AdamW
The hyperparameter settings of ScaledAdam can be found in Table 9. For AdamW (Loshchilov and Hutter, 2017), we tried 3 settings:
• setting1: peak learning rate: 1e-5, batch size: 3.3 min, update steps: 500k
• setting2: peak learning rate: 1e-4, batch size: 33.3 min (same as ScaledAdam), update steps: 80k
• setting3: peak learning rate: 1e-4, batch size: 3.3 min, update steps: 500k
For all settings, we use a linear scheduler that ramps the learning rate up to its peak over the first 8% of steps and decays it linearly afterwards (see the code sketch below). We use the common default values for the other hyperparameters, setting β1 = 0.9, β2 = 0.999, and weight decay = 0.01. All experiments are done on 4 A40 GPUs. Results are shown in Table 7.[8] We see that ScaledAdam achieves better performance on all metrics while using less compute. However, we note that due to limited computational resources we could not perform an exhaustive hyperparameter search for AdamW, and we therefore do not over-generalize our findings here.

A.2 Breakdown of side-by-side human preference comparison
The comparison breakdown between VOICECRAFT and FluentSpeech is shown in Figure 4. We see that VOICECRAFT outperforms FluentSpeech across the board, especially for substitution edits and when the edit span is long.

A.3 Spectrogram Comparison
Spectrogram-level comparisons between FluentSpeech and VOICECRAFT are shown in Figures 5, 6, and 7, with the edited part marked by a dark green rectangle. The three examples have increasing difficulty in terms of accents and recording conditions; in particular, the example in Figure 7 appears to come from a low-bandwidth transmission. In all three examples, we see that VOICECRAFT is able to generate more detailed frequency patterns. The corresponding audio can be found on the demo page.

[8] We early stopped AdamW setting 2 at step 57k to save compute, as it had already taken more time than the finished ScaledAdam run while performing worse.
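To make the AdamW baseline configuration in A.1 concrete, here is a minimal PyTorch sketch of setting 3 (peak learning rate 1e-4, linear ramp-up over the first 8% of 500k update steps, linear decay afterwards, β1 = 0.9, β2 = 0.999, weight decay 0.01). The placeholder model and function names are ours for illustration and are not taken from the VOICECRAFT codebase.

```python
# Minimal sketch of AdamW setting 3 with the linear warmup/decay schedule
# described in A.1. The tiny Linear module stands in for the real model.
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import LambdaLR

peak_lr = 1e-4
total_steps = 500_000
warmup_steps = int(0.08 * total_steps)  # ramp up over the first 8% of steps

model = torch.nn.Linear(256, 256)  # placeholder for the actual model
optimizer = AdamW(model.parameters(), lr=peak_lr,
                  betas=(0.9, 0.999), weight_decay=0.01)

def linear_warmup_decay(step: int) -> float:
    """Multiplier on peak_lr: linear ramp to 1.0, then linear decay to 0.0."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

scheduler = LambdaLR(optimizer, lr_lambda=linear_warmup_decay)

# In the training loop, call optimizer.step() and then scheduler.step()
# once per parameter update; the current learning rate can be inspected with:
print(optimizer.param_groups[0]["lr"])  # 0.0 at step 0, rising toward peak_lr
```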
Table 7: ScaledAdam consistently outperforms AdamW across all metrics, while taking 10% less time to train.
Optimizer | Setting | Training Time | WER | MCD | F0 | Energy
AdamW | lr=1e-5, bsz=13.3min, steps=500k | 262 hours | 16.45 | 8.91 | 196.15 | 5.94
AdamW | lr=1e-4, bsz=133.2min, steps=57k | 273 hours | 10.77 | 8.45 | 117.38 | 4.91
AdamW | lr=1e-4, bsz=13.3min, steps=500k | 262 hours | 7.58 | 8.32 | 82.73 | 3.70
ScaledAdam | lr=3e-2, bsz=133.2min, steps=50k | 237 hours | 7.30 | 8.13 | 73.41 | 3.19

Figure 4: Breakdown of side-by-side human preference on naturalness comparing VOICECRAFT and FluentSpeech on speech editing, grouped by edit type (left) and edit span length (right). [Stacked preference bars; categories: insertion, substitution, deletion (left) and 1-2 words, 3-6 words, 7-12 words, 2 spans (right); each bar split into VOICECRAFT / Neutral / FluentSpeech shares.]
Figure 5: Upper: FluentSpeech; lower: VOICECRAFT.
Figure 6: Upper: FluentSpeech; lower: VOICECRAFT.
Figure 7: Upper: FluentSpeech; lower: VOICECRAFT. Note that since the speech is recorded in very challenging conditions, the word alignment is not very accurate. We see that for FluentSpeech's result, since the entire mel-spectrogram is passed to HiFi-GAN for resynthesis, even the unedited speech contains high-frequency noise.

B Examples of the Speech Editing Dataset REALEDIT
Examples of REALEDIT are shown in Table 8.

C Implementation Details
The Encodec model. The Encodec model we use has a stride of 320 samples, which means the codec framerate is 50Hz for recordings with a sample rate of 16kHz. The base dimension is 64, doubling at each of the 5 convolutional layers in the encoder. Following Copet et al. (2023), we use the open-sourced audiocraft repo[9] for Encodec model training. The model is trained on 1-second speech segments sampled from Gigaspeech for a total of 160 epochs (320k steps) with a batch size of 240, using Adam (Kingma and Ba, 2014) with a base learning rate of 3e-4.

[9] The Encodec training documentation can be found in the audiocraft repository.

Eden Scheduler (Yao et al., 2024). The scheduler adjusts the learning rate αt at step t using the following formula:
\[ \alpha_t = \alpha_{\text{base}} \cdot \left( \frac{t^2 + \alpha_{\text{step}}^2}{\alpha_{\text{step}}^2} \right)^{-0.25} \cdot \left( \frac{e^2 + \alpha_{\text{epoch}}^2}{\alpha_{\text{epoch}}^2} \right)^{-0.25} \cdot \mathrm{linear}(\alpha_{\text{start}}, t_{\text{warmup}}, t) \]
where αbase is the base learning rate, t is the step index, e is the epoch index, and αstep and αepoch control the amount of data the model has seen before the learning rate is significantly reduced. linear(αstart, twarmup, t) linearly increases its output from αstart to 1 over twarmup steps and stays at 1 afterwards. In our experiments, we set αbase = 0.05, αstep = 3000, αepoch = 4, αstart = 0.5, and twarmup = 500. Since our dataset is quite large, we use pseudo-epochs instead of actual epochs, and 1 pseudo-epoch is set to be 3000 training steps. Note that the choice of these hyperparameters is inspired by Yao et al. (2024); Li (2023), and if computational resources permitted, a grid search might find better hyperparameter settings. A small numerical sketch of this schedule is given below.

Configuration in ablation studies. Configurations of the different models are shown in Table 9. Note that we use a base learning rate of 3e-2 for the 430M model instead of 5e-2 because the latter gave a NaN error.
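As a concreteness check on the formula above, the following is a minimal numerical sketch of the Eden schedule with the hyperparameters listed (αbase = 0.05, αstep = 3000, αepoch = 4, αstart = 0.5, twarmup = 500, 3000 steps per pseudo-epoch). It illustrates the formula only and is not the icefall/VOICECRAFT implementation; the function and argument names are our own.

```python
# Minimal sketch of the Eden learning-rate schedule defined above.
def eden_lr(step: int,
            base_lr: float = 0.05,        # alpha_base
            lr_steps: float = 3000.0,     # alpha_step
            lr_epochs: float = 4.0,       # alpha_epoch
            warmup_start: float = 0.5,    # alpha_start
            warmup_steps: int = 500,      # t_warmup
            steps_per_pseudo_epoch: int = 3000) -> float:
    epoch = step / steps_per_pseudo_epoch  # pseudo-epoch index e
    step_factor = ((step ** 2 + lr_steps ** 2) / lr_steps ** 2) ** -0.25
    epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    # linear(alpha_start, t_warmup, t): ramps from alpha_start to 1, then stays at 1
    warmup = min(1.0, warmup_start + (1.0 - warmup_start) * step / warmup_steps)
    return base_lr * step_factor * epoch_factor * warmup

if __name__ == "__main__":
    for t in (0, 500, 3_000, 50_000):
        print(t, round(eden_lr(t), 5))  # learning rate decays smoothly after warmup
```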
Task and Data for ablation studies. The evaluation task is masked reconstruction, where for each utterance, we randomly select a span of length 1 to 15 words to mask, and ask VOICECRAFT to reconstruct the masked speech based on the transcript and unmasked speech. We use a 1000-utterance random subset of the Gigaspeech validation set, which contains YouTube videos and podcast data. We ensure that each utterance in the subset has a WER lower than 15% when decoded by Whisper medium.en (Radford et al., 2022).

Metrics for ablation studies. Since ground truth is available for masked reconstruction evaluation, in addition to WER (measured from Whisper medium.en's output), we also measure the mel-cepstral distortion (MCD) (Kubichek, 1993), F0 distance (F0), and energy distance (Energy). WER and MCD correlate better with the intelligibility of the speech, while F0 and Energy correlate better with the prosody similarity between the generated and ground-truth speech. MCD measures the difference of Mel Frequency Cepstrum Coefficients (MFCC) between the generated and ground-truth speech, defined as
\[ \text{MCD} = \frac{10}{\ln 10} \sqrt{\,2 \sum_{i=1}^{L} \left( m_i^{g} - m_i^{r} \right)^2 } \]
where L is the order of the MFCC, which we set to 13, m_i^g is the i-th MFCC of the ground-truth recording, and m_i^r is the i-th MFCC of the generated recording. We use the pymcd package[10] for calculating MCD. For F0 estimation, we use the pYIN algorithm (Mauch and Dixon, 2014) implemented in librosa (McFee et al., 2015) with a minimum frequency of 80Hz and a maximum frequency of 600Hz. For energy calculation, we use the root mean square of the magnitude spectrogram, which is extracted using a short-time Fourier transform with a window length of 640 and a hop size of 160. Note that since the generated speech might have a different length than the ground truth, dynamic time warping is first applied to time-align the extracted MFCC/F0/energy before calculating their Euclidean distances. For each model in the ablation study, we use 3 different random seeds and report the averaged results.

[10] https://github.com/chenqi008/pymcd

Scaling FluentSpeech. The original FluentSpeech (Jiang et al., 2023b) is trained on LibriTTS, and we made our best effort to scale it up for a fair comparison. Taking guidance from the authors of FluentSpeech, we scale the batch size

Table 8: Examples of the speech editing dataset REALEDIT. Edit Types Original Edited substitution, substitution See why it's extremely valuable to it's kind of like it's kind of like having a wall hack to watch a demo. See why it's extremely important right? it's kind of like having a rough time to watch a demo. deletion I wrote the title of the course many years ago, ah, when I created this course. I wrote the title when I created this course. insertion Fast cars, that had the nice clothes, that had the money, they was criminals. Fast cars, that had the nice clothes, that had expensive gold watches, that had the money, they was criminals. substitution When the CEO of blockbuster heard that, he promptly had a kitchen sink delivered to the netflix office, a fairly creative way of declaring war. When the CEO of blockbuster heard that, he promptly had five hundred pounds of glitter divided into five thousand manilla envelopes delivered to the netflix office, a fairly creative way of declaring war. substitution So if you've been following my story, you will remember that I said earlier in this podcast that the Grammy nominations came out. So if you've been following my story, you will remember that I said earlier that this week we had super exciting stuff to talk about because Grammy nominations came out. insertion No to the chemical pollution, air pollution, and the destruction of the environment caused by factories and the manufacturing industry.
No to the chemical pollution, air pollution, no to the killing of plants and wildlife and the destruction of the environment caused by factories and the manufacturing industry. substitution, substitution because we can include so many other characters if we just expand the definitions to any sword wielder, who’s a little spicy. because we can include so many other participants if we are brave enough to expand the definitions to any blade wielder, who’s a little spicy. insertion So for more craziness now that French was conquered we have to join forces to Great Britain. So for more craziness now that French was conquered by the Germans, we have to join forces to Great Britain. substitution economic development remains one of the most effective ways to increase the capacity to adapt to climate change. economic development remains one of the most promising options that we have left on the table to increase the capacity to adapt to climate change. insertion And we’re at this point. And we’re all extremely excited at this point. insertion Steve also co-founded pixar animation studios. Which has revolutionized the film industry in it’s short history with brilliant use of technology. Steve also co-founded pixar animation studios. Which has revolutionized the film industry in it’s short history with films like toy story that showcase brilliant use of technology. substitution, deletion this is just so cozy up here, and having that skylight is just lovely isn’t it. this is just so cozy and warm here, isn’t it. substitution It was a glance of inquiry, ending in a look of chagrin, with some muttered phrases that rendered it more emphatic. It was a look of disgust followed by a curled lip, with some muttered phrases that rendered it more emphatic. substitution More of a base and infrastructure to tell those stories rather than doing it out of a out of a tent with solar power. More of a base and infrastructure to fight these battles instead of out of a tent with solar power. 17 12458 Params codebook dim Trm hidden dim FFN dim Trm layers Base LR Update Steps 830M 2048 2048 8192 16 5e-2 50k 430M 2048 2048 8192 8 3e-2 50k 120M 1024 1024 4196 8 5e-2 50k Table 9: Hyperparameters settings for the different model sizes. Trm stands for Transformer. from 16 utterances to 256 utterances. Diffusion base hidden dimension from 320 to 1024, residual layers from 20 layers to 30 layers, residual channels from 256 to 512. The final model contains 330M parameters, which is roughly the same as the Voicebox model (Le et al., 2023). The model was trained on Gigaspeech training set on 1 A40 GPU for 626k steps which took 10 days. The HiFi-GAN vocoder is also retrained on Gigaspeech training set for 400k steps using hyperparameters used on Voicebox (Le et al., 2023) (they also use Hifi-GAN as vocoder to decode to 16kHz speech) Baselines for zero-shot TTS. For zero-shot TTS, we compare our VOICECRAFT with VALLE (Wang et al., 2023a), XTTS v2 (COQUI, 2023), YourTTS (Casanova et al., 2021), and FluentSpeech. Since the original VALL-E is not open-sourced, we use the code from the popular open-source implementation by Li (2023), and also trained the model on Gigaspeech. Both the AR and NAR model are trained for 50k steps using the ScaledAdam optimizer and Eden scheduler, same as our VOICECRAFT. 
The commercial model XTTS v2 is composed of three modules: a VQ-VAE (van den Oord et al., 2017) for speech tokenization, a GPT-2 (Radford et al., 2019) model for speech token modeling, and a customized HiFi-GAN (Kong et al., 2020) model for token-to-waveform generation. XTTS v2 is trained on a mixture of publicly available data and web-crawled data, but the exact data sources are unknown. YourTTS is a zero-shot TTS model built upon the adversarial VAE model VITS (Kim et al., 2021), with novel zero-shot multi-speaker and multilingual training. The model is trained on VCTK, LibriTTS, and also French and Portuguese corpora. The FluentSpeech model we used for TTS is the same as in speech editing, as the model can be configured to do zero-shot TTS similar to Voicebox (Le et al., 2023).

Licenses of the speech corpora. LibriTTS: CC BY 4.0; Gigaspeech: Apache-2.0; Spotify Podcast dataset: CC BY 4.0.

D The Conditional Independence Assumption
To better explain the rationale behind the conditional independence assumption in equation 3, we go back to the sequence Y produced by causal masking. The assumption we are making for equation 3 to hold is equivalent to assuming that, given W and H_{s,t}, Y_{s,t,k} is independent of I^{(1)}_{s,t,k} and I^{(2)}_{s,t,k}, defined as
\[ I^{(1)}_{s,t,k} \triangleq (Y_{s,t+k-1,1},\; Y_{s,t+k-2,2},\; \cdots,\; Y_{s,t+1,k-1}) \]
\[ I^{(2)}_{s,t,k} \triangleq (Y_{s,t-1,k+1},\; Y_{s,t-2,k+2},\; \cdots,\; Y_{s,t-K+k,K}) \]
We argue that this assumption is mild because: 1) I^{(1)}_{s,t,k} are tokens from timesteps after t and therefore should have less impact on the distribution of Y_{s,t,k} given the past tokens H_{s,t} (H_{s,t} might also contain future tokens in physical time if Z_{s,t} is in the masked spans); 2) although I^{(2)}_{s,t,k} are tokens from timesteps before t, they come from codebooks later than codebook k in the residual quantization chain, meaning that they model the residual left by codebook k (at the corresponding timesteps). Given that {Y_{s,t-1,k}, Y_{s,t-2,k+1}, ..., Y_{s,t-K+k,K-1}} ⊂ H_{s,t},[11] the "fitted parts" are already given, and therefore the "unfitted parts" (i.e., the residual) should have a minor impact on the distribution of Y_{s,t,k}. Empirically, MusicGen shows that a codec language model trained with the Delay Pattern enjoys the efficiency of the naive parallel pattern, while achieving similar modeling performance to a completely flattened sequence.

[11] A weaker condition holds for the first K tokens in unmasked spans (which account for at most 0.08s of speech for our models), but we omit the discussion here for simplicity.

E Instructions for human listening test
Screenshots of the instructions for the human listening tests we ran on Amazon Mechanical Turk are shown in Figure 8 (speech editing - intelligibility), Figure 9 (speech editing - naturalness), Figure 10 (speech editing - side-by-side comparison), Figure 11 (zero-shot TTS - intelligibility), Figure 12 (zero-shot TTS - speaker similarity), and Figure 13 (zero-shot TTS - naturalness). For the speech editing evaluation, 64 Turkers participated and we paid 474.3 USD in total; for the zero-shot TTS evaluation, 59 Turkers participated and we paid 457.6 USD. We only allow Turkers who are residents of the U.S. to do the tasks, with the goal of increasing the probability that Turkers are native English speakers. We acknowledge that this is not a perfect approach and might introduce bias in judgement, but since Amazon Mechanical Turk doesn't allow selection on native language, this is the best approach we could think of as a proxy for constraining the native language.
Figure 8: Instruction for speech editing - intelligibility preference. Each task contains 5 recordings. Since the first paragraph is also presented in all other tasks in the instruction page, we only show it in this screenshot.
Figure 9: Instruction for speech editing - naturalness preference. Each task contains 5 recordings.
Figure 10: Instruction for speech editing, side-by-side naturalness preference. Each task contains 3 pairs of recordings.
Figure 11: Instruction for zero-shot TTS, intelligibility preference. Each task contains 5 recordings.
Figure 12: Instruction for zero-shot TTS, speaker similarity preference. Each task contains 3 pairs.
Figure 13: Instruction for zero-shot TTS, naturalness preference. Each task contains 5 recordings.
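For readers who want to reproduce the ablation metrics described in Appendix C, the following is a minimal sketch of the MCD, F0, and energy distances after DTW alignment. The evaluation above uses the pymcd package for MCD; the sketch instead computes the Appendix C formula directly from librosa MFCCs, so exact numbers may differ. The file paths and the helper function are illustrative assumptions, not part of the VOICECRAFT evaluation code.

```python
# Sketch of the MCD / F0 / energy distances from Appendix C using librosa.
import numpy as np
import librosa

def dtw_distance(feat_ref: np.ndarray, feat_gen: np.ndarray) -> float:
    """Mean Euclidean distance between DTW-aligned feature frames (shape: dim x T)."""
    _, path = librosa.sequence.dtw(X=feat_ref, Y=feat_gen, metric="euclidean")
    diffs = feat_ref[:, path[:, 0]] - feat_gen[:, path[:, 1]]
    return float(np.mean(np.linalg.norm(diffs, axis=0)))

ref, sr = librosa.load("ground_truth.wav", sr=16000)  # placeholder paths
gen, _ = librosa.load("generated.wav", sr=16000)

# MCD: (10 / ln 10) * sqrt(2 * sum_i (m_i^g - m_i^r)^2), averaged over aligned frames
mfcc_ref = librosa.feature.mfcc(y=ref, sr=sr, n_mfcc=13)
mfcc_gen = librosa.feature.mfcc(y=gen, sr=sr, n_mfcc=13)
mcd = (10.0 / np.log(10)) * np.sqrt(2.0) * dtw_distance(mfcc_ref, mfcc_gen)

# F0 distance: pYIN with fmin=80Hz, fmax=600Hz (unvoiced frames set to 0 here)
f0_ref, *_ = librosa.pyin(ref, fmin=80, fmax=600, sr=sr)
f0_gen, *_ = librosa.pyin(gen, fmin=80, fmax=600, sr=sr)
f0_dist = dtw_distance(np.nan_to_num(f0_ref)[None, :], np.nan_to_num(f0_gen)[None, :])

# Energy distance: RMS of the magnitude spectrogram (win_length=640, hop=160)
def energy(y: np.ndarray) -> np.ndarray:
    spec = np.abs(librosa.stft(y, n_fft=1024, win_length=640, hop_length=160))
    return np.sqrt(np.mean(spec ** 2, axis=0))[None, :]

energy_dist = dtw_distance(energy(ref), energy(gen))
print(mcd, f0_dist, energy_dist)
```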
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12463–12492 August 11-16, 2024 ©2024 Association for Computational Linguistics RAID: A Shared Benchmark for Robust Evaluation of Machine-Generated Text Detectors Liam Dugan1, Alyssa Hwang1, Filip Trhlik2, Josh Magnus Ludan1 Andrew Zhu1, Hainiu Xu3, Daphne Ippolito4, Chris Callison-Burch1 University of Pennsylvania1 University College London2 King’s College London3 Carnegie Mellon University4 {ldugan, ahwang16, jludan, andrz, ccb}@seas.upenn.edu [email protected], [email protected], [email protected] Abstract Many commercial and open-source models claim to detect machine-generated text with extremely high accuracy (99% or more). However, very few of these detectors are evaluated on shared benchmark datasets and even when they are, the datasets used for evaluation are insufficiently challenging—lacking variations in sampling strategy, adversarial attacks, and open-source generative models. In this work we present RAID: the largest and most challenging benchmark dataset for machinegenerated text detection. RAID includes over 6 million generations spanning 11 models, 8 domains, 11 adversarial attacks and 4 decoding strategies. Using RAID, we evaluate the out-ofdomain and adversarial robustness of 8 openand 4 closed-source detectors and find that current detectors are easily fooled by adversarial attacks, variations in sampling strategies, repetition penalties, and unseen generative models. We release our data1 along with a leaderboard2 to encourage future research. 1 Introduction Large Language Models (LLMs) have been able to fool humans into thinking their outputs are humanwritten for roughly four years (Dugan et al., 2020; Clark et al., 2021). In that short span of time we have seen LLM-generated text be used for targeted phishing attacks (Baki et al., 2017; Hazell, 2023), mass spam and harassment (Weiss, 2019), disinformation campaigns (Sharevski et al., 2023; Spitale et al., 2023), and spurious scientific publication (Lund et al., 2023). In order to document and eventually mitigate such harms, we must develop robust automatic detectors of machine-generated text. Many exciting and inventive methods have been proposed in recent years for detecting generated text (Crothers et al., 2023). However, when evaluating these methods, authors typically generate 1https://github.com/liamdugan/raid 2https://raid-bench.xyz/leaderboard LLaMA (default) https://www.detect-ai.com DETECT AI: NOT AI-GENERATED https://www.detect-ai.com DETECT AI: AI-GENERATED LLaMA +sampling +penalty Figure 1: Detectors for machine-generated text are often highly performant on default model settings but fail to detect more unusual settings such as using random sampling with a repetition penalty. their own evaluation datasets and fail to test their models on shared resources—making it difficult to verify claims of accuracy and robustness. This has led to an erosion of trust in the efficacy of automatic detection methods and a generally fatalistic sentiment towards detection among researchers and practitioners (Sadasivan et al., 2023). To combat this trend, in this work, we introduce the Robust AI Detection (RAID) benchmark. RAID is the largest and most challenging benchmark of generated text ever released, consisting of 6M+ generations spanning 11 generators, 8 domains, 11 adversarial attacks, and 4 decoding strategies. Using RAID, we benchmark 12 detectors (8 open- and 4 closed-source). 
We find that detectors have difficulty generalizing to unseen models and domains and that simple changes such as changing the sampling strategy, adding a repetition penalty, and adversarially modifying text lead to marked decreases in performance. 2 Related Work In Table 1 we show a comparison between RAID and other publicly available sources of generated 12463 Domain Model Sampling Multilingual Adversarial Name Size coverage? coverage? coverage? coverage? coverage? TuringBench (Uchendu et al., 2021) 200k RuATD (Shamardina et al., 2022) 215k HC3 (Guo et al., 2023) 26.9k MGTBench (He et al., 2023) 2817 MULTITuDE (Macko et al., 2023) 74.1k AuText2023 (Sarvazyan et al., 2023b) 160k M4 (Wang et al., 2023b) 122k CCD (Wang et al., 2023a) 467k IMDGSP (Mosca et al., 2023) 29k HC-Var (Xu et al., 2023) 145k HC3 Plus (Su et al., 2024) 210k MAGE (Li et al., 2024) 447k RAID (Ours) 6.2M Table 1: A comparison of the publicly available sources of generated text. Our provided dataset is the only one that contains a diverse selection of domains, sampling strategies, and adversarial attacks across recent generative models. text. Among these, the most similar work to ours is Li et al. (2024), who create a dataset of 447k generations from 7 language model families across 10 domains to study detector robustness. Other resources typically focus on particular sub-areas such as multilingual text (Macko et al., 2023; Wang et al., 2023b), code (Wang et al., 2023a), questionanswering (Guo et al., 2023; Xu et al., 2023; Su et al., 2024), and scientific papers (Mosca et al., 2023). Additionally, shared tasks such as AuTextTification (Sarvazyan et al., 2023b) and RuATD (Shamardina et al., 2022) have provided datasets and encouraged centralized evaluation and competition. While many shared resources do well at covering multiple generative models and domains, few include adversarial attacks and none include variation in decoding strategy—frequently even failing to list the strategy used. These datasets are insufficiently challenging and promote the inflated reports of detector accuracy. Another way to evaluate robustness is through a small-scale comparative study. In these studies, one aspect of the generated text is varied and detector accuracy is compared across the variations. Such studies have shown that detectors lack robustness to unseen generative models (Stiff and Johansson, 2022; Pu et al., 2023b; Chakraborty et al., 2023), domains (Pagnoni et al., 2022; Pu et al., 2023a; Rodriguez et al., 2022), decoding strategies (Ippolito et al., 2020; Solaiman et al., 2019), prompts (Koike et al., 2023; Kumarage et al., 2023; Lu et al., 2023), repetition penalties (Fishchuk and Braun, 2023), and human edits (Gao et al., 2024). Similar work specializing in adversarial robustness has shown that detectors are vulnerable to homoglyph attacks (Gagiano et al., 2021; Wolff, 2020; Macko et al., 2024), whitespace insertion (Cai and Cui, 2023), sentiment and factual alterations (Bhat and Parthasarathy, 2020), paraphrase attacks (Krishna et al., 2023; Sadasivan et al., 2023), and synonym replacement (Kulkarni et al., 2023; Pu et al., 2023a). Our work builds on this foundation and synthesizes many elements of the robustness literature into one singular systematic benchmark study. 3 Dataset Creation 3.1 Overview In Figure 2, we illustrate the components of the RAID dataset. To create RAID, we first sample roughly 2,000 documents of human-written text from each of our 8 target domains (§3.2). 
For each document, we create a corresponding generation prompt using a template such as “Write a recipe for {title}” (§3.3). We then generate one output for each of our 11 models (§3.4), 4 decoding strategies (§3.5), and 11 adversarial attacks (§3.6). The RAID dataset consists of over 6M generations, the largest dataset of generated text to date. 3.2 Domains Since different domains have been shown to induce LLMs to make diverse errors (Dugan et al., 2023), we prioritized domains that were both at high risk for abuse and were diverse and challenging. Our sources require factual knowledge (News, Wikipedia), generalization and reasoning (Abstracts, Recipes), creative and conversational skills (Reddit, Poetry), and knowledge of specific media (Books, Reviews). To avoid contamination, most of our human-written documents are taken 12464 Detectors Decoding Strategy RoBERTa-L (GPT-2) RoBERTa-B (ChatGPT) RADAR RoBERTa-B (GPT-2) Neural GLTR Fast DetectGPT Binoculars LLMDet GPTZero Originality Winston ZeroGPT Adversarial Attacks Alternative Spelling Article Deletion Insert Paragraphs Upper Lower Swap Zero-Width Space Whitespace Addition Repetition Penalty (rep = 1.2) (rep = 1.0) (temp. = 0) Greedy (temp. = 1, p = 1) Sampling Metric-Based Commercial Models MPT-30B Mistral-7B LLaMA 2 70B (Chat) Cohere GPT-2 XL ChatGPT GPT-3 GPT-4 MPT-30B (Chat) Mistral-7B (Chat) Cohere (Chat) Domains Abstracts Books News Poetry Recipes Reddit Reviews Wikipedia Homoglyph Number Swap Paraphrase Synonym Swap Misspelling With Without 11 models 8 domains 12 detectors 11 attacks Figure 2: An overview of the structure of the RAID dataset. We generate 2,000 continuations for every combination of domain, model, decoding, penalty, and adversarial attack. This results in roughly 6.2 million generations for testing. We then evaluate each detector on all pieces of generated text in the dataset. from publicly available pre-2022 datasets (see Appendix E.1). 3.3 Prompts We prompt our generators in a zero-shot fashion using “Chat” templates for models fine-tuned on dialogue and “Non-Chat” templates for continuation models. Each prompt is nearly the same, with the exception of a “{title}” field that is dynamically replaced with the title of the corresponding human-written text (see Table 2). Unlike previous work (Verma et al., 2023; Xu et al., 2023), we intentionally avoid biasing the language model towards a particular length or generation style to better match our expectations of real-world scenarios. We engineered our prompts over multiple rounds to minimize degenerate repetition, unhelpful generation, and meta-commentary across all models (see Appendix E.3). 3.4 Models We carefully chose a set of models that were maximally distinct from each other, offering us the widest range and variability of generated text. We focused on varying model sizes, open/closed source, and chat/completion style. Following work by Sarvazyan et al. (2023a), we select largest model from each model family and exclude third-party fine-tuned variants of base models in favor of the base models themselves. In total, we used four GPT models (GPT2, GPT3, GPT4, ChatGPT), three open-source models and their chat variants (MisChat Write the abstract for the academic paper titled "{title}". Non-Chat The following is the full text of the abstract for a research paper titled "{title}" from arxiv.org: Table 2: The “Chat” and “Non-Chat” templates used for generation in the Abstracts domain. 
The “{title}” field is dynamically filled in with the title of the humanwritten document at generation time. tral 7B, MPT 30B, LLaMA 2 70B), and the Cohere command and chat model (see Appendix E.2). 3.5 Decoding Strategies The decoding strategy determines how tokens are selected from the language model’s probability distribution. Previous work has shown that greedy decoding (i.e. selecting the most likely token at each time step) reduces the diversity of text and makes it easier to detect while sampling directly from the language model output distribution shows the opposite effect (Ippolito et al., 2020). Based on these findings, we generate two outputs per prompt, one with greedy decoding and the other with fully random sampling. We also generate two additional outputs with Keskar et al. (2019)’s repetition penalty when available. This penalty works by down-weighting the probability of tokens that have previously appeared in the context window by some multiplicative factor θ, resulting in less repetitive output. We are the first to evaluate this penalty for detection at a 12465 large scale and show that it significantly reduces the detectability of outputs. Following Keskar et al. (2019), we use θ = 1.2 for our experiments.3 3.6 Adversarial Attacks When selecting adversarial attacks, we assume that our adversary has exactly one query and no knowledge of the detector. Thus, we include the following 11 black-box, query-free attacks as opposed to gradient-based methods: 1. Alternative Spelling: Use British spelling 2. Article Deletion: Delete (‘the’, ‘a’, ‘an’) 3. Add Paragraph: Put \n\n between sentences 4. Upper-Lower: Swap the case of words 5. Zero-Width Space: Insert the zero-width space U+200B every other character 6. Whitespace: Add spaces between characters 7. Homoglyph: Swap characters for alternatives that look similar, e.g. e →e (U+0435) 8. Number: Randomly shuffle digits of numbers 9. Misspelling: Insert common misspellings 10. Paraphrase: Paraphrase with the fine-tuned T5-11B model from Krishna et al. (2023) 11. Synonym: Swap tokens with highly similar BERT (Devlin et al., 2019) candidate tokens Following recommendations from Dyrmishi et al. (2023), we manually reviewed our data to ensure that adversarial attacks were inconspicuous. Thus, for each attack only a small percentage of the total available mutations were applied. For details on mutation percentages for each attack as well as other implementation details, see Appendix E.4. 3.7 Post-Processing After all generations were completed, we removed prompts and left only the generated output. We then filtered out failed generations and balanced the dataset such that each human-written document has exactly one corresponding generation per model, decoding strategy, and adversarial attack. 4 Dataset 4.1 Statistics The non-adversarial portion of the RAID dataset consists of 509,014 generations and 14,971 humanwritten documents for a total of 6,287,820 texts 3Only open-source models provide access to a repetition penalty. OpenAI and Cohere instead offer slightly different “frequency” and “presence” penalties (see Appendix E.5). Thus we vary repetition penalty only for open-source models. 0 50 100 0 10k 20k 30k Greedy 0 50 100 0 20k 40k 60k Greedy + Penalty 0 50 100 0 20k 40k Sampling 0 50 0 20k 40k 60k Sampling + Penalty Figure 3: Histogram of examples (y-axis) grouped by their repetitiveness measured via SelfBLEU score (xaxis). We see that both random sampling and repetition penalty greatly reduce repetitiveness for all models. Num. 
SelfPPL PPL Model Gens Toks BLEU -L7B -G2X Human 14971 378.5 7.64 9.09 21.2 GPT 2 59884 384.7 23.9 8.33 8.10 GPT 3 29942 185.6 13.6 3.90 8.12 ChatGPT 29942 329.4 10.3 3.39 9.31 GPT 4 29942 350.8 9.42 5.01 13.4 Cohere 29942 301.9 11.0 5.67 23.7 (+ Chat) 29942 239.0 11.0 4.93 11.6 Mistral 59884 370.2 19.1 7.74 17.9 (+ Chat) 59884 287.7 9.16 4.31 10.3 MPT 59884 379.2 22.1 14.0 66.9 (+ Chat) 59884 219.2 5.39 7.06 56.3 LLaMA 59884 404.4 10.6 3.33 9.76 Total 509k 323.4 13.7 6.61 23.8 Table 3: Statistics for the generations in the base dataset without adversarial attacks. PPL-L7B refers to mean perplexity according to LLaMA 7B and PPL-G2X refers to mean perplexity according to GPT 2 XL. when including adversarial attacks. On average, models are more repetitive than humans as measured by Self-BLEU (Zhu et al., 2018) and typically generate shorter passages (see Table 3). The mean perplexity is lower for models than humans, according to LLaMA 7B and GPT 2 XL. 4.2 Release Structure To accompany the RAID dataset we also release an official public leaderboard4 which will host the results from our analysis alongside other detector results submitted by public contributors. The leaderboard is split up into two sections—one for 4https://raid-bench.xyz/leaderboard 12466 those who self-report having trained on the RAID dataset and one for those who do not. This is important to ensure that a clear distinction is made between detectors that are generalizing to out-ofdomain data and those that are not. To ensure fair competition, 10% of the RAID dataset is released without labels for use as the official hidden test set. We provide scripts to easily run detectors on this test set and calculate accuracy with respect to the hidden labels. Users can then submit their outputs to the leaderboard via a pull request (see Appendix D). We hope that this infrastructure encourages more comparison and shared evaluation of detectors. 4.3 RAID-extra In addition to the over 6M+ generations in the core RAID train and test sets, we also release “RAIDextra”, an additional dataset consisting of 2.3M generations from three extra domains not included in the main benchmark: Python Code, Czech News, and German News. In Appendix A we report evaluation results from our 12 detectors on RAID-extra and show that metric-based detectors perform surprisingly well on these types of unusual domains. RAID-extra is the largest and most challenging dataset of generated code and multilingual text ever released. We hope it will be of value to the academic community. 5 Detectors 5.1 Detector Selection We evaluate detectors from three categories: neural, metric-based, and commercial. Neural detectors typically involve fine-tuning a pre-trained language model such as RoBERTa (Liu et al., 2019) while metric-based detectors typically compute some metric using the output probabilities of an existing generative model. In contrast, commercial detectors tend to provide some documentation of their performance but disallow direct access to the models. We tested the following: (i). Neural: RoBERTa-Base (GPT2), RoBERTaLarge (GPT2), RoBERTa-Base (ChatGPT), RADAR (ii). Metric-Based: GLTR, Binoculars, Fast DetectGPT, LLMDet (iii). 
Commercial: GPTZero, Originality, Winston, ZeroGPT τ=0.25 τ=0.5 τ=0.75 τ=0.95 R-B GPT2 8.71% 6.59% 5.18% 3.38% R-L GPT2 6.14% 2.91% 1.46% 0.25% R-B CGPT 21.6% 15.8% 15.1% 10.4% RADAR 7.48% 3.48% 2.17% 1.23% GLTR 100% 99.3% 21.0% 0.05% F-DetectGPT 47.3% 23.2% 13.1% 1.70% LLMDet 97.9% 96.0% 92.0% 75.3% Binoculars 0.07% 0.00% 0.00% 0.00% GPTZero 0.03% 0.00% 0.00% 0.00% Originality 0.47% 0.25% 0.17% 0.07% Winston 0.75% 0.55% 0.38% 0.21% ZeroGPT 1.71% 1.42% 1.21% 0.90% Table 4: False Positive Rates for detectors on RAID at naive choices of threshold (τ). We see that, for opensource detectors, thresholding naively results in unacceptably high false positive rates. Unlike Li et al. (2024), we do not train our own neural models on our dataset because we wish to investigate the generalization ability of off-the-shelf detectors. For the metric-based detectors, we chose to use the default generative model in each repository to emulate the most realistic use-case.5 5.2 Detector Evaluation Detectors work by taking in a sequence of tokens and outputting a scalar score. In order to convert this score to a binary prediction, we must select a scalar threshold τ such that if the score s ≥τ the sequence is predicted to be machine-generated. In our work, we select a threshold for each model such that the resulting false positive rate of the detector is 5%. In practical terms, accuracy at a fixed FPR of 5% represents how well each detector identifies machine-generated text while only misclassifying 5% of human-written text. Our work is one of the first shared resources to fix and disclose FPR, following the rise of this evaluation paradigm in recent robustness research (Hans et al., 2024; Krishna et al., 2023; Soto et al., 2024).6 6 Findings Finding 1: Default False Positive Rates (FPRs) of open-source detectors are dangerously high When applying a detector to a piece of text, it is important to decide on a threshold (τ) to use for binary classification. The principled way to determine this is to use a set of in-domain data to 5See Appendix F for more details on each detector tested 6See Appendix B for a discussion of why we chose this paradigm instead of the traditional precision/recall/F1 score 12467 0.1% 1% 10% 100% 0.0 0.2 0.4 0.6 0.8 1.0 Originality Binoculars F-DetectGPT RADAR Winston GPTZero ZeroGPT GLTR R-B (GPT2) R-L (GPT2) R-B (C-GPT) LLMDet Figure 4: Detection accuracy (y-axis) vs. False Positive Rate (x-axis) for all detectors. We see that Binoculars works significantly better than other detectors at low FPR and that few detectors can operate at FPR<1%. calibrate the threshold to a specific point along the receiver-operating curve. However, this is cumbersome and requires a set of readily accessible human-written data. Thus, in practice it is common to simply use some seemingly sensible default value (such as 0.5) for the threshold without further investigation. In Table 4 we report the false positive rates of our detectors for various commonly chosen thresholds. We see that open-source detectors, especially metric-based detectors, exhibit dangerously high false positive rates when using these naive thresholds. On the contrary, closed-source detectors seem to be calibrated fairly well, with none having an FPR above 1.7%. Given this result, we advise practitioners to take care not to use naive-yet-sensible values for their detectors and instead calibrate detectors on in-domain data before using them. 
Following this advice, for the remainder of the Findings section, we will be exclusively using thresholds that were calibrated to a FPR of 5% on the human-written portion of the RAID dataset (see Appendix C). Finding 2: Detector accuracy varies substantially depending on target False Positive Rate In Figure 4 we report the results of an experiment where we varied the classification threshold and plotted detector accuracy vs. false positive rate. We found that our detectors were capable of achieving the high accuracies cited in many viral reports, but only at similarly high FPR. Some detectors failed to achieve the lowest FPR we tested, plateauCommercial Neural Metric-based 0.0 0.2 0.4 0.6 0.8 1.0 90 84 89 66 63 70 69 51 58 44 34 20 Greedy Sampling Greedy + Penalty Sampling + Penalty Figure 5: Accuracy at FPR=5% (y-axis) vs. decoding strategy across our three detector classes on nonadversarial text. We see that repetition penalty greatly reduces accuracy in both greedy decoding and sampling. ing at 16.9% (ZeroGPT), 0.88% (FastDetectGPT), and 0.62% (Originality). Most detectors dropped steeply as FPR decreased from 100%, but Binoculars (Hans et al., 2024) was particularly strong at low FPR. Since detector accuracy varies so much with FPR, explicitly calibrating and reporting FPR is crucial for comparable, informative, and reproducible detection studies. Finding 3: Repetition penalty drastically hurts accuracy for all detectors As shown in Figure 5, we observe a consistent pattern across detectors, detector categories, generators, and domains: adding a repetition penalty decreases accuracy by up to 32 points regardless of decoding strategy. The decoding strategy matters as well. Detectors in each category, across domains and generators, perform substantially better on greedy decoding than random sampling, even when taking repetition penalty into account. This pattern is especially concerning because past studies have largely overlooked variations in decoding strategy when evaluating detectors. Furthermore, none have reported results on repetition penalty before our study. Since sampling with repetition penalty results in text that often sounds more human-like (Keskar et al., 2019), exposing these patterns is critical for reliable detection. There are many possible penalties and decoding strategies one could use when generating text such as contrastive decoding (Li et al., 2023), eta and epsilon decoding (Hewitt et al., 2022), and typical sampling (Meister et al., 2023)—all of which are likely to reduce detector accuracy. We highly encourage robustness studies and open evaluations to investigate how well detectors can generalize to 12468 GPT2 ChatGPT GPT4 Mistral Books News Reddit Reviews Wiki 0.987 0.588 0.287 0.548 0.996 0.694 0.415 0.640 0.992 0.437 0.252 0.477 0.976 0.612 0.387 0.462 0.959 0.695 0.332 0.373 RoBERTa (GPT2) GPT2 ChatGPT GPT4 Mistral 0.405 1.000 1.000 0.265 0.280 1.000 1.000 0.190 0.258 0.975 0.840 0.148 0.455 0.995 1.000 0.320 0.422 0.995 0.975 0.270 GPTZero GPT2 ChatGPT GPT4 Mistral 0.768 0.992 0.965 0.689 0.810 0.999 0.999 0.663 0.491 0.969 0.792 0.467 0.222 0.004 0.007 0.118 0.706 0.999 0.963 0.613 RADAR Figure 6: Heatmap measuring the accuracy of the RoBERTa-Large GPT2, GPTZero, and RADAR detectors across models and domains. We see a clear bias towards domains and models that the detectors have trained on. 
Open-Source Closed-Source Chat Models Non-Chat Models Chat Models Non-Chat Models (llama-c, mistral-c, mpt-c) (mistral, mpt, gpt2) (c-gpt, gpt4, cohere) (cohere, gpt3) Dec. Strategy greedy sampling greedy sampling greedy sampling greedy sampling Rep. Penalty? ✗ ✓ ✗ ✓ ✗ ✓ ✗ ✓ ✗ ✗ ✗ ✗ R-B GPT2 84.1 52.3 77.9 26.2 98.6 44.1 60.5 35.4 70.9 41.7 65.1 52.5 R-L GPT2 79.7 41.1 71.4 19.5 98.5 43.0 67.2 53.4 61.4 34.7 61.1 48.6 R-B CGPT 80.2 63.3 75.0 39.3 53.3 26.4 14.9 1.7 59.1 38.1 46.5 39.0 RADAR 88.8 77.4 85.6 66.4 91.8 63.8 48.3 31.8 81.6 75.3 72.2 67.7 GLTR 89.8 67.5 83.9 38.3 99.6 56.9 44.5 0.5 80.7 54.3 75.6 63.7 F-DetectGPT 98.6 74.5 96.2 40.5 97.8 56.1 79.7 0.6 96.0 74.1 93.8 86.3 LLMDet 55.5 30.2 47.5 16.5 74.8 27.0 38.4 3.7 35.8 18.5 40.0 32.9 Binoculars 99.9 86.6 99.7 60.6 99.9 62.3 72.4 0.6 99.2 92.1 99.0 95.0 GPTZero 98.8 93.7 98.4 82.5 74.7 34.6 9.4 4.8 92.3 88.5 60.6 53.4 Originality 98.6 86.3 97.7 72.5 99.9 64.1 89.0 51.2 96.8 89.0 91.7 85.4 Winston 97.2 90.1 96.6 78.3 68.2 49.0 29.5 11.3 96.1 93.7 73.2 68.1 ZeroGPT(*) 95.4 80.7 90.5 54.9 85.1 57.2 16.0 0.3 92.1 65.8 83.4 72.7 Table 5: Accuracy Score at FPR=5% for all detectors across model groups and sampling strategies. Asterisks (*) indicate that the detector was unable to achieve the target FPR. We see that random sampling with a repetition penalty consistently makes output generations very difficult to detect, especially for open-source non-chat models. alternative generation settings, especially if generative models are already trained on such outputs. Finding 4: Seemingly strong, robust detectors can perform unexpectedly poorly The most accurate detectors—FastDetectGPT, Originality, Binoculars, etc.—may seem like reliable solutions to detection in general, but they sometimes deteriorate from perfect accuracy to complete failure (see Table 5). The changes in experimental settings were not particularly sophisticated either: simply changing the text generator, switching decoding strategies, or applying a repetition penalty was enough to introduce up to 95+% error rate. Our findings show that detectors tend not to generalize across different models or generation settings in the same domain. Compounded by the lack of evaluation at different false positive rates, domainspecific detectors for critical issues like fake news and education are particularly at risk of mislabeling human-written text as machine-generated without our awareness. Finding 5: Detectors perform better on domains and models seen during training. Figure 6 shows the performance of RoBERTa-Large GPT2, GPTZero, and RADAR on a cross-section of models and domains from RAID. We see that RoBERTa GPT2 achieves 95+% accuracy on five domains generated by GPT2, but it rarely achieves beyond 60% accuracy on text of the same domain from different models. This detector is opensource, so we know that it was trained exclusively on GPT2 in an open-domain setting. We observe similar trends with RADAR, as it performs uncharacteristically poorly when detecting movie reviews regardless of generative model. All detectors known to have constrained training data skew heavily towards test data with similar characteristics, leading us to believe that detectors perform better on domains and models seen during training. 
Some closed-source models such 12469 None Paraphrase Synonym Misspelling Homoglyph Whitespace Delete Articles R-L GPT2 56.7 72.9 (+16.2) 79.4 (+22.7) 39.5 (-17.2) 21.3 (-35.4) 40.1 (-16.6) 33.2 (-23.5) RADAR 70.9 67.3 (-3.6) 67.5 (-3.4) 69.5 (-1.4) 59.3 (-11.6) 66.1 (-4.8) 67.9 (-3.0) GLTR 62.6 47.2 (-15.4) 31.2 (-31.4) 59.8 (-2.8) 24.3 (-38.3) 45.8 (-16.8) 52.1 (-10.5) Binoculars 79.6 80.3 (+0.7) 43.5 (-36.1) 78.0 (-1.6) 37.7 (-41.9) 70.1 (-9.5) 74.3 (-5.3) GPTZero 66.5 64.0 (-2.5) 61.0 (-5.5) 65.1 (-1.4) 66.2 (-0.3) 66.2 (-0.3) 61.0 (-5.5) Originality 85.0 96.7 (+11.7) 96.5 (+11.5) 78.6 (-6.4) 9.3 (-75.7) 84.9 (-0.1) 71.4 (-13.6) Table 6: Accuracy Score at FPR=5% for select detectors across different adversarial attacks. Colors indicate an increase, slight increase, slight decrease, and decrease in performance. We see that not all adversarial attacks affect models equally—with some occasionally even improving performance of detectors instead of harming them. as GPTZero display similar behavior, allowing us to infer what data was used to train them. These findings demonstrate the need for multi-generator training corpora, especially since many publicly available neural detectors focus on only one or two generative models (Guo et al., 2023). Finding 6: Different detectors are vulnerable to different types of adversarial attacks In Table 6, we see that Binoculars and other metric-based methods degrade as much as 36.1% when a small portion of words are swapped with synonyms. All detectors were sensitive to homoglyph attacks except for GPTZero which sustained only a 0.3% loss under the homoglyph attack while five others dropped an average of 40.6%. Detectors like RADAR that underwent adversarial training, unsurprisingly, were much more robust to adversarial attacks. These detector-dependent differences in vulnerability suggest that attacking an arbitrary detector without prior knowledge of the detector type or training distribution will be difficult. Adversaries may respond by attempting to discover what the detector was trained on—which our findings have shown could be possible—or attacking detectors with repeated queries. In addition, we see that detector accuracy sometimes increases after an adversarial attack. RoBERTa GPT2, for example, improved after texts were paraphrased with T5 and after words were replaced with BERT-based synonyms. GPT2, RoBERTa, T5, and BERT are contemporaneous models trained on similar data, leading us to believe that detectors benefit from adversarial attacks that inadvertently modify text to be more similar to their training data. Our previous findings on the influence of training data on performance reinforce our hypothesis. 7 Conclusion As the generation capabilities of language models have continued to increase, accurately and automatically detecting machine-generated text has become an important priority. Detection efforts have even surpassed the bounds of natural language processing research, spurring discussions by social media companies and governments on possibly mandating labels for machine-generated content. Despite the protective intentions of these mandates, our work shows that such regulations would be difficult to enforce even if they were implemented. 
Detectors are not yet robust enough for widespread deployment or high-stakes use: many detectors we tested are nearly inoperable at low false positive rates, fail to generalize to alternative decoding strategies or repetition penalties, show clear bias towards certain models and domains, and quickly degrade with simple black-box adversarial attacks. The bulk of our findings may sound bleak, but we did uncover promising signs of improvement. Binoculars, for example, performed impressively well across models even at extremely low false positive rates, Originality achieved high precision in some constrained scenarios, and GPTZero was unusually robust to adversarial attacks. We believe that openly evaluating detectors on large, diverse, shared resources is critical to accelerating progress—and trust—in detection. Evaluating robustness is particularly important for detection, and it only increases in importance and the scale of public deployment grows. We also need to remember that detection is just one tool for a larger, even more valuable motivation: preventing harm by the mass distribution of text. Detecting machine-generated text was a useful proxy for identifying harmful text for a long time, but language models have improved to the point that generated text is frequently legitimate and not harmful (Schuster et al., 2020). Therefore, 12470 detecting specific harmful elements—like misinformation, hate speech, and abuse—should take precedence over whether or not the text was authored by a machine. Knowing if a text was machinegenerated, however, does still offer insights on the types of errors we can expect or the recency of the facts cited within. We hope that our analyses and the RAID dataset are a step toward a future in which AI detection tools are safely integrated into society as a multi-pronged approach to reducing harm. We encourage future work to build on this by including more models, languages, and generation settings in future shared resources. Limitations While we attempt to cover a wide variety of domains, models, decoding strategies and adversarial attacks in our dataset, we recognize that there can never be a truly comprehensive dataset for robustness. In particular, our dataset lacks the inclusion of multilingual text in many diverse domains. We release our RAID-extra data to help begin this process but we acknowledge the limited nature of this approach (only having multilingual text in the news domain). We encourage future work to expand on our foundation and use our tools to create truly robust shared benchmarks in many languages. Furthermore, as the state-of-the-art in language modeling continues to improve, datasets of generated text will naturally obsolesce and will need to be continually maintained with new generations. This creates issues with shared evaluations as detectors will need to be re-run on any new dataset items and any accuracy metrics will have to be updated. While we believe this dataset will continue to be useful for many years, we do acknowledge this limitation and plan to alleviate this by occasionally releasing new updated versions. Finally, and most importantly, the concept of a public benchmark for out-of-domain robustness is an inherently limited one. As practitioners seek to improve performance on our benchmark they will undoubtedly specialize to the particular aspects of robustness we cover. This will lead to overfitting, even if detectors are not explicitly trained on examples. 
Such overfitting will result in the reappearance of exactly the problems we wished to alleviate by creating this dataset, namely that detector accuracies are generally over-reported. We trust that this process will, to some extent, be alleviated by regular releases of new versions and keeping a set of hidden test data private. That being said, it does not nullify the utility of the dataset as a resource for profiling robustness of classifiers. Ethics Statement Detecting generated text is often accusatory in nature and can frequently result in disciplinary or punitive action taken against the accused party. This can cause significant harm even when detectors are correct, but especially when they are incorrect. This is especially problematic given recent work by Liang et al. (2023c) showing that detectors are biased against non-native English writers. Our results also support this and suggest that the problem of false positives remains unsolved. For this reason, we are opposed to the use of detectors in any sort of disciplinary or punitive context and it is our view that poorly calibrated detectors cause more harm than they solve. Therefore, until better evaluation standards are widely adopted in the detection community, the use of detectors in this fashion should be discouraged. We intend for our work to be the start of this conversation and look forward to a future where machinegenerated detectors are deployed in safe and responsible ways. Acknowledgements The authors would like to thank the members of the lab of Chris Callison-Burch for their testing and detailed feedback on the contents of this paper. In addition, we’d like to thank Professor Dan Roth for his early and enthusiastic support of the project. This research is supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via the HIATUS Program contract #2022-22072200005. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. References Aaditya Bhat. 2023. Gpt-wiki-intro. Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru, 12471 Mérouane Debbah, Étienne Goffinet, Daniel Hesslow, Julien Launay, Quentin Malartic, Daniele Mazzotta, Badreddine Noune, Baptiste Pannier, and Guilherme Penedo. 2023. The falcon series of open language models. Micael Arman. 2020. Poems dataset (nlp). https://www.kaggle.com/datasets/ michaelarman/poemsdataset. Shahryar Baki, Rakesh Verma, Arjun Mukherjee, and Omprakash Gnawali. 2017. Scaling and effectiveness of email masquerade attacks: Exploiting natural language generation. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, ASIA CCS ’17, page 469–482, New York, NY, USA. Association for Computing Machinery. David Bamman and Noah A. Smith. 2013. New alignment methods for discriminative book summarization. Guangsheng Bao, Yanbin Zhao, Zhiyang Teng, Linyi Yang, and Yue Zhang. 2023. Fast-detectgpt: Efficient zero-shot detection of machine-generated text via conditional probability curvature. Meghana Moorthy Bhat and Srinivasan Parthasarathy. 2020. How effectively can machines defend against machine-generated fake news? an empirical study. 
In Proceedings of the First Workshop on Insights from Negative Results in NLP, pages 48–53, Online. Association for Computational Linguistics. Michał Bie´n, Michał Gilski, Martyna Maciejewska, Wojciech Taisner, Dawid Wisniewski, and Agnieszka Lawrynowicz. 2020. RecipeNLG: A cooking recipes dataset for semi-structured text generation. In Proceedings of the 13th International Conference on Natural Language Generation, pages 22–28, Dublin, Ireland. Association for Computational Linguistics. Steven Bird and Edward Loper. 2004. NLTK: The natural language toolkit. In Proceedings of the ACL Interactive Poster and Demonstration Sessions, pages 214–217, Barcelona, Spain. Association for Computational Linguistics. Matyáš Boháˇcek, Michal Bravanský, Filip Trhlík, and Václav Moravec. 2022. Fine-grained czech news article dataset: An interdisciplinary approach to trustworthiness analysis. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. Shuyang Cai and Wanyun Cui. 2023. Evade chatgpt detectors via a single space. Megha Chakraborty, S.M Towhidul Islam Tonmoy, S M Mehedi Zaman, Shreya Gautam, Tanay Kumar, Krish Sharma, Niyar Barman, Chandan Gupta, Vinija Jain, Aman Chadha, Amit Sheth, and Amitava Das. 2023. Counter Turing test (CT2): AI-generated text detection is not as easy as you may think - introducing AI detectability index (ADI). In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 2206–2239, Singapore. Association for Computational Linguistics. Elizabeth Clark, Tal August, Sofia Serrano, Nikita Haduong, Suchin Gururangan, and Noah A. Smith. 2021. All that’s ‘human’ is not gold: Evaluating human evaluation of generated text. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7282–7296, Online. Association for Computational Linguistics. NLP Team Cohere. 2024. World-class ai, at your command. Accessed: 2024-02-02. Evan N. Crothers, Nathalie Japkowicz, and Herna L. Viktor. 2023. Machine-generated text: A comprehensive survey of threat models and detection methods. IEEE Access, 11:70977–71002. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Liam Dugan, Daphne Ippolito, Arun Kirubarajan, and Chris Callison-Burch. 2020. RoFT: A tool for evaluating human detection of machine-generated text. 
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 189–196, Online. Association for Computational Linguistics. Liam Dugan, Daphne Ippolito, Arun Kirubarajan, Sherry Shi, and Chris Callison-Burch. 2023. Real or fake text? investigating human ability to detect boundaries between human-written and machinegenerated text. In Proceedings of the ThirtySeventh AAAI Conference on Artificial Intelligence and Thirty-Fifth Conference on Innovative Applications of Artificial Intelligence and Thirteenth Symposium on Educational Advances in Artificial Intelligence, AAAI’23/IAAI’23/EAAI’23. AAAI Press. 12472 Salijona Dyrmishi, Salah Ghamizi, and Maxime Cordy. 2023. How do humans perceive adversarial text? a reality check on the validity and naturalness of word-based adversarial attacks. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8822–8836, Toronto, Canada. Association for Computational Linguistics. Vitalii Fishchuk and Daniel Braun. 2023. Efficient black-box adversarial attacks on neural text detectors. Rinaldo Gagiano, Maria Myung-Hee Kim, Xiuzhen Zhang, and Jennifer Biggs. 2021. Robustness analysis of grover for machine-generated news detection. In Proceedings of the The 19th Annual Workshop of the Australasian Language Technology Association, pages 119–127, Online. Australasian Language Technology Association. Chujie Gao, Dongping Chen, Qihui Zhang, Yue Huang, Yao Wan, and Lichao Sun. 2024. Llm-as-a-coauthor: The challenges of detecting llm-human mixcase. Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yanjun Qi. 2018. Black-box generation of adversarial text sequences to evade deep learning classifiers. Sebastian Gehrmann, Hendrik Strobelt, and Alexander Rush. 2019. GLTR: Statistical detection and visualization of generated text. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 111–116, Florence, Italy. Association for Computational Linguistics. Derek Greene and Pádraig Cunningham. 2006. Practical solutions to the problem of diagonal dominance in kernel document clustering. In Proc. 23rd International Conference on Machine learning (ICML’06), pages 377–384. ACM Press. Jesus Guerrero, Gongbo Liang, and Izzat Alsmadi. 2022. A mutation-based text generation for adversarial machine learning applications. Biyang Guo, Xin Zhang, Ziyuan Wang, Minqi Jiang, Jinran Nie, Yuxuan Ding, Jianwei Yue, and Yupeng Wu. 2023. How close is chatgpt to human experts? comparison corpus, evaluation, and detection. Abhimanyu Hans, Avi Schwarzschild, Valeriia Cherepanova, Hamid Kazemi, Aniruddha Saha, Micah Goldblum, Jonas Geiping, and Tom Goldstein. 2024. Spotting llms with binoculars: Zero-shot detection of machine-generated text. Julian Hazell. 2023. Large language models can be used to effectively scale spear phishing campaigns. arXiv preprint arXiv:2305.06972. Xinlei He, Xinyue Shen, Zeyuan Chen, Michael Backes, and Yang Zhang. 2023. Mgtbench: Benchmarking machine-generated text detection. John Hewitt, Christopher Manning, and Percy Liang. 2022. Truncation sampling as language model desmoothing. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 3414– 3427, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Xiaomeng Hu, Pin-Yu Chen, and Tsung-Yi Ho. 2023. Radar: Robust ai-text detection via adversarial learning. 
Advances in Neural Information Processing Systems. Daphne Ippolito, Daniel Duckworth, Chris CallisonBurch, and Douglas Eck. 2020. Automatic detection of generated text is easiest when humans are fooled. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1808–1822, Online. Association for Computational Linguistics. Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7b. Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, Caiming Xiong, and Richard Socher. 2019. Ctrl: A conditional transformer language model for controllable generation. Tibor Kiss and Jan Strunk. 2006. Unsupervised multilingual sentence boundary detection. Computational Linguistics, 32(4):485–525. Denis Kocetkov, Raymond Li, Loubna Ben Allal, Jia Li, Chenghao Mou, Carlos Muñoz Ferrandis, Yacine Jernite, Margaret Mitchell, Sean Hughes, Thomas Wolf, Dzmitry Bahdanau, Leandro von Werra, and Harm de Vries. 2022. The stack: 3 tb of permissively licensed source code. Preprint. Ryuto Koike, Masahiro Kaneko, and Naoaki Okazaki. 2023. How you prompt matters! even task-oriented constraints in instructions affect llm-generated text detection. Kalpesh Krishna, Yixiao Song, Marzena Karpinska, John Wieting, and Mohit Iyyer. 2023. Paraphrasing evades detectors of ai-generated text, but retrieval is an effective defense. Pranav Kulkarni, Ziqing Ji, Yan Xu, Marko Neskovic, and Kevin Nolan. 2023. Exploring semantic perturbations on grover. Tharindu Kumarage, Paras Sheth, Raha Moraffah, Joshua Garland, and Huan Liu. 2023. How reliable are ai-generated-text detectors? an assessment framework using evasive soft prompts. Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, and Nicholas Carlini. 2022. Deduplicating training 12473 data makes language models better. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8424–8445, Dublin, Ireland. Association for Computational Linguistics. Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, and Mike Lewis. 2023. Contrastive decoding: Open-ended text generation as optimization. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12286–12312, Toronto, Canada. Association for Computational Linguistics. Yafu Li, Qintong Li, Leyang Cui, Wei Bi, Zhilin Wang, Longyue Wang, Linyi Yang, Shuming Shi, and Yue Zhang. 2024. Mage: Machine-generated text detection in the wild. Gongbo Liang, Jesus Guerrero, and Izzat Alsmadi. 2023a. Mutation-based adversarial attacks on neural text detectors. Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher Ré, Diana Acosta-Navas, Drew A. 
Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, and Yuta Koreeda. 2023b. Holistic evaluation of language models. Weixin Liang, Mert Yuksekgonul, Yining Mao, Eric Wu, and James Zou. 2023c. Gpt detectors are biased against non-native english writers. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. Ning Lu, Shengcai Liu, Rui He, Qi Wang, Yew-Soon Ong, and Ke Tang. 2023. Large language models can be guided to evade ai-generated text detection. Brady D Lund, Ting Wang, Nishith Reddy Mannuru, Bing Nie, Somipam Shimray, and Ziang Wang. 2023. Chatgpt and a new academic reality: Artificial intelligence-written research papers and the ethics of the large language models in scholarly publishing. Journal of the Association for Information Science and Technology, 74(5):570–581. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics. Dominik Macko, Robert Moro, Adaku Uchendu, Jason Lucas, Michiharu Yamashita, Matúš Pikuliak, Ivan Srba, Thai Le, Dongwon Lee, Jakub Simko, and Maria Bielikova. 2023. MULTITuDE: Large-scale multilingual machine-generated text detection benchmark. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9960–9987, Singapore. Association for Computational Linguistics. Dominik Macko, Robert Moro, Adaku Uchendu, Ivan Srba, Jason Samuel Lucas, Michiharu Yamashita, Nafis Irtiza Tripto, Dongwon Lee, Jakub Simko, and Maria Bielikova. 2024. Authorship obfuscation in multilingual machine-generated text detection. Clara Meister, Tiago Pimentel, Gian Wiher, and Ryan Cotterell. 2023. Locally typical sampling. Transactions of the Association for Computational Linguistics, 11:102–121. Eric Mitchell, Yoonho Lee, Alexander Khazatsky, Christopher D. Manning, and Chelsea Finn. 2023. Detectgpt: zero-shot machine-generated text detection using probability curvature. In Proceedings of the 40th International Conference on Machine Learning, ICML’23. JMLR.org. NLP Team MosaicML. 2023. Introducing mpt-30b: Raising the bar for open-source foundation models. Accessed: 2023-06-22. Edoardo Mosca, Mohamed Hesham Ibrahim Abdalla, Paolo Basso, Margherita Musumeci, and Georg Groh. 2023. Distinguishing fact from fiction: A benchmark dataset for identifying machine-generated scientific papers in the LLM era. In Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023), pages 190–207, Toronto, Canada. Association for Computational Linguistics. OpenAI. 2022. ChatGPT: Optimizing Language Models for Dialogue. OpenAI. 2023. Gpt-4 technical report. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. 
Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744. Artidoro Pagnoni, Martin Graciarena, and Yulia Tsvetkov. 2022. Threat scenarios and best practices to detect neural fake news. In Proceedings of the 29th International Conference on Computational Linguistics, pages 1233–1249, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. 12474 Sayak Paul and Soumik Rakshit. 2021. arxiv paper abstracts. https://www.kaggle.com/datasets/ spsayakpaul/arxiv-paper-abstracts. Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. 2023. The refinedweb dataset for falcon llm: Outperforming curated corpora with web data, and web data only. J. Pu, Z. Sarwar, S. Abdullah, A. Rehman, Y. Kim, P. Bhattacharya, M. Javed, and B. Viswanath. 2023a. Deepfake text detection: Limitations and opportunities. In 2023 IEEE Symposium on Security and Privacy (SP), pages 1613–1630, Los Alamitos, CA, USA. IEEE Computer Society. Xiao Pu, Jingyu Zhang, Xiaochuang Han, Yulia Tsvetkov, and Tianxing He. 2023b. On the zero-shot generalization of machine-generated text detectors. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485–5551. Veselin Raychev, Pavol Bielik, and Martin Vechev. 2016. Probabilistic model for code with decision trees. SIGPLAN Not., 51(10):731–747. Juan Diego Rodriguez, Todd Hay, David Gros, Zain Shamsi, and Ravi Srinivasan. 2022. Cross-domain detection of GPT-2-generated technical text. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1213–1233, Seattle, United States. Association for Computational Linguistics. Vinu Sankar Sadasivan, Aounon Kumar, Sriram Balasubramanian, Wenxiao Wang, and Soheil Feizi. 2023. Can ai-generated text be reliably detected? Areg Mikael Sarvazyan, José Ángel González, Paolo Rosso, and Marc Franco-Salvador. 2023a. Supervised machine-generated text detectors: Family and scale matters. In Experimental IR Meets Multilinguality, Multimodality, and Interaction: 14th International Conference of the CLEF Association, CLEF 2023, Thessaloniki, Greece, September 18–21, 2023, Proceedings, page 121–132, Berlin, Heidelberg. Springer-Verlag. Areg Mikael Sarvazyan, José Ángel González, Marc Franco-Salvador, Francisco Rangel, Berta Chulvi, and Paolo Rosso. 2023b. Overview of autextification at iberlef 2023: Detection and attribution of machinegenerated text in multiple domains. Dietmar Schabus, Marcin Skowron, and Martin Trapp. 2017. One million posts: A data set of german online discussions. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’17, page 1241–1244, New York, NY, USA. Association for Computing Machinery. Tal Schuster, Roei Schuster, Darsh J. Shah, and Regina Barzilay. 2020. The limitations of stylometry for detecting machine-generated fake news. Computational Linguistics, 46(2):499–510. 
Tatiana Shamardina, Vladislav Mikhailov, Daniil Chernianskii, Alena Fenogenova, Marat Saidov, Anastasiya Valeeva, Tatiana Shavrina, Ivan Smurov, Elena Tutubalina, and Ekaterina Artemova. 2022. Findings of the the ruatd shared task 2022 on artificial text detection in russian. In Computational Linguistics and Intellectual Technologies. RSUH. Filipo Sharevski, Jennifer Vander Loop, Peter Jachim, Amy Devine, and Emma Pieroni. 2023. Talking abortion (mis) information with chatgpt on tiktok. In 2023 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), pages 594–608. IEEE. Irene Solaiman, Miles Brundage, Jack Clark, Amanda Askell, Ariel Herbert-Voss, Jeff Wu, Alec Radford, Gretchen Krueger, Jong Wook Kim, Sarah Kreps, Miles McCain, Alex Newhouse, Jason Blazakis, Kris McGuffie, and Jasmine Wang. 2019. Release strategies and the social impacts of language models. Rafael Rivera Soto, Kailin Koch, Aleem Khan, Barry Chen, Marcus Bishop, and Nicholas Andrews. 2024. Few-shot detection of machine-generated text using style representations. Giovanni Spitale, Nikola Biller-Andorno, and Federico Germani. 2023. Ai model gpt-3 (dis)informs us better than humans. Science Advances, 9(26):eadh1850. Harald Stiff and Fredrik Johansson. 2022. Detecting computer-generated disinformation. International Journal of Data Science and Analytics, 13:363–383. Zhenpeng Su, Xing Wu, Wei Zhou, Guangyuan Ma, and Songlin Hu. 2024. Hc3 plus: A semantic-invariant human chatgpt comparison corpus. Edward Tian and Alexander Cui. 2023. Gptzero: Towards detection of ai-generated text using zero-shot and supervised methods. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, 12475 Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and finetuned chat models. Adaku Uchendu, Zeyu Ma, Thai Le, Rui Zhang, and Dongwon Lee. 2021. Turingbench: A benchmark environment for turing test in the age of neural text generation. Vivek Verma, Eve Fleisig, Nicholas Tomlin, and Dan Klein. 2023. Ghostbuster: Detecting text ghostwritten by large language models. Michael Völske, Martin Potthast, Shahbaz Syed, and Benno Stein. 2017. TL;DR: Mining Reddit to learn automatic summarization. In Proceedings of the Workshop on New Frontiers in Summarization, pages 59–63, Copenhagen, Denmark. Association for Computational Linguistics. Jian Wang, Shangqing Liu, Xiaofei Xie, and Yi Li. 2023a. Evaluating aigc detectors on code content. Yuxia Wang, Jonibek Mansurov, Petar Ivanov, Jinyan Su, Artem Shelmanov, Akim Tsvigun, Chenxi Whitehouse, Osama Mohammed Afzal, Tarek Mahmoud, Alham Fikri Aji, and Preslav Nakov. 
2023b. M4: Multi-generator, multi-domain, and multi-lingual black-box machine-generated text detection. Max Weiss. 2019. Deepfake bot submissions to federal public comment websites cannot be distinguished from human submissions. Technology Science. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Huggingface’s transformers: State-of-the-art natural language processing. Max Wolff. 2020. Attacking neural text detectors. Kangxi Wu, Liang Pang, Huawei Shen, Xueqi Cheng, and Tat-Seng Chua. 2023. LLMDet: A third party large language models generated text detection tool. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 2113–2133, Singapore. Association for Computational Linguistics. Han Xu, Jie Ren, Pengfei He, Shenglai Zeng, Yingqian Cui, Amy Liu, Hui Liu, and Jiliang Tang. 2023. On the generalization of training-based chatgpt detection methods. Yaoming Zhu, Sidi Lu, Lei Zheng, Jiaxian Guo, Weinan Zhang, Jun Wang, and Yong Yu. 2018. Texygen: A benchmarking platform for text generation models. A Experiments on Multilingual and Code Generations (RAID-Extra) In addition to the main RAID dataset we also release RAID-Extra: a collection of 2.3M generations in three extra challenging domains: Python Code, Czech News, and German News. These extra experiments were not included in the main benchmark as we felt that they were out of scope for most detectors and should not be used as a basis for comparison. Nonetheless, we were still curious to see what sorts of insights they can give us on detector performance. A.1 Data Generation Following Macko et al. (2023), multilingual prompts were written in the target language by a native speaker rather than being written in English and explicitly requesting that the model complete the generation in the target language. We found this to be the most effective method to get our generative models to adhere to the target language. For Python Code generations, we applied an additional post-processing step as, in this domain, generative models had a tendency to write code between sets of triple backtick characters (```) and give natural language explanations of the code outside of the backticks. Thus for this domain and this domain only, we extracted the text between these sets of backticks and discarded all others. This was done to ensure that detectors could not use text descriptions of code for detection and would instead have to rely on the code itself. A.2 Results In Table 7 we report the accuracies of our 12 detectors on generations from RAID-extra at 5% FPR. We see an interesting trend, that being the relatively strong performance of metric-based classifiers as compared to neural and commercial detectors. We suspect that metric-based classifiers are particularly well suited for such rare domains as they can be given any generative model to calculate their probabilities. In Figure 7 we show a heatmap of the performance of our detectors across the extra domains from select models. 
We see that Binoculars performs decently well when detecting Czech news articles despite the underlying generative model, Falcon 7B (Almazrouei et al., 2023), being trained with five times as much German data as Czech data (Penedo et al., 2023). This seems to suggest that strong metric-based detectors for low-resource languages can be bootstrapped from highly multilingual language models. Future work is necessary to understand the optimal setup in such scenarios.

[Figure 7: three accuracy heatmaps, one panel each for RoBERTa (GPT2), RADAR, and Binoculars, with rows for the Code, Czech, and German domains and columns for the Cohere, ChatGPT, GPT4, and Mistral generators.]

Figure 7: Heatmaps of accuracy for three of our detectors on German News, Python Code, and Czech News generations. We see that metric-based detectors have an edge over neural detectors in their ability to generalize to these unusual domains.

Detector      Code   Czech   German   Total
R-B GPT2      13.4   48.4    39.7     38.2
R-L GPT2      12.7   53.1    48.4     43.5
R-B CGPT      24.0   38.7    51.5     41.1
RADAR         12.9   51.1    53.2     44.7
GLTR          40.7   51.9    68.9     56.7
F-DetectGPT   51.1   55.2    75.5     62.7
LLMDet        17.5   24.0    10.6     17.3
Binoculars    59.9   67.0    76.7     69.6
GPTZero       33.8   33.6    49.5     39.0
Originality    8.5   69.8    89.1     55.8
Winston       24.5   70.3    73.8     56.2
ZeroGPT       13.8   49.3    51.7     38.3

Table 7: Accuracy of our 12 detectors at FPR=5% on RAID-extra domains (Python Code, Czech News, and German News). We see that metric-based detectors generally perform better than neural detectors.

B Fixed FPR Accuracy vs. F1 Score

Throughout our work we report accuracy on machine-generated text at a set FPR because we believe it is the most intuitive way of understanding the performance of models in high-risk scenarios (i.e., "What percentage of generations are detected, given that we tolerate an x% chance of wrongly accusing someone?"). Reporting the more standard F1 score is not only less intuitive but also treats false positives and false negatives as equivalent—which is not the case when dealing with high-risk scenarios or ones where detectors are repeatedly applied to texts from the same author.

In addition, since our dataset has a roughly 40:1 ratio of generated to human-written text, precision scores will artificially favor true positives over false positives, as almost all examples in the dataset are positive examples. However, in the real world, this ratio is reversed and the majority of texts are human-written. Thus precision scores systematically over-represent the capabilities of detectors when used as a metric on a dataset like ours. We hope that our work can help to shed light on this issue and on how easy it is to accidentally over-represent the performance of classifiers.

C Per-Domain Threshold Tuning

When tuning the false positive rate of classifiers to a specific percentage (in our case 5%), it is important to look not just at the total FPR across all human texts, but also at the FPR for each individual domain in the dataset. In our work, we ensure that classification thresholds are determined on a per-domain basis, i.e., that the FPR of every detector on every domain of the data is 5%. While this undoubtedly adds complexity to the evaluation, it is an important step to ensure that detectors are being evaluated fairly with respect to one another (see Appendix F.2 for details about the threshold searching procedure).
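To make the fixed-FPR, per-domain evaluation concrete, the sketch below selects one threshold per domain from human-written scores and then measures detection accuracy on machine-generated text. It uses a simple quantile rule rather than the iterative search of Appendix F.2, and the DataFrame column names ('score', 'domain', 'label') are illustrative assumptions rather than the schema of our released code.

import numpy as np
import pandas as pd

def per_domain_thresholds(df: pd.DataFrame, target_fpr: float = 0.05) -> dict:
    """For each domain, pick the score threshold whose FPR on human text is ~target_fpr.

    Assumes higher scores mean "more likely machine-generated" and that df has
    columns 'score', 'domain', and 'label' (1 = machine, 0 = human); these names
    are illustrative.
    """
    thresholds = {}
    for domain, group in df[df["label"] == 0].groupby("domain"):
        # The (1 - target_fpr) quantile of human scores flags ~target_fpr of human texts.
        thresholds[domain] = float(np.quantile(group["score"], 1 - target_fpr))
    return thresholds

def accuracy_at_fpr(df: pd.DataFrame, thresholds: dict) -> float:
    """Fraction of machine-generated texts flagged under the per-domain thresholds."""
    machine = df[df["label"] == 1]
    detected = machine.apply(lambda row: row["score"] >= thresholds[row["domain"]], axis=1)
    return float(detected.mean())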
To drive home the importance of this point, in Table 8 we show the FPR of each classifier at a 5% total FPR threshold broken up by domain. As we can see, while the total FPR is consistently 5%, many detectors have particularly acute domain-specific weaknesses: RADAR has a 20.4% FPR on Reviews, GLTR has a 33.4% FPR on Recipes, and Originality has a 13% FPR on Wikipedia. This asymmetric variation of FPR creates a dampening effect whereby the inclusion of weaker, more obscure domains reduces the accuracy of a classifier on more common domains—ultimately lowering total accuracy in the process. In order to avoid this issue, we ensure that our thresholds are chosen on a per-domain basis. That is, we find the threshold for each detector for each domain that results in 5% FPR on that domain (see Table 13).

Detector      τ       News    Wiki    Reddit   Books   Abstracts   Reviews   Poetry   Recipes   Total
R-B GPT2      0.759   1.0%    3.2%    3.7%     4.0%    1.6%        3.1%      15.4%    7.3%      5.0%
R-L GPT2      0.307   1.8%    7.0%    2.4%     2.5%    1.3%        4.0%      15.4%    5.1%      5.0%
R-B C-GPT     0.988   4.4%    4.0%    0.7%     9.0%    0.0%        1.1%      0.5%     18.6%     5.0%
RADAR         0.343   0.2%    1.2%    10.7%    2.0%    4.2%        20.4%     8.6%     0.1%      5.0%
GLTR          0.818   0.4%    0.8%    1.1%     0.0%    0.2%        0.1%      1.8%     33.4%     5.0%
F-DetectGPT   0.920   5.1%    1.6%    1.9%     6.3%    1.8%        2.8%      7.5%     2.2%      3.7%
LLMDet        1.000   10.2%   12.8%   6.1%     2.7%    0.2%        3.9%      3.4%     0.1%      5.0%
Binoculars    0.090   2.8%    5.7%    7.2%     4.0%    5.3%        5.7%      3.8%     5.9%      5.0%
GPTZero       0.068   3.0%    2.0%    3.0%     13.0%   10.0%       5.0%      0.0%     1.0%      4.6%
Originality   0.494   4.0%    13.0%   2.0%     3.0%    4.0%        4.0%      3.0%     7.0%      5.0%
Winston       0.892   0.0%    10.0%   0.0%     13.0%   0.0%        0.0%      4.0%     13.0%     5.0%
ZeroGPT       1.000   29.0%   48.0%   0.0%     1.0%    0.0%        5.0%      0.0%     52.0%     16.9%

Table 8: False positive rates of our detectors on the RAID dataset broken up by domain when naively using a single threshold (τ). We see that while the total FPR across all domains is still 5%, individual domains see substantial fluctuation in FPR values.

D Leaderboard and PyPI Package

In order to truly achieve our goal of standardizing detector evaluation, it is important for RAID not only to be sufficiently challenging, but also to have a simple, straightforward interface for submission and comparison. As discussed in Section 4.2, we solve this problem by releasing a shared leaderboard and a PyPI package to make submitting to the leaderboard easy. The RAID package can be installed by running:

$ pip install raid-bench

In Figure 9 we show how to use our PyPI package to load the RAID test set and run a detector on the texts. After getting the predictions.json file, submitting to the leaderboard simply involves making a folder in the repository, filling out metadata, and creating a pull request. In Figure 8 we show a screenshot of the leaderboard page on our project website. Submissions are open to the public and are automatically accepted and evaluated via pull requests to our git repository. The data used for evaluation on the leaderboard is the 10% test split of the core RAID dataset that is released without labels.

E Dataset Details

E.1 Domains

In Table 9 we report the exact number of documents sampled from each domain. The only two datasets that include post-2021 documents are Wikipedia and Abstracts. The Reviews domain did not have 2,000 documents, so we sampled the maximum amount. For each domain, we provide a detailed list of all human-written sources with metadata along with the RAID dataset in our code repository.
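Counts like those in Table 9 can be recomputed from the released data. The sketch below uses the raid-bench package described in Appendix D; the 'train' split name and the 'model' and 'domain' column names are assumptions for illustration and may not match the actual schema.

from raid.utils import load_data

# Load the labeled RAID training split (the test split is released without labels).
train_df = load_data(split="train")

# Count human-written documents per domain, mirroring Table 9.
# The 'model' and 'domain' column names are assumed for illustration.
human_df = train_df[train_df["model"] == "human"]
print(human_df.groupby("domain").size())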
A detailed description of the contents of each domain is as follows:

Book summaries (Bamman and Smith, 2013) This dataset contains plot-centric summaries of books along with their titles. We chose this dataset due to the first-person narrative style and because we expect generators and detectors with knowledge of the source material to have an advantage.

BBC News Articles (Greene and Cunningham, 2006) This dataset contains BBC articles with associated titles. The articles are spread out evenly across 5 categories (sport, technology, entertainment, politics, and business). This dataset was chosen since good generation requires factuality and because News is a large area for LLM-based harm.

Poems (Arman, 2020) This dataset contains poems collected from poemhunter.com with their titles and genre. The poems are randomly spread out over genres and topics. We hypothesize that LLMs will write generic and repetitive poetry and that this tendency should be detectable.

Recipes (Bień et al., 2020) This dataset consists of recipes and their dish names. Recipes are a combination of a list of ingredients and a numbered list of steps. This dataset is difficult because it requires significant common-sense reasoning, which is difficult for models.

Figure 8: Screenshot of the RAID leaderboard accessible at https://raid-bench.xyz/leaderboard

import json

from raid import run_detection
from raid.utils import load_data

# Define your detector function
def my_detector(texts: list[str]) -> list[float]:
    pass

# Load the RAID test data
test_df = load_data(split='test')

# Run your detector on the dataset
predictions = run_detection(my_detector, test_df)

# Write predictions to a JSON file
with open('predictions.json', 'w') as f:
    json.dump(predictions, f)

Figure 9: An example showing how to use the RAID PyPI package to evaluate a detector on the dataset and submit it to the leaderboard.

Reddit Posts (Völske et al., 2017) This dataset contains Reddit posts and their titles. We hypothesize that such data will be challenging to detect due to the first-person and informal style.

Movie Reviews (Maas et al., 2011) This dataset contains movie reviews from IMDb along with the names of the movies. The formality of the reviews is varied, and this tests models' ability to recall details from movies as well as generate and detect opinionated text.

Wikipedia (Aaditya Bhat, 2023) This dataset contains introductions to various Wikipedia articles. This dataset is challenging as it tests the models' ability to accurately recall facts relating to specific historical events.

Python Code (Raychev et al., 2016) This dataset contains Python solutions to coding problems and the associated problem titles. We include this as an initial foray into the detection of AI-generated code.

Czech News (Boháček et al., 2022) This domain consists of Czech-language news articles. The topics and sources are diverse, sampling from over 45 publications including mainstream journalistic websites, tabloids, and independent news outlets in the Czech Republic.

German News (Schabus et al., 2017) This domain consists of German-language news articles from DER STANDARD, an Austrian daily broadsheet newspaper. Articles are fairly political in nature, covering topics such as the European migrant crisis, the 2016 Austrian presidential elections, and the Syrian Civil War. We expect this dataset to test models' ability to generate opinionated political content in another language.
Paper Abstracts (Paul and Rakshit, 2021) This is a dataset of abstracts scraped from ArXiv together with paper titles. For this dataset and this dataset only, we filter the data such that only papers from 2023 or later are present in the data. This allows us to rule out the possibility that our models have memorized this text.

Dataset                         Genre       Size
(Paul and Rakshit, 2021)        Abstracts   1966
(Bamman and Smith, 2013)        Books       1981
(Raychev et al., 2016)          Code         920
(Greene and Cunningham, 2006)   News        1980
(Arman, 2020)                   Poetry      1971
(Bień et al., 2020)             Recipes     1972
(Völske et al., 2017)           Reddit      1979
(Maas et al., 2011)             Reviews     1143
(Aaditya Bhat, 2023)            Wiki        1979
(Boháček et al., 2022)          Czech       1965
(Schabus et al., 2017)          German      1970

Table 9: The number of articles sampled from each domain with their corresponding sources.

E.2 Generative Models

In Table 10 we list the exact generative models used in our project along with their unique identifiers. All open-source models were run using the HuggingFace transformers library (Wolf et al., 2020) and all closed-source models were run using the proprietary APIs from Cohere7 and OpenAI8. The following is a detailed list of the generative models used in the project.

7 https://docs.cohere.com/reference/about
8 https://platform.openai.com/docs/introduction

Model              Identifier
GPT-2              gpt2-xl (Radford et al., 2019)
MPT (+ Chat)       mpt-30b, mpt-30b-chat (MosaicML, 2023)
Mistral (+ Chat)   Mistral-7B-v0.1, Mistral-7B-Instruct-v0.1 (Jiang et al., 2023)
LLaMA Chat         Llama-2-70b-chat-hf (Touvron et al., 2023)
Cohere (+ Chat)    command (co.generate()), command (co.chat()) (Cohere, 2024)
GPT-3              text-davinci-002 (Ouyang et al., 2022)
ChatGPT            gpt-3.5-turbo-0613 (OpenAI, 2022)
GPT-4              gpt-4-0613 (OpenAI, 2023)

Table 10: The generative models used in our project.

GPT2 XL 1.5B (Radford et al., 2019) is a decoder-only model trained on the WebText dataset. This dataset consists of a collection of all documents that were linked from Reddit posts or comments with at least 3 upvotes. Released in February of 2019 and having 1.5B parameters, GPT2 is the predecessor of GPT3 and GPT4 and the most powerful open-source OpenAI model.

GPT3 (Ouyang et al., 2022) is a closed-source language model released by OpenAI on November 29th, 2022. The model was allegedly trained with a variety of data including the Common Crawl (filtered), WebText2, and Wikipedia datasets (Brown et al., 2020), but the exact composition of the training dataset is unknown. It is the first model shown to work well with prompts and has shown great zero- and few-shot capabilities. In this study, we use the text-davinci-002 model. We queried the model from November 1st to November 2nd 2023. Unfortunately, as of January 4th 2024, this model is no longer available for use on the OpenAI API. This is unfortunate as it prevents us from expanding the domains in future releases. We encourage researchers to keep this in mind when using OpenAI models for their research projects.

ChatGPT (OpenAI, 2022) is a version of GPT3 fine-tuned using Reinforcement Learning from Human Feedback (RLHF) (Ouyang et al., 2022). We use the June 13th 2023 checkpoint of the model (gpt-3.5-turbo-0613). Although the number of parameters is unknown, ChatGPT demonstrates outstanding capability in language and code generation.

GPT4 (OpenAI, 2023) is the latest iteration of OpenAI's GPT family of models and is one of the largest and most powerful language models available to date.
In this study, we use the gpt-4-0613 checkpoint of the model through the ChatCompletion interface9. We queried the model from November 1st to November 2nd 2023.

9 https://platform.openai.com/docs/guides/chat

LLaMA 2 70B (Touvron et al., 2023) is a decoder-only model trained by Meta (Facebook) and is the second model in the LLaMA series. Released on July 18th 2023, LLaMA 2 is the successor to the original LLaMA model, which was trained on webpages from CommonCrawl, multilingual Wikipedia, books from Project Gutenberg, and QAs from Stack Exchange. The composition of LLaMA 2's training data is not known, but it has shown impressive performance on many open-source evaluations and is widely considered competitive with the open-source state-of-the-art.

Mistral 7B (Jiang et al., 2023) is a decoder-only model trained by Mistral and is the first model released by the company. Released on September 27, 2023, Mistral 7B outperforms LLaMA 2 13B across various benchmarks at half the size. While the model weights are open-source, the training data for the model is not, and no details have been released regarding the makeup of this data.

MPT 30B (MosaicML, 2023) is a decoder-only model trained by Mosaic and is the first model released by the company. Released on June 22nd 2023, MPT 30B has an 8k context window and outperforms GPT-3 on various reasoning tasks. Training data consists of deduplicated C4 (Raffel et al., 2020; Lee et al., 2022), the RedPajama10 split of CommonCrawl, and selected programming languages from The Stack (Kocetkov et al., 2022).

10 https://www.together.ai/blog/redpajama-data-v2

Cohere command (Cohere, 2024) is a closed-source model trained and released by Cohere. While original versions of this model were quoted as having roughly 50 billion parameters (Liang et al., 2023b), the size and training data of the current version of the Cohere model are unknown. Unlike OpenAI, Cohere does not version the command model and so, much like with text-davinci-002, we are unable to expand our dataset to new domains without re-generating all generations. We queried the command model through both the co.generate() and co.chat() endpoints from November 1st to November 2nd 2023.

E.3 Prompts

In Table 12 we report the prompt templates for each of the 8 domains used in our project. We specifically avoided biasing the model towards a particular length or style of generation and did not use any of the text from the human-written documents other than the title when prompting.

To decide on the particular form of our prompts, we conducted two rounds of manual review. These rounds consisted of the authors determining problematic prompts by manually reviewing 10 generations per model in each domain for instances of degenerate repetition, meta-commentary, or other signs of generated output. In problematic domains, we conducted individual explorations to determine if there existed some high-level prompting concept that removed the unintended behavior and whether or not using such a prompt style caused regressions across other generators. After identifying a desired prompting change, we re-wrote our templates and restarted the review process.
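To make the prompting setup concrete, building a prompt amounts to substituting the title of the human-written document into a per-domain, per-style template. The sketch below uses the News templates from Table 12; the helper function and the example title are purely illustrative.

# Illustrative subset of the prompt templates in Table 12; only {title} is filled in.
TEMPLATES = {
    "news": {
        "continuation": 'The following is the full text of a news article titled "{title}" from bbc.com:',
        "chat": 'Write the body of a BBC news article titled "{title}". Do not repeat the title.',
    },
}

def build_prompt(domain: str, style: str, title: str) -> str:
    """Fill the per-domain template with the title of the human-written document."""
    return TEMPLATES[domain][style].format(title=title)

# Example usage with a made-up title.
print(build_prompt("news", "chat", "Storm Forces Closure of Local Bridges"))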
Through this investigation, we found that continuation-style models benefit greatly from explicitly stating the source website in the prompt and that chat-style models benefit from language explicitly asking them not to repeat the title of the article (see Table 12). While we were unable to remove all instances of degenerate repetition or meta-commentary, this investigation significantly increased the quality of our dataset, and we encourage future work on dataset creation to conduct a similar process when engineering their prompts.

Abstracts
  Continuation: The following is the full text of the abstract for a research paper titled "{title}" from arxiv.org:
  Chat: Write the abstract for the academic paper titled "{title}".
Books
  Continuation: The following is the full text of a plot summary for a novel titled "{title}" from wikipedia.org:
  Chat: Write the body of a plot summary for a novel titled "{title}". Do not give it a title.
Code
  Continuation: The following is the full text of a Python code solution to the exercise "{title}" from stackoverflow.com:
  Chat: Write just the Python code solution for the problem "{title}".
German
  Continuation: Es folgt ein Nachrichtenartikel mit dem Titel "{title}":
  Chat: Schreiben Sie einen Nachrichtenartikel mit dem Titel "{title}".
Czech
  Continuation: Následuje článek s názvem "{title}":
  Chat: Napiš text článku, který má nadpis "{title}".
News
  Continuation: The following is the full text of a news article titled "{title}" from bbc.com:
  Chat: Write the body of a BBC news article titled "{title}". Do not repeat the title.
Poetry
  Continuation: The following is the full text of a poem titled "{title}" from poemhunter.com:
  Chat: Write the body of a poem titled "{title}". Do not repeat the title.
Recipes
  Continuation: The following is the full text of a recipe for a dish called "{title}" from allrecipes.com:
  Chat: Write a recipe for "{title}".
Reddit
  Continuation: The following is the full text of a post titled "{title}" from reddit.com:
  Chat: Write just the body of a Reddit post titled "{title}". Do not repeat the title.
Reviews
  Continuation: The following is the full text of a review for the movie "{title}" from IMDb.com:
  Chat: Write the body of an IMDb review for the movie "{title}". Do not give it a title.
Wiki
  Continuation: The following is the full text of an article titled "{title}" from wikipedia.com:
  Chat: Write the body of a Wikipedia article titled "{title}".

Table 12: The text of the generation prompts for all ten datasets in both continuation and chat style. The field {title} was replaced with the title of the book, dish, or news article before being passed into the generative model.

E.4 Adversarial Attacks

In Table 11 we list the attacks used along with the attack rate θ and the relevant sources for the attacks. In this section we list the attacks in more detail and discuss the various design decisions made. Implementations for all attacks can be found in our GitHub repository.

Attack                θ      Source
Alternative Spelling  100%   (Liang et al., 2023c)
Article Deletion      50%    (Liang et al., 2023a; Guerrero et al., 2022)
Homoglyph             100%   (Wolff, 2020; Gagiano et al., 2021)
Insert Paragraphs     50%    (Bhat and Parthasarathy, 2020)
Number Swap           50%    (Bhat and Parthasarathy, 2020)
Paraphrase            100%   (Krishna et al., 2023; Sadasivan et al., 2023)
Misspelling           20%    (Liang et al., 2023a; Gagiano et al., 2021; Gao et al., 2018)
Synonym               50%    (Pu et al., 2023a)
Upper-Lower           5%     (Gagiano et al., 2021)
Whitespace            20%    (Cai and Cui, 2023; Gagiano et al., 2021)
Zero-Width Space      100%   (Guerrero et al., 2022)

Table 11: The adversarial attacks used in the project. θ represents the manually determined fraction of available attacks carried out. We determine this fraction through manual review.

Alternative Spelling We use an American to British English dictionary11 to construct a mapping between American and British spellings of words. We then find all instances of such words in the generation (defaulting to the longest available match if there were multiple substring matches for the same token). We then randomly sample a fixed percentage θ of the possible mutations to make with a set seed and apply the attack at those indices.

11 https://github.com/hyperreality/American-British-English-Translator

Article Deletion We search through the text and find every instance of the articles "a", "an", and "the". We then randomly sample a fixed percentage θ of the possible mutations to make with a set seed and apply the attack at those indices.

Homoglyph Homoglyphs are non-standard Unicode characters that strongly resemble standard English letters; they are typically characters used in Cyrillic scripts. We use the set of homoglyphs from Wolff (2020), which includes substitutions for the following standard ASCII characters: a, A, B, e, E, c, p, K, O, P, M, H, T, X, C, y, o, x, I, i, N, and Z. We limit ourselves to only homoglyphs that are undetectable to the untrained human eye; thus we are able to use an attack rate of θ = 100% and apply the attack on every possible character. For characters that have multiple possible homoglyphs, we randomly choose between the homoglyphs.

Insert Paragraphs For this attack, we again split sentences using Punkt (Kiss and Strunk, 2006) and construct a list of all inter-sentence spans. We then sample θ percent of those spans and add the double newline character \n\n in between the sentences to simulate a paragraph break.

Number Swap For this attack we use the following regular expression to extract all instances of numerical digits in the generation: "\d+.?\d*". We then randomly select θ percent of these digits to modify, and for each digit we randomly select an alternate number between 0 and 9 to replace the character with.

Paraphrase For paraphrasing we run the DIPPER-11B model12 from Krishna et al. (2023) through HuggingFace (Wolf et al., 2020). DIPPER is a fine-tuned version of T5-11B (Raffel et al., 2020) specifically made for paraphrasing text to avoid machine-generated text detectors. We use the default settings from the paper, namely a sentence interval of 3 with lexical diversity of 60 and order diversity of 0. Since paraphrases are not inherently noticeable when models are correct, we are able to apply this attack across the entirety of the output text (θ = 100%).

12 https://huggingface.co/kalpeshk2011/dipper-paraphraser-xxl

Misspelling For this attack, we manually constructed a dictionary of common misspellings13 and only applied the attack to instances of words that have misspellings in our dictionary. We did this to minimize suspicion from human readers, as certain common words such as "the" or "him" are rarely misspelled. Instead of randomly selecting θ percent of the possible candidate words to misspell, we follow Gagiano et al. (2021) and misspell only the top θ percent most likely candidate words by log likelihood as determined by GPT2 small. This allows us to choose only the most effective misspellings to apply.

13 https://en.wikipedia.org/wiki/Commonly_misspelled_English_words

Synonym For this attack we originally planned to use the DFTFooler algorithm from Pu et al. (2023a).
However, we found the candidate synonyms to be of low quality and easily detectable in our manual analysis. Thus we decided to implement our own algorithm, based largely on DFTFooler. Our algorithm produces high-quality and diverse synonym substitutions without relying on any of the large decoder-only language models used for generation.

We start by iterating over all tokens in the generation. For each token i we replace it with a mask token and get the top 20 most likely mask-fill candidates14 according to BERT (Devlin et al., 2019). We then compute the part-of-speech tag for each candidate using NLTK (Bird and Loper, 2004) and reject all candidates that do not match the part-of-speech of the original token. We then get the static FastText (Bojanowski et al., 2017) embeddings for all candidate substitutions and reject all tokens that have a cosine similarity of less than 0.5 with the original token. Doing this for each index i gives us a global list of all valid candidate swaps across the entire generated passage. From this list we select the θ · L most likely synonym swaps according to BERT, where L is the length (in tokens) of the passage. The full code for this algorithm can be found in our project repository along with the implementations of the other adversarial attacks.

14 In order to reduce computation time we limit BERT to a window of 20 tokens on either side of the index when determining the mask-fill candidates. We found no reduction in candidate quality from this modification.

Upper-Lower This attack randomly selects some θ percent of the tokens in the passage and swaps the first letter of the token to be uppercase if it was lowercase and lowercase if it was originally uppercase.

Whitespace This attack randomly selects some θ percent of inter-token spaces and adds an extra space character in between the tokens. This can occasionally result in multiple spaces added between two tokens, as sampling is done with replacement.

Zero-Width Space The Unicode zero-width space U+200B is a character that exists in text encodings but is not visible to human readers in most scenarios. Thus, for this attack we insert this character at every possible opportunity (before and after each visible character in the generation).

E.5 Repetition Penalty vs. Frequency Penalty vs. Presence Penalty

Given a temperature $T > 0$ and a vector of scores $x \in \mathbb{R}^d$ with one entry $x_i$ for each token $i$ in the vocabulary, the probability $p_i$ of predicting the $i$th token is given by:

$$p_i = \frac{\exp(x_i / T)}{\sum_j \exp(x_j / T)}$$

The repetition penalty defined by Keskar et al. (2019) modifies this distribution as follows:

$$p_i = \frac{\exp\big(x_i / (T \cdot I(i \in g))\big)}{\sum_j \exp\big(x_j / (T \cdot I(j \in g))\big)}$$

where $g$ is the list of previously generated tokens and $I(c) = \theta$ if $c$ is true and $1$ otherwise. OpenAI implements a form15 of this penalty (referred to as a 'presence penalty') which is additive instead of multiplicative:

$$p_i = \frac{\exp\big((x_i / T) - I(i \in g)\big)}{\sum_j \exp\big((x_j / T) - I(j \in g)\big)}$$

Cohere (as of May 13th 2024) provides no documentation on the nature of their presence penalty despite requests from the authors for such documentation.

E.6 Hardware

We ran our generations over the course of 15 days from November 1st 2023 to November 15th 2023 on 32 NVIDIA 48GB A6000 GPUs. We ran all models with 16-bit precision, as we found that the outputs were identical to full precision and it cut our inference time in half. The amount of GPU hours used by each family of models is as follows: LLaMA 2 70B (+ Chat) 8,376 hours, MPT (+ Chat) 5,440 hours, Mistral (+ Chat) 672 hours, GPT2 (+ Chat) 352 hours.
In total we used 14,872 GPU hours (620 GPU days) to generate the RAID dataset. 15https://platform.openai.com/docs/guides/ text-generation/parameter-details 12483 F Detector Details F.1 Detectors In this section we provide a detailed description of all detectors used in the evaluations of the RAID dataset. RoBERTa (GPT2) (Solaiman et al., 2019) This detector16 is a RoBERTa model (Liu et al., 2019) fine-tuned on the GPT2 output dataset. This dataset consists of outputs from GPT2 in open domain settings with three different decoding strategies: greedy decoding, top-k=50, and fully random sampling and has been a baseline inclusion for many years. We use both the base and large size of this model in our comparisons. RoBERTa (ChatGPT) (Guo et al., 2023) This detector is a RoBERTa-base model (Liu et al., 2019) fine-tuned on the HC3 dataset. HC3 consists of roughly 27,000 questions paired with both human and ChatGPT answers in various domains such as reddit, medicine, finance, and law. We download and query the detector via HuggingFace datasets with the unique identifier Hello-SimpleAI/chatgpt-detector-roberta RADAR (Hu et al., 2023) This detector is a finetuned version of Vicuna 7B (which itself is a finetune of LLaMA 7B). It was trained in a generative adversarial setting alongside a paraphrase model. The paraphraser was trained specifically to fool the detector and the detector was trained to accurately detect generations from the paraphraser, humantext from the WebText dataset, and outputs from the original language model. We download and query this detector from HuggingFace with the unique identifier TrustSafeAI/RADAR-Vicuna-7B GLTR (Gehrmann et al., 2019) Originally intended as an interface to help humans better detect generated text, GLTR has become a standard baseline in robustness studies of detector abilities. GLTR evaluates the likelihood of text according to a language model and bins tokens according to their likelihoods and uses these bins as features for detection. We use the default settings from the GLTR repository17 namely our cutoff set at rank=10 and the language model set to GPT2 small. 16We download the model hosted by OpenAI from the following link https://openaipublic.azureedge.net/ gpt-2/detector-models/v1/detector-large.pt 17https://github.com/HendrikStrobelt/ detecting-fake-text FastDetectGPT (Bao et al., 2023) This detector is an improvement on the original DetectGPT (Mitchell et al., 2023)—speeding up inference by 340x without any reduction in accuracy. For the scoring model we use the repository default of GPTNeo-2.7B and for the reference model we again use the default of GPT-J-7B. Since neither of these models were used to generate continuations in our dataset, we felt that this was a reasonable choice. Binoculars (Hans et al., 2024) This detector uses perplexity divided by cross-entropy between two very similar language models as the metric for detection. In our implementation we use the code from the official GitHub repository and calculate perplexity using the default models from the repository, namely Falcon 7B and Falcon 7B Instruct (Almazrouei et al., 2023). Much like FastDetectGPT, since neither of these models were used to generate continuations, we believed this made for a fair comparison. LLMDet (Wu et al., 2023) This detector computes the proxy-perplexity of the input text from 10 different small language models and uses these features for detection. 
LLMDet (Wu et al., 2023) This detector computes the proxy-perplexity of the input text under 10 different small language models and uses these values as features for detection. The proxy-perplexity is an approximation of the true perplexity, calculated by repeatedly sampling n-grams from the models rather than by actually running them. None of the models used were used for generations in our dataset, so this was a fair comparison.

GPTZero (Tian and Cui, 2023) GPTZero is a closed-source commercial detector released in January 2023 and was among the first to gain widespread media attention for its detection-as-a-service business model. We queried the detector on January 24th 2024 using the v2 API (https://api.gptzero.me/v2/predict/text) and threshold on the completely_generated_prob field.

Originality Originality is a closed-source commercial detector released in November 2022 and was the first company to adopt the detection-as-a-service business model. We queried this detector from January 24th to 25th through the v1 API (https://api.originality.ai/api/v1/scan/ai) and threshold on the 'score' field in the output JSON. For the Czech and German news domains, we specifically query the multilingual "version 3" of the detector.

Winston This detector is the closed-source commercial detector that claims the highest accuracy out of any detector (99.98%). We query this model through the v1 API (https://api.gowinston.ai/functions/v1/predict) and specifically set the input language to German when we detect that our input text is in German. Unfortunately, since Winston does not support detection in Czech, we use English as the input language for the Czech news domain. We once again threshold on the 'score' field in the output JSON.

ZeroGPT This detector is the final commercial detector in our evaluation. We include it because it is the only detector besides GPTZero that has been evaluated in another benchmark paper. We query the API at https://api.zerogpt.com/api/detect/detectText and use the 'isHuman' return field as the classifier output.

F.2 Thresholds

In order to find thresholds that achieve a fixed false positive rate, we implemented a simple search procedure. We search in a linear fashion: we start at the threshold corresponding to the mean score of the human data (roughly 50% FPR) and approach the desired false positive rate by iteratively incrementing or decrementing the threshold. If we overshoot the target FPR, we divide our step size in half and flip the sign. We continue until the false positive rate is within ε = 0.0005 of the target or until 50 iterations are reached. If 50 iterations are reached without convergence, we select the threshold corresponding to the FPR that is closest to the target while still being less than the target. If there is no such threshold, we note that the detector could not reach the target FPR and simply choose the value closest to and greater than the target. In Table 13 we list the thresholds our algorithm found for each detector along with the exact false positive rates that these thresholds allow. A minimal sketch of this search procedure follows.
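The sketch below is an illustrative implementation of this search, not the code released with the paper; the initial step size and the scoring convention (a passage is flagged as machine-generated when its score exceeds the threshold) are assumptions.

import numpy as np

def find_threshold(human_scores, target_fpr=0.05, eps=0.0005, max_iters=50):
    """Search for a decision threshold whose FPR on human-written text is ~target_fpr.

    Convention (assumed): score > threshold => predicted machine-generated, so the
    FPR is the fraction of human passages scoring above the threshold.
    """
    scores = np.asarray(human_scores, dtype=float)
    fpr = lambda t: float((scores > t).mean())

    threshold = scores.mean()          # starting point: roughly 50% FPR
    step = scores.std() or 1.0         # initial step size (not specified in the text)
    direction = 1.0 if fpr(threshold) > target_fpr else -1.0
    visited = [(threshold, fpr(threshold))]

    for _ in range(max_iters):
        current = fpr(threshold)
        if abs(current - target_fpr) <= eps:
            return threshold, current
        if (current - target_fpr) * direction < 0:
            # Overshot the target: halve the step size and flip the search direction.
            step /= 2.0
            direction *= -1.0
        threshold += direction * step
        visited.append((threshold, fpr(threshold)))

    # No convergence: prefer the visited threshold closest to the target from below,
    # otherwise fall back to the closest one above the target.
    below = [(t, f) for t, f in visited if f <= target_fpr]
    pool = below if below else visited
    return min(pool, key=lambda tf: abs(tf[1] - target_fpr))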
G Extended Figures and Tables

G.1 Dataset Statistics and Evaluations

In Figures 10 and 11 we report extended statistics about the dataset. We see some interesting trends here, namely that human-written text is still significantly less likely than machine-generated text according to LLaMA 2 7B and that it is also the least repetitive. These results push back on claims that language models are approaching human-level performance and that detection is therefore unreasonably difficult. (A minimal sketch of the per-passage perplexity computation behind Figure 10 is included after the figure captions below.)

G.2 Extended Heatmaps

In Figure 12 we show the extended heatmaps from Figure 6 in the main paper. We see that the trend holds that RoBERTa GPT2 is significantly better on GPT2 generations and that RADAR is uncharacteristically bad on IMDb Movie Reviews.

G.3 Model Performance vs. Domain

In Table 14 we report the results of our 12 detectors across all different domains of generated text. We see that metric-based methods such as Binoculars and FastDetectGPT generalize surprisingly well across domains. We also see that detectors generally perform well but occasionally perform surprisingly poorly.

G.4 Detector Accuracy vs. Decoding Strategy

In Table 5 we report the results of our evaluation broken up by category of model and by decoding strategy. Much like in Figure 5 from the main paper, we see that certain combinations of decoding strategies and models can cause detector accuracy to plummet unexpectedly. This raises serious concerns for the robust deployment of detectors.

G.5 Detector Accuracy vs. Adversarial Attack

In Table 16 we report the full version of the results from Table 6 in the main paper. We see similar trends, namely that certain adversarial attacks work better on certain detectors and that adversarial attacks occasionally improve detector performance rather than harm it. One notable inclusion here is the zero-width space attack, which seems to cause detectors to assign either all positive or all negative labels. Future work should investigate this phenomenon.

H Example Generations

In Tables 17 and 18 we provide example outputs for each generative model and adversarial attack. We see that different generative models have significantly differing styles, underscoring the difficulty of the detection problem.

[Figure 10: histogram panels omitted — per-model histograms of perplexity, one panel each for ChatGPT, Cohere, Cohere-Chat, GPT2, GPT3, GPT4, Human, LLaMA-Chat, MPT, MPT-Chat, Mistral, and Mistral-Chat.] Figure 10: Histograms of perplexity according to LLaMA 2 7B per generative model in the dataset. We see that there is still a significant difference between human-written and machine-generated text with respect to perplexity.

[Figure 11: histogram panels omitted — per-model histograms of SelfBLEU for the same twelve sources.] Figure 11: Histograms of SelfBLEU (Zhu et al., 2018) per generative model in the dataset. We see that continuation models tend to be more repetitive than chat models and that human-written text is by far the least repetitive.
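As referenced in G.1, the per-passage perplexity statistics summarized in Figure 10 can be reproduced with a standard causal-LM likelihood computation. The sketch below is illustrative only; it uses the small GPT-2 checkpoint as a stand-in, whereas the figure reports perplexity under LLaMA 2 7B.

import math
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_ID = "gpt2"  # stand-in model; the figure uses LLaMA 2 7B
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
model.eval()

@torch.no_grad()
def passage_perplexity(text: str) -> float:
    # Exponentiated mean token negative log-likelihood of the passage under the model.
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    loss = model(**enc, labels=enc["input_ids"]).loss
    return math.exp(loss.item())

print(passage_perplexity("The quick brown fox jumps over the lazy dog."))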
[Figure 12: heatmaps omitted — accuracy of RoBERTa (GPT2), GPTZero, and RADAR broken down by domain (rows) and generative model (columns).] Figure 12: Extended heatmap of RoBERTa GPT2, GPTZero, and RADAR's performance across all models and domains in the RAID dataset. We see that the trends noted in Figure 6 still hold.
Detector | News | Wiki | Reddit | Books | Abstracts | Reviews | Poetry | Recipes
RoBERTa-B GPT2 | 0.032 (5.0%) | 0.379 (5.0%) | 0.477 (5.0%) | 0.586 (5.0%) | 0.055 (5.0%) | 0.539 (5.0%) | 0.998 (5.0%) | 0.916 (5.0%)
RoBERTa-L GPT2 | 0.070 (5.0%) | 0.459 (5.0%) | 0.100 (5.0%) | 0.161 (5.0%) | 0.085 (5.0%) | 0.298 (5.1%) | 0.762 (5.0%) | 0.315 (5.0%)
RoBERTa-B C-GPT | 0.987 (5.0%) | 0.983 (5.0%) | 0.219 (5.0%) | 0.996 (5.0%) | 0.007 (5.0%) | 0.371 (5.0%) | 0.295 (5.0%) | 0.998 (5.0%)
RADAR | 0.022 (5.0%) | 0.061 (5.0%) | 0.695 (5.0%) | 0.174 (5.0%) | 0.31 (5.0%) | 0.997 (5.0%) | 0.457 (5.0%) | 0.016 (5.0%)
GLTR | 0.788 (5.0%) | 0.788 (5.0%) | 0.767 (5.0%) | 0.742 (5.0%) | 0.726 (5.0%) | 0.757 (5.0%) | 0.756 (5.0%) | 0.863 (5.0%)
FastDetectGPT | 0.920 (4.8%) | 0.870 (5.1%) | 0.870 (5.1%) | 0.930 (4.9%) | 0.860 (4.6%) | 0.900 (4.8%) | 0.940 (5.9%) | 0.880 (5.5%)
LLMDet | 1.000 (5.0%) | 1.000 (5.0%) | 1.000 (5.0%) | 1.000 (5.0%) | 0.999 (5.0%) | 1.000 (5.1%) | 1.000 (5.0%) | 0.998 (5.0%)
Binoculars | 0.077 (4.9%) | 0.093 (5.0%) | 0.099 (5.0%) | 0.085 (5.0%) | 0.092 (5.0%) | 0.097 (4.9%) | 0.084 (5.0%) | 0.094 (5.0%)
GPTZero | 0.047 (5.0%) | 0.032 (5.0%) | 0.057 (5.0%) | 0.125 (5.0%) | 0.125 (5.0%) | 0.070 (5.0%) | 0.031 (5.0%) | 0.035 (5.0%)
Originality | 0.375 (5.0%) | 0.938 (5.0%) | 0.250 (5.0%) | 0.312 (5.0%) | 0.257 (5.0%) | 0.461 (5.0%) | 0.047 (5.0%) | 0.750 (5.0%)
Winston | 0.001 (4.0%) | 0.970 (5.0%) | 0.062 (5.0%) | 0.998 (5.0%) | 0.000 (6.0%) | 0.062 (5.0%) | 0.875 (5.0%) | 0.996 (5.0%)
ZeroGPT(*) | 1.000 (29.0%) | 1.000 (48.0%) | 0.375 (1.0%) | 0.250 (9.0%) | 0.500 (4.0%) | 1.000 (5.0%) | 0.125 (5.0%) | 1.000 (52.0%)
Table 13: Thresholds found by our search and the exact false positive rates on our dataset. We see that ZeroGPT is incapable of achieving the target FPR of 5% in many domains.

Detector | News | Wiki | Reddit | Books | Abstracts | Reviews | Poetry | Recipes
RoBERTa-B GPT2 | 74.3 | 64.7 | 56.3 | 67.9 | 56.3 | 69.1 | 23.9 | 64.6
RoBERTa-L GPT2 | 69.8 | 59.5 | 54.1 | 62.4 | 58.9 | 58.2 | 24.4 | 67.2
RoBERTa-B CGPT | 45.1 | 48.7 | 44.6 | 53.5 | 72.3 | 65.3 | 32.0 | 5.9
RADAR | 88.0 | 76.8 | 71.8 | 84.5 | 66.7 | 14.1 | 53.0 | 88.5
GLTR | 66.9 | 64.3 | 65.7 | 74.0 | 62.0 | 67.3 | 34.8 | 67.2
FastDetectGPT | 74.2 | 77.3 | 70.9 | 76.2 | 76.4 | 77.5 | 63.4 | 74.6
LLMDet | 39.8 | 32.6 | 39.6 | 37.1 | 18.0 | 33.1 | 30.7 | 48.1
Binoculars | 80.7 | 76.7 | 79.4 | 83.7 | 79.1 | 80.1 | 81.0 | 76.6
GPTZero | 58.1 | 62.8 | 57.0 | 71.4 | 74.9 | 70.5 | 69.5 | 67.6
Originality | 88.4 | 83.2 | 85.0 | 90.4 | 87.7 | 87.3 | 75.1 | 82.8
Winston | 72.4 | 54.9 | 68.9 | 70.7 | 94.7 | 72.9 | 64.3 | 68.9
ZeroGPT(*) | 72.2 | 70.6 | 65.1 | 73.3 | 60.3 | 68.6 | 50.0 | 63.7
Table 14: Accuracy score at FPR=5% for detectors across different domains. We see that metric-based methods perform surprisingly well across domains and that detectors can perform surprisingly poorly on unseen domains.

Detector | GPT2 | GPT3 | ChatGPT | GPT4 | Cohere | Cohere-Chat | Mistral | Mistral-Chat | MPT | MPT-Chat | LLaMA-Chat | Total
R-B GPT2 | 84.0 | 74.7 | 65.4 | 42.4 | 42.9 | 61.1 | 45.7 | 65.9 | 49.2 | 45.9 | 68.7 | 59.1
R-L GPT2 | 96.3 | 72.4 | 53.7 | 33.8 | 37.3 | 56.6 | 47.6 | 60.5 | 52.6 | 41.6 | 56.7 | 56.7
R-B CGPT | 36.7 | 54.2 | 66.1 | 30.0 | 31.3 | 49.6 | 17.7 | 69.9 | 17.8 | 53.3 | 70.1 | 44.8
RADAR | 64.7 | 88.6 | 82.1 | 76.0 | 51.4 | 77.2 | 54.1 | 83.6 | 58.0 | 76.5 | 78.5 | 70.9
GLTR | 66.6 | 85.1 | 81.4 | 53.7 | 54.2 | 67.4 | 49.1 | 75.4 | 35.4 | 52.5 | 81.6 | 62.6
F-DetectGPT | 72.1 | 95.4 | 96.1 | 73.9 | 84.7 | 85.1 | 58.2 | 81.3 | 45.3 | 57.1 | 94.0 | 73.6
LLMDet | 48.2 | 40.2 | 18.9 | 27.0 | 32.6 | 35.6 | 31.5 | 35.7 | 28.2 | 21.4 | 55.1 | 35.0
Binoculars | 68.9 | 99.2 | 99.6 | 91.9 | 94.8 | 95.4 | 62.3 | 91.7 | 45.2 | 70.8 | 97.6 | 79.6
GPTZero | 38.8 | 70.1 | 99.4 | 97.1 | 43.9 | 74.6 | 28.9 | 95.6 | 24.8 | 85.9 | 98.5 | 66.5
Originality | 99.1 | 98.2 | 98.2 | 89.9 | 78.9 | 90.6 | 71.0 | 95.5 | 58.1 | 76.2 | 94.7 | 85.0
Winston | 47.6 | 77.8 | 99.6 | 98.8 | 63.6 | 86.2 | 46.1 | 94.9 | 24.8 | 79.2 | 97.5 | 71.0
ZeroGPT(*) | 42.4 | 90.2 | 93.2 | 67.1 | 65.9 | 76.6 | 49.3 | 81.4 | 27.3 | 66.0 | 93.7 | 65.5
Table 15: Accuracy at FPR=5% for detectors on non-adversarial outputs of different models.
We see that base models are more difficult to detect than their chat fine-tuned counterparts and that metric-based methods show impressive cross-model generalization. Asterisks (*) indicate that the detector was unable to achieve the target FPR.

Detector | None | AS | AD | HG | IP | NS | PP | MS | SYN | ULS | WSA | ZWS
RoB-B GPT2 | 59.1 | 55.6 | 37.1 | 7.6 | 56.9 | 55.9 | 68.9 | 43.8 | 71.5 | 18.8 | 45.2 | 99.9
RoB-L GPT2 | 56.7 | 52.4 | 33.2 | 21.3 | 55.1 | 51.7 | 72.9 | 39.5 | 79.4 | 19.3 | 40.1 | 99.9
RoB-B CGPT | 44.8 | 43.3 | 38.0 | 0.0 | 5.2 | 44.3 | 49.2 | 42.1 | 39.6 | 31.7 | 0.1 | 0.0
RADAR | 70.9 | 70.8 | 67.9 | 59.3 | 73.7 | 71.0 | 67.3 | 69.5 | 67.5 | 70.4 | 66.1 | 82.2
GLTR | 62.6 | 61.2 | 52.1 | 24.3 | 61.4 | 59.9 | 47.2 | 59.8 | 31.2 | 48.1 | 45.8 | 97.2
F-DGPT | 73.6 | 71.6 | 64.7 | 51.4 | 72.0 | 68.2 | 71.8 | 70.7 | 34.0 | 60.4 | 64.4 | 98.9
LLMDet | 35.0 | 33.9 | 27.4 | 40.6 | 27.2 | 33.8 | 28.5 | 32.7 | 27.3 | 23.4 | 4.4 | 27.1
Binoculars | 79.6 | 78.2 | 74.3 | 37.7 | 71.7 | 77.1 | 80.3 | 78.0 | 43.5 | 73.8 | 70.1 | 99.1
GPTZero | 66.5 | 64.9 | 61.0 | 66.2 | 66.2 | 65.8 | 64.0 | 65.1 | 61.0 | 56.5 | 66.2 | 66.2
ZeroGPT | 65.5 | 65.4 | 59.7 | 82.4 | 64.9 | 64.7 | 46.7 | 64.7 | 18.8 | 54.5 | 64.2 | 48.0
Originality | 85.0 | 83.6 | 71.4 | 9.3 | 85.1 | 86.0 | 96.7 | 78.6 | 96.5 | 75.8 | 84.9 | 4.9
Winston | 71.0 | 68.9 | 66.9 | 26.3 | 69.8 | 69.0 | 52.6 | 67.5 | 63.6 | 56.8 | 46.8 | 25.0
Table 16: Accuracy score at FPR=5% for all detectors across different adversarial attacks. Abbreviations are: AS: Alternative Spelling, AD: Article Deletion, HG: Homoglyph, IP: Insert Paragraphs, NS: Number Swap, PP: Paraphrase, MS: Misspelling, SYN: Synonym Swap, ULS: Upper Lower Swap, WSA: Whitespace Addition, ZWS: Zero-Width Space Addition.

Domain: Reddit, Title: VW album track list/lyric sheet inquiry? (ID: 25fb9f14-c7c5-4e76-95ac-bf149780f30d)
Prompts: (Chat) Write just the body of a Reddit post titled "VW album track list/lyric sheet inquiry?". (Non-Chat): The following is the full text of a post titled "VW album track list/lyric sheet inquiry?" from reddit.com
Original Reddit Post (Human): I bought a "Modern vampires of the city" on vinyl recently, and I was wondering if anyone had information on the nature of the track list? Its on the backside of the poster, and its a normal tracklist and lyric sheet except there's some things circled and underlined and stuff. I looked it up, and I thought origionally that it was supposed to look like a playbook, like football, but it looks like the marks are intentional to the numbers and stuff so then I thought, maybe it was them planning their album? And they just decided to add some flair to the track list? Either way, im curious, and a quick Google search didn't work, so I figured you guys would know better than me. I would love any input I could get
GPT-2: "Hi, this is a thread for people of any ability to help me with a query related to the following track: "Cherries On The Wind" from the album 'Volkswagen', which was released July 2008 to August 2009. I have a few questions about all of them (my two biggest questions are 'Who sings it?' and 'Who wrote it and why?'): (1) Is the singer named "Cherries" a female? If not, is it a boy, female or something else? (2) Is it a female lead or male supporting singer in the band? (3) Who writes the lyrics which are written in both German and English, in the style of a pop song? Is it a German name or is it the name of a song artist, musician or songwriter? (4) Is there a female vocalist in the band? If so, can you shed some light on the singer? Are there German solo projects by her? If so, can you shed some light on the singer? Are there German solo projects by her? http://www.youtube.com/watch?v=jY9FyMwKvCk&feature=fvrs If you need to ask more question, just add them to the top of the post and I'll answer.
Thanks much! 2 people have voted this message helpful Lyrics: https://www.youtube.com/watch?v=fSdWJ4Wy7k0 GPT-3 Hey all, I’m wondering if anyone might be able to help me out with something. I’m trying to track down a copy of the lyric sheet/track list for the album "Volkswagen." I’ve been able to find a few images online, but they’re all partial and/or low quality. If anyone has a high quality scan or knows where I can find one, I would really appreciate it! Thanks in advance! ChatGPT Hello everyone, I recently got my hands on a vinyl copy of VW’s latest album and I am absolutely loving it! The production, the melodies, and the lyrics are just top-notch. I’ve been obsessively listening to it and can’t get enough. However, I was wondering if anyone has access to or knows where I can find the track list and lyric sheet for the album? I’ve been searching online but haven’t had much luck so far. I would really love to dive deeper into the lyrics and understand the meaning behind some of the songs. If anyone can point me in the right direction or has any information, I would greatly appreciate it. Thanks in advance! GPT-4 Hello fellow VW fans! I’ve recently got my hands on a vintage vinyl copy of one of the older Vampire Weekend albums. However, it didn’t come with a track list or lyric sheet. It’s proving to be a bit of a task to identify some of the tracks and I’d love to sing along to my favourites. If anyone could point me to where I might find a track list and/or a place where I can find the lyrics to the songs, I would greatly appreciate it. Let me know or feel free to message me directly. Thanks for your help, guys! Mistral > This has been bugging me for a bit. I never got my copy of "VW" (or "VW EP" - whichever you want to call it) so I never had a chance to look at the lyric sheet. I know there’s an extended version on iTunes, but I haven’t the funds to purchase it since the EP and "TNWT" came out at the same time. But I would like to know the lyrics to at least "Wonderful Life" (for which I know there’s no lyrics of), "The New Wave", "Song of Our Hits", "What If God Was One Of Us", and "The Hits Just Keep Coming". I’d like to know how the song "What If God Was One Of Us" fits with the storyline (or is it related at all?), as well as the whole thing behind "What Would She Say?", and the ending song that has to be either a spoiler for "VW" ("I Am the Wind"), or something relating to "TNWT" (I think that line from the "TNWT" lyric sheet about a "new power" in the verse between "The Hits Just Keep Coming" and "Song For Our Hits" is what I’m thinking of) Mistral Chat Hey everyone, I just got into The Velvet Underground and I’ve been a huge fan of their work ever since. I’ve been doing a deep dive into their discography and I was wondering if anyone had a track list or lyrics sheet for the album "VW"? I can’t find it anywhere online and it’s driving me crazy. Any help would be greatly appreciated! Thanks in advance! Cohere VW album track list/lyric sheet inquiry? Hello! I was wondering if anyone had scanned/digitized the track list or lyric sheet from the VW album (oil diffuser). I know it’s floating around here somewhere, but damned if I can find it. Thanks! Cohere Chat Check this out! Been waiting for this for ages!! Hey all – I’ve recently fallen in love with VW’s latest album (thank you Reddit for the recommendation!) but I cannot for the life of me find the track list or lyrics anywhere. 
The songs are amazing but still quite new for me, so I have no clue what the lyrics are and wanna know!!! Does anyone have a track list/ lyric sheet that they could share? Any help is appreciated! 12490 MPT I read this on Reddit and think it bears repeating and expanding upon on a site that VW followers frequent, and I figured it behooved me to get it out to you guys ASAP The first batch of test pressings for the new Volcano Witch album, which was recorded during a recent trip to Chicago by VW member John McEntire, will be arriving very soon. Before this happens, the band wanted to know whether or not our fans will be interested in a test pressing lyric sheet. This is a special edition (limited number) item that will only be made available to those supporters that were very instrumental to the creation & completion of the new VW album. The test pressing lyric sheet will feature all 10 tracks with full handwritten lyrics by John, who also took the time to hand write lyrics to two new never heard before VW songs, which will only be available on the lyric sheet & test pressings of the album. If our followers are interested in this special edition item please submit a screenshot of the below image with the word “YES” placed in the comment section of this post. If you’re unable to submit a screenshot simply place the word “YES” in the comment section of this post & I’ll make sure you’re added to list for the special test pressing lyric sheet. All proceeds from the test pressing will be donated to The Center For Discovery, the school my daughter attends, in her honor. . . In the meantime, while the band waits to hear from their fans, we’re giving you the chance to be one of 100 lucky winners to win a signed “Storm Lord” poster that was created by the band’s longtime collaborator Eric Adrian Lee, who also designed & made all the artwork for the new Volcano Witch album. To enter yourself into this giveaway all you have to do is share this post. That’s it, simple as that. We’ll pick the lucky winners on October 2nd & then post the giveaway winners on our social media. So...are you interested in the special test pressing lyric sheet? Source: http://redd.it/30fsl1 Posted at 10:47 PM in Volcano Witch | Permalink | Comments (0) Volcano Witch - "A.M. (Live at the Empty Bottle)" MPT Chat Hey everyone! I’m a big fan of VW and I was wondering if anyone knows where I can find the album track list and lyric sheet for their latest album? I’ve been searching online but can’t seem to find it anywhere. Any help would be much appreciated! Thanks in advance! Llama Chat Hey fellow Redditors, I’m hoping someone can help me out with a little inquiry I have about the Velvet Underground’s album "The Velvet Underground & Nico". I’ve been trying to find a track list/lyric sheet for the album that includes the bonus tracks from the CD reissue, but I can’t seem to find one anywhere. I’ve searched high and low on Google, but all the track lists I come across only include the original track list for the album, without the bonus tracks. I’ve also checked the album’s Wikipedia page and various other websites, but no luck. Does anyone know of a resource or website that has the complete track list/lyric sheet for the album, including the bonus tracks? I’d really appreciate it if someone could point me in the right direction. Thanks in advance for the help, and sorry if this has been asked before (I searched the subreddit but didn’t find anything). 
Table 17: Output generations from each model in RAID for the Reddit post titled “VW album track list/lyric sheet inquiry?” using random sampling (temp=1, p=1) and no repetition penalty. Prompts used are listed in Table 12 12491 Domain: Reviews, Model: Cohere, Decoding: Sampling (No Penalty), (ID: b0aa73c4-ff31-4a43-9472-8fe7a85e2754) Prompt: The following is the full text of a review for the movie "Fast Five" from IMDb.com: Original (No Attack) This is probably my favorite action movie of 2011, can’t wait to see Fast and Furious 6. Bringing back Vin Diesel and the late Paul Walker was a great idea, making this franchise better than ever. The action was great and suspenseful, and the new cast members were amazing. This is a must-see. Alternative Spelling This is probably my favourite action movie of 2011, can’t wait to see Fast and Furious 6. Bringing back Vin Diesel and the late Paul Walker was a great idea, making this franchise better than ever. The action was great and suspenseful, and the new cast members were amazing. This is a must-see. Article Deletion This is probably my favorite action movie of 2011, can’t wait to see Fast and Furious 6. Bringing back Vin Diesel and the late Paul Walker was a great idea, making this franchise better than ever. The action was great and suspenseful, and the new cast members were amazing. This is a must-see. Homoglyph This is probably my favorite action movie of 2011, can’t wait to see Fast and Furious 6. Bringing back Vin Diesel and the late Paul Walker was a great idea, making this franchise better than ever. The action was great and suspenseful, and the new cast members were amazing. This is a must-see. Insert Paragraphs This is probably my favorite action movie of 2011, can’t wait to see Fast and Furious 6. Bringing back Vin Diesel and the late Paul Walker was a great idea, making this franchise better than ever. [¶] The action was great and suspenseful, and the new cast members were amazing. This is a must-see. Number Swap This is probably my favorite action movie of 0346, can’t wait to see Fast and Furious 6. Bringing back Vin Diesel and the late Paul Walker was a great idea, making this franchise better than ever. The action was great and suspenseful, and the new cast members were amazing. This is a must-see. Paraphrase This is surely the best of all the action films in 2011. I’m really looking forward to Fast & Furious 6. I loved that they brought back Vin-Diesel and the late Paul-Walker, it made the series better than ever. The action was suspenseful and exciting and the new actors were all great. This is a must-see. Misspelling This is probably my favorite action movie of 2011, can’t wait to see Fast and Furious 6. Bringing back Vin Diesel and the late Paul Walker was a grat idea, making this franchise better than ever. The action was grate and suspenseful, and the new cast members were amazing. This is a must-see. Synonym Swap This is also my favourite action film of 2011, can’t wait to see Fast and Furious 6. Taking down Vin Diesel and the late Scott Davis was a good thing, doing this franchise better than ever. The action was good and suspenseful, and the new cast members were amazing. This is a must-see. Upper Lower This is probably my favorite action movie of 2011, can’t wait to see Fast and furious 6. Bringing back Vin Diesel and the late Paul walker was a great idea, making this franchise better than ever. The action was great and suspenseful, and the new cast members were amazing. This is a must-see. 
Whitespace This is probably my favorite[ ] action movie of 2011, can’t wait to see Fast and[ ] Furious 6. Bringing[ ] back Vin Diesel[ ] and the late Paul Walker was a great idea,[ ] making this[ ] franchise better[ ] than ever. The action was great and suspenseful, and the new cast members[ ] were amazing. This is a must-see. Zero-Width Space T[U+200B]h[U+200B]i[U+200B]s[U+200B] i[U+200B]s[U+200B] [U+200B]p[U+200B]r[U+200B]o [U+200B]b[U+200B]a[U+200B]b[U+200B]l[U+200B]y[U+200B] [U+200B]m[U+200B]y[U+200B] [U+200B]f[U+200B]a[U+200B]v[U+200B]o[U+200B]r[U+200B]i[U+200B]t[U+200B]e[U+200B] [U+200B]a[U+200B]c[U+200B]t[U+200B]i[U+200B]o[U+200B]n[U+200B] [U+200B]<...> Table 18: Example outputs for each adversarial attack in RAID when applied to the IMDb movie review for “Fast Five” generated by Cohere using random sampling (temp=1, p=1) and no repetition penalty. Blue color indicates the portion of the text that was changed by the attack. Detailed descriptions of each attack along with their effective attack surfaces are listed in Appendix E.4. 12492
Silent Signals, Loud Impact: LLMs for Word-Sense Disambiguation of Coded Dog Whistles

Julia Kruk, Michela Marchini, Rijul Magu, Caleb Ziems, David Muchlinski, Diyi Yang
Georgia Institute of Technology, Stanford University
{jkruk3, rijul.magu, dmuchlinski3}@gatech.edu, {marchini, cziems, diyiy}@stanford.edu

Abstract

Warning: This paper contains content that may be upsetting or offensive to some readers.

A dog whistle is a form of coded communication that carries a secondary meaning to specific audiences and is often weaponized for racial and socioeconomic discrimination. Dog whistling historically originated from United States politics, but in recent years has taken root in social media as a means of evading hate speech detection systems and maintaining plausible deniability. In this paper, we present an approach for word-sense disambiguation of dog whistles from standard speech using Large Language Models (LLMs), and leverage this technique to create a dataset of 16,550 high-confidence coded examples of dog whistles used in formal and informal communication. Silent Signals (huggingface.co/datasets/SALT-NLP/silent_signals) is the largest dataset of disambiguated dog whistle usage, created for applications in hate speech detection, neology, and political science.

1 Introduction

"Ronald Reagan liked to tell stories of Cadillac-driving 'welfare queens' and 'strapping young bucks' buying T-bone steaks with food stamps. In flogging these tales about the perils of welfare run amok, Reagan always denied any racism and emphasized he never mentioned race." — Ian Haney-Lopez (2014)

Dog whistles are coded language which, though seemingly innocuous to the general public, can communicate a covert harmful meaning to a specific in-group (Henderson and McCready, 2018a). Though this coded language appears in all kinds of speech, the idea of the "dog whistle" historically originates in politics (Albertson, 2014; Haney-López, 2014). In the United States, political dog whistles gained popularity in the Civil Rights Era following the landmark Brown vs. Board of Education Supreme Court decision, as overt racism became less acceptable and politicians turned to coded language to maintain racial animus in political discussions while maintaining plausible deniability (Saul, 2044). Dog whistle use has fluctuated in the last six decades, but it remains a consistent signal of a speaker's underlying prejudices, whether in the domain of American politics or otherwise. Their use has been shown to successfully provoke the underlying prejudice of target audiences, and wield racial anxiety to steer public opinion and policy (Drakulich et al., 2020; Wetts and Willer, 2019).

[Figure 1 — "The Nuances of Dog Whistles": example comment "Why do you type like this? It's just oozing soy?" A select in-group will recognize that the speaker used soy with the coded meaning: implying something or someone is liberal, therefore weak and effeminate. The general public may sense that the word soy is used strangely, but will be unaware of the coded meaning of the word in this context.] Figure 1: This figure demonstrates the nuances of dog whistle detection, as a word can be used in a coded or non-coded sense. All illustrations were created using Adobe Firefly.
Improved understanding of dog whistles has applications in content moderation, computational social science, and political science. detecting and explaining coded discriminatory speech is a challenging task for NLP systems, as dog whistles famously evade toxicity and hate speech detection (Magu et al., 2017; Magu and Luo, 2018; Mendelsohn et al., 2023). This is because many dog whistle terms have standard vernacular meanings. Consider the example in Figure 1 on the word “soy," which in most contexts refers to a soybean product, but can also serve as a dog whistle to denigrate liberal or establishment Republican men for perceived feminine attributes, as in “That guy has soy face.” To study this language, prior work has focused on taxonomically organizing and archiving dog whistles with representative examples (Torices, 2021; Mendelsohn et al., 2023; Ryskina et al., 2020; Zhu and Jurgens, 2021). However, dog whistles can also evolve over time in order to remain covert, a process which has only become more rapid in the age of the internet (Dor, 2004; Merchant, 2001). This work presents a large dataset to track examples of dog whistles in their various forms, and help train language models to do the same. This resource can be used to (1) study how dog whistles emerge and evolve (Saul, 2044; Weimann and Am, 2020), (2) uncover ways to predict new dog whistle terms from knowledge of old ones, (3) study the prevalence of dog whistles in natural settings, and (4) improve hate speech and toxicity detection systems. As a preliminary step, this work employs LLMs for dog whistle word-sense disambiguation— a new task. We then apply these architectures to create Silent Signals, which is the largest dataset of coded dog whistle examples. It contains formal dog whistles from 1900-2023 Congressional records, and informal dog whistles from Reddit between 2008-2023. Silent Signals also contains vital contextual information for reliably decoding their hidden meanings and enabling future study of how Dog Whistles are used in discourse. Our contributions include: • The Silent Signals dataset of 16,550 dog whistle examples. • A novel task and verified method for dog whistle word-sense disambiguation. • Experiments with GPT-3.5, GPT-4, Mixtral and Gemini on dog whistle detection. • The Potential Dog Whistle Instance dataset with over 7 million records from informal and formal communication that contain dog whistle key words, and can be used for further scaling Silent Signals. 2 Related Work Hate Speech Prior work categorizes abusive language with two dimensions: target specificity (directed or generalized) and explicitness (explicit or implicit) (Waseem et al., 2017). In addition to detecting of explicit language (Davidson et al., 2017; Nobata et al., 2016), recent work also labels, detects, and explains the latent meaning behind implicitly abusive language (ElSherief et al., 2021; Hartvigsen et al., 2022; Breitfeller et al., 2019; Sap et al., 2020; Zhou et al., 2023), but these works do not primarily focus on dog whistles at scale. Dog Whistles Though there is limited prior NLP research on dog whistles, prior work in linguistics has explored the semantics and pragmatics of dog whistles (Saul, 2044; Torices, 2021; Quaranto, 2022; Perry, 2023), and applied agent-based models to the study of the evolution of dog whistle communication (Henderson and McCready, 2018a, 2020). Mendelsohn et al. 
(2023) produced a glossary of over 300 dog whistles used in both the formal and informal settings, and conducted a preliminary survey of the abilities of GPT-3 in the task of dog whistle definition. We extend upon this initial exploration by breaking down Automatic Dog Whistle Resolution into sub-tasks of varying complexity, and evaluating LLMs that have been shown to preform well on content moderation tasks (Jiang et al., 2024; Buscemi and Proverbio, 2024). The Allen AI Glossary of Dog Whistles (Mendelsohn et al., 2023) is also instrumental in the creation of the Silent Signals dataset presented in this work. Additionally, it is important to note that Dog whistle research in NLP is not limited to American or English-speaking contexts, but extends to coded language in Chinese (Xu et al., 2021) and Swedish (Hertzberg et al., 2022) communication as well. Political Science Implications After the Jim Crow era, once explicitly racist commentary was no longer tolerated (Mendelberg, 2001; Lasch, 2016), dog whistles became part of the GOP’s “Southern Strategy” to maintain racial animus in politics without attracting public ridicule. Although its use dates back to the early 20th century, it is still a very 12494 prominent part of American politics (Drakulich et al., 2020). It is a means of political manipulation that encourages people to act on existing biases and vote for policies against their own interests (Wetts and Willer, 2019; Saul, 2044). Prior work has also highlighted that the communication of different messages to different groups makes inferring policy mandates once a candidate assumes office incredibly problematic (Goodin and Saward, 2005). To this end, longitudinal dog whistle datasets could facilitate the study of political parties’ co-evolution with political, social, and economical events, and improved dog whistle detection could deter ongoing adverse political manipulation. Word Sense Disambiguation Modern Word Sense Disambiguation (WSD) systems can outperform humans (Maru et al., 2022; Bevilacqua and Navigli, 2020; Barba et al., 2021a; Conia and Navigli, 2021; Kohli, 2021). WSD tasks are typically treated as multi-label classification problems for resolving the semantic interpretation of target words in context (Bevilacqua et al., 2021; Barba et al., 2021b). A large body of research has focused on designing systems in supervised settings, leveraging pre-trained language models as foundational frameworks (Maru et al., 2022; Barba et al., 2021a; Scarlini et al., 2020; Blevins and Zettlemoyer, 2020). Notably, recent work has explored the use of LLMs for WSD, with findings pointing to strong performance on benchmark evaluations, but still short of levels attained by state-of-the-art models (Koco´n et al., 2023). The detection of pejorative or abusive uses of taboo lexemes has been framed as a WSD task (Dinu et al., 2021; Pamungkas et al., 2020, 2023), but in these cases, the taboos were overt, and these prior works did not evaluate LLMs. Our study extends the evaluation of LLMs for WSD to contexts where word senses can be deliberately obfuscated or coded. 3 Methods 3.1 Initial Data Collection To explore dog whistle disambiguation in both formal and informal settings, we pull public data from both Reddit and United States Congressional records. We collected Reddit comments from 20052022 in 45 controversial subreddits via the PRAW API and Pushshift archives (Baumgartner et al., 2020). 
In addition to the Stanford Congressional Records dataset (Gentzkow et al., 2018), we use the @unitedstatesproject parser (Judd et al., 2017) Prompt Design for LLM Experiments Automatic Dog Whistle Resolution You are a helpful assistant participating in a content moderation task of dog whistle detection. A dog whistle is { definition of dog whistles}. Does the following sentence contain a dog whistle? If it does, please identify the dog whistle. { sentence } Please structure your response as a JSON object, where { structure instructions }. Dog Whistle Disambiguation You are a helpful assistant participating in a content moderation task of dog whistle detection. A dog whistle is { definition of dog whistles}. The coded meaning of { dog whistle D } is: { definition of D }. Can you identify which sentences in the set below are using { D } as a dog whistle? [ { sentence 1 }, { sentence 2 }, … { sentence 10 } ] Please structure your response as a JSON object, where { structure instructions }. __ – systems message* __ – output structure request Decoding Dog Whistle Definitions You are an objective political scientist aiming to discern the meaning and targeted group of various dog whistles. A dog whistle is { definition of dog whistles}. The following examples all contain the use of the dog whistle { D }. [ { sentence 1 }, { sentence 2 }, … { sentence 10 } ] What is the coded meaning of the dog whistle { D }? What group of people is being covertly or negatively referenced through the coded use of this dog whistle? Figure 2: Visual representation of the different prompt structures used in Automatic Dog Whistle Resolution (Section 4.1) and Word-Sense Disambiguation (Section 4.3) experiments. to compile congressional speeches from January 1900 to September 2023. For more details on data collection see Appendix A. 3.2 The Potential Instance Dataset The Allen AI Glossary of Dogwhistles (Mendelsohn et al., 2023) provides a list of 340 dog whistles with surface forms and examples to seed our keyword search for dog whistles iin the data we collected from Reddit and Congressional Records. We check for the presence of over 1000 surface forms to identify potential “instances" of dog while use. Congressional entries which contain a keyword match are reduced to three sentence long excerpts. When a match is found in Reddit content, the entire submission or comment is retained. Each piece of content is annotated with the first key word found, so there may be more than one dog whistle present in one piece of text. The resulting Potential Instance dataset spans approximately 6 million instances from Reddit 12495 Dataset Description Size Informal Formal Potential Instance Dataset Produced via keyword search for dog whistle terms on data collected from Congressional records and Reddit. Used as input data for the creation of Silent Signals. 6,026,910 1,088,130 Synthetic-Detection Manually annotated dataset of dog whistles examples from the Potential Instance Dataset used for Dog Whistle Resolution. 50/50 split on positive and negative examples. 50 50 Synthetic-Disambiguation Manually annotated dataset where positive and negative examples are grouped by the dog whistle term they contain. Includes 13 distinct dog whistles. Designed specifically for evaluation on the Dog Whistle Disambiguation task. 74 50 Silent Signals Dataset Novel dataset of coded dog whistle examples created by applying the Dog Whistle Disambiguation task on the Potential Instance Dataset. 
13,220 3,330 Table 1: Overview of the datasets used across experiments. comments, 1.1 million instances from Congressional records, and 327 distinct dog whistle. Entries in this dataset may be using the matched dog whistle phrase innocuously or with a coded meaning. At this step in the process, there is no way to separate the two cases. 3.3 Synthetic Datasets for Evaluation We build two synthetic datasets that are manually annotated for evaluation. The first, SyntheticDetection contains 50 positive examples of singleword dog whistle terms from Mendelsohn et al. (2023)’s glossary, and 50 negative examples from Reddit and Congressional content, half of which contain an innocuous use of a dog whistle keyword, and the other half contain no keyword. The second dataset, Synthetic-Disambiguation, contains 124 examples from Reddit and Congressional records which were manually annotated for evaluation. The dataset includes 13 distinct dog whistles, each with a corresponding set of 9-10 examples of this word used in discourse (with the exception of “jogger” which was added later with only 4 instances). These sets contain both coded and non-coded examples. This data was uniquely structured for the contrastive word-sense disambiguation task, where the model is provided a dog whistle, the definition of its coded meaning, and a set of ten sentences that contain that word or term. It is then prompted to separate sentences that use that word with the coded meaning from those that don’t. A breakdown of the datasets used in this study can be found in Table 1. All data was manually annotated by two members of our team with experience in this domain. The annotation was done individually, and only samples with agreement from both annotators were selected. Less than 5 samples were discarded due to disagreement. 4 LLM Experiments 4.1 Automatic Dog Whistle Detection If LLMs can reliably detect and explain political dog whistles with minimal annotation or engineering efforts, this would have significant implications for online moderation and computational social science, as this subtle form of hate speech detection could be at least partially automated at scale (Ziems et al., 2024). If LLMs match or outperform non-expert crowdworkers, then annotation resources could be shifted towards more efficient co-annotation between LLMs and experts (Li et al., 2023). To test this, we evaluate four different models on the Synthetic-Detection dataset in a zero or few-shot manner. Each model was provided with the definition of a political dog whistle and a candidate sentence, and was expected to predict whether the sentence included a dog whistle. If present, the model should then identify the span of text that contained the dog whistle and correctly define it. Candidate models include GPT-3.5 (Brown et al., 2020), GPT-4 (Achiam et al., 2023), Gemini (Google et al., 2023), and Mixtral (Jiang et al., 2024), as these have demonstrated strong performance on content moderation tasks (Jiang et al., 2024; Buscemi and Proverbio, 2024). When prompt engineering on GPT-3.5, we considered 5 different construct definitions and 3 additional phrasings of the prompt. We observed wide variation in performance, as in Mendelsohn et al. (2023), but found that the Wikipedia definition of dog whistle and the following prompt was optimal: “Does 12496 Zero-shot Few-shot Human GPT-3.5 GPT-4 Mixtral Gemini GPT-3.5 GPT-4 Mixtral Gemini Presence "is a dog whistle present?" 
Acc 66.8 80.0 85.0 68.0 81.0 76.0 86.6 81.0 86.7 F1 64.8 83.1 85.7 61.9 80.0 76.0 87.4 80.0 88.3 Identification "identify the dog whistle." Acc 49.0 58.0 59.8 59.0 69.7 65.7 71.1 69.0 75.5 F1 33.6 56.3 48.0 45.3 61.5 61.4 68.2 62.7 76.0 Definition "define the dog whistle" Acc 47.3 52.0 54.6 58.0 66.7 60.6 67.0 67.0 73.5 F1 29.7 46.7 37.1 43.2 56.0 53.0 61.9 59.3 73.5 Table 2: Metric scores on the Automatic Dog Whistle Detection task which surveys LLM and human ability to detect and define dog whistles in context. When presented with a sentence these experiments test the ability of a model/user to determine if the sentence contains a dog whistle and if so, correctly identify and define it. Predictions across all models have a statistical significance of p < 0.01 by chi-squared test, and human predictions have statistical significant of p <= 0.037. this sentence contain a dog whistle? If so, please identify it”. Visualizations of prompt structure can be seen in Figure 2. For additional prompt engineering details, see Appendix C.1. 4.2 Human Baseline for Dog Whistle Detection To ground the performance of LLMs on Automatic Dog Whistle Detection, we design a user study to establish a human baseline performance on this task. The aim is to gauge if LLMs could detect, identify, and define dog whistles better than a general populace. Leveraging the 100 test cases in the Synthetic-Detection dataset (Section 3.3), 62 Amazon Mechanical Turk workers were recruited to complete 720 unique annotations. Each annotation includes a classification on the presence of a dog whistle, and if applicable, identification of the dog whistle and its definition. Workers were expected to select dog whistle terms and definitions in isolation, from a set of 7 options (one of which was "I am not sure / not present in options") 2. Workers were paid $15/h. We vetted participants by inspecting their performance on non-coded negative examples. As half of the negative examples contained general speech, poor performance on these samples was deemed unlikely and indicative of poor quality annotation. This study included 28 individuals who identified as Male, 33 Female, and 2 Transgender, ages ranging from 18-22 to 50. Participants in this study also declared diverse political views. For more information on annotators that contributed to this study, please see Appendix 3. 2This study was approved by the Georgia Institute of Technology Institutional Review Board (IRB) 4.3 Dog Whistle Disambiguation As preliminary investigation showed that LLM performance on Automatic Dog Whistle Detection left much to be desired (see Section 4.1), we explore using these architectures for word-sense disambiguation. We leverage the Synthetic-Disambiguation dataset to evaluate LLMs’ capacity to distinguish contexts in which a keyword appears with a harmful coded meaning from those in which the keyword appears innocuously. The prompt includes the Wikipedia definition of a dog whistle, the dog whistle keyword, and the word’s coded meaning. The model performs classification for each of 10 example instances that contain the keyword, providing for each a label and an explanation for its decision. In an effort to improve the precision scores on the coded dog whistle instances, we simulate an ensemble-like approach where the model is prompted with the same task N consecutive times (as distinct chat completions) 3. Only predictions that have remained consistent over N inferences are kept, the others are discarded. 
We evaluate word-sense disambiguation of dog whistles over N = 1, 3, 5 consecutive inferences, as shown in Figure 3. Specific details of prompt structure can be seen in Figure 2. 5 Results Performance metrics from the Automatic Dog Whistle Detection experiments (Section 4.1) show that GPT-4 performed best on Dog Whistle Presence 3Experiments were run with lower temperature values to prevent inference variation. However, decreasing temperature resulted in decreases in Precision of up to 10 points. 12497 GPT-3.5 turbo GPT-4 Mixtral-7b Instruct Gemini 0 20 40 60 80 100 32.6 68.5 11.3 22.5 73.7 100 69.2 20.9 56.7 6 13.4 1 Inference GPT-3.5 turbo GPT-4 Mixtral-7b Instruct Gemini 31.3 65.7 5.7 4.2 76.5 66.7 100 19.7 50 3 2.1 3 Inference GPT-3.5 turbo 21.1 Dog Whistles Disambiguation Experimental Results 95.7 86.4 GPT-4 Mixtral-7b Instruct Gemini 82 0 0 0 5 Inference F1 Prec Recall 57.1 12.9 71.2 75 96.2 8.5 4.5 Dog Whistle Results Figure 3: Results of Dog Whistle Disambiguation task using the simulated ensemble across N = 1, 3, 5 inferences. In an attempt to compensate for output volatility, for each N-inferences experiment, predictions are only considered if they remained consistent across all N runs. Precision-1 and Recall-1 scores pertain to the positive class of coded dog whistle instances. prediction in the zero-shot setting, and Gemini performed best on all other categories. However, no architecture produced remarkably high metrics on the Dog Whistle Definition task, for which the highest F1-score achieved with Gemini is 73.5. For each model, there is a notable drop in performance as the complexity of the task increases from predicting the presence of a dog whistle, to identifying the dog whistle, and finally, defining it. For many examples, the model may correctly predict that a dog whistle is present, but incorrectly identify other provocative, but non-coded, language to be the dog whistle. Similarly, the model may correctly predict the presence of a dog whistle and correctly identify it in the text, but be unable to define it. This trend is also observed with the human baseline, which decreases with task complexity. Additionally, not that each LLM evaluated in this study out-performed the human baseline. These initial investigations demonstrated that LLMs are unable to reliably identify and explain dog whistles. Since these tasks are not solved, there remains a present need for larger training datasets with more numerous and varied examples of dog whistles. As described in Section 4.3, we explore applying LLMs to the task of word-sense disambiguation via prompting. The hypothesis is that providing the model with a set of examples would enable it to comparatively evaluate text and better disambiguate the coded instances from the non-coded. Although Gemini demonstrates superior performance on Dog Whistle Resolution, GPT-4 achieves highest metric scores across all word-sense disambiguation experiments, especially when consistency in prediction for N = 3 or 5 consecutive inferences is required. Whereas GPT-3.5 and GPT4 respond well to this prompt structure, Gemini and Mixtral do not. Gemini’s performance drastically decreases as the number of inferences increases, which is indicative of the architecture suffering from greater inference variation than other models in the study. Both Gemini and Mixtral are more reluctant to generate output in reference to potentially harmful content. With Gemini, the API explicitly blocked model output with code "block reason: other"4. 
Mixtral would generate a response that expressed its inability to address the task. Examples that contained words such as "terrorists" (Gemini), "groomers" (Gemini), and "fatherless" (Mixtral) were common sensitivities. Most notably, increasing the number of consecutive inferences N in the simulated ensemble approach for Dog Whistle Disambiguation produced a precision score on coded dog whistle examples of 96.2 with GPT-4 (as seen in Figure 3). Although optimizing the precision score comes at the expense of recall, these experiments demonstrated that GPT-4 can be used to create a dataset of high-confidence examples of coded dog whistle use. In Section 6, we use this Dog Whistle Disambiguation method to create the Silent Signals dataset.

4 Outputs from Gemini were still blocked with this code after adjusting model safety settings to block none of the harassment and harm categories.

6 Silent Signals Dataset

Mendelsohn et al. (2023)'s Dog Whistle Glossary documented a diverse collection of dog whistles across informal and formal communication. However, this resource alone does not address the challenges of conducting computational analysis of dog whistle use. Evaluating data based on keyword matches in text does not consider that many of those matches may not be coded uses of the dog whistle. To study the churn of dog whistles over time, their permeation through online communities and political parties, and their proliferation as vehicles for discriminatory speech, there must exist a means of disambiguation.

[Figure 4: bar charts omitted — instance counts by in-group for formal data (U.S. Congressional Records: racist, conservative, Islamophobic, anti-Latino, antisemitic, homophobic, anti-GMO, white supremacist, in-group) and informal data (Reddit comments: racist, white supremacist, antisemitic, transphobic, anti-liberal, Islamophobic, anti-Latino, conservative).] Figure 4: The distributions of dog whistles over in-groups for informal and formal communication in the Silent Signals dataset.

[Figure 5: bar charts omitted — instance counts per year for U.S. Congressional Records (1912-2021) and Reddit comments (2009-2023).] Figure 5: The distributions of dog whistles over time for informal and formal communication in the Silent Signals dataset.

Leveraging the word-sense disambiguation methodology presented in Section 4.3 over 100,000 instances sampled randomly from the Potential Instance dataset, we create the Silent Signals dataset of high-confidence coded dog whistle examples. We utilize the ensemble approach over 3 inferences with GPT-4 (a minimal sketch of this consistency filter is included below). Information on dog whistles which were sampled at lower rates from the Potential Instance dataset can be found in Appendix A.2. Each example in the Silent Signals dataset is annotated with the dog whistle present, dog whistle definition, type (formal or informal), in-group, and date. Congressional examples are also annotated with the chamber, speaker, and party, while Reddit instances are annotated with the subreddit. The dataset contains 16,550 instances across 298 dog whistles and 706 surface forms. Of these, 79.8% are informal instances from Reddit and 20.2% are formal instances from Congressional speeches.
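The consistency filter referenced above — keep an instance only if all N independent inferences agree that the term is used in its coded sense — can be sketched as follows. This is an illustrative outline, not the authors' released pipeline; classify_batch is a hypothetical stand-in for one LLM call over a group of sentences sharing a dog whistle term.

def consistency_filter(instances, classify_batch, n_inferences=3):
    """Keep only instances labeled as coded dog whistle use in every one of N runs.

    `classify_batch(instances)` (hypothetical) returns one boolean per instance,
    True meaning the dog whistle term is predicted to carry its coded meaning.
    """
    runs = [classify_batch(instances) for _ in range(n_inferences)]
    kept = []
    for idx, instance in enumerate(instances):
        labels = {run[idx] for run in runs}
        if labels == {True}:
            # Unanimous positive predictions become high-confidence coded examples;
            # disagreements and unanimous negatives are discarded.
            kept.append(instance)
    return kept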
The earliest dog whistle instance in the dataset dates to June 1, 1900 and the most recent to September 7, 2023. 6.1 Validation In addition to our initial experiments which found a precision on coded dog whistle examples of 95.7%, we manually evaluate a sample of 400 coded dog whistle examples in the Silent Signals dataset. This vetting procedure found a precision of 85.3% amongst the positively predicted instances. However, for a number of these false positives, the word was in fact used as a dog whistle, but the coded meaning did not align with the definition provided in the Allen AI Glossary. For example, the glossary defines "terrorist" as an Islamophobic dog whistle with the coded implication that Muslim people on a whole are a threat. In many instances captured in the Silent Signals dataset, however, "terrorist" is used not as an Islamophobic dog whistle but an anti-Liberal dog whistle. For example: "But they really turned splinter into a gay transpecies hedonist? The terrorists have truly won."5 In this instance, "terrorists" are implied to be liberals who support LGBTQ+ Rights. Taking into account these examples that do not fit the Allen AI definition but show signs of being novel dog whistle use, the accuracy over the vetted sample becomes 89.4%. 5This post was shared in reference to the perceived queerness of the character Splinter in the 2023 movie Teenage Mutant Ninja Turtles: Mutant Mayhem. 12499 1950 1955 1960 1965 1970 1975 1980 1985 1990 1995 2000 2005 2010 2015 2020 Years Civil Rights Movement Civil Rights Act Reagan Administration LA Riots PRWORA Welfare Reform Extension Obama Administration Trump Tax Act BLM and 2020 Election Frequency of Racist Dog Whistles Resolved from Congressional Records 1950 - 2023 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 2022 Years Bathroom Bills Healthcare Restrictions Obergefell v. Hodges Trans Military Ban Title VII Protections Biden Proclaims TDOV Frequency of Transphobic Dog Whistles Resolved for Reddit 2012 - 2022 2013 2016 2017 2018 2019 2020 2021 2022 Years RFK Appointed Head of Vaccine Panel COVID COVID Vaccine Frequency of Antivax Dog Whistles Resolved for Reddit 2013 - 2022 Figure 6: We investigate the use of Racist, Transphobic, and Anti-Vax dog whistles captured by the Silent Signals dataset over time. The graphs in this figure aree annotated with dates of pivotal political and cultural events in the United States. 6.2 Analysis & Characteristics The distribution of dog whistles in the Silent Signals dataset is visualized over in-group categories in Figure 4, and over time in Figure 5. The sharp increase in dog whistles extracted from U.S. Congressional Records after 1960 aligns with the understanding that dog whistle use in politics gained popularity following the Jim Crow era (Mendelberg, 2001; Lasch, 2016). Furthermore, the disproportionately large amount of racist dog whistle detected in U.S. Congressional Records reflects political science research on historical use of dog whistles. Namely, that dog whistles were used to manipulate voter’s racial animus after overt racism was not acceptable (Haney-López, 2014). To demonstrate the utility of the Silent Signals dataset for political science research, we analyze the use of dog whistles over time by ingroup. Specifically, we graph and annotate the use of racist dog whistles in congressional records, as well as Transphobic and Anti-vax dog whistles on Reddit. 
As shown in Figure 6, the trends in dog whistles resolved per year demonstrate remarkable alignment with pivotal cultural and political events. The use of racial dog whistles in Congress increases from the beginning of the Civil Rights Movement, with significant peaks in years when Congress was discussing significant welfare and tax reforms such as the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (PRWORA). We also observe more recent spikes surrounding the 2016 Election, the Trump Tax Act, and the Black Lives Matter movement. Interestingly, we see that until the 1990s dog whistles do not have as large a presence in congressional speeches as might be anticipated, highlighting that political dog whistles were often used in more "public facing" venues such as speeches and campaign ads as opposed to on the Congress floor. With respect to Transphobic dog whistles, we see alignments in usage with significant points in the transgender and LGBTQ+ rights movements, including Obergefell v. Hodges (the Supreme Court decision that required states to license same-sex marriages), the passing of Bathroom Bills (state legislation that denies access to public restrooms by gender identity), and the enactment of the Transgender Military Ban during Donald Trump's presidency. The analysis of Anti-vax dog whistle usage on Reddit is, unsurprisingly, centered on COVID, with a peak in 2021 following the availability of the COVID vaccine. The small exception to this is, interestingly, a brief increase in usage in 2017 surrounding the potential appointment by Donald Trump of known anti-vaxxer Robert F. Kennedy to head a vaccine panel.

7 Discussion

7.1 Evolution of Coded Meanings

The validation of the Silent Signals dataset enabled a salient observation about dog whistle use as it appears in Silent Signals. As discussed above, there were multiple cases in which a dog whistle was used with a covert meaning different from the definition present in the Allen AI glossary. Though this phenomenon was not frequent, it was far more common in colloquial instances than formal ones. This highlights the ways in which the study of neology is vital to the understanding of dog whistles, given the rapid pace of linguistic change in online communities.

7.2 Referential Speech

Figure 7 ("Does Intention make a Dog Whistle?"): Intentional Dog Whistle: "American citizens at home will face an increased threat at the hands of terrorists lying in wait for the chance to cripple our economy and derail our war machine." (Robert Byrd, Senate, 2003). Referential Speech: "There is disquiet in the Muslim world that the U.S. is poised to turn its terrorist campaign into a war against Islam." (Cynthia McKinney, House, 2001). Figure 7: This figure demonstrates the difference between an intentional dog whistle and referential speech, using Congressional data in Silent Signals.

Among the 10.6% of false positives identified during dataset validation, the vast majority were examples of referential speech or unintentional dog whistle use. These are cases where the term was used with the coded meaning, but without the explicit goal of discriminating against the target group. Figure 7 demonstrates the difference with examples from the Silent Signals dataset. "Terrorist" in both cases references the notion that Muslim people as a whole are a threat. Whereas Senator Robert Byrd uses this word with this meaning intentionally, Representative Cynthia McKinney is attempting to draw attention to the War on Terror in US politics becoming increasingly Islamophobic.
There is no strong consensus on how much intent is required to make a dog whistle. Saul (2044) and Quaranto (2022) acknowledge unintentional dog whistles as a category of this coded speech, whereas Wetts and Willer (2019) and Haney-López (2014) regard intent as an integral component.

7.3 Applications

The Silent Signals dataset provides numerous opportunities for further study. It can be used to track dog whistle use over time, model the overlap between dog whistle use in formal and informal contexts, and investigate patterns of language used in various communities, virtual and otherwise. From a Political Science perspective, it provides opportunities for analysis of dog whistle use along partisan and speaker-based axes. It can be used to explore how dog whistle use corresponds with social and political movements in the United States. From a Machine Learning perspective, Silent Signals can be used for training and/or fine-tuning on tasks ranging from hate speech detection to emergent dog whistle identification.

8 Conclusion

Dog whistles are used to promote discrimination in both formal and informal environments. The use of this coded language allows speakers to maintain plausible deniability and bypass hate speech detection systems when used online. This work presents the largest survey to date of LLM capabilities with respect to the automatic resolution of dog whistles. Experimental results demonstrate that LLMs remain unreliable in the dog whistle resolution task. A hindrance to research in this space has been the unavailability of large datasets of coded dog whistle examples. We show that despite the overall inconsistencies of LLMs on the automatic dog whistle resolution task, with the proper methodology, they are adept at disambiguating coded dog whistles from standard language. We leverage this capability to create the Silent Signals dataset, which contains 16,550 dog whistle examples and 298 distinct dog whistles. We believe that this resource will be integral to the continued study of dog whistles, with applications in content moderation, computational social science, and political science, on tasks such as analysis of trends in dog whistle use, dog whistle resolution, hate speech detection, and identification of emergent dog whistles.

9 Limitations

Language naturally evolves as it permeates through communities. In the case of dog whistles, this can result in a broadening or changing of target groups, terms, or coded meanings. While the Allen AI glossary is a foundational work without which this research would not be possible, it likely does not encompass all dog whistles, use cases, and definitions. As such, though the Allen AI glossary and the Silent Signals dataset both provide helpful tools for the continued research of dog whistles, the rapidly evolving nature of coded language can render these resources outdated and incomplete. Further, there is the question of whether dog whistles are always used intentionally or simply perpetuate harmful tropes the speaker may be unaware of. Seal (2018) explores this idea in the context of the dog whistle "bankers": "Were the Populists' attacks on greedy bankers—some of which used terms like Shylock or invoked the Rothschilds—meant to focus anger and hatred on the Jews, or was the association so sublimated that the Populists didn't even realize they were blowing a dogwhistle?" Though we include such use cases in the Silent Signals dataset, it remains unclear to what extent intentionality defines the dog whistle.
Additionally, please note that the dog whistles documented in the Allen AI glossary are not evenly distributed over all in-groups or between formal and informal speech. For example, racist dog whistles, which are most common in both the informal and formal portion of Silent Signals are also the most common in-group in the glossary. White supremacist dog whistles, on the other hand, are resolved at a very low rate from the congressional data have only 5 formal entries present in the glossary. As a result, the distribution of dog whistles over in-group in Silent Signals will reflect these biases and is not necessary reflective of dog whistle usage as a whole. Specific breakdowns of the Allen AI glossary by ingroup and formal/informal sphere can be seen in Appendix E. From a computational perspective, our method achieved high precision on the Dog Whistle Disambiguation Task. However, optimizing on precision comes at the expense of recall. Improving the efficiency of word-sense disambiguation with LLMs remains an open problem. Additionally, using GPT4 in the creation of Silent Signals subjects it any biases in the model. We recognize that we may have resolved specific types of dog whistles more frequently than others. With respect to establishing a human baseline, a natural extension of this work would be to investigate how LLM performance would compare to domain expert prediction, and how annotator attributes correlate with understanding of political coded language. Observations from our concise user study suggest that human performance on this task may vary with context behind the coded language and the community the individual belongs to. Due to resource constraints, we were not able to collect enough annotations from various communities to derive any statistically significant trends on these axes. We hope to address these pertinent investigations in future work. Finally, although we collect over 7 million potential dog whistle instances, due to resource constraints, we only sample 100,000 instances for the creation of the Silent Signals dataset. Therefore, we release the Potential Dog Whistle Dataset to enable the open sourced expansion of the Silent Signals dataset. Acknowledgements We are thankful to the anonymous reviewers, as well as members of SALT Lab for their helpful feedback on various stages of the draft. This work was partially supported by a grant from Meta. Caleb Ziems is supported by the NSF Graduate Research Fellowship under Grant No. DGE-2039655. References Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. ArXiv preprint, abs/2303.08774. Bethany L. Albertson. 2014. Dog-whistle politics: Multivocal communication and religious appeals. Political Behavior, 37(1):3–26. Edoardo Barba, Tommaso Pasini, and Roberto Navigli. 2021a. ESC: Redesigning WSD with extractive sense comprehension. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4661–4672, Online. Association for Computational Linguistics. Edoardo Barba, Luigi Procopio, Roberto Navigli, et al. 2021b. Consec: Word sense disambiguation as continuous sense comprehension. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1492–1503. 12502 Jason Baumgartner, Savvas Zannettou, Brian Keegan, Megan Squire, and Jeremy Blackburn. 2020. 
The pushshift reddit dataset. In Proceedings of the international AAAI conference on web and social media, volume 14, pages 830–839. Michele Bevilacqua and Roberto Navigli. 2020. Breaking through the 80% glass ceiling: Raising the state of the art in word sense disambiguation by incorporating knowledge graph information. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2854–2864, Online. Association for Computational Linguistics. Michele Bevilacqua, Tommaso Pasini, Alessandro Raganato, and Roberto Navigli. 2021. Recent trends in word sense disambiguation: A survey. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI-21. International Joint Conference on Artificial Intelligence, Inc. Prashanth Bhat and Ofra Klein. 2020. Covert Hate Speech: White Nationalists and Dog Whistle Communication on Twitter, page 151–172. Springer International Publishing. Terra Blevins and Luke Zettlemoyer. 2020. Moving down the long tail of word sense disambiguation with gloss informed bi-encoders. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1006–1017, Online. Association for Computational Linguistics. Luke Breitfeller, Emily Ahn, David Jurgens, and Yulia Tsvetkov. 2019. Finding microaggressions in the wild: A case for locating elusive phenomena in social media posts. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1664–1674, Hong Kong, China. Association for Computational Linguistics. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Alessio Buscemi and Daniele Proverbio. 2024. Chatgpt vs gemini vs llama on multilingual sentiment analysis. ArXiv preprint, abs/2402.01715. Simone Conia and Roberto Navigli. 2021. Framing word sense disambiguation as a multi-label problem for model-agnostic knowledge integration. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3269–3275, Online. Association for Computational Linguistics. Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the international AAAI conference on web and social media, volume 11, pages 512–515. Liviu P Dinu, Ioan-Bogdan Iordache, Ana Sabina Uban, and Marcos Zampieri. 2021. A computational exploration of pejorative language in social media. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3493–3498. Danny Dor. 2004. From englishization to imposed multilingualism: Globalization, the internet, and the political economy of the linguistic code. Public culture, 16(1):97–118. 
Kevin Drakulich, Kevin H Wozniak, John Hagan, and Devon Johnson. 2020. Race and policing in the 2016 presidential election: Black lives matter, the police, and dog whistle politics. Criminology, 58(2):370– 402. Mai ElSherief, Caleb Ziems, David Muchlinski, Vaishnavi Anupindi, Jordyn Seybolt, Munmun De Choudhury, and Diyi Yang. 2021. Latent hatred: A benchmark for understanding implicit hate speech. ArXiv preprint, abs/2109.05322. Matthew Gentzkow, Jesse M. Shapiro, and Matt Taddy. 2018. Congressional record for the 43rd-114th congresses: Parsed speeches and phrase counts. Robert E Goodin and Michael Saward. 2005. Dog whistles and democratic mandates. The Political Quarterly, 76(4):471–476. Gemini Team Google, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. 2023. Gemini: a family of highly capable multimodal models. ArXiv preprint, abs/2312.11805. Ian Haney-López. 2014. Dog whistle politics: How coded racial appeals have reinvented racism and wrecked the middle class. Oxford University Press. Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, and Ece Kamar. 2022. Toxigen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection. ArXiv preprint, abs/2203.09509. R. Henderson and Elin McCready. 2018a. How dogwhistles work. In New Frontiers in Artificial Intelligence, pages 231–240, Cham. Springer International Publishing. 12503 R. Henderson and Elin McCready. 2018b. How Dogwhistles Work, page 231–240. Springer International Publishing. Robert Henderson and Elin McCready. 2020. Towards functional, agent-based models of dogwhistle communication. In Proceedings of the Probability and Meaning Conference (PaM 2020), pages 73–77, Gothenburg. Association for Computational Linguistics. Niclas Hertzberg, Robin Cooper, Elina Lindgren, Björn Rönnerstrand, Gregor Rettenegger, Ellen Breitholtz, and Asad Sayeed. 2022. Distributional properties of political dogwhistle representations in Swedish BERT. In Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH), pages 170–175, Seattle, Washington (Hybrid). Association for Computational Linguistics. Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al. 2024. Mixtral of experts. ArXiv preprint, abs/2401.04088. Nicholas Judd, Dan Drinkard, Jeremy Carbaugh, and Lindsay Young. 2017. congressional-record: A parser for the congressional record. Jan Koco´n, Igor Cichecki, Oliwier Kaszyca, Mateusz Kochanek, Dominika Szydło, Joanna Baran, Julita Bielaniewicz, Marcin Gruza, Arkadiusz Janz, Kamil Kanclerz, et al. 2023. Chatgpt: Jack of all trades, master of none. Information Fusion, page 101861. Harsh Kohli. 2021. Training bi-encoders for word sense disambiguation. In International Conference on Document Analysis and Recognition, pages 823–837. Springer. Christopher N Lasch. 2016. Sanctuary cities and dogwhistle politics. New Eng. J. on Crim. & Civ. Confinement, 42:159. Minzhi Li, Taiwei Shi, Caleb Ziems, Min-Yen Kan, Nancy Chen, Zhengyuan Liu, and Diyi Yang. 2023. Coannotating: Uncertainty-guided work allocation between human and large language models for data annotation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 1487–1505. Rijul Magu, Kshitij Joshi, and Jiebo Luo. 2017. Detecting the hate code on social media. 
Rijul Magu and Jiebo Luo. 2018. Determining code words in euphemistic hate speech using word embedding networks. In Proceedings of the 2nd Workshop on Abusive Language Online (ALW2), pages 93–100, Brussels, Belgium. Association for Computational Linguistics. Marco Maru, Simone Conia, Michele Bevilacqua, and Roberto Navigli. 2022. Nibbling at the hard core of word sense disambiguation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4724–4737. Tali Mendelberg. 2001. The race card: Campaign strategy, implicit messages, and the norm of equality. Princeton University Press. Julia Mendelsohn, Ronan Le Bras, Yejin Choi, and Maarten Sap. 2023. From dogwhistles to bullhorns: Unveiling coded rhetoric with language models. ArXiv preprint, abs/2305.17174. Guy Merchant. 2001. Teenagers in cyberspace: an investigation of language use and language change in internet chatrooms. Journal of research in reading, 24(3):293–306. Merriam-Webster. 2017. What is the political meaning of ’dog whistle’. In Merriam-Webster.com dictionary. Chikashi Nobata, Joel R. Tetreault, Achint Thomas, Yashar Mehdad, and Yi Chang. 2016. Abusive language detection in online user content. In Proceedings of the 25th International Conference on World Wide Web, WWW 2016, Montreal, Canada, April 11 15, 2016, pages 145–153. ACM. Endang Wahyu Pamungkas, Valerio Basile, and Viviana Patti. 2020. Do you really want to hurt me? predicting abusive swearing in social media. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 6237–6246, Marseille, France. European Language Resources Association. Endang Wahyu Pamungkas, Valerio Basile, and Viviana Patti. 2023. Investigating the role of swear words in abusive language detection tasks. Language Resources and Evaluation, 57(1):155–188. Samuel L Perry. 2023. Mating call, dog whistle, trigger: Asymmetric alignments, race, and the use of reactionary religious rhetoric in american politics. Sociological Theory, 41(1):56–82. Anne Quaranto. 2022. Dog whistles, covertly coded speech, and the practices that enable them. Synthese, 200(4):330. Maria Ryskina, Ella Rabinovich, Taylor BergKirkpatrick, David Mortensen, and Yulia Tsvetkov. 2020. Where new words are born: Distributional semantic analysis of neologisms and their semantic neighborhoods. In Proceedings of the Society for Computation in Linguistics 2020, pages 367–376, New York, New York. Association for Computational Linguistics. Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, and Yejin Choi. 2020. Social bias frames: Reasoning about social and power implications of language. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5477–5490, Online. Association for Computational Linguistics. 12504 Jennifer Saul. 2044. Dogwhistles, political manipulation, and philosophy of language. ArXiv preprint, abs/44951156/. Bianca Scarlini, Tommaso Pasini, and Roberto Navigli. 2020. With more contexts comes better performance: Contextualized sense embeddings for all-round word sense disambiguation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3528–3539, Online. Association for Computational Linguistics. Andrew Seal. 2018. The Chill Embargo of the Snow: On Anti-Semitism, Populism, and the Present | Society for US Intellectual History — s-usih.org. [Accessed 16-02-2024]. José Ramón Torices. 2021. Understanding dogwhistles politics. 
Theoria: An International Journal for Theory, History and Foundations of Science, 36(3):321– 339. Zeerak Waseem, Thomas Davidson, Dana Warmsley, and Ingmar Weber. 2017. Understanding abuse: A typology of abusive language detection subtasks. In Proceedings of the First Workshop on Abusive Language Online, pages 78–84, Vancouver, BC, Canada. Association for Computational Linguistics. Gabriel Weimann and Ari Ben Am. 2020. Digital dog whistles: The new online language of extremism. International Journal of Security Studies, 2(1):4. Rachel Wetts and Robb Willer. 2019. Who is called by the dog whistle? experimental evidence that racial resentment and political ideology condition responses to racially encoded messages. Socius, 5:2378023119866268. Wikipedia. 2024. Dog whistle (politics) — Wikipedia, the free encyclopedia. http: //en.wikipedia.org/w/index.php?title=Dog% 20whistle%20(politics)&oldid=1198148639. [Online; accessed 15-February-2024]. Canwen Xu, Wangchunshu Zhou, Tao Ge, Ke Xu, Julian McAuley, and Furu Wei. 2021. Blow the dog whistle: A Chinese dataset for cant understanding with common sense and world knowledge. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2139–2145, Online. Association for Computational Linguistics. Xuhui Zhou, Hao Zhu, Akhila Yerukola, Thomas Davidson, Jena D. Hwang, Swabha Swayamdipta, and Maarten Sap. 2023. COBRA frames: Contextual reasoning about effects and harms of offensive statements. In Findings of the Association for Computational Linguistics: ACL 2023, pages 6294–6315, Toronto, Canada. Association for Computational Linguistics. Jian Zhu and David Jurgens. 2021. The structure of online social networks modulates the rate of lexical change. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2201–2218, Online. Association for Computational Linguistics. Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, and Diyi Yang. 2024. Can large language models transform computational social science? Computational Linguistics, 50(1):237–291. 12505 A Data Collection Details A.1 Reddit Subreddits included as a part of the Potential Instance and Silent Signals datasets. 4chan Antiwork AsianMasculinity aznidentity BlackPeopleTwitter Braincels CBTS_stream ChapoTrapHouse Chodi climateskeptics conservatives consoom conspiracy Coontown CringeAnarchy European FemaleDatingStrategy frenworld GenderCritical GenderCynical GenZedong GoldandBlack GreatAwakening HermanCainAward incels IncelsInAction KotakuInAction MensRights MGTOW MillionDollarExtreme Mr_Trump NoFap NoNewNormal Portugueses prolife Russia RussiaPolitics SocialJusticeInAction The_Donald TheRedPill TrueUnpopularOpinion TruFemcels TumblrInAction UncensoredNews walkaway WhitePeopleTwitter A.2 Keyword Matching Considerations There were a select few dog whistles which occurred at incredibly high rates in the non-coded sense. Due to resource constraints, we did not want to expend large amounts of compute on dog whistles which where most commonly used innocuously. As such, a select few dog whistles were excluded or sampled at a lower rate for the creation of the Silent Signals dataset. In the Congressional dataset, the dog whistles "XX", "federal reserve", "based", and "single" were excluded due to their high rate of innocuous usage and the fact that initial surveys indicated no coded uses. 
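The surface-form keyword matching discussed in this appendix could be sketched as follows. The data format, function name, and case handling are illustrative assumptions rather than the authors' released pipeline; only the list of excluded Congressional terms comes from the text above.

```python
# A simplified sketch of surface-form keyword matching for building a set of
# potential dog whistle instances, with the Congressional exclusions above.
import re
from typing import Iterable, Iterator, Tuple

EXCLUDED_CONGRESSIONAL = {"XX", "federal reserve", "based", "single"}

def match_potential_instances(
    documents: Iterable[str],
    surface_forms: Iterable[str],
    excluded: set = EXCLUDED_CONGRESSIONAL,
) -> Iterator[Tuple[str, str]]:
    """Yield (document, surface_form) pairs for every non-excluded keyword hit."""
    excluded_lower = {form.lower() for form in excluded}
    patterns = {
        form: re.compile(r"\b" + re.escape(form) + r"\b", re.IGNORECASE)
        for form in surface_forms
        if form.lower() not in excluded_lower
    }
    for document in documents:
        for form, pattern in patterns.items():
            if pattern.search(document):
                yield document, form
```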
In the Reddit dataset, the dog whistles "based" and "single" were down-sampled based on the frequency of their non-coded use in the Potential Instance dataset. Importantly, even with this down-sampling, the Silent Signals dataset still contains coded instances of both "based" and "single".

B User Study on Dog Whistle Detection

The table below outlines the demographic attributes of participants in this study. Note that participants span many ages, genders, and political views. Due to the nature in which coded language like dog whistles is disseminated, it is likely that attributes such as political affiliation and preferred social media platform may correlate with literacy in specific kinds of coded speech. However, this user study did not collect annotations from enough individuals to make statistically significant findings in this area.

Table 3 (Annotator Demographics):
- Gender: Woman 33, Man 28, Transgender 2
- Age: 18-22: 1, 22-30: 22, 30-40: 16, 40-50: 17, 50+: 7
- Ethnicity: White 53, Asian 3, Native American 3, Black 2, Latino 2
- Religion: Christianity 48, Atheist/Agnostic 13, Other 2
- Political Views: Liberal 24, Conservative 22, Moderate 12
- Education: High School 5, Associates/Trade 5, Bachelors 35, Masters 18
- Preferred Social Media: Instagram 31, Twitter 8, Reddit 7, TikTok 3, Other 2
Table 3: This table reports the demographic attributes of annotators that contributed to the human baseline on Dog Whistle Detection. See Section 4.2.

C Prompting

The structure of prompts for all LLM experiments can be found in Figure 8.

C.1 Automatic Dog Whistle Detection

During the prompt engineering stages of our work, we ran a number of experiments on GPT-3.5 to determine which combination of dog whistle definition and prompting question would produce the best results. Specifically, we tested 5 dog whistle definitions and 3 questions. Results of these experiments can be seen in Table 5 and Table 4, respectively. Due to their high rates of dog whistle resolution, the definition "A dogwhistle is the use of coded or suggestive language in political messaging to garner support from a particular group without provoking opposition." and the prompt "Does the following sentence contain a dog whistle? If it does, please identify the dog whistle." were selected and used throughout our work.

Figure 8 (Prompt Design for LLM Experiments):
Automatic Dog Whistle Resolution: "You are a helpful assistant participating in a content moderation task of dog whistle detection. A dog whistle is {definition of dog whistles}. Does the following sentence contain a dog whistle? If it does, please identify the dog whistle. {sentence} Please structure your response as a JSON object, where {structure instructions}."
Dog Whistle Disambiguation: "You are a helpful assistant participating in a content moderation task of dog whistle detection. A dog whistle is {definition of dog whistles}. The coded meaning of {dog whistle D} is: {definition of D}. Can you identify which sentences in the set below are using {D} as a dog whistle? [{sentence 1}, {sentence 2}, ... {sentence 10}] Please structure your response as a JSON object, where {structure instructions}."
Decoding Dog Whistle Definitions: "You are an objective political scientist aiming to discern the meaning and targeted group of various dog whistles. A dog whistle is {definition of dog whistles}. The following examples all contain the use of the dog whistle {D}. [{sentence 1}, {sentence 2}, ... {sentence 10}] What is the coded meaning of the dog whistle {D}? What group of people is being covertly or negatively referenced through the coded use of this dog whistle?"
Figure 8: Visual representation of the different prompt structures used in Automatic Dog Whistle Resolution (Section 4.1), Word-Sense Disambiguation (Section 4.3), and Decoding Dog Whistle Definition (Appendix F) experiments.
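For concreteness, the sketch below instantiates the Automatic Dog Whistle Resolution template from Figure 8. The wording follows the figure (lightly normalized); the exact JSON-structure instructions and the function name are illustrative assumptions, not the authors' released code.

```python
# A sketch of instantiating the Automatic Dog Whistle Resolution prompt from
# Figure 8; the JSON keys at the end are assumed structure instructions.
DOG_WHISTLE_DEFINITION = (
    "the use of coded or suggestive language in political messaging to garner "
    "support from a particular group without provoking opposition"  # Wikipedia (2024) definition
)

def detection_prompt(sentence: str) -> str:
    return (
        "You are a helpful assistant participating in a content moderation task "
        "of dog whistle detection. "
        f"A dog whistle is {DOG_WHISTLE_DEFINITION}. "
        "Does the following sentence contain a dog whistle? "
        "If it does, please identify the dog whistle.\n"
        f"{sentence}\n"
        "Please structure your response as a JSON object with the keys "
        "'contains_dog_whistle' and 'dog_whistle'."  # assumed structure instructions
    )
```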
Our experiments with GPT-3.5 informed our selection of the optimal definition of the term "dog whistle". In further stages of this work, however, we adapted prompt structures to the other models used in the study.

Table 4 (analysis of GPT-3.5 output across 3 prompting questions; columns: No Dog Whistle Detected / Incorrect Dog Whistle Identified / Correct Dog Whistle, Incorrect Definition / Correct Dog Whistle and Definition):
- "Does the following sentence contain a dog whistle?": 20.0 / 24.5 / 28.2 / 26.4
- "Does the following sentence contain a dog whistle? If it does, please identify the dog whistle.": 8.0 / 19.0 / 22.0 / 51.0
- "Does the following sentence contain a dog whistle? If it does, please identify the dog whistle and describe what it secretly means.": 7.1 / 20.2 / 23.23 / 49.5
Table 4: Analysis of GPT-3.5 output across 3 prompting questions. Given it had the highest rate of dog whistle resolution, the second prompt was selected as the prompting question for the automatic dog whistle resolution task.

Table 5 (analysis of GPT-3.5 output across 5 dog whistle definitions; same columns as Table 4):
- "A dogwhistle is an expression that has different meanings to different audiences." (Albertson, 2014): 7.8 / 29.7 / 23.4 / 39.1
- "A dogwhistle is a word or phrase that means one thing to the public at large, but that carry an additional, implicit meaning only recognized by a specific subset of the audience." (Bhat and Klein, 2020): 15.9 / 22.2 / 22.2 / 39.7
- "A dogwhistle is a term that sends one message to an outgroup while at the same time sending a second (often taboo, controversial, or inflammatory) message to an ingroup." (Henderson and McCready, 2018b): 11.1 / 27.0 / 23.8 / 38.1
- "A dogwhistle is a coded message communicated through words or phrases commonly understood by a particular group of people, but not by others." (Merriam-Webster, 2017): 17.5 / 25.4 / 22.2 / 34.9
- "A dogwhistle is the use of coded or suggestive language in political messaging to garner support from a particular group without provoking opposition." (Wikipedia, 2024): 6.5 / 25.8 / 25.8 / 41.9
Table 5: Analysis of GPT-3.5 output across 5 dog whistle definitions. Given it had the lowest rate of detecting no dog whistles and the highest rate of correctly resolving dog whistles, the Wikipedia definition was selected as the definition used throughout the rest of our experiments.

D LLM Behavioral Trends

In the process of conducting the experiments described in Section 4, the following behavioral trends were observed for the models evaluated. We provide this information as a guide for practitioners who may seek to conduct similar investigations:
1. GPT struggled with performance when output structures were requested. Specifically, we saw performance decrease by 3-5 points when output was requested to be formatted in JSON or list form.
2. When asked to provide its reasoning, we witnessed a 5-10 point increase in performance across models.
3. Certain models are more or less amenable to certain prompt structures. Specifically, Gemini and Mixtral struggled greatly with multi-example prompts, where multiple instances had to be handled in a single run (for example, in the word-sense disambiguation task, when multiple instances needed to be categorized).
4. Gemini was only usable for this task after all user safety blocks had been disabled. Even with these blocks disabled, there were still a number of cases in which the model blocked output by throwing an error message.
5. Mixtral was only cooperative once "This is a content moderation task" was included in the prompt.

E Glossary Analysis

The Allen AI Glossary of Dogwhistles contains 340 English-language dog whistles from informal and formal spheres. As the glossary is designed as a tool to document and define dog whistles, there is neither an even split between informal and formal dog whistles nor an even distribution amongst ingroups. In total there are 193 formal and 147 informal dog whistles documented in the Allen AI glossary. The distribution of both formal and informal dog whistles by ingroup can be seen in Figure 9. These uneven distributions are important to keep in mind when viewing the distribution of dog whistles by ingroup in the Potential Instance and Silent Signals datasets, as some ingroups may have higher rates of representation in the datasets because of their high representation in the Allen AI glossary.

F Further Dog Whistle Definition Experiments

Following our initial survey of LLM performance on automatic dog whistle resolution, we explored means of improving the architectures' ability to decode hidden meanings of dog whistles. To do so, we provide the model with additional context in the form of multiple coded examples of a specific dog whistle from the Synthetic-Disambiguation dataset. Specifically, the model is given a definition of what a dog whistle is, the dog whistle to be evaluated, and a set of 3-7 coded examples of the dog whistle in context, and is asked to return the coded meaning of the dog whistle. For specific prompting details see Figure 8. As a point of comparison, we run a parallel experiment in which no example dog whistle instances are provided, as a means to gauge the effect that additional context has on the LLMs' ability to accurately define dog whistles. This experiment is run on the Synthetic-Detection dataset and exclusively with GPT-4, as this model was most amenable to the multi-example setting.
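The in-context variant of this experiment could be instantiated as in the sketch below, which builds the "Decoding Dog Whistle Definitions" prompt from Figure 8 with 3-7 coded examples. The wording mirrors the figure (lightly normalized); the variable names and the example formatting are illustrative assumptions.

```python
# A sketch of the in-context "Decoding Dog Whistle Definitions" prompt from
# Figure 8; naming and formatting choices here are assumptions.
from typing import List

DOG_WHISTLE_DEFINITION = (
    "the use of coded or suggestive language in political messaging to garner "
    "support from a particular group without provoking opposition"  # Wikipedia (2024) definition
)

def decoding_prompt(dog_whistle: str, examples: List[str]) -> str:
    example_block = "\n".join(f"- {sentence}" for sentence in examples)
    return (
        "You are an objective political scientist aiming to discern the meaning "
        "and targeted group of various dog whistles. "
        f"A dog whistle is {DOG_WHISTLE_DEFINITION}. "
        f"The following examples all contain the use of the dog whistle {dog_whistle}.\n"
        f"{example_block}\n"
        f"What is the coded meaning of the dog whistle {dog_whistle}? "
        "What group of people is being covertly or negatively referenced through "
        "the coded use of this dog whistle?"
    )
```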
However, when counting only fully correct responses, there is very little difference between results when only a dog whistle was presented and results when we provided the dog whistle and 3-7 coded instances of its use. When including partially correct definitions, the addition of examples had greater impact on model output. Best results were found when prompting the model to identify both the definition and target group, while the model struggled most to identify only the targeted group of a given dog whistle. % Fully % Correct Correct (w/ Incomplete) no context Definition 69.2 84.6 Targeted Group 53.8 69.2 Definition and Group 61.5 84.6 in context Definition 69.2 92.3 Targeted Group 53.8 69.2 Definition and Group 69.2 92.3 Table 6: Ability of GPT-4 to accurately define dog whistles and their target group. No context experiments present only the dog whistle while in context experiments present the dog whistle along with 3-7 coded examples of its use. Partially correct responses may identify part but not all of the definition or target group or else fail to underline the hateful and harmful nature of the given dog whistle. 12509
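The two columns of Table 6 can be recovered directly from the 0-2 manual scores, as in the small sketch below; the variable names are our own, and we assume scores are stored as integers (0 incorrect, 1 incomplete, 2 correct).

```python
# A tiny sketch of aggregating manual 0-2 scores into the Table 6 columns.
from typing import List, Tuple

def table6_columns(scores: List[int]) -> Tuple[float, float]:
    n = len(scores)
    pct_fully_correct = 100 * sum(s == 2 for s in scores) / n           # "% Fully Correct"
    pct_correct_with_incomplete = 100 * sum(s >= 1 for s in scores) / n  # "% Correct (w/ Incomplete)"
    return pct_fully_correct, pct_correct_with_incomplete
```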
2024
675
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12510–12548 August 11-16, 2024 ©2024 Association for Computational Linguistics On the Representational Capacity of Neural Language Models with Chain-of-Thought Reasoning Franz Nowak* Anej Svete* Alexandra Butoi Ryan Cotterell {franz.nowak, anej.svete, alexandra.butoi, ryan.cotterell}@inf.ethz.ch Abstract The performance of modern language models (LMs) has been improved by chain-of-thought (CoT) reasoning, i.e., the process of generating intermediate results that guide the model towards a final answer. A possible explanation for this improvement is that CoT reasoning extends an LM’s computational power, as RNNs and transformers with additional scratch space are known to be Turing complete. Comparing LMs to Turing machines, however, introduces a category error—Turing machines decide language membership, whereas LMs define distributions over strings. To bridge this gap, we formalize CoT reasoning in a probabilistic setting. We present several results on the representational capacity of recurrent and transformer LMs with CoT reasoning, showing that they can represent the same family of distributions over strings as probabilistic Turing machines. https://github.com/rycolab/cot-lms 1 Introduction Motivated by how humans solve complex problems by noting down intermediary results, Wei et al. (2022b) introduced chain-of-thought (CoT) reasoning.1 CoT reasoning helps language models (LMs) solve reasoning tasks by allowing them to store intermediary results in a scratch space that is not part of the final output. The empirical success of CoT reasoning has made it an established component of modern neural LMs (Nye et al., 2021; Wei et al., 2022a,b; Suzgun et al., 2022; Kojima et al., 2023; Wies et al., 2023)—seemingly overnight. Such empirical success motivates a thorough understanding of the abilities of CoT-augmented LMs. While existing theoretical treatments have shed light on some aspects of this framework, we are still far from a concrete theoretical understanding of CoT (Feng et al., 2023). CoT reasoning can be interpreted as encouraging LMs to perform additional sequential computation *Equal contribution. 1We use the more general term CoT reasoning over the original term CoT prompting as prompting is just one way to elicit CoT reasoning (Wang and Zhou, 2024). Probabilistic unrestricted languages Unbounded precision Transformers Thm. 4.5 GGGGGGGGGGGGGB F GGGGGGGGGGGGG Thm. 4.6 Probabilistic Turing machines E GGGGGGGGGGGGG GGGGGGGGGGGGGC Thm. 4.4 Unbounded precision Elman RNNs Probabilistic regular languages Probabilistic Finite State Automata Fixed-precision Elman RNNs Thm. 4.2 GGGGGGGGGGGGGB F GGGGGGGGGGGGG Thm. 4.1 Fixed-precision Transformers D GGGGGGGGGGGGG Thm. 4.3 Neural architecture with CoT reasoning Corresponding model of computation Figure 1: A schematic overview of our main results, showing which neural language models extended with CoT reasoning (left) correspond to which probabilistic models of computation (right). steps while storing the result of that computation. Naturally, this suggests that CoT reasoning should be formalized in a manner that exploits our understanding of well-known, stateful models of computation. Theory of computation provides a natural toolbox for this avenue of exploration. 
Indeed, the internal configurations of many neural language models have been previously linked to intermediate steps in neural LMs (Pérez et al., 2021; Merrill and Sabharwal, 2024; Feng et al., 2023). A language model is definitionally a distribution over strings from some fixed alphabet.2 This definition is somewhat at odds with the way existing work has investigated the representational capacity of CoT-augmented LMs. Firstly, most results in this area concern unweighted language recognition, which is different from the inherently probabilistic task of language modeling. Secondly, Turing completeness results such as Pérez et al.’s (2021) construction—a seminal result showing the Turing completeness of transformer LMs— usually require additional symbols not present in the original alphabet to correctly simulate a Turing machine. Because the output alphabets of the transformer LM and the Turing machine it simulates fundamentally differ, it is difficult 2An alphabet is a finite, non-empty set. 12510 to discuss their equivalence. This motivates a more fine-grained treatment of the representational capacity of neural LMs in which: (1) LMs are treated as probabilistic models of computation, i.e., they assign weights to strings rather than deciding language membership, and (2) language model equivalence is defined on the output string level, and the analysis takes into account the additional information required to be encoded in additional outputs to achieve the equivalence. We address the first point by analyzing CoTaugmented LMs using probabilistic models of computation, which provide a convenient way of describing families of distributions over strings with standard models of sequential reasoning. To address the second point, we define a new type of relationship between language models, which we call regular reducibility. Intuitively, an LM is regularly reducible into another one if the strings generated by the latter are sufficiently simple transformations of those generated by the former. An instance of such a transformation is the deletion of the intermediary computation steps of CoT reasoning. Formalizing CoT reasoning in this framework, we find that it increases the computational power in both RNN and transformer LMs. Concretely, we find that CoT-augmented constant-precision RNN LMs are equivalent to probabilistic finitestate automata (PFSAs). This is in contrast to the constant-precision RNN LMs without CoT reasoning, which are equivalent to deterministic PFSAs (Svete and Cotterell, 2023). Additionally, we show how Turing-complete linear-precision RNN LMs can be thought of as performing CoT reasoning. Finally, we show that both linearly bounded precision RNN LMs and logarithmically bounded precision decoder-only transformer LMs with CoTreasoning can simulate any probabilistic Turing machine. Taken together, our results frame CoT reasoning in pure language modeling terms and describe its intuitive and formal connections to probabilistic models of computation. 2 Preliminaries In our analysis, we assume rational (rather than real) arithmetic for all definitions and computations. 2.1 Language Models A formal language L is a subset of the Kleene closure Σ∗of some alphabet Σ. We call an element y of Σ∗a string. We denote the empty string by ε, and we assume that ε /∈Σ. A discrete semimeasure over Σ∗is a function µ: Σ∗→[0, 1] such that P y∈Σ∗µ (y) ≤1 (Bauwens, 2013; Icard, 2020). 
If the semimeasure of all strings sums to one, i.e., ∑_{y∈Σ*} µ(y) = 1, then µ is called a probability measure.3 Finally, for any alphabet Σ, we define the set Σ_ε def= Σ ∪ {ε}.

Definition 2.1. A language model (LM) p is a semimeasure over Σ*. If p is a probability measure, it is called a tight language model.

Definition 2.2. Two LMs p and q are weakly equivalent if p(y) = q(y) for all y ∈ Σ*.

Most modern LMs are autoregressive, i.e., they define p(y) through conditional distributions of the next symbol given the string produced so far and the probability of ending the string, i.e.,

p(y) def= p(EOS | y) ∏_{t=1}^{|y|} p(y_t | y_{<t}).  (1)

Here, EOS denotes the special end-of-string symbol, which specifies that the generation of a string has halted. The inclusion of EOS allows (but does not guarantee) that a language model p, autoregressively, constitutes a probability measure over Σ* (Du et al., 2023); a model defined as in Eq. (1) may sum to less than 1 in a pathological case. For any alphabet Σ, we define the set Σ̄ def= Σ ∪ {EOS}. The conditional probability distributions are usually defined based on vectorial representations of y_{<t} computed by a language encoder enc: Σ* → ℝ^D (Chan et al., 2024).

Definition 2.3. A representation-based LM is any LM that can be written as an autoregressive language model (Eq. (1)) where the conditional distributions over the next symbol y_t ∈ Σ̄ are given by

p(y_t | y_{<t}) def= f(E enc(y_{<t}))_{y_t},  (2)

where enc: Σ* → ℝ^D is a language encoder, E ∈ ℝ^{|Σ̄|×D} is an output matrix, and f is a projection function.4

3This definition differs from Li and Vitányi's (2008), who instead define semimeasures over prefix strings.
4A common choice for f is the softmax. Since our analyses use rational arithmetic, we instead opt for sparsemax (Martins and Astudillo, 2016). However, all of our results can be extended to the use of the more common softmax function through the use of log activations and the extended real numbers ℝ ∪ {−∞, ∞} (Svete et al., 2024).

2.2 Regular Language Models

Probabilistic finite-state automata are a well-understood probabilistic computational model.

Definition 2.4. A probabilistic finite-state automaton (PFSA) is a tuple (Σ, Q, λ, ρ, δ) where Σ is an alphabet, Q is a finite set of states, δ ⊆ Q × Σ × ℚ_{≥0} × Q is a finite set of weighted transitions, where we write transitions (q, y, w, q′) ∈ δ as q --y/w--> q′, and λ, ρ: Q → ℚ_{≥0} are functions that assign each state its initial and final weight, respectively. Moreover, δ, λ, and ρ satisfy ∑_{q∈Q} λ(q) = 1 and, for all states q ∈ Q, ∑_{(q --y/w--> q′) ∈ δ} w + ρ(q) = 1.

Next, we will define some basic concepts related to PFSAs. A PFSA A = (Σ, Q, λ, ρ, δ) is deterministic if |{q | λ(q) > 0}| = 1 and, for every q ∈ Q, y ∈ Σ, there is at most one q′ ∈ Q such that q --y/w--> q′ ∈ δ with w > 0.⁵ Any state q where λ(q) > 0 is called an initial state, and if ρ(q) > 0, it is called a final state. A path π of length N is a sequence of subsequent transitions in A, denoted as

q_1 --y_1/w_1--> q_2 --y_2/w_2--> q_3 ⋯ q_N --y_N/w_N--> q_{N+1}.  (3)

The yield of a path is s(π) def= y_1 ⋯ y_N. The prefix weight w̃ of a path π is the product of the transition and initial weights, whereas the weight of a path additionally has the final weight multiplied in. In symbols, this means

w̃(π) def= ∏_{n=0}^{N} w_n,  (4)
w(π) def= ∏_{n=0}^{N+1} w_n,  (5)

with w_0 def= λ(q_1) and w_{N+1} def= ρ(q_{N+1}). We write Π(A) for the set of all paths in A and we write Π(A, y) for the set of all paths in A with yield y.
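To ground these definitions, the self-contained sketch below computes a path's weight (Eqs. (4)-(5)) and the stringsum A(y) of Eq. (6), which is introduced next. The dictionary-based encoding of λ, ρ, and δ is our own illustrative choice, not notation from the paper; weights are kept as exact rationals, as assumed throughout.

```python
# A small sketch of path weights (Eqs. (4)-(5)) and the stringsum (Eq. (6)).
from fractions import Fraction
from typing import Dict, List, Tuple

# initial, final: state -> weight; transitions: state -> [(symbol, weight, next_state)]

def path_weight(initial: Dict, final: Dict, path: List[Tuple]) -> Fraction:
    """w(pi) for a path given as a list of transitions (q, y, w, q')."""
    weight = initial[path[0][0]]            # w_0 = lambda(q_1)
    for (_, _, w, _) in path:               # transition weights w_1 ... w_N
        weight *= w                          # running product is the prefix weight of Eq. (4)
    return weight * final[path[-1][3]]      # multiply in w_{N+1} = rho(q_{N+1}), giving Eq. (5)

def stringsum(initial: Dict, final: Dict, transitions: Dict, y: List[str]) -> Fraction:
    """A(y): the total weight of all paths yielding y, via dynamic programming."""
    forward = dict(initial)                 # prefix weight of paths ending in each state
    for symbol in y:
        nxt: Dict = {}
        for state, prefix in forward.items():
            for (sym, w, next_state) in transitions.get(state, []):
                if sym == symbol:
                    nxt[next_state] = nxt.get(next_state, Fraction(0)) + prefix * w
        forward = nxt
    return sum((w * final.get(state, Fraction(0)) for state, w in forward.items()), Fraction(0))
```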
The sum of weights of all paths that yield a certain string y ∈Σ∗is called the stringsum, which we write as A (y) def= X π∈Π(A,y) w (π) . (6) The stringsum gives the probability of the string y. This way, PFSAs induce a particularly well-understood family of LMs. 5In this paper, we do not distinguish between a transition for a given symbol with weight w = 0 and the absence of a transition for that symbol. That is, we assume there always exists a transition q y/w −−→q′ ∈δ for any q, q′ ∈Q and y ∈Σ, albeit possibly with w = 0. Such a choice turns out to be useful in our technical exposition. Definition 2.5. An LM p is a regular LM if there exists a PFSA A whose induced language model pA is weakly equivalent to p. PFSAs and non-determinism. Although PFSAs share many properties with unweighted (booleanweighted) finite-state automata, one important difference relates to determinization. In the unweighted case, the class of deterministic and nondeterministic FSAs are equivalent, i.e., any nondeterministic FSA has an equivalent deterministic FSA that accepts the same language. This result, however, does not hold for PFSAs: There exist PFSAs that admit no deterministic equivalent (Mohri, 1997; Buchsbaum et al., 2000), meaning that non-deterministic PFSAs are strictly more expressive than deterministic ones. Computing string probabilities under nondeterminism. Autoregressive LMs (cf. Eq. (1)) and PFSAs fall under the larger framework of models that specify probability distributions implicitly (Icard, 2020).6 However, in autoregressive neural LMs with fixed precision, only one sequence of computational actions can yield a particular string, meaning that they can only model deterministic weighted regular languages (Svete and Cotterell, 2023). On the other hand, non-deterministic PFSAs compute string probabilities by additively combining the probabilities of all paths that yield the same string, and hence lie outside the grasp of such neural models.7 As we show later, CoT reasoning provides a principled way to overcome this limitation and allow fixed-precision neural models to simulate non-deterministic automata. 2.3 Regular Functions We now define a finite-state machine that, in addition to scanning, also outputs strings. Definition 2.6. A finite-state transducer (FST) is a 6-tuple T = (Q, Σ, Ξ, I, F, δ), where Q is a finite set of states, Σ is an alphabet of input symbols, ∆is an alphabet of output symbols, I, F ⊆Q are sets of initial and final states, respectively, and δ ⊆Q × Σε × ∆ε × Q is a set of transitions. Similar to PFSA transitions, we write FST transitions of the form (q, x, y, q′) ∈δ as q x: y −−→q′. 6Other examples of such models include hidden Markov models, which are equivalent to PFSAs (Icard, 2020). 7Note that RNN LMs with linearly bounded precision can simulate non-deterministic PFSAs in real-time by encoding a probability distribution over the PFSA’s current state in the hidden state of the RNN (Svete et al., 2024). 12512 Finally, we give the following definition which we will use to formalize CoT reasoning. Definition 2.7. A regular relation is a relation ϕ ⊆ Σ∗× ∆∗that is representable by an FST. If ϕ is a (partial) function, it is called a regular function. Like all relations, regular relations ϕ ⊆Σ∗×∆∗ have a trivial inverse ϕ−1 ⊆∆∗× Σ∗. 2.4 Turing Machines We consider the following definition of a probabilistic Turing machine. Definition 2.8. 
A probabilistic Turing machine (PTM) is a two-tape machine with a working tape and an output tape specified by the 6-tuple M = (Q, Σ, Γ, δ, qι, qφ), where Q is a finite set of states, Σ is an output alphabet, Γ is a tape alphabet including the blank symbol ⊔, qι, qφ ∈Q are the initial and final states, and δ ⊆Q × Γ × Σε × {L, R} × Q≥0 × Q × Γ is a rationally weighted transition relation. L and R signify the PTM head moving left (L) or right (R) on the tape after a transition. We write transitions (q, γ, y, d, w, q′, γ′) ∈δ as (q, γ) y,d/w −−−→(q′, γ′). Moreover, we require that for any given q ∈Q, γ ∈ Γ, the weights satisfy P (q,γ) y,d/w −−−→(q′,γ′)∈δ w = 1. A transition should be interpreted as follows: When M in state q, reads γ on the working tape, writes y on the output tape, writes γ′ on the working tape, move the head in direction d on the working tape. Each computation step randomly selects a transition according to its weight w.8 This definition of Turing machines straightforwardly induces a semimeasure over strings (Nowak et al., 2023, Remark 2.2). It is sometimes easier to prove claims about Turing machines through another equivalent machine (Hopcroft et al., 2001). The probabilistic two-stack pushdown automaton is one example (Nowak et al., 2023). See App. C.3 for an overview. 2.5 Recurrent Neural Language Models Recurrent neural LMs are LMs whose conditional probabilities are given by an RNN.9 8For more details, see App. C.1. 9Throughout this paper, we will focus on Elman RNNs (Elman, 1990) as they are the easiest to analyze and a special case of more common networks, e.g., those based on long short-term memory (LSTM; Hochreiter and Schmidhuber, 1997) and gated recurrent units (GRUs; Cho et al., 2014). Definition 2.9. An Elman RNN R = (Σ, σ, D, U, V, b, η) is an RNN with the following hidden state recurrence: h0 = η (t = 0), (7a) ht = α (Uht−1 + Vr(yt) + b) (t > 0), (7b) where ht ∈QD is the state vector10 at time step t, η ∈QD is an initialization parameter, yt ∈Σ is the input symbol at time step t, r: Σ →QR is a symbol representation function, U ∈QD×D and V ∈QD×R are parameter matrices, b ∈QD is a bias vector, and α: QD →QD is an element-wise, non-linear activation function.11 Because ht hides the symbols consumed by the Elman RNN, we also use the evocative notation h(y) to denote the result of the application of Eq. (7b) over the string y = y1 · · · yt. The notation h(y) makes it clear that an RNN LM implicitly defines a language encoder. Definition 2.10. A representation-based LM is called an Elman LM if its representation function is defined by the hidden state of an Elman RNN enc (y<t) def= h (y<t). The most common choice for the projection function f is the softmax, whose limitation is that it implies the LM has full support, i.e., an Elman LM with a softmax projection assigns positive probability to all strings in Σ∗. To construct non-fullsupport Elman LMs, we instead use the sparsemax (Martins and Astudillo, 2016): sparsemax(x) def= argmin z∈∆N−1 ∥z −x∥2 2. (8) In contrast to the softmax function, sparsemax is the identity on ∆N−1, i.e., we have sparsemax(x) = x for x ∈∆N−1. 2.6 Neural Networks and Numerical Precision All implementations of LMs on modern hardware require representations to be fixed-precision floating-point numbers or arbitrary-precision (rational) numbers. 
In the case of arbitrary precision, an important consideration in processing strings y ∈Σ∗is the number of bits required to store the representations and how the number of bits scales with the length of the string, |y|. This motivates the following definition of precision. 10Throughout this paper all vectors are column vectors. 11Common examples of α include the Heaviside function H(x) def = 1 {x > 0}, the sigmoid function σ(x) def = 1 1+exp(−x), and the ReLU (x) def = max (0, x). 12513 Definition 2.11. The precision ψ (y) of a representation-based neural LM is the number of bits required to represent the entries of enc (y): ψ (y) def= max d∈[D] min p,q∈N, p q =enc(y)d ⌈log2 p⌉+ ⌈log2 q⌉. (9) We say that a representation-based LM is of • constant precision if ψ (y) = O (1), i.e., if ψ (y) ≤C for all y ∈Σ∗and some C ∈R, • logarithmically bounded precision if ψ (y) = O (log |y|), i.e., if there exist T 0 ∈N and C ∈R such that for all y ∈Σ∗with |y| ≥T 0, ψ (y) ≤C log2 |y|, • linearly bounded precision if ψ (y) = O (|y|), i.e., there exist T 0 ∈N and C ∈R such that for all y ∈Σ∗with |y| ≥T 0, ψ (y) ≤C|y|, and • unbounded precision if ψ (y) cannot be bounded by a function of |y|. The constructions in the existing literature range from constant to unbounded precision. The RNNs considered by Svete and Cotterell’s (2023) encoding of deterministic PFSAs, for example, results in an RNN that is of constant precision. Weiss et al. (2018), Merrill (2019) and Merrill et al. (2020) consider models with logarithmically bounded precision. In contrast, articles treating the Turing completeness of neural networks (Siegelmann and Sontag, 1992; Nowak et al., 2023) require unbounded precision to be able to represent the fact that a Turing machine may fail to halt. Naturally, our constructions of Turing complete LMs will also require unbounded precision. 2.7 Transformer Language Models Transformer LMs compute the conditional distributions p (yt | y<t) by means of self-attention. Because transformer LMs necessitate the precise introduction of multiple sub-components, we postpone the full introduction to App. C.4 and only review the basics here. We borrow much of the notation and formalization from Svete and Cotterell (2024). A transformer is a composition of multiple transformer layers, each of which implements the attention mechanism. The attention mechanism works as follows. It takes a query vector q ∈RD and two matrices: The matrix K ∈RN×D of keys and the matrix V ∈RN×D of values and computes a weighted average of the value vectors based on the compatibilities of the key vectors to the query vector, as scored by a scoring function f. Attention weights are computed by normalizing the scores f (q, k1) , . . . , f (q, kt). The choice of the normalization function has concrete implications on representational capacity (Hao et al., 2022; Svete and Cotterell, 2024). We focus on the hard attention projection function. Definition 2.12. Hard attention is computed with the hardmax projection function hardmax (x)d def= ( 1 m if d ∈argmax (x) 0 otherwise, (10) for d ∈ [D], where x ∈ RD and m def= | argmax (x) | is the cardinality of the argmax set. A transformer layer uses the attention mechanism followed by a position-wise MLP12 to compute augmented representations zt of the input representations Xt = x⊤ 1 ; · · · ; x⊤ t  ∈Rt×D. The query qt, the keys Kt, and values Vt are all transformations of the input representations Xt. 
Initially, Xt are computed by some static representation function of the symbols and their positions. Multiple transformer layers are stacked into a transformer, which computes the (deep) contextual representations of all symbols in the string. The contextual representation of the final symbol in the string is then used to define the representation function enc of a transformer LM. 3 CoT Reasoning and Weak Equivalence In this section, we argue that CoT reasoning can be seen as a way of comparing the input language of a formal automaton with the output language of an autoregressive LM. We then introduce a formalization of CoT reasoning that allows us to characterize the representational capabilities of CoT-augmented LMs in terms of their weak equivalence to wellstudied weighted automata. 3.1 Equivalence and Augmented Alphabets Autoregressive LMs generate strings by outputting individual symbols y ∈Σ one at a time until EOS is generated. The resulting string y is the output of the LM and the outputs generated in this manner implicitly define the distribution of the LM. Models studied in existing work on the representational capacity of neural LMs, however, 12For more details, see App. C.4 12514 often do not (only) emit symbols from Σ. Rather, they also emit symbols from some larger alphabet ∆that includes symbols that encode additional information required to simulate a given formal model of computational. For example, the symbols generated by Pérez et al.’s (2021) transformer contain both the output symbols as well as (the changes to) the configuration of the Turing machine. It is not difficult to see how generating strings from an augmented alphabet can be seen as a form of CoT reasoning. The outputs not intended to be a part of the final output can simply be seen as the result of the additional computational steps performed by the CoT-augmented LM. These outputs are later removed in post-processing and only the string with symbols from our target alphabet Σ remains. In this sense, Pérez et al.’s (2021) construction works with a form of CoT reasoning without explicitly mentioning it. This connection was made explicit in concurrent work by Merrill and Sabharwal (2024). Outputting additional information is not regarded as an issue when the task is to simulate (unweighted) Turing machine runs on a given input string because what matters, in that case, is only whether the model accepts or rejects the input (Siegelmann and Sontag, 1992; Pérez et al., 2021, inter alia). However, when considering the LM induced by a PTM, we do care about the alphabet the distribution over strings is over. Thus, given an LM over Σ∗, it follows that neural LMs outputting additional information cannot define the same distribution, i.e., they cannot be weakly equivalent. Nevertheless, the ability to output additional information while generating a string seems natural and useful; it is the heart of CoT reasoning. And, indeed, models that can only output symbols that are part of the final string are restricted to doing real-time computation (Weiss et al., 2018; Nowak et al., 2023; Svete et al., 2024). 3.2 Regular Reducibility We now formalize the notion of CoT reasoning through the following definition which allows a model to output additional information not considered part of the final output: Definition 3.1. An LM p over ∆is regularly reducible to a LM q over Σ if there exists a regular function ϕ: ∆∗→Σ∗such that q ◦ϕ is weakly equivalent to p. 
We can use the function ϕ to map the strings sampled from an LM, ones additionally encoding the intermediary steps of computation, into the final output, i.e., strings y ∈Σ∗. Definition 3.2. We say an alphabet ∆satisfies the Σ-augmentation condition for an alphabet Σ if ∆⊆Σε × Γε \ {(ε, ε)} for some alphabet Γ. This means if we care about strings from Σ∗ and we have an LM over the closure of a Σaugmented alphabet ∆, then each symbol the LM outputs encodes is either an output symbol from Σ, an intermediate computation symbol from Γ, or both.13 The computation symbols can then easily be removed by applying the per-symbol projection function ϕ ((y, γ)) def= y, lifted to a non-lengthincreasing homomorphism over strings ϕ: ∆∗→ Σ∗to the LM’s output element-wise.14 This function can be implemented by a simple single-state transducer (making it a regular function): 0 ∆∋(y, γ) : y ∈Σ 3.3 Properties of Regular Reducibility To motivate our formalization, in this section, we show several properties of regular reducibility that will allow us to reason about the representational capacity of CoT-augmented LMs later in §3.4. Particularly, as we will see, there exists a strong connection between regular reducibility and the addition of non-deterministic choices in the model. To exemplify this concretely, the next theorem shows that regular reducibility allows us to model non-deterministic PFSAs with deterministic ones, which, as discussed in §2.2, cannot be done with just deterministic PFSAs in general. Theorem 3.1. Let A = (Σ, Q, δ, λ, ρ) be a PFSA. Then, there exists a deterministic PFSA A′ over the alphabet Σ × Q and with the state space Σ × Q that is regularly reducible to A. Proof. See App. E. ■ A similar connection exists for PPDAs (see App. E). Thm. 3.1 captures a crucial property of regular reducibility: It allows us to simulate non-determinism with a deterministic device. As 13While in practice the CoT and the final output come from the same alphabet, we use separate ones for ease of notation and analysis. This is without loss of generality; see App. D. 14A non-length-increasing homomorphism is a function h: ∆∗→Σ∗where h(ε) = ε, h(x) = y for x ∈∆ and y ∈Σε, and that further satisfies h(x1x2 · · · xT ) = h(x1)h(x2) · · · h(xT ). 12515 An LM p generates strings from ∆∗⊆(Σ × Q)∗: x1 = q0, y1, q1, q2, . . . , qN, yT−1, yT ∼p x2 = q′ 0, q′ 1, y1, q2, . . . , yT , q′ M−1, q′ M ∼p The function ϕ removes the CoT steps: ϕ (x1) = ϕ (x2) = y1, y2, . . . , yT−1, yT ∼p Figure 2: An LM p can generate strings from a stateaugmented alphabet ∆. The intermediate outputs qi allow it to condition on the previous states of the computation. By post-hoc removing the intermediate outputs with ϕ, we obtain a CoT-augmented LM p over Σ∗ that generates more human-readable outputs. exemplified in the cases of regular LMs, this can concretely increase the representational capacity of a model class, as in the case of Thm. 3.1. On the other hand, regular reducibility never decreases the representational capacity of a model class, since ϕ can always be set to the identity function. Due to the close connection between neural LMs and determinism (Svete and Cotterell, 2023), one could hope that a similar increase in capabilities could be achieved for neural LMs augmented with CoT reasoning as well. In §3, we show that this is indeed the case. 3.4 Regular Reducibility and CoT Reasoning After introducing the notion of regular reducibility and presenting some of its core properties, we now use it to define CoT-augmented LMs. 
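Before stating that definition, the following is a small numerical sanity check of the construction behind Thm. 3.1, assuming a made-up two-state PFSA; it is only an illustration, not the proof (which is in App. E). It verifies that marginalizing the state-augmented deterministic automaton A′ over the ϕ-preimages of a string recovers the stringsum of the original non-deterministic PFSA A.

```python
import itertools
from collections import defaultdict

# A made-up non-deterministic PFSA over Σ = {a, b} with states Q = {0, 1}:
# from state 0, reading 'a' may move to state 0 or to state 1.
Q = [0, 1]
lam = {0: 1.0, 1: 0.0}                      # initial weights λ
rho = {0: 0.2, 1: 0.5}                      # final weights ρ
delta = {                                    # (q, y, q') -> transition weight
    (0, "a", 0): 0.3, (0, "a", 1): 0.3, (0, "b", 0): 0.2,
    (1, "b", 1): 0.5,
}

def A(y):
    """A(y): sum over all accepting paths labelled y (forward algorithm)."""
    alpha = dict(lam)
    for sym in y:
        nxt = defaultdict(float)
        for (q, s, q2), w in delta.items():
            if s == sym:
                nxt[q2] += alpha.get(q, 0.0) * w
        alpha = dict(nxt)
    return sum(alpha.get(q, 0.0) * rho[q] for q in Q)

def A_prime_marginal(y):
    """(A' ∘ ϕ⁻¹)(y): the deterministic PFSA A' of Thm. 3.1 reads strings over
    Σ × Q, and its accepting paths correspond one-to-one to state sequences
    q_0 ... q_T of A, so summing over the ϕ-preimages of y amounts to
    summing over all state sequences."""
    total = 0.0
    for qs in itertools.product(Q, repeat=len(y) + 1):
        w = lam[qs[0]] * rho[qs[-1]]
        for t, sym in enumerate(y):
            w *= delta.get((qs[t], sym, qs[t + 1]), 0.0)
        total += w
    return total

for y in ["", "a", "ab", "abb", "ba"]:
    assert abs(A(y) - A_prime_marginal(y)) < 1e-12
    print(repr(y), A(y))
```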
Definition 3.3. Given alphabets Σ and ∆where ∆ satisfies the Σ-augmentation condition, an LM p over Σ∗is called a CoT-augmented LM induced by LM p over ∆∗if p is regularly reducible from p . That is, there exists a regular function ϕ: ∆∗→Σ∗ such that p def= p ◦ϕ−1. Def. 3.3 defines a CoT-augmented LM p through the LM p that generates strings from ∆∗ and then applies the regular function ϕ to the generated strings to obtain strings from Σ∗. This allows p to output additional information while still defining a probability distribution over strings from Σ∗(see Fig. 2). The weight of a string y ∈Σ∗under p is then computed as the sum over the weights of all strings in the preimage of ϕ, analogously to Eq. (6): p (y) = X x∈ϕ−1(y) p(x). (11) This is the approach we take in §4.2.2. BOS y1 · · · · · · · · · yt−1 ? · · · q0 q1 qt−1 qt BOS q0 y1 q1 · · · yt−2 qt−2 yt−1 qt−1 ? · · · p(qt,yt|qt−1) Figure 3: If y≤t uniquely determines a subpath in the PFSA, the current state qt can be uniquely determined without knowing the previous state (top). If, however, the substring y≤t can lead to multiple states, the relevant next-symbol distribution cannot be determined. Storing the sampled states as part of the output fixes this by keeping track of the sampled PFSA subpath (bottom). 4 CoT and Representational Capacity We now connect the notions introduced in §3 to the representational capacity of neural LMs. We begin in §4.1 by showing how CoT reasoning endows autoregressive LMs with non-determinism by using scratch space to keep track of the current branch of execution. We then use similar principles to emulate probabilistic Turing machines with Elman RNNs (§4.2.1) and transformers (§4.2.2). Neural LMs and formal models of computation. Probabilistic models of computation are often presented as autoregressive generators of strings that implicitly define probability (semi)measures (Icard, 2020). This interpretation is particularly important when addressing non-determinism, because it allows for multiple executions (for example, in the case of PFSAs, multiple different paths) generating the same string. Here, we treat neural LMs as autoregressive generators of strings and show that, by defining identical next-symbol distributions, they implicitly define the same semimeasures over strings as classical probabilistic models of computation. 4.1 Neural LMs and Regular LMs We first discuss the connection between CoTaugmented neural LMs and PFSAs. 4.1.1 Recurrent Neural LMs Minsky’s (1954) construction provided one of the first connections between a neural network and a formal computational model. It showed that RNNs can emulate (deterministic) FSAs, where the RNN accepts string by activating a particular 12516 neuron after reading the string. The relationship in the probabilistic case was explored by Svete and Cotterell (2023), who show the equivalence of pure Elman RNNs (without CoT reasoning) and deterministic PFSAs.15 This illustrates an important distinction between the deterministic and non-deterministic frameworks and thus a discrepancy between general PFSAs and RNN LMs. Intuitively, the discrepancy comes from the fact that there is no non-determinism in the recurrence of an RNN, which is thus unable to capture the possibly non-deterministic decisions of the PFSA. 
CoT reasoning endows an RNN with exactly this nondeterminism because it allows the RNN to sample and refer back to trajectories rather than only symbols by annotating each of its outputs with the current state of the PFSA.16 Upon reading the previously randomly generated state of the automaton captured in the output, it can follow the randomly sampled generating trajectories. The following two theorems show that RNN LMs with fixed precision and CoT reasoning are weakly equivalent to general PFSA. This requires showing the correspondence in both directions, i.e., that (1) the distribution induced by any PFSA can be generated by a constant-precision RNN LM with CoT, and (2) any CoT-augmented constant-precision RNN LM can be emulated by a PFSA. Theorem 4.1. For any regular LM, there exists a weakly equivalent CoT-augmented constantprecision Elman RNN LM. Proof intuition. We show how, using the RNN’s recurrence and output sampling step, we can implement the transition function of any PFSA. The RNN starts by sampling an output symbol containing an initial state q0 according to the initial distribution λ without emitting a language symbol. This output gets fed back into the RNN at the next time step, allowing it to read the sampled state, and the next symbol–state pair is sampled according to the conditional distribution defined by the output state, as illustrated by Fig. 3. Finally, the states in the generated string are removed by a regular function, leaving only the language output. See App. F for the detailed proof. ■ 15The relationship between RNN LMs and nondeterministic PFSAs was explored by Svete et al. (2024), albeit with linearly bounded-precision RNNs. 16Recall that non-determinism means that multiple possible transitions to different states can yield the same symbol (§2.2). Theorem 4.2. For any constant-precision CoTaugmented Elman RNN LM, there exists a weakly equivalent PFSA. Proof. See App. F. ■ Thms. 4.1 and 4.2 establish the equivalence between CoT-augmented constant-precision Elman RNN LMs and general regular LMs. This illuminates the added representational capacity awarded by CoT reasoning: Storing the current FSA state in the output string and removing it later allows the model to handle non-determinism which is not possible without the additional information. 4.1.2 Transformer LMs We now show an analogous claim to Thm. 4.1 for Transformer LMs with CoT. This is simply a stepping stone towards full probabilistic Turing completeness—a similar construction will then lead us to the full proof that transformer LMs with unbounded precision can simulate PTMs. Theorem 4.3. For any regular LM, there exists a weakly equivalent CoT-augmented constantprecision transformer LM. Proof sketch. We use the construction from Svete and Cotterell (2024) which shows how to encode n-gram LMs in a transformer. Here, the alphabet of the transformer is augmented with the states of the PFSA, i.e., the symbol at each position t contains not only an output symbol but also the current state of the PFSA at time t. Thereby each input symbol contains all the information required to compute the next-symbol probabilities (it is effectively a unigram LM). Because the next symbol probabilities in a PFSA only depend on the current state, the transformer does not need to use attention at all and can rely solely on the output of its residual connections. See App. F for the detailed proof. 
■

4.2 Neural LMs and PTMs

We now extend the results from the previous section from representing the simpler regular LMs to expressing all enumerable semimeasures over strings by emulating probabilistic Turing machines. For this, in the RNN case, we require unbounded precision and ReLU activation functions rather than fixed precision and Heaviside activations.

4.2.1 Recurrent Neural LMs

First, note the following definition:

Definition 4.1. An autoregressive LM is called a real-time LM if it never outputs empty symbols (ε).

Nowak et al. (2023) show that RNN LMs with rationally valued activations that are not restricted to operating in real time are Turing complete and weakly equivalent to a subset of rationally weighted PTMs. Intuitively, not operating in real time gives an LM additional computation time while storing the results of its computations in the hidden states. This is a form of CoT reasoning, since we could equivalently let the LM output additional symbols that do not count toward the final output.17 The ability to erase certain symbols, as done by our transducer ϕ, helps make the setup of Nowak et al. (2023) realistic. Importantly, beyond giving the LM more computation time, CoT also allows the LM to model non-determinism in PTMs.

17 On the other hand, some CoT-augmented LMs do operate in real time but over an extended alphabet; see §4.1.

Theorem 4.4. For every LM induced by a non-deterministic probabilistic Turing machine, there exists a weakly equivalent CoT-augmented RNN LM with unbounded precision.

Proof intuition. The proof follows Nowak et al.'s (2023) probabilistic version of the proof by Siegelmann and Sontag (1992), but is extended to account for CoT reasoning. For the ability to simulate all PTMs rather than just a subset, we augment the output alphabet of the RNN with enough information about the current PTM configuration so that subsequent steps can uniquely identify the previous action taken. Concretely, the output alphabet ∆ contains information about the current state, the symbol written on the working tape of the PTM, and the head action performed. This additional information is then removed by our regular function at the end, yielding only the output of the simulated PTM generated according to its probability. A detailed proof is presented in App. F. ■

4.2.2 Transformer LMs

The representational capacity of transformer LMs has received a lot of attention in the last few years. Pérez et al. (2021) established that the encoder–decoder variant of the architecture is Turing complete. Since then, concurrent work has shown non-probabilistic Turing completeness of LM-oriented decoder-only variants (Merrill and Sabharwal, 2024; Feng et al., 2023). We extend this work to the probabilistic case.

Theorem 4.5. For any PTM-induced LM, there exists a weakly equivalent unbounded-precision CoT-augmented transformer LM.

Proof intuition. The proof follows Pérez et al. (2021) but is adapted to the probabilistic case. Again, the main idea is to augment the output alphabet with enough information about the current PTM configuration to reconstruct the probabilities of possible actions at each time step. This information and appropriate positional encodings are enough to recover the PTM's current configuration and thus the next-action distribution, allowing us to construct a weakly equivalent transformer LM. A regular function then removes the additional information from the string. See App. F for details. ■

4.2.3 Weak Equivalence

Thms.
4.4 and 4.5 show that transformer and RNN LMs with CoT reasoning are at least as powerful as PTMs. Weak equivalence requires us to also prove the reverse of these two theorems, analogous to Thm. 4.2 for constant-precision Elman RNN LMs. Theorem 4.6. For any rationally valued RNN LM and transformer LM with CoT reasoning, there exists a weakly equivalent PTM. Proof. RNN LMs define enumerable semimeasures (Nowak et al., 2023, App. G), which can always be expressed by a PTM (Icard, 2020, Thm. 3). Following the same reasoning, transformer LMs define enumerable semimeasures as well; the probability of a string y is defined as the sum probabilities of all runs of the transformer which result in the output y, of which there are countably many. Finally, there is only a countable number of ways a regular function can transform any string, so RNN LMs and Transformer LMs with CoT still define enumerable semimeasures. ■ 5 Conclusion Many modern language models have been shown to perform better with CoT reasoning. Despite its empirical success, CoT reasoning has yet to be well understood formally. Recent theoretical work in this area has analyzed the Turing completeness of CoT-augmented LMs, failing to account for the inherently probabilistic nature of LMs. We fix this mismatch in the current literature by introducing a novel formalization of CoT reasoning. 12518 Limitations Our constructions simulating PFSAs rely on fixedprecision arithmetic in line with the finite-memory nature of such automata. In contrast, the Turingcomplete models all rely on unbounded precision with respect to the length of the string—either to be able to encode arbitrarily large stacks in the hidden state in the case of RNN LMs or to be able to encode positional information in the case of transformer LMs. This inevitably results from the unbounded number of computational steps per emitted symbol by a Turing machine and is distinctly different from the scaling with respect to the number of computational steps. In the transformer case, the precision scales logarithmically with the number of steps (since we use the same positional encodings as Pérez et al. (2021)), which is standard for theoretical investigations of transformer models (Yao et al., 2021; Merrill et al., 2022; Merrill and Sabharwal, 2023). In the case of RNNs, the precision scales linearly with the computation sequence length, due to the encoding of a stack as a rational number, where each entry on the stack occupies a single digit. While in line with previous theoretical work, these assumptions are unrealistic in practice since neural LMs normally use fixed precision floating point arithmetic. We do not use layer normalization with transformers—this is done for simplicity—but note that layer normalization has been found to increase the representational capacity of transformers in some cases (Chiang and Cholak, 2022; Merrill and Sabharwal, 2024). Moreover, we only prove theoretically the equivalence between CoT-augmented LMs and formal models of computation. That is, we do not give a training algorithm to elicit the emergence of specific Turing complete or non-deterministic regular automata in neural LMs, and we make no claims about the training efficiency or even the feasibility of training neural LMs for this purpose. Ethics Statement Since this work deals with purely theoretical properties of neural language models, we do not foresee any ethical issues arising from this work. References Steven Abney, David McAllester, and Fernando Pereira. 1999. 
Relating probabilistic grammars and automata. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 542– 549, College Park, Maryland, USA. Association for Computational Linguistics. Y. Bar-Hillel, M. Perles, and E. Shamir. 1961. On formal properties of simple phrase structure grammars. Zeitschrift für Phonetik, Sprachwissenschaft und Kommunikationsforschung, 14:143–172. Bruno Bauwens. 2013. Upper semicomputable sumtests for lower semicomputable semimeasures. arXiv preprint arXiv:1312.1718. Satwik Bhattamishra, Arkil Patel, and Navin Goyal. 2020. On the computational power of transformers and its implications in sequence modeling. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 455–475, Online. Association for Computational Linguistics. Adam L. Buchsbaum, Raffaele Giancarlo, and Jeffery R. Westbrook. 2000. On the determinization of weighted finite automata. SIAM Journal on Computing, 30(5):1502–1531. Robin Chan, Reda Boumasmoud, Anej Svete, Yuxin Ren, Qipeng Guo, Zhijing Jin, Shauli Ravfogel, Mrinmaya Sachan, Bernhard Schölkopf, Mennatallah ElAssady, and Ryan Cotterell. 2024. On affine homotopy between language encoders. arXiv preprint arXiv:2406.02329. David Chiang and Peter Cholak. 2022. Overcoming a theoretical limitation of self-attention. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7654–7664, Dublin, Ireland. Association for Computational Linguistics. Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder–decoder approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103–111, Doha, Qatar. Association for Computational Linguistics. Stephen Chung and Hava Siegelmann. 2021. Turing completeness of bounded-precision recurrent neural networks. In Advances in Neural Information Processing Systems, volume 34, pages 28431–28441. Curran Associates, Inc. Li Du, Lucas Torroba Hennigen, Tiago Pimentel, Clara Meister, Jason Eisner, and Ryan Cotterell. 2023. A measure-theoretic characterization of tight language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9744–9770, Toronto, Canada. Association for Computational Linguistics. Jeffrey L. Elman. 1990. Finding structure in time. Cognitive Science, 14(2):179–211. 12519 Guhao Feng, Bohang Zhang, Yuntian Gu, Haotian Ye, Di He, and Liwei Wang. 2023. Towards revealing the mystery behind chain of thought: A theoretical perspective. arXiv preprint arXiv:2305.15408. Michael Hahn. 2020. Theoretical limitations of selfattention in neural sequence models. Transactions of the Association for Computational Linguistics, 8:156– 171. Yiding Hao, Dana Angluin, and Robert Frank. 2022. Formal language recognition by hard attention transformers: Perspectives from circuit complexity. Transactions of the Association for Computational Linguistics, 10:800–810. Yiding Hao, William Merrill, Dana Angluin, Robert Frank, Noah Amsel, Andrew Benz, and Simon Mendelsohn. 2018. Context-free transductions with neural stacks. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 306–315, Brussels, Belgium. Association for Computational Linguistics. John Hewitt, Michael Hahn, Surya Ganguli, Percy Liang, and Christopher D. Manning. 2020. 
RNNs can generate bounded hierarchical languages with optimal memory. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1978–2010, Online. Association for Computational Linguistics. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735– 1780. John E. Hopcroft, Rajeev Motwani, and Jeffrey D. Ullman. 2001. Introduction to Automata Theory, Languages, and Computation, 3 edition. Pearson. Thomas F. Icard. 2020. Calibrating generative models: The probabilistic Chomsky–Schützenberger hierarchy. Journal of Mathematical Psychology, 95:102308. S. C. Kleene. 1956. Representation of events in nerve nets and finite automata. In C. E. Shannon and J. McCarthy, editors, Automata Studies. (AM-34), Volume 34, pages 3–42. Princeton University Press, Princeton. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2023. Large language models are zero-shot reasoners. arXiv preprint arXiv:2205.11916. Samuel A. Korsky and Robert C. Berwick. 2019. On the computational power of RNNs. arXiv preprint arXiv:1906.06349. Ming Li and Paul M.B. Vitányi. 2008. An Introduction to Kolmogorov Complexity and Its Applications, 3 edition. Springer Publishing Company, Incorporated. André F. T. Martins and Ramón F. Astudillo. 2016. From softmax to sparsemax: A sparse model of attention and multi-label classification. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML’16, page 1614–1623. Warren S. McCulloch and Walter Pitts. 1943. A logical calculus of the ideas immanent in nervous activity. The bulletin of mathematical biophysics, 5(4):115– 133. William Merrill. 2019. Sequential neural networks as automata. In Proceedings of the Workshop on Deep Learning and Formal Languages: Building Bridges, pages 1–13, Florence. Association for Computational Linguistics. William Merrill and Ashish Sabharwal. 2023. The parallelism tradeoff: Limitations of log-precision transformers. Transactions of the Association for Computational Linguistics, 11:531–545. William Merrill and Ashish Sabharwal. 2024. The expressive power of transformers with chain of thought. In The Twelfth International Conference on Learning Representations. William Merrill, Ashish Sabharwal, and Noah A. Smith. 2022. Saturated transformers are constant-depth threshold circuits. Transactions of the Association for Computational Linguistics, 10:843–856. William Merrill and Nikolaos Tsilivis. 2022. Extracting finite automata from RNNs using state merging. arXiv preprint arXiv:2201.12451. William Merrill, Gail Weiss, Yoav Goldberg, Roy Schwartz, Noah A. Smith, and Eran Yahav. 2020. A formal hierarchy of RNN architectures. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 443–459, Online. Association for Computational Linguistics. Marvin Lee Minsky. 1954. Neural Nets and the Brain Model Problem. Ph.D. thesis, Princeton University. Mehryar Mohri. 1997. Finite-state transducers in language and speech processing. Computational Linguistics, 23(2):269–311. Mehryar Mohri. 2009. Weighted Automata Algorithms, pages 213–254. Springer Berlin Heidelberg, Berlin, Heidelberg. Mehryar Mohri, Fernando Pereira, and Michael Riley. 2008. Speech Recognition with Weighted Finite-State Transducers, pages 559–584. Springer Berlin Heidelberg, Berlin, Heidelberg. Franz Nowak, Anej Svete, Li Du, and Ryan Cotterell. 2023. 
On the representational capacity of recurrent neural language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 7011–7034, Singapore. Association for Computational Linguistics. 12520 Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, and Augustus Odena. 2021. Show your work: Scratchpads for intermediate computation with language models. arXiv preprint arXiv:2112.00114. C.H. Papadimitriou. 1994. Computational Complexity. Theoretical computer science. Addison-Wesley. Clemente Pasti, Andreas Opedal, Tiago Pimentel, Tim Vieira, Jason Eisner, and Ryan Cotterell. 2023. On the intersection of context-free and regular languages. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 737–749, Dubrovnik, Croatia. Association for Computational Linguistics. Fernando C. N. Pereira and Michael D. Riley. 1997. Speech recognition by composition of weighted finite automata. In Finite-State Language Processing. The MIT Press. Jorge Pérez, Pablo Barceló, and Javier Marinkovic. 2021. Attention is turing-complete. Journal of Machine Learning Research, 22(75):1–35. Hava T. Siegelmann and Eduardo D. Sontag. 1992. On the computational power of neural nets. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, COLT ’92, page 440–449, New York, NY, USA. Association for Computing Machinery. Michael Sipser. 2013. Introduction to the Theory of Computation, 3 edition. Cengage Learning. Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, and Jason Wei. 2022. Challenging BIG-Bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261. Anej Svete and Ryan Cotterell. 2023. Recurrent neural language models as probabilistic finite-state automata. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 8069–8086, Singapore. Association for Computational Linguistics. Anej Svete and Ryan Cotterell. 2024. Transformers can represent n-gram language models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics, Mexico City. Association for Computational Linguistics. Anej Svete, Franz Nowak, Anisha Mohamed Sahabdeen, and Ryan Cotterell. 2024. Lower bounds on the expressivity of recurrent neural language models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics, Mexico City. Association for Computational Linguistics. Johannes Von Oswald, Eyvind Niklasson, Ettore Randazzo, João Sacramento, Alexander Mordvintsev, Andrey Zhmoginov, and Max Vladymyrov. 2023. Transformers learn in-context by gradient descent. In Proceedings of the 40th International Conference on Machine Learning, ICML’23. JMLR.org. Xuezhi Wang and Denny Zhou. 2024. Chain-ofthought reasoning without prompting. arXiv preprint arXiv:2402.10200. Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. 2022a. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682. 
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed Chi, Quoc V Le, and Denny Zhou. 2022b. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems, volume 35, pages 24824–24837. Curran Associates, Inc. Gail Weiss, Yoav Goldberg, and Eran Yahav. 2018. On the practical computational power of finite precision RNNs for language recognition. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 740–745, Melbourne, Australia. Association for Computational Linguistics. Noam Wies, Yoav Levine, and Amnon Shashua. 2023. Sub-task decomposition enables learning in sequence to sequence tasks. arXiv preprint arXiv:2204.02892. Shunyu Yao, Binghui Peng, Christos Papadimitriou, and Karthik Narasimhan. 2021. Self-attention networks can process bounded hierarchical languages. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3770–3785, Online. Association for Computational Linguistics. 12521 A Discussion At a high level, our claims characterize neural architectures endowed with CoT reasoning in terms of well-understood probabilistic models of computation, giving the framework of CoT reasoning a novel probabilistic perspective. This allows the reconciliation of theoretical results about neural LMs with the general probabilistic language modeling framework. In doing so, we put existing results about the representational capacity of LMs into perspective and show how they can be thought of as performing “CoT reasoning in disguise.” Perhaps surprisingly, the inclusion of CoT reasoning steps results in a natural inclusion of non-determinism in otherwise deterministic neural LMs, as exemplified by Thm. 4.1. Outputting the states as part of the generated string—whose generation is inherently non-deterministic— allows a neural LM to keep track of the trajectory of the current execution. This suggests that CoT reasoning might provide an interesting avenue for exploring the non-deterministic representational capacity of neural LMs. Note that this means CoT reasoning can endow neural LMs with higher expressivity since, e.g., non-deterministic PFSAs are strictly more expressive than deterministic ones. Similarly, our result that RNNs or transformer LMs with CoT reasoning can simulate PTMs rather than regular Turing machines is important as it allows probabilistic computation using such neural LMs. This means one can sample from them multiple times to assess the certainty of string inclusion in a language. Furthermore, the problems LMs could solve efficiently can be described by different complexity classes such as BPP, ZPP, etc. instead of P. 18 B Related Work Turing completeness of RNNs. Plenty of existing work has investigated the representational capacity of RNNs, both as recognizers as well as LMs (e.g., McCulloch and Pitts, 1943; Kleene, 1956; Siegelmann and Sontag, 1992; Hao et al., 2018; Korsky and Berwick, 2019; Merrill, 2019; Merrill et al., 2020; Hewitt et al., 2020; Chung and Siegelmann, 2021; Merrill et al., 2022; Merrill and Tsilivis, 2022; Svete and Cotterell, 2023; Nowak et al., 2023, inter alia). 
Most relevant to our work, Siegelmann and Sontag (1992); Chung and Siegelmann (2021) show how RNNs with unbounded precision and unbounded computation time can simulate Turing machines and discuss the implications. Nowak et al. (2023) extend this to the probabilistic setting, showing that RNN LMs can emulate certain PTMs, but require the LMs to be able to perform non-emitting steps. Turing completeness of transformers. Pérez et al. (2021) show the Turing completeness of hard attention encoder–decoder transformer by encoding the configuration of the Turing machine in the output of the transformer. Bhattamishra et al. (2020) provide a different perspective on Turing completeness of the architecture by showing that transformers can simulate RNNs. Turing completeness is in this sense a simple consequence of the Turing completeness of RNNs. Bhattamishra et al.’s (2020) construction, however, relies on emitting symbols from an (uncountably) infinite set (rational-valued vectors), in contradiction to the requirement of the LM working over a finite alphabet. In this sense, they make use of the so-called regression transformer setup, which does not lend itself well to the discrete language modeling (Von Oswald et al., 2023). In concurrent work, Merrill and Sabharwal (2024) adapt Pérez et al.’s (2021) result to the CoT setting, connecting Turing completeness of transformer decoders to CoT reasoning similar to our work. In contrast to our work, they make statements about the model’s ability to decide language membership (simulating a Turing machine that accepts or rejects an input) and not to represent probabilistic languages. In that sense, the transformer they construct is not a language model. Besides being probabilistic, our construction also enables the analysis of any autoregressive LM architecture, and we focus on RNN and transformer LMs. Feng et al. (2023) similarly show that CoT transformers can perform arithmetic expressions and dynamic programming by generating intermediate results, backing these claims up with experimental evidence, but stopping short of showing Turing completeness. Du et al. (2023) provide a related result, showing that transformer LMs with continuous transformation functions are tight. This might at first glance contradict the results on Turing completeness, since the latter might require the model to run indefinitely, resulting in a non-tight model. However, 18For a comparison of these complexity classes, see e.g. Papadimitriou (1994). 12522 the discrepancy is resolved by noting that the tightness result crucially relies on the use of the softmax normalization function over the reals R (without ±∞) in Eq. (2). This, together with the encoding function of the transformer, results in a non-diminishing probability of generating EOS and thus in a tight model. Our use of the sparsemax function sidesteps this issue by allowing us to assign EOS probability 0. C Additional Preliminaries In our construction, we assume that strings are prefixed with the beginning-of-string symbol BOS. This is to give a base case for recurrent definitions such as Def. 2.1. This is just for ease of notation in our proofs. Note that instead of p (y | BOS), we could also equivalently write p (y | ε). C.1 Turing Machines Under our definition, a probabilistic Turing machine has two tapes.19 The first is the processing tape on which symbols from the tape alphabet can be read and written. The second is a write-only output tape for symbols of the output alphabet. 
In the beginning, the processing tape contains the designated ⊥ symbol in the leftmost cell while all other cells contain the blank symbol ⊔. The output tape is empty at the beginning. Starting in the initial state qι, at each time step t, a transition is sampled out of all available transitions for the given state q and the current working tape symbol γ, and then applied. The sampling happens according to the transition probability w (recall that the transition weights w are non-negative and sum to 1 for each pair of state q tape symbol γ). We write transitions as (q, γ) y,d/w −−−→(q′, γ′), where q, q′ ∈Q, γ, γ′ ∈Γ, y ∈Σε, w ∈Q≥0, and d ∈{L, R}, with the following interpretation. When the machine is in state q and its head is reading γ on the working tape, it moves to state q′, writes γ′ to the working tape, writes y to the output tape if y ∈Σ or nothing if y = ε, and it moves the head on the working tape by one symbol in the direction d, i.e., to the left (L), or to the right (R). As with any non-deterministic machine, the above definition naturally gives rise to the notion of a tree of possible computations (Sipser, 2013, p. 48). A branch π of the computation tree of a PTM is a sequence of consecutive transitions (qι, ⊥) y1,d1/w1 −−−−−→(q2, γ′ 1), (q2, γ2) y2,d2/w2 −−−−−→(q3, γ′ 2), · · · (qN, γN) yN,dN/wN −−−−−−−→(qN+1, γ′ N) (12) We say that a branch is accepting if the branch reaches the final state qφ.20 The yield of an accepting computation branch21 is the sequence of symbols y ∈Σ∗written on the output tape at that point in the computation, i.e., a concatenation of the (non-ε) symbols y1, · · · , yN, where N = |π| is the number of transitions in the branch. The weight of an accepting branch is the product of the weights of its transitions, i.e., w(π) def= N Y n=0 wn. (13) We denote the branches that yield a given string y by Π(M, y). The sum of weights of all branches that yield a certain string y ∈Σ∗is the stringsum of that string, defined as M(y) def= X π∈Π(M,y) w(π). (14) This definition gives rise to a semimeasure over strings whose sum over all possible strings is exactly the halting probability22 of the PTM, i.e., the probability that starting from the initial state qι, M reaches a final state qφ X y∈Σ∗ M(y) = P(M halts). (15) 19Adding another tape does not increase the computational power of a Turing machine (Sipser, 2013, Ch. 3). 20The final state has no outgoing transitions. 21We only consider branches that end in a final state when discussing the yield and weight of branches. This means we only take into account finite instances of computation. 22The halting probability is the probability that the execution of the PTM will end in a halting state after finitely many steps. 12523 C.2 Pushdown Automata Definition C.1. A probabilistic pushdown automaton (PPDA) is a tuple (Q, Σ, Γ, δ, (qι, S), (qφ, ε)) where Q is a finite set of states, Σ is the input alphabet, Γ is the stack alphabet, δ ⊆Q × Γ × Σε × Q≥0 × Q × Γ∗is a finite set of weighted transitions, and (qι, S) and (qφ, ε) are called the initial and final configuration, respectively. Moreover, for all states q ∈Q and stack symbols γ ∈Γ, δ satisfies P (q,γ) y/w −−→(q′,γ)∈δ w = 1. Our definition of PPDA is similar to that of Abney et al. (1999), but, unlike theirs, our machine must end its computation in the final configuration rather than just empty the stack. 
We write transitions (q, γ, y, w, q′, γ) ∈δ as (q, γ) y/w −−→(q′, γ) which represent a move with weight w from state q to q′ while scanning or outputting y, popping the symbol γ from the stack and pushing the sequence of symbols γ. Definition C.2. A configuration (q, γ) ∈Q × Γ∗is a pair containing the current state and the current contents of the stack. A PPDA P = (Q, Σ, Γ, δ, (qι, S), (qφ, ε)) is called deterministic if for every q ∈Q, γ ∈Γ and y ∈Σε, there is at most one transition (q, γ) y/w −−→q′, γ with w > 0. Additionally, if there is a transition q, γ y/w −−→q′, γ such that y ∈Σ, then there is no transition q, γ ε/w′ −−−→q′′, γ′ with w′ > 0. Intuitively, there exists at most one next move given any configuration in a deterministic PPDA. A run π of a PPDA is a sequence of configurations and transitions, (q0, γ0), τ 1, (q1, γ1), . . . , τ N, (qN, γN), (16) where (qn, γn) is the confguration reached by taking transition τ n from configuration qn−1, γn−1  for any n ∈[1, N]. If (q0, γ0) = (qι, S) and (qN, γN) = (qφ, ε), we call π accepting. The yield of an accepting run is s (π) def= y1 · · · yN, where yn is the symbol scanned by τ n. The weight of an accepting run, w (π), is the product of the weight of its transitions, w (π) def= N Y n=1 wn, (17) where wn is the weight of transition τ n. We write Π (P, y) for the set of all accepting runs with yield y. The sum of the weights of all accepting runs of some PPDA P that yield a certain string y ∈Σ∗is called the stringsum and is defined as P (y) def= X π∈Π(P,y) w (π) . (18) C.3 Two-stack Pushdown Automata Definition C.3. A two-stack pushdown automaton (2PDA) is a machine specified by the 6-tuple P = (Q, Σ, Γ, δ, qι, qφ), where Q is a finite set of states, Σ is an alphabet of input symbols, Γ is an alphabet of stack symbols, including the bottom-of-stack symbol ⊥, δ: Q × Γ × Σε × Q≥0 × Q × Γ4 ε is a finite set of rationally-weighted transitions, and qι and qφ are the initial and the final state, respectively. We use a definition similar to that of Nowak et al.’s (2023), which assumes without loss of generality that transitions are determined by the current state and the top symbol of the first stack only. See Nowak et al. (2023, Appendix B) for a proof. We write transitions as q γ,y,γ1→γ3/w −−−−−−−−→ γ2→γ4 q′, which represent a move with weight w from state q, with the top of the first stack γ, to state q′, while scanning or outputting y, popping γ1 and γ2 from the first and second stack, respectively, and pushing γ3 and γ4 to the first and second stack, respectively. A 2PDA is called probabilistic if, for every state q and top symbol γ of the first stack, the weights of the transitions define a probability distribution, i.e., X q γ,y,γ1→γ3/w −−−−−−−−→ γ2→γ4 q′∈δ w = 1. (19) 12524 Symbol Type Meaning [N] ⊂N The set {1, . . . , N} for N ∈N. Σ, Σ, Σ alphabet Σ is a set of symbols, Σ def = Σ ∪{BOS}, Σ def = Σ ∪{EOS} y, y, y ∈Σ A symbol, element of Σ, Σ, or Σ. y ∈Σ∗ A string over Σ. yi j ∈Σ∗ A substring of y, a string. JyK ∈{0, 1}|Σ| One-hot encoding of the symbol y ∈Σ. D ∈N Size of the contextual representations in the transformer or RNN. ∆N−1 ⊆RN The N −1-dimensional probability simplex. f RD × RD →R A scoring function. f RN →∆N−1 A normalization function. Q, K, V , O RD →RD The query, key, value, and output functions. F RD →RD The final transformer LM transformation function. enc Σ∗→RD The string representation function. rΠ Σ × N →RD The position-augmented representation function. L ∈N Number of layers. H ∈N Number of heads. 
H RHD →RD The head combining function. (· ; · · · ; ·) Vertical concatenation operator of vectors or matrices. Table 1: A summary of the notation used in the paper. As before, we define a run in the 2PDA as a sequence of consecutive transitions. A run is called accepting if it ends in a final configuration. The yield of an accepting run is the sequence of symbols y ∈Σ that the 2PDA scans (or outputs) during the run. And the weight of an accepting run, w (π), is the product of the weight of its transitions, w (π) def= N Y n=1 wn, (20) where wn is the weight of transition τ n. Finally, the stringsum of string y is defined as the sum of the weights of all accepting runs that yield y, i.e., Π (P, y): P (y) def= X π∈Π(P,y) w (π) . (21) C.4 Transformer Language Models Transformer LMs are LMs whose conditional distributions p (yt | y<t) are computed by a transformer. A transformer is a composition of multiple transformer layers, each of which implements the attention mechanism. We give definitions of these building blocks in what follows. Our formalization and notation closely follows Svete and Cotterell (2024). Notation. We use bold unitalicized letters such as x ∈RD to denote real-valued vectors and italicized letters xj ∈R for their entries. Capital bold letters such as X ∈RN×D denote matrices. All vectors are column vectors unless transposed. We define the vertical stacking operator (· ; · · · ; ·), which denotes the vertical concatenation of the D-dimensional column vectors x1, . . . , xN into a ND-dimensional vector (x1; · · · ; xN) ∈RND and the concatenation of the D-dimensional row vectors x⊤ 1 , . . . , x⊤ N into a matrix X ∈RN×D with N rows and D columns. Given the matrix X = x⊤ 1 ; · · · ; x⊤ N  , we write Xn = x⊤ 1 ; · · · ; x⊤ n  for the submatrix composed of the first n rows. We call a function f : RD×RD →R whose purpose is to evaluate the compatibility of two vectors a scoring function. A normalization function f : RN →∆N−1 maps vectors in RN to N probabilities. Here, ∆N−1 def= n x ∈[0, 1]N | PN n=1 xn = 1 o is the N −1-dimensional probability simplex. This notation is summarized in Tab. 1. The Attention Mechanism. The attention mechanism works as follows. It takes a query vector q ∈RD and two matrices: The matrix K ∈RN×D of keys and the matrix V ∈RN×D of values and computes a weighted average of the value vectors based on the compatibilities of the key vectors to the query vector, as scored by a scoring function f. A formal definition is given below. 12525 Definition C.4 (Attention Mechanism). The attention mechanism Att: RD × RN×D × RN×D →RD is defined as Att (q, K, V) def= N X n=1 snvn (22) where q ∈RD be a query vector and let K = k⊤ 1 ; · · · ; k⊤ N  ∈RN×D and V = v⊤ 1 ; · · · ; v⊤ N  ∈ RN×D be matrices of keys and values, respectively, and s def= f (f (q, k1) , . . . , f (q, kN)) (23) is the vector of normalized scores between the query q and the keys in K, f is a scoring function and f is a normalization function. Attention types. Attention weights are computed by normalizing the scores f (q, k1) , . . . , f (q, kt). The choice of the projection function f determines the type of attention and has concrete implications on representational capacity (Hao et al., 2022). We focus on the hard attention projection function. Definition C.5. Hard attention is computed with the hardmax projection function: hardmax (x)d def= ( 1 m if d ∈argmax (x) 0 otherwise (24) for d ∈[D], where x ∈RD and m def= | argmax (x) | is the cardinality of the argmax set. Definition C.6. 
A multi-layer perceptron (MLP) F : RD →RD is a function defined as the composition of elementary functions f1, . . . , fL F (x) def= fL ◦fL−1 ◦· · · ◦f1 (x) , (25) where each function fℓfor ℓ∈[L] is defined as fℓ(x) def= β (Wℓx + bℓ) ℓ∈[L −1] (26a) fL(x) def= WLx + bL, (26b) where Wℓ∈RD×D is a square weight matrix specific to layer ℓ, bℓ∈RD is a bias vector, and β is an element-wise non-linear activation function. The function f1 is called the input layer, the function fL is called the output layer, and the function fℓfor ℓ= 2, . . . , L −1 are called hidden layers.23 The Transformer Architecture. A transformer layer uses the attention mechanism to compute augmented representations zt = Att (qt, Kt, Vt) of the input representations Xt = (x1; · · · ; xt). The query qt, the keys Kt, and values Vt are all transformations of the input representations Xt. Definition C.7. Given query, key, value, and output functions Q, K, V , O: RD →RD, a transformer layer is a function L: RT×D →RT×D that computes L  x⊤ 1 ; · · · ; x⊤ T  =  z⊤ 1 ; · · · ; z⊤ T  ∈RT×D (27) for t ∈[T] where at def= Att (qt, Kt, Vt) + xt ∈RD (28a) zt def= O (at) + at ∈RD. (28b) 23Note that we refer to MLPs by the number of hidden layers, e.g., a one-layer-MLP is an MLP with one hidden layer. 12526 Here, we define qt def= Q (xt) ∈RD (29a) Kt def=  K (x1)⊤; · · · ; K (xt)⊤ ∈Rt×D (29b) Vt def=  V (x1)⊤; · · · ; K (xt)⊤ ∈Rt×D. (29c) Note: For simplicity, we do not include layer normalization. The functions Q, K, V are usually implemented as linear transformations and O as an MLP. Some theoretical literature, however, also considers more general function classes, e.g., all smooth functions (Hahn, 2020). Without further modification, the transformations applied by the transformer layer are position-invariant, which necessitates the addition of explicit positional information. Definition C.8. A symbol representation function is a function r: Σ →RDr and a positional encoding is a function Π: N →RDΠ. A position-augmented representation function rΠ : Σ × N →RD (with D = Dr + DΠ) is defined as rΠ  yt, t  def=  r  yt  ; Π (t)  . (30) Definition C.9. A static encoding R is a function R: ΣT →RT×D defined for any T ∈N as R (y) def=  rΠ (y1, 1)⊤; · · · ; rΠ (yT , T)⊤ . (31) Multiple transformer layers are stacked into a transformer, which computes the (deep) contextual representations of all symbols in the string. Definition C.10. For L ∈N, an L-layer transformer T is defined as T (R) def= LL ◦· · · ◦L1 ◦R, (32) where Lℓfor ℓ∈[L] are transformer layers and R is a static encoding. A transformer computes the contextual representations of the symbols y = y1 · · · yT as  z⊤ 1 ; · · · ; z⊤ T  def=  xL⊤ 1 ; · · · ; xL⊤ T  def= T (R) (y) . (33) If R is clear from the context or arbitrary, we will omit it as an argument to T and just write T (y). Definition C.11. Given a transformer T , a final representation transformation function F : RD →RD, and a string y ∈Σ∗with |y| = T, we define the encoding function enc as enc (y) def= F (zT ) (34) where zT is the representation of the T th symbol in y computed by T , i.e., z⊤ 1 ; · · · ; z⊤ T  = T (y). Transformer Language Models. So far, we have only defined how the transformer architecture can be used to compute the contextual representations of the symbols. To complete the definition, we define a transformer language model as follows. Definition C.12. A transformer LM pT is the representation-based autoregressive LM with the representation function enc from Eq. (34). 
That is, pT defines the conditional probability distributions pT (yt | y<t) def= sparsemax(E enc (y<t))yt. (35) 12527 C.5 Weighted Finite-state Transducers Definition C.13. A (rational-)weighted finite-state transducer (WFST) is the tuple (Σ, Ξ, Q, δ, λ, ρ) where Σ and Ξ are the input and output alphabets, respectively, Q is a finite set of states, δ ⊆Q × Σε × Ξε × Q≥0 × Q is a finite set of weighted transitions where we write transitions (q, y, x, w, q′) ∈δ as q y:x/w −−−→q′, and λ, ρ: Q →Q≥0 are functions that assign each state its initial and final weight, respectively. WFSTs are thus a special class of (weighted) finite-state automata that operate over two alphabets. Just like PFSAs are a generalization of unweighted FSAs, WFSAs represent a more general class of machines than unweighted FSTs, such as the ones used in the definition of regular reducibility (cf. Def. 3.1). The weighted versions will prove useful when talking about various aspects of equivalence with regular reducibility. As a special case of weighted finite-state automata, WFSTs compute weights of their inputs—in this case, weights of pairs of strings (the inputs and their outputs): T (y, x) def= X π∈Π(T ,(y,x)) w (π) . (36) Of interest are also marginal sums of the inputs, i.e., T (y) def= X x∈Ξ∗ T (y, x) . (37) C.5.1 Operations on WFSTs We will use WFSTs as building blocks in the exposition of regular reducibility (cf. §3.2). In this subsection, we outline some operations on the computational models that will be particularly useful. Composition. The composition of two WFSTs results in a WFST that, similarly to function composition, maps the strings with the first WFST, passes them to the second one, and returns the (weight of the) output of the second transducer (Pereira and Riley, 1997). More formally, given the WFSTs T 1 = (Σ, Υ, Q, δ, λ, ρ) and T 2 = (Υ, Ξ, Q, δ, λ, ρ), their composition T 2 ◦T 1 computes (T 2 ◦T 1) (y, x) def= X z∈Υ∗ T 2 (z, y) · T 1 (x, z) . (38) Intuitively, Eq. (38) computes the weight of all the possible ways of mapping y to x by first mapping y to some z ∈Υ∗and then mapping that z into x. The WFST (T 2 ◦T 1) can be computed from T 1 and T 2 in time O ((|Q1| + |δ1|) (|Q2| + |δ2|)) (Mohri, 2009). Projecting a WFST. Any WFST can be projected onto an (input or output) weighted finite-state automaton (WFSA)24 that computes the cumulative weights of individual input or output strings (Mohri et al., 2008). Given a WFST T = (Σ, Ξ, Q, δ, λ, ρ), its input projection is the automaton AI = (Σ, Q, δI, λ, ρ) that computes AI (y) def= X x∈Ξ∗ T (y, x) . (39) The output projection automaton is defined analogously. While it might not be immediately obvious that Eq. (39) represents the computations of a WFSA, AI can be easily constructed by ignoring the output labels on the transitions (and additively merging any transitions that become identical after the removal of the output labels). That is: δI =   q y/ P q y:x/w −−−→q′∈δ w −−−−−−−−−−−−→q′ | q, q′ ∈Q, y ∈Σε   . (40) 24A weighted finite-state automaton is a generalization of a probabilistic finite-state automaton where the weights do not have to form probability distributions. We also extend the definition of WFSAs to allow ε-transitions, i.e. with symbols from Σε. Note that every such WFSA can be converted into a weakly equivalent WFSA without ε-transitions. 12528 Lifting a PFSA. Lifting transforms a given PFSA into a trivial WFST that implements the identity function y 7→y. 
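As an illustrative reading of Eqs. (39) and (40) (a sketch with made-up weights, not part of the formal development), the input projection simply drops the output labels and additively merges the transitions that become identical.

```python
from collections import defaultdict

# A toy WFST transition set (made-up weights): (q, input y, output x, weight, q').
wfst_delta = [
    (0, "a", "x", 0.25, 1),
    (0, "a", "z", 0.25, 1),   # same input label and target, different output label
    (0, "b", "x", 0.50, 0),
    (1, "a", "x", 1.00, 1),
]

# Input projection (Eq. 40): drop the output labels and additively merge the
# transitions that become identical, yielding a WFSA over the input alphabet.
merged = defaultdict(float)
for q, y, x, w, q_next in wfst_delta:
    merged[(q, y, q_next)] += w

wfsa_delta = [(q, y, w, q_next) for (q, y, q_next), w in merged.items()]
print(wfsa_delta)
# [(0, 'a', 0.5, 1), (0, 'b', 0.5, 0), (1, 'a', 1.0, 1)]
```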
Concretely, given the PFSA A = (Σ, Q, λ, ρ, δ), its lifted WFST T A = (Σ, Σ, Q, δT , λ, ρ) defines the transitions δT def=  q y:y/w −−−→q′ | q y/w −−→q′ ∈δ  . (41) It is easy to see that T A computes T A y, y′ = 1  y = y′ A (y). (42) Lifting is useful when one wants to compose a PFSA with a WFST, as is the case when discussing regular reducibility. Inverting a WFST. The inverted WFST of some WFST T = (Σ, Ξ, Q, δ, λ, ρ) is the WSFT T −1 = Ξ, Σ, Q, δ−1, λ, ρ  that maps the outputs of T to their inputs. δ−1 is defined as δ−1 def=  q x:y/w −−−→q | q y:x/w −−−→q ∈δ  , (43) i.e., it is composed of transitions from δ with their input–output labels flipped. D Separation of Alphabets In our analysis, we use a different alphabet for the chain of thought (Γ) than for the final output (Σ) to clarify the distinction between output that encodes automata configurations vs the output that constitutes the weighted output language of that automaton. However, we can also define a regular function for the case where Γ = Σ. For this, we choose a specific symbol in Σ, e.g., X, whose first occurrence signifies the boundary between the chain of thought and the language output. Then, ϕ can be defined as: ϕ(y X z) def= z (44) where y ∈Σ∗\{X}, and z ∈Σ∗. This can be easily modeled by the following simple two-state transducer: 0 1 y ∈Σ\{X} : ε X : ε z ∈Σ : z Figure 4: A simple WFST T implementing ϕ. Note that in this case, the chain of thought has to be confined in its entirety as one string, with the entire final output coming afterward. This simply means that the simulated Turing machine first performs all the computations it needs on the working tape, and then writes the final string to the output tape in real time. The proof of Thm. 4.4 can easily be adapted to work in this setting by only tagging the ε symbols in Γ, e.g. by symbols a and b and defining ϕ to remove them until the occurrence of another symbol, e.g. c. E Proofs: Regular Reducibility E.1 Nondeterminism in PFSAs Theorem 3.1. Let A = (Σ, Q, δ, λ, ρ) be a PFSA. Then, there exists a deterministic PFSA A′ over the alphabet Σ × Q and with the state space Σ × Q that is regularly reducible to A. 12529 Proof. Given the PFSA A = (Σ, Q, λ, ρ, δ), we want to show the existence of a deterministic PFSA A′ over the alphabet Σ × Q that is regularly reducible to A. We construct the deterministic PFSA A′ = Σ × Q, Σ × Q, δ′, λ′, ρ′ as follows. δ′ def=  (y, q) (y′,q′)/w −−−−−→ y′, q′ | q y′/w −−−→q′ ∈δ, y ∈Σ  (45a) λ′ (y, q) def= λ (q) |Σ| , ∀y ∈Σ, q ∈Q (45b) ρ′ (y, q) def= ρ (q) , ∀y ∈Σ, q ∈Q (45c) We first prove that A′ is probabilistic and deterministic. • Probabilistic: We compute X (y,q)∈Σ×Q λ′ (y, q) = X (y,q)∈Σ×Q λ (q) |Σ| (46a) = X y∈Σ X q∈Q λ (q) |Σ| (46b) = X q∈Q λ (q) (46c) = 1 (46d) and, for any (y, q) ∈Σ × Q, X (y,q) (y′,q′)/w −−−−−→(y′,q′)∈δ′ w + ρ′ (y, q) = X (y,q) (y′,q′)/w −−−−−→(y′,q′)∈δ′ w + ρ (q) (47a) = X q w/y′ −−−→y′∈δ w + ρ (q) (47b) = 1 (47c) • Deterministic: Let (y, q) ∈Σ×Q be a state of A′ and (y′, q′) a symbol in its alphabet. By definition of δ′, (y′, q′) uniquely determines the target state, which is identical to the symbol—(y′, q′). We now show that A′ is indeed regularly reducible to A. The alphabet Σ × Q clearly satisfies the Σ-augmentation condition. We define the regular function ϕ (y, q) def= y, an instance of the general function described in §3.2. We will show that A′ is weakly equivalent to A ◦ϕ, or, equivalently, that A′ ◦ϕ−1 is weakly equivalent to A. 
Since A′ is an acceptor of strings in (Σ × Q)∗, we first transform it into weighted finite-state transducer T A implementing the mapping25 T A′ x, x′ def= 1  x = x′ A (x) (48) for x, x′ ∈∆∗. This is a simple WFST with transitions δT A′ def=  q (y,q):(y,q)/w −−−−−−−−→q′ | q (y,q)/w −−−−→q′ ∈δ′  . T A′ can then be composed with the weighed version of the FST implementing ϕ−1, T ϕ−1: T ϕ−1 (y, x) def= 1  x ∈ϕ−1 (y) . (49) 25See App. C.5 for a definition of weighted finite-state transducers and the operations on them. 12530 T A′ ◦T ϕ−1 then computes, for y ∈Σ∗, x′ ∈∆∗,  T A′ ◦T ϕ−1  y, x′ = X x∈∆∗ T ϕ−1 (y, x) · T A′ x, x′ (50a, Eq. 15.4 in Pereira and Riley (1997).) = X x∈∆∗ 1  x ∈ϕ−1 (y) · T A′ x, x′ (50b) = X x∈ϕ−1(y) T A′ x, x′ (50c) = X x∈ϕ−1(y) 1  x = x′ A (x) (50d) = 1  x′ ∈ϕ−1 (y) A x′ (50e) We then turn T A′ ◦T ϕ−1 into a PFSA in the standard way by projecting the WFST onto the transition input labels (Mohri et al., 2008). This results in a PFSA that assigns the string y ∈Σ∗the probability that equals the sum of all possible mappings of y to any x′ ∈∆∗:  T A′ ◦T ϕ−1  (y) def= X x′∈∆∗  T A′ ◦T ϕ−1  y, x′ (51a) = X x′∈∆∗ 1  x′ ∈ϕ−1 (y) A x′ (51b) = X x′∈ϕ−1(y) A x′ (51c) = A′ ϕ−1 (y)  (51d) = A′ ◦ϕ−1 (y) (51e) = A (y) (51f) This shows that A′ ◦ϕ−1 is weakly equivalent to A, which finishes the proof. ■ E.2 Nondeterminism in PPDAs Theorem E.1. Let P = (Q, Σ, Γ, δ, (qι, S), (qφ, ε)) be a PPDA. Then, there exists a deterministic PPDA P′ over the alphabet (Σε) × Q × Γ with the state space (Σε) × Q × Γ that is regularly reducible to P. Proof. Given a PPDA P = (Q, Σ, Γ, δ, (qι, S), (qφ, ε)), we prove the existence of a deterministic PPDA P′ over the alphabet Σε × Q × Γ that is regularly reducible to P. We construct the deterministic PPDA P′ = Σε × Q × Γ, Σε × Q × Γ, Γ, δ′, (qι, S), (qφ, ε)  with the set of transitions δ′ def= {(y, q, γ), γ′ (y′,q′,γ′)/w −−−−−−−→(y′, q′, γ′), γ |q, γ′ y′/w −−−→q′, γ ∈δ, y ∈Σε, γ ∈Γ}. (52) • Probabilistic: For any (y, q, γ) ∈Σε × Q × Γ and γ′ ∈Γ, X (y,q,γ),γ′ (y′,q′,γ′)/w −−−−−−−→(y′,q′,γ′),γ w = X q,γ′ y′/w −−−→q′,γ w (53a) = 1 (53b) • Deterministic: Just like in the finite-state case, let (y, q, γ) ∈Σε × Q × Γ be a state, γ′ ∈Γ a stack symbol and (y′, q′, γ′) ∈Σε × Q × Γ an input symbol. Then (y′, q′, γ′) uniquely determines the next configuration. 12531 We now show that P′ is regularly reducible to P. More precisely, we define the regular function ϕ(y, q, γ) def= y and show that P′ ◦ϕ−1 is weakly equivalent to P. Let G′ be a CFG26 that is equivalent to P′ (see Hopcroft et al., 2001, Section 6.3.2 for the construction). Using grammars, rather than pushdown automata, will simplify the proof as there are well-known constructions (e.g., Bar-Hillel et al. (1961), Pasti et al. (2023)) for composing CFGs and FSTs. We therefore prove that G′ ◦ϕ−1 is weakly equivalent to P. Recall that ϕ−1 can be implemented as an FST T ϕ−1 defined as T ϕ−1 (y, x) def= 1  x ∈ϕ−1 (y) . (54) Adapting Corollary 1 from Pasti et al. (2023) to the FST case, we get  G′ ◦T ϕ−1  (y, x) = G′ (x) · T ϕ−1 (y, x) (55a) = G′ (x) · 1  x ∈ϕ−1 (y) . (55b) Then, just like in the regular case, we compute  G′ ◦T ϕ−1  (y) by summing over all possible x ∈Σ∗ values:  G′ ◦T ϕ−1  (y) def= X x∈Σ∗  G′ ◦T ϕ−1  (y, x) (56a) = X x∈Σ∗ G′ (x) 1  x ∈ϕ−1 (y) (56b) = X x∈ϕ−1(y) G′ (x) (56c) = X x∈ϕ−1(y) P′ (x) (56d) = P′ ϕ−1 (y)  (56e) = P′ ◦ϕ−1 (y) (56f) = P (y) . (56g) This shows that G′ ◦T ϕ−1 is weakly equivalent to P, thus P′ ◦T ϕ−1 is weakly equivalent to P. This concludes the proof. 
■ F Proofs: Representational Capacity of Neural LMs F.1 Finite-state Language Models Theorem 4.1. For any regular LM, there exists a weakly equivalent CoT-augmented constant-precision Elman RNN LM. Proof. Let A = (Σ, Q, λ, ρ, δ) be a PFSA. We divide the definition of a weakly equivalent Elman LM R = (Σ × Q, D, U, V, b, η) with the output matrix E into multiple steps.27 Note: The paper assumes a BOS-padding of the input strings. Since CoT-augmented LMs work over a potentially larger alphabet, the padding symbol has to be changed accordingly. Particularly, the CoTaugmented RNN will work over the alphabet Σ×Q. For simplicity, assume that any input string is padded by the symbol (BOS, q0) for some arbitrary (but fixed) state q0 ∈Q. 26Similar to the notation used for PPDAs, we use G (y) to denote the weight of the string y under the grammar G. 27Notice that R is CoT-augmented by construction, as it works over the alphabet ∆ def = Σ × Q. ∆also satisfies the Σ-augmentation condition. 12532 Hidden states. Let D = |Σε||Q|. We will represent the input symbols with the one-hot representation function J·, ·K that computes Jy, qK ∈{0, 1}|Σε||Q|:28 Jy, qKy,q def= 1 (57) while other entries of the vector are zero.29 We can define η def= 0D. Recurrence. Conveniently, the CoT-augmented RNN will store the current PFSA state in its output “symbol”. Therefore, its next hidden state ht will not, in fact, depend on the previous one (ht−1); ht−1 will only be used to compute the next-symbol–state output distribution through Eq. (2). The RNN therefore only has to appropriately incorporate the input information. With this in mind, we define the following Elman RNN parameters U, V, and b: U def= OD U ∈RD×D (58a) V def= ID V ∈RD×D (58b) b def= 0D b ∈RD (58c) where OD is a D × D-dimensional matrix of zeros, ID is the D-dimensional identity matrix, and 0D is a D-dimensional vector of zeros. Then, we define the output matrix E ∈R|Σε||Q|×D as E(y′,q′),(y,q) def= p q′, y′ | q  q, q′ ∈Q, y ∈Σε, y′ ∈Σε (59a) E(ε,q),(BOS,q0) def= λ (q) q ∈Q (59b) E(EOS,q),(y,q) def= ρ (q) q ∈Q, y ∈Σε. (59c) Computation of string probabilities. We now show that R with E defines the same conditional distributions as A, meaning that it implicitly defines the same probability distribution over strings. As mentioned above, we assume that the input string is prefixed with the (BOS, q0) symbol. Let the previously generated output symbol ∈∆be y, q  (in the first step, the “generated” pair is (BOS, q0)). We get that ht = H Uht−1 + VJy, qK + b  (60a) = H ODht−1 + VJy, qK + 0D  (60b) = H VJy, qK  (60c) = Jy, qK (60d) The one-hot encoding Jy, qK is then used to “index” the appropriate column of the output matrix E as Eht, which contains the probabilities of the next symbol–state pairs defined by the PFSA, as per Eq. (59a): (Eht)(q′,y′) = E(y′,q′),(y,q) = ( λ (q′) if y = BOS, q = q0, y′ = ε p (q′, y′ | q) otherwise . (61) Passing Eht through the normalization function, the next state and symbol are sampled and passed as input to the RNN, repeating the update step. Since the Elman RNN following these dynamics by constructions generates paths (sequences of states in the output symbols) with the same probabilities as the input automaton, it enumerates all accepting paths of any string y ∈Σ∗with the same probabilities as the original PFSA. Applying the transformation ϕ to the produced outputs and summing over sequences that yield the same string will thus result in strings sampled from the PFSA. This finishes the proof. ■ Theorem 4.2. 
For any constant-precision CoT-augmented Elman RNN LM, there exists a weakly equivalent PFSA. 28Throughout the paper, we index vectors and matrices directly with set elements rather than integer values, meaning each index corresponds to exactly one element from the given set. 29If any of the two arguments are empty, its corresponding component is also zero. 12533 Proof. Let p be a CoT Heaviside Elman RNN LM over the alphabet Σ with the regular function ϕ: ∆∗→Σ∗for an Σ-augmented alphabet ∆. Let p be the underlying RNN LM over ∆∗, that is, p = p ◦ϕ−1 and p = p ◦ϕ by Def. 3.3. We want to show the existence of a possibly non-deterministic PFSA A that is weakly equivalent to p . We write p ∼= p′ to mean p is weakly equivalent to p′ (cf. Def. 2.2). By Svete and Cotterell (2023, Lem. 4.1), there exists a DPFSA A = (∆, Q, δ, λ, ρ) over the augmented alphabet ∆weakly equivalent to p. This means that A ∼= p ∼= p ◦ϕ (62) and thus p ∼= A ◦ϕ−1. (63) Just like in the proof of Thm. 3.1, A ◦ϕ−1 can be computed as a composition of (lifted) WFSTs that is then projected onto the input component, resulting in a PFSA weakly equivalent to p . This finishes the proof. ■ Theorem 4.3. For any regular LM, there exists a weakly equivalent CoT-augmented constant-precision transformer LM. Before we proceed to the full proof of Thm. 4.3, we first show the following simple but useful lemma. Lemma F.1. Define the following linear transformation functions: V (x) def= 0, (64a) O (x) def= 0 (64b) for all x ∈RD. A transformer layer L with the parameters V and O and residual connections implements the identity function irrespective of the parameters Q, K, and f: L (x) = x (65) for all x ∈RD. Proof. By definition of a transformer layer (cf. Eqs. (27), (28a) and (28b)), L computes a = Att (q, K, V) + x = N X n=1 snvn + x = N X n=1 sn0 + x = x (66a) L (x) = O (a) + a = 0 + a = a = x. (66b) ■ This allows us to show Thm. 4.3. Proof. Let A = (Σ, Q, λ, ρ, δ) be a PFSA. Like the CoT-augmented RNN in Thm. 4.2, the CoTaugmented transformer LM will work over the alphabet ∆ def= Σε × Q. It will start by generating a random initial state according to the initial-state distribution λ upon reading the designated padding symbol (BOS, q0). Then, at step t of generation, it will generate the next symbol–state pair by sampling from the next-transition distribution p  qt, yt | qt−1  given the state stored in the previously generated output tuple. More formally, define BOS′ def= (BOS, q0) for some arbitrary but fixed q0 ∈Q. Further, define the static representation function R y, q  , t  def= Jy, qK ∈{0, 1}|Σε||Q| (67) 12534 as in Eq. (57). Let L be the identity transformer layer from Lemma F.1 and T the transformer with the single layer L. Then, by Lemma F.1, T (R) (BOS, q0) , (y1, q1) , . . . , yt−1, qt−1  =  JBOS, q0K⊤; Jy1, q1K⊤; · · · ; Jyt−1, qt−1K⊤ (68) for any t. Defining the transformation F (x) def= x, it therefore holds that enc (BOS, q0) , (y1, q1) , . . . , yt−1, qt−1  = F x1 t−1  = Jyt−1, qt−1K. (69) In words, enc (y<t) contains the current symbol–state pair of the PFSA. The CoT-augmented transformer LM can therefore sample the next symbol–state pair from the conditional distribution defined by the PFSA by setting the values of the output matrix E as in Thm. 4.1: E(y′,q′),(y,q) def= p q′, y′ | q  q, q′ ∈Q, y, y′ ∈Σ (70a) E(ε,q′),(BOS,q0) def= λ (q) q ∈Q (70b) E(EOS,q),(y,q) def= ρ (q) q ∈Q, y ∈Σε. 
(70c) Clearly, the identical conditional probabilities defined by the CoT-augmented transformer T result in strings over Σε × Q sampled with probabilities equal to the path probabilities from the PFSA, as in Thm. 4.1. Applying the transformation ϕ to the produced outputs will thus result in strings sampled from the PFSA, giving us a CoT-augmented transformer LM weakly equivalent to A. Lastly, since all the representations in the model are position-invariant, the constructed transformer is of constant precision. ■ F.2 Probabilistic Turing Machine Language Models RNN LMs with CoT reasoning. Next, we show that, for every PTM, its induced LM can be encoded in an unbounded precision CoT RNN LM. Theorem 4.4. For every LM induced by a non-deterministic probabilistic Turing machine, there exists a weakly equivalent CoT-augmented RNN LM without unbounded precision. Proof. We rely mainly on existing results by Nowak et al. (2023) but use the additional information afforded by the augmented output alphabet, resulting in a simpler construction and interpretation. By Nowak et al. (2023, Prop. 3.1 and Thm. 3.1), the LM families induced by PTMs and probabilistic 2PDA are weakly equivalent, so it suffices to show that a CoT RNN LM can encode any given 2PDA. We first reiterate their main result together with its condition. Definition F.2. A 2PDA P is called Σ-deterministic if, for any current state q any top symbol on its first stack γ and any output symbol from Σε, there is at most one transition with non-zero weight. Theorem F.3. Nowak et al. (2023, Thm 3.2) Every Σ-deterministic probabilistic 2PDA can be encoded in an RNN LM that can output empty tokens ε. Nowak et al. (2023, Thm. 3.2) shows that an RNN LM with output alphabet Σε can simulate any probabilistic 2PDA over Σ if it is Σ-deterministic. Let P be an arbitrary probabilistic 2PDA, with P = (Q, Σ, Γ, δ, qι, qφ). Now define another 2PDA P = (Q, ∆, Γ, δ′, qι, qφ), which generates outputs over the extended alphabet ∆ def= Q × Γ4 ε × Σε. Define the transitions, add the following transitions to δ′ in P′: δ′ def=  q γ,(q,γ1,γ2,γ3,γ4,y),γ1→γ3/w −−−−−−−−−−−−−−−−−−→ γ2→γ4 q′ | q γ,y,γ1→γ3/w −−−−−−−−→ γ2→γ4 q′ ∈δ  (71) The constructed P′ satisfies the required Σ-determinism condition, meaning that we can construct a weakly equivalent RNN LM with ε outputs. Finally, define the following regular function ϕ: ∆ε →Σε that removes all additional information post-hoc: ϕ(x) def= ( y if x = (q, γ1, γ2, γ3, γ4, y) ε if x = ε (72) 12535 The RNN LM therefore directly simulates runs from the 2PDA P′. Applying ϕ to the outputs of the RNN LM results in strings in Σ∗. Since the transformation groups all execution runs that output y ∈Σ∗and the resulting CoT-augmented LM p (cf. Def. 3.3) sums over them, we are left with a weakly equivalent LM. ■ Theorem 4.5. For any PTM-induced LM, there exists a weakly equivalent unbounded-precision CoTaugmented Transformer LMs. Proof. Here, we adapt the construction by Pérez et al. (2021) to the decoder-only case with prompt length (denoted by n in Pérez et al. (2021)) zero since we only care about generating strings from the CoT-augmented transformer LM. Moreover, rather than implementing a deterministic transition function of a Turing machine (which is simulated by a particular layer of their transformer), we implement the probabilistic transition function of a rational-valued PTM in the sampling step of the transformer LM. As in our previous constructions, this step endows the model with non-determinism. 
At a high level, we will construct a CoT-augmented transformer LM over Σ∗that will output symbols from the augmented alphabet ∆ def= Q × Γ × Σε × A × A. Here, A def= {−1, 0, 1}, where we will for conciseness identify the action LEFT with −1 and the action RIGHT with 1. Note that we, like Pérez et al. (2021), do not allow the action NOOP in our construction, but we include the value 0 in the action set as it will be useful for the starting conditions of the constructed transformer. As explained later, such an alphabet ∆contains enough information for the transformer LM to be able to reconstruct the configuration of the PTM at every time step, and thus match its conditional probability distributions. Notation. Let M = (Q, Σ, Γ, δM, qι, qφ) be a QPTM and p the LM over Σ∗it induces. In the following, we denote with qt ∈Q the state of the PTM, with vt ∈Γ the symbol written to the working tape, with at the action performed, with st the symbol read, and with yt the symbol written to the output tape, all at time step t. High-level idea of the construction. Before formally describing the components of the CoT-augmented transformer LM in separate lemmata, we give a high-level overview of the construction. The twolayer (single-head) transformer will, when computing enc  y<t  for y<t ∈∆∗, perform the following computations, very similar to those described by Pérez et al. (2021): 1. Input representations (R; Lemma F.5): The input representation function represents the input symbols y ∈∆with a multi-hot encoding (one that contains an individual one-hot encoding of each of the components qt, vt−1, yt−1, at−1, at−2) of the form30,31 rΠ  y , t  =            JqtK Jvt−1K Jat−1K at−1, at−2 0, 0 0, 0|Γ| 1, t + 1, 1 t+1, 1 (t+1)2            (73) for t ∈N≥0. The representations also include additional components (the “empty” zero values in the vector, explained below) used for processing and the positional encoding of the time step, 1, t + 1, 1 t+1, 1 (t+1)2 . This component is explained in more detail in Lemma F.5. 2. Layer 1 (L1; Lemma F.6): The first layer uses the information about the actions performed at all previous time steps (contained in the input static representations of the CoT-augmented symbols) to 30In the following, green color denotes the components that are added or computed by each of the described components. 31Notice that the symbol y ∈Σ is not part of the internal representation, since it is not required for the simulation of the PTM. It is only used to construct the final output string stored in the output tape. 12536 compute the locations of M’s head at each time step. This results in the internal representations of the form x1 t =            JqtK Jvt−1K Jat−1K at−1, at−2 c(t) t+1, c(t−1) t+1 0, 0|Γ| 1, t + 1, 1 t+1, 1 (t+1)2            (74) for t ∈N≥0, where c (t) denotes the position of M’s head at time t. This component is identical to the second layer of the transformer from Pérez et al. (2021) and is explained in more detail in Lemma F.6. 3. Layer 2 (L2; Lemma F.7): Uses the L1-computed information about the head locations at each time step to (almost) compute st—the symbol read by the head of the PTM at time step t. This results in the internal representations of the form x2 t =            JqtK Jvt−1K Jat−1K at−1, at−2 c(t) t+1, c(t−1) t+1 ℓ(t), Jvℓ(t)K 1, t + 1, 1 t+1, 1 (t+1)2            (75) for t ∈N≥0, where ℓ(t) denotes the last time step when M’s head wrote to c (t). 
This component is analogous to the third layer of the transformer from Pérez et al. (2021) and is explained in more detail in Lemma F.7. 4. Output function (F; Lemma F.8): The function F uses the information computed by the two layers of the transformer to compute the one-hot encoding of the current configuration of the PTM. In particular, this includes M’s current state (qt), the symbol read by M’s head (st), and, for reasons that we explain shortly, the action performed by M at the previous time step (at−1). This results in the representation enc  y<t  = Jqt, st, at−1K, (76) which is used for sampling the next transition of the PTM. This component resembles the output function of Pérez et al. (2021) and is explained in more detail in Lemma F.8. 5. Sampling (Lemma F.9): The representation Jqt, st, at−1K is used to index the output matrix E that contains the conditional probabilities p (· | qt, st). This can be used to sample the next augmented symbol Jqt+1, vt, yt, at, at−1K (here, the last component, at−1, equals the “input” to the sampling step). This component is explained in more detail in Lemma F.9. The idea of the construction—iteratively computing and storing the modifications to the PTM configuration in the generated string y —is therefore identical to Pérez et al.’s (2021) one. In particular, the two components that perform the bulk of the simulation—Layer 1 and Layer 2—are identical to the components from Pérez et al. (2021). We describe and show the correctness of the components in the lemmata in the rest of the section. The correctness of the construction follows from the correctness of the components. Weak equivalence. To define a CoT-augmented LM with such a transformer, we can define a transducer that projects the outputs in Q × Γ × Σε × A × A ∗onto Σ∗in the standard way by retaining only the outputs in Σε. This collapses all the executions of the transformer LM (which simulates the executions of the PTM with the same probabilities, as per Lemma F.9) yielding the same string in Σ∗, resulting in a CoT-augmented transformer LM weakly equivalent to p. 12537 Precision. The constructed transformer computes and stores position-dependent values at several points during the computation. The precision required for the representation of these values grows logarithmically with the number of computational steps. However, since the number of computational steps performed by a PTM generating a string is potentially unbounded in the length of the string (Hopcroft et al., 2001, p. 339), the transformer’s precision is subsequently unbounded as well (Nowak et al., 2023). ■ We begin with a general lemma about the disjunction of one-hot encodings. Lemma F.4. Let S1, . . . , Sn be finite sets and let Js1, . . . , snK ∈{0, 1}|S1|···|Sn| denote the one-hot encoding of the tuple (s1, . . . , sn) ∈S1 × · · · × Sn. Define the matrices WSi ∈{0, 1}|Si|×|S1|···|Sn| (77) element-wise as WSi s,(s1,...,si−1,s,si+1,...,sn) def= 1 for all sj ∈Sj, j ∈1, . . . , n, j ̸= i. (78) Then, it holds that WSiJs1, . . . , snK = JsiK. (79) Proof. Eq. (78) can equivalently be written as WSi s,(s1,...,si−1,si,si+1,...,sn) def= 1 {si = s} . (80) Indexing the elements of WSiJs1, . . . , snK directly with s ∈Si, we then compute WSiJs1, . . . , snK  s = X s′ j∈Sj j=1,...,n WSi s,(s′ 1,...,s′n)Js1, . . . , snKs′ 1,...,s′n (81a) = X s′ j∈Sj j=1,...,n 1  s′ i = s Js1, . . . , snKs′ 1,...,s′n (81b) = X s′ j∈Sj j=1,...,n 1  s′ i = s 1  s′ 1 = s1, . . . 
, s′ n = sn (81c) = X s′ i∈Si 1  s′ i = s X s′ j∈Sj j=1,...,n, j̸=i 1  s′ 1 = s1, . . . , s′ n = sn | {z } =1 (81d) = X s′ i∈Si 1  s′ i = s , (81e) which is the definition of the elements of JsiK. The equality in Eq. (81d) follows from the fact that the summand is non-zero exactly when s′ j = sj for all j ̸= i. ■ Lemma F.5 (Input representations). Define the following static representation function of the CoTaugmented symbols y def= q, v, y, a′, a  ∈∆: rΠ  y , t  def=      WJq, v, y, a′, aK 0, 0 0, 0|Γ| 1, t + 1, 1 t+1, 1 (t+1)2     ∈RD (82) where Jq, v, y, a′, aK ∈{0, 1}|Q||Γ||Σε||A||A| denotes the one-hot encoding of the tuple q, v, y, a′, a  , W def= WQ; WΓ; WA; A′; A  ∈RD×(|Q||Γ||Σε||A||A|) (83) 12538 for matrices WQ, WΓ, WA from Lemma F.4, A′, A ∈R1×D defined as32 A′ 1,(q,v,y,a′,a) def= a′, q ∈Q, v ∈Γ, y ∈Σε, a, a′ ∈A (84a) A1,(q,v,y,a′,a) def= a, q ∈Q, v ∈Γ, y ∈Σε, a, a′ ∈A, (84b) and D def= [|Q| + |Γ| + |A| + 2] + 2 + [1 + |Γ|] + 4. Then, it holds that rΠ  y , t  =            JqtK JvK JaK a′, a 0, 0 0, 0|Γ| 1, t + 1, 1 t+1, 1 (t+1)2            (85) Proof. The first part (computing the one-hot encodings of ∼δ, v, and a′) follows from the construction of the matrices WQ, WΓ, and WA from Lemma F.4. The second part (computing the values a′ and a with the matrices A′ and A) follows from the definition of the matrices A and A′: By construction, the values of a′ and a will be copied from the one-hot encodings into the resulting entry of the vector. ■ We now describe the two layers of the transformer that compute the quantities required to be able to determine the current configuration of the PTM at each time step. Here, we heavily rely on the construction by Pérez et al. (2021). Let M = (Q, Σ, Γ, δM, qι, qφ) be a rationally-weighted PTM. Let t ∈N and define c (t) def= t−1 X j=0 aj (86) as the position of the head of the PTM at time t. Further, define the set L (t) def= {j | c (j) = c (t) , j ∈[t −1]}. (87) L (t) contains the time steps at which the PTM M so far visited (and thus wrote to) the tape cell read by M at time t. Then, define ℓ(t) as ℓ(t) def= ( max L (t) if |L (t) | > 0 t otherwise. (88) In words, ℓ(t) denotes the time step at which the PTM M last visited (and thus wrote on) the tape cell read by M at time t if this cell was visited yet. Otherwise, ℓ(t) equals t. Now, define the BOS symbol over the augmented alphabet Γ as BOS def= (q0, ⊥, BOS, 0, 0) ∈Q × Γ × Σε × A × A. (89) Let qι, q1, . . . , qT , ⊥, v1, . . . , vT , BOS, y1, . . . , yT , and a0, a1, . . . , aT be the sequences of states visited, symbols written on the processing tape, symbols written on the output tape, and actions performed by M in the first T steps. Define a−1 = a0 def= 0, x0 t def= ( rΠ  BOS , t  if t = 0 rΠ ((qt, vt, yt, at−1, at−2) , t) otherwise, (90) 32Recall that we can identify the actions of the PTM with the integers −1, 0, and 1. 12539 and X0 def= x0 0; x0 1; · · · ; x0 T  . (91) These will be the inputs to the transformer layers and thus,33 R  y<t  def= x0 0; x0 1; · · · ; x0 t−1  . (92) With this, we can describe and prove the correctness of the first layer of the transformer we are building. Lemma F.6 (Layer 1). There exists a transformer layer L1 that, given the inputs X0, computes the values c(t) t+1 and c(t−1) t+1 for all t = 1, . . . , T. 
More precisely, denoting X1 def= x1 0; x1 1; · · · ; x1 T  = L1 X0 , (93) it holds that x1 t contains the entries containing the values c(t) t+1 and c(t−1) t+1 : x1 t =            JqtK Jvt−1K Jat−1K at−1, at−2 c(t) t+1, c(t−1) t+1 0, 0|Γ| 1, t + 1, 1 t+1, 1 (t+1)2            (94) Proof. This follows from Pérez et al. (2021, Lemma 9), which we summarize here.34 Concretely, L1 is implemented by a transformer layer with trivial (zero-valued) query and key transformations and a value transformation V that copies the values of the actions at−1 and at−2 to the entry that will hold the values c(t) t+1 and c(t−1) t+1 . Since the current head location c (t) is simply the sum of those values (cf. Eq. (86)), attending to all previous positions results in Eq. (94). Formally, we define Q1 def= 0D×D, K1 def= 0D×D, (95a) f (q, k) def= ⟨q, k⟩ (95b) V1 n′,: = V1 n,: =           0|Q| 0|Γ| 0|A| 0, 0 1, 1 0, 0|Γ| 0, 0, 0, 0           ⊤ (95c) O1 = 0D×D (95d) Here, n′ and n refer to the indices of the rows at which the values c(t) t+1 and c(t−1) t+1 will be stored (that is, the rows below the values of at−1 and at−2). All other rows of V1 are zero. Then, for t ∈N≥0, it holds that f (qt, kj) = f Q x0 t  , K x0 j  = f Q1x0 t , K1x0 j  = ⟨0, 0⟩= 0 (96) 33Note that since the transformer over the augmented alphabet is deterministic, this representation is unique. 34More precisely, since their construction stores the actions at and at−1 (the former is possible because the action is not sampled but deterministically computed based on the configuration of the PTM), their layer computes the values c(t) t+1 and c(t) t+1. As we show later in Lemma F.7, this does not affect the correctness of the construction. 12540 for all j ≤t, resulting in st = hardmax (0) = 1 t+11t, and V x0 j  =           0|Q| 0|Γ| 0|A| 0, 0 aj−1, aj−2 0, 0|Γ| 0, 0, 0, 0           . (97) This results in Att (qt, Kt, Vt) = t X j=0 sjV x0 j  (98a) = t X j=0 1 t + 1V x0 j  (98b) = 1 t + 1 t X j=0 V x0 j  (98c) = 1 t + 1 t X j=0           0|Q| 0|Γ| 0|A| 0, 0 aj−1, aj−2 0, 0|Γ| 0, 0, 0, 0           (98d) = 1 t + 1           0|Q| 0|Γ| 0|A| 0, 0 c (t) , c (t −1) 0, 0|Γ| 0, 0, 0, 0           (98e) =            0|Q| 0|Γ| 0|A| 0, 0 c(t) t+1, c(t−1) t+1 0, 0|Γ| 0, 0, 0, 0            . (98f) 12541 Furthermore, we have that at = Att (qt, Kt, Vt) + x0 t =            0|Q| 0|Γ| 0|A| 0, 0 c(t) t+1, c(t−1) t+1 0, 0|Γ| 0, 0, 0, 0            +            JqtK Jvt−1K Jat−1K at−1, at−2 0, 0 0, 0|Γ| 1, t + 1, 1 t+1, 1 (t+1)2            =            JqtK Jvt−1K Jat−1K at−1, at−2 c(t) t+1, c(t−1) t+1 0, 0|Γ| 1, t + 1, 1 t+1, 1 (t+1)2            (99) x1 t = O (at) + at = 0D +            JqtK Jvt−1K Jat−1K at−1, at−2 c(t) t+1, c(t−1) t+1 0, 0|Γ| 1, t + 1, 1 t+1, 1 (t+1)2            =            JqtK Jvt−1K Jat−1K at−1, at−2 c(t) t+1, c(t−1) t+1 0, 0|Γ| 1, t + 1, 1 t+1, 1 (t+1)2            , (100) which is what we needed to show. ■ Lemma F.7 (Layer 2). There exists a transformer layer L2 that, given the outputs X1 of L1 from Lemma F.6, computes the values ℓ(t) and Jvℓ(t)K for all t = 1, . . . , T. 
More precisely, denoting X2 def= x2 0; x2 1; · · · ; x2 T  = L2 X1 , (101) it holds that x2 t contains the entries containing the values ℓ(t) + 1 and Jvℓ(t)K: x2 t =            JqtK Jvt−1K Jat−1K at−1, at−2 c(t) t+1, c(t−1) t+1 ℓ(t) + 1, Jvℓ(t)K 1, t + 1, 1 t+1, 1 (t+1)2            . (102) Proof. This follows from Pérez et al. (2021, Lemma 10). The idea of the construction is for the selfattention mechanism at time t to attend to exactly the entry from time step ℓ(t) + 1 (i.e., to compute the query and key vectors such that argmax (st) = ℓ(t) + 1) since that entry will contain the information about the symbol written at the time step before—at ℓ(t). Then, the values ℓ(t) and vℓ(t) are obtained by copying the corresponding values of the positional encoding and the written symbol from that time step. 12542 Formally, we define Q2 def= · · · c(t) t+1 z}|{ · · · 1 t+1 z}|{ 1 (t+1)2 z}|{ · · · ! 1 1 1 3 (103a) K2 def= · · · c(t−1) t+1 z}|{ · · · 1 t+1 z}|{ 1 (t+1)2 z}|{ · · · ! 1 −1 1 3 (103b) f (q, k) def= −|⟨q, k⟩| (103c) V2 def= · · · Jvt−1K z}|{ · · · t+1 z}|{ · · ·           ... 1 ℓ(t) + 1 I|Γ| Jvℓ(t)K ... (103d) O1 = 0D×D (103e) Pérez et al. (2021, Lemma 10) show that, given the parameters above, the output of the scoring function f is maximized at the entry ℓ(t) ∈{0, . . . , t}.35 Since V2 copies the second value from the positional encoding of the symbol at time step ℓ(t), which is ℓ(t)+1, this entry appears in the output of the attention mechanism. Furthermore, V2 also copies the value Jvℓ(t)K from the same entry. This, together with the zero-valued function O and residual connections, results in Eq. (102). ■ Lemma F.8 (Correctness of the output function). There exists an MLP F that, given the outputs X2 of L2, computes the one-hot encoding of the current configuration of the PTM. More concretely, it holds that F x2 t  = Jqt, st, at−1K. (104) Proof. Here, we define a function F similar to that of Pérez et al. (2021, Lemma 11), but with an additional layer that handles the addition of the ⊥symbol not handled by Pérez et al. (2021). The logic nonetheless remains the same: F receives the output of L2 of the form x2 t =            JqtK Jvt−1K Jat−1K at−1, at−2 c(t) t+1, c(t−1) t+1 ℓ(t) + 1, Jvℓ(t)K 1, t + 1, 1 t+1, 1 (t+1)2            . (105) and (1) copies the value of qt, (2) copies the value of at−1 (3) compares the value of ℓ(t) to t to determine whether st = ⊔or st = vℓ(t), (4) compares the value of t to 0 to determine whether st = ⊥or st = vℓ(t). 35More precisely, their construction results in the maximum being at ℓ(t + 1), since layer 1 in their construction, in contrast to ours, computes the values c(t+1) t+1 and c(t) t+1, shifting all computations by one step. This is the consequence of the aforementioned difference between their deterministic and our probabilistic framework. 12543 Concretely, F will take the form of a three-layer MLP F (x) def= ReLU(W5ReLU(W4ReLU(W3ReLU(W2ReLU(W1x + b1) + b2) + b3) + b4) + b5) (106) where W1 ∈R(|Q|+|Γ|+|A|+|Γ|+|Γ|+2)×D (107a) W2 ∈R(|Q|+|Γ|+|A|+|Γ|+|Γ|+2)×(|Q|+|Γ|+|A|+|Γ|+|Γ|+2) (107b) W3 ∈R(|Q|+|Γ|+|A|+|Γ|+|Γ|)×(|Q|+|Γ|+|A|+|Γ|+|Γ|+2) (107c) W4 ∈R(|Q|+|Γ|+|A|)×(|Q|+|Γ|+|A|+|Γ|+|Γ|) (107d) W5 ∈R(|Q||Γ||A|)×(|Q|+|Γ|+|A|) (107e) (107f) are the weights of the MLP and b1, . . . , b5 are the corresponding biases. 
For conciseness, we will denote C1 def= |Q| (108a) C2 def= C1 + |Γ| (108b) C3 def= C2 + |A| (108c) C4 def= C3 + |Γ| (108d) C5 def= C4 + |Γ| (108e) C6 def= C5 + 2 (108f) and D1 def= |Q| (109a) D2 def= D1 + |Γ| (109b) D3 def= D2 + |A| (109c) D4 def= D3 + 2 (109d) D5 def= D4 + 2 (109e) D6 def= D5 + 1 + |Γ| (109f) D7 def= D6 + 4 (109g) Then, we define W1 = 1:D1 z}|{ D1:D2 z}|{ D2:D3 z}|{ · · · D5+1 z}|{ D6+1 z}|{ D6+2 z}|{ · · ·                     I|Q| I|Γ| I|A| 1 1 −1 2 −1 (110) b1 =                     0|Q| 0|Γ| 0|A| J⊔K J⊥K 0 0 (111) 12544 W2 = 1:C1 z}|{ C1:C2 z}|{ C2:C3 z}|{ C3:C4 z}|{ C4:C5 z}|{ C5+1 z}|{ C5+2 z}|{                     I|Q| I|Γ| I|A| I|Γ| I|Γ| 1 −1 1 (112) b2 =                     0|Q| 0|Γ| 0|A| 0|Γ| 0|Γ| 0 0 (113) W3 = 1:C1 z}|{ C1:C2 z}|{ C2:C3 z}|{ C3:C4 z}|{ C4:C5 z}|{ C5+1 z}|{ C5+2 z}|{           I|Q| I|Γ| −1|Γ| −1|Γ| I|A| I|Γ| 1|Γ| I|Γ| 1|Γ| (114) b3 =           0|Q| 0|Γ| 0|A| −1|Γ| −1|Γ| (115) W4 = 1:C1 z}|{ C1:C2 z}|{ C2:C3 z}|{ C3:C4 z}|{ C4:C5 z}|{     I|Q| I|Γ| I|Γ| I|Γ| I|A| (116) b4 =     0|Q| 0|Γ| 0|A| (117) Lastly, we set W5 and b5 such that the the function x 7→ReLU W5x + b5 computes the one-hot encoding of the three input one-hot encodings JqtK, JstK, Jat−1K by implementing the logic AND operations, as in Svete and Cotterell (2024, Lemma B.1). 12545 By construction, we get that y1 = ReLU W1x2 t + b1 (118a) =           JqtK Jvt−1K Jat−1K J⊔K J⊥K ReLU(ℓ(t) + 1 + 1 −(t + 1)) ReLU(2 −(t + 1))           (118b) =           JqtK Jvt−1K Jat−1K J⊔K J⊥K ReLU(ℓ(t) + 1 −t) ReLU(1 −t)           (118c) =           JqtK Jvt−1K Jat−1K J⊔K J⊥K 1 {ℓ(t) ≥t} 1 {t < 1}           (118d) =           JqtK Jvt−1K Jat−1K J⊔K J⊥K 1 {ℓ(t) = t} 1 {t = 0}           (118e) y2 = ReLU W2y1 + b2 (119a) =           JqtK Jvt−1K Jat−1K J⊔K J⊥K ReLU(1 {ℓ(t) = t} −1 {t = 0}) 1 {t = 0}           (119b) (119c) =           JqtK Jvt−1K Jat−1K J⊔K J⊥K 1 {1 {ℓ(t) = t} &1 {t > 0}} 1 {t = 0}           (119d) 12546 y3 = ReLU W3y2 + b3 (120a) =       JqtK ReLU Jvt−1K −(1 {ℓ(t) = t ∧t > 0} + 1 {t = 0}) 1|Γ|  Jat−1K ReLU J⊔K + 1 {ℓ(t) = t ∧t > 0} 1|Γ| −1|Γ|  ReLU J⊥K + 1 {t = 0} 1|Γ| −1|Γ|        (120b) =       JqtK 1 {¬ (ℓ(t) = t ∧t > 0) ∧¬ (t = 0)} Jvt−1K Jat−1K 1 {ℓ(t) = t ∧t > 0} J⊔K 1 {t = 0} J⊥K       (120c) =       JqtK 1 {(ℓ(t) < t ∨t = 0) ∧t > 0} Jvt−1K Jat−1K 1 {ℓ(t) = t ∧t > 0} J⊔K 1 {t = 0} J⊥K       (120d) =       JqtK 1 {ℓ(t) < t ∧t > 0} Jvt−1K Jat−1K 1 {ℓ(t) = t ∧t > 0} J⊔K 1 {t = 0} J⊥K       (120e) y4 = ReLU W4y3 + b4 (121a) =   JqtK 1 {ℓ(t) < t ∧t > 0} Jvt−1K + 1 {ℓ(t) = t ∧t > 0} J⊔K + 1 {t = 0} J⊥K Jat−1K   (121b) def=   JqtK JwtK Jat−1K   (121c) Since the three logical expressions in Eq. (121b) are complementary, it holds that the second component of y4 holds either the value of vt−1, ⊔, or ⊥depending on the value of ℓ(t) and t. This is exactly the symbol that will be read by the PTM at time step t, i.e., st: If ℓ(t) < t (which means that cell c (t) has been visited before), then st = vt−1; if ℓ(t) = t and t > 0 (which means that cell c (t) has not been visited before, and t > 0), then st = ⊔; and if t = 0 (meaning that the PTM just started executing and is still reading the initial symbol ⊥), then st = ⊥. By the construction of W5 and b5, we get that y5 = Jqt, st, at−1K. 
■ Lemma F.9 (Correctness of the sampling step). Define the output matrix E ∈R|Q||Γ||Σε||A||A|×|Q||Γ||A| as E(q′,v,y,a′,a),(q,s,a) def= ( p (q′, v, y, a′, a | q, s) if y ̸= EOS p (qφ, v, y, a′, a | q, s) otherwise (122) for q, q′ ∈Q, v ∈Γ, y ∈Σε, a ∈A, a′ ∈{−1, 1}, and s ∈Γ. Then, it holds that E F x2 t  = p (· | qt, st) . (123) Proof. Follows directly from Lemma F.8 and the construction of E:By Lemma F.8, we have that F x2 t  = Jqt, st, at−1K. (124) 12547 By construction of E, we have that E F x2 t  (q′,v,y,a′,a) = (E Jqt, st, at−1K)(q′,v,y,a′,a) (125a) = E(qt,st,at−1),(q′,v,y,a′,a) (125b) = ( p (q′, v, y, a′, a | q, s) if y ̸= EOS p (qφ, v, y, a′, a | q, s) otherwise . (125c) ■ 12548
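The sampling step of Lemma F.9 has a particularly simple computational reading: indexing the output matrix E with the one-hot vector of (qt, st, at−1) selects the single column holding the PTM's conditional distribution p(· | qt, st), from which the next augmented symbol is drawn. The sketch below (Python) is only illustrative; the dictionary ptm_cond standing in for these conditional distributions is an assumed representation, and the special handling of EOS in Eq. (122) is noted but not spelled out.

```python
import random

def sample_next_augmented_symbol(ptm_cond, q_t, s_t, a_prev):
    """Illustrative sketch of the sampling step in Lemma F.9.

    ptm_cond: dict mapping (q, s) to a list of ((q_next, v, y, a_new), w)
              pairs whose weights sum to one (an assumed representation of
              the PTM's transition distribution p(. | q, s)).
    Returns the next augmented symbol (q_next, v, y, a_new, a_prev); the
    last component simply copies the "input" action, as in Eq. (122).
    """
    column = ptm_cond[(q_t, s_t)]          # the column E[:, (q_t, s_t, a_prev)]
    outcomes, weights = zip(*column)
    q_next, v, y, a_new = random.choices(outcomes, weights=weights, k=1)[0]
    # Eq. (122) additionally redirects the state to the final state q_phi
    # whenever y == EOS; that case is omitted here for brevity.
    return (q_next, v, y, a_new, a_prev)
```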
2024
676
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12549–12561 August 11-16, 2024 ©2024 Association for Computational Linguistics Analyzing LLM Behavior in Dialogue Summarization: Unveiling Circumstantial Hallucination Trends Sanjana Ramprasad♢ Elisa Ferracane♣ Zachary C. Lipton♣ ♢Northeastern University ♣Abridge AI {ramprasad.sa}@northeastern.edu {elisa,zack}@abridge.com Abstract Recent advancements in large language models (LLMs) have considerably advanced the capabilities of summarization systems. However, they continue to face concerns about hallucination. While prior work has evaluated LLMs extensively in news domains, most evaluation of dialogue summarization has focused on BART-based models, leaving a gap in our understanding of their faithfulness. Our work benchmarks the faithfulness of LLMs for dialogue summarization, using human annotations and focusing on identifying and categorizing span-level inconsistencies. Specifically, we focus on two prominent LLMs: GPT-4 and Alpaca-13B. Our evaluation reveals subtleties as to what constitutes a hallucination: LLMs often generate plausible inferences, supported by circumstantial evidence in the conversation, that lack direct evidence, a pattern that is less prevalent in older models. We propose a refined taxonomy of errors, coining the category of "Circumstantial Inference" to bucket these LLM behaviors. Using our taxonomy, we compare the behavioral differences between LLMs and older fine-tuned models. Additionally, we systematically assess the efficacy of automatic error detection methods on LLM summaries and find that they struggle to detect these nuanced errors. To address this, we introduce two prompt-based approaches for fine-grained error detection that outperform existing metrics, particularly for identifying "Circumstantial Inference." 1 1 Introduction Considerable progress has been made in summarization using large language models (LLMs) (Goyal et al., 2022; Zhang et al., 2023). However, the challenge of so-called “hallucinations”, characterized in this context as statements in summaries 1The dataset can be downloaded from https: //github.com/sanjanaramprasad/circumstantial_ inference.git Greg: Hi, honey. I need to stay after hours :-( Betsy: Again? Greg: I''m sorry! Betsy: What about Johnny? Greg: Well, could you pick him up? Betsy: What if I can't? Greg: Betsy? Betsy: What if I can't? Greg: Can't you, really? Betsy: I can’t. Today I need to work long hours as well. Tuesdays are your days in the kindergarten. Dialogue Snippet Summary: GPT-4: Greg informs Betsy he needs to stay after work, leading to a conflict as their son Johnny has to be picked up from kindergarten, which usually falls on Greg's responsibility on Tuesdays. Betsy also can't do it as she's working long hours. Figure 1: In the example provided, GPT-4 infers that the speakers are discussing "their son." Although this inference seems plausible given the circumstantial evidence in the conversation, it lacks direct evidence. that do not have direct evidence in the source material persists. As a result, evaluation of these summaries is an active area of research. In prior research, news articles have been the main testbed for LLM-generated summary evaluation (Zhang et al., 2023; Yang et al., 2023). Dialogue summarization remain less explored, with prior works mostly focused on smaller fine-tuned models (Zhu et al., 2023; Gao et al., 2023; Wang et al., 2022). 
In this work, we close the evaluation gap, focusing our analysis on LLM summaries of chit-chat style dialogues. We obtain fine-grained inconsistency annotations for summaries generated (zero-shot) by two prominent LLMs (GPT-4 (Luo et al., 2023) and Alpaca-13B (Taori et al., 2023)) and across two summarization datasets (SAMSum Gliwa et al. (2019) and DialogSum Chen et al. (2021)). In the domain of dialogues, a further gap exists in understanding the differences between summaries generated by LLMs and those generated by smaller fine-tuned models. In the news domain, prior work 12549 has found that LLM-generated summaries have fewer inconsistencies (Goyal et al., 2022; Zhang et al., 2023). Work done by Tang et al. (2022), also in the news domain, notes varying error distributions across different model categories. In our work in the dialogue domain, we compare differences in error rates and analyze the categories of errors for summaries of dialogues with fine-tuned models versus summaries with LLMs. As in the news domain, we find that LLM-generated summaries have fewer inconsistencies. Surprisingly, our analysis reveals that over 30% of LLM-generated summaries contain inconsistencies, contrasting sharply with the inconsistency rate of less than 5% in GPT-generated news summaries (Zhang et al., 2023). To further elucidate the differences between LLMs and fine-tuned models, we annotate spans with error categories. Previous work has primarily relied on part-of-speech-based tags for error classification (Wang et al., 2022; Zhu et al., 2023; Gao et al., 2023). However, complexities inherent in LLM-generated summaries, often lengthier and more intricate, do not neatly align with error categories based solely on part of speech, warranting alternative strategies for a more meaningful categorization. Hence, our work proposes a refined taxonomy integrating existing error types. We further introduce a new error category specific to LLM behavior: "Circumstantial Inference." This category stems from the observation that LLMs frequently produce statements that appear plausible based on circumstantial (but not direct) evidence in the dialogues, an aspect hitherto unexplored. In particular, LLMs tend to produce statements that may be circumstantially implied based on contextual cues in the conversation but not explicitly stated as seen in Figure 1. Although these inferences are not directly stated and can be inherently unsupported, they can still be useful in some instances, especially when summarizing ambiguous dialogues. However, the appropriateness of such inferred details varies depending on context and domain, highlighting the need for further investigation. In addition, there is limited understanding regarding the automatic detection of the mentioned error types. Therefore, we systematically evaluate the performance of state-of-the-art error detectors on LLM-generated dialogue summaries. We also introduce two prompt-based methods for fine-grained error detection, which notably outperform all prior state-of-the-art error detectors, particularly in identifying the newly introduced error type, "Circumstantial Inference." In summary, our primary contributions are as follows: 1. We bridge a gap in understanding LLM effectiveness for dialogue summarization by collecting fine-grained human annotations that highlight inconsistencies and make the benchmark publicly available. 2. 
We propose a refined taxonomy for error categorization of LLM-generated summaries, including a new error category called "Circumstantial Inference" that captures the tendency of LLMs to produce plausible hallucinations based on conversation context.

3. We examine differences in behavior in dialogue summarization between LLMs and fine-tuned models by comparing error rates and types.

4. We introduce two prompt-based methods for fine-grained error detection, which notably outperform existing metrics. These methods excel even in detecting the recently identified error type "Circumstantial Inference." Additionally, we evaluate state-of-the-art error detectors on model-generated summaries across model categories and error types, unveiling their effectiveness and limitations.

2 Human Evaluation: Zero-shot Prompted Dialogue Summaries

We aim to compare the consistency of zero-shot prompted LLM-generated dialogue summaries with that of summaries generated by smaller fine-tuned models.2 To accomplish this, we conduct human annotations to identify inconsistent spans generated by both GPT-4 (OpenAI et al., 2023) and Alpaca-13b (Taori et al., 2023). Specifically, we direct annotators to spot inconsistencies in summaries, marked by spans that lack evidence in the source text or distort information from it. Our evaluation is carried out on dialogue summarization datasets previously used for benchmarking fine-tuned summarization models.

2We deliberately choose zero-shot instead of few-shot to better understand the model's inherent capabilities.

Linguistic Category | Summary Excerpt | Dialogue Excerpt
Circumstantial Inference | Cameron is unable to bring a video game for their daughter Peyton. | Peyton: I have been asking you to bring that video game for me / Cameron: Honey, I am not having enough time to come home.
Logical Error | Jane is worried about the travel time and suggests they meet later | Steven: the road is new, we will make it / Jane: I don't want to stress out, let's meet at 4:30 instead of 5, ok?
World Knowledge | #Person1# plans to vote for Joe Biden instead. | #Person1#: I will vote for Biden anyway.
Referential Error | Person1 said that Person2 could call or email them. | #Person2#: Please call me or send e-mail.
Figurative Misinterpretation | Alyssa likes Fergie's national anthem. | Alyssa: Have you seen Fergies national anthem? / Derek: This is not normal. I saw it last week / Alyssa: The best part is that she acts like she nailed it.

Table 1: Examples of linguistic categories for inconsistencies in red between the LLM-generated summaries and the dialogues.

2.1 Datasets

We perform human annotations on two prominent summarization datasets: SAMSum (Gliwa et al., 2019) and DialogSum (Chen et al., 2021). SAMSum comprises artificially generated, concise written conversations crafted by linguists, centering around everyday topics. Conversely, DialogSum presents a corpus of naturally occurring spoken dialogues reflecting real-life contexts. To facilitate comparisons with earlier fine-tuned models, we annotate the same set of data points from previous benchmark studies, as outlined below:

a) Reference Matters (RefMatters): Introduced by Gao et al. 2023, this dataset offers factual annotations for summaries generated on dialogues in SAMSum and DialogSum. The annotated summaries include outputs from four fine-tuned summarization models, addressing eight distinct types of factual errors: Entity, Predicate, Circumstance, Coreference, Discourse Link, Out of Article, Grammatical, and Others.
b) FacEval Dataset: Detailed by Wang et al. 2022, this dataset provides annotations for BARTbased models applied to the SAMSum dataset. It delineates six error types, namely Subject Object Error, Pronoun Error, Negation Error, Particulars Error, Hallucination Error, and Other Error. Notably, there exists a small overlap in data points with RefMatters. 2.2 Models We assess the performance of two prevalent Large Language Models (LLMs) in the context of dialogue dialogue summarization: (1) GPT-4 (OpenAI et al., 2023) utilizing the gpt-4-32k-0613 snapshot, and (2) Alpaca-13b (Taori et al., 2023). For both models, we use the default settings and prompt zero-shot using the following template to generate summaries: Generate a summary of the following dialogue snippet: {{Dialogue}} Our evaluation compares our collected annotations against the inconsistency annotations from previous benchmarks. Specifically, we use annotations for BART (Lewis et al., 2020), UniLM (Dong et al., 2019), MV-BART (Chen and Yang, 2020), and CODS (Wu et al., 2021) from the RefMatters Benchmark. Additionally, we consider annotations for BART (Lewis et al., 2020), MV-BART (Chen and Yang, 2020), CondigSum-BART (Liu et al., 2021a), and Coref-BART (Liu et al., 2021b) in the FacEval dataset. Henceforth, we refer to the above models as FT-Summ, representing smaller fine-tuned summarization models, and the zero-shot prompt-based models GPT-4 and Alpaca-13B as LLM. 2.3 Fine-grained inconsistency annotation We perform two rounds of annotations to identify inconsistencies in dialogue summaries. 12551 Error Annotation In the first phase, we enlist two linguist factcheckers from Upwork3 to assess summary consistency. Inconsistent summary sections are identified as those that conflict with or inaccurately represent dialogue information or lack sufficient evidence. This definition aligns with criteria from prior studies on news summarization (Huang et al., 2020; Maynez et al., 2020; Goyal and Durrett, 2021; Cao et al., 2022). Inter-annotator agreement, measured using pairwise F-1 metric, results in a substantial agreement of 66.94%. Further annotation instructions are detailed in Appendix A. Error Categorization In the second round of annotations, an author of the paper who is an expert linguist meticulously categorized the errors to attain more granularity. We find that traditional methods of error categorization that have relied heavily on part-of-speech tags (Wang et al., 2022; Zhu et al., 2023; Gao et al., 2023) prove less effective when dealing with summaries generated by LLMs due to their tendency to exhibit increased abstraction and inference. This makes it challenging to align inconsistent spans with specific part-of-speech-based categories. Consequently, we propose a taxonomy of errors which we outline in the subsequent section. Taxonomy of Errors We describe the taxonomy of errors identified in this work below and provide examples in Table 1. Logical Error: This category identifies inaccuracies in dialogue summaries. Our annotations highlight three main types of logical errors commonly found in summaries generated by LLMs: a) Event misordering, where the summary presents an incorrect chronological sequence due to wrong word usage or sentence order. b) Lack of commonsense, where models incorrectly reason through information that should be obvious. c) Missed detail, where the summary would be correct if not for the omission of important information. 
FT-Summ models also exhibit logical errors in this category, including inaccurate negations, wrong verbs, or incorrect word senses. Circumstantial Inference: We introduce this new category not explored in prior work and inspired by Grice’s Maxim of Conversation for Quantity, which states that cooperative speakers make 3https://www.upwork.com/ contributions that are sufficiently but not overly informative (Grice, 1975). Speakers intentionally omit information deemed shared knowledge. When the language model draws inferences based on circumstantial but not direct evidence in the conversation, we label this as a circumstantial inference error. While traditionally viewed as an inconsistency, we contend that in open-domain conversations, such circumstantial inferences may be reasonable. In the example listed in Table 1, Cameron addresses Peyton as "Honey" plausibly because they both know Peyton is Cameron’s daughter (and explicitly stating that would violate the maxim of Quantity). The importance of these inconsistencies depends on the context. For example, in doctor-patient conversations, inferring a patient has diabetes from blood sugar discussions is more consequential than inferring general familial relationships. World Knowledge: This error type constitutes a distinct subset of Out-of-Article Errors, wherein the inaccuracies involve real-world facts. For instance, the summary might include the full name of a public figure not explicitly mentioned in the dialogue (see Table 1). Referential Errors: This error resembles subject-object errors in prior research (Wang et al., 2022; Gao et al., 2023). However, LLMs show intricate misattributions, unlike FT-Summ models, where referential errors involve swapped entities. This complexity challenges previous straightforward categorizations. Figurative Misrepresentation This category of error occurs when metaphors, sarcasm, or jokes in the content are mistaken for literal statements in summaries, altering the intended meaning. Nonsensical errors: We consider grammatical errors in BART-generated summaries, as well as instances where language models continue a prompt or repeat instructions after generating a summary as Nonsensical. 2.4 Evaluation Results Error Rates Figure 2 depicts error rates for fine-tuned models (FT-Summ) and large language models (LLMs) applied to dialogue summarization datasets. Our results indicate that GPT-4 exhibits fewer inconsistencies in dialogue summarization compared to fine-tuned models. However, this improvement is smaller than observed in prior research on news 12552 GPT-4 Alpaca-13B BART MV-BART Coref-BART CondigSum-BART UniLM CODS 0.23 0.40 0.35 0.46 0.45 0.47 0.46 0.42 Figure 2: Each bar in this plot depicts the proportion of total summaries with inconsistencies across different model-generated summaries where GPT-4 performs the best (lower means fewer inconsistencies). Circumstantial Inference Logical Error World Knowledge Referential Error Figurative Misinterpretation Nonsensical 0.0 0.1 0.2 0.3 0.4 0.5 Proportion of Errors GPT-4 Alpaca-13B BART Figure 3: Error category proportions for each model in the dataset (lower values indicate fewer occurrences of specific categories). The more challenging circumstantial inference errors are common in GPT-4 but hardly present in BART. summarization, where GPT-3 achieved an error rate of less than 5% (Zhang et al., 2023). Conversely, in our case approximately 23% of GPT-4 summaries across all dialogue datasets display inconsistencies. 
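The aggregate quantities reported in Figures 2 and 3 follow directly from the span-level annotations. The sketch below (Python) shows one way to compute them; the record format (one dictionary per annotated summary with its model name, annotated inconsistent spans, and their categories) is an assumption made for illustration, not the released schema.

```python
from collections import Counter, defaultdict

def error_statistics(annotations):
    """Per-model inconsistency rates (Figure 2) and per-category error
    proportions (Figure 3) from span-level annotations.

    annotations: iterable of records like
        {"model": "GPT-4", "summary_id": 17,
         "spans": [(12, 35)], "categories": ["Circumstantial Inference"]}
    """
    all_summaries = defaultdict(set)
    inconsistent = defaultdict(set)
    category_counts = defaultdict(Counter)

    for rec in annotations:
        all_summaries[rec["model"]].add(rec["summary_id"])
        if rec["spans"]:  # at least one annotated inconsistent span
            inconsistent[rec["model"]].add(rec["summary_id"])
            category_counts[rec["model"]].update(rec["categories"])

    inconsistency_rate = {m: len(inconsistent[m]) / len(all_summaries[m])
                          for m in all_summaries}
    category_share = {m: {c: n / sum(cnt.values()) for c, n in cnt.items()}
                      for m, cnt in category_counts.items()}
    return inconsistency_rate, category_share
```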
Furthermore, Alpaca-generated summaries generally show lower inconsistency compared to most fine-tuned models, but are surpassed by BART. On further analysis (shown in Appendix B), we find that Alpaca outperforms the fine-tuned models (including BART) on the DialogSum benchmark (Gao et al., 2023), but the opposite is true on the SAMSum benchmarks (Gao et al., 2023; Wang et al., 2022). Differences in Alpaca-generated summary quality across datasets may stem from variances in dialogue and summary features, to which larger language models may be less sensitive. DialogSum uses real spoken dialogues with multiple turns, potentially resembling pre-training data, while SAMSum involves synthetic written conversations. Fine-grained Error Category Distribution In the investigation of fine-grained categories, we conduct annotations on summaries generated by GPT-4 and Alpaca-13b for Large Language Model (LLM) models and BART within the framework of FT-Summ. We choose BART for annotation due to its consistent superiority in consistency compared to other fine-tuned summarization models, as evidenced by its performance across various datasets (see Figure 2). One key discovery (shown in Figure 3) is the prevalence of Circumstantial Inferences in LLMgenerated summaries. Roughly 38% of errors fall into this category, which describes cases where assumptions are made based on circumstantial evidence within the conversation. Interestingly, these errors are rare in BART-generated summaries, constituting only 1% of all errors. We also observe several error categories with lower prevalence in LLMs compared to FT-Summ. Notably, LLM summaries consistently lack grammatical errors, presenting coherent and wellwritten summaries. However, we note a similar error type specific to LLMs—prompt errors. We group both LLM-based prompt errors and FTSumm-based grammatical errors as "Nonsensical" in Fig 3. Interestingly, GPT-4 exhibits no instances of nonsensical text, whereas BART shows a higher prevalence of such cases compared to Alpaca. Our analysis (shown in Fig 3) also uncovers a considerable reduction in logical errors, specifically 17%, in the case of LLM errors, in contrast to FTSumm, where over 50% of errors manifest as logical errors. This noticeable decrease signifies the superior proficiency of LLMs over FT-Summ models in deriving logical inferences from dialogues. Despite advancements, persistent challenges in 12553 specific error types remain prevalent within LLMs. Both FT-Summ and LLMs exhibit a similar frequency of referential errors across all error categories. Notably, Alpaca-generated summaries show a higher prevalence of referential errors compared to BART-generated ones, elucidating the elevated error rate in Alpaca summaries. Further examination reveals two distinct types of referential errors: coreference and misattributions. Coreference errors arise from the inability to accurately establish the reference of a pronoun or noun within a sentence, whereas misattributions involve erroneously attributing information or statements to the wrong source. Notably, in BART summaries, referential errors mainly consist of coreference errors (95%), with misattributions constituting only 5%. Conversely, referential errors in LLMs are characterized by a higher frequency of misattributions (58%) compared to coreference errors (42%). 
3 Automatic Error Detection Prior research has demonstrated a shift in error category distributions when summarizing with models of different generations, resulting in varied trends in the performance of automatic error detectors (Tang et al., 2022). This section aims to address the following key questions: a) Do factuality metrics perform similarly on summaries generated by Large Language Models (LLMs) compared to older models (in our study, FTSumm)? Specifically, we investigate whether detecting factual errors in LLM-generated summaries poses greater challenges. b) Which error types across various model categories can factuality metrics identify, and what are the associated failure modes? Drawing upon our error taxonomy, we analyze the ability of metrics to detect different error categories, with a specific focus on their performance in identifying circumstantial inferences, an aspect not previously evaluated. c) Furthermore, we introduce a novel approach aimed at enhancing the detection of different error categories at the span-level which we introduce in section 3.1.2. 3.1 Metrics To include a wider range of metrics, we assess metric performance in the following two specific contexts: binary classification (label the entire summary as factual or not) and span detection (identify FT-Summ LLM Model Type 0.0 0.2 0.4 0.6 0.8 Balanced Accuracy Inconsistency Detection (Binary) QA NLI Prompt Prompt-Span Prompt-SpanMoE Figure 4: Automatic error detectors exhibit varying performance when applied to FT-Summ versus LLM. While QA/NLI metrics indicate a slight improvement, prompt-based metrics are better in detecting inconsistencies generated by the FT-Summ model in comparison to LLMs. the nonfactual span). 3.1.1 Binary Classification Binary classification metrics assess whether a summary is faithful or not by providing a single overarching score relative to the source. We incorporate four metrics into our evaluation framework: two question-answering-based metrics, namely QAFactEval (Fabbri et al., 2021) and QuestEval (Scialom et al., 2021), alongside two natural language inference (NLI) based metrics, SummaC-ZS and SummaC-Conv (Laban et al., 2022), which serve as our baseline measures. Both question-answering (QA) and natural language inference (NLI) metrics provide continuous scores for summarization. To translate these scores into binary factuality labels, we establish thresholds using a subset of 10% of the evaluation data. Scores exceeding the designated threshold are classified as nonfactual. Distinct thresholds are determined for each metric and model type across all datasets. Further elaboration on this process can be found in Appendix C. Recent studies have also investigated the effectiveness of integrating ChatGPT prompts in error detection, demonstrating promising results compared to conventional metrics. Consequently, we include the established prompt ChatGPT-Direct Assessment (ChatGPT-DA) (Luo et al., 2023) and evaluate its performance across both zero-shot and few-shot scenarios. The prompt is shown in Appendix D.1. 
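The thresholding step described above can be written down compactly. In the sketch below (Python), the choice of balanced accuracy as the selection criterion and the "higher score means more factual" orientation are assumptions made for illustration; the text only specifies that a separate threshold is tuned on a 10% subset for each metric, model type, and dataset.

```python
import numpy as np

def tune_threshold(dev_scores, dev_labels, higher_means_factual=True):
    """Pick a decision threshold for a continuous faithfulness score on a
    held-out dev split (dev_labels: True = factual summary)."""
    scores = np.asarray(dev_scores, dtype=float)
    labels = np.asarray(dev_labels, dtype=bool)
    best_t, best_bacc = None, -1.0
    for t in np.unique(scores):
        pred_factual = (scores >= t) if higher_means_factual else (scores < t)
        sens = (pred_factual & labels).sum() / max(labels.sum(), 1)
        spec = (~pred_factual & ~labels).sum() / max((~labels).sum(), 1)
        bacc = 0.5 * (sens + spec)
        if bacc > best_bacc:
            best_t, best_bacc = t, bacc
    return best_t

def predict_factual(score, threshold, higher_means_factual=True):
    """Apply the tuned threshold to a new summary's score."""
    return score >= threshold if higher_means_factual else score < threshold
```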
Figure 5: Inconsistency binary classification (balanced accuracy) per error category for the QA, NLI, Prompt, Prompt-Span, and Prompt-SpanMoE metrics.

Figure 6: Span-based F1 scores per error category for QAFactSpan, ChatGPT-Span (ZS and FS), and ChatGPT-SpanMoE (ZS and FS).

3.1.2 Span Detection

Baseline

Span detection involves the meticulous identification of nonfactual or inconsistent spans within the summary when compared to the source document. As a standard baseline, we integrate QAFactEval and designate all spans labeled "unanswerable" by the metric as inconsistent.

Our Approach

In addition to the aforementioned baseline approach, we introduce two new prompt-based methodologies. These approaches are structured around two distinct subtasks.

a) Identification: This prompt focuses solely on extracting inconsistent spans based on provided definitions.

b) Verification: In this phase, sentences containing these nonfactual spans are compared with the source text using a prompt that asks GPT to provide a consistency rating from 1 to 5 (detailed in Appendix D.2.2). Spans are classified as nonfactual only if they receive a rating below 5 during verification.

Considering that summaries generated by LLMs tend to be more abstract and may involve inference, we aim to avoid extracting spans as nonfactual that rely solely on common sense, even if not explicitly supported by evidence. Our objective is to identify spans where information is either entirely fabricated or ambiguous based on the content.

For our first approach, we use a generic prompt for identification (Appendix D.2) followed by verification and call this approach ChatGPT-Span. To enhance this strategy, we introduce a "mixture of experts" concept for span identification (step a). This method involves using a distinct prompt for each error type outlined in our taxonomy (Section 2.3). Each error type has a unique prompt (see Appendix D.3) given to GPT, allowing it to target that error type. After the experts identify all error types, spans undergo verification (step b) using the same procedure as before. This approach is called ChatGPT-MoE.

3.2 Evaluation Setup

Metrics

We use balanced accuracy as a metric for binary classification. It calculates the arithmetic mean of sensitivity and specificity, giving equal importance to minority and majority classes, which makes it well suited to imbalanced data. We use F-1 scores to compare predicted spans with annotated ones. For binary classification, we also include span-level automated metrics: we predict a summary as consistent if no inconsistent spans are predicted, and inconsistent if at least one is predicted. In prompt-based few-shot setups, we use a four-shot approach, with two shots indicating inconsistency and two indicating consistency.

Datasets and Models

To compare metrics across benchmarks, we utilize all annotated FT-Summ models from Section 2.2 to compute balanced accuracy. It's important to note that the FacEval benchmark doesn't involve spans but focuses solely on error types.
Therefore, our evaluation of span-level F1 scores on this benchmark is restricted to our LLM span annotations. For assessing performance across error categories, we exclusively use summaries generated by BART, referred to as FT-Summ. Using our taxonomy, we classify annotated spans from previous benchmarks within BART summaries, enabling us to evaluate performance metrics on a per-category basis. We also find that approximately 1% of our dataset contains nonsensical errors, including grammatical and prompt inaccuracies. Since these errors mainly affect coherence rather than consistency, we exclude this subset from our analysis.

Table 2: F1 scores for fine-grained error detectors on both FT-Summ and LLM models. The ChatGPT-MoE prompt metric exhibits superior performance, particularly in detecting circumstantial inference errors, which leads to superior performance on LLM summaries compared to FT-Summ summaries. ZS and FS indicate Zero-Shot and Few-Shot settings, respectively.

                       FT-Summ                      LLM
                       F1     Precision  Recall     F1     Precision  Recall
QAFactEval             10.16  7.55       15.52      8.54   7.49       9.91
Prompt-based (Our Approaches)
ChatGPT-Span (ZS)      23.55  24.79      22.44      30.59  33.47      28.17
ChatGPT-Span (FS)      24.31  27.54      23.59      30.51  33.37      28.10
ChatGPT-MoE (ZS)       28.04  27.62      28.46      31.29  33.93      29.04
ChatGPT-MoE (FS)       29.04  27.90      30.27      33.22  35.60      31.14

Table 3: Binary accuracy scores comparing factual label predictions of different metrics against human annotations. ChatGPT-Span and ChatGPT-SpanMoE outperform ChatGPT-DA and standard metrics, especially on LLM-generated summaries. ZS and FS indicate Zero-Shot and Few-Shot settings, respectively.

Metric                   FT-Summ   LLM
QuestEval                47.54     49.47
QAFactEval               45.00     39.84
SummaC-ZS                43.29     49.70
SummaC-Conv              51.18     46.92
ChatGPT-DA (ZS)          73.00     60.34
ChatGPT-DA (FS)          72.06     61.61
Our Approaches
ChatGPT-Span (ZS)        73.48     63.89
ChatGPT-Span (FS)        72.18     64.84
ChatGPT-SpanMoE (ZS)     73.77     67.96
ChatGPT-SpanMoE (FS)     75.61     70.27

3.3 Results

Binary Classification. We start by discussing binary inconsistency detection results. Table 3 presents balanced accuracy scores for FT-Summ and LLM models. Interestingly, prompt-based approaches, including direct assessment and span-based methods, are less effective at identifying LLM errors than FT-Summ errors (Figure 4). Overall, prompt-based methods outperform standard QA and NLI metrics for both model types, though the improvement is less substantial for LLM models than for FT-Summ models. We also show per-category performance in Figure 5.

Span Detection. Table 2 shows span-level F1 scores, comparing predicted spans with annotated ones. The results reveal that QAFactEval struggles to detect inconsistent spans. In contrast, ChatGPT-Span demonstrates notably superior performance in both Zero-Shot (ZS) and Few-Shot (FS) scenarios. With Few-Shot, it exhibits a 1-point increase in F1 for FT-Summ models, primarily from improved precision. Moreover, ChatGPT-MoE yields a further improvement of nearly 5 points for FT-Summ and nearly 3 points for LLM summaries. Specifically, for FT-Summ the largest gain is in span recall, while for LLM models the MoE prompt improves both precision and recall. When examining span-based metrics per error category, we observe that ChatGPT-MoE notably enhances the detection of circumstantial inference errors, as depicted in Figure 6.
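Returning to the two-stage prompt pipeline of Section 3.1.2, the sketch below chains a span-identification prompt with the 1-to-5 verification prompt. The `call_llm` helper, the shortened prompt strings, and the rating parser are simplified placeholders standing in for the exact prompts of Appendix D, not the implementation used in this work.

```python
import re

def call_llm(prompt: str) -> str:
    """Placeholder for a chat-completion call to GPT; replace with a real client."""
    raise NotImplementedError

IDENTIFY_PROMPT = (
    "Identify and list spans in the summary which are not supported by evidence "
    "from the content; if there are no unsupported spans, respond with \"None\"\n"
    "Content: {dialogue}\nSummary: {summary}\nAnswer"
)

VERIFY_PROMPT = (
    "Content: {dialogue}\n"
    "Assess the extent to which the specified span in the following sentence is "
    "supported by evidence from the content, using a scale of 1 to 5, where 1 "
    "indicates no supporting evidence and 5 indicates full support.\n"
    "Span: {span}\nSentence: {sentence}\nAnswer:"
)

def detect_inconsistent_spans(dialogue: str, summary: str) -> list[str]:
    # Step a (identification): ask for candidate unsupported spans.
    raw = call_llm(IDENTIFY_PROMPT.format(dialogue=dialogue, summary=summary))
    candidates = [s.strip() for s in raw.splitlines()
                  if s.strip() and s.strip().lower() != "none"]

    # Step b (verification): keep a span only if its support rating is below 5.
    confirmed = []
    for span in candidates:
        # Naive lookup of the summary sentence containing the span (sketch only).
        sentence = next((s for s in summary.split(".") if span in s), summary)
        answer = call_llm(VERIFY_PROMPT.format(dialogue=dialogue, span=span, sentence=sentence))
        match = re.search(r"[1-5]", answer)
        if match and int(match.group()) < 5:
            confirmed.append(span)
    return confirmed

# Summary-level label used for binary evaluation:
# a summary counts as consistent iff no span survives verification.
# is_consistent = len(detect_inconsistent_spans(dialogue, summary)) == 0
```

The ChatGPT-MoE variant would simply run the identification step once per error-type prompt (Appendix D.3) and pool the candidate spans before the shared verification step.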
4 Conclusion

In summary, our work is the first to comprehensively assess Large Language Model (LLM) performance in dialogue summarization, revealing considerable inconsistencies that underscore the ongoing challenges in this area. Our findings emphasize the prevalence of circumstantial inferences in summaries generated by GPT-4 and Alpaca-13b, indicating LLMs' proficiency in language understanding but also their tendency to introduce conceptual inferences. Moreover, we demonstrate that existing metrics struggle to detect these nuanced errors effectively. Consequently, we advocate for evaluating automatic metrics on benchmarks built with newer models, to better capture their capabilities and limitations given the evolving error distributions and types of newer LLMs compared to FT-Summ models. Furthermore, our two prompt-based methods show promising progress in identifying circumstantial inference errors, although further research is required to improve performance.

5 Limitations and Ethics

This study has limitations that should be noted. Firstly, the annotation process is resource-intensive and time-consuming. Consequently, we only benchmarked and annotated two Large Language Models (LLMs), which may not fully represent the behavior of all LLMs. Additionally, ethical considerations arise regarding the use of GPT-4 for prompt-based metrics. Being closed-source and expensive, its accessibility might be restricted, possibly widening the gap in research resources and impeding the reproducibility of our methodology.

References

Meng Cao, Yue Dong, and Jackie Cheung. 2022. Hallucinated but factual! Inspecting the factuality of hallucinations in abstractive summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3340–3354, Dublin, Ireland. Association for Computational Linguistics.

Jiaao Chen and Diyi Yang. 2020. Multi-view sequence-to-sequence models with conversational structure for abstractive dialogue summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4106–4118, Online. Association for Computational Linguistics.

Yulong Chen, Yang Liu, Liang Chen, and Yue Zhang. 2021. DialogSum: A real-life scenario dialogue summarization dataset. arXiv preprint arXiv:2105.06762.

Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. Advances in Neural Information Processing Systems, 32.

Alexander R Fabbri, Chien-Sheng Wu, Wenhao Liu, and Caiming Xiong. 2021. QAFactEval: Improved QA-based factual consistency evaluation for summarization. arXiv preprint arXiv:2112.08542.

Mingqi Gao, Xiaojun Wan, Jia Su, Zhefeng Wang, and Baoxing Huai. 2023. Reference matters: Benchmarking factual error correction for dialogue summarization with fine-grained evaluation framework. arXiv preprint arXiv:2306.05119.

Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. 2019. SAMSum corpus: A human-annotated dialogue dataset for abstractive summarization. arXiv preprint arXiv:1911.12237.

Tanya Goyal and Greg Durrett. 2021. Annotating and modeling fine-grained factuality in summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1449–1462, Online.
Association for Computational Linguistics. Tanya Goyal, Junyi Jessy Li, and Greg Durrett. 2022. News summarization and evaluation in the era of gpt-3. arXiv preprint arXiv:2209.12356. Herbert P Grice. 1975. Logic and conversation. In Speech acts, pages 41–58. Brill. Dandan Huang, Leyang Cui, Sen Yang, Guangsheng Bao, Kun Wang, Jun Xie, and Yue Zhang. 2020. What have we achieved on text summarization? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 446–469, Online. Association for Computational Linguistics. Philippe Laban, Tobias Schnabel, Paul N Bennett, and Marti A Hearst. 2022. Summac: Re-visiting nlibased models for inconsistency detection in summarization. Transactions of the Association for Computational Linguistics, 10:163–177. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880, Online. Association for Computational Linguistics. 12557 Junpeng Liu, Yanyan Zou, Hainan Zhang, Hongshen Chen, Zhuoye Ding, Caixia Yuan, and Xiaojie Wang. 2021a. Topic-aware contrastive learning for abstractive dialogue summarization. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 1229–1243, Punta Cana, Dominican Republic. Association for Computational Linguistics. Zhengyuan Liu, Ke Shi, and Nancy Chen. 2021b. Coreference-aware dialogue summarization. In Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 509–519, Singapore and Online. Association for Computational Linguistics. Zheheng Luo, Qianqian Xie, and Sophia Ananiadou. 2023. Chatgpt as a factual inconsistency evaluator for abstractive text summarization. arXiv preprint arXiv:2303.15621. Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906–1919, Online. Association for Computational Linguistics. 
OpenAI, :, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, Red Avila, Igor Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Mo Bavarian, Jeff Belgum, Irwan Bello, Jake Berdine, Gabriel Bernadett-Shapiro, Christopher Berner, Lenny Bogdonoff, Oleg Boiko, Madelaine Boyd, Anna-Luisa Brakman, Greg Brockman, Tim Brooks, Miles Brundage, Kevin Button, Trevor Cai, Rosie Campbell, Andrew Cann, Brittany Carey, Chelsea Carlson, Rory Carmichael, Brooke Chan, Che Chang, Fotis Chantzis, Derek Chen, Sully Chen, Ruby Chen, Jason Chen, Mark Chen, Ben Chess, Chester Cho, Casey Chu, Hyung Won Chung, Dave Cummings, Jeremiah Currier, Yunxing Dai, Cory Decareaux, Thomas Degry, Noah Deutsch, Damien Deville, Arka Dhar, David Dohan, Steve Dowling, Sheila Dunning, Adrien Ecoffet, Atty Eleti, Tyna Eloundou, David Farhi, Liam Fedus, Niko Felix, Simón Posada Fishman, Juston Forte, Isabella Fulford, Leo Gao, Elie Georges, Christian Gibson, Vik Goel, Tarun Gogineni, Gabriel Goh, Rapha Gontijo-Lopes, Jonathan Gordon, Morgan Grafstein, Scott Gray, Ryan Greene, Joshua Gross, Shixiang Shane Gu, Yufei Guo, Chris Hallacy, Jesse Han, Jeff Harris, Yuchen He, Mike Heaton, Johannes Heidecke, Chris Hesse, Alan Hickey, Wade Hickey, Peter Hoeschele, Brandon Houghton, Kenny Hsu, Shengli Hu, Xin Hu, Joost Huizinga, Shantanu Jain, Shawn Jain, Joanne Jang, Angela Jiang, Roger Jiang, Haozhun Jin, Denny Jin, Shino Jomoto, Billie Jonn, Heewoo Jun, Tomer Kaftan, Łukasz Kaiser, Ali Kamali, Ingmar Kanitscheider, Nitish Shirish Keskar, Tabarak Khan, Logan Kilpatrick, Jong Wook Kim, Christina Kim, Yongjik Kim, Hendrik Kirchner, Jamie Kiros, Matt Knight, Daniel Kokotajlo, Łukasz Kondraciuk, Andrew Kondrich, Aris Konstantinidis, Kyle Kosic, Gretchen Krueger, Vishal Kuo, Michael Lampe, Ikai Lan, Teddy Lee, Jan Leike, Jade Leung, Daniel Levy, Chak Ming Li, Rachel Lim, Molly Lin, Stephanie Lin, Mateusz Litwin, Theresa Lopez, Ryan Lowe, Patricia Lue, Anna Makanju, Kim Malfacini, Sam Manning, Todor Markov, Yaniv Markovski, Bianca Martin, Katie Mayer, Andrew Mayne, Bob McGrew, Scott Mayer McKinney, Christine McLeavey, Paul McMillan, Jake McNeil, David Medina, Aalok Mehta, Jacob Menick, Luke Metz, Andrey Mishchenko, Pamela Mishkin, Vinnie Monaco, Evan Morikawa, Daniel Mossing, Tong Mu, Mira Murati, Oleg Murk, David Mély, Ashvin Nair, Reiichiro Nakano, Rajeev Nayak, Arvind Neelakantan, Richard Ngo, Hyeonwoo Noh, Long Ouyang, Cullen O’Keefe, Jakub Pachocki, Alex Paino, Joe Palermo, Ashley Pantuliano, Giambattista Parascandolo, Joel Parish, Emy Parparita, Alex Passos, Mikhail Pavlov, Andrew Peng, Adam Perelman, Filipe de Avila Belbute Peres, Michael Petrov, Henrique Ponde de Oliveira Pinto, Michael, Pokorny, Michelle Pokrass, Vitchyr Pong, Tolly Powell, Alethea Power, Boris Power, Elizabeth Proehl, Raul Puri, Alec Radford, Jack Rae, Aditya Ramesh, Cameron Raymond, Francis Real, Kendra Rimbach, Carl Ross, Bob Rotsted, Henri Roussez, Nick Ryder, Mario Saltarelli, Ted Sanders, Shibani Santurkar, Girish Sastry, Heather Schmidt, David Schnurr, John Schulman, Daniel Selsam, Kyla Sheppard, Toki Sherbakov, Jessica Shieh, Sarah Shoker, Pranav Shyam, Szymon Sidor, Eric Sigler, Maddie Simens, Jordan Sitkin, Katarina Slama, Ian Sohl, Benjamin Sokolowsky, Yang Song, Natalie Staudacher, Felipe Petroski Such, Natalie Summers, Ilya Sutskever, Jie Tang, Nikolas Tezak, Madeleine Thompson, Phil Tillet, Amin Tootoonchian, Elizabeth Tseng, Preston 
Tuggle, Nick Turley, Jerry Tworek, Juan Felipe Cerón Uribe, Andrea Vallone, Arun Vijayvergiya, Chelsea Voss, Carroll Wainwright, Justin Jay Wang, Alvin Wang, Ben Wang, Jonathan Ward, Jason Wei, CJ Weinmann, Akila Welihinda, Peter Welinder, Jiayi Weng, Lilian Weng, Matt Wiethoff, Dave Willner, Clemens Winter, Samuel Wolrich, Hannah Wong, Lauren Workman, Sherwin Wu, Jeff Wu, Michael Wu, Kai Xiao, Tao Xu, Sarah Yoo, Kevin Yu, Qiming Yuan, Wojciech Zaremba, Rowan Zellers, Chong Zhang, Marvin Zhang, Shengjia Zhao, Tianhao Zheng, Juntang Zhuang, William Zhuk, and Barret Zoph. 2023. Gpt-4 technical report. Thomas Scialom, Paul-Alexis Dray, Patrick Gallinari, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano, and Alex Wang. 2021. Questeval: Summarization asks for fact-based evaluation. arXiv preprint arXiv:2103.12693. Liyan Tang, Tanya Goyal, Alexander R Fabbri, Philippe Laban, Jiacheng Xu, Semih Yavuz, Wojciech Kry´sci´nski, Justin F Rousseau, and Greg Durrett. 2022. Understanding factual errors in summarization: Errors, summarizers, datasets, error detectors. arXiv preprint arXiv:2205.12854. 12558 Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford alpaca: An instruction-following llama model. https:// github.com/tatsu-lab/stanford_alpaca. Bin Wang, Chen Zhang, Yan Zhang, Yiming Chen, and Haizhou Li. 2022. Analyzing and evaluating faithfulness in dialogue summarization. arXiv preprint arXiv:2210.11777. Chien-Sheng Wu, Linqing Liu, Wenhao Liu, Pontus Stenetorp, and Caiming Xiong. 2021. Controllable abstractive dialogue summarization with sketch supervision. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 5108–5122, Online. Association for Computational Linguistics. Xianjun Yang, Yan Li, Xinlu Zhang, Haifeng Chen, and Wei Cheng. 2023. Exploring the limits of chatgpt for query or aspect-based text summarization. arXiv preprint arXiv:2302.08081. Tianyi Zhang, Faisal Ladhak, Esin Durmus, Percy Liang, Kathleen McKeown, and Tatsunori B Hashimoto. 2023. Benchmarking large language models for news summarization. arXiv preprint arXiv:2301.13848. Rongxin Zhu, Jianzhong Qi, and Jey Han Lau. 2023. Annotating and detecting fine-grained factual errors for dialogue summarization. arXiv preprint arXiv:2305.16548. A Annotation A.1 Annotator Recruitment We hired annotators through UpWork. Candidates underwent a qualifying round and an interview where they had to explain marked errors. Ultimately, we selected two expert proofreaders who were paid $18USD and $22 USD per hour, respectively. A.2 Annotator Instruction The following were the instructions provided to annotators to mark spans as inconsistent. Identify minimal spans in the summary that: a) Misrepresent information from the source: If a span contradicts or distorts information with respect to the source, annotate the evidence sentences from the source that demonstrate this inconsistency and select the span as inconsistent. b) Introduce new information not supported by evidence in the source: If the summary includes new information that is neither common knowledge nor a logical inference but relies on external facts or deductions, mark these spans as inconsistent. In this case, evidence sentences may not be available for annotation. B Error Rate per Dataset In figure 7 we provide the inconsistency rates for all models across each dataset. GPT-4 exhibits the highest consistency across all datasets. 
However, Alpaca-13b shows similar performance to BART on the DialogSum dataset but is surpassed by BART on the SAMSum datasets.

Figure 7: Inconsistency rate of all models (GPT-4, Alpaca-13B, BART, MV-BART, UniLM, CODS, Coref-BART, CondigSum-BART) per dataset (RefMatters DialogSum, RefMatters SAMSum, FacEval SAMSum). We see that Alpaca-13B is competitive with BART with respect to consistency on the DialogSum dataset but is outperformed on the SAMSum datasets.

C Thresholding for Binary Classification

We use a subset of the evaluation data and apply thresholding to convert continuous scores into binary labels. This subset comprises approximately 150 source-summary pairs. The thresholds are determined individually for each metric, dataset, and model category, and are displayed in Figure 8.

Figure 8: Thresholds for each metric (QAFactEval, QuestEval, SummaC-ZS, SummaC-Conv) on each dataset (SAMSum, FacEval, DialogSum) and model category (FT-Summ, LLM), with the median value indicated.

D Prompt Details

D.1 ChatGPT-Direct Assessment

Decide if the Summary is consistent with the corresponding Content. Note that consistency means all information in the summary is supported by the Content. Answer "yes" for consistent and "no" for inconsistent:
Content: {{Dialogue}}
Summary: {{Summary}}
Answer

D.2 ChatGPT-Span

D.2.1 Identification

Identify and list spans in the summary which are not supported by evidence from the content; if there are no unsupported spans, respond with "None"
Content: {{Dialogue}}
Summary: {{Summary}}
Answer

D.2.2 Verification

Content: {{Dialogue}}
Assess the extent to which the specified span in the following sentence is supported by evidence from the content, using a scale of 1 to 5, where 1 indicates no supporting evidence and 5 indicates full support from the evidence provided within the content
Span: {{Span}}
Sentence: {{SummarySentence}}
Answer:

D.3 ChatGPT-SpanMoE

D.3.1 Identification

Circumstantial Inference
Error Definition: Circumstantial inference in summaries is inferred supplemental information, not explicitly stated in the content but derived from circumstantial evidence, often intentionally omitted in the content and assumed to be shared knowledge among participants in adherence to the principle of providing sufficient information without unnecessary details.
Task Definition: Extract spans from the summary that are circumstantial inferences. Ensure the spans are the minimal erroneous spans.
Content: {{Dialogue}}
Summary: {{Summary}}
Answer

Logical Error
Error Definition: Logical inference errors in summaries arise from drawing conclusions or making deductions that deviate from the logical flow of content, leading to inaccuracies or misunderstandings in the representation of information or ideas.
Task Definition: Extract spans from the summary that are logical errors. Ensure the spans are the minimal erroneous spans.
List each span in a new line; if there are no such spans respond with None
Content: {{Dialogue}}
Summary: {{Summary}}
Answer

World Knowledge
Error Definition: Factual extrapolations are real-world facts added in a summary, not explicitly mentioned in the original conversation.
Task Definition: Extract spans from the summary that introduce additional details, constituting general facts or world knowledge not explicitly stated in the original content. Ensure the spans are the minimal erroneous spans. List each span in a new line; if there are no such spans respond with None
Content: {{Dialogue}}
Summary: {{Summary}}
Answer

Referential Error
Error Definition: To identify referential errors, check for inconsistencies with respect to the content in linking pronouns, terms, or entities to their correct referents. Also look for instances of misattribution where statements or actions are inaccurately assigned to the wrong speaker or participant, resulting in content representation inaccuracies.
Task Definition: Extract spans from the summary that are referential errors. Ensure the spans are the minimal erroneous spans. List each span in a new line; if there are no such spans respond with None
Content: {{Dialogue}}
Summary: {{Summary}}
Answer

Figurative Error
Error Definition: Figurative misrepresentation occurs when non-literal information in the content is inaccurately portrayed or misunderstood as literal statements in the summary, distorting the intended meaning or message.
Task Definition: Extract spans from the summary that figuratively misrepresent information in the content. Ensure the spans are the minimal erroneous spans. List each span in a new line; if there are no such spans respond with None
Content: {{Dialogue}}
Summary: {{Summary}}
Answer

The verification step that follows the spans extracted from the above prompts is the same as displayed in D.2.2.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12562–12584
August 11-16, 2024 ©2024 Association for Computational Linguistics

LLM in a flash: Efficient Large Language Model Inference with Limited Memory

Keivan Alizadeh, Iman Mirzadeh*, Dmitry Belenko*, S. Karen Khatamifard, Minsik Cho, Carlo C Del Mundo, Mohammad Rastegari, Mehrdad Farajtabar
Apple†
* Major Contribution
† {kalizadehvahid, imirzadeh, d_belenko, skhatamifard, minsik, cdelmundo, mrastegari, farajtabar}@apple.com

Abstract

Large language models (LLMs) are central to modern natural language processing, delivering exceptional performance in various tasks. However, their substantial computational and memory requirements present challenges, especially for devices with limited DRAM capacity. This paper tackles the challenge of efficiently running LLMs that exceed the available DRAM capacity by storing the model parameters in flash memory, but bringing them on demand to DRAM. Our method involves constructing an inference cost model that takes into account the characteristics of flash memory, guiding us to optimize in two critical areas: reducing the volume of data transferred from flash and reading data in larger, more contiguous chunks. Within this hardware-informed framework, we introduce two principal techniques. First, "windowing" strategically reduces data transfer by reusing previously activated neurons, and second, "row-column bundling", tailored to the sequential data access strengths of flash memory, increases the size of data chunks read from flash memory. These methods collectively enable running models up to twice the size of the available DRAM, with up to 4x and 20x increases in inference speed compared to naive loading approaches on CPU and GPU, respectively. Our integration of sparsity awareness, context-adaptive loading, and a hardware-oriented design paves the way for effective inference of LLMs on devices with limited memory.

1 Introduction

In recent years, large language models (LLMs) have demonstrated strong performance across a wide range of natural language tasks (Brown et al., 2020; Chowdhery et al., 2022; Touvron et al., 2023a; Jiang et al., 2023; Gemini Team, 2023). However, the unprecedented capabilities of these models come with substantial computational and memory requirements for inference. LLMs can contain hundreds of billions or even trillions of parameters, which makes them challenging to load and run efficiently, especially on personal devices.

Figure 1: Average inference latency for a single token when only half of the model's memory is available, for Llama 2-7B (CPU), OPT-6.7B (CPU), and OPT-6.7B (GPU), broken down into compute, load from flash, and memory management, for the naive approach versus ours. Our method selectively loads parameters on demand for each token generation step. The latency represents the time required to repeatedly load parameters from flash memory, combined with the time needed for computations.

Currently, the standard approach is to load the entire model into DRAM (Dynamic Random Access Memory) for inference (Rajbhandari et al., 2021; Aminabadi et al., 2022). However, this severely limits the maximum model size that can be run. For example, a 7 billion parameter model requires over 14GB of memory just to load the parameters in half-precision floating point format, exceeding the capabilities of most personal devices such as smartphones.
While it is possible to employ techniques such as quantization to reduce the model size, this still does not address the main limitation of loading the entire model into DRAM.

To address this limitation, we propose to store the model parameters in flash memory, which is at least an order of magnitude larger than DRAM. Then, during inference, we directly load the required subset of parameters from flash memory, avoiding the need to fit the entire model in DRAM. To this end, our work makes several contributions:

• First, we study the hardware characteristics of storage systems (e.g., flash, DRAM). We show that hardware constraints such as capacity and bandwidth limitations impose significant considerations when designing efficient algorithms for serving LLMs from flash (Section 2).

• Motivated by our findings, we propose several techniques that can help with (i) reducing the required data transfer, (ii) increasing the transfer throughput, and (iii) managing loaded parameters efficiently in DRAM (Section 3).

• Finally, as partially demonstrated in Figure 1, we show that our proposed techniques for optimizing the cost model and selectively loading parameters on demand allow us to run models 2x larger than the device's DRAM capacity and speed up inference by up to 4x, 7x, and 20x compared to a naive implementation on CPU, Metal, and NVIDIA GPU backends, respectively (Section 4).

Figure 2: (a) Flash memory offers significantly higher capacity (~100 GB vs. ~10 GB of DRAM) but suffers from much lower bandwidth (~1 GB/s vs. ~100 GB/s) compared to DRAM and CPU/GPU caches and registers in a unified memory architecture. (b) The throughput for random reads in flash memory increases with the size of sequential chunks (4-64 KB) and the number of threads (2-32), approaching the sequential-read upper bound.

2 Flash Memory & LLM Inference

In this section, we explore the characteristics of memory storage systems (e.g., flash, DRAM) and their implications for large language model (LLM) inference. We aim to understand the challenges and hardware-specific considerations essential for algorithm design, particularly for optimizing inference when working with flash memory.

2.1 Bandwidth and Energy Constraints

While modern NAND flash memories offer high bandwidth and low latency, they fall well short of the performance levels of DRAM (Dynamic Random-Access Memory) in terms of both latency and throughput. Figure 2a illustrates these differences. A naive inference implementation that relies on NAND flash memory might necessitate reloading the entire model for each forward pass. This process is not only time-consuming, often taking seconds even for compressed models, but it also consumes more energy than transferring data from DRAM to the CPU or GPU's internal memory. Load times can be a problem even in the traditional DRAM-resident setup where weights are not reloaded partially: the initial, full load of the model still incurs a penalty, particularly in situations requiring rapid response times for the first token. Our approach, leveraging activation sparsity in LLMs, addresses these challenges by enabling selective reading of model weights, thereby reducing the response latency.

2.2 Read Throughput

Flash memory systems perform optimally with large sequential reads. For instance, benchmarks on an Apple MacBook M1 Max with 1TB of flash memory demonstrate speeds exceeding 6 GiB/s for a 1GiB linear read of an uncached file. However, this high bandwidth cannot be achieved for smaller, random reads due to the inherent multi-phase nature of these reads, encompassing the operating system, drivers, interrupt handling, and the flash controller, among others. Each phase introduces latency, disproportionately affecting smaller reads.

To circumvent these limitations, we advocate two primary strategies, which can be employed jointly. The first involves reading larger chunks of data. For smaller blocks, a substantial part of the overall read time is spent waiting for the data transfer to begin; this is often referred to as latency to first byte. This latency reduces the overall throughput of each read operation considerably, because the measured throughput has to account not just for the speed of transfer once it begins, but also for the latency before it begins, which penalizes small reads. This means that if we coalesce the reads for rows and columns of the FFN matrices, we pay the latency cost only once for any given row/column pair in both matrices, and higher throughput can be realized. This principle is depicted in Figure 2b. A perhaps counterintuitive yet interesting observation is that in some scenarios it is worthwhile to read more than needed (but in larger chunks) and then discard, rather than reading strictly the necessary parts in smaller chunks. The second strategy leverages parallelized reads, utilizing the inherent parallelism within storage stacks and flash controllers. Our results indicate that throughputs appropriate for sparse LLM inference are achievable on modern hardware using 32KiB or larger random reads across multiple threads. Motivated by the challenges described in this section, in Section 3 we propose methods to optimize data transfer volume and enhance read throughput to significantly improve inference speeds.
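The chunk-size and thread-count effects summarized in Figure 2b can be reproduced with a small benchmark along the following lines. The file path, read sizes, and thread counts are placeholders, and real measurements would additionally need to bypass or flush the operating system page cache, as done for the numbers reported here.

```python
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

def random_read_throughput(path, chunk_bytes, num_threads, reads_per_thread=256):
    """Measure MB/s for random reads of a given chunk size across threads."""
    file_size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)

    def worker(_):
        read = 0
        for _ in range(reads_per_thread):
            offset = random.randrange(0, max(1, file_size - chunk_bytes))
            read += len(os.pread(fd, chunk_bytes, offset))  # positional read, thread-safe
        return read

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        total = sum(pool.map(worker, range(num_threads)))
    elapsed = time.perf_counter() - start
    os.close(fd)
    return total / elapsed / 1e6

# Sweep mirroring Figure 2b: chunk sizes of 4-64 KiB across 2-32 threads.
# for kb in (4, 8, 16, 32, 64):
#     for threads in (2, 4, 8, 16, 32):
#         print(kb, threads, random_read_throughput("weights.bin", kb * 1024, threads))
```

Because `os.pread` releases the interpreter lock while blocked on I/O, the thread pool is enough to keep multiple requests in flight and expose the storage stack's internal parallelism.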
3 Load From Flash

This section addresses the challenge of conducting inference on devices where the available DRAM is substantially smaller than the size of the model, which necessitates storing the full model weights in flash memory. Our primary metric for evaluating various flash loading strategies is latency, dissected into three distinct components: the I/O cost of loading from flash, the overhead of managing memory with newly loaded data, and the compute cost of the inference operations. Our proposed solutions for reducing latency under memory constraints fall into three areas:

1. Reducing Data Load: decreasing the latency associated with flash I/O operations by loading less data. (Note that by data we usually refer to the weights of the neural network; however, the techniques we have developed can be easily generalized to other data transferred and used for LLM inference, such as activations or the KV cache, as suggested by Sheng et al. (2023).)

2. Optimizing Data Chunk Size: enhancing flash throughput by increasing the size of the data chunks loaded, thereby mitigating latency.

3. Efficient Management of Loaded Data: streamlining the management of data once it is loaded into memory to minimize overhead.

It is important to note that our focus is not on optimizing the compute, as it is orthogonal to the core concerns of our work. Instead, we concentrate on optimizing flash memory interactions and memory management to achieve efficient inference on memory-constrained devices.
We will elaborate on the implementation details of these strategies in the experimental setup section.

3.1 Reducing Data Transfer

Our method leverages the inherent activation sparsity found in Feed-Forward Network (FFN) layers, as documented in preceding research. The OPT 6.7B model, for instance, exhibits a notable 97% sparsity within its FFN layers. Similarly, the Falcon 7B model has been adapted through fine-tuning that swaps its activation function to ReLU, resulting in 95% sparsity while remaining similar in accuracy (Mirzadeh et al., 2023). Replacing the activations of the Llama 2 model (Touvron et al., 2023b) with FATReLU and fine-tuning can achieve 90% sparsity (Song et al., 2024). In light of this, our approach involves the iterative transfer of only the essential, dynamic subset of the weights from flash memory to DRAM for processing during inference.

Selective Persistence Strategy. We opt to retain the embeddings and the matrices within the attention mechanism of the transformer constantly in DRAM. For the Feed-Forward Network (FFN) portions, only the non-sparse segments are dynamically loaded into DRAM as needed. Keeping the attention weights, which constitute approximately one-third of the model's size, in memory allows for more efficient computation and quicker access, thereby enhancing inference performance without the need for full model loading.

Anticipating ReLU Sparsity. The ReLU activation function naturally induces over 90% sparsity in the FFN's intermediate outputs, which reduces the memory footprint for the subsequent layers that consume these sparse outputs. However, the preceding layer, namely the up projection, must be fully present in memory. To avoid loading the entire up projection matrix, we follow Liu et al. (2023b) and employ a low-rank predictor to identify the elements zeroed by ReLU (see Figure 3b). We used a loss balanced over the negative and positive samples of each layer. In contrast to their work, our predictor needs only the output of the current layer's attention module and not the previous layer's FFN module. We have observed that postponing the prediction to the current layer is not only sufficient for hardware-aware weight-loading algorithm design but also leads to more accurate outcomes due to the deferred inputs. We used 10,000 samples from the C4 training dataset and trained each predictor for 2 epochs, which took 4 hours on an A100 GPU. We thereby only load the elements indicated by the predictor, as shown in Figure 3a. Furthermore, as demonstrated in Table 1, using predictors does not adversely affect the model's performance on zero-shot tasks. For more details please refer to Appendix B.

Figure 3: (a) Preactivations of tokens in one sequence in OPT 6.7B. The blue graph shows the preactivation of elements that the predictor detected as positive, while the green graph is for the up projection. As can be seen, most of the false positives are close to 0 and false negatives constitute a small portion of the elements. (b) A small low-rank predictor (a rank-R projection followed by a fully connected layer and a sigmoid thresholded at 0.5, with N = d_model and M = d_ffn) finds out which intermediate neurons are going to be activated.

Table 1: The low-rank predictor has a marginal impact on zero-shot metrics, as the predictor of each layer accurately identifies sparsity.

Zero-Shot Task    OPT 6.7B    with Predictor
Arc Easy          66.1        66.2
Arc Challenge     30.6        30.6
HellaSwag         50.3        49.8
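As a concrete illustration of the kind of low-rank predictor described above, the sketch below pairs a rank-r projection with a per-neuron sigmoid head to guess which FFN intermediate neurons will survive the ReLU. The 0.5 cutoff, the use of the current layer's attention output as input, and the class-balanced loss follow the description in the text, while the module itself (including the default rank) is a simplified stand-in rather than the exact architecture used here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparsityPredictor(nn.Module):
    """Low-rank head that guesses which FFN intermediate neurons will be active."""

    def __init__(self, d_model: int, d_ffn: int, rank: int = 256):
        super().__init__()
        self.down = nn.Linear(d_model, rank, bias=False)  # rank-r projection
        self.up = nn.Linear(rank, d_ffn)                   # one logit per FFN neuron

    def forward(self, attn_output: torch.Tensor) -> torch.Tensor:
        # Input is the current layer's attention output, not the previous FFN output.
        return self.up(self.down(attn_output))             # logits

    @torch.no_grad()
    def predict_active(self, attn_output: torch.Tensor) -> torch.Tensor:
        # Boolean mask of predicted-active neurons (sigmoid > 0.5).
        return torch.sigmoid(self(attn_output)) > 0.5

def balanced_bce(logits: torch.Tensor, active: torch.Tensor) -> torch.Tensor:
    """BCE with a positive-class weight to offset the ~95-97% sparsity of targets."""
    pos = active.float().sum().clamp(min=1.0)
    neg = float(active.numel()) - pos
    return F.binary_cross_entropy_with_logits(
        logits, active.float(), pos_weight=(neg / pos)
    )
```

A mask like this is computed once per token per layer, and only the flagged neurons' weights then need to be fetched from flash or looked up in the DRAM cache.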
The Sliding Window Technique. In our study, we define an active neuron as one that yields a positive output in our low-rank predictor model. Our approach manages neuron data by employing a sliding window: we maintain a DRAM cache of only those weight rows that were predicted to be required by the recent subset of input tokens. The key aspect of this technique is the incremental loading of the neuron data that differs between the current input token and its immediate predecessors. This strategy allows for efficient memory utilization, as it frees up the memory previously allocated to cached weights required by tokens that are no longer within the sliding window (as depicted in Figure 4b). From a mathematical standpoint, let sagg(k) denote the cumulative use of neuron data across a sequence of k input tokens. Our memory architecture is designed to store an average of sagg(k) in DRAM. As we process each new token, the incremental neuron data, represented as sagg(k + 1) − sagg(k), is loaded from flash memory into DRAM. This practice is grounded in the observed trend of decreasing aggregated neuron usage over time. Consequently, larger values of k result in a smaller volume of data being loaded for each new token (refer to Figure 4a), while smaller values of k conserve the DRAM used to store the cached weights. In determining the size of the sliding window, the aim is to maximize it within the constraints imposed by the available memory capacity.

Figure 4: (a) Aggregated neuron usage of the tenth layer of Falcon 7B: the slope of the aggregated usage sagg(k) decreases with the window size k, and so does the incremental transfer sagg(k + 1) − sagg(k); other layers exhibit the same pattern. (b) Rather than deleting neurons that were brought to DRAM, we keep the active neurons of the past k tokens (we use k = 5): when the new token "Was" is being processed, only a small fraction of new weights needs to be loaded.
3.2 Increasing Transfer Throughput

To increase data throughput from flash memory, it is crucial to read data in larger chunks, preferably sized as multiples of the block size of the underlying storage pool. In this section, we detail the strategy we employ to increase the chunk sizes for more efficient flash memory reads.

Bundling Columns and Rows. Note that in the FFN layer, the usage of the ith column of the up projection and the ith row of the down projection coincides with the activation of the ith intermediate neuron. Consequently, by storing these corresponding columns and rows together in flash memory, we can consolidate the data into larger chunks for reading. Refer to Figure 5 for an illustration of this bundling approach. If each weight element of the network is stored in num_bytes, such bundling doubles the chunk size from d_model × num_bytes to 2 d_model × num_bytes, as shown in Figure 5. Our analysis and experiments show this increases the throughput of the model.

Figure 5: By bundling columns of the up projection and rows of the down projection, we can load 2x chunks from flash instead of reading columns or rows separately.

Bundling Based on Co-activation. We hypothesized that neurons might exhibit highly correlated activity patterns, enabling further bundling. By analyzing activations on the C4 validation dataset, we found a power-law distribution of co-activations. However, bundling each neuron with its most highly co-activated neuron (its closest friend) led to multiple loadings of highly active neurons, counteracting our goal. This result suggests that very active neurons are the closest friends of many others. We present this negative result to inspire future research on effective neuron bundling for efficient inference. Please refer to Appendix D for details.

3.3 Optimized Data Management in DRAM

Although data transfer within DRAM is more efficient than accessing flash memory, it still incurs a non-negligible cost. When introducing data for new neurons, reallocating the matrix and appending new matrices can lead to significant overhead due to the need to rewrite existing neuron data in DRAM. This is particularly costly when a substantial portion (approximately 25%) of the Feed-Forward Networks (FFNs) in DRAM needs to be rewritten. To address this issue, we adopt an alternative memory management strategy. This involves preallocating all necessary memory and establishing a corresponding data structure for efficient management. The data structure comprises elements such as pointer, matrix, bias, num_used, and last_k_active, shown in Figure 6. Each row in the matrix represents the concatenated row of the 'up project' and column of the 'down project' of a neuron. The pointer vector indicates the original neuron index corresponding to each row in the matrix. The bias for the 'up project' in the original model is represented in the corresponding bias element. The num_used parameter tracks the number of rows currently utilized in the matrix, initially set to zero. The matrix for the ith layer is preallocated with a size of Reqi × 2 d_model, where Reqi denotes the maximum number of neurons required for the specified window size on a subset of the C4 validation set. By allocating enough memory for each layer in advance, we minimize the need for reallocation. Finally, the last_k_active component identifies the neurons from the original model that were most recently activated by the last k tokens.

Figure 6: Memory management. First we replace elements to be deleted by the last elements to maintain a consecutive occupation of memory; then the new weights are stacked at the end. This reduces unnecessary data movement.

The following operations can then be performed, as depicted in Figure 6 and sketched in code below:

1. Deleting Neurons: Neurons that are no longer required are identified efficiently in linear time, using the associated last_k_active value and the current prediction. The matrix rows, pointers, and scalars of these redundant neurons are overwritten with the most recent elements, and their count is subtracted from num_used. For O(c) neurons to be deleted, a memory rewrite of the order O(c × d_model) is required.

2. Bringing in New Neurons: The required weights are retrieved from flash memory. The corresponding pointers and scalars are read from DRAM, and these rows are inserted into the matrix, extending from num_used to num_used + num_new. This approach eliminates the need to reallocate memory in DRAM and copy existing data, reducing inference latency.

3. Inference Process: For the inference operation, the first half of the matrix, matrix[:num_used, :d_model], is used as the 'up project', and the transposed second half, matrix[:num_used, d_model:].transpose(), serves as the 'down project'. This configuration is possible because the order of neurons in the intermediate output of the FFN does not alter the final output, allowing for a streamlined inference process.

These steps collectively ensure efficient memory management during inference, optimizing the neural network's performance and resource utilization.
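The sketch below illustrates how such a preallocated neuron cache could look, with swap-with-last deletion and append-style insertion over rows that hold the bundled up-projection row and down-projection column of each neuron. The class and method names, and the way rows arrive from flash, are illustrative assumptions rather than the actual implementation.

```python
import torch

class NeuronCache:
    """Preallocated DRAM cache of FFN neurons for one transformer layer.

    Row i holds the bundled weights of one neuron: the up-projection row
    concatenated with the down-projection column (length 2 * d_model).
    """

    def __init__(self, max_neurons: int, d_model: int):
        self.d_model = d_model
        self.matrix = torch.empty(max_neurons, 2 * d_model)   # preallocated once
        self.bias = torch.empty(max_neurons)                   # up-projection bias
        self.pointer = torch.full((max_neurons,), -1, dtype=torch.long)
        self.num_used = 0

    def delete(self, stale_rows: list[int]) -> None:
        # Swap-with-last deletion keeps occupied rows contiguous without reallocating.
        for row in sorted(stale_rows, reverse=True):
            last = self.num_used - 1
            if row != last:
                self.matrix[row] = self.matrix[last]
                self.bias[row] = self.bias[last]
                self.pointer[row] = self.pointer[last]
            self.num_used -= 1

    def insert(self, neuron_ids: list[int], bundled_rows: torch.Tensor,
               biases: torch.Tensor) -> None:
        # New neurons, read from flash as bundled 2*d_model chunks, are appended.
        end = self.num_used + len(neuron_ids)
        self.matrix[self.num_used:end] = bundled_rows
        self.bias[self.num_used:end] = biases
        self.pointer[self.num_used:end] = torch.tensor(neuron_ids)
        self.num_used = end

    def ffn(self, x: torch.Tensor) -> torch.Tensor:
        # First half of each row acts as the up projection, second half as the down projection.
        up = self.matrix[:self.num_used, :self.d_model]
        down = self.matrix[:self.num_used, self.d_model:]
        hidden = torch.relu(x @ up.t() + self.bias[:self.num_used])
        return hidden @ down
```

In the setting described above, delete would be driven by last_k_active (neurons not used by any of the last k tokens), and insert would receive rows read from flash as bundled chunks, so the intermediate-neuron order in the cache can differ from the original model without changing the FFN output.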
4 Experiments and Results

We start this section by briefly discussing our experimental setup and implementation details. Next, we show that the techniques introduced in Section 3 can improve inference latency significantly across different models and runtime platforms. We defer some details to the appendices, such as the performance of our trained low-rank predictor (Appendix B).

4.1 Experimental Setup

Our work is mainly motivated by optimizing inference efficiency on personal devices. To this end, in our experiments, we process sequences individually, running only one sequence at a time. This approach allows us to allocate a specific portion of DRAM for the key-value (KV) cache while primarily focusing on the model size. For the implementation of our inference process, we use the HuggingFace Transformers library (Wolf et al., 2019) and PyTorch (Paszke et al., 2019). This setup is tested under the condition that approximately half of the model size is available in DRAM. With a different level of sparsity or with quantization, one could work with a smaller available DRAM capacity, but these optimizations are orthogonal to our proposed method.

Models. We mainly use OPT 6.7B (Zhang et al., 2022b) and the sparsified Falcon 7B (Mirzadeh et al., 2023) model for our evaluations, but we additionally report results on Phi-2 (Gunasekar et al., 2023), Persimmon 8B (Elsen et al., 2023), and a Llama 2 model (Touvron et al., 2023b) which is sparsified using FATReLU (Song et al., 2024). Note that the techniques introduced in this work are mostly independent of architecture.

Table 2: The I/O latency of OPT 6.7B (16 bit) on an M1 Max when half the memory is available. By employing the activation predictor and windowing, we can reduce the data transfer from flash memory to DRAM. While this reduces the throughput, the bundling technique alleviates this by doubling the data-transfer chunk size and hence the throughput, which reduces the overall latency to half.

Hybrid  Predictor  Windowing  Bundling  DRAM (GB)  Flash→DRAM (GB)  Throughput (GB/s)  I/O Latency (ms)
✗       ✗          ✗          ✗         0          13.4             6.10               2196
✓       ✗          ✗          ✗         6.7        6.7              6.10               1090
✓       ✓          ✗          ✗         4.8        0.9              1.25               738
✓       ✓          ✓          ✗         6.5        0.2              1.25               164
✓       ✓          ✓          ✓         6.5        0.2              2.25               87

Data. We use a small subset of the C4 validation dataset for our latency measurements. We take the first 128 tokens of each example as the prompt and generate 256 new tokens.

Hardware Configuration. Our models are evaluated across three hardware setups. The first includes an Apple M1 Max with a 1TB SSD. The second features an Apple M2 Ultra with a 2TB SSD. On the Apple machines we run the model on the CPU with float32 or on the GPU with Metal and float16. The third setup uses a Linux machine with a 24GB NVIDIA RTX 4090, where GPU computations use bfloat16 models. Across all setups, we assume nearly half of the total memory (DRAM plus GPU) is allocated for model computations.

Baselines. We compare against a naive baseline that loads the model on demand during the forward pass. We additionally compare with our hybrid loading approach as a secondary baseline, in which half of the model is persisted in memory and the other half is loaded on demand at the generation of every token, without the use of sparsity. We used the best theoretically possible numbers for I/O latency for each of the methods to make a fair comparison; the real numbers might be higher. For methods not employing sparsity or weight sharing, at least half of the model must be transferred from flash memory during the forward pass.
This necessity arises because, initially, only half of the model is available in DRAM, but as the forward pass progresses, the entire model capacity is utilized. Consequently, any data not present at the start must be transferred at least once. Thus, the most efficient theoretical baseline involves loading half of the model size from flash memory into DRAM. This optimal I/O scenario serves as our primary baseline. Given the nature of our setup (i.e., the limited available DRAM or GPU memory), we are not aware of any other method that can surpass this theoretical I/O efficiency.

Implementation Details. To optimize data loading from flash memory, our system employs reads parallelized over 32 threads. This multithreaded approach is intended both to better amortize the latency to first byte, by not waiting for each read sequentially, and to maximize read throughput by reading multiple streams at once (Figure 2b). To better assess the actual throughput, we conducted benchmarks without the aid of operating system caching, leading to a more accurate measurement.

4.2 Faster Load From Flash

Our first result in Table 2 demonstrates the effectiveness of the techniques introduced in Section 3, where the I/O latency depends on how much data is transferred from flash to DRAM and on the chunk size, which determines the throughput. For instance, by using a low-rank predictor, we reduce the data transfer significantly, and this traffic can be further reduced using our proposed windowing technique. Compared to a long, contiguous read, scattered reads necessarily result in lower throughput (e.g., 1.25 GiB/s sparse vs. 6.1 GiB/s dense), but this is partially mitigated by bundling up-projection and down-projection weights. The overall effect of sparse reads is still strongly favorable, because only a small subset of the overall weights is loaded incrementally in each iteration, and loading just the required subset of weights takes less time and less DRAM.

Additionally, we examine end-to-end latencies under various setups in Table 3. We allocate approximately 50% of the model size in memory for OPT, Falcon, Persimmon, and Llama 2. For the significantly smaller Phi-2 model, we observed lower sparsity rates, prompting us to set this limit at 65%. We observe a significant improvement in loading efficiency over both the naive and hybrid approaches across all models. Moreover, the GPU backend outcomes further improve when combined with speculative decoding.

Table 3: The end-to-end inference latency (ms) across different setups. Our efficient implementation (referred to as All), which employs the predictor, windowing, and bundling, leads to significant latency reduction.

Model           Method        Backend    I/O     Mem    Compute  Total
OPT 6.7B        Naive         CPU        2196    0      986      3182
OPT 6.7B        All           CPU        105     58     506      669
OPT 6.7B        Naive         Metal M1   2196    0      193      2389
OPT 6.7B        All           Metal M1   92      35     438      565
OPT 6.7B        Naive         Metal M2   2145    0      125      2270
OPT 6.7B        All           Metal M2   26      8      271      305
OPT 6.7B        Naive         GPU        2196    0      22       2218
OPT 6.7B        All           GPU        30      34     20       84
OPT 6.7B        Speculative   GPU        38.5    9.5    12       60
Falcon 7B       Naive         CPU        2295    0      800      3095
Falcon 7B       Hybrid        CPU        1147    0      800      1947
Falcon 7B       All           CPU        161     92     453      706
Persimmon 8B    Naive         CPU        2622    0      1184     3806
Persimmon 8B    Hybrid        CPU        1311    0      1184     2495
Persimmon 8B    All           CPU        283     98     660      1041
Phi-2 2.7B      Naive         CPU        885     0      402      1287
Phi-2 2.7B      Hybrid        CPU        309     0      402      711
Phi-2 2.7B      All           CPU        211     76     259      546
Llama 2 7B      Naive         CPU        2166    0      929      3095
Llama 2 7B      Hybrid        CPU        974     0      929      1903
Llama 2 7B      All           CPU        279     152    563      994
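The latency decomposition reported in Table 3 can be gathered with a simple per-token timing harness along these lines; the three stage callbacks are placeholders for the actual flash reads, cache updates, and forward computation, so the structure rather than the specific function names is what matters.

```python
import time
from collections import defaultdict

def timed_generation(num_tokens, load_from_flash, update_cache, compute_token):
    """Accumulate per-stage wall-clock time over a generation run (milliseconds per token)."""
    totals = defaultdict(float)
    for _ in range(num_tokens):
        t0 = time.perf_counter()
        needed = load_from_flash()   # I/O: fetch the missing bundled neuron chunks
        t1 = time.perf_counter()
        update_cache(needed)         # Mem: swap-delete stale rows, append new ones
        t2 = time.perf_counter()
        compute_token()              # Compute: attention plus sparse FFN forward pass
        t3 = time.perf_counter()
        totals["io"] += (t1 - t0) * 1e3
        totals["mem"] += (t2 - t1) * 1e3
        totals["compute"] += (t3 - t2) * 1e3
    return {stage: total / num_tokens for stage, total in totals.items()}
```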
4.3 The Memory-Latency Tradeoff

So far, we have mainly worked under the assumption that the available DRAM is roughly half of our model size. However, this is not a hard constraint and can be relaxed. To this end, we study the impact of window size on memory usage and, consequently, on latency. By increasing the window size, we increase the percentage of model parameters that we keep in DRAM. As a result, we need to bring in fewer parameters, and the latency can be reduced at the cost of using more DRAM, as shown in Figure 7.

Figure 7: By bringing more of our model (OPT-6.7B) parameters into DRAM (from roughly 35% to 80%), the latency can be reduced on the GPU machine; the latency is broken down into compute, memory management, and load from flash.

5 Ablation Analysis

5.1 The Impact of Longer Generation

In our previous results, we have used short to medium-length (256-token) generations for our benchmarks. It is possible that for longer generations the SSD could thermally throttle and lower performance. However, Figure 8 shows that this is not the case, even when we generate 1000 tokens with the OPT 6.7B model on GPU. Moreover, we show that the average flash latency does not increase as generation proceeds. In contrast, the flash latency for the first few tokens is higher, since the allocated memory in DRAM starts out empty and needs to be filled with neurons, so the first few tokens require more data transfer. One could also argue that non-greedy sampling methods such as nucleus sampling (Holtzman et al., 2020) can result in more diverse activations and hence be less favorable to our method. We found this is not the case either for long generations: nucleus sampling does not lead to lower performance in long generation on either CPU or GPU.

Figure 8: Weight loading latency of OPT 6.7B with increasing generation length (up to 1000 tokens), for CPU with nucleus sampling and GPU with nucleus and greedy sampling.

5.2 Speculative Decoding

To further showcase the strength of our method and its adaptability to other decoding strategies, we applied speculative decoding to the OPT 6.7B model. The challenge for speculative decoding is the limited memory available within DRAM. Given λ tokens from the draft model, the big model verifies them and keeps a window of size k for each layer. The model must decide which tokens' neurons to keep in memory before the verifications are done. If the model keeps the last k tokens out of the λ + 1 tokens in memory and most of them get rejected, there will be very little neuron reuse for the next forward pass. We conjecture that if the acceptance ratio is α, keeping the last k tokens ending with the α(λ + 1)-th token is optimal in DRAM. We used λ = 4 and were able to improve the decoding speed by 1.4x, as shown in Table 5; this is close to the original 1.58x speedup of speculative decoding.

5.3 A Note on Power Consumption

In evaluating the efficiency of our method, we compared the power consumption of our sparse model approach with that of generating tokens using a dense model of similar size. While the power usage (energy per unit of time) of the sparse model was lower than that of the dense model, the extended duration required for token generation resulted in the sparse model having a higher total energy consumption.
This is going to be reflected in the greater area under the curve when plotting power over time for the sparse model compared to the dense model. A systematic and quantitative evaluation of the exact power usage pattern is left as a future work. 6 Related Works As LLMs grow in size, reducing their computational and memory requirements for inference has become an active area of research. Approaches broadly fall into two categories: model compression techniques such as pruning and quantization (Han et al., 2016b; Sun et al., 2023; Jaiswal et al., 2023; Xia et al., 2023; Zhang et al., 2022a; Xu et al., 2023; Shao et al., 2023; Lin et al., 2023; Hoang et al., 2023; Zhao et al., 2023; Ahmadian et al., 2023; Li et al., 2023), and selective execution such as sparse activation (Liu et al., 2023b; Mirzadeh et al., 2023), or conditional computation (Graves, 2016; Baykal et al., 2023). Our work is orthogonal to these directions, focusing mainly on minimizing data transfer from flash memory during inference. Perhaps most related to our work is the literature on selective weight loading. SparseGPU (Narang et al., 2021) exploits activation sparsity to load a subset of weights for each layer. However, it still requires loading from RAM. FlexGen (Sheng et al., 2023) offloads the weights and KV-cache from GPU memory to DRAM and DRAM to flash memory. In contrast, we consider only the cases where the full model can’t reside in the whole DRAM and GPU memory on the edge devices. Notably, FlexGen is still theoretically bound by the slow throughput of flash to DRAM in such scenarios. An expanded discusion of related works is deferred to Appendix E. Overall, the primary assumption in the literature is that the model can fully reside in the GPU memory or system DRAM. However, considering the limited resources available on personal devices, we do not share this assumption in this work. Instead, we concentrate on exploring how to store and load parameters on flash memory more efficiently, aiming to enhance inference efficiency. 7 Discussion In this study, we have tackled the significant challenge of running large language models (LLMs) on devices with constrained memory capacities. Our approach, deeply rooted in the understanding of flash memory and DRAM characteristics, represents a novel convergence of hardware-aware strategies and machine learning. By developing an inference cost model that aligns with these hardware constraints, we have introduced two new techniques: ‘windowing’ and ‘row-column bundling’. The practical outcomes of our research are noteworthy. We have demonstrated the ability to run LLMs up to twice the size of available DRAM. For example, on OPT model, we achieve an acceleration in inference speed of 4-5x compared to traditional loading methods in CPU, and 20-25x in GPU. This is particularly crucial for deploying LLMs in resource-limited environments, thereby expanding their applicability and accessibility. While in this work we have studied the previously unexplored problem of serving LLMs from flash, we note that this work is only a first step in this direction, and has several limitations that we discuss in the next section, and we believe there are several interesting problems left to be explored in future works. For instance, from the algorithmic perspective, more optimized techniques of weight bundling and data structures can be crafted, while from the engineering perspective, the characteristics of specific hardware platforms can inform works on building more efficient inference stacks. 
12570 8 Limitations Our study represents an initial endeavor in the pursuit of democratizing Large Language Model (LLM) inference, making it accessible to a wider array of individuals and devices. We recognize that this early effort has its limitations, which, in turn, open up compelling avenues for future research. A critical aspect for future exploration is the systematic analysis of power consumption and thermal limitations inherent in the methods we propose, particularly for on-device deployment. Currently, our study is limited to single-batch inference. We provide some preliminary results on combining our proposed idea with speculative decoding, however, expanding this to include more complex scenarios like prompt processing and multi-batch inference are valuable areas for further investigation. In our initial proof of concept, we operated under the assumption of memory availability being half the size of the model. Exploring the dynamics of working with varying memory sizes—both larger and smaller—introduces a fascinating balance between latency and accuracy, and is a compelling area for future exploration. In conclusion, our methodology is constructed on the foundation of sparsified networks. Nonetheless, the underlying concept holds potential for broader applications. It can be adapted to selectively load weights in non-sparse networks or to dynamically retrieve model weights from flash storage. This adaptation would be contingent on the specific requirements of the input prompt or the contextual parameters provided. Such an approach suggests a versatile strategy for managing model weights, and optimizing performance based on the nature of the input, thereby enhancing the efficiency, usefulness, and applicability of the proposed scheme in various scenarios dealing with Large Language Models (LLMs). Acknowledgements We would like to thank Itay Sagron, Lailin Chen, Chenfan (Frank) Sun, Hanieh Hashemi, Mahyar Najibi, Qichen Fu, Moin Nabi, Peter Zatloukal, Arsalan Farooq, Sachin Mehta, Mohammad Samragh, Matt Johnson, Etai Zaltsman, Lin Chang, Dominic Giampaolo, Tal Uliel, Hadi Pouransari, Fartash Faghri, Oncel Tuzel, Samy Bengio, Ruoming Pang, Chong Wang, Ronan Collobert, David Grangier, and Aftab Munshi for the valuable discussions. References Arash Ahmadian, Saurabh Dash, Hongyu Chen, Bharat Venkitesh, Stephen Gou, Phil Blunsom, A. Ustun, and Sara Hooker. 2023. Intriguing properties of quantization at scale. ArXiv, abs/2305.19268. Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru, Maitha Alhammadi, Mazzotta Daniele, Daniel Heslow, Julien Launay, Quentin Malartic, Badreddine Noune, Baptiste Pannier, and Guilherme Penedo. 2023. The falcon series of language models: Towards open frontier models. Reza Yazdani Aminabadi, Samyam Rajbhandari, Ammar Ahmad Awan, Cheng Li, Du Li, Elton Zheng, Olatunji Ruwase, Shaden Smith, Minjia Zhang, Jeff Rasley, et al. 2022. Deepspeed-inference: enabling efficient inference of transformer models at unprecedented scale. In SC22: International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1–15. IEEE. Sangmin Bae, Jongwoo Ko, Hwanjun Song, and Se-Young Yun. 2023. Fast and robust earlyexiting framework for autoregressive language models with synchronized parallel decoding. ArXiv, abs/2310.05424. Cenk Baykal, Dylan Cutler, Nishanth Dikkala, Nikhil Ghosh, Rina Panigrahy, and Xin Wang. 2023. Alternating updates for efficient transformers. ArXiv, abs/2301.13310. 
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Yew Ken Chia, Pengfei Hong, Lidong Bing, and Soujanya Poria. 2023. Instructeval: Towards holistic evaluation of instruction-tuned large language models. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311. Han Dai, Yi Zhang, Ziyu Gong, Nanqing Yang, Wei Dai, Eric Song, and Qiankun Xie. 2021. Spatten: Efficient sparse attention architecture with cascade token and head pruning. In Advances in Neural Information Processing Systems, volume 34. Erich Elsen, Augustus Odena, Maxwell Nye, Sa˘gnak Ta¸sırlar, Tri Dao, Curtis Hawthorne, Deepak Moparthi, and Arushi Somani. 2023. Releasing Persimmon-8B. 12571 Mingyu Gao, Jie Yu, Wentai Li, Michael C Dai, Nam Sung Kim, and Krste Asanovic. 2022. computedram: In-memory compute using off-the-shelf dram. In Proceedings of the 27th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, pages 1065– 1079. Google Gemini Team. 2023. Gemini: a family of highly capable multimodal models. arXiv preprint arXiv:2312.11805. Alex Graves. 2016. Adaptive computation time for recurrent neural networks. In International Conference on Machine Learning, pages 3500–3509. PMLR. Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, and Yuanzhi Li. 2023. Textbooks are all you need. CoRR, abs/2306.11644. Jongmin Ham, Jinha Kim, Jinwoong Choi, Cheolwoo Cho, Seulki Hong, Kyeongsu Han, and Taejoo Chung. 2016. Graphssd: a high performance flash-based storage system for large-scale graph processing. In 2016 USENIX Annual Technical Conference (USENIXATC 16), pages 243–256. Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A Horowitz, and William J Dally. 2016a. Eie: efficient inference engine on compressed deep neural network. arXiv preprint arXiv:1602.01528. Song Han, Huizi Mao, and William J Dally. 2016b. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. In International Conference on Learning Representations (ICLR). Awni Hannun, Jagrit Digani, Angelos Katharopoulos, and Ronan Collobert. 2023. MLX: Efficient and flexible machine learning on apple silicon. Zhenyu He, Zexuan Zhong, Tianle Cai, Jason D Lee, and Di He. 2023. Rest: Retrieval-based speculative decoding. ArXiv, abs/2311.08252. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. Duc Nien Hoang, Minsik Cho, Thomas Merth, Mohammad Rastegari, and Zhangyang Wang. 2023. (dynamic) prompting might be all you need to repair compressed llms. ArXiv, abs/2310.00867. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. 
In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Ajay Jaiswal, Zhe Gan, Xianzhi Du, Bowen Zhang, Zhangyang Wang, and Yinfei Yang. 2023. Compressing llms: The truth is rarely pure and never simple. ArXiv, abs/2310.01382. Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de Las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7b. CoRR, abs/2310.06825. Yaniv Leviathan, Matan Kalman, and Yossi Matias. 2022. Fast inference from transformers via speculative decoding. Liang Li, Qingyuan Li, Bo Zhang, and Xiangxiang Chu. 2023. Norm tweaking: High-performance lowbit quantization of large language models. ArXiv, abs/2309.02784. Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Xingyu Dang, and Song Han. 2023. Awq: Activationaware weight quantization for llm compression and acceleration. ArXiv, abs/2306.00978. Zechun Liu, Barlas Oguz, Changsheng Zhao, Ernie Chang, Pierre Stock, Yashar Mehdad, Yangyang Shi, Raghuraman Krishnamoorthi, and Vikas Chandra. 2023a. Llm-qat: Data-free quantization aware training for large language models. CoRR. Zichang Liu, Jue Wang, Tri Dao, Tianyi Zhou, Binhang Yuan, Zhao Song, Anshumali Shrivastava, Ce Zhang, Yuandong Tian, Christopher Re, et al. 2023b. Deja vu: Contextual sparsity for efficient llms at inference time. In International Conference on Machine Learning, pages 22137–22176. PMLR. Moinuddin K Meswani, Sergey Blagodurov, David Roberts, John Slice, Mike Ignatowski, and Gabriel Loh. 2015. Neural cache: Bit-serial in-cache acceleration of deep neural networks. In 2015 48th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), pages 383–394. IEEE. Iman Mirzadeh, Keivan Alizadeh, Sachin Mehta, Carlo C Del Mundo, Oncel Tuzel, Golnoosh Samei, Mohammad Rastegari, and Mehrdad Farajtabar. 2023. Relu strikes back: Exploiting activation sparsity in large language models. Sharan Narang, Logan Feistel, Erich Elsen Undersander, Cindy Song, and Gregory Diamos. 2022. Firefly: A lightweight system for running multi-billion parameter models on commodity hardware. In 2022 ACM/IEEE 49th Annual International Symposium on Computer Architecture (ISCA), pages 757–771. IEEE. 12572 Sharan Narang, Erich Elsen Undersander, and Gregory Diamos. 2021. Sparse gpu kernels for deep learning. In International Conference on Learning Representations. Angshuman Parashar, Minsoo Rhu, Anurag Mukkara, Antonio Puglielli, Rangharajan Venkatesan, Brucek Khailany, Joel Emer, Stephen W Keckler, and William J Dally. 2017. Timeloop: A systematic approach to dnn accelerator evaluation. In 2017 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), pages 241–251. IEEE. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Z. Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 8024–8035. 
Samyam Rajbhandari, Olatunji Ruwase, Jeff Rasley, Shaden Smith, and Yuxiong He. 2021. Zero-infinity: Breaking the gpu memory wall for extreme scale deep learning. In SC21: International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1–14. Minsoo Rhu, Natalia Gimelshein, Jason Clemons, Arslan Zulfiqar, and Stephen W Keckler. 2013. vdnn: Virtualized deep neural networks for scalable, memory-efficient neural network design. In 2016 49th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), page Article 13. IEEE Computer Society. Wenqi Shao, Mengzhao Chen, Zhaoyang Zhang, Peng Xu, Lirui Zhao, Zhiqiang Li, Kaipeng Zhang, Peng Gao, Yu Jiao Qiao, and Ping Luo. 2023. Omniquant: Omnidirectionally calibrated quantization for large language models. ArXiv, abs/2308.13137. Yifan Shao, Mengjiao Li, Wenhao Cai, Qi Wang, Dhananjay Narayanan, and Parthasarathy Ranganathan. 2022. Hotpot: Warmed-up gigascale inference with tightly-coupled compute and reuse in flash. In Proceedings of the 55th Annual IEEE/ACM International Symposium on Microarchitecture, pages 335–349. Ying Sheng, Lianmin Zheng, Binhang Yuan, Zhuohan Li, Max Ryabinin, Beidi Chen, Percy Liang, Christopher Ré, Ion Stoica, and Ce Zhang. 2023. Flexgen: High-throughput generative inference of large language models with a single GPU. In International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pages 31094–31116. PMLR. Chenyang Song, Xu Han, Zhengyan Zhang, Shengding Hu, Xiyu Shi, Kuai Li, Chen Chen, Zhiyuan Liu, Guangli Li, Tao Yang, and Maosong Sun. 2024. Prosparse: Introducing and enhancing intrinsic activation sparsity within large language models. Vedant Subramani, Marios Savvides, Li Ping, and Sharan Narang. 2022. Adapt: Parameter adaptive tokenwise inference for vision transformers. In Proceedings of the 55th Annual IEEE/ACM International Symposium on Microarchitecture. Mingjie Sun, Zhuang Liu, Anna Bair, and J. Zico Kolter. 2023. A simple and effective pruning approach for large language models. ArXiv, abs/2306.11695. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023a. Llama: Open and efficient foundation language models. CoRR, abs/2302.13971. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface’s transformers: State-of-the-art natural language processing. CoRR, abs/1910.03771. Haojun Xia, Zhen Zheng, Yuchao Li, Donglin Zhuang, Zhongzhu Zhou, Xiafei Qiu, Yong Li, Wei Lin, and Shuaiwen Leon Song. 2023. Flash-llm: Enabling low-cost and highly-efficient large generative model inference with unstructured sparsity. Proc. VLDB Endow., 17:211–224. Zhaozhuo Xu, Zirui Liu, Beidi Chen, Yuxin Tang, Jue Wang, Kaixiong Zhou, Xia Hu, and Anshumali Shrivastava. 2023. Compress, then prompt: Improving accuracy-efficiency trade-off of llm inference with transferable prompt. ArXiv, abs/2305.11186. 
Rongjie Yi, Liwei Guo, Shiyun Wei, Ao Zhou, Shangguang Wang, and Mengwei Xu. 2023. Edgemoe: Fast on-device inference of moe-based large language models. ArXiv, abs/2308.14352. Jinchao Zhang, Jue Wang, Huan Li, Lidan Shou, Ke Chen, Gang Chen, and Sharad Mehrotra. 2023. Draft & verify: Lossless large language model acceleration via self-speculative decoding. ArXiv, abs/2309.08168. Shizhao Zhang, Han Dai, Tian Sheng, Jiawei Zhang, Xiaoyong Li, Qun Xu, Mengjia Dai, Yunsong Xiao, Chao Ma, Rui Tang, et al. 2022a. Llm quantization: 12573 Quantization-aware training for large language models. In Advances in Neural Information Processing Systems, volume 35. Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona T. Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022b. OPT: open pre-trained transformer language models. CoRR, abs/2205.01068. Zhengyan Zhang, Yixin Song, Guanghui Yu, Xu Han, Yankai Lin, Chaojun Xiao, Chenyang Song, Zhiyuan Liu, Zeyu Mi, and Maosong Sun. 2024. Relu2 wins: Discovering efficient activation functions for sparse llms. Yilong Zhao, Chien-Yu Lin, Kan Zhu, Zihao Ye, Lequn Chen, Size Zheng, Luis Ceze, Arvind Krishnamurthy, Tianqi Chen, and Baris Kasikci. 2023. Atom: Lowbit quantization for efficient and accurate llm serving. ArXiv, abs/2310.19102. 12574 A Appendix Overview The appendix is structured as follows: • In Appendix B, we provide additional details on the low-rank predictor introduced in Section 3. We evaluate our trained predictors from both accuracy (i.e., their impact on the model’s accuracy) and efficiency perspectives (i.e., the additional neurons they predict to be activated). • Appendix C offers a more detailed description of our experimental setup and implementation for the experiments conducted in Section 4. • In Appendix D, we discuss a negative result regarding the strategy of bundling neurons based on co-activation as a potential method for increasing chunk size (cf. Section 3.2). We intentionally include this negative result as we believe it may inspire future research on effective neuron bundling and its utilization for efficient inference. • In Appendix E, we delve deeper into the review of related works in the literature. • In Appendix F, we go over implications of llm in flash when going to smaller devices. • Finally, Appendix G compares the texts generated by the base model with those produced by our models that utilize the predictor. B Low-Rank Activation Predictor: Additional Results B.1 Sparsity patterns of predictors The number of neurons predicted to be active will determine the efficiency of our algorithm, the less sparse the predicted activation the more weights will have to be loaded from flash. We evaluated the sparsity patterns over 100 random samples of the C4 validation dataset. In Figure 9 we can see the sparsity patterns of OPT, Persimmon, and Phi. In OPT, the number of active neurons predicted by the predictor is 3x the amount of actual sparsity observed in the case of dense inference. In Persimmon it is about the same - 3x the required neurons, and in Phi-2 it is roughly 2x the required neurons of the original model that are activated by the predictor. The neurons that are activated by the model and not the predictor are the false negatives. 
The gap between the neurons active in both the predictor and the model and the neurons active only in the model is very narrow in all three models; hence, false negatives constitute a small fraction of predictions. To reduce false negatives, the predictor has to "over-predict", which results in loading redundant neurons, that is, neurons that will have zero activation and no effect on the outcome. An interesting future direction of this work is improving the accuracy of the predictors so that fewer neurons need to be loaded. One observation for OPT and Persimmon is that later layers have more active neurons, as can be seen in Figure 9d.

B.2 Accuracy of models using predictors

We evaluate the accuracy of models on public benchmarks with predictors in place. Table 4 shows that the zero-shot accuracy of the models does not drop. We can also see that increasing the predictor size for the last 4 layers of Persimmon and Falcon improves the zero-shot metrics. We evaluated the models on the MMLU (Hendrycks et al., 2021) benchmark as well, using InstructEval's implementation (Chia et al., 2023). Figure 10a shows that Persimmon's MMLU does not drop when the last 4 layers use higher-rank predictors, but it does for lower-rank ones. Phi-2's MMLU drops 2.3 points from the relufied model but still stays at 52, as shown in Figure 10b. Increasing the threshold of the low-rank predictor reduces the amount of data loaded; this comes with a slight degradation in zero-shot metrics, as seen in Table 4 for different thresholds of the Persimmon model. We used a threshold of 0.7 for Persimmon.

B.3 Overhead of predictors

The average rank of the predictors in OPT-6.7B is 240, which amounts to less than 2.4% of the non-embedding weights and FLOPs. In the M1 Max CPU experiments this accounted for 2.75% of inference time, and on the RTX GPU 4.8%, which is negligible. For Falcon 7B, predictors take 4% of the model size and CPU computation. For Persimmon, predictors take 2.85% of CPU inference time, and for Llama 2 7B, 3.92%.

Figure 9: (a) The percentage of fired neurons in each layer's FFN is less than 5%; the predictor activates roughly 3x this amount. The narrow gap between neurons that are activated in both shows the model output will not change abruptly. (b) Earlier layers of Persimmon have fewer active neurons and later layers have more, so we trained larger predictors for the later layers. (c) In Phi-2, the middle layers have a higher active-neuron ratio, so we trained larger predictors for those layers. (d) The number of neurons cached in a real inference scenario; later layers have more cached rows because of their higher non-sparse ratio.
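As a minimal PyTorch sketch of the kind of rank-r predictor discussed in Appendix B: a down-projection to rank r, an up-projection to the layer's FFN width, a sigmoid, and a threshold. The module name, the exact input it sees (the hidden state entering the FFN), and the example dimensions (OPT-6.7B-like sizes) are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class LowRankActivationPredictor(nn.Module):
    """Predicts which FFN neurons will fire, using a rank-r bottleneck."""

    def __init__(self, d_model, num_neurons, rank=128, threshold=0.5):
        super().__init__()
        self.down = nn.Linear(d_model, rank, bias=False)    # d_model -> r
        self.up = nn.Linear(rank, num_neurons, bias=False)  # r -> FFN width
        self.threshold = threshold

    def forward(self, hidden):
        # hidden: (batch, d_model), the input to this layer's FFN
        probs = torch.sigmoid(self.up(self.down(hidden)))
        return probs > self.threshold  # boolean mask of neurons to load

# Example with OPT-6.7B-like sizes: d_model=4096, FFN width 16384, rank 128.
predictor = LowRankActivationPredictor(d_model=4096, num_neurons=16384, rank=128)
mask = predictor(torch.randn(1, 4096))  # which FFN rows/columns to fetch from flash
```

At inference time the boolean mask decides which bundled rows/columns are fetched from flash for the current token; during training, such a predictor could be supervised with a binary target indicating which neurons had nonzero post-ReLU activations in the dense model.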
Figure 10: (a) If we use larger predictors in the last 4 layers, Persimmon-8B's MMLU does not drop much (comparing the original model, a 36 × 256 predictor, and a (32 × 256) + (4 × 1152) predictor across thresholds from 0.50 to 0.70). (b) Phi-2's MMLU drops during relufication due to lower-quality data; using distillation improves the results. Using predictors degrades the results slightly but keeps MMLU at 52.

C Extended Results

Experimental Setup: Our experiments are designed to optimize inference efficiency on personal devices. To this end, we process sequences individually, running only one sequence at a time. This approach allows us to allocate a specific portion of DRAM for the Key-Value (KV) cache while primarily focusing on the model size. This strategy is particularly effective when dealing with only one sequence/query at a time. (For the OPT 6.7B model with context length 2048, the KV cache requires 2048 × 2dmodel elements, which is only 8% of the model size; the KV cache itself can also be held in flash memory.) For the implementation of our inference process, we utilize HuggingFace Transformers with KV caching. This setup is tested under the condition that approximately half of the model size is available in DRAM. We select this amount as a showcase of the idea of hosting the LLM in flash. With a different level of sparsity, or by employing quantization, one can work with a smaller available DRAM capacity as well, or alternatively use larger models. Such a configuration demonstrates the practicality of executing inference with lower memory footprints.

Systems performance optimization: The primary target of our experiments was the Apple macOS 14.3 operating system. For high-performance inference, most existing deep learning frameworks require that the shapes of the weights and of the intermediate results in the computation remain static throughout. In particular, the Metal Performance Shaders (MPS) backend for PyTorch exhibits rather steep performance cliffs when any shape dynamism is present in the computational graph. In order to build a high-performance implementation, we chose to borrow custom, dynamism-friendly Metal kernels from Apple's open-source MLX deep learning framework (Hannun et al., 2023). In addition, we made use of the unified memory architecture available on Apple systems, which we exploit to maintain the weights cache using the GPU allocator, by creating the tensors with the MTLStorageModeShared allocation mode. This mode allows both the CPU and the GPU to access the same memory buffer directly, without redundant copies.
Table 4: Model performance on zero-shot tasks when using predictors.
Model | Rank (sensitive layers*) | Rank (other layers) | Threshold | ARC-Easy | ARC-Challenge | HellaSwag
OPT 6.7B | – | – | – | 66.1 | 30.6 | 50.3
OPT 6.7B with predictors | 1024 | 128 | 0.5 | 66.2 | 30.6 | 49.8
Falcon 7B | – | – | – | 74.62 | 40.05 | 57.77
Falcon 7B relufied | – | – | – | 72.52 | 38.23 | 54.17
Falcon 7B relufied with predictors | 128 | 128 | 0.50 | 70.20 | 35.41 | 50.74
Falcon 7B relufied with predictors | 1152 | 128 | 0.50 | 71.51 | 34.22 | 52.28
Falcon 7B relufied with predictors | 1152 | 256 | 0.50 | 72.35 | 36.35 | 53.16
Persimmon 8B | – | – | – | 67.80 | 34.64 | 50.70
Persimmon 8B with predictors | 256 | 256 | 0.5 | 67.26 | 33.87 | 50.51
Persimmon 8B with predictors | 256 | 256 | 0.55 | 66.71 | 34.73 | 50.54
Persimmon 8B with predictors | 256 | 256 | 0.60 | 66.67 | 34.04 | 50.59
Persimmon 8B with predictors | 256 | 256 | 0.65 | 66.41 | 34.22 | 50.42
Persimmon 8B with predictors | 1152 | 256 | 0.70 | 66.30 | 34.40 | 52.70
Phi-2 | – | – | – | 79.62 | 51.49 | 55.17
Phi-2 relufied | – | – | – | 80.60 | 50.12 | 54.30
Phi-2 relufied with predictors | 800 | mix 160, 480 | 0.40 | 79.96 | 49.57 | 53.50
Phi-2 relufied with predictors | 800 | mix 160, 480 | 0.55 | 78.90 | 47.90 | 52.75
* For OPT, Falcon, and Persimmon the sensitive layers are the last 4 layers; for Phi-2 they are the middle 8.

We observe that the inputs to and the outputs from the feed-forward network have a static shape, so by hiding the dynamism inside a binary extension and handling the shape dynamism and memory management internally, we were able to achieve a level of performance that is not achievable with the PyTorch MPS backend alone, while leaving the rest of the model intact. Over the course of our work, we were able to eliminate nearly all redundant data movement, improving inference performance.

Caching Considerations for Data Loading from Flash Memory. When data is read from flash memory, the operating system typically caches the blocks in the block cache, anticipating future reuse. However, this caching mechanism consumes additional memory in DRAM beyond what is allocated for the model. To accurately assess the real throughput of flash memory under limited DRAM conditions, benchmarks should be conducted without relying on caching. Practical systems may or may not rely on the filesystem cache, depending on requirements. For the purpose of our hardware benchmarking in this study, we deliberately and significantly pessimize our NVMe throughput measurements. On macOS and iOS, we employ the F_NOCACHE flag with the fcntl() function, while on Linux, we use DirectIO. Additionally, on macOS, we clear any resident buffers before initiating the benchmark using the purge command. This approach provides a conservative lower bound of throughput in scenarios where no caching is permitted and makes the benchmarks repeatable. It's worth noting that these figures can improve if either the inference code or the operating system is allowed to cache some part of the weights. While OS-level buffer caching is advantageous for general-purpose applications with high cache hit rates, it lacks fine-grained control over cache usage per process or buffer eviction at the application level. In the context of on-device memory constraints and large model sizes, this could lead to a situation where the file-system-level cache does not help, because in order to evaluate later layers, earlier layers must be evicted in a rolling pattern, so the effective cache hit rate is close to zero. Aside from being inefficient, this can cause coexistence issues with other processes due to memory allocation pressure and Translation Lookaside Buffer (TLB) churn.
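As a rough illustration of this kind of uncached-read benchmark, the sketch below opens a file with F_NOCACHE on macOS and times fixed-size chunk reads. The structure, constants, and fallback value for F_NOCACHE (48 on Darwin, used only if the Python fcntl module does not expose it) are our own illustration, not the authors' benchmarking code; on Linux one would instead open with O_DIRECT, which additionally requires page-aligned buffers and offsets and is only mentioned in a comment here.

```python
import os
import sys
import time
import fcntl

def benchmark_uncached_reads(path, chunk_bytes=32 * 1024, num_reads=1000):
    """Time sequential chunk reads while bypassing the OS block cache (macOS)."""
    fd = os.open(path, os.O_RDONLY)
    if sys.platform == "darwin":
        # F_NOCACHE disables data caching for this file descriptor.
        F_NOCACHE = getattr(fcntl, "F_NOCACHE", 48)  # 48 on Darwin (assumed fallback)
        fcntl.fcntl(fd, F_NOCACHE, 1)
    # On Linux, open with os.O_DIRECT instead (needs aligned buffers; omitted here).
    start = time.perf_counter()
    total = 0
    for _ in range(num_reads):
        data = os.read(fd, chunk_bytes)
        if not data:
            os.lseek(fd, 0, os.SEEK_SET)  # wrap around for small files
            continue
        total += len(data)
    elapsed = time.perf_counter() - start
    os.close(fd)
    return total / elapsed / 1e9  # approximate throughput in GB/s

# Example: print(benchmark_uncached_reads("weights.bin", chunk_bytes=32 * 1024))
```

Sweeping `chunk_bytes` in such a harness is one way to reproduce the observation that larger, bundled reads yield higher sustained flash throughput.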
C.1 Results for OPT 6.7B Model This section presents the outcomes for the OPT 6.7B model, specifically under conditions where the memory allocated for the model in DRAM is 12577 Table 5: The end-to-end inference latency across different setups with standard deviation. Our efficient implementation (referred to as All) that employs the predictor, windowing, and bundling can lead to significant latency reduction. Inference Latency (ms) Model Method Backend I/O Mem Compute Total OPT 6.7B All CPU 104.90 (± 18.46) 57.79 (± 9.63) 506.50 (±17.33) 669.20 (± 39.74) OPT 6.7B All GPU 30.55 (±3.09) 34.11 (±2.38) 19.97 (±0.86) 84.64 (±6.16) OPT 6.7B Speculative GPU 38.53 (±10.0) 9.45 (±1.7) 12.18 (±2.0) 60.16 (±13.4) Persimmon 8B All CPU 310.52 (±41.12) 155.80 (±21.30) 623.74 (± 24.76) 1090.08 (±79.08) Phi-2 All CPU 211.08 (±24.81) 76.87 (±7.18) 258.74 (±20.90) 546.69 (±31.98) approximately half of its baseline requirement. Predictors. For the initial 28 layers of the OPT 6.7B model, we train predictors with a rank of r = 128. To reduce the occurrence of false negatives, the final four layers employ predictors with a higher rank of r = 1024. These predictors achieve an average of 5% false negatives and 7% false positives in the OPT 6.7B model. As depicted in Figure 3a, our predictor accurately identifies most activated neurons, while occasionally misidentifying inactive ones with values near zero. Notably, these false negatives, being close to zero, do not significantly alter the final output when they are excluded. Furthermore, as demonstrated in Table 1, this level of prediction accuracy does not adversely affect the model’s performance in 0-shot tasks. Windowing in the OPT 6.7B Model. Utilizing a windowing method with k = 4 in the OPT 6.7B model significantly reduces the necessity for fresh data loading. Using active neurons of predictor would require about 10% of the DRAM memory capacity on average; however, with our method, it drops to 2.4%. This process involves reserving DRAM memory for a window of the past 5 tokens, which, in turn, increases the DRAM requirement for the Feed Forward Network (FFN) to 24%. The overall memory retained in DRAM for the model comprises several components: Embeddings, the Attention Model, the Predictor, and the Loaded Feed Forward layer. The Predictor accounts for 1.25% of the model size, while Embeddings constitute 3%. The Attention Model’s weights make up 32.3%, and the FFN occupies 15.5% (calculated as 0.24×64.62). Summing these up, the total DRAM memory usage amounts to 52.1% of the model’s size. Latency Analysis: Using a window size of 4, each token requires access to 2.4% of the Feed Forward Network (FFN) neurons. For a 32-bit model, the data chunk size per read is 2dmodel × 4 bytes = 32 KiB, as it involves concatenated rows and columns. On an M1 Max, this results in the average latency of 105ms per token for loading from flash and 57ms for memory management (involving neuron deletion and addition). Thus, the total memory-related latency is less than 162ms per token (refer to Figure 1). In contrast, the baseline approach, which requires loading 13.4GB of data at a speed of 6.1GB/s, leads to a latency of approximately 2196ms per token. Therefore, our method represents a substantial improvement over the baseline. For a 16-bit model on a GPU machine, the flash load time is reduced to 30.5ms, and memory management takes 35ms, slightly higher due to the additional overhead of transferring data from CPU to GPU. 
Nevertheless, the baseline method's I/O time remains above 2000 milliseconds. Detailed comparisons of how each method impacts performance are provided in Table 2.

C.2 Results for Falcon 7B Model

To verify that our findings generalize beyond OPT models, we also apply the idea of LLM in Flash to the Falcon model (Almazrouei et al., 2023). Since the original Falcon model is not sparse, we used a sparsified (relufied) version with almost the same performance as the base version (Mirzadeh et al., 2023). Similar to the previous section, we present the results obtained under the condition that approximately half of the model size is available for use in DRAM.

Predictors. In the Falcon 7B model, predictors of rank r = 256 are used for the initial 28 layers, and r = 1152 for the last four layers.

Window configuration. Our model reserves memory for a window containing the last 4 tokens. This setup utilizes 33% of the Feed Forward Network (FFN). In terms of memory allocation, embeddings take 4.2% of the model size, attention weights account for 19.4%, and predictors require 4%. The active portion of the FFN, given our window size, is 25.3% (calculated as 0.33 × 76.8). Overall, this amounts to 52.93% of the model's total size.

Latency Analysis. Using a window size of 4 in our model requires accessing 3.1% of the Feed Forward Network (FFN) neurons for each token. In a 32-bit model, this equates to a data chunk size of 35.5 KiB per read (calculated as 2dmodel × 4 bytes). On an M1 Max device, the time taken to load this data from flash memory is approximately 161ms, and the memory management process adds another 90ms, leading to a total latency of 250ms per token. In comparison, the baseline latency is around 2196 milliseconds, making our method approximately 9 to 10 times faster.

Figure 11: Perplexity of the original models versus ours (OPT 6.7B, Persimmon, and relufied Phi-2). There is a slight drop in the perplexity of OPT and Persimmon, and a larger drop for Phi-2, after using predictors.

C.3 Persimmon 8B

We have applied LLM in Flash to Persimmon 8B. Since Persimmon already uses a squared-ReLU activation, we did not need to finetune it further.

Predictors. In the Persimmon 8B base model, predictors of rank r = 256 are used for the initial 32 layers and r = 1152 for the last four layers. Persimmon's sparsity is lower than that of OPT and Falcon, so we changed the sigmoid threshold to 0.7. In Figure 10a you can see that the MMLU of the model does not drop with this setting; this would not be the case if all the predictors had a rank of 256. Also, in Figure 11 you can see that the perplexity on WikiText-2 does not drop abruptly. Qualitative evaluations can be found in Section G.

Window configuration. Our model reserves memory for a window containing the last 4 tokens, and also reduces the window size dynamically whenever total memory usage exceeds 25% of the FFN size.

Latency analysis. Since we fix the memory budget, we do not exceed the 25% FFN limit, which corresponds to 50% of the total model size. We used nucleus sampling (Holtzman et al., 2020) with p = 0.9 to have a broader analysis. As can be seen in Figure 1, it takes 310ms for loading from flash and 155ms for memory management.

C.4 Phi-2

We have applied LLM in Flash to Phi-2. We first relufied the model, then trained the predictor, and ran inference. Since the model is already small, we gave it 65% of its memory for running the inference.
During inference, we modified the window size to make sure it never exceeds this limit.

Relufication. We finetuned the model using the RefinedWeb dataset following Mirzadeh et al. (2023). We found that adding a distillation loss as suggested by Liu et al. (2023a) improves the results, as can be seen in Figure 10b. The MMLU metric drops from 57 to 54.3 after relufication with distillation.

Predictors. The sparsity pattern of Phi-2 is different from the other models. As can be seen in Figure 9c, the middle layers are less sparse than the other layers for a random sample of the C4 dataset. As a general rule of thumb, we trained larger predictors for the less sparse layers. Grouping the layers in fours gives 8 groups. For the last group, we did not use any predictors. For the first, second, and seventh groups, we trained a predictor of size 160. For the third and sixth groups we trained predictors of size 480, and for the groups in the middle we trained predictors of size 800.

Latency analysis. Phi-2 obtains a 2.35x speedup over the naive baseline, as can be seen in Table 3. It also improves on our hybrid-only approach.

C.5 Llama 2

To further validate our results, we ran Llama 2 (Touvron et al., 2023b) from flash. We used the sparsified Llama 2 (Song et al., 2024) as the base model and ran our experiments on the M1 Max CPU with a window size of 2. We did not cache weights once total memory usage grew beyond 55% of the model size.

Sparse models. The sparsified model (Song et al., 2024) uses the FATReLU function to ensure that the sparsity of Llama is above 90%. For models that use the SwiGLU activation function (having a gated linear unit, a down projection, and an up projection), replacing Swish with ReLU within the FFN does not ensure a high amount of sparsity (Mirzadeh et al., 2023). FATReLU activates only neurons whose gated value exceeds a threshold, ensuring that only a small, most informative portion of neurons is activated.

Predictors. We used predictors of size 1024 in the 4 middle layers and predictors of size 256 in all other layers. We used larger predictors in the middle layers because of their higher neuron activation (similar to Phi-2). Why middle layers are more active in some networks while later layers are more active in others is a subject for follow-up research.

Latency analysis. LLM in Flash obtains a 3x speedup over the naive baseline (Table 3). It also outperforms the hybrid model, which is the theoretical lower bound for approaches that do not use sparsity.

Accuracy analysis. Evaluating MMLU with the InstructEval repository (Chia et al., 2023), we obtained 41.8 for Llama 2, 38.96 for the sparsified model of Song et al. (2024), and 38.63 after training our predictors. We noted a difference between reported numbers and our evaluations. Using predictors on top of the sparse models did not hurt the MMLU results.

Alternative approaches. Since Llama 2's gate projection with FATReLU provides sparse neurons, we can directly use the gate projection as the predictor, which matches the sparse base model exactly. However, since the gate projections take 1/3 of each FFN layer and 5/9 of each transformer block, keeping them in memory occupies more DRAM than having predictors. In fact, with a window size of 1, this approach required 65% of the model size in DRAM.

D Bundling Based on Co-activation

Given the high reuse of data in sparse models, we hypothesize that neurons may be highly correlated in their activity patterns, which may enable further bundling.
To verify this, we calculated the activations of neurons over the C4 validation dataset. For each neuron, its coactivation with the other neurons forms a power-law distribution, as depicted in Figure 12a. Now, let us call the neuron that coactivates with a given neuron most often its closest friend. Indeed, the closest friend of each neuron coactivates with it very often: as Figure 12b demonstrates, each neuron and its closest friend coactivate at least 95% of the time. The graphs for the 4th closest friend and the 8th closest friend are also drawn. Based on this information, we decided to store a bundle of each neuron and its closest friend in flash memory; whenever a neuron is predicted to be active, we bring in its closest friend too. Unfortunately, this resulted in loading highly active neurons multiple times, and the bundling worked against our original intention: the neurons that are very active are the "closest friends" of almost everyone.

E Extended Related Works

Efficient Inference for Large Language Models. As LLMs grow in size, reducing their computational and memory requirements for inference has become an active area of research. Approaches broadly fall into two categories: model compression techniques like pruning and quantization (Han et al., 2016b; Sun et al., 2023; Jaiswal et al., 2023; Xia et al., 2023; Zhang et al., 2022a; Xu et al., 2023; Shao et al., 2023; Lin et al., 2023; Hoang et al., 2023; Zhao et al., 2023; Ahmadian et al., 2023; Liu et al., 2023a; Li et al., 2023), and selective execution like sparse activations (Liu et al., 2023b; Mirzadeh et al., 2023; Zhang et al., 2024) or conditional computation (Graves, 2016; Baykal et al., 2023). Our work is complementary, focusing on minimizing data transfer from flash memory during inference.

Selective Weight Loading. Most related to our approach is prior work on selective weight loading. SparseGPU (Narang et al., 2021) exploits activation sparsity to load a subset of weights for each layer; however, it still requires loading from RAM. FlexGen (Sheng et al., 2023) offloads the weights and KV-cache from GPU memory to DRAM and from DRAM to flash memory. In contrast, we consider only the cases where the full model cannot reside in the combined DRAM and GPU memory of an edge device; FlexGen is theoretically bound by the slow flash-to-DRAM throughput in such scenarios. Firefly (Narang et al., 2022) shares our goal of direct flash access but relies on a hand-designed schedule for loading; in contrast, we propose a cost model to optimize weight loading. Similar techniques have been explored for CNNs (Parashar et al., 2017; Rhu et al., 2013). Concurrently,

Figure 12: (a) For a randomly selected neuron from the 10th layer of OPT 6.7B, there exists a group of neurons that are coactivated with high probability. (b) The closest friend of a neuron is defined as the most coactivated neuron in the same layer, and the closest friend of every neuron in OPT 6.7B almost always gets coactivated.
(c) The 3rd closest friend gets co-activated with each neuron 86% of the time on average (d) The 7th closest friend seems to be less relevant and doesn’t coactivate with the neuron very often. Adapt (Subramani et al., 2022) has proposed adaptive weight loading for vision transformers. We focus on transformer-based LLMs and introduce techniques like neuron bundling tailored to LLMs. To hide flash latency, we build on speculative execution techniques like SpAtten (Dai et al., 2021; Bae et al., 2023). But, we introduce lightweight speculation tailored to adaptive weight loading. Hardware Optimizations. There is a rich body of work on hardware optimizations for efficient LLM inference, including efficient memory architectures (Gao et al., 2022), dataflow optimizations (Han et al., 2016a; Shao et al., 2022), hardware evaluation frameworks Zhang2023AHE, and flash optimizations (Ham et al., 2016), (Meswani et al., 2015). We focus on algorithmic improvements, but these could provide additional speedups. Speculative Execution. Speculative decoding (Leviathan et al., 2022; Zhang et al., 2023; He et al., 2023) is a technique that uses a draft model for generation and uses the larger model to verify those tokens. This technique is orthogonal to us and can be used for further improvement. In the case of speculative decoding, the window in our method is updated with multiple tokens rather than one. Mixture of Experts. Mixture of Experts (Yi et al., 2023) have a sparse structure in their feed-forward layer and can leverage our method for enabling larger models on the device. In summary, we propose algorithmic techniques to minimize weight loading from flash memory during LLM inference. By combining cost modeling, sparsity prediction, and hardware awareness, we demonstrate 4-5x and 20-25x speedup on CPU and GPU, respectively. Table 6: Active neuron percentage in different layers of OPT 6.7B vs Quantized model over 100 sequences. Layer OPT 6.7B Quantized 1 1.56% 1.42% 16 2.66% 2.44% 32 5.36% 5.45% average 3.30% 3.27% F Small Device Implications We note that many of the hardware assumptions (e.g., limited DRAM capacity, characteristics of Flash such as bandwidth limitations and increased throughput with larger chunks) are also applicable to smaller devices like smartphones. For example, when running a 7B model on a smartphone, the same technique can be employed; instead of loading 16-bit values, 4-bit values would be loaded. Consequently, rather than the baseline’s 3.5GB of required DRAM, our model would necessitate less than 2GB of memory for operation.x We believe that since quantization does not alter activation sparsity patterns, the same technique can be utilized here by loading 4-bit instead of 16-bit values (Table 6). However, actual implementation would require special 4-bit compute kernels on device, which falls outside the scope of this paper. G Qualitative Evaluations Here we present some of the examples generated from the model with a predictor. The purpose is to qualitatively demonstrate that the models’ output is comparatively reasonable when the predictors are applied. Table 7 and 8 show the results where the prompt was "once upon a time there was a" and the models are asked to complete it. Table 9 shows the results after prompting the model to complete a random sample taken from C4 dataset. 12581 Table 7: Comparison of the output of original model and model with predictors. Prompt: Once upon a time there was a OPT 6.7B OPT 6.7B with predictors little girl who was very sad. 
She was sad because her daddy was gone. He was in the army and he was in Iraq. She was sad because she missed him so much. She was sad because she was so lonely. She was sad because she was so scared. She was sad because she was so sad. She was sad because she was sad. She was sad because she was sad. She was sad because she was sad. man who was very rich. He had a beautiful wife and two beautiful children. He was a very good man and he loved his family very much. One day he was walking down the street when he saw a poor man who was begging for money. The man was very poor and he had no money. The man asked the man for some money. The man gave the man some money. The man was very happy and he thanked the man. Persimmon 8B Persimmon 8B with predictors man named Bilbo Baggins. Bilbo Baggins was a small hobbit, and he lived in a hole in the ground. Bilbo Baggins had many adventures, and he met many friends. Bilbo Baggins was brave, and he was not afraid of anything. Bilbo Baggins was happy, and he always smiled. Bilbo Baggins was a kind person, and he always helped others. man named Bilbo Baggins. He lived in a hole in the ground, with many other hobbits. Bilbo Baggins was a very brave hobbit. He liked to eat LOTS of cheese. One day, Bilbo Baggins decided to leave his hobbit hole. He decided to travel the world. Bilbo Baggins was very hungry. He ate many cheeses. Bilbo Baggins was very tired. He slept in a cave. Bilbo Baggins was very cold. He 12582 Table 8: Comparison of the output of original model and model with predictors. Prompt: Once upon a time there was a Phi-2 Relufied Phi-2 with predictors young girl named Lily who loved to read. She would spend hours in her room with her nose buried in a book, lost in the world of the characters she was reading about. Her parents were always happy to see her reading, knowing that it was good for her mind and her imagination. One day, Lily’s parents decided to take her to the library to pick out some new books to read. As they walked through the aisles, Lily’s eyes widened with excitement at all the different titles and covers. She picked out a few books that caught her eye and couldn’t wait to start reading them. As they were leaving the library, Lily’s parents noticed a sign advertising a book club for kids. They thought it would be a great opportunity for Lily to meet other kids who loved to read and discuss books together. Lily was hesitant at first, but her parents convinced her to give it a try. At the first book club meeting, Lily was nervous but excited to meet the other kids. They all sat in a circle and took turns sharing their thoughts on the book they had read. Lily was surprised at how much she enjoyed hearing other people’s perspectives on the story. She realized that reading wasn’t just about escaping into a world of her young girl named Lily who loved to read. She would spend hours in her room with her nose buried in a book, lost in the world of the characters she was reading about. Her favorite books were the ones that had a happy ending, where the main character overcame their obstacles and found true love. One day, Lily’s parents surprised her with a trip to the library. She was so excited to see all the new books she could check out. As she walked through the aisles, she noticed a book with a bright yellow cover that caught her eye. It was called "The Adventures of Lily and the Magic Book." Lily eagerly checked out the book and took it home to read. 
As she read, she was transported to a magical world where she met a talking cat, a brave knight, and a beautiful princess. The book was filled with adventure, magic, and romance. As Lily read on, she noticed that the book was getting darker and darker. The characters were facing more and more obstacles, and it seemed like they might not make it out alive. Lily began to worry that the book was going to end badly. Sparse Llama 2 Sparse Llama 2 with predictors little girl who loved to play with her dolls. She loved to dress them up and make them do all sorts of things. One day, she decided to make a dollhouse for her dolls. Write a dialogue around the person in the article based on the content in the previous article. Melanie: Hey, Mom! I was thinking about making a dollhouse for my dolls. Mother: That sounds like a great idea, Melanie! What little girl named Lily. She loved to play with her dolls and stuffed animals. One day, she decided to make a new doll for herself. Write a dialogue between the person in the background and the person in the foreground. Person in the background: Lily, what are you doing? Person in the foreground: I’m making a doll for myself. Person in the background: Why do you need a doll? Person in the foreground 12583 Table 9: Comparison of the output of original model and model with predictors. Prompt: There is a lot of discussion right now about stress reduction. Today we see an increased work load of mental health professionals, increased occurrences of abuse or violence faced by social workers and other therapists, and the terrible struggles facing our returning veterans. All of these things have increased our awareness of the stressful elements of our society and the impact those elements have on both our clients and the professionals whose job it is to help. As a social worker I am Persimmon 8B (nucleus sampling) Persimmon 8B with predictors (nucleus sampling) constantly reminded of the importance of stress reduction. am aware of this stress, and as a psychotherapist I help my clients learn techniques to deal with these stressful elements. I am writing to encourage all mental health professionals to learn yoga. I strongly believe that yoga should be part of your clinical routine, and that yoga classes should be taken by clinicians for credit, just like psychotherapy, case management and substance use classes constantly reflecting on my own stress levels and how I can remain more fit to handle the pressures of the profession. In social work a social worker calls themselves "a problem solver." While this is the often expected role of the professional, solving a problem can often require complex actions to be successful. Phi-2 Relufied Phi-2 with predictors constantly looking for ways to reduce stress in my life and in the lives of my clients. One of the most effective ways to reduce stress is to take time to relax. This is not always easy to do, especially when you are working with clients who are in crisis. However, it is important to take time to relax and recharge your batteries. One way to do this is to take a break from work and go for a walk. Walking is a great way to clear your mind and get some often asked how I deal with stress. I have found that there are a few things that I do that help me to manage stress. 1. I make sure that I get enough sleep. I try to get at least 8 hours of sleep a night. I also try to get up early and get out of bed by 7:00 a.m. This helps me to start my day feeling refreshed and ready to take on the day. 2. 
I make sure that I eat a healthy diet 12584
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12585–12602 August 11-16, 2024 ©2024 Association for Computational Linguistics

Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models

Muhammad Maaz1*, Hanoona Rasheed1*, Salman Khan1,2, Fahad Shahbaz Khan1,3
1Mohamed bin Zayed University of AI, UAE 2Australian National University, Australia 3Linköping University, Sweden
*Equal contribution.

Abstract

Conversation agents fueled by Large Language Models (LLMs) are providing a new way to interact with visual data. While there have been initial attempts for image-based conversation models, this work addresses the under-explored field of video-based conversation by introducing Video-ChatGPT. It is a multimodal model that merges a video-adapted visual encoder with an LLM. The resulting model is capable of understanding and generating detailed conversations about videos. We introduce a new dataset of 100,000 video-instruction pairs used to train Video-ChatGPT, acquired via a manual and semi-automated pipeline that is easily scalable and robust to label noise. We also develop a quantitative evaluation framework for video-based dialogue models to objectively analyze their strengths and weaknesses. Code: https://github.com/mbzuai-oryx/Video-ChatGPT.

1 Introduction

The surge of deep learning applications for video understanding has led to major advancements in video-related tasks. However, current video understanding models are still unable to hold an open-ended conversation about video content in a coherent manner. A video-based dialogue model can revolutionize video search and surveillance operations, help summarize key events, and support abnormal event detection. Above all, it can provide a unified, human-understandable interface to video-related tasks such as action recognition, localization, detection, segmentation, retrieval, and tracking. Further, such a capability is of great interest as it will demonstrate the model's ability to encode temporal and spatial cues, contextual relationships, and long-term dependencies.

Recent advancements in multimodal understanding are largely based on the combination of pretrained image models with Large Language Models (LLMs), but they generally do not consider video inputs (Liu et al., 2023; Zhu et al., 2023; Li et al., 2022, 2023a; Dai et al., 2023). It is therefore interesting to leverage the vast capabilities of LLMs for video understanding tasks in a way that would not only maintain the temporal and spatial characteristics but also be adept at generating human-like conversations about videos. In this paper, we introduce Video-ChatGPT, a novel multimodal model that merges the representational abilities of a pretrained visual encoder and the generative powers of an LLM, capable of understanding and conversing about videos. Video-ChatGPT leverages an adapted LLM (Liu et al., 2023) that integrates the visual encoder of CLIP (Radford et al., 2021) with Vicuna (Chiang et al., 2023) as a language decoder, fine-tuned on generated instructional image-text pairs. Our approach further adapts the design for spatiotemporal video modeling and fine-tunes the model on video-instruction data to capture temporal dynamics and frame-to-frame consistency relationships available in video data.
In contrast to other concurrent works for video-based conversation (Li et al., 2023b; Zhang et al., 2023; Su et al., 2023), VideoChatGPT excels at temporal understanding, spatial consistency and contextual comprehension as demonstrated by our extensive evaluations. A fundamental contribution of this work is the creation of a dataset of 100,000 video-instruction pairs using a combination of human-assisted and semi-automatic annotation methods. Each pair consists of a video and its associated instruction in the form of a question-answer. This provides VideoChatGPT with a large and diverse dataset to learn from, increasing its video-specific understanding, attention to temporal relationships and conversation capabilities. Moreover, we introduce the first quantitative video conversation evaluation framework for bench12585 marking, allowing for a more accurate evaluation of the performance of video conversation models. This framework evaluates models on a variety of capabilities, such as correctness of information, detail orientation, contextual understanding, temporal understanding, and consistency. The contributions of this work are as follows, • We propose Video-ChatGPT, a video conversation model capable of generating meaningful conversations about videos. It combines the capabilities of LLMs with a pretrained visual encoder adapted for spatiotemporal video representations. • We introduce 100,000 high-quality video instruction pairs together with a novel annotation framework that is scalable and generates a diverse range of video-specific instruction sets. • We develop the first quantitative video conversation evaluation framework for benchmarking video conversation models. We demonstrate Video-ChatGPT to perform well compared to concurrent conversational engines for videos such as Video Chat (Li et al., 2023b). 2 Related Works Vision Language Models: Significant advancements in the field of computer vision have recently been observed due to the development of many foundational vision-language models. These models represent a significant leap towards creating general-purpose vision models capable of tackling various tasks simultaneously (Radford et al., 2021; et al, 2022; Gupta et al., 2022; Maaz et al., 2022). A prime example is CLIP (Radford et al., 2021), which is trained on 400M image-text pairs and has demonstrated impressive zero-shot performance on numerous benchmarks. It has been employed in various downstream applications, from imagebased object detection and segmentation (Rasheed et al., 2022; Liang et al., 2023) to 3D applications (Rozenberszki et al., 2022; Ni et al., 2022). Numerous attempts have also been made to adapt CLIP for video applications (Wang et al., 2021; Ni et al., 2022). Similar to our design, ViFiCLIP (Rasheed et al., 2023) suggests employing temporal pooling across video frames to adapt the image-based CLIP model for video-based tasks. Large Language Models: The field of natural language processing has witnessed a paradigm shift with the advent of pretrained Large Language Models (LLMs) such as GPT (Brown et al., 2020), LLaMA (Touvron et al., 2023), OPT (Zhang et al., 2022), and MOSS (OpenLMLab, 2023). These models exhibit extraordinary abilities like language generation and in-context learning, and their knack for understanding intricate tasks given user prompts in a zero-shot manner reflects their impressive adaptability and generalization. The proven capabilities of LLMs have encouraged researchers to fine-tune them to maximize their proficiency. 
A key strategy in this pursuit is instruction tuning. This approach focuses on improving the model’s alignment with user intentions and optimizing its output quality. For instance, InstructGPT (Ouyang et al., 2022) and ChatGPT (OpenAI, 2023) significantly benefit from this technique, showcasing improvements in diverse conversational interaction capabilities and their aptitude to answer a broad range of complex questions. This effective approach has recently been employed in open-source models like Alpaca (Taori et al., 2023) and Vicuna (Chiang et al., 2023), both developed using the LLaMA (Touvron et al., 2023) framework, resulting in performance improvements. Pre-trained LLMs in Vision-Language Tasks: The recent strides in multimodal understanding have primarily been driven by the integration of image-based vision models with LLMs. Seminal contributions such as Flamingo (et al, 2022) and BLIP-2 (Li et al., 2023a) have demonstrated the power of utilizing web-scale image-text data, as well as pioneering techniques in cross-modal alignment, to exhibit dynamic abilities in conversational and few-shot learning contexts. Building on this foundation, MiniGPT-4 (Zhu et al., 2023) allows image-based conversations by integrating BLIP-2 and Vicuna for zero-shot image comprehension. Equally significant is the emergence of LLaVA (Liu et al., 2023), a model derived from the LLaMa architecture, leveraging GPT-4’s language proficiency to generate multimodal instructionfollowing data. With instruction tuning applied on the derived data, LLaVA has displayed interesting multimodal chat capability, hinting at the scalability potential of such a methodology. In addition, InstructBLIP (Dai et al., 2023) has demonstrated strong image-based dialogue capabilities via vision-language instruction tuning by innovating with instruction-aware visual feature extraction. More closely related to our work, VideoChat (Li et al., 2023b) employs selective components of 12586 video foundational models (Wang et al., 2022) and image foundation models (Li et al., 2023a), and integrates them with LLMs (Chiang et al., 2023) in conjunction with few learnable layers, tuned using a two-stage lightweight training. Additionally, they construct a video-specific dataset using off-theshelf vision-language models (Wu et al., 2022; Li et al., 2023a; Huang et al., 2023; Wang et al., 2022) for generating noisy detailed textual descriptions to enhance the training of video-centric conversational models. Different from VideoChat, we propose a novel human assisted and semi-automatic annotation framework for generating high quality instruction data for videos. Our simple and scalable architecture design utilizes pretrained CLIP (Radford et al., 2021) to generate spatiotemporal features which help Video-ChatGPT in generating meaningful video conversation. Further, we are the first to propose quantitative framework for evaluating video conversation tasks (see Section "Video Instruction Data Generation" for more details). 3 Video-ChatGPT Video-ChatGPT is a large vision-language model that aligns video representations with a Large Language Model (LLM), thus enhancing its ability to generate meaningful conversation about videos. Our approach draws from the approach employed in designing vision-language (VL) models for the video domain. 
Given the limited availability of video-caption pairs and the substantial resources required for training on such data from scratch, these models commonly adapt pretrained image-based VL models for video tasks (Ni et al., 2022; Wang et al., 2021; Rasheed et al., 2023). We adopt a similar approach, starting with the Language-aligned Large Vision Assistant (LLaVA)(Liu et al., 2023) as our foundation. LLaVA is a LMM that integrates the visual encoder of CLIP (Radford et al., 2021) with the Vicuna language decoder (Chiang et al., 2023) and is fine-tuned end-to-end on generated instructional vision-language data. We fine-tune this model using our video-instruction data, adapting it for video conversation task. The video-instruction data is obtained as a combination of manual and automated pipelines in our proposed instruction generation setup. This adaptation on video-specific instructions allows for accommodating additional temporal dynamics, frame-to-frame consistency, and long-range relationships present in video data. As a result, our Video-ChatGPT excels in video reasoning, creativity, and understanding of spatial, temporal, and action-oriented components within videos. 3.1 Architecture We use CLIP ViT-L/14, which is pretrained using large-scale visual instruction tuning in LLaVa, as the visual encoder. However, LLaVa visual encoder is meant for images, which we modify to capture spatiotemporal representations in videos. Given a video sample Vi ∈RT×H×W×C with T frames, the visual encoder generates temporal and spatial features. The visual encoder encodes the T frames independently as a batch of images and produces frame-level embeddings xi ∈RT×h×w×D, where h = H/p, w = W/p. Here p is the patch size (i.e. 14 for ViT-L/14), and we represent the number of tokens as N, where N = h × w. Frame-level embeddings are average-pooled along the spatial dimension to obtain a video-level temporal representation ti ∈RT×D. This operation implicitly incorporates temporal learning through the aggregation of multiple frames. Similarly, the frame-level embeddings are average-pooled along the temporal dimension to yield the video-level spatial representation zi ∈RN×D. The temporal and spatial features are concatenated to obtain the video-level features vi, vi = [ti zi] ∈R(T+N)×D. (1) A simple trainable linear layer g, projects these video-level features into the language decoder’s embedding space, transforming them into corresponding language embedding tokens Qv, Qv = g(vi) ∈R(T+N)×K. (2) Note that the function g acts as an adapter and can be implemented with more complicated architectures as well. However, we opt for a simplistic design that gives competitive performance compared to more sophisticated choices in our experiments. The text queries are tokenized to the same dimensions, Qt ∈RL×K. Here L represents the length of text query. Finally, Qv is concatenated with Qt and input to the language decoder. 3.2 Video Instruction Tuning We employ instruction-tuning of the LLM on the prediction tokens, utilizing its original autoregressive training objective. The pretrained model 12587 Figure 1: Architecture of Video-ChatGPT. Video-ChatGPT leverages the CLIP-L/14 visual encoder to extract both spatial and temporal video features. This is accomplished by averaging frame-level features across temporal and spatial dimensions respectively. The computed spatiotemporal features are then fed into a learnable linear layer, which projects them into the LLMs input space. 
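As an aside, the pooling and projection steps of Section 3.1 can be summarized in a short PyTorch-style sketch. This is an illustration rather than the released implementation: the hidden sizes, the number of frames, and the module name proj are assumptions chosen only to match the notation of Eqs. (1)-(2).

```python
import torch
import torch.nn as nn

class SpatioTemporalPooling(nn.Module):
    """Sketch of Video-ChatGPT's feature aggregation and projection.

    Expects frame-level CLIP features of shape (T, N, D), where T is the
    number of frames, N = h * w patch tokens, and D the CLIP hidden size.
    The output has shape (T + N, K), matching Eqs. (1)-(2).
    """

    def __init__(self, clip_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        # Trainable linear adapter g mapping video features into the
        # language decoder's embedding space.
        self.proj = nn.Linear(clip_dim, llm_dim)

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        temporal = frame_feats.mean(dim=1)          # t_i: (T, D), pooled over patches
        spatial = frame_feats.mean(dim=0)           # z_i: (N, D), pooled over frames
        video_feats = torch.cat([temporal, spatial], dim=0)  # v_i: (T + N, D)
        return self.proj(video_feats)               # Q_v: (T + N, K)

# Toy usage: 100 frames, 16 x 16 = 256 patch tokens, CLIP width 1024.
pooler = SpatioTemporalPooling(clip_dim=1024, llm_dim=4096)
print(pooler(torch.randn(100, 256, 1024)).shape)    # torch.Size([356, 4096])
```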
In our approach, we utilize the Vicuna-v1.1 model, comprised of 7B parameters, and initialize it with weights from LLaVA (Liu et al., 2023). is finetuned with curated, high-quality video-text pairs. During the finetuning phase, we use predefined prompts based on the following template: USER: <Instruction> <Vid-tokens> Assistant: Using the notations, we can represent it as, USER: <Qt> <Qv> Assistant: In this prompt, the <Instruction> represents a question pertaining to the video, randomly sampled from the training set of video-question-answer pairs. Questions can be general, asking to describe the video, or they may relate to specific temporal, spatial, or creative aspects of the video content. The prediction answer <Answer> corresponds to the specific question asked. Throughout the training, the weights for both the video encoder and LLM remain frozen, and the model maximizes the likelihood of predicting tokens representing the answer by adapting the linear layer. Consequently, the video features Qv become aligned with the pretrained LLM word embeddings, equipping VideoChatGPT with the ability to produce more natural and dependable responses. 4 Video Instruction Data Generation In this section, we discuss our data-focused approach, which uses both human-assisted and semiautomatic annotation methods to generate highquality video instruction data. This data is crucial for training Video-ChatGPT, ensuring accurate and meaningful responses. Our data collection involves two key methods. The human-assisted annotation, involves expert annotators analysing video content and providing detailed descriptions. This process generates data rich in context and detail, which helps our model understand complex aspects of video content. On the other hand, the semi-automatic annotation framework is more cost-effective and scalable. Leveraging state-of-theart vision-language models, this method generates broad, high-volume annotations, thus increasing the quantity of data without compromising the quality substantially. Through these combined methods, we have successfully accumulated a robust set of 100,000 video-instruction pairs. This extensive dataset is crucial in fine-tuning our model to comprehend video content effectively, integrating both spatial and temporal cues into its understanding. Our instructional data is both diverse and comprehensive, incorporating a wide range of data types. These include detailed descriptions, summarizations, question-answer pairs, tasks that stimulate creativity or generation of new ideas, and conversational tasks. The data spans a broad spectrum of concepts, ranging from visual appearance and temporal relations to complex reasoning tasks and beyond, providing a diverse training ground for our model to learn from. 12588 Figure 2: Examples of data enrichment via human-assisted annotation. Human annotators augment video descriptions from video-caption datasets. The captions are enriched by integrating detailed information about spatial and temporal aspects, object relationships, reasoning, scene descriptions, and the chronological sequence of events. 4.1 Human-assisted Annotation In this process, we leverage datasets containing video-caption pairs and utilize the expertise of human annotators to enrich the original ground truth annotations. Specifically, we use a subset of ActivityNet-200 (Fabian Caba Heilbron and Niebles, 2015) which provides concise ground truth descriptions of various activities in distinct video segments. 
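Before continuing with the annotation process, a minimal sketch of the instruction-tuning objective from Section 3.2 may be useful: the question and video tokens are masked out of the loss so that only the answer tokens are scored. The token ids below are made up, and the masking convention (ignore index -100) is an assumption based on common practice rather than the authors' released code.

```python
from typing import List, Tuple

IGNORE_INDEX = -100  # conventional ignore index for cross-entropy losses

def build_example(question_ids: List[int],
                  video_token_ids: List[int],
                  answer_ids: List[int]) -> Tuple[List[int], List[int]]:
    """Assemble one training example for 'USER: <Q_t> <Q_v> Assistant: <answer>'.

    The input is the concatenation of text-query tokens, projected video
    tokens, and answer tokens; the labels mask everything except the answer,
    so only answer tokens contribute to the autoregressive loss while the
    video encoder and LLM stay frozen.
    """
    input_ids = question_ids + video_token_ids + answer_ids
    labels = [IGNORE_INDEX] * (len(question_ids) + len(video_token_ids)) + answer_ids
    return input_ids, labels

inp, lab = build_example([11, 12, 13], [900] * 5, [21, 22])
print(inp)  # [11, 12, 13, 900, 900, 900, 900, 900, 21, 22]
print(lab)  # [-100, -100, -100, -100, -100, -100, -100, -100, 21, 22]
```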
The annotators further enrich the captions by adding comprehensive information about physical appearances and spatial and temporal localization, among other critical contextual details. Figure 2 shows an example of how a ground truth caption is enriched using human-assisted annotation. 4.2 Semi-automatic Annotation Framework In addition to the rich human-assisted annotations, we also harness the capabilities of advanced dense image vision-language models, developing a semiautomatic annotation framework. This approach is cost-effective and scalable, thereby increasing the quantity of data without substantially compromising the quality. Similar to the human-assisted process, this framework also leverages datasets containing video-caption pairs. We enrich these datasets using contextual information drawn from off-theshelf dense prediction and captioning image-based vision-language models. These models provide predictions that deliver additional contextual information, thereby enriching the video captions. We developed a comprehensive method that combines these predictions, and utilize specific models for the purpose of eliminating noisy or irrelevant context from the data. This ensures that the data maintains its accuracy and relevance. Building on the use of off-the-shelf models, we apply pretrained models like BLIP-2 (Li et al., 2023a) and GRiT (Wu et al., 2022) for keyframe analysis in the videos. The BLIP-2 imagecaptioning model generates frame-level captions, while the GRiT dense captioning model provides detailed captions for scene objects. Additionally, 12589 Figure 3: Examples of generating instructional data using our proposed semi-automatic annotation pipeline. We employ off-the-shelf dense prediction and captioning models to augment video descriptions. BLIP-v2 (Li et al., 2023a) generates frame-level captions, while GRIT (Wu et al., 2022) is utilized for dense frame captions. Tag2Text (Huang et al., 2023) generates tags for each key-frame, aiding in eliminating noise (e.g. the GRiT descriptions containing flower pattern and on phone would be discarded as there are no corresponding tags detected). Finally, we query GPT-3.5 with in-context examples to generate video-instructional data. the pretrained Tag2Text (Huang et al., 2023) model is used to generate tags for each key-frame of the video. Despite their utility, these models can introduce noise into the data. To ensure high-quality data and mitigate noise, we implement three key steps. First, we maintain a high prediction threshold for all off-the-shelf models to uphold accuracy. Second, we employ a specialized filtering mechanism that removes any frame-level caption from BLIP-2 or GRiT not matching with the Tag2Text frame-level tags. This process involves extracting words from the frame-level captions that are within the predefined Tag2Text tags vocabulary and eliminates any captions that contain words not in the tags for a given frame. This strategy acts as an additional filtering layer and enriches the captions by integrating predictions from multiple models. In the third step, we merge frame-level captions and use the GPT-3.5 model to generate a singular, coherent video-level caption. This step augments the original ground truth caption with context from these models. We also direct GPT-3.5 to discard inconsistent information across frames, ensuring a precise, contextually rich video instruction dataset. 
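A minimal sketch of the second filtering step is given below: a frame-level caption from BLIP-2 or GRiT is kept only if its content words appear among the Tag2Text tags predicted for that frame. The tokenization and stop-word list are simplifying assumptions, not the exact rules used in the pipeline.

```python
import re

STOP_WORDS = {"a", "an", "the", "is", "are", "on", "in", "of", "and", "with"}

def keep_caption(caption: str, frame_tags: set) -> bool:
    """Return True if every content word of the caption is covered by the
    Tag2Text tags for the same key-frame; otherwise the caption is discarded."""
    words = re.findall(r"[a-z]+", caption.lower())
    content_words = [w for w in words if w not in STOP_WORDS]
    return all(w in frame_tags for w in content_words)

tags = {"woman", "horse", "stable", "brush", "grooming"}
captions = [
    "a woman grooming a horse in the stable",  # consistent with the tags -> kept
    "a flower pattern on the phone",           # no supporting tags -> discarded
]
for c in captions:
    print(keep_caption(c, tags), "-", c)
```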
Figure 3,4 illustrates how a ground truth caption is enriched using this process after all three refinement stages to generate instructional data and detailed descriptive caption. All of our designed prompts for in-context learning along with the curated dataset will be made publicly available. 4.3 GPT-Assisted Postprocessing Lastly, we implement a GPT-Assisted Postprocessing mechanism that refines and optimizes the enriched annotations, in order to generate highquality video instructional data. We prompt GPT3.5 model to create question-answer pairs from the enriched and detailed captions that cover a wide variety of aspects using in-context learning. These aspects include detailed descriptions, summarizations, question-answer pairs, tasks that stimulate creativity or the generation of new ideas, and conversational tasks. Each of these elements plays a crucial role in our data-centric approach. Our ultimate goal is to create a video-based conversation model that is accurate, capable of understanding video content from both spatial and temporal cues, and adept at engaging in conversations. 12590 Figure 4: Examples of data enrichment using our proposed semi-automatic annotation. We employ off-theshelf dense prediction and captioning models (Li et al., 2023a; Wu et al., 2022; Huang et al., 2023) to augment video descriptions. All additional context elements are combined with the video captions and undergo a GPT-assisted post-processing stage, generating the final detailed description. Evaluation Aspect Video Chat LLaMA Adapter Video-LLaMA Video-ChatGPT Correctness of Information 2.23 2.03 1.96 2.40 Detail Orientation 2.50 2.32 2.18 2.52 Contextual Understanding 2.53 2.30 2.16 2.62 Temporal Understanding 1.94 1.98 1.82 1.98 Consistency 2.24 2.15 1.79 2.37 Table 1: Performance benchmarking of text generation models. An in-depth comparative analysis of VideoChatGPT and Video Chat (Li et al., 2023b) across five key evaluation aspects we propose in our benchmark. For a fair comparison, 7B variants are used for all the models. Video-ChatGPT shows competent performance across all key aspects. 5 Experiments 5.1 Implementation Details We use LLaVA (Liu et al., 2023) as our baseline model and finetune it on our 100K video instruction pairs. We only update the linear layer projecting the video features to the LLMs’ input space, while the rest of the architecture is kept frozen. We finetune the model for 3 epochs using a learning rate of 2e−5 and an overall batch size of 32. We use 7B parameter model in all the experiments and its training took around 3 hours on 8 A100 40GB GPUs. During inference, for memory efficiency, we load the models in FP16 mode. In our semi-automatic annotation framework, we use Katna (KeplerLab, 2019) to extract video keyframes. For off-the-shelf Tag2Text (Huang et al., 2023) model, we use the Swin-B variant with an input size of 384×384 and a confidence threshold of 0.7. For GRIT (Wu et al., 2022), we use ViT-B version with CenterNet2 (Zhou et al., 2021). 5.2 Quantitative evaluation In this section, we highlight a key contribution of our work: the quantitative evaluation of VideoChatGPT using advanced metrics and comparative evaluations with existing state-of-the-art models. We conduct two types of quantitative evaluations: i) Video-based Generative Performance Benchmarking and ii) Zero-Shot Question-Answer Evaluation. Video-based Text Generation Performance Benchmarking: We introduce a benchmark to evaluate the text generation performance of video-based conversation models. 
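Before detailing the benchmark, a brief sketch of the parameter-freezing setup from the implementation details above: only the projection layer is left trainable while the visual encoder and LLM are frozen. The module names here are toy stand-ins; only the choice of trainable layer and the learning rate (2e-5) follow the paper.

```python
import torch
import torch.nn as nn

class ToyVideoChat(nn.Module):
    """Toy stand-in: a frozen encoder and LLM plus a trainable linear adapter."""
    def __init__(self):
        super().__init__()
        self.visual_encoder = nn.Linear(1024, 1024)  # placeholder for CLIP ViT-L/14
        self.llm = nn.Linear(4096, 4096)             # placeholder for Vicuna
        self.proj = nn.Linear(1024, 4096)            # the trainable adapter g

def freeze_all_but_projection(model: nn.Module, proj_name: str = "proj") -> None:
    # Keep gradients only for parameters of the projection layer.
    for name, param in model.named_parameters():
        param.requires_grad = name.startswith(proj_name)

model = ToyVideoChat()
freeze_all_but_projection(model)
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=2e-5)
print(sum(p.numel() for p in model.parameters() if p.requires_grad))  # adapter only
```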
To do this, we curate a test set based on the ActivityNet-200 dataset (Fabian Caba Heilbron and Niebles, 2015), featuring videos with rich, dense descriptive captions and associated question-answer pairs from human annotations. We also develop an evaluation pipeline using the GPT-3.5 model. This pipeline assesses various capabilities of the model and assigns a relative score to the generated predictions 12591 Model MSVD-QA MSRVTT-QA TGIF-QA Activity Net-QA Accuracy Score Accuracy Score Accuracy Score Accuracy Score FrozenBiLM 32.2 – 16.8 – 41.0 – 24.7 – Video Chat 56.3 2.8 45.0 2.5 34.4 2.3 26.5 2.2 LLaMA Adapter 54.9 3.1 43.8 2.7 34.2 2.7 Video LLaMA 51.6 2.5 29.6 1.8 12.4 1.1 Video-ChatGPT 64.9 3.3 49.3 2.8 51.4 3.0 35.2 2.8 Table 2: Zeroshot question-answering comparison of Video-ChatGPT with other video generative models. For a fair comparison, 7B variants are used for all the models. Video-ChatGPT performs competitively across all datasets. on a scale of 1-5, in the following five aspects: (i) Correctness of Information: We verify the accuracy of the generated text, ensuring it aligns with the video content and does not misinterpret or misinform. (ii) Detail Orientation: We evaluate the depth of the model’s responses, looking for both completeness, meaning the model’s response covers all major points from the video, and specificity, denoting the inclusion of specific details rather than just generic points in the model’s response. (iii) Contextual Understanding: We assess the model’s understanding of the video’s context, checking if its responses align with the overall context of the video content. (iv) Temporal Understanding: We examine the model’s grasp of the temporal sequence of events in the video when answering questions. (v) Consistency: We evaluate the model’s consistency across different but similar questions or different sections of the video. We present the evaluation results of our proposed model, Video-ChatGPT, using the quantitative benchmarking framework in Table 1. The results reveal its competent performance across all key aspects compared with the recently introduced contemporary video conversation models, Video Chat (Li et al., 2023b), LLaMA Adapter (Gao et al., 2023) and Video-LLaMA (Zhang et al., 2023). Video-ChatGPT shows good performance, largely due to the instruction tuning we perform and its straightforward architecture that leverages LLMs with a pretrained visual encoder fine-tuned for video data. This provides it with the robust ability to generate contextually relevant, detailed, and temporally accurate text from video input. Zero-Shot Question-Answer Evaluation: We conducted a comprehensive quantitative evaluation using several commonly used open-ended question-answer datasets: MSRVTT-QA (Xu et al., 2017), MSVD-QA (Xu et al., 2017), TGIF-QA FrameQA (Jang et al., 2017), and ActivityNetQA (Yu et al., 2019). These evaluations were carried out in a zero-shot manner, employing GPTassisted evaluation to assess the model’s capabilities. This evaluation process measures the accuracy of the model’s generated predictions and assigns a relative score on a scale of 1-5. To benchmark Video-ChatGPT, we compared its performance with other significant models, such as FrozenBiLM (Yang et al., 2022) and the generative video model, Video Chat, LLaMA Adapter and Video-LLaMA. FrozenBiLM is a model that adapts frozen bidirectional language models pretrained on Web-scale text-only data to multi-modal inputs, showing promising results in zero-shot VideoQA settings. 
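For concreteness, a sketch of what the GPT-assisted scoring described above might look like is given below. The prompt wording is an illustrative assumption, and the judge model is abstracted behind a callable so that GPT-3.5 or an open-source LLM can be plugged in.

```python
EVAL_ASPECTS = [
    "Correctness of Information",
    "Detail Orientation",
    "Contextual Understanding",
    "Temporal Understanding",
    "Consistency",
]

def build_judge_prompt(aspect: str, question: str, reference: str, prediction: str) -> str:
    """Ask the judge LLM for a 1-5 rating of the prediction on one aspect."""
    return (
        f"You are evaluating a video-based conversation model on '{aspect}'.\n"
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Model answer: {prediction}\n"
        f"Rate the model answer from 1 to 5 and reply with the number only."
    )

def score_prediction(llm_call, question: str, reference: str, prediction: str) -> dict:
    """`llm_call` is any callable mapping a prompt string to a model reply."""
    scores = {}
    for aspect in EVAL_ASPECTS:
        reply = llm_call(build_judge_prompt(aspect, question, reference, prediction))
        scores[aspect] = int(reply.strip()[0])  # naive parse of the leading digit
    return scores

# Toy usage with a dummy judge that always answers "4".
print(score_prediction(lambda prompt: "4", "What is the person doing?",
                       "Grooming a horse.", "Brushing a horse in a stable."))
```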
Despite the solid foundation established by these models, Video-ChatGPT consistently outperformed them, achieving state-of-the-art (SOTA) performance across all datasets. These results indicate Video-ChatGPT’s ability to understand video content and generate accurate, contextually rich answers to questions. 5.3 Ablations Impact of Semi-Automatic Annotations: We train Video-ChatGPT on two subsets: one with human annotations (30% of our data) and one with semi-automatic annotations (70%). The results in Table. 3 indicate that training solely with humanannotated data or semi-automatically generated data yields good performance. The overall performance when using only human-generated data is the lowest due to the limited number of labels (30% of all data) available in this scenario. However, the optimal results are achieved when utilizing a combined dataset for training. 12592 Metric Human only Automatic only Combined Correctness 2.27 2.35 2.40 Detail Orientation 2.49 2.49 2.52 Contextual Understanding 2.50 2.56 2.62 Temporal Understanding 1.85 1.92 1.98 Consistency 2.21 2.38 2.37 Average 2.28 2.34 2.38 Table 3: Human Annotated vs Semi-automatically Annotated Data: Training using both human annotated and semi-automatically annotated data achieves best performance. Quantitative Evaluation with GPT-3.5: Considering the limitations posed by the use of GPT3.5, which is accessed via API and is not opensource, we perform evaluations using the opensource LLM, Vicuna-1.5 (13B) (Chiang et al., 2023). The results in Table. 4 show similar trend in correctness, detail, contextual and temporal understanding and consistency compared with the initial GPT-3.5 evaluation. This ensures our evaluation method remains accessible and replicable. Metric Video Chat Video-LLaMA Video-ChatGPT Correctness 2.32 2.10 2.49 Detail Orientation 2.50 2.18 2.52 Contextual Understanding 2.76 2.41 2.85 Temporal Understanding 2.27 2.17 2.38 Consistency 2.95 2.67 3.09 Table 4: Evaluation using Vicuna-1.5 (13B) Model: We observe similar trend when evaluating using open-source Vicuna1.5 (13B) model versus GPT-3.5-Turbo. Ensuring Automatic Annotation Pipeline Consistency: To ensure consistency between our automatic evaluation pipeline and human assessments, we conducted a blind test comparing QA pairs from human and semi-automatically annotated sources using 50 randomly sampled videos. A 52% accuracy rate in distinguishing between the two demonstrated the reliability of our semi-automatic data, confirming that our quality control effectively aligns automatic evaluations with human judgment standards. 6 Conclusion In this work, we presented Video-ChatGPT, a multimodal model that merges a pretrained visual encoder with a large language model (LLM) to enable video understanding and conversations based on videos. Video-ChatGPT leverages an adapter on top of pretrained LLM and vision backbones and is fine-tuned on video-instruction data to capture temporal dynamics and spatial consistency relationships in spatiotemporal sequences. A dataset of 100,000 video-instruction pairs is created to enhance Video-ChatGPT’s video-specific understanding and conversation capabilities. The work also introduced a quantitative video conversation evaluation framework for benchmarking, evaluating models on a diverse set of capabilities including conventional video question answering as well as open-ended descriptions. 
7 Limitations While the model performs competitively in several scenarios, we note it finds it challenging to understand subtle temporal relationships in long videos (> 2 min), which can compromise its predictive performance. Additionally, it has difficulty recognizing the details of small objects, often missing additional information embedded in these details. 8 Potential Risks Video-ChatGPT, like any other AI model, must be handled with due caution to prevent misuse and to ensure it upholds the principles of fairness, transparency, and respect for user privacy. We made a concerted effort to minimize bias during the dataset creation phase for Video-ChatGPT. Despite these efforts, it is important to recognize the possibility of residual bias persisting. The use of our model should be mindful of these potential biases, which may subtly influence the model’s understanding and response to visual content. We encourage all users to consider these limitations in their application of Video-ChatGPT and to strive for ethical and responsible use in all contexts. 9 Use of Data and AI Assistant We curate our dataset based on a subset of the ActivityNet-200 dataset (Fabian Caba Heilbron and Niebles, 2015), distributed under MIT LICENSE, available for use in research. Further, the use of GPT models abides by (OpenAI). Respecting source license information, we will release all datasets created in this work under MIT LICENSE. 10 Human Annotations The semi-automatic dataset curation involves human annotation. Annotators are provided with concise video caption ground truths. Specific instructions are given to enrich the caption with comprehensive descriptions of the video content, with specific attention to temporal and spatial details. They are given specific instructions to neutralize the tone and biases during the correction process. 12593 References Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An opensource chatbot impressing gpt-4 with 90%* chatgpt quality. Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. 2023. Instructblip: Towards general-purpose vision-language models with instruction tuning. Jean-Baptiste Alayrac et al. 2022. Flamingo: a visual language model for few-shot learning. Bernard Ghanem Fabian Caba Heilbron, Victor Escorcia and Juan Carlos Niebles. 2015. Activitynet: A large-scale video benchmark for human activity understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 961–970. Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, Hongsheng Li, and Yu Qiao. 2023. Llama-adapter v2: Parameter-efficient visual instruction model. arXiv preprint arXiv:2304.15010. Tanmay Gupta, Amita Kamath, Aniruddha Kembhavi, and Derek Hoiem. 2022. Towards general purpose vision systems: An end-to-end task-agnostic vision-language architecture. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Xinyu Huang, Youcai Zhang, Jinyu Ma, Weiwei Tian, Rui Feng, Yuejie Zhang, Yaqian Li, Yandong Guo, and Lei Zhang. 2023. 
Tag2text: Guiding visionlanguage model via image tagging. arXiv preprint arXiv:2303.05657. Yunseok Jang, Yale Song, Youngjae Yu, Youngjin Kim, and Gunhee Kim. 2017. Tgif-qa: Toward spatiotemporal reasoning in visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2758–2766. KeplerLab. 2019. Katna: Tool for automating video keyframe extraction, video compression, image autocrop and smart image resize tasks. https:// github.com/keplerlab/katna. Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. 2023a. Blip-2: Bootstrapping language-image pretraining with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597. Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. 2022. Blip: Bootstrapping language-image pretraining for unified vision-language understanding and generation. In International Conference on Machine Learning, pages 12888–12900. PMLR. Kunchang Li, Yinan He, Yi Wang, Yizhuo Li, Wenhai Wang, Ping Luo, Yali Wang, Limin Wang, and Yu Qiao. 2023b. Videochat: Chat-centric video understanding. arXiv preprint arXiv:2305.06355. Feng Liang, Bichen Wu, Xiaoliang Dai, Kunpeng Li, Yinan Zhao, Hang Zhang, Peizhao Zhang, Peter Vajda, and Diana Marculescu. 2023. Open-vocabulary semantic segmentation with mask-adapted clip. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2023. Visual instruction tuning. Muhammad Maaz, Hanoona Rasheed, Salman Khan, Fahad Shahbaz Khan, Rao Muhammad Anwer, and Ming-Hsuan Yang. 2022. Class-agnostic object detection with multi-modal transformer. In The European Conference on Computer Vision. Springer. Bolin Ni, Houwen Peng, Minghao Chen, Songyang Zhang, Gaofeng Meng, Jianlong Fu, Shiming Xiang, and Haibin Ling. 2022. Expanding language-image pretrained models for general video recognition. In The European Conference on Computer Vision. OpenAI. Openai terms of use. https://openai.com/ policies/terms-of-use. OpenAI. 2023. Chatgpt. Large Language Model for human style conversation https://chat.openai.com. OpenLMLab. 2023. Moss: Codebase for moss project. An open-sourced plugin-augmented conversational language model, https://github.com/ OpenLMLab/MOSS. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning. Hanoona Rasheed, Muhammad Uzair khattak, Muhammad Maaz, Salman Khan, and Fahad Shahbaz Khan. 2023. Finetuned clip models are efficient video learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 12594 Hanoona Rasheed, Muhammad Maaz, Muhammad Uzair Khattak, Salman Khan, and Fahad Shahbaz Khan. 2022. Bridging the gap between object and image-level representations for open-vocabulary detection. In Advances in Neural Information Processing Systems. David Rozenberszki, Or Litany, and Angela Dai. 2022. Language-grounded indoor 3d semantic segmentation in the wild. In The European Conference on Computer Vision. 
Yixuan Su, Tian Lan, Huayang Li, Jialu Xu, Yan Wang, and Deng Cai. 2023. Pandagpt: One model to instruction-follow them all. arXiv preprint arXiv:2305.16355. Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford alpaca: An instruction-following llama model. https:// github.com/tatsu-lab/stanford_alpaca. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. Mengmeng Wang, Jiazheng Xing, and Yong Liu. 2021. Actionclip: A new paradigm for video action recognition. arXiv preprint arXiv:2109.08472. Yi Wang, Kunchang Li, Yizhuo Li, Yinan He, Bingkun Huang, Zhiyu Zhao, Hongjie Zhang, Jilan Xu, Yi Liu, Zun Wang, et al. 2022. Internvideo: General video foundation models via generative and discriminative learning. arXiv preprint arXiv:2212.03191. Jialian Wu, Jianfeng Wang, Zhengyuan Yang, Zhe Gan, Zicheng Liu, Junsong Yuan, and Lijuan Wang. 2022. Grit: A generative region-to-text transformer for object understanding. arXiv preprint arXiv:2212.00280. Dejing Xu, Zhou Zhao, Jun Xiao, Fei Wu, Hanwang Zhang, Xiangnan He, and Yueting Zhuang. 2017. Video question answering via gradually refined attention over appearance and motion. In Proceedings of the 25th ACM international conference on Multimedia, pages 1645–1653. Antoine Yang, Antoine Miech, Josef Sivic, Ivan Laptev, and Cordelia Schmid. 2022. Zero-shot video question answering via frozen bidirectional language models. arXiv preprint arXiv:2206.08155. Zhou Yu, Dejing Xu, Jun Yu, Ting Yu, Zhou Zhao, Yueting Zhuang, and Dacheng Tao. 2019. Activitynet-qa: A dataset for understanding complex web videos via question answering. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 9127–9134. Hang Zhang, Xin Li, and Lidong Bing. 2023. Videollama: An instruction-tuned audio-visual language model for video understanding. arXiv preprint arXiv:2306.02858. Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068. Xingyi Zhou, Vladlen Koltun, and Philipp Krähenbühl. 2021. Probabilistic two-stage detection. In arXiv preprint arXiv:2103.07461. Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. 2023. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592. 12595 Supplemental Material A Qualitative Evaluation We performed an extensive evaluation of our model on a variety of open-ended video questionanswering tasks, utilizing diverse videos sourced from ActivityNet and YouTube. The evaluation tasks included video reasoning (Figure 5), creative and generative tasks (see Figure 6), spatial understanding (Figure 7), action recognition (Figure 8), video conversation (Figure 9), question answering (Figure 10) and temporal understanding (Figure 11). Our model demonstrates proficiency in comprehending the content of the videos and generating accurate responses across multiple videobased tasks. Our model can effectively understand the visual information present in the videos and provide precise answers (see Figures 5 to 11). 
Figure 5: Video Reasoning Task. This figure illustrates an example from Video-ChatGPT’s demonstration showcasing its performance in video reasoning tasks.
Figure 6: Creative and Generative Tasks. Illustrative examples from Video-ChatGPT’s demonstration highlight its performance in video-based creative and generative tasks, such as crafting a story, poem, or advertisement.
Figure 7: Spatial Understanding Tasks. The figure depicts examples from Video-ChatGPT’s demonstration, emphasizing its capability in video-based spatial understanding tasks, including identifying renowned locations or counting the number of objects in a scene.
Figure 8: Action Recognition Task. This figure illustrates examples from Video-ChatGPT’s demonstration showcasing its performance in video action recognition tasks such as playing drums and grooming a horse.
Figure 9: Video Understanding and Conversation Tasks. This figure illustrates examples from Video-ChatGPT’s demonstration showcasing its performance in video understanding and conversation tasks.
Figure 10: Question-Answering Task. The figure depicts examples from Video-ChatGPT’s demonstration showcasing its performance in question-answering tasks.
Figure 11: Temporal Understanding Task. The figure provides examples from Video-ChatGPT’s demonstration, highlighting its performance in temporal understanding tasks, particularly in comprehending sequences of events.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1249–1265 August 11-16, 2024 ©2024 Association for Computational Linguistics ANALOGYKB: Unlocking Analogical Reasoning of Language Models with A Million-scale Knowledge Base Siyu Yuan♡∗, Jiangjie Chen♣∗, Changzhi Sun♢†, Jiaqing Liang♡♠‡, Yanghua Xiao♣♠, Deqing Yang♡♠‡ ♡School of Data Science, ♣School of Computer Science, Fudan University ♢East China Normal University, ♠Shanghai Key Laboratory of Data Science, [email protected], [email protected], [email protected], {liangjiaqing,shawyh,yangdeqing}@fudan.edu.cn Abstract Analogical reasoning is a fundamental cognitive ability of humans. However, current language models (LMs) still struggle to achieve human-like performance in analogical reasoning tasks due to a lack of resources for model training. In this work, we address this gap by proposing ANALOGYKB, a million-scale analogy knowledge base (KB) derived from existing knowledge graphs (KGs). ANALOGYKB identifies two types of analogies from the KGs: 1) analogies of the same relations, which can be directly extracted from the KGs, and 2) analogies of analogous relations, which are identified with a selection and filtering pipeline enabled by large language models (LLMs), followed by minor human efforts for data quality control. Evaluations on a series of datasets of two analogical reasoning tasks (analogy recognition and generation) demonstrate that ANALOGYKB successfully enables both smaller LMs and LLMs to gain better analogical reasoning capabilities. Resources of this paper can be found at https://github.com/siyuyuan/ analogykb. 1 Introduction Making analogies requires identifying and mapping a familiar domain (i.e., source domain) to a less familiar domain (i.e., target domain) (Hofstadter and Sander, 2013). As shown in Figure 1, utilizing the analogy of the solar system can facilitate comprehension of the complex structure of atoms. Analogical reasoning is an important aspect of the cognitive intelligence of humans, allowing us to quickly adapt our knowledge to new domains (Hofstadter, 2001; Ding et al., 2023), make decisions (Hansen-Estruch et al., 2022), and solve problems (Yasunaga et al., 2023). As a result, the topic of analogy has been drawing significant research attention in the community. ∗Equal contribution. †Work done during ByteDance AI Lab. ‡Corresponding authors. 1 Earth Sun Newton Gravity Force Electron Nucleus Faraday Electric Force Has Has Orbit Orbit Discover Discover 1.(Orbit) Nucleus is to Electrons as Sun is to Earth 2.(Has) Electric Force is to Nucleus as Gravity Force is to Sun 3.(Discover) Newton is to Gravity Force as Faraday is to Electric Force 4.… Discovered Analogies Knowledge Graphs Solar System Atom Structure Figure 1: An example of acquiring analogies from KGs. Based on the relational knowledge triples from KGs, i.e., facts about the solar system and an atom structure, we can discover new analogies using the corresponding relations between concepts. However, resources for analogical reasoning are rather limited in scale (Mikolov et al., 2013b; Gladkova et al., 2016; Chen et al., 2022), which usually consist of only hundreds or thousands of data samples. As a result, these datasets do not support effective training of language models to gain analogical reasoning abilities. 
Although large language models (LLMs) can make some reasonable analogies without requiring gradient update, their performance still lies behind humans (Bhavya et al., 2022; Jiayang et al., 2023). Therefore, larger-scale data sources are needed to facilitate the research in this area. With richer analogies, we can train specialized analogy-making models and retrieve high-quality examples to assist LLMs. Therefore, the research question is: How to acquire largescale analogies at a moderate cost? An analogy is determined by the relational structure (Bartha, 2013), e.g., A:B::C:D (i.e., A is to B as C is to D), where the relation between A and B is analogous to the relation between C and D. The concepts A, B, C, and D can be entities and events. As shown in Figure 1, the “solar system” and an “atom” share a similar structure, allowing us to quickly grasp the relation between an “electron” 1249 and a “nucleus” in concepts of their source domain counterparts. Such relational structure can be derived from the triplet knowledge, e.g. (electron, orbit, nucleus) and (earth, orbit, sun), in knowledge graphs (KGs) (Wu et al., 2012). Therefore, such structure knowledge can be utilized and reorganized to create new analogy knowledge, supporting large-scale knowledge acquisition. In this work, we aim to build a knowledge base (KB) for storing analogies derived from existing KGs to improve analogical reasoning. However, due to the complicated relational structures, discovering analogies from KGs is not a trivial task. Although two pairs of concepts with the same relation can form a valid analogy (e.g., lion, isA, animal and apple, isA, fruit), interesting and diverse analogies are implicit in the KGs, with more complex relations. Concepts under two distinct but similar relations in KGs can also form a reasonable analogy (Hesse, 1959). For example, chief executive officer and head of state can both be abstracted into a meta relation (Hesse, 1959; Gentner and Maravilla, 2017), i.e., head of organization. Therefore, they are analogous relations under a meta relation. It is important to generalize the finding of implicit analogies beyond the same relations within KGs. We present ANALOGYKB, which is a largescale analogy KB. We use Wikidata (Vrandeˇci´c and Krötzsch, 2014) and ConceptNet (Speer et al., 2017) as our seed KGs and discover two types of analogies from these KGs: analogies of 1) same relations and 2) analogous relations. Analogies of the same relations can be directly extracted from existing KGs. In contrast, analogies of analogous relations are more implicit, requiring the finding of relation pairs from the KGs that can form valid analogies. However, it is costly to manually select analogous relation pairs. Therefore, we use InstructGPT003 (Ouyang et al., 2022), a LLM of great capabilities in NLP tasks, for finding and deciding the analogical semantics of relations. To eliminate the noise from the outputs of InstructGPT003 (§ 3.5), we devise two filtering rules based on 1) the symmetry of analogy and 2) meta relation summarization, which generalizes two relations into a more abstract meta relation. Then, we manually review the filtered results to further ensure data quality. Our ANALOGYKB comprises over 1 million analogies with 943 relations, including 103 analogous relations. Smaller LMs trained on ANALOGYKB gain significant improvements over the previous methods, even rivaling human performance on some analogy recognition tasks. 
Furthermore, we prove that ANALOGYKB can endow both smaller LMs and LLMs with satisfactory analogymaking capabilities. Our contributions are summarized as follows: • To the best of our knowledge, we are the first to construct an analogy KB (ANALOGYKB) with a million scale and diverse relational structures. • We propose a novel framework with LLMs to discover more interesting and implicit analogies of analogous relations; • We conduct extensive experiments to evaluate the effectiveness of ANALOGYKB, which significantly improves the analogical reasoning performance of both smaller LMs and LLMs. 2 Related Work Analogy Acquisition Early studies mainly acquire analogy knowledge via linguists (Turney et al., 2003; Boteanu and Chernova, 2015), which is costly and inefficient. Recent studies consider exploiting relations in KGs to build analogies (Speer et al., 2008; Allen and Hospedales, 2019; Ulˇcar et al., 2020), which can be divided into two lines of work: 1) Acquiring from commonsense KGs, which leverages semantic and morphological relations from WordNet (Miller, 1995), ConceptNet (Speer et al., 2017), etc. However, some of these datasets are large-scale but of poor quality (Li et al., 2018, 2020), while others are of high quality but limited in size (Mikolov et al., 2013b; Gladkova et al., 2016). 2) Acquiring from encyclopedia KGs (Si and Carlson, 2017; Zhang et al., 2022; Ilievski et al., 2022), which utilizes the relations from DBpedia (Auer et al., 2007) and Wikidata (Vrandeˇci´c and Krötzsch, 2014), but their empirical experiments are relatively small in size. Analogical Reasoning Analogical reasoning aims to identify a relational structure between two domains (Bartha, 2013; Chen et al., 2022). Previous work adopts the word analogy task to investigate the analogical reasoning capability of LMs (Mikolov et al., 2013a,b; Levy and Goldberg, 2014; Gladkova et al., 2016; Schluter, 2018; Fournier et al., 2020; Ushio et al., 2021). Recent work demonstrates that LLMs can generate some 1250 1 R1: antonym Left:Right Up:Down High:Low R2: CEO Andy Jassy:Amazon Tim Cook:Apple Elon Musk:Twitter R3: head of state Joe Biden:USA Rishi Sunak:UK Emmanuel Macron:France Up:Down :: High:Low Tim Cook:Apple :: Joe Biden:USA Analogous Relations Same Relation Concept pairs Relation pairs Left:Right, Andy Jassy:Amazon (antonym, CEO), (CEO, head of state) ConceptNet Analogies of the same relation Analogies of analogous relations Figure 2: The relations with concept pairs are stored in ANALOGYKB. We define two types of analogies, i.e., analogies of the same relation and analogies of analogous relations, and derive them from existing KGs. reasonable abstract (Mitchell, 2021; Hu et al., 2023; Webb et al., 2023) and natural language-based analogies (Bhavya et al., 2022; Wijesiriwardene et al., 2023; Jiayang et al., 2023) but still lay behind humans in some cases, and smaller LMs struggle to learn analogical reasoning ability due to a lack of training data. Knowledge Base Construction Knowledge base (KB) consists of structured knowledge to support various applications. 
The approaches to constructing KBs can be divided into three categories: 1) Manual construction (Miller, 1995; Speer et al., 2017), which creates the KBs with specialized knowledge written by experts, and thus is laborintensive; 2) Automatic construction (Wu et al., 2012; Martinez-Rodriguez et al., 2018), which leverages models to extract knowledge from unstructured corpora, may lead to low data quality; 3) Semi-automatic construction (Dalvi Mishra et al., 2017; Romero and Razniewski, 2020), which involves manual curation and annotation. Our work is based on automatic approaches with LLMs only requiring small-scale human checking efforts. 3 ANALOGYKB Construction This section details the framework for building ANALOGYKB. We first define the schema of ANALOGYKB (§ 3.1). Then, we collect relations with concept pairs from existing KGs (§ 3.2, Step 1) and directly obtain analogies of the same relations from KGs (§ 3.3, Step 2). We propose adopting LLMs (Ouyang et al., 2022) followed by minor human efforts to acquire analogies of analogous relations (§ 3.4, Step 3). 3.1 Schema for Analogies in ANALOGYKB This paper focuses on the analogy formed as A:B::C:D, where concepts as A, B, C and D can be entities or events. The concept pair A:B is analogous to C:D based on an underlying relational structure. Since ANALOGYKB is built on existing KGs, we define two types of that relational structure based on KG semantics: 1) analogies of the same relation and 2) analogies of analogous relations. Data in ANALOGYKB is organized as in Figure 2, where each relation R contains subjectobject concept pairs s : o. Within each relation, analogies of the same relation can be naturally formed, e.g., “Up is to Down as High is to Low”. Also, the concept pairs between two relations can form analogies, as long as the relation pair have analogous structures (Hesse, 1959). For example, “Tim Cook is to Apple as Joe Biden is to USA”, where R2 (CEO) is analogous to R3 (head of state). Therefore, ANALOGYKB only has to store concept pairs of each relation and analogous relation pairs, from which analogies can be easily derived. We list the definitions of each terminology with examples in Appendix A.1 for better understanding. 3.2 Source Data Collection We choose the two most-used KGs, i.e., ConceptNet and Wikidata consisting of high-quality concept pairs with relations, as our data sources. For ConceptNet, we select the concept pairs with weights bigger than 2.0 to improve the data quality and collect 100,000 concept pairs with 27 relations. Due to the vast amount of Wikidata, we randomly sample 5 million concepts with 813 relations from Wikidata, resulting in 20 million concept pairs. 3.3 Acquiring Analogies of the Same Relation We can directly utilize the concept pairs in the KGs to generate analogies of the same relations. An important perspective is that humans usually draw upon familiar domains and situations to better understand unfamiliar ones. To make our analogy KB more applicable to real-world scenarios, we rank the concept pairs according to their popularity scores, reflected by pageview times (in Wikidata) and concept weights (in ConceptNet). 3.4 Acquiring Analogies of Analogous Relations As defined in § 3.1, analogies of analogous relations consist of two concept pairs with analogous 1251 I: Analogous Relations Generation /* I: Task prompt */ Choose the relations from the relation candidates that can form an analogy with the given relation. 
/* Examples */ Given relation: written by Relation candidates: [lyrics by, composed by, ...] Answer: lyrics by, composed by, ... /* Auto selection of analogical relations */ Given relation: chief executive officer Relation candidates: [head of state, ...] Answer: head of state, head of government, ... II: Meta Relation Summarization /* Task prompt */ Induce two relations into a higher-level relation and explain why they can form an analogy. /* Examples */ The relation [lyrics by] and the relation [composed by] can form an analogy because both of them can be induced into a relation: [created by]. The relation [written by] and the relation [written system] can form an analogy because both of them can be induced into a relation: None. /* Auto-completion for meta relation */ The relation [chief executive officer] and the relation [head of government] can form an analogy because both of them can be induced into a relation: head of organization. Table 1: Examples of prompt for InstructGPT003 for analogous relations generation and meta relation summarization. Green texts are generated by InstructGPT003. relations R1 and R2. However, it is difficult to automatically check whether R1 and R2 are analogous and manual annotation is costly. Recently, LLMs (Ouyang et al., 2022; OpenAI, 2022) have shown their remarkable few-shot learning abilities with in-context learning. Given a task prompt describing the task and several examples, LLMs can do the task well without training. Therefore, we propose to exploit LLMs (e.g., InstructGPT003) to acquire analogies of analogous relations. Finding Candidate Relation Pairs We collect 840 relations, leading to a potential amount of 840 2  relation pairs. The relations that are semantically similar to each other can form an analogy (Hesse, 1959). For each relation, we first narrow down the candidate set from the 840 relations to the 20-most similar ones. Specifically, we use InstructGPT embeddings (text-embedding-ada-002) to convert the relations into embeddings and calculate the cosine similarity between them. By identifying the top 20 relations with the highest similarity as candidate relations for the query relation, the search space is significantly reduced for filtering analogous relations. Predicting Analogous Relation Pairs While the search space is reduced, manual annotation remains cost-prohibitive (840 × 20). Thus, we continue to adopt InstructGPT003 to predict analogous relation pairs. An example in Table 1 (I) shows the acquisition of analogous relation pairs. Given examples and the query (“chief executive officer”), InstructGPT003 selects the relations “head of state” and “head of organization” from the candidates to form analogies. Finally, InstructGPT003 obtains 284 relation pairs. However, we find that InstructGPT003 struggles to filter out similar but wrong relations that cannot form analogies with queries, e.g., “operator” for “chief executive officer”, which requires further filtering. Filtering for High-quality Relation Pairs In the examination process of 284 acquired relation pairs, we further implement two automatic filtering rules before conducting manual filtering to reduce human labor: 1. Rule 1: if two relations can form an analogy, InstructGPT003 should simultaneously select R1 for R2 and R2 for R1. 2. Rule 2 (Hesse, 1959): The second rule is using a more abstract meta relation to decide if two relations can form an analogy. 
The rationale behind the Rule 2 is that if two relations are analogous, then they can be generalized into a more abstract meta relation. For example, in Table 1 (II), written by and composed by are analogous since they can be induced to a meta relation created by. To acquire meta relations, we prompt InstructGPT003 with a task prompt with some examples, as shown in Table 1 (II). If InstructGPT003 returns “None”, we discard this case. After filtering, 103 relation pairs remain. To further improve data quality, we adopt a third filtering by recruiting two volunteers to manually examine the remaining results, including deleting relation pairs that fail to form analogies or adding previously unchosen relation pairs that can form analogies from candidates. Finally, we sort the concept pairs by pageview (Wikidata) and weight (ConceptNet). 3.5 Analysis of ANALOGYKB As shown in Table 2, ANALOGYKB is massive, consisting of over 1 million concept pairs and 943 1252 Source # Concept Pair # Rel(s) Analogy Acc. Analogies of the Same Relation ConceptNet 75,019 27 98.50% Wikidata 563,024 813 98.00% Analogies of Analogous Relations ConceptNet 11,829 5 95.50% Wikidata 382,168 98 96.00% Total 1,032,040 943 97.00% Table 2: The statistics of ANALOGYKB. We report the number of concept pairs (# Concept Pair) and relations (pairs if for analogous relations) (# Rel(s)), manually evaluated the accuracy of randomly selected 200 analogies (Analogy Acc.) and the source KB (Source). Data # Analogy # Rel Language SAT 374 En Google 550 15 En UNIT 2 252 En UNIT 4 480 En BATS 1,998 4 En E-KAR 1251 28 En E-KAR 1655 28 Zh ANALOGYKB ≥1,032,040 943 En Table 3: Comparison between ANALOGYKB and previous analogy data source: numbers of analogies (i.e., A:B::C:D), number of relations and language. relations, which can form even more pairs of analogies. Since ANALOGYKB provides a more comprehensive range of relations than previous datasets, it allows users to select their preferred analogies within each relation (pair). To evaluate the quality of ANALOGYKB, we randomly sample 200 analogies from each data type, i.e., two concept pairs of the same or analogous relations, in the form of A:B::C:D. The data is annotated by two annotators with Fleiss’s κ = 0.86 (Fleiss et al., 1981). Results show that ANALOGYKB is of high quality. Even for analogies of analogous relations, analogies are still of over 95% accuracy. We further compare ANALOGYKB with the resources related to the analogy, as reported in Table 3. We find that ANALOGYKB is much larger than previous data sources, with more analogies and relations. To better present the fabric of ANALOGYKB, we present the distribution of the categories of concepts covered in ANALOGYKB in Figure 3. The categories are obtained from the hypernym of concepts from Probase (Wu et al., 2012). We find that ANALOGYKB exhibits high diversity. 
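To make the candidate-narrowing and Rule 1 filtering of Section 3.4 concrete, the sketch below pairs an embedding-based top-k search with a symmetry check. The vectors and relation selections are toy examples; in the actual pipeline the embeddings come from text-embedding-ada-002 and the selections from InstructGPT003.

```python
import numpy as np

def top_k_candidates(query: str, rel_embeddings: dict, k: int = 20) -> list:
    """Return the k relations most similar to `query` by cosine similarity."""
    q = rel_embeddings[query]
    sims = {
        r: float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
        for r, v in rel_embeddings.items() if r != query
    }
    return sorted(sims, key=sims.get, reverse=True)[:k]

def symmetry_filter(selections: dict) -> set:
    """Rule 1: keep a relation pair only if each relation was selected for the other."""
    kept = set()
    for r1, chosen in selections.items():
        for r2 in chosen:
            if r1 in selections.get(r2, set()):
                kept.add(tuple(sorted((r1, r2))))
    return kept

# Toy embeddings for the candidate-narrowing step.
rng = np.random.default_rng(0)
rels = ["chief executive officer", "head of state", "operator", "written by"]
print(top_k_candidates("chief executive officer",
                       {r: rng.normal(size=8) for r in rels}, k=2))

# Toy LLM selections: 'operator' is only selected one way, so it is dropped.
selections = {
    "chief executive officer": {"head of state", "operator"},
    "head of state": {"chief executive officer"},
    "operator": set(),
}
print(symmetry_filter(selections))  # {('chief executive officer', 'head of state')}
```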
Figure 3: Distribution of concept categories in our ANALOGYKB. The top-level categories are human (23.46%), place (23.05%), thing (14.36%), organization (14.26%), species (6.72%), group (5.23%), technology (4.56%), game (3.66%), event (2.76%), and language (1.94%), each branching into fine-grained types such as actor, politician, city, country, movie, song, company, school, war, mammal, device, video game, and natural language.
Method # Total # Correct
ChatGPT 299 74
InstructGPT003 284 97
+ Rule 1 139 97
+ Rule 1 & Rule 2 103 97
+ Rule 1 & Rule 2 & Human 103 103
Table 4: Ablated evaluation results of the analogous relation pairs. We record the total number of analogous relation pairs (# Total) the model selects and the correct ones among them (# Correct). Note that “Human” denotes manual modifications, including adding missing relations or deleting incorrect ones, so the results are already correct (103→103).
Are the filtering techniques for analogous relations useful? We evaluate the usefulness of the filtering components, i.e., symmetry (Rule 1) and meta relation summarization (Rule 2), as well as manual correction. We also adopt ChatGPT (OpenAI, 2022) as an ablated variant. We record the total number of analogous relation pairs output by the models (# Total) and then employ annotators to report the number of correct ones among them (# Correct). In this process, the annotators only need to review these relation pairs, not correct them. Each pair is examined by two annotators with Fleiss’s κ = 0.86. The results in Table 4 show that: 1) InstructGPT003 is superior to ChatGPT, but it still cannot filter out similar but wrong relation pairs, indicating the need for further filtering; 2) the rule-based filtering technique is rather effective, as only a few manual corrections are needed according to the human annotations. This overcomes the
4 ANALOGYKB Evaluation

4.1 Analogy Recognition Evaluation

The analogy recognition task aims to recognize the candidate most analogous to the query, formulated as multiple-choice question answering.

Can models trained on ANALOGYKB acquire better analogy recognition abilities? We adopt six analogy benchmarks, i.e., E-KAR (Chen et al., 2022), BATS (Gladkova et al., 2016), UNIT 2 and UNIT 4 (Boteanu and Chernova, 2015), Google (Mikolov et al., 2013b), and SAT (Turney et al., 2003), for evaluation. Compared to BATS and Google, E-KAR, UNIT 2, UNIT 4, and SAT contain more abstract and complex analogies and are thus more difficult for humans. For the backbone model, we use RoBERTa-Large (Liu et al., 2019) and randomly sample 10,000 data points from ANALOGYKB to train the model in a multiple-choice question-answering format. We first train the model on the data from ANALOGYKB and then further fine-tune it on the benchmarks. (Detailed information on the benchmarks is given in Appendix B, and the construction of the ANALOGYKB sample data is described in Appendix C.1.) For baselines, we adopt pre-trained word embeddings (Ushio et al., 2021; Ouyang et al., 2022), pre-trained sentence embeddings (Reimers and Gurevych, 2019; Gao et al., 2021), and pre-trained language models (Raffel et al., 2022; Devlin et al., 2019; Liu et al., 2019; He et al., 2023). To rule out the confounder of entity knowledge in ANALOGYKB, we also add knowledge-enhanced models, ERNIE (Zhang et al., 2019) and LUKE (Yamada et al., 2020), which contain relational knowledge between entities. Moreover, we also randomly sample 10,000 data points from ANALOGYKB without checking and filtering, i.e., + ANALOGYKB (w/o check), to prove the necessity of filtering; after human examination, about 63% of these data points do not form analogies. Previous benchmarks, except E-KAR, do not have a training set. Thus, we fine-tune LMs on their small development sets. (Details about the baselines and training process are shown in Appendix C.2 and C.3; the statistical significance test is shown in Appendix C.5.)

Figure 4 (two panels: Pre-trained Acc. and Fine-tuned Acc.): The accuracy of RoBERTa-Large trained on different data subsets on the analogy recognition task. Data denotes the dataset sampled directly from ANALOGYKB, Datasame denotes the dataset that only has same-relation analogies, and Datapseudo denotes the dataset with concept pairs that do not form analogies. All the datasets have the same size.

The results presented in Table 5 show that: 1) The performance of sentence embeddings is inferior to word embeddings. The rationale is that word analogies are based on relational rather than semantic similarity, so taking the difference between two word embeddings is a more reasonable, yet still problematic, approach for finding word analogies. 2) Incorporating entity knowledge cannot improve model performance on analogy recognition. 3) The training data without checking brings noise and even degrades model performance, further emphasizing the importance of high-quality data in ANALOGYKB. 4) Training models on ANALOGYKB can significantly improve the model performance on analogy recognition by a large margin.
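As an illustration of the multiple-choice formulation used above, the following is a minimal sketch with HuggingFace Transformers and RoBERTa-Large. The “A is to B as C is to D” verbalization, the example candidates (taken from Table 8 in the appendix), and the gold label index are illustrative assumptions; the actual training setup is described in Appendix C.3.

```python
# Sketch of encoding one analogy recognition instance for a multiple-choice model.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForMultipleChoice.from_pretrained("roberta-large")

query = ("army", "order")
candidates = [("volunteer", "summon"), ("band", "band leader"),
              ("tourist", "guide"), ("students", "instruction")]
label = torch.tensor([0])  # index of the gold candidate (illustrative)

# One verbalized sentence per candidate choice.
texts = [f"{query[0]} is to {query[1]} as {c} is to {d}" for c, d in candidates]
enc = tokenizer(texts, padding=True, return_tensors="pt")
# The model expects tensors shaped (batch, num_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}

outputs = model(**inputs, labels=label)
loss, logits = outputs.loss, outputs.logits   # logits: (1, num_choices)
pred = logits.argmax(dim=-1).item()
```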
How much do analogies of analogous relations in ANALOGYKB contribute to performance? We create two ablated variants from ANALOGYKB to train the models: 1) Analogies of the same relations, denoted as Datasame: we randomly sample 10,000 data points of the same relations as an ablated variant. 2) Pseudo analogies, denoted as Datapseudo: for each data point, we randomly sample 5 concept pairs from ANALOGYKB and choose one as the query, one as the answer, and the remaining three as distractors. This ensures that ANALOGYKB indeed imparts analogical reasoning ability to the model rather than merely serving as data augmentation. We adopt two settings: only train RoBERTa-Large on the 10,000 data points (i.e., Pre-trained), and first train RoBERTa-Large on the 10,000 data points and then fine-tune it on the specific benchmarks (i.e., Fine-tuned). The results in Figure 4 show that: 1) Analogies of analogous relations in ANALOGYKB are rather important for models to comprehend analogies with more abstract and complex relations. 2) Training models on randomly constructed analogy-style data even drags down model performance, further emphasizing the importance of ANALOGYKB. (We also compare the data from different KB sources in Appendix C.4.)
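A rough sketch of how the two ablated variants just described could be assembled is given below; the `kb` mapping and `pair_pool` list are assumed data structures, and the full construction procedure (including shuffling of choices) is described in Appendix C.1.

```python
# Hypothetical sketch of building the Datasame and Datapseudo ablations.
# `kb` maps a relation name to its list of concept pairs (assumed structure).
import random

def make_same_relation_instance(kb: dict) -> dict:
    # Query and answer share one relation; distractors come from other relations.
    relation = random.choice([r for r in kb if len(kb[r]) >= 2])
    query, answer = random.sample(kb[relation], 2)
    other = [r for r in kb if r != relation]
    distractors = [random.choice(kb[r]) for r in random.sample(other, 3)]
    # In practice the choices would be shuffled and the label adjusted accordingly.
    return {"query": query, "choices": [answer] + distractors, "label": 0}

def make_pseudo_instance(pair_pool: list) -> dict:
    # Pseudo "analogies": 5 random pairs with no analogical structure at all.
    query, answer, *distractors = random.sample(pair_pool, 5)
    return {"query": query, "choices": [answer] + distractors, "label": 0}
```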
How do data sizes and model sizes affect performance? We use T5-Large as the base model to examine the effects of training data size on model performance. We first train the model on data from ANALOGYKB and then fine-tune it on E-KAR. As illustrated in Figure 5(a), increasing the amount of training data from ANALOGYKB improves model performance. Figure 5(b) shows the results of different-sized T5 models on 10,000 data points from ANALOGYKB. We find that the larger models get less of a performance gain from E-KAR, indicating that they learn more from ANALOGYKB and can better generalize to E-KAR.

Figure 5 (panels: (a) Different data sizes, (b) Different model sizes): Performance change (Accuracy %) for T5 on the E-KAR test set with increasing training data (1K, 5K, 10K, 50K, 100K) from ANALOGYKB and model size (60M, 220M, 770M, 3B). T5 is either trained on ANALOGYKB (AKB) or on both ANALOGYKB and E-KAR (AKB + E-KAR).

4.2 Analogy Generation Evaluation

This task can be formulated as a text generation task: completing D given A, B, and C to form a plausible analogy A is to B as C is to D. Analogy generation is of more practical use, since the generation of familiar analogies could be helpful to comprehend the source problem.

Does ANALOGYKB support analogy generation? To answer this question, we investigate two settings. For smaller LMs, we randomly sample 1 million data points from ANALOGYKB, then fine-tune T5-Large on ANALOGYKB (named AnalogyT5) to compare with vanilla T5. For LLMs, we convert the query and analogies from ANALOGYKB into InstructGPT embeddings, retrieve the top-8 most similar analogies based on cosine embedding similarity, and use them as examples in the prompt. We test models on 100 test samples from three challenging benchmarks, which are not found in the training set. (Detailed information on the training process and the results on all six benchmarks are shown in Appendix D.1 and D.3; we also analyze the impact of data and model sizes and present case studies in Appendix D.2 and D.4.) Each generation is evaluated by three annotators with Fleiss’s κ = 0.93. The results in Table 6 show that, in both pre-training and in-context learning, ANALOGYKB enables better analogy generation, and the analogies of analogous relations prove significantly valuable to the performance of models.

Model              E-KAR   UNIT 4   SAT
vanilla T5         13.00   17.00    8.00
AnalogyT5same      42.00   63.00    37.00
AnalogyT5          57.00   80.00    64.00
InstructGPT003     61.00   70.00    60.00
+ Human            68.00   76.00    74.00
+ ANALOGYKBsame    64.00   77.00    77.00
+ ANALOGYKB        75.00   80.00    85.00
ChatGPT            58.00   76.00    78.00
+ Human            64.00   81.00    80.00
+ ANALOGYKBsame    64.00   80.00    81.00
+ ANALOGYKB        69.00   92.00    91.00

Table 6: Accuracy on analogy generation. For LLMs, we compare LLMs with 0-shot and human-written examples (+ Human) vs. ANALOGYKB-retrieved examples (+ ANALOGYKB). For smaller LMs, we compare AnalogyT5 with vanilla T5. AnalogyT5same and ANALOGYKBsame are the ablation variants with analogies of the same relations from ANALOGYKB.

Does ANALOGYKB help LMs generalize to out-of-domain analogies? Despite its high coverage of common concepts (§ 3.5), ANALOGYKB contains few analogies related to metaphor and science, which are not common in the KGs and are thus out-of-domain. To examine whether ANALOGYKB can generalize the ability of LMs to reason about these analogies, we test AnalogyT5 on the SCAN dataset (Czinczoll et al., 2022), which has 449 analogies from the metaphor and science domains. For smaller LMs, we follow the original experimental setup and compare the models trained on ANALOGYKB (see Appendix D.5 for details). For LLMs, we retrieve the top-8 most similar analogies from ANALOGYKB as examples, in contrast to zero-shot settings and to retrieving examples from BATS and E-KAR. The results shown in Table 7 reveal that: 1) for smaller LMs, training on BATS even worsens performance on SCAN, whereas training on E-KAR with complex analogies can indeed improve model performance on SCAN; 2) compared to E-KAR, ANALOGYKB can further help both LLMs and smaller models generalize to out-of-domain analogies.

Model              Acc.    MRR     Rec@5
GPT-2              2.00    5.70    10.70
+ BATS             1.10    2.20    3.60
+ E-KAR            2.39    5.44    10.46
+ ANALOGYKBsame    4.00    5.49    11.45
+ ANALOGYKB        5.12    6.41    12.77
BERT               0.90    4.40    8.00
+ BATS             0.40    1.90    3.10
+ E-KAR            1.50    4.32    7.92
+ ANALOGYKBsame    4.01    7.44    10.89
+ ANALOGYKB        6.24    10.36   14.07
InstructGPT003     3.32    15.75   34.58
+ BATS             5.07    21.40   32.37
+ E-KAR            9.12    25.00   36.27
+ ANALOGYKBsame    6.91    25.32   33.42
+ ANALOGYKB        15.30   32.80   38.46

Table 7: Analogy generation results on SCAN. For LLMs, we compare LLMs with 0-shot and examples retrieved from BATS (+ BATS) and E-KAR (+ E-KAR) vs. retrieved from ANALOGYKB (+ ANALOGYKB). For smaller LMs, we pre-train the models on BATS (+ BATS), E-KAR (+ E-KAR), or data sampled from ANALOGYKB (+ ANALOGYKB).

Figure 6: The accuracy of LLMs on the analogy explanation task. We compare LLMs with 0-shot (Base Model) and human-written examples (+ Human) vs. ANALOGYKB-retrieved examples (+ ANALOGYKB).

Can ANALOGYKB better support analogy explanation for LLMs? Analogy explanation requires LLMs to provide a reasonable explanation for a given analogy, which more closely simulates the process of human reasoning and knowledge explanation.
In this setting, we first retrieve top-8 most similar analogies based on cosine embedding similarity. Then, we ask GPT-4 to generate explanations for the analogies given relations and use them as examples in the prompt. We test InstructGPT003 and ChatGPT on 100 data samples from E-KAR, and employ two annotators to judge whether the explanations are correct with Fleiss’s κ = 0.97). The results in Figure 6 are consistent with Table 6, demonstrating that ANALOGYKB can facilitate better analogy explanation for LLMs, and the analogies of analogous relations are significantly valuable for performance. 1256 5 Conclusion In this paper, we introduce ANALOGYKB, a million-scale analogy KB to improve model performance in analogical reasoning tasks. We identify two types of analogies in existing KGs, i.e., analogies of the same and analogous relations, and utilize LLMs with minor human examinations to find them. ANALOGYKB demonstrates its great value in assisting both smaller LMs and LLMs with the resolution of analogy recognition and generation tasks, especially with analogies of analogous relations in ANALOGYKB. Limitations First, this paper only considers analogies involving one or two relations and primarily concentrates on analogies in the form of “A is to B as C is to D”. However, analogies may involve the combination of multiple relations of multiple entities or even events. For example, an engineer can learn the eye cross-section by taking the analogy of the camera structure. Here, the analogy involves multiple entities and relations in the two systems (camera and eye): Aperture should be analogous to pupil since both are channels for light to enter and black paint should be analogous to choroid since both absorb light to prevent it from bouncing and reflecting. Second, our ANALOGYKB is constructed using data from Wikidata and ConceptNet, which do not include analogies in other domains such as the scientific domain. For example, it would be challenging for LMs trained on ANALOGYKB to reason about an analogy such as Protein synthesis in a cell is like a factory assembly line as it would require a deep understanding of biological and industrial processes, which is not well-covered in our data sources. Also, ANALOGYKB is stored in the form of tuples, but in practice, some analogy situations may not be easily converted to this format. Future research should address how to bridge this gap. Due to the limited computational resources, we only use a subset of ANALOGYKB. Assuming unlimited computational resources, the far-stretching goal of this project is to enable the discovery of new, better analogies for applications such as explanation (e.g., science popularization), text polishing, and case-based reasoning. So, with the full scale of the data, we can train a specialized open-source large language model (e.g., Llama 2) in such related tasks with data from ANALOGYKB so that these models can discover novel analogies and understand new concepts and knowledge with analogical reasoning ability. Ethics Statement We hereby acknowledge that all authors of this work are aware of the provided ACL Code of Ethics and honor the code of conduct. Use of Human Annotations The annotations of relation pairs in ANALOGYKB are implemented by annotators recruited by our institution. The construction team remains anonymous to the authors, and the annotation quality is ensured by using a double-check strategy as described in Section 3. 
We ensure that the privacy rights of all annotators are respected throughout the annotation process. All annotators are compensated above the local minimum wage and consent to the use of ANALOGYKB for research purposes, as described in our paper. The annotation details are shown in Appendix A.2. Risks The database is sourced from publicly available sources, Wikidata and ConceptNet. However, we cannot guarantee that it is free of socially harmful or toxic language. Additionally, analogy evaluation relies on commonsense, and different individuals with diverse backgrounds may have varying perspectives. Acknowledgement We thank the anonymous reviewers for their valuable comments. This work is supported by the Chinese NSF Major Research Plan (No.92270121), Shanghai Science and Technology Innovation Action Plan (No.21511100401) and the National Natural Science Foundation of China (No. 62102095). The computations in this research were performed using the CFFF platform of Fudan University. References Carl Allen and Timothy Hospedales. 2019. Analogies explained: Towards understanding word embeddings. In International Conference on Machine Learning, pages 223–231. PMLR. Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. Dbpedia: A nucleus for a web of open data. In The Semantic Web, pages 722–735, Berlin, Heidelberg. Springer Berlin Heidelberg. Paul Bartha. 2013. Analogy and analogical reasoning. 1257 Bhavya Bhavya, Jinjun Xiong, and Chengxiang Zhai. 2022. Analogy generation by prompting large language models: A case study of instructgpt. arXiv preprint arXiv:2210.04186. Adrian Boteanu and Sonia Chernova. 2015. Solving and explaining analogy questions using semantic networks. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1). Jiangjie Chen, Rui Xu, Ziquan Fu, Wei Shi, Zhongqiao Li, Xinbo Zhang, Changzhi Sun, Lei Li, Yanghua Xiao, and Hao Zhou. 2022. E-KAR: A benchmark for rationalizing natural language analogical reasoning. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3941–3955, Dublin, Ireland. Association for Computational Linguistics. Tamara Czinczoll, Helen Yannakoudakis, Pushkar Mishra, and Ekaterina Shutova. 2022. Scientific and creative analogies in pretrained language models. arXiv preprint arXiv:2211.15268. Bhavana Dalvi Mishra, Niket Tandon, and Peter Clark. 2017. Domain-targeted, high precision knowledge extraction. Transactions of the Association for Computational Linguistics, 5:233–246. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Zijian Ding, Arvind Srinivasan, Stephen Macneil, and Joel Chan. 2023. Fluid transformers and creative analogies: Exploring large language models’ capacity for augmenting cross-domain analogical creativity. In Proceedings of the 15th Conference on Creativity and Cognition, C&C ’23, page 489–505, New York, NY, USA. Association for Computing Machinery. Joseph L Fleiss, Bruce Levin, Myunghee Cho Paik, et al. 1981. The measurement of interrater agreement. Statistical methods for rates and proportions, 2(212236):22–23. Louis Fournier, Emmanuel Dupoux, and Ewan Dunbar. 2020. 
Analogies minus analogy test: measuring regularities in word embeddings. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 365–375, Online. Association for Computational Linguistics. Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894–6910, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Dedre Gentner and Francisco Maravilla. 2017. Analogical reasoning. In The Routledge International Handbook of Thinking and Reasoning, pages 186– 203. Routledge. Anna Gladkova, Aleksandr Drozd, and Satoshi Matsuoka. 2016. Analogy-based detection of morphological and semantic relations with word embeddings: what works and what doesn’t. In Proceedings of the NAACL Student Research Workshop, pages 8–15, San Diego, California. Association for Computational Linguistics. Philippe Hansen-Estruch, Amy Zhang, Ashvin Nair, Patrick Yin, and Sergey Levine. 2022. Bisimulation makes analogies in goal-conditioned reinforcement learning. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 8407–8426. PMLR. Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2023. DeBERTav3: Improving deBERTa using ELECTRAstyle pre-training with gradient-disentangled embedding sharing. In The Eleventh International Conference on Learning Representations. Mary B Hesse. 1959. On defining analogy. In Proceedings of the Aristotelian Society, volume 60, pages 79–100. JSTOR. Douglas R Hofstadter. 2001. Analogy as the core of cognition. The analogical mind: Perspectives from cognitive science, pages 499–538. Douglas R Hofstadter and Emmanuel Sander. 2013. Surfaces and essences: Analogy as the fuel and fire of thinking. Basic books. Xiaoyang Hu, Shane Storks, Richard Lewis, and Joyce Chai. 2023. In-context analogical reasoning with pre-trained language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1953–1969, Toronto, Canada. Association for Computational Linguistics. Filip Ilievski, Jay Pujara, and Kartik Shenoy. 2022. Does wikidata support analogical reasoning? In Iberoamerican Knowledge Graphs and Semantic Web Conference, pages 178–191. Springer. Cheng Jiayang, Lin Qiu, Tsz Ho Chan, Tianqing Fang, Weiqi Wang, Chunkit Chan, Dongyu Ru, Qipeng Guo, Hongming Zhang, Yangqiu Song, Yue Zhang, and Zheng Zhang. 2023. Storyanalogy: Deriving story-level analogies from large language models to unlock analogical understanding. Omer Levy and Yoav Goldberg. 2014. Linguistic regularities in sparse and explicit word representations. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 171–180, Ann Arbor, Michigan. Association for Computational Linguistics. 1258 Peng-Hsuan Li, Tsan-Yu Yang, and Wei-Yun Ma. 2020. CA-EHN: Commonsense analogy from E-HowNet. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 2984–2990, Marseille, France. European Language Resources Association. Shen Li, Zhe Zhao, Renfen Hu, Wensi Li, Tao Liu, and Xiaoyong Du. 2018. Analogical reasoning on Chinese morphological and semantic relations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 138–143, Melbourne, Australia. Association for Computational Linguistics. 
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Jose L. Martinez-Rodriguez, Ivan Lopez-Arevalo, and Ana B. Rios-Alvarado. 2018. Openie-based approach for knowledge graph construction from text. Expert Systems with Applications, 113:339–355. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 2, NIPS’13, page 3111–3119, Red Hook, NY, USA. Curran Associates Inc. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 746–751, Atlanta, Georgia. Association for Computational Linguistics. George A. Miller. 1995. Wordnet: A lexical database for english. Commun. ACM, 38(11):39–41. Melanie Mitchell. 2021. Abstraction and analogymaking in artificial intelligence. Annals of the New York Academy of Sciences, 1505(1):79–101. OpenAI. 2022. Chatgpt. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2022. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(1). Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics. Julien Romero and Simon Razniewski. 2020. Inside quasimodo: Exploring construction and usage of commonsense knowledge. In Proceedings of the 29th ACM International Conference on Information &amp; Knowledge Management, CIKM ’20, page 3445–3448, New York, NY, USA. Association for Computing Machinery. Natalie Schluter. 2018. The word analogy testing caveat. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 242–246, New Orleans, Louisiana. Association for Computational Linguistics. Mei Si and Craig Carlson. 2017. A data-driven approach for making analogies. In Proceedings of the 39th Annual Meeting of the Cognitive Science Society, CogSci 2017, London, UK, 16-29 July 2017. Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI’17, page 4444–4451. AAAI Press. Robyn Speer, Catherine Havasi, and Henry Lieberman. 2008. Analogyspace: Reducing the dimensionality of common sense knowledge. In AAAI Conference on Artificial Intelligence. Peter D Turney, Michael L Littman, Jeffrey Bigham, and Victor Shnayder. 2003. 
Combining independent modules in lexical multiple-choice problems. Recent Advances in Natural Language Processing III: Selected Papers from RANLP, 2003:101–110. Matej Ulˇcar, Kristiina Vaik, Jessica Lindström, Milda Dailid˙enait˙e, and Marko Robnik-Šikonja. 2020. Multilingual culture-independent word analogy datasets. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4074–4080, Marseille, France. European Language Resources Association. Asahi Ushio, Luis Espinosa Anke, Steven Schockaert, and Jose Camacho-Collados. 2021. BERT is to NLP what AlexNet is to CV: Can pre-trained language models identify analogies? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3609–3624, Online. Association for Computational Linguistics. 1259 Denny Vrandeˇci´c and Markus Krötzsch. 2014. Wikidata: A free collaborative knowledgebase. Commun. ACM, 57(10):78–85. Taylor Webb, Keith J Holyoak, and Hongjing Lu. 2023. Emergent analogical reasoning in large language models. Nature Human Behaviour, pages 1–16. Thilini Wijesiriwardene, Ruwan Wickramarachchi, Bimal Gajera, Shreeyash Gowaikar, Chandan Gupta, Aman Chadha, Aishwarya Naresh Reganti, Amit Sheth, and Amitava Das. 2023. ANALOGICAL a novel benchmark for long text analogy evaluation in large language models. In Findings of the Association for Computational Linguistics: ACL 2023, pages 3534–3549, Toronto, Canada. Association for Computational Linguistics. Wentao Wu, Hongsong Li, Haixun Wang, and Kenny Q. Zhu. 2012. Probase: A probabilistic taxonomy for text understanding. In Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data, SIGMOD ’12, page 481–492, New York, NY, USA. Association for Computing Machinery. Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, and Yuji Matsumoto. 2020. LUKE: Deep contextualized entity representations with entityaware self-attention. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6442–6454, Online. Association for Computational Linguistics. Michihiro Yasunaga, Xinyun Chen, Yujia Li, Panupong Pasupat, Jure Leskovec, Percy Liang, Ed H Chi, and Denny Zhou. 2023. Large language models as analogical reasoners. arXiv preprint arXiv:2310.01714. Ningyu Zhang, Lei Li, Xiang Chen, Xiaozhuan Liang, Shumin Deng, and Huajun Chen. 2022. Multimodal analogical reasoning over knowledge graphs. arXiv preprint arXiv:2210.00312. Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: Enhanced language representation with informative entities. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1441–1451, Florence, Italy. Association for Computational Linguistics. A Details of ANALOGYKB A.1 Terminology Definition in ANALOGYKB To better understand the schema for analogies in ANALOGYKB, we list the terminologies in Table 9. A.2 Crowd-sourcing Details We have recruited a team of two undergraduates. We pay each annotator $8/h, exceeding the local minimum wage. The screenshots of the instructions and annotation interface are shown in Figure 8. B Benchmark We compare our methods with baselines and human performance in 6 different benchmarks. An example of these benchmarks is given in Table 8 For benchmarks without training sets, we only finetune models on their validation sets. 
• E-KAR (Chen et al., 2022): an Explainable Knowledge-intensive Analogical Reasoning benchmark sourced from the publicly available Civil Service Examinations (CSE) of China, which contains linguistic, commonsense, encyclopedic, and cultural (e.g., idiom and historical) knowledge. This dataset contains 870 training data, 119 validation data, and 262 test data. The SOTA model on this benchmark is proposed by Chen et al. (2022).
• BATS (Gladkova et al., 2016): the Bigger Analogy Test Set, containing more than 1,000 analogies. The analogies can be divided into four categories: lexicographic, encyclopedic, derivational, and inflectional morphology. This dataset contains 199 validation data and 1,799 test data. The SOTA model on this benchmark is proposed by Ushio et al. (2021).
• UNIT 2 (Boteanu and Chernova, 2015): a benchmark using word analogy problems from an educational resource. This dataset contains 24 validation data and 228 test data. The SOTA model on this benchmark is proposed by Ushio et al. (2021).
• UNIT 4 (Boteanu and Chernova, 2015): this benchmark also comes from an educational resource but is harder than UNIT 2. This dataset contains 48 validation data and 432 test data. The SOTA model on this benchmark is proposed by Ushio et al. (2021).
• Google (Mikolov et al., 2013b): a benchmark for intrinsic evaluation of word embeddings proposed by Google, which contains semantic and morphological relations. This dataset consists of 50 validation data and 500 test data. The SOTA model on this benchmark is proposed by Chen et al. (2022).
• SAT (Turney et al., 2003): a benchmark constructed from SAT exams in the US college admission test, consisting of 374 word analogy problems. This dataset contains 37 validation data and 337 test data. The SOTA model on this benchmark is proposed by Ushio et al. (2021).

Query: army:order
Candidates: (A) volunteer:summon (B) band:band leader (C) tourist:guide (D) students:instruction

Table 8: An example of the analogy recognition task. The true answers are highlighted.

As shown in Table 10, we list the overlap rates of ANALOGYKB with other analogy datasets. The overlap rates are calculated as (Data in ANALOGYKB ∩ Data in Other Datasets) / (Data in Other Datasets). Specifically, one data sample, i.e., “A is to B as C is to D”, can be converted into two tuples (A, R1, B) and (C, R2, D), where R1 and R2 can be exactly the same or analogous. If both tuples are present in ANALOGYKB, the overlap rate for this data instance is considered greater than 0. The results indicate that ANALOGYKB contains a portion of the data from other analogy benchmarks, exhibiting high coverage. However, after checking, we confirm that the training data sampled from ANALOGYKB, which is used to train LMs, does not contain the test data from other analogy benchmarks. This confirms the absence of data leakage, underscoring that training LMs on ANALOGYKB genuinely improves model performance on analogy recognition and generation tasks.
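A simplified sketch of the overlap-rate computation just described is shown below; the `kb_pairs_by_relation` and `analogous` structures, and the way relation matching is folded into a single membership test, are assumptions rather than the exact procedure.

```python
# Sketch: an analogy A:B::C:D counts as covered if both tuples appear in ANALOGYKB
# under the same or analogous relations.
def is_covered(analogy, kb_pairs_by_relation, analogous):
    a, b, c, d = analogy
    for r1, pairs1 in kb_pairs_by_relation.items():
        if (a, b) not in pairs1:
            continue
        for r2, pairs2 in kb_pairs_by_relation.items():
            ok = (r1 == r2) or ((r1, r2) in analogous) or ((r2, r1) in analogous)
            if ok and (c, d) in pairs2:
                return True
    return False

def overlap_rate(dataset, kb_pairs_by_relation, analogous):
    covered = sum(is_covered(x, kb_pairs_by_relation, analogous) for x in dataset)
    return covered / len(dataset)
```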
C Analogy Recognition Task

C.1 Data Construction
To pre-train RoBERTa-Large on ANALOGYKB, we randomly sample 5,000 analogies of the same relation and 5,000 analogies of analogous relations from ANALOGYKB and formulate them into the multiple-choice question-answering format. Specifically, for each instance, we randomly sample a concept pair from ANALOGYKB as a query and select another concept pair from the analogous relation as the answer. Then, we randomly sample 3 concept pairs from relations that cannot be analogous to the relation of the query as distractors. We also randomly sample 10,000 data points of the same relations as an ablated variant to show the effectiveness of analogies of analogous relations (denoted as Datasame); the construction method is similar, except that the query and answer are derived from the same relation. Additionally, we randomly sample 10,000 data points from ANALOGYKB and construct analogy-style data (denoted as Datapseudo). Specifically, we randomly sample 50,000 concept pairs from ANALOGYKB, without considering analogous relations, as the data pool. For each data point, we randomly sample 5 concept pairs from the data pool and choose one as the query, one as the answer, and the remaining three as distractors.

C.2 Details of Baselines
Word Embedding and Sentence Embedding. For the method of pre-trained word embeddings, we follow the method proposed by Ushio et al. (2021) and represent word pairs by taking the difference between their embeddings. Then, we choose the answer candidate with the highest cosine similarity to the query in terms of this vector difference. For the method of sentence embeddings, we convert the query A:B to “A is to B” and choose the answer candidate (“C is to D”) with the highest cosine similarity to the query.

C.3 Training Process
To pre-train language models on the sample data from ANALOGYKB, we follow the code from Huggingface (https://huggingface.co/docs/transformers/tasks/multiple_choice). Since previous benchmarks, except E-KAR, do not have a training set, we fine-tune LMs on their small development sets (about 300 samples). For hyperparameter search, we maximize performance on the development set of E-KAR (119 data samples) as a compromise. The training settings are: batch size = 64, learning rate = 3e-5, dropout rate = 0.1, and training epochs = 10.

C.4 Comparison with Different KB Sources
We also create two ablated variants to train the models to evaluate the necessity of ConceptNet and Wikidata: 1) Analogies from ConceptNet, denoted as Datacon: we randomly sample 10,000 (the same size as before) data points of the relations only in ConceptNet as an ablated variant. 2) Analogies from Wikidata, denoted as Datawiki: we randomly sample 10,000 data points of the relations only in Wikidata as an ablated variant. The results in Figure 7 show that ANALOGYKB can combine the commonsense knowledge of ConceptNet and the entity knowledge of Wikidata, and thus exhibits superior performance in improving the analogy-making ability of models compared to utilizing a single data source.

Figure 7 (two panels: Pre-trained Acc. and Fine-tuned Acc.): The accuracy of RoBERTa-Large trained on different data subsets on the analogy recognition task. Data denotes the dataset sampled directly from ANALOGYKB, and Datacon (or Datawiki) denotes the analogies only from ConceptNet (or Wikidata). All the datasets have the same size.

Category                           Definition                                                                        Example
Analogies                          A:B::C:D (A is to B as C is to D)                                                 Up:Down::High:Low, Tim Cook:Apple::Joe Biden:USA
Concept pairs                      A:B or C:D                                                                        Left:Right, Tim Cook:Apple
Relation pairs                     Two relations                                                                     (antonym, CEO), (CEO, head of state)
Analogous relations                Two relations that can form analogies                                             (CEO, head of state)
Analogies of analogous relations   A:B::C:D where the relation of A:B is different but analogous to that of C:D     Tim Cook:Apple::Joe Biden:USA

Table 9: The definitions of terminologies with examples in the schema for ANALOGYKB.

Dataset   Overlap Rate
E-KAR     28.98%
BATS      78.25%
UNIT 2    52.32%
UNIT 4    41.48%
Google    98.52%
SAT       34.70%

Table 10: The overlap rates of ANALOGYKB with other analogy datasets.
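The embedding-difference baseline described in Appendix C.2 above can be sketched as follows; `embed` is a placeholder for whatever word-embedding lookup is used (e.g., embeddings extracted from RoBERTa-Large), not a specific library call.

```python
# Sketch of the C.2 word-embedding baseline: represent a pair (A, B) by the
# difference of its embeddings and pick the candidate whose difference vector
# is most similar to the query's.
import numpy as np

def embed(word: str) -> np.ndarray:
    """Placeholder for a word-embedding lookup (assumption)."""
    raise NotImplementedError

def pair_vector(a: str, b: str) -> np.ndarray:
    return embed(a) - embed(b)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def solve_analogy(query: tuple, candidates: list) -> int:
    q = pair_vector(*query)
    scores = [cosine(q, pair_vector(c, d)) for c, d in candidates]
    return int(np.argmax(scores))  # index of the chosen candidate
```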
C.5 Significance Test
For the results in Table 5, we demonstrate through a statistical significance test that the random sampling of data does not greatly impact the accuracy. Specifically, we sample the training data of ANALOGYKB twice with different random seeds and run our method on the benchmarks in Table 5. Then, we implement a t-test on the two results with a 0.05 significance level. The result is not significant (p-value: 0.208), and thus we cannot reject the null hypothesis (H0: r1 − r2 = 0, where ri = (Acc. of E-KAR, Acc. of BATS, Acc. of UNIT 2, Acc. of UNIT 4, Acc. of Google, Acc. of SAT)). Furthermore, we fix the training data of ANALOGYKB and run our method on the benchmarks in Table 5 twice with different random seeds. The result is again insignificant (p-value: 0.250), and thus we cannot reject the null hypothesis (H0: r1 − r2 = 0).
For the results in Figure 4, we conduct a statistical significance test on Data and Datasame. We average the accuracy of the two settings and implement a t-test with a 0.05 significance level. The null hypothesis H0 is r1 − r2 = 0 and H1 is r1 − r2 > 0, where r1 and r2 are the lists of the benchmarks’ average accuracy of Data and Datasame in the Pre-trained and Fine-tuned settings. The result is significant (p-value: 0.012), and we can reject the null hypothesis H0. Thus, we can conclude that analogies of analogous relations in ANALOGYKB are rather important for models in the analogy recognition task.

/* Task prompt */
Please make analogies.
/* Examples */
input: artist is to paintbrush as magician is to
output: wand
input: razor is to shave as knife is to
output: cut
...
/* Test data */
input: classroom is to desk as church is to
output: pew

Table 11: Prompt for LLMs for the analogy generation task. Generated texts by LLMs are highlighted.

Data Size   Hit@k   E-KAR   UNIT 4   SAT
100K        1       30.00   38.00    25.00
            3       33.00   44.00    25.00
            5       33.00   44.00    26.00
500K        1       39.00   53.00    38.00
            3       42.00   58.00    38.00
            5       42.00   63.00    41.00
1M          1       57.00   80.00    64.00
            3       62.00   86.00    76.00
            5       66.00   91.00    84.00

Table 12: The model trained on data of different sizes is T5-Large (770M).

Model Size        Hit@k   E-KAR   UNIT 4   SAT
T5-small (60M)    1       18.00   18.00    14.00
                  3       18.00   21.00    15.00
                  5       18.00   22.00    15.00
T5-base (220M)    1       22.00   31.00    28.00
                  3       23.00   31.00    34.00
                  5       25.00   31.00    34.00
T5-large (770M)   1       57.00   80.00    64.00
                  3       62.00   86.00    76.00
                  5       66.00   91.00    84.00

Table 13: Analogy generation results (Hit@k) for T5 models of different sizes.

D Analogy Generation Task

D.1 Training Process
To construct the training data, we convert A:B::C:D to “A is to B as C is to D” and let T5-Large generate the concept D given the input text “A is to B as C is to”. The training settings are: batch size = 32, learning rate = 3e-5, dropout rate = 0.1, and training epochs = 20.
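A minimal sketch of the text-to-text formulation in Appendix D.1, assuming HuggingFace Transformers, is shown below; the training loop and hyperparameters are omitted, and the example analogies are taken from Table 11.

```python
# Sketch of the D.1 format: the model reads "A is to B as C is to" and produces D.
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained("t5-large")
model = T5ForConditionalGeneration.from_pretrained("t5-large")

def format_example(a, b, c, d=None):
    source = f"{a} is to {b} as {c} is to"
    return (source, d) if d is not None else source

# Training pair (teacher forcing): loss is computed against the target concept D.
src, tgt = format_example("razor", "shave", "knife", "cut")
batch = tokenizer(src, return_tensors="pt")
labels = tokenizer(tgt, return_tensors="pt").input_ids
loss = model(**batch, labels=labels).loss

# Inference: generate a candidate completion for D.
gen = model.generate(**tokenizer(format_example("classroom", "desk", "church"),
                                 return_tensors="pt"), max_new_tokens=5)
print(tokenizer.decode(gen[0], skip_special_tokens=True))
```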
D.2 The impact of data sizes and model sizes
For the analogy generation task, we examine the effects of training data size and model size on model performance. The results in Tables 12 and 13 show that: 1) by incorporating a larger volume of data from ANALOGYKB, we observe a gradual improvement in model performance, revealing the essential role of ANALOGYKB; 2) only larger models with enough training data can boost the ability to generate reasonable analogies.

D.3 Results on Six Benchmarks in Analogy Generation Tasks
We expand the experiments in Table 6 to six analogy benchmark tasks. The results in Table 14 indicate that, compared to analogies with simple and same relations, ANALOGYKB is more crucial for models to understand analogies with more abstract and complex relations, such as E-KAR, UNIT 4, and SAT.

Model              E-KAR   UNIT 4   SAT     BATS    UNIT 2   Google
vanilla T5         13.00   17.00    8.00    38.00   35.00    45.00
AnalogyT5same      42.00   63.00    37.00   75.00   73.00    94.00
AnalogyT5          57.00   80.00    64.00   80.00   84.00    95.00
InstructGPT003     61.00   70.00    60.00   82.00   79.00    94.00
+ Human            68.00   76.00    74.00   85.00   83.00    98.00
+ ANALOGYKBsame    64.00   77.00    77.00   83.00   85.00    100.00
+ ANALOGYKB        75.00   80.00    85.00   88.00   88.00    100.00
ChatGPT            58.00   76.00    78.00   84.00   84.00    96.00
+ Human            64.00   81.00    80.00   88.00   88.00    100.00
+ ANALOGYKBsame    64.00   80.00    81.00   92.00   91.00    100.00
+ ANALOGYKB        69.00   92.00    91.00   96.00   94.00    100.00

Table 14: The accuracy of different methods on the six analogy benchmark tasks in the analogy generation task.

D.4 Case Study
We are curious whether LMs trained on ANALOGYKB can generalize to novel analogies. After manual inspection, we observe from Table 15 that AnalogyT5 can generate a reasonable concept D for the input. AnalogyT5 also generates reasonable analogies of analogous relations, such as “triangle” is to “area” as “cube” is to “volume”. However, analogies about adjectives are more error-prone, possibly due to the paucity of adjectives in ANALOGYKB. We also discover that training on ANALOGYKB enables LMs to generate reasonable analogies by changing concept B while holding A (i.e., electron) and C (i.e., earth) fixed.

Input                                              Completion
Mcdonald is to America as Samsung is to            south korea
oxygen is to breathe as brain is to                thinking
terrestrial is to land as aquatic is to            water
meticulous is to careful as ascetic is to          asceticism
triangle is to area as cube is to                  volume
electron is to nucleus as earth is to              sun
electron is to electric force as earth is to       gravity
electron is to atom as earth is to                 solar system

Table 15: Randomly selected and novel analogies generated by AnalogyT5. Novel generations are concept pairs not found in the training set of AnalogyT5. Whether an analogy is considered plausible or not is decided by human annotators.

D.5 Out-of-domain Analogy Dataset
SCAN (Czinczoll et al., 2022) is an analogy dataset consisting of 449 analogy instances clustered into 65 full-concept mappings. The overlap rate of ANALOGYKB with SCAN is only 2.67%. An example mapping in SCAN is shown in Table 16. Unlike previous analogy datasets, SCAN mainly contains metaphorical and scientific analogies, which are abstract and thus rarely appear in corpora and are difficult for LMs. In addition, each concept in SCAN has only one token, and SCAN is not confined to the word analogy task due to its full-concept mappings.

Target      Source
Argument    War
Debater     Combatant
Topic       Battleground
Claim       Position
Criticize   Attack
Rhetoric    Maneuver

Table 16: Example mappings in SCAN. For a source concept, multiple related attributes are mapped to corresponding attributes of the target concept.

Baseline. The original paper evaluates the analogical capabilities of GPT-2 and BERT on the SCAN dataset. The authors convert each analogy instance to “If A is like B, then C is like D” and force the models to predict the last token of the sentence. For GPT-2, the model needs to generate the last token given the input text “If A is like B, then C is like”. For BERT, the authors first mask D as “If A is like B, then C is like [MASK]” and let the model predict the word D. In addition, the authors fine-tune the LMs on the 1,500-sized set of BATS (i.e., + BATS) and investigate whether the models learn about analogical reasoning in general after training on BATS. We follow this setting and randomly sample 1,500 data points from ANALOGYKB and fine-tune the LMs on the sample data (i.e., + ANALOGYKB). To prove the necessity of analogies of analogous relations, we randomly sample 1,500 analogies of the same relations as an ablated variant (i.e., + ANALOGYKBsame). We also add LMs trained on the 800 data points of E-KAR (i.e., + E-KAR). We further explore the performance of LLMs on the SCAN dataset. Specifically, we also adopt the prompt in Table 11 to let LLMs generate the word D. Since each concept in SCAN has only one token, we can obtain the top 5 results from InstructGPT through the OpenAI API.
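For illustration, the BERT variant of the SCAN probe described above can be sketched with a HuggingFace fill-mask pipeline; the checkpoint name (`bert-large-uncased`) and the example triple (drawn from the Table 16 mapping) are assumptions, not the exact setup of Czinczoll et al. (2022).

```python
# Sketch of the SCAN probe: BERT predicts the masked target D in
# "If A is like B, then C is like [MASK]". GPT-2 analogously predicts the last token.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-large-uncased")

def scan_predictions(a, b, c, top_k=5):
    prompt = f"If {a} is like {b}, then {c} is like {fill.tokenizer.mask_token}."
    return [(p["token_str"].strip(), p["score"]) for p in fill(prompt, top_k=top_k)]

# e.g. the "argument is war" mapping: debater should map to combatant
print(scan_predictions("argument", "war", "debater"))
```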
Evaluation Metrics. Following Czinczoll et al. (2022), we report accuracy, recall@5 and the mean reciprocal rank (MRR) to compare the performance of models. To reduce computing, we only consider the MRR of the first token of the target word among the top 10 predicted tokens. The RR of a label is 0 if it is not in the top 10 tokens.

Training Process. The training settings of GPT-2 and BERT are: batch size = 128, learning rate = 3e-5, dropout rate = 0.1 and training epoch = 10.
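A small sketch of the metrics just described is given below; it assumes each prediction is already a ranked list of candidate tokens, which is a simplification of the actual evaluation code.

```python
# Sketch: accuracy, recall@k, and MRR over ranked token predictions.
# Following the description above, the reciprocal rank is 0 if the gold token
# is not among the top-10 predictions.
def evaluate(ranked_predictions, gold_tokens, k=5, mrr_cutoff=10):
    acc = rec_k = mrr = 0.0
    for preds, gold in zip(ranked_predictions, gold_tokens):
        acc += preds[0] == gold
        rec_k += gold in preds[:k]
        rank = preds[:mrr_cutoff].index(gold) + 1 if gold in preds[:mrr_cutoff] else 0
        mrr += 1.0 / rank if rank else 0.0
    n = len(gold_tokens)
    return {"accuracy": acc / n, f"recall@{k}": rec_k / n, "mrr": mrr / n}
```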
Figure 8: The screenshots of the instructions and annotation interface.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12603–12621 August 11-16, 2024 ©2024 Association for Computational Linguistics To Distill or Not to Distill? On the Robustness of Robust Knowledge Distillation Abdul Waheedξ Karima Kadaouiξ Muhammad Abdul-Mageedξ,γ,λ ξMBZUAI γThe University of British Columbia λ Invertible AI {abdul.waheed,karima.kadaoui}@mbzuai.ac.ae [email protected] Abstract Arabic is known to present unique challenges for Automatic Speech Recognition (ASR). On one hand, its rich linguistic diversity and wide range of dialects complicate the development of robust, inclusive models. On the other, current multilingual ASR models are compute-intensive and lack proper comprehensive evaluations. In light of these challenges, we distill knowledge from large teacher models into smaller student variants that are more efficient. We also introduce a novel human-annotated dataset covering five under-represented Arabic dialects for evaluation. We further evaluate both our models and existing SoTA multilingual models on both standard available benchmarks and our new dialectal data. Our best-distilled model’s overall performance (45.0% WER) surpasses that of a SoTA model twice its size (SeamlessM4T-large-v2, WER=47.0%) and its teacher model (Whisper-large-v2, WER=55.1%), and its average performance on our new dialectal data (56.9% WER) outperforms all other models. To gain more insight into the poor performance of these models on dialectal data, we conduct an error analysis and report the main types of errors the different models tend to make. The GitHub repository for the project is available at https: //github.com/UBC-NLP/distill-whisper-ar. 1 Introduction There have been significant advancements in multilingual automatic speech recognition (ASR) in both training methodologies and architectures. Models such as OpenAI’s Whisper (Radford et al., 2023) and Meta’s SeamlessM4T (Communication et al., 2023) can transcribe speech from languages in the order of the hundreds, albeit with varying degrees of accuracy. Especially for low-resource languages, these models do not perform well (Radford et al., 2023; Williams et al., 2023; Talafha et al., 2023a). Arabic, for example, poses significant challenges to these multilingual models and hence is the object of the current work. Arabic can be classified into three broad categories, namely: Classical Arabic (CA), used in early literature and religious texts; Modern Standard Arabic (MSA), the ‘high’ variety used in official documents and in the media; and Dialectal Arabic (DA), the collection of ‘low’ varieties used in day-to-day conversations (Bouamor et al., 2014). DA can vary extensively at the regional level (e.g. Gulf vs Maghrebi), country level (e.g. Egyptian vs Sudanese), and sub-country (e.g. Hourani or Northern Jordanian Dialect vs Urban or Madani dialect) (Habash, 2022; Abdul-Mageed et al., 2020; Shon et al., 2020; Abdul-Mageed et al., 2018). Due to the significant differences in lexicon, phonetics, and even grammar between these varieties, ASR systems trained on MSA alone cannot be reliably leveraged off-the-shelf for all Arabic speech. Developing effective models for DA can prove especially difficult, given the lack of standardized orthography, the scarceness of labeled data for many dialects, inconsistent use of diacritics, and use of code-switching (Ali et al., 2021). 
Although most multilingual and multimodal systems (e.g., (Radford et al., 2023; Barrault et al., 2023; Communication et al., 2023)) cover Arabic, their evaluation predominantly involves benchmarks established for MSA, such as FLEURS (Conneau et al., 2022), Common Voice (CV) (Ardila et al., 2020), and the Arabic Speech Corpus (ASC) (Halabi et al., 2016). Since Arabic exhibits substantial linguistic diversity, encompassing various varieties and dialects, evaluations conducted solely on MSA are inherently limited. Existing works aiming to address this gap, e.g., (Talafha et al., 2023b), lack thorough evaluation and do not cover current state-of-the-art (SoTA) models. To address this, we conduct a comprehensive evaluation of all recently developed models on a linguisti12603 cally diverse set of Arabic datasets. Beyond the challenge of inadequate evaluation, the deployment of massive multilingual multimodal systems such as SeamlessM4T (Communication et al., 2023) and Whisper (Radford et al., 2023) is hampered by the considerable computational resources they require during both training and inference. These efficiency issues pose a significant accessibility barrier, discriminating against populations with limited resources. To alleviate this concern, we employ a framework for knowledge distillation (Gandhi et al., 2023) from large models such as Whisper (Radford et al., 2023) into relatively compact models for Arabic speech recognition. We show that our distilled models are not only compute-efficient but their performance is on par or better compared to larger counterparts. In summary, the gaps in existing work include (1) the insufficient knowledge about the utility of recent multilingual speech model models on Arabic, including dialects, (2) the discrepancy in representing some Arabic dialects in existing dialectal benchmarks, and (3) the inefficiency of these models due to their large sizes which demands significant compute resources at both training and inference time. We address these limitations through a number of contributions, as follows: • We evaluate major multilingual speech models on a wide variety of standard benchmarks representing Arabic to identify their zero-shot performance. • To evaluate the models under diverse varieties, we introduce a never-seen in-house labeled ASR dataset covering five under-represented Arabic dialects. • We distill knowledge from large ASR models into relatively small, and hence more efficient, (student) models with minimal-to-no performance drops compared to the bigger (teacher) counterparts. The rest of the paper is organized as follows: Section 2 is a review of related works. In Section 3, we introduce knowledge distillation and outline our related methods and training strategies. In Section 4, we provide details about our experiments, and in Section 5 introduce and discuss our results. Section 6 delivers a thorough error analysis based on the model predictions on our new dialectal data. We conclude the work in Section 7. Finally, we oultine our limitations and ethical considerations in Sections 8 and 9, respectively. 2 Related Work Multilingual ASR. Recent efforts in ASR have focused on building massive multilingual systems (Communication et al., 2023; Barrault et al., 2023; Radford et al., 2023; Pratap et al., 2023b; Dhawan et al., 2023; Rekesh et al., 2023; Zhang et al., 2023a; Baevski et al., 2020a; Conneau et al., 2020). These multilingual models perform quite well for high-resource languages such as English across various evaluation settings. 
However, they often perform poorly for low-resource languages and in challenging settings (Williams et al., 2023; Talafha et al., 2023a; Chemudupati et al., 2023; Bhogale et al., 2023; Pratama and Amrullah, 2024; Radford et al., 2023). This suggests that a thorough evaluation of these systems for low-resource languages is needed. Arabic ASR. For Arabic, the performance of these models remains under-explored. While OpenAI’s Whisper model whisper-large-v3 (Radford et al., 2023) achieves 15.1% word error rate (WER) on Common Voice 15.0’s (Ardila et al., 2020) Arabic split and 9.6% WER on FLEURS (Conneau et al., 2022), a performance close to humanlevel, Talafha et al. (2023b) show that it is vulnerable to linguistic variations where its performance degrades substantially on several Arabic dialects. Furthermore, the performance of other multilingual systems such as SeamlessM4T (Communication et al., 2023), Universal Speech Model (USM) (Zhang et al., 2023a), and XLS-R (Babu et al., 2021b) on diverse Arabic varieties remains unknown. Efficiency. The size of these massive multilingual systems poses another challenge to their usability. To address this, Gandhi (2024) shows that speculative decoding (Leviathan et al., 2023) can expedite the generation from Whisper by a factor of two. Efficient transformer inference engines such as CTranslate2 (OpenNMT) based inference SYSTRAN can also improve the generation speed, despite having the same memory requirements. Moreover, although quantization techniques have been effective in reducing memory requirements (Frantar et al., 2022; Lin et al., 2023), they do not decrease the number of active parameters, leading to variable improvement in generation speed (Jin et al., 2024). 12604 Knowledge distillation. Knowledge distillation is a method used to transfer knowledge from large models to smaller ones, thereby reducing both memory and compute requirements (Hinton et al., 2015; Sanh et al., 2019; Gou et al., 2021; Lopes et al., 2017a; Kim and Rush, 2016). This technique has been effectively applied in various domains. For example, in computer vision applications, knowledge distillation results in compact and efficient models (Kaleem et al., 2024; Koohpayegani et al., 2020). Similarly, in diffusion models (Luo, 2023) and large language models (Xu et al., 2024), knowledge distillation produces small, efficient, and task-specific models. Yang et al. (2023) distill knowledge from multiple foundation models into small and dedicated speech recognition models. Ni et al. (2023) proposes cross-modality knowledge distillation from large language models into speech models. Knowledge distillation in speech. Ferraz et al. (2024) distill knowledge from a large Whisper model into small multilingual models but limit their evaluation to standard benchmarks in eight languages (Arabic not included in the set). Shao et al. (2023) apply a novel distillation approach to Whisper, reducing its size by 80-90% while also improving its performance. Chang et al. (2021) propose a layer-wise distillation approach that reduces the size of a Hubert model by 75% while increasing its processing speed by 73%, retaining most of the original model’s performance across multiple tasks. In addition to that, researchers have introduced methods for model compression, such as data-free knowledge distillation and teacher-student (TS) learning for domain adaptation (Lopes et al., 2017b; Manohar et al., 2018). 
These approaches involve training student models to mimic teacher models using various strategies, including Gaussian noise generation and sequence-level Kullback-Leibler (KL) divergence (Kullback and Leibler, 1951). Among the different knowledge distillation approaches highlighted above, standard student-teacher distillation is a task- and modality-independent framework that is simple yet effective. Gandhi et al. (2023) use this framework to distill Whisper into small monolingual models for English using large-scale pseudo-labels. However, their work is limited to a high-resource language. We take inspiration from Gandhi et al. (2023) and distill Whisper into small models for Arabic, and we perform a thorough evaluation. One difference between our work and that of Gandhi et al. (2023) is that, while there is limited information about the out-of-distribution datasets in their work and whether they are part of the teacher’s training data, we employ new dialectal speech data never seen by the model.

3 Knowledge Distillation

Knowledge distillation is a method of transferring knowledge from a large model (teacher) to a relatively small model (student). The student model is trained to mimic the behavior of the teacher model both at the dense representation level and the sequence level (Hinton et al., 2015; Sanh et al., 2020; Kim and Rush, 2016). Following Gandhi et al. (2023), who distill a Whisper model for English ASR, we first generate large-scale pseudo-labels from the teacher model and apply a threshold to filter the output. We then train the student model with the high-quality filtered pseudo-labels as ground truth, which can be expressed as:

\mathcal{L}_{\mathrm{PL}} = -\sum_{i=1}^{N'} \log P\left(y_i \mid \hat{y}_{<i}, H_{1:M}\right) \quad (1)

The student model is also trained to minimize the discrepancy between the probability distributions over tokens of the student and teacher models, based on KL divergence:

\mathcal{L}_{\mathrm{KL}} = \sum_{i=1}^{N} \mathrm{KL}\left(Q_i, P_i\right) \quad (2)

Objective: We take the weighted sum of (1) and (2) to get the final objective, which can be written as:

\mathcal{L}_{\mathrm{KD}} = \alpha_{\mathrm{KL}} \mathcal{L}_{\mathrm{KL}} + \alpha_{\mathrm{PL}} \mathcal{L}_{\mathrm{PL}} \quad (3)

We use the same values for αKL (0.8) and αPL (1.0) as Gandhi et al. (2023). Details about our teacher and student models, along with the training data, can be found in Table 2.

4 Experiments

4.1 Datasets

Common Voice. CV (Ardila et al., 2020) is a widely used multilingual benchmark for speech recognition. In our experiments, we use the test and validation splits of four different CV versions (6.1, 9.0, 11.0, 15.0), which have been widely used in other work for training and evaluating Arabic ASR models (Talafha et al., 2023b; Waheed et al., 2023). Upon inspection of the data, we found it to be composed of mostly MSA along with some CA speech.

Dia.    Utt.      Words      Words/Utt.   Hours
ALG     815       8,900      10.92        0.97
JOR     2,671     28,291     10.59        3.27
PAL     1,097     15,152     13.81        1.67
UAE     3,701     41,345     11.17        4.42
YEM     2,283     27,605     12.09        2.94
Total   10,567    121,293    11.48        13.29
Avg.    2,113.4   24,258.6   11.72        2.65

Table 1: Utterance and word count statistics across the different dialects from our in-house dataset. Dia.: Dialect. Avg.: Average. Utt.: Utterance.

Multi-Genre Broadcast. Multi-genre broadcast (MGB) (Ali et al., 2019a,b, 2017) is a challenge covering a wide range of Arabic speech understanding tasks, such as speech recognition, speaker identification, and dialect identification. We experiment with three variants, namely MGB2, MGB3, and MGB5. MGB2 has roughly 70% MSA, with the remainder containing other dialects (Ali et al., 2016).
4 Experiments

4.1 Datasets

Common Voice. CV (Ardila et al., 2020) is a widely used multilingual benchmark for speech recognition. In our experiments, we use the test and validation splits of four different CV versions (6.1, 9.0, 11.0, 15.0), which have been widely used in other work for training and evaluating Arabic ASR models (Talafha et al., 2023b; Waheed et al., 2023). Upon inspection of the data, we found it to be composed mostly of MSA along with some CA speech.

Multi-Genre Broadcast. Multi-genre broadcast (MGB) (Ali et al., 2019a,b, 2017) is a challenge covering a wide range of Arabic speech understanding tasks such as speech recognition, speaker identification, and dialect identification. We experiment with three variants, namely MGB2, MGB3, and MGB5. MGB2 is roughly 70% MSA, with the remainder containing other dialects (Ali et al., 2016). MGB3 is predominantly composed of Egyptian Arabic, while MGB5 focuses on Moroccan Arabic.

FLEURS. FLEURS (Conneau et al., 2022) is a multilingual collection of parallel speech corpora. We use the dev and test splits of the Arabic subset "ar_eg", which contains MSA spoken with an Egyptian accent, to evaluate our models in a zero-shot setting. We also use the train split in distillation.

In-House Data. In response to the notable scarcity of publicly available dialectal data, we manually curate a dataset representing five underrepresented Arabic dialects, namely Algerian (ALG), Jordanian (JOR), Palestinian (PAL), Emirati (UAE), and Yemeni (YEM), spanning four dialectal regions (North African, Levantine, Gulf, and Yemeni). We task native speakers of each dialect with annotating segments from local TV series sourced from YouTube. Our dataset comprises a total of 10,567 utterances and 121,293 words (2,113 utterances and 24,258 words per dialect, on average), amounting to over 13 hours in total. Individual statistics for each dialect can be found in Table 1.

Dia.    Utt.     Words     Words/Utt.  Hours
ALG     815      8,900     10.92       0.97
JOR     2,671    28,291    10.59       3.27
PAL     1,097    15,152    13.81       1.67
UAE     3,701    41,345    11.17       4.42
YEM     2,283    27,605    12.09       2.94
Total   10,567   121,293   11.48       13.29
Avg.    2,113.4  24,258.6  11.72       2.65

Table 1: Utterance and word count statistics across the different dialects of our in-house dataset. Dia.: Dialect. Utt.: Utterances. Avg.: Average.

4.2 Models

We evaluate a wide range of multilingual speech recognition models on different varieties of Arabic from the aforementioned datasets, including standard and accented MSA and various Arabic dialects. We also distill small dedicated models (dedicated in the sense that they are solely focused on Arabic and only handle ASR) from larger Whisper models. We categorize these systems as follows:

4.2.1 Supervised Baselines

We evaluate two openly available supervised baselines along with a Whisper model that we fine-tune on Arabic ASR in a supervised setting. The first two models are Wav2Vec2-XLS-R (Conneau et al., 2020; Babu et al., 2021a), trained on CV8.0, which has significant overlap with other versions of the CV dataset, and HuBERT (Hsu et al., 2021), trained on MGB-3 (Ali et al., 2017) and the Egyptian Arabic Conversational Speech Corpus (5.5 hours). The third model is whisper-large-v2, which we fine-tune on CV11.0 and MGB-2. We evaluate all three models on the datasets listed in Section 4.1. This includes the in-distribution test and dev splits of MGB-2 and CV11.0.

4.2.2 Zero-Shot Models

Large multilingual speech models are acclaimed for transcending language and task barriers. In particular, these models are usually claimed to demonstrate proficiency in a variety of speech tasks on English in the zero-shot setting. However, it is crucial to conduct thorough evaluations of these models on other languages and dialects and under diverse conditions. Hence, our objective is to assess a wide array of zero-shot models on a wide range of Arabic speech recognition datasets in order to gauge their robustness and generalization capability beyond English. We focus on a number of recently introduced models that have gained popularity in the community, as well as existing commercial systems, as we explain next.

Whisper. Whisper (Radford et al., 2023) is a multilingual speech model capable of speech recognition and translation across languages, including Arabic. We evaluate four variants of Whisper, namely small (W-S), medium (W-M), large-v2 (W-L-v2), and large-v3 (W-L-v3). We use all the default parameters for decoding, with a maximum sequence length of 225 tokens.
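For readers who want to reproduce the zero-shot protocol in spirit, the sketch below transcribes Arabic audio with an off-the-shelf Whisper checkpoint using default decoding and a 225-token cap, then scores it with WER. The checkpoint name, the 16 kHz audio handling, and the use of jiwer are our assumptions rather than details stated in the paper.

```python
import jiwer
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor

MODEL_ID = "openai/whisper-small"  # assumed checkpoint; medium/large-v2/large-v3 are evaluated analogously
processor = WhisperProcessor.from_pretrained(MODEL_ID)
model = WhisperForConditionalGeneration.from_pretrained(MODEL_ID).eval()

def transcribe(waveform, sampling_rate=16_000):
    """Zero-shot Arabic transcription with default decoding parameters."""
    inputs = processor(waveform, sampling_rate=sampling_rate, return_tensors="pt")
    forced_ids = processor.get_decoder_prompt_ids(language="arabic", task="transcribe")
    with torch.no_grad():
        predicted_ids = model.generate(
            inputs.input_features,
            forced_decoder_ids=forced_ids,
            max_new_tokens=225,  # maximum sequence length used in the paper
        )
    return processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]

def corpus_wer(pairs):
    """Corpus-level WER over (waveform, reference_text) pairs."""
    hypotheses = [transcribe(waveform) for waveform, _ in pairs]
    references = [reference for _, reference in pairs]
    return jiwer.wer(references, hypotheses)
```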
SeamlessM4T. Multimodal multilingual speech models are also capable of generating high-quality transcripts across languages (Communication et al., 2023). However, they lack a comprehensive evaluation on languages besides English. We address this by evaluating three available variants of SeamlessM4T (medium (SM4T-M), large-v1 (SM4T-L-v1), and large-v2 (SM4T-L-v2)) on Arabic ASR in a zero-shot setting. We use all the default parameters provided in the model's inference pipeline.

Commercial Systems. We broaden our evaluation beyond publicly accessible ASR models by incorporating proprietary platforms, with a focus on Amazon's ASR system. Due to cost considerations, our evaluation is exclusively centered on the Amazon Transcribe service (https://aws.amazon.com/transcribe/) applied to our in-house data.

4.2.3 Distilled Models

As described in Section 3, we distill whisper-large-v2 into seven different student models (see Table 2). We provide more details about the teacher and student models and the distillation data here.

Teacher and Student Models. We use a whisper-large-v2 checkpoint for pseudo-labeling and the same model as the teacher during training. We train four variants of the student model, with different configurations in terms of the number of layers removed. Following Gandhi et al. (2023), we initialize the student models with maximally spaced layers from the encoder and decoder blocks of the teacher model. We provide more details about our distilled models in Table 2.

Model         # EL  # DL  Data
W-L-v2        32    32    N/A
DW-8-8        8     8     100K
DW-16-16      16    16    100K
DW-32-16      32    16    100K
DW-16-32      16    32    100K
DW-16-16++    16    16    500K
DW-32-16++    32    16    500K
DW-16-16-1M   32    16    1M

Table 2: The student models are initialized from maximally spaced layers of the teacher model. The size of the data is stated as the number of segments. All distilled models are trained for ten epochs. W-L-v2: Whisper-large-v2. #: Number of. DW: Distill-Whisper. EL: Encoder Layers. DL: Decoder Layers.

Training Data. We randomly sample 100K and 500K segments from a mixture of MGB2 (Ali et al., 2016), MGB3 (Ali et al., 2017), FLEURS (Conneau et al., 2022), Common Voice 15.0 (Ardila et al., 2020), QASR (Mubarak et al., 2021), the Arabic Speech Corpus (Halabi et al., 2016), and the Massive Arabic Speech Corpus (MASC) (Al-Fetyani et al., 2021). This amounts to roughly 100 and 500 hours of pseudo-labeled speech data, respectively. We explicitly include only the train split of each dataset.

4.3 Experimental Setup

We conduct all of our training and evaluation experiments on 8xA100/4xA100 (40G) GPU nodes. For evaluation, we use the default decoding parameters of the corresponding models unless otherwise specified. We use 225 as the maximum sequence length throughout our experiments and report Word Error Rate (WER) and Character Error Rate (CER) as our evaluation metrics. For distillation, we use a WER threshold λ of 80% to filter out low-quality transcriptions from the pseudo-labels for the results reported in Table 3. We also experiment with different threshold values and discuss the findings in Section 5. Although this threshold for the main results may seem high, Gandhi et al. (2023) find that going from a threshold of 80 to 5 yields only a marginal improvement of one point in average WER across different in-distribution and out-of-distribution evaluation sets. In addition, we believe that a high threshold value also helps approximate performance in settings where no labeled data is available to conduct the filtering, especially when labeled data is scarce. Due to computing limitations, we do not conduct any training hyperparameter search and directly apply the configuration used by Gandhi et al. (2023). For the distillation process, we report our key parameters in Table 5 (Appendix B).
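The WER-based filtering step can be summarized with a short helper: pseudo-labels whose WER against the available reference exceeds the threshold λ are discarded before student training. The implementation below (using jiwer and a simple list-of-dicts data layout) is our assumption about how such filtering might be written, not the authors' code.

```python
import jiwer

def filter_pseudo_labels(segments, wer_threshold=0.80):
    """Keep only segments whose pseudo-label is close enough to the reference.

    segments: iterable of dicts with keys "reference" (normalized ground truth)
              and "pseudo_label" (normalized teacher transcription).
    wer_threshold: the lambda in the paper; 0.80 for the main results,
                   smaller values (e.g. 0.20, 0.10) in the ablations.
    """
    kept = []
    for segment in segments:
        if not segment["reference"].strip():
            continue  # cannot score against an empty reference
        wer = jiwer.wer(segment["reference"], segment["pseudo_label"])
        if wer <= wer_threshold:
            kept.append(segment)
    return kept
```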
Text Preprocessing. In everyday writing, Arabic is characterized by inconsistency in the use of diacritics and by letter variations (e.g., أ vs. ا). This linguistic variability poses a challenge for ASR evaluation, as transcriptions that are phonetically accurate and intelligible to a native speaker might still be marked as errors due to strict lexical mismatches. To address this, we follow Talafha et al. (2023b) and Chowdhury et al. (2021) in standardizing and normalizing the text. Specifically, we (1) remove any special characters and diacritics, (2) remove all Latin characters, since we are not concerned with code-switching, (3) convert all Eastern Arabic digits (i.e., ١, ٢, ٣) to Arabic numerals (i.e., 1, 2, 3), and (4) normalize all alef variations to the form with no hamza.
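A minimal sketch of this normalization pipeline is given below; the exact regular expressions and Unicode ranges are our reconstruction of the four steps rather than the authors' script (Arabic punctuation handling, for instance, is omitted for brevity).

```python
import re

ARABIC_DIACRITICS = re.compile(r"[\u064B-\u0652\u0670]")   # fathatan ... sukun, dagger alef
NON_ARABIC_SPECIAL = re.compile(r"[^\u0600-\u06FF0-9\s]")  # drops Latin letters and special characters
EASTERN_DIGITS = str.maketrans("٠١٢٣٤٥٦٧٨٩", "0123456789")
ALEF_VARIANTS = re.compile(r"[أإآٱ]")

def normalize_arabic(text: str) -> str:
    """(1) strip diacritics/special characters, (2) drop Latin characters,
    (3) map Eastern Arabic digits to 0-9, (4) normalize alef variants to bare alef."""
    text = ARABIC_DIACRITICS.sub("", text)
    text = NON_ARABIC_SPECIAL.sub(" ", text)
    text = text.translate(EASTERN_DIGITS)
    text = ALEF_VARIANTS.sub("ا", text)
    return re.sub(r"\s+", " ", text).strip()
```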
5 Results and Discussion

We evaluate all models on four versions of CV (6.1, 9.0, 11.0, 15.0), MGB-2, MGB-3, MGB-5, FLEURS, and our five novel dialectal sets. We report WER and CER scores on the orthographic and normalized predictions (as per Section 4.3) in Table 3 for the test splits and in Table 6 (Appendix C) for the dev splits. CV15.0 results are included in Table 3, and results for the other versions can be found in Table 7 (Appendix C).

Commercial Systems and Supervised Models. The supervised fine-tuned (SFT) baselines are trained on MGB-2, MGB-3, and the CV datasets. The other evaluation sets thus represent out-of-distribution data. As a result, we see that supervised HuBERT (15.4) and Whisper (25.2) outperform all other models on the in-distribution data MGB-2 and MGB-3, respectively. However, these baselines often perform poorly on all other evaluation sets that are not in their training data. On our private in-house data, the supervised models usually produce more incorrect words than there are words in the corresponding reference. We find varying levels of transcription difficulty for these models when evaluated on distinct dialects and linguistic varieties. The Amazon Transcribe system performs well on our in-house data compared to the supervised baselines. It gives 45.5% WER on JOR, which is not far from the best WER of 41.5% achieved by SeamlessM4T-large-v2. We find that it struggles with ALG, in line with the trend observed for all models.

Zero-Shot Models. We find that both the Whisper and SeamlessM4T models perform quite well on CV (we refer to CV15.0 as CV in the following) and FLEURS in the zero-shot setting. More specifically, the best Whisper model shows WER scores of 15.8% and 11.3% on CV and FLEURS, respectively, while the best SeamlessM4T achieves 9.7% and 7.6% WER. MMS shows the lowest performance among the zero-shot models and the second lowest across all model types, with an 82.5% average WER. Meanwhile, a consistent challenge across all models is observed with MGB-5, followed by MGB-3. These two datasets involve dialects, namely the MOR and EGY dialects, respectively. The transition from MSA to dialects thus marks a significant drop in performance, indicating the models' difficulty in adapting to dialectal variation. This pattern becomes even more apparent when looking at the results on our in-house data. All models particularly struggle with the ALG and YEM dialects, whereas JOR and PAL are less challenging to transcribe. This underscores the distinct issues that dialectal diversity poses to current ASR systems. The best-performing model overall on both the existing datasets and our new data is SeamlessM4T-large-v2, showing a significant improvement in performance compared to its previous version. Although the two systems are the same size, Barrault et al. (2023) attribute the higher performance to its novel UnitY2 architecture. We also find that both the SeamlessM4T and Whisper family models consistently improve as model size increases, except for SeamlessM4T-medium (48.1% WER), which outperforms the SeamlessM4T-large-v1 (51.1% WER) model on average.

Distilled Models. We distill a wide range of models of varying sizes from Whisper-large-v2 by reducing the number of encoder and decoder blocks. (We call the models trained on 500K segments DW-16-16++ and DW-32-16++, and the model trained on 1M segments DW-16-16-1M.) Our smallest distilled model, which has eight encoder and eight decoder blocks (approximately a 75% reduction in parameters from the teacher model), outperforms Whisper-medium with a WER of 64.8% compared to 65.4%, while being half its size (Table 3). When comparing our distilled models with smaller Whisper variants, we find that DW-16-16 outperforms Whisper-medium by over 12 points, even though the two models are similar in size. As expected, we observe that increasing the number of layers in the distilled model enhances its performance. Consequently, our best-performing distilled model, DW-32-16++ (WER 45.0%), surpasses all other models, including Whisper-large-v3 (WER 49.5%) and SeamlessM4T-L-v2 (WER 47.0%), despite being half its size (see Table 3). To sum up, our best-performing distilled models yield the best results in terms of WER on four out of the ten evaluated datasets and are on par with the overall best model in terms of average WER while being half its size. Moreover, when looking at the average performance across the in-house data only, our best distilled model outperforms all other systems with a 56.9% WER, whereas the best zero-shot model (SeamlessM4T-large-v2) has 61.74% WER and the teacher model Model Size CV15.0 MGB2 MGB3 MGB5 Fleurs In-house Data Avg.
ALG JOR PAL UAE YEM Normalized + No Diacritics Amazon -/-/-/-/-/-/83.6/70.2 45.5/25.6 52.4/29.0 58.8/40.8 64.7/43.5 61.0/41.8 XLS-R 0.96 89.7/39.4 97.6/53.1 98.7/61.6 99.5/68.0 94.9/43.9 99.7/67.0 99.1/61.4 99.1/61.1 99.4/64.6 99.5/63.6 97.7/58.4 HuBERT 0.31 55.2/18.9 49.6/17.3 25.2/9.5 92.4/45.5 34.9/10.9 96.8/44.3 65.2/23.3 73.8/27.9 83.0/36.7 90.5/38.8 66.7/27.3 W-FT 1.5 35.8/21.9 15.3/8.1 48.9/26.9 101.4/62.3 9.8/3.4 115.5/69.6 67.8/37.2 69.6/35.4 105.9/69.1 107.1/64.8 67.7/39.9 MMS-all 1.0 106.4/80.9 39.3/13.4 75.3/34.6 89.7/45.9 23.8/6.3 100.2/78.0 89.8/55.4 99.9/75.1 100.1/78.1 100.2/76.6 82.5/54.4 SM4T-M 1.2 16.3/5.7 19.5/9.0 41.4/21.7 83.8/46.6 8.7/3.6 81.1/39.7 46.3/15.9 55.2/20.1 59.8/24.7 68.9/29.5 48.1/21.7 SM4T-L-v1 2.3 19.8/7.3 21.8/10.5 44.4/22.6 89.9/52.1 11.1/5.1 87.9/47.8 50.7/18.8 57.5/23.1 61.8/27.4 72.2/32.5 51.7/24.7 SM4T-L-v2 2.3 11.3/3.5 17.3/8.7 36.2/18.6 89.1/53.7 7.6/4.0 92.1/52.0 41.5/14.6 49.5/17.2 55.9/23.3 69.7/30.7 47.0/22.6 W-S 0.24 40.3/16.4 46.8/24.7 81.4/51.9 226.5/164.8 28.2/8.7 130.7/84.7 68.6/32.9 73.8/36.3 97.8/59.7 107.1/66.7 80.8/45.7 W-M 0.77 29.8/13.2 33.1/18.5 64.3/39.5 127.7/88.3 16.4/5.1 103.7/69.9 50.5/21.1 58.7/24.7 82.5/52.6 86.8/52.0 65.4/38.5 W-L-v2 1.5 19.6/7.8 26.5/15.3 53.0/33.0 99.2/68.9 11.4/3.6 106.4/71.7 42.3/17.0 51.1/22.3 63.8/38.2 77.3/45.5 55.1/32.3 W-L-v3 1.5 15.8/5.2 15.9/7.6 35.7/17.3 79.8/44.6 9.7/3.2 101.9/65.4 43.6/16.3 53.4/22.7 63.4/32.7 76.1/38.9 49.5/25.4 DW-8-8 0.44 32.7/12.3 39.6/17.8 64.9/36.6 89.7/53.0 29.8/11.4 91.4/48.2 66.2/29.0 73.2/33.0 78.0/38.4 82.9/41.5 64.8/32.1 DW-16-16 0.80 22.1/7.2 26.0/10.8 50.5/25.1 82.4/43.3 18.8/6.6 83.0/38.5 50.4/18.2 61.0/23.3 64.6/27.7 72.7/31.6 53.2/23.2 DW-32-16 1.12 18.8/5.9 21.1/8.9 43.8/21.4 78.9/40.4 14.2/4.8 79.5/33.4 44.4/14.7 55.0/19.5 58.1/22.8 68.5/28.1 48.2/20.0 DW-16-32 1.22 21.5/7.3 25.0/10.7 49.1/26.3 83.0/47.5 18.4/6.0 84.3/44.0 49.8/18.0 60.3/25.4 64.4/29.0 73.8/36.8 53.0/25.1 DW-16-16++ 0.80 19.2/6.2 23.0/10.2 47.2/24.8 79.0/42.6 15.0/5.2 79.0/39.0 46.7/17.2 56.4/21.6 60.4/26.8 69.1/31.5 49.5/22.5 DW-32-16++ 1.12 17.1/5.5 19.7/8.8 40.7/20.3 76.6/40.6 11.1/3.1 74.6/33.3 41.6/13.4 51.4/18.8 53.5/21.1 63.5/26.8 45.0/19.2 Orthographic Amazon -/-/-/-/-/-/88.0/71.6 59.2/29.1 63.4/32.2 71.1/44.3 77.4/47.7 71.8/45.0 XLS-R 0.96 92.7/46.7 97.7/54.5 99.1/64.5 99.6/70.1 95.1/45.4 99.7/68.0 99.3/62.9 99.2/62.8 99.5/66.4 99.7/66.4 98.2/60.8 HuBERT 0.31 76.5/31.0 59.4/20.3 43.3/16.5 95.0/48.7 48.9/14.4 96.2/45.6 70.6/25.4 81.5/31.4 87.9/39.9 91.3/40.8 75.1/31.4 W-FT 1.5 70.0/33.8 29.4/10.9 60.1/32.2 105.0/64.3 28.7/7.3 114.5/70.3 75.1/39.0 81.3/38.7 113.7/70.9 110.1/65.6 78.8/43.3 MMS-all 1.0 106.0/82.5 40.3/14.0 77.7/38.1 90.4/48.5 28.8/7.8 100.2/77.8 91.5/56.2 100.0/75.8 100.1/78.4 100.1/76.8 83.5/55.6 SM4T-M 1.2 42.3/18.2 28.1/11.2 50.2/26.8 88.2/50.8 19.5/6.0 84.5/42.8 55.2/18.7 63.0/23.0 68.0/28.1 79.4/34.5 57.8/26.0 SM4T-L-v1 2.3 44.2/19.1 25.9/11.7 52.5/27.6 92.8/55.9 22.6/7.6 89.7/50.3 59.1/21.7 64.7/25.8 69.0/30.3 81.5/37.0 60.2/28.7 SM4T-L-v2 2.3 37.7/15.8 22.4/9.9 46.7/23.9 92.1/58.4 19.8/6.5 94.8/55.2 51.3/17.6 58.5/20.1 65.6/26.9 80.6/35.5 57.0/27.0 W-S 0.24 68.9/31.8 49.5/25.7 84.8/55.4 228.6/164.5 33.4/10.3 129.15/87.85 75.25/36.55 79.73/39.3 103.83/63 112.69/70.69 96.6/58.5 W-M 0.77 55.1/24.2 37.6/19.6 71.5/43.7 129.7/89.4 24.0/7.1 103.9/71.4 59.0/23.9 66.8/27.6 90.7/55.7 95.2/56.2 73.4/41.9 W-L-v2 1.5 46.9/19.6 33.7/16.9 60.6/37.7 101.1/71.1 19.7/5.6 106.9/74.6 51.2/19.6 60.2/25.2 73.2/41.2 86.9/50.1 67.4/38.3 W-L-v3 1.5 43.2/16.9 20.4/8.6 44.6/22.5 
82.0/47.7 16.4/4.8 103.8/68.9 52.7/18.9 64.3/26.4 72.3/35.9 86.0/43.3 58.6/29.4 DW-8-8 0.44 55.0/23.2 44.4/19.2 69.2/40.4 91.0/55.5 36.1/13.3 91.5/49.6 71.4/31.2 78.4/35.6 82.5/41.2 87.5/44.9 70.7/35.4 DW-16-16 0.80 48.0/18.9 33.2/12.5 57.1/29.6 84.1/46.2 26.2/8.5 83.8/40.2 57.8/20.5 68.2/26.2 72.0/31.0 80.0/35.6 61.0/26.9 DW-32-16 1.12 45.6/17.7 27.7/10.3 51.2/26.1 80.9/43.4 22.0/6.6 80.5/35.1 52.6/17.1 62.9/22.4 66.7/26.3 77.3/32.6 56.7/23.8 DW-16-32 1.22 47.4/18.8 29.9/11.8 55.5/30.7 84.7/50.2 26.0/7.9 84.8/45.7 57.4/20.4 67.4/28.2 71.7/32.3 81.5/40.9 60.6/28.7 DW-16-16++ 0.80 44.1/17.1 28.5/10.5 54.5/28.5 83.2/45.6 22.4/6.9 82.3/38.7 55.4/18.9 65.2/24.9 69.3/28.2 76.8/33.0 58.2/25.2 DW-32-16++ 1.12 44.7/17.3 25.2/10.0 48.8/25.2 79.0/43.7 20.2/5.0 76.4/35.4 50.0/15.9 60.1/21.8 63.2/24.7 73.5/31.5 54.1/23.1 Table 3: WER/CER scores after normalization and removing diacritics as well as on orthographic transcription. Average is the mean score across all the evaluation sets. All distilled models are trained with a filtering threshold of 80. We report the score on the test split of each dataset. Abbreviations. W - Whisper, FT - Finetuned, M Medium, L - Large, S - Small, D - Distil. (Whisper-large-v2) 67.6%, showing substantial improvement on unseen dialects. We report the average across benchmark and in-house data in Appendix C Table 8. Our results underscore the distilled models’ inherent efficiency and generalization to unseen dialects, possibly resulting from the mixture of data. This may imply that the process retains the critical linguistic and acoustic features necessary for high-quality ASR in a linguistically diverse setting. Orthographic, Normalized, and NonDiacritized Evaluation. To better understand the effect of normalization and diacritics removal, we calculate WER/CER on orthographic, normalized, and normalized+non-diacritized (ND) transcriptions of Whisper-large-v2. We report the results in Table 4. With normalization, the WER goes down on CV from 47.1% to 39.4%. Similarly, we see a near 50% drop in WER on FLEURS which suggests that the model is much more prone to miss diacritics than missing entire words. However, in the case of MGB-3 and MGB-5, we do not notice any significant changes after pre-processing, which again shows the poor generalization capability of Whisper on unseen and linguistically diverse data. Dataset Ortho Norm Norm + ND CV 47.1/18.9 39.4/17.0 19.4/6.8 MGB3 52.4/28.2 46.6/23.9 43.5/21.9 MGB5 85.2/52.2 83.6/49.5 83.0/49.1 Fleurs 20.3/5.9 17.4/5.0 11.6/3.7 Table 4: WER/CER scores on orthogonal, normalized, and without diacritics outputs produced by Whisperlarge-v2. Abbreviations: Norm - Normalized, ND No Diacritics. Effect of WER Threshold. The knowledge distillation framework that we follow (Gandhi et al., 2023) involves pseudo-labels filtered by WER. While we initially use a WER threshold of 80 in Table 3, we experiment with different values to find an optimal threshold value that yields better results across different evaluation sets while reducing the amount of data required, subsequently resulting in 12609 faster training. The summary of our results is illustrated in Figure 1 and the detailed results can be found in Table 9 in Appendix C. From our experiments with the DW-16-16 and DW-32-16 models (trained on 100K segments), we find that discarding examples where the WER is above 80% (amounting to about 28% of the total examples) results in the best overall performance across different evaluation setups closely followed by the 20% WER threshold. 
Both these models significantly outperform the base teacher model whisper-large-v2 and the similarly sized whisper-medium model. That being said, reducing the threshold from 20% to 10% worsens the models' performance. However, training the models without applying any filtering still outperforms the zero-shot baselines. Based on these results, we conclude that there exists a trade-off between the quality and quantity of the distillation data. This implies that we can distill small and compute-efficient language-specific speech recognition models without training on any labeled speech data while remaining on par with or better than the base models.

[Plot: average WER vs. filtering threshold λ for DW-32-16 and DW-16-16 (100K segments) on the benchmark and in-house sets, with zero-shot baselines shown as dotted lines.] Figure 1: Average WER on five MSA benchmarks and five dialects from our in-house data with different filtering thresholds. The dotted flat line represents Whisper-large-v2 (the teacher) in the zero-shot setting. Abbreviations: Bench - Benchmark. IH - In-house. ZS - Zero-shot.

Data Scaling. We train all of our models on 100K speech segments (≈100 hours) sampled from the mixture of over 3M segments (≈4000 hours) described in Section 4.1. We increase the data size from 100K to 500K and then up to 1M segments to study the effect of the quantity of the data. With the filtering threshold set to 20%, DW-16-16 trained on 500K segments outperforms the zero-shot teacher baseline on the MSA benchmark (36.7% WER compared to 42.0%) and is significantly better on the in-house data. This trend remains consistent after scaling the data to 1M segments: the model achieves 35.0% and 60.0% WER on the MSA benchmark and in-house data, respectively, compared to 42.0% and 68.0% for the zero-shot baseline, despite being half its size.

[Plot: average WER vs. amount of distillation data (100K, 500K, 1M segments) for DW-16-16 on the benchmark and in-house sets, with zero-shot baselines shown as dotted lines.] Figure 2: Average WER of the DW-16-16 model trained with different amounts of data. The dotted line represents the Whisper-large-v2 zero-shot baseline. Abbreviations: Bench - Benchmark. IH - In-house. ZS - Zero-shot.

6 Error Analysis

To gain a better understanding of the results, we conduct an error analysis on our in-house data by randomly sampling 20 sentences per dialect from each model's outputs, with the aim of identifying the specific types of errors present. We then categorize the errors into the following types:

MSA Translation: The transcription is semantically accurate but employs MSA words that differ from the dialectal words spoken in the utterance.
Hallucination: The transcription is both semantically and acoustically distant from the utterance.
Deterioration: The transcription is either gibberish (random characters) or involves excessive repetition of the same word or expression.
Incomplete Transcription: Parts of the utterance are omitted and do not appear in the transcription.
Empty Transcription: The model fails to generate a prediction at all.
Dialectal Inaccuracies: The mismatch between the prediction and the ground truth is of a dialectal nature. Instances such as unrecognized dialectal words, first names, cultural expressions, pronunciations (e.g., Emirati Arabic substituting ya (ي) for jim (ج)), and alternate orthographies fall into this category.

An example for each of these categories can be found in Table 10 in Appendix D.
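As a concrete sketch of how such an analysis can be organized, the snippet below flags high-error transcriptions by character error rate and tallies manually assigned categories. The use of jiwer, the data layout, and the threshold variable are our assumptions rather than details taken from the paper (the analysis that follows uses a 75% CER cut-off).

```python
from collections import Counter
import jiwer

ERROR_CATEGORIES = [
    "MSA Translation", "Hallucination", "Deterioration",
    "Incomplete Transcription", "Empty Transcription", "Dialectal Inaccuracies",
]

def flag_problematic(examples, cer_threshold=0.75):
    """Return examples whose CER against the reference exceeds the threshold.

    examples: iterable of dicts with "reference" and "hypothesis" strings.
    """
    return [ex for ex in examples
            if jiwer.cer(ex["reference"], ex["hypothesis"]) > cer_threshold]

def tally_categories(labeled_examples):
    """labeled_examples: dicts that additionally carry a manually assigned
    "categories" list (one transcription may belong to several categories)."""
    counts = Counter()
    for ex in labeled_examples:
        counts.update(c for c in ex["categories"] if c in ERROR_CATEGORIES)
    return counts
```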
Upon inspecting the initial results, we decided to further analyze the performance discrepancies among the ASR models by looking at the most problematic transcriptions. Due to the tedious nature of this exercise, we limit this part of the analysis to five models: the best supervised baseline (HuBERT), the best Whisper (W-L-v3), the best SeamlessM4T (SM4T-L-v2), the best distilled model (DW-32-16), and finally Whisper-medium (W-M), given its closeness in size to DW-32-16. We set a threshold of 75% CER and look at all the transcriptions with a higher error rate. It is noteworthy that a transcription can embody multiple error categories simultaneously, such as being both incomplete and translated to MSA. Our results show that the zero-shot baselines struggle the most, with W-M accounting for 635 highly erroneous transcriptions, hallucination (closely followed by deterioration) being the category with the most instances. Our DW-32-16 model has the fewest issues, with 108 cases, most of which are simple inaccuracies. This indicates that this model produces the most coherent outputs. In other words, this model is more likely to make predictions that maintain relevance, are logically consistent, and stay closely aligned with the input speech. The fine-tuned model, HuBERT, makes considerably fewer errors than the zero-shot models but still struggles with a lot of deterioration. This category, however, looks different for HuBERT than it does for the Whisper and SeamlessM4T models: instead of repeating words or characters, it outputs seemingly random sequences of characters, occasionally including a single square bracket. These characters are strung together in word-sized sequences and can include a ta marbuta (ة) in the middle of the "gibberish" words. It also tends to eliminate spaces between correctly predicted words or to fuse two or more half-words together. While empty transcriptions seem to be the category with the fewest appearances across all models, HuBERT shows a notable increase in these compared to the other systems. That being said, all models except for HuBERT show MSA Translation errors, with the fewest observed in the distilled model and the most committed by W-L-v3. We theorize that these could be due to the models being trained on data that include Arabic shows or movies spoken in dialect but mapped to MSA subtitles. Among the hallucinations of the Whisper models, we also notice a commonly occurring transcription: اشتركوا في القناة (Eng. "subscribe to the channel"), which we believe could result from training models on videos from platforms like YouTube. These videos can include captions that contain such sentences during interludes when the audio contains no speech (noise or background music). At the dialect level, YEM and UAE are the most problematic across all models, surprisingly exceeding the ALG dialect despite its higher overall error rate. PAL and JOR are the least challenging, in line with the systems' overall performance on them. The exact statistics are provided in Table 11 in Appendix D.

7 Conclusion

We present a comprehensive evaluation of multilingual ASR systems on a wide range of Arabic varieties and dialects to assess the robustness and generalization capability of these systems to linguistic variation. We then distill small dedicated models for Arabic ASR from large multilingual speech models (Whisper). We evaluate our distilled models on ten diverse datasets and find that, despite being 25-50% smaller, they outperform the base model and are on par with state-of-the-art models twice their size. We also find our distilled models to be the most robust to linguistic diversity.
We further conduct a comprehensive error analysis to investigate the nature of the errors these models make. We find that speech models with language model decoding are more prone to hallucination compared to other models. Our work reveals an inherent limitation of these models to generalize beyond their training data. In the future, we intend to expand this work to low-resource and unseen languages. 8 Limitations In this study, we distill small Whisper models from relatively large ones via pseudo-labeling. While our distilled models are compute efficient and maintain a performance similar to or better than the base teacher model, we believe that our work has several limitations which we outline below. Evaluation. Arabic is a linguistically rich and complex language with over 400 million speakers (Abdul-Mageed et al., 2021), resulting in its wide range of varieties and dialects. We evaluate all the models on ten different datasets representing different varieties, including five novel dialects 12611 collected and curated by native speakers and never seen before by any models. However, our varieties do not cover all Arabic-speaking regions. We aim to address this in future work by covering more varieties and dialects. Efficiency. Our distilled models are 25-75% compute efficient while maintaining the same performance as big models. However, the training process demands substantial computational resources. Our rough approximation indicates an expenditure of more than 3000 A100 (80G) GPU hours in our experiments, equivalent to over 500 kg of CO2 emissions of which zero percent is directly offset. To offer perspective, this carbon output aligns with what a typical internal combustion engine emits during a distance of about 2,000 kilometers. Our estimations rely on the Machine Learning Impact calculator presented in (Lacoste et al., 2019). Distillation Training Data. We distilled four variants of student models using 100K and 500K segments of which approximately 25% are filtered. We see improvement going from 100K (≈100 hours) to 500K (≈500 hours) segments. As (Gandhi et al., 2023) shows going over 1000 hours results in a better model, we aim to study how distillation can be done under a low resource setting which is why we do not scale the data. Additionally, we also keep the WER threshold high (80) so that we remain close to a setting where no labeled data is available (even for filtering). It would be interesting, however, to see how distilled models may perform on unfiltered data in low-resource setting. Nature of Speech Data. Despite putting together a never-seen dataset of under-represented Arabic dialects, we realize that sourcing our data from television series renders its nature distant from speech spoken in the wild. This type of content tends to be more “theatrical” and involves different elements such as background music and laughing tracks that do not accurately reflect regular conversational Arabic. Consequently, this could fail to accurately portray the performance of these models on real speech. 9 Ethics Statement Data Collection and Release. Given that we collect our data from TV series available on YouTube, we ensure that our use of this data aligns with the principles of fair use, given its application to a noncommercial academic setting. Each annotator of the data was made fully aware of the research objectives of the study and the intended use of their annotations. Intended Use. 
We believe our work will embolden further research on distilling small and efficient models from large and powerful foundation models especially applied to medium and low-resource languages. Our results show that small distilled models can yield on-par performance on even better results compared to large teacher models. Therefore, our work can raise the interest among the researchers who work on developing efficient machine learning systems under low resource settings however crucial to a wide range of population. Potential Misuse and Bias. Our distilled models can efficiently generate high-quality transcripts for multiple Arabic dialects and have the potential to be misused. Since there exists little-to-no clarity on the nature of the training data of the teacher model, our distilled models can produce potentially harmful and biased content that they can inherit from the teacher model. In addition to that, in our human evaluation, we find that these are susceptible to generating examples from the training data which raises the threat of information leakage. Therefore, we recommend against our distilled models being used without a careful prior consideration of potential misuse and bias. Acknowledgments We acknowledge support from Canada Research Chairs (CRC), the Natural Sciences and Engineering Research Council of Canada (NSERC; RGPIN2018-04267), the Social Sciences and Humanities Research Council of Canada (SSHRC; 8952020-1004; 895-2021-1008), Canadian Foundation for Innovation (CFI; 37771), Digital Research Alliance of Canada,5 and UBC Advanced Research Computing-Sockeye.6 References Muhammad Abdul-Mageed, Hassan Alhuzali, and Mohamed Elaraby. 2018. You tweet what you speak: A city-level dataset of Arabic dialects. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Muhammad Abdul-Mageed, AbdelRahim A. Elmadany, and El Moatez Billah Nagoudi. 2021. ARBERT & 5https://alliancecan.ca 6https://arc.ubc.ca/ubc-arc-sockeye 12612 MARBERT: deep bidirectional transformers for arabic. CoRR, abs/2101.01785. Muhammad Abdul-Mageed, Chiyu Zhang, AbdelRahim Elmadany, and Lyle Ungar. 2020. Toward microdialect identification in diaglossic and code-switched environments. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5855–5876, Online. Association for Computational Linguistics. Mohammad Al-Fetyani, Muhammad Al-Barham, Gheith Abandah, Adham Alsharkawi, and Maha Dawas. 2021. Masc: Massive arabic speech corpus. Ahmed Ali, Peter Bell, James Glass, Yacine Messaoui, Hamdy Mubarak, Steve Renals, and Yifan Zhang. 2016. The mgb-2 challenge: Arabic multi-dialect broadcast media recognition. In 2016 IEEE Spoken Language Technology Workshop (SLT), pages 279– 284. IEEE. Ahmed Ali, Peter Bell, James Glass, Yacine Messaoui, Hamdy Mubarak, Steve Renals, and Yifan Zhang. 2019a. The mgb-2 challenge: Arabic multi-dialect broadcast media recognition. Ahmed Ali, Shammur Chowdhury, Amir Hussein, and Yasser Hifny. 2021. Arabic code-switching speech recognition using monolingual data. Ahmed Ali, Suwon Shon, Younes Samih, Hamdy Mubarak, Ahmed Abdelali, James Glass, Steve Renals, and Khalid Choukri. 2019b. The mgb-5 challenge: Recognition and dialect identification of dialectal arabic speech. In 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 1026–1033. Ahmed Ali, Stephan Vogel, and Steve Renals. 2017. 
Speech recognition challenge in the wild: Arabic mgb-3. Rosana Ardila, Megan Branson, Kelly Davis, Michael Henretty, Michael Kohler, Josh Meyer, Reuben Morais, Lindsay Saunders, Francis M. Tyers, and Gregor Weber. 2020. Common voice: A massivelymultilingual speech corpus. Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, and Michael Auli. 2021a. Xls-r: Self-supervised cross-lingual speech representation learning at scale. Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, et al. 2021b. Xls-r: Self-supervised cross-lingual speech representation learning at scale. arXiv preprint arXiv:2111.09296. Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020a. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in neural information processing systems, 33:12449–12460. Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020b. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in neural information processing systems, 33:12449–12460. Loïc Barrault, Yu-An Chung, Mariano Coria Meglioli, David Dale, Ning Dong, Mark Duppenthaler, Paul-Ambroise Duquenne, Brian Ellis, Hady Elsahar, Justin Haaheim, et al. 2023. Seamless: Multilingual expressive and streaming speech translation. arXiv preprint arXiv:2312.05187. Kaushal Santosh Bhogale, Sai Sundaresan, Abhigyan Raman, Tahir Javed, Mitesh M Khapra, and Pratyush Kumar. 2023. Vistaar: Diverse benchmarks and training sets for indian language asr. arXiv preprint arXiv:2305.15386. Houda Bouamor, Nizar Habash, and Kemal Oflazer. 2014. A multidialectal parallel corpus of Arabic. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14), pages 1240–1245, Reykjavik, Iceland. European Language Resources Association (ELRA). William Chan, Navdeep Jaitly, Quoc V Le, and Oriol Vinyals. 2015. Listen, attend and spell. arXiv preprint arXiv:1508.01211. Heng-Jui Chang, Shu-Wen Yang, and Hung-yi Lee. 2021. Distilhubert: Speech representation learning by layer-wise distillation of hidden-unit BERT. CoRR, abs/2110.01900. Vamsikrishna Chemudupati, Marzieh S. Tahaei, Heitor R. Guimarães, Arthur Pimentel, Anderson R. Avila, Mehdi Rezagholizadeh, Boxing Chen, and Tiago H. Falk. 2023. On the transferability of whisper-based representations for "in-the-wild" cross-task downstream speech applications. ArXiv, abs/2305.14546. Chung-Cheng Chiu, James Qin, Yu Zhang, Jiahui Yu, and Yonghui Wu. 2022. Self-supervised learning with random-projection quantizer for speech recognition. In International Conference on Machine Learning, pages 3915–3924. PMLR. Shammur Absar Chowdhury, Amir Hussein, Ahmed Abdelali, and Ahmed Ali. 2021. Towards one model to rule all: Multilingual strategy for dialectal codeswitching arabic asr. 
Seamless Communication, Loïc Barrault, Yu-An Chung, Mariano Cora Meglioli, David Dale, Ning Dong, Paul-Ambroise Duquenne, Hady ElSahar, Hongyu Gong, Kevin Heffernan, John Hoffman, Christopher Klaiber, Peng Li, Daniel Licht, Jean Maillard, Alice Rakotoarison, Kaushik Ram Sadagopan, Guillaume Wenzek, Ethan Ye, Bapi Akula, Peng-Jen Chen, Naji El Hachem, Brian Ellis, Gabriel Mejia Gonzalez, Justin Haaheim, Prangthip Hansanti, Russ 12613 Howes, Bernie Huang, Min-Jae Hwang, Hirofumi Inaguma, Somya Jain, Elahe Kalbassi, Amanda Kallet, Ilia Kulikov, Janice Lam, Shang-Wen Li, Xutai Ma, Ruslan Mavlyutov, Benjamin Peloquin, M.L. Ramadan, Abinesh Ramakrishnan, Anna Sun, Ke M. Tran, Tuan Tran, Igor Tufanov, Vish Vogeti, Carleigh Wood, Yilin Yang, Bo Yu, Pierre Yves Andrews, Can Balioglu, Marta Ruiz Costa-jussà, Onur Çelebi, Maha Elbayad, Cynthia Gao, Francisco Guzm’an, Justine T. Kao, Ann Lee, Alexandre Mourachko, Juan Miguel Pino, Sravya Popuri, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Paden Tomasello, Changhan Wang, Jeff Wang, and Skyler Wang. 2023. Seamlessm4t: Massively multilingual&multimodal machine translation. Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, and Michael Auli. 2020. Unsupervised cross-lingual representation learning for speech recognition. arXiv preprint arXiv:2006.13979. Alexis Conneau, Min Ma, Simran Khanuja, Yu Zhang, Vera Axelrod, Siddharth Dalmia, Jason Riesa, Clara Rivera, and Ankur Bapna. 2022. Fleurs: Few-shot learning evaluation of universal representations of speech. Kunal Dhawan, Dima Rekesh, and Boris Ginsburg. 2023. Towards training bilingual and code-switched speech recognition models from monolingual data sources. arXiv preprint arXiv:2306.08753. Thomas Palmeira Ferraz, Marcely Zanon Boito, Caroline Brun, and Vassilina Nikoulina. 2024. Multilingual distilwhisper: Efficient distillation of multi-task speech models via language-specific experts. Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. 2022. Gptq: Accurate post-training quantization for generative pre-trained transformers. arXiv preprint arXiv:2210.17323. Sanchit Gandhi. 2024. Speculative decoding for 2x faster whisper inference. Hugging Face Blog. Sanchit Gandhi, Patrick von Platen, and Alexander M. Rush. 2023. Distil-whisper: Robust knowledge distillation via large-scale pseudo labelling. Jianping Gou, Baosheng Yu, Stephen J. Maybank, and Dacheng Tao. 2021. Knowledge distillation: A survey. International Journal of Computer Vision, 129(6):1789–1819. Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. 2006. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd international conference on Machine learning, pages 369–376. Nizar Y Habash. 2022. Introduction to Arabic natural language processing. Springer Nature. Nawar Halabi et al. 2016. Arabic speech corpus. Oxford Text Archive Core Collection. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. 2021. Hubert: Self-supervised speech representation learning by masked prediction of hidden units. Renren Jin, Jiangcun Du, Wuwei Huang, Wei Liu, Jian Luan, Bin Wang, and Deyi Xiong. 2024. A comprehensive evaluation of quantization strategies for large language models. arXiv preprint arXiv:2402.16775. 
Sheikh Musa Kaleem, Tufail Rouf, Gousia Habib, Brejesh Lall, et al. 2024. A comprehensive review of knowledge distillation in computer vision. arXiv preprint arXiv:2404.00936. Yoon Kim and Alexander M. Rush. 2016. Sequencelevel knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1317–1327, Austin, Texas. Association for Computational Linguistics. Soroush Abbasi Koohpayegani, Ajinkya Tejankar, and Hamed Pirsiavash. 2020. Compress: Self-supervised learning by compressing representations. CoRR, abs/2010.14713. Ajinkya Kulkarni, Atharva Kulkarni, Sara Abedalmonem Mohammad Shatnawi, and Hanan Aldarmaki. 2023. Clartts: An open-source classical arabic textto-speech corpus. arXiv preprint arXiv:2303.00069. Solomon Kullback and R. A. Leibler. 1951. On information and sufficiency. Annals of Mathematical Statistics, 22:79–86. Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres. 2019. Quantifying the carbon emissions of machine learning. arXiv preprint arXiv:1910.09700. Yaniv Leviathan, Matan Kalman, and Yossi Matias. 2023. Fast inference from transformers via speculative decoding. In International Conference on Machine Learning, pages 19274–19286. PMLR. Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Xingyu Dang, and Song Han. 2023. Awq: Activationaware weight quantization for llm compression and acceleration. arXiv preprint arXiv:2306.00978. Raphael Gontijo Lopes, Stefano Fenu, and Thad Starner. 2017a. Data-free knowledge distillation for deep neural networks. Raphael Gontijo Lopes, Stefano Fenu, and Thad Starner. 2017b. Data-free knowledge distillation for deep neural networks. CoRR, abs/1710.07535. 12614 Weijian Luo. 2023. A comprehensive survey on knowledge distillation of diffusion models. ArXiv, abs/2304.04262. Vimal Manohar, Pegah Ghahremani, Daniel Povey, and Sanjeev Khudanpur. 2018. A teacher-student learning approach for unsupervised domain adaptation of sequence-trained asr models. In 2018 IEEE Spoken Language Technology Workshop (SLT), pages 250– 257. Hamdy Mubarak, Amir Hussein, Shammur Absar Chowdhury, and Ahmed Ali. 2021. Qasr: Qcri aljazeera speech resource – a large scale annotated arabic speech corpus. Jinjie Ni, Yukun Ma, Wen Wang, Qian Chen, Dianwen Ng, Han Lei, Trung Hieu Nguyen, Chong Zhang, Bin Ma, and Erik Cambria. 2023. Adaptive knowledge distillation between text and speech pre-trained models. OpenNMT. Ctranslate2: Fast inference engine for transformer models. Jing Pan, Tao Lei, Kwangyoun Kim, Kyu J. Han, and Shinji Watanabe. 2022. Sru++: Pioneering fast recurrence with attention for speech recognition. In ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7872–7876. Juan M Perero-Codosero, Fernando M EspinozaCuadros, and Luis A Hernández-Gómez. 2022. A comparison of hybrid and end-to-end asr systems for the iberspeech-rtve 2020 speech-to-text transcription challenge. Applied Sciences, 12(2):903. Rohit Prabhavalkar, Takaaki Hori, Tara N. Sainath, Ralf Schlüter, and Shinji Watanabe. 2024. End-to-end speech recognition: A survey. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 32:325–351. Riefkyanov Pratama and Agit Amrullah. 2024. Analysis of whisper automatic speech recognition performance on low resource language. Jurnal Pilar Nusa Mandiri, 20:1–8. 
Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, and Michael Auli. 2023a. Scaling speech technology to 1,000+ languages. Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, et al. 2023b. Scaling speech technology to 1,000+ languages. arXiv preprint arXiv:2305.13516. Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2023. Robust speech recognition via large-scale weak supervision. In International Conference on Machine Learning, pages 28492–28518. PMLR. Dima Rekesh, Samuel Kriman, Somshubra Majumdar, Vahid Noroozi, He Juang, Oleksii Hrinchuk, Ankur Kumar, and Boris Ginsburg. 2023. Fast conformer with linearly scalable attention for efficient speech recognition. 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 1–8. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2020. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. Hang Shao, Wei Wang, Bei Liu, Xun Gong, Haoyu Wang, and Yanmin Qian. 2023. Whisper-kdq: A lightweight whisper via guided knowledge distillation and quantization for efficient asr. Suwon Shon, Ahmed M. Ali, Younes Samih, Hamdy Mubarak, and James R. Glass. 2020. Adi17: A finegrained arabic dialect identification dataset. ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8244–8248. SYSTRAN. faster-whisper: Faster whisper transcription with ctranslate2. Bashar Talafha, Abdul Waheed, and Muhammad AbdulMageed. 2023a. N-shot benchmarking of whisper on diverse arabic speech recognition. arXiv preprint arXiv:2306.02902. Bashar Talafha, Abdul Waheed, and Muhammad AbdulMageed. 2023b. N-shot benchmarking of whisper on diverse arabic speech recognition. Hawau Olamide Toyin, Amirbek Djanibekov, Ajinkya Kulkarni, and Hanan Aldarmaki. 2023. Artst: Arabic text and speech transformer. arXiv preprint arXiv:2310.16621. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. Abdul Waheed, Bashar Talafha, Peter Sullivan, AbdelRahim Elmadany, and Muhammad Abdul-Mageed. 2023. VoxArabica: A robust dialect-aware Arabic speech recognition system. In Proceedings of ArabicNLP 2023, pages 441–449, Singapore (Hybrid). Association for Computational Linguistics. Aiden Williams, Andrea Demarco, and Claudia Borg. 2023. The Applicability of Wav2Vec2 and Whisper for Low-Resource Maltese ASR . In Proc. 2nd Annual Meeting of the ELRA/ISCA SIG on Underresourced Languages (SIGUL 2023), pages 39–43. 12615 Xiaohan Xu, Ming Li, Chongyang Tao, Tao Shen, Reynold Cheng, Jinyang Li, Can Xu, Dacheng Tao, and Tianyi Zhou. 2024. A survey on knowledge distillation of large language models. Xiaoyu Yang, Qiujia Li, Chao Zhang, and Philip C. Woodland. 2023. Knowledge distillation from multiple foundation models for end-to-end speech recognition. 
Yu Zhang, Wei Han, James Qin, Yongqiang Wang, Ankur Bapna, Zhehuai Chen, Nanxin Chen, Bo Li, Vera Axelrod, Gary Wang, Zhong Meng, Ke Hu, Andrew Rosenberg, Rohit Prabhavalkar, Daniel S. Park, Parisa Haghani, Jason Riesa, Ginger Perng, Hagen Soltau, Trevor Strohman, Bhuvana Ramabhadran, Tara Sainath, Pedro Moreno, Chung-Cheng Chiu, Johan Schalkwyk, Françoise Beaufays, and Yonghui Wu. 2023a. Google usm: Scaling automatic speech recognition beyond 100 languages. Yu Zhang, Wei Han, James Qin, Yongqiang Wang, Ankur Bapna, Zhehuai Chen, Nanxin Chen, Bo Li, Vera Axelrod, Gary Wang, et al. 2023b. Google usm: Scaling automatic speech recognition beyond 100 languages. arXiv preprint arXiv:2303.01037. A Related Work While early ASR systems were primarily hybrid (Perero-Codosero et al., 2022), often in the form of combinations of Hidden Markov Models (HMMs) and either Gaussian Mixture Models (GMMs) or Deep Neural Networks (DNNs), the desire for simpler architectures led to a shift towards End-to-End (E2E) models (Prabhavalkar et al., 2024). This was made possible in part thanks to the availability of extensive labeled datasets and increased computational power. Transformers (Vaswani et al., 2017) have come to light as the dominant architecture in modern ASR systems (Pan et al., 2022), owing to their attention mechanism’s ability to model long-range dependencies all while being scalable and efficient. OpenAI’s Whisper (Radford et al., 2023), a weakly supervised encoder-decoder Transformer, was trained on an extensive 630K hours of multilingual data, 739 of which are in Arabic. Whisper supports multilingual ASR, Automatic Speech Translation (AST) to English, and Language Identification (LID). Massively Multilingual Speech (MMS) (Pratap et al., 2023a), a system for multilingual ASR, speech synthesis (TTS) and LID build by Meta, is the result of pre-training wav2vec 2.0 (Baevski et al., 2020b) models (300M and 1B parameter versions) on 419K hours from 6 different corpora, spanning 1406 languages. For the ASR task, they fine-tune the pre-trained 1B model on 44.7K hours of labeled data in 1107 languages using Connectionist Temporal Classification (CTC) (Graves et al., 2006). Meta also developed SeamlessM4T v2 (Barrault et al., 2023; Communication et al., 2023), a collection of models featuring the new w2v-BERT 2.0 speech encoder pre-trained on 4.5M unlabeled data hours and fine-tuned on automatically aligned pairs. It supports 100 languages and its Arabic training data includes 119K hours of raw audio and 822 hours of labeled data. Another system that performs ASR and AST is Google’s Universal Speech Model (USM) (Zhang et al., 2023b), a 2B parameter model employing a Conformer encoder. It was pre-trained using BESTRQ (Chiu et al., 2022) on 12M unlabeled hours covering 300 distinct languages. Supervised ASR training was then used on the Conformer features using either CTC or Listen, Attend and Spell (LAS) (Chan et al., 2015) transducers using 90K hours of labeled data across 70 languages. XLS-R (Babu et al., 2021b) is yet another wav2vec 2.0-based model used for ASR, AST and speech classification tasks (LID and Speaker ID). It comes in variants of 0.3B, 1B and 2B parameters, trained on 436K hours (95 are in Arabic) that include 128 languages. ArTST (Toyin et al., 2023) is a SpeechT5 model focused on MSA and fine-tuned on the MGB3 dataset for ASR and on ASC (Halabi et al., 2016) and ClArTTS (Kulkarni et al., 2023) for TTS. 
B Training C Results D Error Analysis 12616 Parameter Value warmup_steps 50 learning_rate 0.0001 lr_scheduler_type constant_with_warmup batch_size 128 max_label_length 225 gradient_accumulation_steps 1 dtype bfloat16 Table 5: Training parameters. We use all the default training parameters provided in Huggingface Seq2SeqTrainingArguments unless otherwise specific in this table. Model Orthographic Normalized + No Diacritics CV15.0 MGB2 MGB3 MGB5 FLEURS CV15.0 MGB2 MGB3 MGB5 FLEURS XLS-R 91.3/45.5 98.0/58.2 99.1/63.7 99.8/73.5 95.3/46.3 85.7/34.2 97.7/55.6 98.7/61.1 99.8/71.8 94.8/44.8 HuBERT 79.9/34.4 70.0/27.7 48.8/18.6 98.4/52.7 49.4/14.3 53.0/17.6 55.1/22.5 33.2/12.5 96.9/50.2 33.9/10.7 W-FT 73.8/36.1 50.3/22.8 62.8/33.2 118.9/75.8 30.7/7.6 30.1/18.9 27.6/17.1 52.0/28.4 116.7/74.8 10.4/3.7 MMS-all 107.6/81.2 55.5/23.5 76.1/35.9 94.2/52.0 28.7/7.1 108.0/78.3 48.3/20.0 73.8/32.7 93.7/49.8 22.8/5.6 SM4T-M 48.0/21.7 44.9/21.0 49.6/25.1 92.1/55.2 20.4/6.1 13.3/4.6 31.2/16.2 41.7/20.5 88.0/51.5 9.2/3.7 SM4T-L-v1 48.1/21.4 44.4/21.4 50.6/25.3 95.7/60.0 22.9/7.5 16.5/5.9 33.0/17.3 43.2/20.9 93.2/56.8 11.4/5.1 SM4T-L-v2 40.0/16.9 42.5/20.6 46.0/22.6 95.9/60.7 20.2/6.4 8.3/2.5 30.3/16.5 36.1/17.7 90.8/56.2 7.9/3.8 W-S 64.2/29.1 83.8/50.6 85.4/55.5 198.3/140.1 36.3/12.1 44.0/19.1 75.2/47.5 82.0/51.3 197.2/140.0 30.4/10.6 W-M 59.1/26.7 65.4/35.8 70.7/43.8 145.1/105.3 24.2/6.5 23.7/9.7 52.7/32.1 63.2/39.6 143.2/104.1 15.5/4.5 W-L-v2 53.2/23.4 57.0/30.0 58.4/34.9 118.4/86.0 18.6/4.8 15.4/5.5 41.4/25.6 50.5/30.5 116.5/84.1 10.2/2.9 W-L-v3 50.3/21.8 37.5/17.1 44.1/20.6 88.1/53.5 17.1/4.2 12.2/3.9 23.4/12.7 34.7/15.7 86.0/50.8 8.9/2.6 DW-8-8 59.1/26.2 59.5/29.3 68.5/39.2 94.9/60.1 38.1/14.2 27.2/9.4 48.5/25.3 64.1/35.8 94.0/57.9 31.3/12.2 DW-16-16 53.5/23.2 50.5/22.4 56.2/27.9 89.8/51.2 27.4/8.6 17.5/5.4 35.9/17.7 50.0/23.9 88.4/48.7 19.5/6.7 DW-32-16 51.9/22.3 45.8/20.1 50.4/24.3 87.4/47.7 23.1/6.7 14.7/4.4 30.3/15.3 43.3/20.0 85.6/45.1 14.9/4.9 DW-16-32 52.9/22.9 48.1/23.3 55.4/29.8 90.4/54.0 24.8/9.0 16.9/5.4 34.9/19.0 49.2/25.8 88.9/51.6 17.4/7.1 DW-16-16++ 52.4/22.5 47.3/21.7 53.3/27.7 87.0/50.1 23.4/6.6 14.9/4.6 32.5/17.0 46.2/23.4 85.3/47.5 15.0/4.7 DW-32-16++ 51.4/22.0 41.4/18.1 48.1/24.1 85.0/47.3 19.2/5.6 13.2/3.9 27.2/13.7 40.3/19.7 82.7/44.6 11.1/3.7 Table 6: WER/CER on validation split of each dataset. Our in-house data only includes a single split reported in Table 3. Abbreviations. W - Whisper, FT - Finetuned, M - Medium, L - Large, S - Small, D - Distil. 
12617 Split Model Orthographic Normalized + No Diacritics CV6.1 CV9.0 CV11.0 CV6.1 CV9.0 CV11.0 Test XLS-R 92.2/47.4 92.9/47.1 92.8/46.9 88.0/37.7 89.9/39.6 89.8/39.5 HuBERT 78.9/33.1 76.6/31.1 76.5/31.0 52.0/17.8 54.7/18.7 54.8/18.8 W-FT 74.9/36.7 69.8/33.5 69.5/33.3 32.8/21.8 34.9/21.1 35.0/21.1 MMS-all 106.1/82.4 106.0/82.6 105.9/82.5 106.8/80.2 106.5/80.9 106.4/80.9 SM4T-M 40.8/17.4 42.1/18.2 42.1/18.1 13.2/4.9 16.2/5.7 16.2/5.7 SM4T-L-v1 43.3/19.2 44.2/19.2 44.0/19.0 15.8/6.4 19.6/7.4 19.6/7.3 SM4T-L-v2 34.2/13.5 37.5/15.8 37.4/15.7 8.4/2.8 11.1/3.5 11.1/3.5 W-S 73.9/35.4 68.7/31.7 68.9/31.9 44.0/19.2 40.3/16.3 40.3/16.4 W-M 59.1/26.3 55.4/24.4 55.5/24.6 25.8/11.9 29.5/13.0 29.8/13.4 W-L-v2 51.4/21.9 47.9/20.3 47.7/20.2 16.2/7.0 19.8/8.2 19.9/8.2 W-L-v3 49.2/19.8 43.7/17.3 43.6/17.1 12.8/4.4 15.5/5.1 15.6/5.2 DW-8-8 58.2/24.8 55.4/23.5 55.2/23.3 28.5/10.4 32.5/12.2 32.6/12.2 DW-16-16 52.6/21.3 48.5/19.3 48.3/19.1 18.5/6.1 21.9/7.2 22.1/7.2 DW-32-16 50.5/20.2 46.2/18.0 46.0/17.9 15.2/4.8 18.5/5.8 18.7/5.8 DW-16-32 52.0/21.1 47.9/19.2 47.7/19.0 17.6/5.9 21.2/7.2 21.3/7.3 DW-16-16++ 51.2/20.6 46.6/18.4 46.5/18.2 15.8/5.2 19.0/6.2 19.1/6.2 DW-32-16++ 49.8/19.9 45.2/17.7 45.0/17.5 13.7/4.4 16.9/5.4 17.0/5.5 Val. XLS-R 92.1/48.6 91.1/45.3 91.3/45.6 86.6/36.2 85.4/34.0 85.7/34.2 HuBERT 82.6/36.8 79.9/34.3 80.0/34.4 54.9/18.9 53.1/17.7 53.0/17.6 W-FT 81.3/41.4 74.3/36.7 74.5/36.7 36.5/24.1 31.1/19.8 30.9/19.6 MMS-all 105.8/81.9 107.6/81.3 107.6/81.3 106.2/78.7 107.9/78.3 108.0/78.3 SM4T-M 48.9/22.2 48.0/21.7 48.1/21.8 13.5/4.8 13.2/4.5 13.2/4.5 SM4T-L-v1 49.6/22.6 48.1/21.6 48.4/21.7 16.6/6.0 16.5/5.9 16.6/5.9 SM4T-L-v2 40.7/17.6 40.2/17.2 40.3/17.1 8.2/2.6 8.2/2.5 8.3/2.5 W-S 67.1/31.2 64.9/29.7 64.8/29.5 40.2/17.8 44.1/19.3 44.2/19.4 W-M 64.5/30.2 58.7/26.4 59.0/26.6 27.3/12.1 23.6/9.3 23.6/9.3 W-L-v2 57.2/25.9 52.8/23.3 53.1/23.4 17.2/6.7 15.4/5.5 15.3/5.4 W-L-v3 53.9/24.2 50.0/21.8 50.3/22.0 13.4/4.7 12.2/4.0 12.2/3.9 DW-8-8 62.7/28.7 58.9/26.2 59.1/26.4 29.0/10.4 27.4/9.4 27.3/9.4 DW-16-16 57.3/25.5 53.3/23.1 53.5/23.3 19.0/6.2 17.5/5.5 17.5/5.4 DW-32-16 55.5/24.5 51.7/22.3 51.9/22.4 15.9/4.9 14.8/4.4 14.7/4.4 DW-16-32 56.7/25.3 52.7/22.9 53.0/23.1 18.4/6.1 17.0/5.4 16.9/5.3 DW-16-16++ 55.9/24.6 52.1/22.4 52.4/22.5 15.8/5.0 15.1/4.6 15.0/4.5 DW-32-16++ 55.1/24.3 51.2/22.1 51.4/22.2 14.3/4.6 13.3/4.0 13.2/4.0 Table 7: Test and validation split results for other common voice versions. Abbreviations. W - Whisper, FT Finetuned, M - Medium, L - Large, S - Small, D - Distil. 12618 Model Overall Avg. Avg. Benchmark Avg. In-House Amazon 61.0/41.8 -/-/XLS-R 97.7/58.4 96.1/53.2 99.4/63.5 HuBERT 66.7/27.3 51.5/20.4 81.9/34.2 W-FT 67.7/39.9 42.2/24.5 93.2/55.22 MMS-all 82.5/54.4 66.9/36.2 98.0/72.6 SM4T-M 48.1/21.7 33.9/17.3 62.3/26.0 SM4T-L-v1 51.7/24.7 37.4/19.5 66.0/29.9 SM4T-L-v2 47.0/22.6 32.3/17.7 61.7/27.6 W-S 80.8/45.7 66.0/35.3 95.6/56.1 W-M 65.4/38.5 54.3/32.9 76.4/44.1 W-L-v2 55.1/32.3 42.0/25.7 68.2/38.9 W-L-v3 49.5/25.4 31.4/15.6 67.7/35.2 DW-8-8 64.8/32.1 51.3/26.2 78.3/38.0 DW-16-16 53.2/23.2 40.0/18.6 66.3/27.9 DW-32-16 48.2/20.0 35.4/16.3 61.1/23.7 DW-16-32 53.0/25.1 39.4/19.6 66.5/30.6 DW-16-16++ 49.5/22.5 36.7/17.8 62.3/27.2 DW-32-16++ 45.0/19.2 33.0/15.7 56.9/22.7 Table 8: Average WER/CER scores on the benchmark, in-house, and overall data. Avg.: Average. 12619 Filtering Threshold (λ) 10 (82.8) 20 (74.7) 40 (54.5) 80 (28.0) None (0.0) Model Dataset Split Orth. N+ND Orth. N+ND Orth. N+ND Orth. N+ND Orth. 
N+ND DW-32-16 CV15.0 Test 45.4/17.7 19.3/6.0 43.6/16.8 16.9/5.2 46.7/18.3 20.8/6.8 45.6/17.7 18.8/5.9 47.2/18.8 21.2/7.3 Dev 51.6/22.2 14.9/4.4 50.6/21.8 13.4/3.9 52.8/22.9 16.6/5.2 51.9/22.3 14.7/4.4 53.2/23.5 16.7/5.8 MGB2 Test 30.0/11.3 26.0/10.3 25.6/9.4 20.8/8.3 31.8/12.6 26.1/11.3 27.7/10.3 21.1/8.9 29.0/11.8 22.8/10.4 Dev 48.5/22.5 35.8/18.2 43.8/19.7 30.1/15.2 49.0/22.3 35.4/17.8 45.8/20.1 30.3/15.3 52.4/26.1 38.2/21.8 MGB3 Test 58.9/31.4 53.2/27.1 51.3/26.3 44.8/21.7 56.8/30.3 50.4/25.9 51.2/26.1 43.8/21.4 58.3/35.0 51.3/30.8 Dev 57.4/29.1 51.8/25.1 50.0/23.9 43.9/19.8 55.9/28.3 49.5/24.2 50.4/24.3 43.3/20.0 58.9/35.2 51.9/31.5 MGB5 Test 84.2/46.1 82.4/43.1 81.1/43.2 79.3/40.1 85.2/48.5 83.4/45.6 80.9/43.4 78.9/40.4 92.5/60.8 90.5/58.8 Dev 89.8/50.6 88.3/47.9 87.3/47.1 85.7/44.4 91.6/53.1 90.0/50.6 87.4/47.7 85.6/45.1 97.4/67.0 95.6/65.3 Fleurs Test 27.1/8.9 20.0/7.1 22.2/6.8 14.5/5.0 25.0/8.3 18.1/6.6 22.0/6.6 14.2/4.8 23.5/7.0 14.9/4.9 Dev 28.1/8.4 20.2/6.5 22.8/6.3 15.0/4.6 25.2/7.6 18.0/5.9 23.1/6.7 14.9/4.9 24.5/7.0 15.7/4.9 Avg. B. – 52.1/24.8 41.2/19.6 47.8/22.1 36.4/16.8 52.0/25.2 40.8/20.0 48.6/22.5 36.6/17.1 53.7/29.2 41.9/24.2 ALG – 83.5/38.6 82.6/36.7 80.7/35.8 79.6/34.0 85.1/42.9 84.3/41.2 80.5/35.1 79.5/33.4 88.0/64.2 87.3/63.2 JOR – 58.1/20.1 50.7/17.7 52.7/17.1 44.8/14.6 58.5/21.3 51.3/18.9 52.6/17.1 44.4/14.7 55.3/22.4 47.6/20.0 PAL – 68.2/25.3 60.9/22.4 63.2/21.9 55.5/19.0 68.0/27.2 60.8/24.4 62.9/22.4 55.0/19.5 65.5/27.9 57.1/24.9 UAE – 71.0/29.3 63.6/25.8 66.4/25.6 58.1/22.1 72.9/32.7 65.6/29.4 66.7/26.3 58.1/22.8 72.5/38.8 63.9/35.6 YEM – 78.6/34.0 70.3/29.4 75.2/30.6 65.6/25.6 79.9/36.7 72.4/32.6 77.3/32.6 68.5/28.1 80.4/53.8 73.0/50.9 Avg. IH – 71.9/29.5 65.6/26.4 67.6/26.2 60.7/23.1 72.9/32.2 66.9/29.3 68.0/26.7 61.1/23.7 72.3/41.4 65.8/38.9 DW-16-16 CV15.0 Test 51.8/20.7 28.3/9.6 47.5/18.7 22.4/7.3 49.1/19.4 24.3/8.0 48.0/18.9 22.1/7.2 48.3/19.1 22.8/7.6 Dev 56.5/24.6 22.9/7.3 53.0/23.0 17.6/5.4 54.0/23.6 19.2/6.1 53.5/23.2 17.5/5.4 54.1/23.5 18.2/5.7 MGB2 Test 43.6/16.9 39.7/15.7 34.4/12.2 27.5/10.6 35.4/13.9 29.9/12.5 33.2/12.5 26.0/10.8 34.2/13.0 26.1/11.2 Dev 59.4/27.4 48.5/23.3 52.3/23.5 37.8/18.8 52.0/24.2 39.5/20.0 50.5/22.4 35.9/17.7 55.5/26.5 40.6/21.9 MGB3 Test 72.0/38.6 68.3/34.7 61.0/31.9 55.5/27.5 60.2/31.7 54.2/27.4 57.1/29.6 50.5/25.1 60.2/33.9 54.1/29.7 Dev 71.0/36.9 67.3/33.3 60.1/30.2 54.4/26.2 59.3/30.4 53.6/26.4 56.2/27.9 50.0/23.9 60.1/32.1 54.0/28.3 MGB5 Test 90.2/51.7 89.1/49.0 86.2/47.8 84.7/44.8 88.8/50.3 87.1/47.6 84.1/46.2 82.4/43.3 96.8/61.3 95.1/59.1 Dev 94.2/56.0 93.4/53.7 91.7/52.5 90.5/49.9 94.8/57.1 93.5/54.9 89.8/51.2 88.4/48.7 102.1/65.5 100.7/63.6 Fleurs Test 36.6/13.3 31.6/11.9 28.2/9.7 21.0/7.9 28.7/9.5 21.6/7.8 26.2/8.5 18.8/6.6 24.9/7.9 17.6/6.0 Dev 36.6/12.7 31.4/11.3 29.0/9.1 21.5/7.3 29.9/9.9 22.6/8.1 27.4/8.6 19.5/6.7 26.7/7.9 19.3/6.2 Avg. B. – 61.2/29.9 52.1/25.0 54.3/25.9 43.3/20.6 55.2/27.0 44.6/21.9 52.6/24.9 41.1/19.5 56.3/29.1 44.9/23.9 ALG – 89.9/46.4 89.2/44.9 85.7/41.3 84.8/39.6 87.0/46.8 86.8/45.3 83.8/40.2 83.0/38.5 93.8/53.1 93.4/51.6 JOR – 71.4/29.6 66.5/27.5 62.3/23.1 55.5/20.7 63.4/24.3 56.9/22.1 57.8/20.5 50.4/18.2 58.9/22.4 51.8/20.2 PAL – 78.7/34.2 73.6/31.5 70.9/27.8 64.6/25.0 72.4/31.3 66.2/28.6 68.2/26.2 61.0/23.3 72.3/30.2 64.7/27.3 UAE – 81.7/39.1 77.1/36.2 74.9/32.4 68.2/29.1 76.1/35.9 69.8/32.9 72.0/31.0 64.6/27.7 75.5/37.2 68.0/34.0 YEM – 85.1/41.5 79.7/37.9 80.1/35.6 72.6/31.3 83.3/40.8 77.2/37.1 80.0/35.6 72.7/31.6 84.8/42.3 78.3/38.8 Avg. 
IH – 81.4/38.2 77.2/35.6 74.8/32.0 69.1/29.1 76.4/35.8 71.4/33.2 72.4/30.7 66.3/27.9 77.1/37.0 71.2/34.4 Table 9: Results for different threshold (λ) values distilling from 100K segments. The value in the bracket along with λ represents the ratio of filtered examples. The average of reported results on test split of benchmarks and in-house data. Orth.: Orthographic. N: Normalized. ND: Non Diacritized. Category Dia. Reference Prediction Model MSA Trans. ALG ú ÎJk. Q k É ª ƒ ú ο éƒAg ú G @ hðQ K éJ «AK. ú G@P A ‚ ÒºK áÓ h. Q k @ à @ I. m.' é K AK. Qª ƒ @ A K @ Ðñ¯ @ à @ YK P @ ú æ K B  ÒºË@ W-M Hallucination JOR úæ •Q K ? ú æªK Õ΂‚  ?ñ ƒ ?A J K. AëXñk. ñK. Èñ“ñË@ úÍ@ Èñ“ñË@ ©J ¢‚  ám ' È Q ÖÏ@ úÍ@ DW-32-16 Deterioration PAL ? éÓCƒ ¼YK. Ñj. JË éÊK A¯  ñK. QÓ àAÓ AK èQËA‚Ó H È ú » ñÊ £@ HuBERT Incomplete YEM ú Γ , é<ËAK QÖß Aë QË@ ,Õç' Q» ú k@ é<Ë@ ÈñƒP úΫ ø Q» ú k @ W-M Unr. Word ALG ñË@ð P@X , P@X Iƒñk ø BñÓ Yg@ð úæk I.“ AÓ Èñ®K ñË@ð ,P@X P@Yg. @ñë ,ø BñÓ Yg@ð úæk I.“ AÓ Èñ®K M4T v1 Unr. Name JOR é K@ èYëA K áÓ éJªÖޅ ú ÎË@ A K @ . Ð B@ ð H. B@ éÒJ K ú æ•A« éJ.J ¢ k áÓ IªÖޅ ú ÎË@ A K @ . éJ kA K é K @ Ð B@ð H. B@ éÒJ K ,ú æ•A« éJ.J ¢ k M4T-v1 Unr. Pron. UAE , àðY¯QK éK A JJ.Ë@ Éë IJ Ê g AÓ ÑîD Ê« †Y¯YK ËAK †Y“ AK àðY¯QK éK A JJ.ËAë IJ Êg AÓ ÑîD Ê« †X W-L-v3 Alt. Orthog. JOR hP I K @ ,  Ë ú ήJK. ¼YªK.ð ?ú æ Jm.' h@P I K@ ?  Ë ú Í Èñ®JK. ¼YªK.ð ?ú æ Jm.' W-L-v3 Table 10: Examples for the different error categories observed during error analysis. Dia.: Dialect. Trans.: Translatoin. Unr.: Unrecognized. Pron.: Pronunciation. Alt.: Alternative. 12620 Model Error Type Algeria Jordan Palestine UAE Yemen SM4T-L-v2 Total err. count 86 24 20 106 124 Hallucination (%) 20.9 37.5 55.0 29.3 18.6 Deterioration (%) 26.7 33.3 20.0 26.4 28.2 Empty (%) 1.2 0.0 0.0 0.0 0.0 Incomplete (%) 8.1 4.2 5.0 1.89 0.8 MSA translation (%) 31.4 8.3 5.0 2.8 0.8 Dia. inaccuracies (%) 19.8 16.7 20.0 41.5 51.6 W-L-v3 Total err. count 87 27 16 104 111 Hallucination (%) 32.2 18.5 25.0 28.9 21.6 Deterioration (%) 31.0 33.3 43.8 36.5 34.2 Empty (%) 0.0 0.0 0.0 0.0 0.0 Incomplete (%) 1.2 7.4 0.0 5.8 6.3 MSA translation (%) 23.0 33.3 12.5 10.6 8.1 Dia. inaccuracies (%) 18.4 11.1 18.8 19.2 29.7 W-M Total err. count 90 59 20 253 213 Hallucination (%) 35.6 28.8 40.0 35.2 33.3 Deterioration (%) 31.1 22.0 35.0 37.2 26.3 Empty (%) 0.0 0.0 0.0 0.0 0.0 Incomplete (%) 13.3 33.9 10.0 17.8 26.3 MSA translation (%) 14.4 15.3 10.0 6.7 4.2 Dia. inaccuracies (%) 6.7 5.1 5.0 6.3 13.2 HuBERT Total err. count 28 4 9 75 61 Hallucination (%) 14.3 25.0 22.2 22.7 37.7 Deterioration (%) 57.1 50.0 55.6 68.0 60.7 Empty (%) 10.7 0.0 11.1 5.3 16.4 Incomplete (%) 3.6 0.0 0.0 1.4 4.9 MSA translation (%) 0.0 0.0 0.0 0.0 0.0 Dia. inaccuracies (%) 14.3 25.0 11.1 2.7 3.3 DW-32-16 (Ours) Total err. count 12 3 4 39 50 Hallucination (%) 50.0 66.7 50.0 20.5 12.0 Deterioration (%) 25.0 0.0 25.0 23.1 20.0 Empty (%) 0.0 0.0 0.0 0.0 2.0 Incomplete (%) 8.3 0.0 25.0 2.6 4.00 MSA translation (%) 8.3 33.3 0.0 2.6 4.0 Dia. inaccuracies (%) 8.3 0.0 0.0 51.3 58.0 Table 11: Error analysis statistics of different systems evaluated on our in-house data. Err.: Error. Dia.: Dialectal. 12621
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12622–12642 August 11-16, 2024 ©2024 Association for Computational Linguistics LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding Mostafa Elhoushi1∗†, Akshat Shrivastava1∗†, Diana Liskovich2†, Basil Hosmer1, Bram Wasti2, Liangzhen Lai3, Anas Mahmoud4, Bilge Acun1, Saurabh Agrawal6, Ahmed Roman7, Ahmed A Aly3, Beidi Chen1,5, Carole Jean-Wu1, 1FAIR at Meta 2GenAI at Meta 3Reality Labs at Meta 4University of Toronto 5Carnegie Mellon University 6University of Wisconsin-Madison 7Dana-Farber Cancer Institute [email protected] [email protected] Abstract We present LayerSkip, an end-to-end solution to speed-up inference of large language models (LLMs). First, during training we apply layer dropout, with low dropout rates for earlier layers and higher dropout rates for later layers, and an early exit loss where all transformer layers share the same exit. Second, during inference, we show that this training recipe increases the accuracy of early exit at earlier layers, without adding any auxiliary layers or modules to the model. Third, we present a novel self-speculative decoding solution where we exit at early layers and verify and correct with remaining layers of the model. Our proposed self-speculative decoding approach has less memory footprint than other speculative decoding approaches and benefits from shared compute and activations of the draft and verification stages. We run experiments on different Llama model sizes on different types of training: pretraining from scratch, continual pretraining, finetuning on specific data domain, and finetuning on specific task. We implement our inference solution and show speedups of up to 2.16× on summarization for CNN/DM documents, 1.82× on coding, and 2.0× on TOPv2 semantic parsing task. We open source code at https://github.com/ facebookresearch/LayerSkip. 1 Introduction Large Language Models (LLMs) have been deployed to many applications, yet their high compute and memory requirements lead to high financial and energy costs when deployed to GPU servers Samsi et al. (2023). Acceleration solutions do exist to deploy to commodity GPUs on laptops but they suffer from significant drop in accuracy Zhu et al. (2023). Accelerating LLMs further to mobile or edge devices is still an active research *Equal Contribution. †Core Contributors. T1 T2 T3 T4 LM Head T5 Self Speculation Decoding T1 T2 T3 T4 LM Head Early Exit Inference Train using Layer Dropout + Early Exit Loss.... ... enables inference with subset of layers with higher accuracy... ... and we can improve accuracy by verifying and correcting with remaining layers Skipped Computed Skipped w. Probability Legend Cached T1 T2 T3 T4 LM Head Training with Layer Dropout & Early Exit Loss Layer 4 Layer 3 Layer 2 Layer 1 Layer 4 Layer 3 Layer 2 Layer 1 Layer 4 Layer 3 Layer 2 Layer 1 Figure 1: Overview of our end-to-end solution, LayerSkip, showing its 3 components. area Çöplü et al. (2023); Liu et al. (2024). While a large portion of LLM acceleration approaches reduce number of non-zero weights Xia et al. (2023) (a.k.a. sparsity), number of bits per weight Xiao et al. (2023) (a.k.a. quantization), number of heads per layer Shim et al. (2021) (a.k.a. head pruning), a smaller portion of approaches focus on reducing number of layers Fan et al. (2020); Elbayad et al. (2020). 
In this paper, we explore reducing the number of layers required for each token by exiting early during inference. Unlike quantization or sparsity, acceleration by reducing number of layers does not require specialized hardware or software kernels. Moreover, a popular research trend in LLM acceleration is speculative decoding Leviathan et al. (2023); Chen et al. (2023) that has no drop in accuracy, where a large model, referred to as the main model, is accompanied with a faster model, referred to as the draft model. The advantage of speculative decoding is that it leads to faster inference compared to the main model, but requires a larger memory footprint and complexity in implementation to maintain key-value (KV) cache in two different models. In addition to exiting early, this paper also proposes combining exiting early with 12622 speculative decoding to propose a self-speculative decoding approach that does not require an additional model or auxiliary layers. The contribution of this paper is an end-to-end solution: • a training recipe that combines layer dropout and early exit loss, that leads to, • inference that is more robust to exiting at earlier layers of the model, essentially creating different sized sub-models within the same model, and • a self-speculative decoding solution that decodes with earlier layers and verifies and corrects with later layers. The solution achieves speedups between 1.34× and 2.16× depending on the task. We provide an overview of the solution in Figure 1. 2 Motivation 2.1 Exiting Earlier in LLMs To motivate our approach, we investigate, with an example prompt, what happens in each layer in a LLM. In Figure 2a, we provide the first prompt from the HumanEval coding dataset Chen et al. (2021) to a pretrained Llama1 7B model Touvron et al. (2023a). The prompt consists of a Python function header and a docstring, and the model autocompletes it by defining the function body. When generating each token, we probe each transformer layer in the LLM by projecting its output embeddings on the language model (LM) head (that consists of the model’s final layer normalization and linear layer), applying softmax, and then obtaining the index of the output element with highest value. The resulting index corresponds to the predicted token at this layer. This operation is referred to in some literature as the unembedding operation Phuong and Hutter (2022); Cancedda (2024), as it converts an embedding to an index. Unembedding at each layer is equivalent to early-exit at that layer, i.e., it is equivalent to skipping the remaining transformer layers to the model’s LM head. The token predictions across layers in Figure 2b illustrate the evolution of embeddings from an input token fed to the model to the predicted next token by the model. When analyzing the token prediction in each layer in Figure 2b, we make a few observations. First, token predictions in earlier layers appear to be irrelevant as they correspond to the previous token projected on the model’s embedding layer’s weights, which are different from the weights of the LM head. In later layers, token predictions converge to the final prediction. Second, we do not always need all the layers to predict the correct token. In fact, most of the time, the final token prediction is predicted fewer layers before the end. 
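For concreteness, the per-layer probing described above (project a layer's output embedding onto the shared LM head, apply softmax, take the argmax) can be sketched in a few lines of PyTorch. This is our own illustration, assuming a HuggingFace Llama-style checkpoint where `model.model.norm` and `model.lm_head` together form the unembedding; it is not the authors' code, and the prompt string is shortened.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = "meta-llama/Llama-2-7b-hf"            # any Llama-style causal LM checkpoint
tok = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(ckpt, torch_dtype=torch.bfloat16)
model.eval()

prompt = "def has_close_elements(numbers, threshold):"
inputs = tok(prompt, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# out.hidden_states[0] is the embedding output; hidden_states[l] is the output of layer l.
for l, h in enumerate(out.hidden_states[1:], start=1):
    # "Unembed" at layer l: apply the final norm and the shared LM head to the last position.
    logits = model.lm_head(model.model.norm(h[:, -1, :]))
    print(f"layer {l:02d} -> {tok.decode(logits.argmax(dim=-1))!r}")
```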
We also notice that intermediate layers are sometimes hesitant and “change their minds”, e.g., for Token 05, the model was predicting “range” as early as Layer 07, but changed its mind between Layer 22 and Layer 26, before settling again on “range”. Similar analysis was done in Geva et al. (2022) on a GPT2 model Radford et al. (2019), which developed predictors to estimate when the prediction saturates so that the model can exit early. For the particular example we present in Figure 2, we find that, on average, a token requires 23.45 of the model’s 32 layers. Hence, even with a perfect predictor that has zero compute overhead, we can only save up to 26% of computation. Therefore, there is a need to make LLMs require fewer layers to predict each token, and spend less compute being hesitant or “changing their minds”.

By default, deep learning models are not motivated to predict their final output early and instead spread their compute across all layers Voita et al. (2019, 2023). We see in Figure 2b that even tokens we would consider easy or straightforward to predict, e.g., Token 02 that starts a for-loop, required all 32 layers to predict “for”. We would like our model to be less reliant on later layers and only use them for harder tokens, i.e., to be more reliant on earlier layers. To do that, we propose skipping layers during training, which we refer to as layer dropout, using higher dropout rates for later layers and lower dropout rates for earlier layers.

Moreover, LM heads in LLMs are trained to unembed embeddings from the last transformer layer; they were not trained to unembed from earlier layers. Therefore, our solution also adds a loss function during training to make the LM head better “understand” embeddings of earlier layers. While most papers that explored early exit Schuster et al. (2022); Elbayad et al. (2020) trained a dedicated LM head for each transformer layer, and some have introduced additional modules for each early exit Zhang et al. (2019), we chose to have a shared LM head for all transformer layers in the model. This makes training faster, requires less memory for both training and inference, and eases deployment and maintenance. Hence, as shown in Figure 3, we train a deep learning model that is equivalent to an ensemble of models of various depths, capable of skipping from different transformer layers to the LM head.

Figure 2: (a) A prompt from the HumanEval dataset Chen et al. (2021) and corresponding text generated by Llama1 7B; the color of each generated token corresponds to the earliest layer in the model that predicted it. (b) Token prediction at each layer in Llama1 7B.
    Prompt:
        from typing import List
        def has_close_elements(numbers: List[float], threshold: float) -> bool:
            """ Check if in given list of numbers, are any two numbers closer to each other than given threshold.
            >>> has_close_elements([1.0, 2.0, 3.0], 0.5)
            False
            >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)
            True
            """
    Generation:
        for i in range(len(numbers)):
            for j in range(i+1, len(numbers)):
                if abs(numbers[i] - numbers[j]) < threshold:
                    return True
        return False
    [The per-layer token-prediction grid of panel (b) is not reproduced here.]

Figure 3: We propose using layer dropout and early exit loss during training to create a model that is equivalent to an ensemble of models of various depths.

2.2 Correcting if we Exit Too Early

Regardless of whether we use heuristics or predictors (as Schuster et al. (2022); Geva et al. (2022)) to exit early, or modify the training procedure to make models predict early (as Elbayad et al. (2020); Zhang et al. (2019), and this paper as well), it is likely that exiting early during inference will lead to a reduction in accuracy.
It will be ideal if there is a way to verify if an early prediction is accurate, and correct it by executing remaining layers. Some approaches like Zhang et al. (2019) proposed a confidence heuristic to decide after executing an early exit if the remaining layers are needed. Here, we leverage speculative decoding techniques to verify the early exit prediction and correct it. Speculative decoding benefits from the fact that verifying the prediction of a group of tokens is faster than generating each token auto-regressively. Hence, we present a self-speculative decoding approach where we use early exit to generate each token auto-regressively, and use the remaining layers to verify a group of tokens in parallel, and correct them. 3 Related Work Dropout Dropout was first introduced by Srivastava et al. (2014) and involved stochastically replacing a portion of output elements of fullyconnected layers with zeros during training. We refer to this variant of dropout as unstructured dropout. It presented a regularization effect for training, with the purpose of reducing over-fitting. Unstructured dropout was commonly used in convolutional neural networks (CNNs) before batch normalization Ioffe and Szegedy (2015) replaced it as a means to improve generalization. However, the introduction of transformers brought it back to light as Vaswani et al. (2017) used a dropout rate of 0.1. However, dropout faded again when pretraining dataset sizes increased, e.g., large scale models like Llama Touvron et al. (2023a) and GPT3 Brown et al. (2020) do not mention dropout in their papers. Layer Dropout Skipping layers stochastically during training is referred to in literature with different terms such as stochastic depth or layer dropout. It was first explored in ResNets by Huang et al. (2016) and is used to train ConvNext Liu et al. 12624 (2022).In language models, LayerDrop Fan et al. (2020) applied dropout to every other transformer layer, which increased its robustness to pruning layers at inference time. Zhang and He (2020) increased the pretraining speed of BERT by applying a dropout rate that progressively increased every iteration as well as every layer. To the best of our knowledge, layer dropout for training decoder-only models, or scaling language models to large model sizes or large datasets has not been explored. Moreover, our paper is the first to propose using layer dropout to improve early exit inference. Early Exit Exiting early in deep learning has first been explored in CNNs Panda et al. (2016); Teerapittayanon et al. (2017). They added branch modules at different exit points in a deep learning network and introduced additional loss functions during training to improve the accuracies of those early exits. In language models Elbayad et al. (2020) added a dedicated LM head for each decoder layer in an encoder-decoder translation model.CALM Schuster et al. (2022) built upon that and started with a model pretrained with early exit losses, and focused on finding optimal criteria to decide which layer to exit at during inference. Din et al. (2023) started with pretrained models and finetuned auxiliary fully-connected layers to map the embeddings outputted by earlier layers to later layers. In our proposed solution, we do not introduce any additional modules or linear layers for early exit, and instead used a shared exit for all layers. Speculative Decoding Speculative decoding Leviathan et al. (2023); Chen et al. (2023) is a popular acceleration technique for language models. 
It is based on the fact that auto-regressive decoding of decoder models is slow, as they generate one token at a time, while measuring the likelihood of a group of generated tokens in parallel is faster. It uses a fast, less accurate model, referred to as the draft model, to generate multiple tokens auto-regressively, and a large, slower, more accurate main model to verify the tokens in parallel and correct them when needed. The draft model could have the same or a different architecture as the main model, or could be a compressed version of the model. Zhang et al. (2023) recently proposed a self-speculative decoding approach where the draft model is the same as the main model, but with a group of intermediate attention and feed forward network (FFN) layers skipped. The advantage of our proposed solution compared to Zhang et al. (2023) is that the verification and correction stages can reuse the activations and KV cache from the draft stage, as both stages execute the same early layers in the same order, while Zhang et al. (2023) cannot reuse them as it skips intermediate layers. Hooper et al. (2024) used shared transformer layer groups and a shared LM head to exit each token at a different layer and execute different layer groups in a pipeline fashion.

4 Proposed Solution

Our approach has three different stages:
1. Training using Layer Dropout & Early Exit Loss
2. Inference using Early Exit
3. Verification and Correction using Speculative Decoding
We explain each stage in the following subsections.

4.1 Training using Layer Dropout & Early Exit Loss

We denote the input tokens to a transformer model as $X$ and its output as $Y$, with an embedding layer that maps the token indices to token embeddings $x_0$, a transformer model with $L$ transformer layers, where transformer layer $l$ evolves the embeddings outputted by its previous layer, $x_{l+1} = x_l + f_l(x_l)$, and a final LM head that maps the embedding outputs of the last layer, $x_L$, to logits, $e_L = g(x_L)$. We denote the cross entropy loss function that is usually used to train language models as $J_{\text{CE}}(e_L, Y)$.

4.1.1 Layer Dropout

The first modification we apply to common training recipes is layer dropout. The transformer layer operation at layer $l$ and training iteration $t$ becomes:

$$x_{l+1,t} = x_{l,t} + M(p_{l,t})\, f_l(x_{l,t}) \quad (1)$$

where $p_{l,t}$ is the dropout rate of layer $l$ at iteration $t$, and $M(p)$ is a Bernoulli function that returns 0 with probability $p$ and 1 with probability $1-p$. We apply the dropout operation on each sample separately within a batch: we remove the dropped samples from the batch, apply the transformer operation $f_l$ on the remaining samples, and then concatenate the output with the dropped samples. To ensure higher speedup during training, we seed the random number generator for each GPU with the same seed, so that each transformer layer at each iteration drops the same number of samples. The dropout rate can be different at each layer $l$ and training iteration $t$:

$$p_{l,t} = S(t)\, D(l)\, p_{\text{max}} \quad (2)$$

where $p_{\text{max}}$ is a hyperparameter that sets the maximum dropout rate in the model during training, $D(l)$ is a per-layer scaling function, and $S(t)$ is a per-time-step scaling function. We found that the best per-layer scaling increases the dropout rate exponentially across layers, from 0.0 at layer 0 to 1.0 at the last layer, $L-1$:

$$D(l) = e^{\frac{l \ln 2}{L-1}} - 1 \quad (3)$$

For scaling across time, $S(t)$, we found that if we start with a pre-trained model and perform continual pre-training or finetuning, it is best to not scale across time and hence set $S(t) = 1$.
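As a minimal sketch of this schedule (Equations 1–3), consider the following PyTorch-style code. The function and variable names are ours, and this illustrates the recipe as described rather than the authors' training implementation.

```python
import math
import torch

def dropout_rate(l: int, L: int, p_max: float, s_t: float = 1.0) -> float:
    """Eq. (2)-(3): p_{l,t} = S(t) * D(l) * p_max with D(l) = exp(l*ln2/(L-1)) - 1,
    which grows from 0.0 at layer 0 to 1.0 at the last layer L-1."""
    d_l = math.exp(l * math.log(2.0) / (L - 1)) - 1.0
    return s_t * d_l * p_max

def layer_with_dropout(f_l, x: torch.Tensor, p: float, training: bool) -> torch.Tensor:
    """Eq. (1): x_{l+1} = x_l + M(p) * f_l(x_l), with the Bernoulli mask drawn per sample.
    x has shape (batch, seq, dim); f_l is the transformer layer."""
    if not training or p == 0.0:
        return x + f_l(x)
    keep = torch.rand(x.shape[0], device=x.device) >= p    # M(p): 0 w.p. p, drawn per sample
    out = x.clone()
    if keep.any():
        # Run the layer only on the kept samples, then merge back with the dropped ones.
        out[keep] = x[keep] + f_l(x[keep])
    return out

# Per-layer rates for a 32-layer model with p_max = 0.1 and no time curriculum (S(t) = 1):
rates = [dropout_rate(l, L=32, p_max=0.1) for l in range(32)]   # 0.0 ... 0.1
```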
However, for pretraining from scratch, we found that an exponential curriculum, $S_{\text{exp}}(t)$, led to the best accuracies over $T$ training steps:

$$S_{\text{exp}}(t) = e^{\frac{t \ln 2}{T-1}} - 1 \quad (4)$$

4.1.2 Early Exit Loss

To boost the prediction accuracy of lower layers, we need to ensure that the model’s LM head, $g$, is capable of unembedding outputs of different layers. Hence, during training, we augment layer dropout with an early exit loss at each layer: we supervise the model directly to connect the early exit layers to the LM head, which enables us to directly supervise the lower layers for the language modeling task. The total loss of the model at iteration $t$ is:

$$J(X, Y, t) = \sum_{l=0}^{L-1} \tilde{e}(t, l)\, J_{\text{CE}}(g(x_{l+1}), Y) \quad (5)$$

where $\tilde{e}(t, l)$ is a normalized per-layer loss scale whose sum across all layers is equal to 1:

$$\tilde{e}(t, l) = \frac{C(t, l)\, e(l)}{\sum_{i=0}^{L-1} C(t, i)\, e(i)} \quad (6)$$

$C(t, l)$ is a binary curriculum function that determines whether we enable early exit of layer $l$ at iteration $t$. We build upon Elbayad et al. (2020) and set a scale that increases across layers, such that the scale at one layer is proportional to the sum of the scales of all previous layers:

$$e(l) = \begin{cases} e_{\text{scale}} \sum_{i=0}^{l} i, & \text{if } 0 \le l < L-1 \\ L - 1 + e_{\text{scale}} \sum_{i=0}^{L-2} i, & \text{if } l = L-1 \end{cases}$$

This way, we penalize later layers with quadratically higher weight, as predicting in later layers is easier. The hyperparameter $0 \le e_{\text{scale}} \le 1$ controls the scale of the early exit loss. Note that we do not add additional LM heads as proposed in other early exit papers Elbayad et al. (2020); Schuster et al. (2022), as we essentially use the same LM head for all layers.

Early Exit Loss Curriculum. We find that adding the early exit loss of all layers at all iterations during training slows down training and reduces the accuracy of the last layer. To overcome this, we introduce a curriculum, $C(t, l)$. We have explored 2 different curricula. First, we explored a rotational early exit curriculum, $C_{\text{rot},R}$, where we enable early exit at every $R$ layers and perform a circular rotation at each iteration. This way, early exit at each layer is enabled once every $R$ iterations, and at each training iteration only $\lceil L/R \rceil$ unembedding operations are applied. Second, we explored a gradual early exit curriculum, $C_{\text{grad}}$, where we gradually enable early exit loss from layer $L-1$ down to layer 0, one layer at a time, every $T/2L$ iterations.

4.2 Inference using Early Exit

When generating each token during autoregressive decoding, we run the first $E$ transformer layers in the model and skip to the model’s LM head, i.e., the model’s final output becomes $g(x_E)$. We explore different values of $E$ and provide the accuracies in the Results section.

4.3 Inference using Self-Speculative Decoding

With layer dropout and early exit loss in training, we show it is possible to speed up autoregressive generation by exiting early, but this comes at an accuracy cost compared to using the full model. Speculative decoding Leviathan et al. (2023); Chen et al. (2023) is able to leverage a faster yet less accurate model to speed up generation without accuracy cost. However, this requires storing and training two models.
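For reference, the early exit loss of Equations (5)–(6) and the rotational curriculum can be sketched as follows. This is our own PyTorch-style illustration, not the released training code; in particular, how the last layer interacts with the rotation is our reading of the description above.

```python
import torch
import torch.nn.functional as F

def e_layer(l: int, L: int, e_scale: float) -> float:
    """e(l): proportional to sum_{i<=l} i, with an extra weight of L-1 on the last layer."""
    s = e_scale * sum(range(l + 1))
    return s + (L - 1 if l == L - 1 else 0.0)

def c_rotational(t: int, l: int, R: int) -> float:
    """C_rot,R: early exit enabled at every R-th layer, with the offset rotating each
    iteration so that each layer is supervised once every R iterations (our reading)."""
    return 1.0 if (l - t) % R == 0 else 0.0

def early_exit_loss(layer_outputs, lm_head, labels, t, e_scale, R):
    """Eq. (5)-(6). layer_outputs: list of L tensors x_{l+1} of shape (batch, seq, dim);
    lm_head: the shared unembedding g(.); labels: next-token targets, already shifted."""
    L = len(layer_outputs)
    weights, losses = [], []
    for l, x in enumerate(layer_outputs):
        c = c_rotational(t, l, R)
        if c == 0.0:
            continue                                   # skip layers disabled this iteration
        logits = lm_head(x)                            # shared LM head for every exit
        ce = F.cross_entropy(logits.flatten(0, 1), labels.flatten())
        weights.append(c * e_layer(l, L, e_scale))
        losses.append(ce)
    total = sum(weights)                               # normalize so the scales sum to 1
    return sum((w / total) * ce for w, ce in zip(weights, losses))
```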
We introduce a novel self-speculative decoding algorithm built on top of early exit, enabling us to 12626 T1 T2 T3 T4 LM Head Verify: Reuse the draft states, only recompute the remaining model T1 T2 T3 T4 LM Head Draft: Use first N layers of model to compute next token T1 T2 T3 T4 LM Head Autoregressive Generation: Compute each token T5 T5 Cost: L layers per token Cost: E layers per token E < L Cost: Prefill Latency for L-E layers L for new token generation Self Speculation Decoding Autoregressive Decoding T1 T2 T3 T4 LM Head Verify: Use main model to verify draft tokens through prefill T1 T2 T3 T4 LM Head Draft: Use draft model to generate candidate tokens T1....T4 T5 Cost: L' layers per token Cost: Prefill Latency for L layers L for new token generation Speculative Decoding Skipped Computed Cached Legend Layer 4 Layer 3 Layer 2 Layer 1 Layer 4 Layer 3 Layer 2 Layer 1 Layer 4 Layer 3 Layer 2 Layer 1 Layer 4 Layer 3 Layer 2 Layer 1 Layer 2 Layer 1 Figure 4: Comparison between autoregressive decoding, speculative decoding, and our proposed self-speculative decoding. reduce memory through the use of a single model and latency of traditional speculative decoding through re-using hidden states in draft and verify steps. As shown in Figure 4, our self-speculation algorithm consists of 2 key steps (1) Self-Drafting, using the early exit to draft tokens from the same model (2) Self-Verification, using the remaining layers to validate the prediction. To enable re-use in (1) and (2), we develop a novel Cache Reuse technique that unifies the KV cache and storing the exit query. We provide a high level description of the algorithm in sections §4.3.1 and 4.3.2 and provide pseudo code in A.6. 4.3.1 Self-Drafting The first step in speculative decoding is to define a set of draft tokens D0...d−1. In our algorithm, we compute the first d draft tokens through early exit. We refer to d as the number of speculations. We leverage a subset of the LLM and conduct autoregressive inference exiting at layer E. Our training recipe enabled us to train the model once to get an ensemble of different candidate draft models at each layer depth. We can evaluate exiting at different layers and observe a trade off between latency and accuracy. 4.3.2 Self-Verification The next step in speculative decoding is verification. Verification leverages the full LLM to predict the next token for each draft token in a single forward pass. We then assess to see where the draft tokens and verified tokens agree. All the draft tokens up till the disagreement point are added to the output along with the next verified token and the process continues from the draft stage. In our self-speculative decoding algorithm, the self-verification stage critically only requires computing the remaining layers of the model that were not used in the draft stage. For a model with L layers, the number of verification layers is L −E. In order to re-use the first E layers from the draft stage we employ some modifications to the KV cache as we show in the subsequent subsection. 4.3.3 Reusing the Cache In autoregressive transformers, KV cache is a critical component of efficient generation, allowing us to avoid recomputing prior KV pairs in each layer. 
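Before detailing the cache re-use, the overall draft-and-verify loop of §4.3.1–4.3.2 can be sketched as below for greedy decoding. Here `draft_model` and `full_model` are hypothetical callables returning logits from the first E layers and from all L layers respectively, and the KV/exit-query re-use that §4.3.3 introduces is deliberately omitted for clarity.

```python
import torch

@torch.no_grad()
def self_speculative_generate(full_model, draft_model, ids, d=12, max_new_tokens=128):
    """Greedy draft-and-verify loop. ids has shape (1, seq_len); d = number of speculations."""
    produced = 0
    while produced < max_new_tokens:
        # Self-drafting: generate d tokens autoregressively with the early-exit sub-model.
        ctx, draft = ids, []
        for _ in range(d):
            nxt = draft_model(ctx)[0, -1].argmax().view(1, 1)
            draft.append(int(nxt))
            ctx = torch.cat([ctx, nxt], dim=-1)
        # Self-verification: one forward pass of the full model scores all draft positions.
        # Position ids.shape[-1]-1+i predicts the token that follows the first i draft tokens.
        verified = full_model(ctx)[0, ids.shape[-1] - 1 :].argmax(dim=-1)   # length d + 1
        n = 0
        while n < d and draft[n] == int(verified[n]):
            n += 1
        # Accept the agreeing prefix; verified[n] either corrects the first mismatch or,
        # if every draft token matched, is a free extra token from the verification pass.
        new_tokens = draft[:n] + [int(verified[n])]
        ids = torch.cat([ids, torch.tensor([new_tokens], device=ids.device)], dim=-1)
        produced += len(new_tokens)
    return ids
```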
As our draft stage uses the first E layers of the model and the verification stage uses the remaining L −E layers, we are able to re-use a significant amount of compute between the 2 stages: • Single KV Cache As draft and verification stages operate on the same model using the same order of layers, the first E layers are shared in both steps. Hence, in the draft stage, the KV cache in the first E layers are already computed, so we are able to effectively maintain a single KV cache for the draft and verify steps, reducing memory and latency. • Exit Query Cache: To further reduce computation of the first E layers, we introduce an exit query cache that saves the query vector of exit layer E −1 for verification to directly continue from layer E to last layer L. Critically note that we need to save only the query for the exit layer. We term the union of the KV cache and the exit query as KVQ cache. 5 Experiments We would like to evaluate our training recipe on different types of training, whether pretraining from scratch or finetuning. To verify our approach, we run different types of training experiments: • Continual Pretraining: start with a pretrained model and continue pretraining on 52B tokens from a corpus of diverse data containing natural language text and code. We experiment using pretrained Llama2 7B (32 layers), with pmax = 0.1, escale = 0.2, Crot,R=8, and Llama2 13B (40 layers), with pmax = 0.1, escale = 0.1, Crot,R=39. 12627 • Pretraining from Scratch: start with randomly initialized model and pretrain on 26B tokens from a corpus of diverse data containing natural language text and code. We experiment with Llama2 1.5B (a custom small Llama-like model with 24 layers) (see A.3.1 for architecture details) with pmax = 0.1, escale = 0.2, Crot,R=23 and Llama2 7B (32 layers) with pmax = 0.2, escale = 0.2, Crot,R=31. Following Srivastava et al. (2014) we use higher learning rates when layer dropout is greater than 0.0. • Finetuning on Code Data: see §A.2 for details and §A.4 for results. • Finetuning on Task-Specific Dataset: see §A.2 for details and §A.4 for results. We try different variants of LayerSkip: layer dropout only (LD), early exit loss only (EE), and both layer dropout and early exit loss (LD+EE). We provide more details about training hyperparameters in Appendix A.3. 6 Results 6.1 Early Exit Inference Results After training each model configuration, we evaluate accuracy of exiting early at different layers. Continual Pretraining In Figure 5, we present our results for Llama2 7B and 13B on a diverse set of evaluation tasks (see § A.3.2 for task details) and compare with the baseline model from Touvron et al. (2023b). In Table A4 we zoom in and show the specific values of accuracies for the last layer and middle layer of each model. In Figure A1 we show sample text generations for exiting at earlier layers for both models with and without continual pretraining with LayerSkip. Overall, for earlier layers, LayerSkip is clearly better than the baseline. For last layer accuracy, LayerSkip has minimal drop in accuracy compared to baseline. Pretraining from Scratch In Figure A2, we present our results for Llama2 1.5B and 7B pretrained from scratch on 26B tokens using LayerSkip on a diverse set of evaluation tasks (see § A.3.2 for task details) and compare with the same models pretrained on the same number of tokens from scratch without LayerSkip. In Figure A3 we show sample text generations for exiting at earlier layers. 
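Such early-exit generations can be approximated with an off-the-shelf HuggingFace checkpoint by truncating the decoder stack to its first E layers and reusing the shared final norm and LM head, as in §4.2. The sketch below is ours: it assumes a Llama-style module layout (`model.model.layers`, `model.lm_head`), and the checkpoint path is a placeholder for a model trained with this recipe.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def exit_early(model, E: int):
    """Keep only the first E decoder layers; the final norm and LM head stay shared."""
    model.model.layers = model.model.layers[:E]
    model.config.num_hidden_layers = E
    return model.eval()

ckpt = "path/to/layerskip-checkpoint"   # placeholder: a checkpoint trained with this recipe
tok = AutoTokenizer.from_pretrained(ckpt)
model = exit_early(AutoModelForCausalLM.from_pretrained(ckpt, torch_dtype=torch.bfloat16), E=8)

ids = tok("from typing import List\ndef has_close_elements(", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=64, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
```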
Results show that introducing our proposed training recipe leads to higher accuracy than the baseline on earlier layers. On the last layer, we do see a slight drop in accuracy in some downstream tasks, while in other tasks we see LayerSkip leading to higher accuracy. 6.2 Self-Speculative Decoding Results We evaluate the self-speculative decoding algorithm introduced in §4.3 on different trained models. We report quality metrics, EM (exact match) and ROUGE-2 Ganesan (2018), token acceptance rate for the self speculation algorithm (how often verification accepts each of the draft tokens), throughput measured as tokens per second averaged over the sampled dataset, and speed up compared to autoregressive decoding. For our early exit and our self-speculative decoding experiments, we denote layer we exit at as E. We compare with Draft & Verify Zhang et al. (2023) on common models and tasks evaluated in both papers. All experiments were performed with greedy decoding and generated a maximum of 512 tokens for each sample. Following Zhang et al. (2023), speedup is calculated as acceleration of average inference time per token compared to “Autoregressive” baseline. “Autoregressive” experiments use baseline models that were pretrained or finetuned without LayerSkip, while “Early Exit” and “Self Speculative” experiments use our models trained or finetuned with LayerSkip. Our implementation leverages HuggingFace Wolf et al. (2020). Continual Pretraining In Table 1, we evaluate the continual pre-training of Llama2 7B and 13B with and without LayerSkip on various tasks: CNN/DM Nallapati et al. (2016), XSUM Narayan et al. (2018) abstractive summarization tasks, and HumanEval Chen et al. (2021) coding task. The experiments were performed on NVIDIA H100 GPUs. The number of speculations, i.e., the number of tokens generated in the draft stage, is denoted d. We obtain speedups between 1.34× and 2.16× depending on model or task. In general, we observe higher speedups for the smaller 7B compared to the larger 13B model. Comparing with Draft & Verify, we are significantly faster on CNN/DM (1.81× vs. 1.5×) and slightly slower on XSUM (1.34× vs. 1.48×). Pretraining from Scratch Experiments were performed on H100 GPUs and results presented in Table 2. We found an opposite trend to continual pretraining: bigger model has a bigger speedup, reaching 2.16× speedup. 12628 102 104 PPL Wikipedia Test 102 PPL Books Test 102 104 PPL The Stack Test 40 60 80 Acc BoolQ 60 70 Acc PIQA 35 40 45 Acc SIQA 30 40 50 Acc HellaSwag 50 60 70 Acc Winogrande 1.1 40 60 Acc ARC Easy 20 30 40 Acc ARC Challenge 20 30 Acc OBQA 60 80 Acc COPA 20 40 60 Acc RACE Middle 30 40 Acc RACE High 30 40 Acc MMLU 0 10 20 EM NQ 0 25 50 EM TQA 0 10 EM GSM8K 0 2 EM MATH 32 24 16 8 4 Layer 0 10 Pass@1 HumanEval 32 24 16 8 4 Layer 0 10 20 Pass@1 MBPP Baseline LayerSkip-LD+EE (a) Llama2 7B 102 104 PPL Wikipedia Test 102 PPL Books Test 102 104 PPL The Stack Test 70 80 Acc BoolQ 60 70 80 Acc PIQA 35 40 45 Acc SIQA 40 60 Acc HellaSwag 50 60 70 Acc Winogrande 1.1 40 60 80 Acc ARC Easy 20 40 Acc ARC Challenge 20 30 Acc OBQA 70 80 90 Acc COPA 40 60 Acc RACE Middle 30 40 Acc RACE High 30 40 50 Acc MMLU 0 20 EM NQ 0 25 50 EM TQA 0 20 EM GSM8K 0 2 4 EM MATH 40 30 20 10 5 Layer 0 10 20 Pass@1 HumanEval 40 30 20 10 5 Layer 0 20 Pass@1 MBPP Baseline LayerSkip-LD+EE (b) Llama2 13B Figure 5: Early exit evaluation of continual pretraining 7 Ablation Studies Many ablation studies are in the Appendix, but we summarize some here. 
Scaling with Pretraining Tokens Figure A5, shows that without LayerSkip pretraining increases perplexity of earlier layers by orders of magnitude. KV Cache in Self-Speculation Table A7 shows that our proposed re-use of KV cache consistently saves us 9-20 ms per token depending on the task. Selecting Parameters for Self Speculation Self speculation relies on 2 core parameters (1) early exit layer and (2) number of speculations. There exists a tradeoff where selecting too low of an exit point and too many tokens are rejected, too high and the latency cost of the exit layer reduces the benefits of speculation. We find that these parameters are task dependent. Figure 6 shows how range of decoding parameters varies for different tasks. 8 Conclusion We show that combining layer dropout & early exit loss with curriculum, improves accuracy of early exit during inference, and developed a novel selfspeculative decoding solution that led upto 1.86× speedup. We hope this encourages researchers to adopt the proposed recipe in pretraining and finetuning. In the future, we can increase accuracy of earlier layers to obtain better speedups for selfspeculative decoding, e.g., by combining with dynamic conditions (like Schuster et al. (2022)). 12629 Llama2 7B Llama2 13B Generation E d ROUGE-2 Token Acc. Tokens per Sec. Speedup E d ROUGE-2 Token Acc. Tokens per Sec. Speedup CNN-DM One-Shot Abstractive Summarization Autoregressive 0.079 62.7 1.00× 0.098 37.2 1.00× Early Exit 8 0.012 232.4 15 0.016 105.5 Self Speculative 8 12 0.078 68.9% 127.9 1.86× 15 12 0.098 74.5% 70.2 1.81× Draft and Verify n/a n/a n/a n/a n/a n/a 0.107 n/a n/a 1.56× XSUM Abstractive Summarization Autoregressive 0.073 63.4 1.00× 0.124 43.8 1.00× Early Exit 8 0.002 228.0 15 0.009 110.6 Self Speculative 8 12 0.073 54.6% 104.7 1.54× 15 4 0.124 67.7% 60.5 1.34× Draft and Verify n/a n/a n/a n/a n/a n/a 0.126 n/a n/a 1.48× HumanEval Coding Autoregressive 0.041 62.9 1.00× 0.055 48.9 1.00× Early Exit 8 0.003 225.4 15 0.0005 244.3 Self Speculative 8 6 0.042 67.1% 122.8 1.83× 7 4 0.055 57.0% 84.2 1.66× Table 1: Generation results for Llama2 continually pretrained with and without LayerSkip. Llama2 1.5B - 26B Tokens Llama2 7B - 26B Tokens Generation E ROUGE-2 Token Acc. Tokens per Sec. Speedup E ROUGE-2 Token Acc. Tokens per Sec. Speedup CNN-DM One-Shot Abstractive Summarization Autoregressive 0.063 91.6 1.00× 0.060 64.5 1.00× Self Speculative 8 0.063 77.4% 167.4 1.76× 8 0.067 77.8% 145.6 2.16× Table 2: Generation results for Llama2 pretrained from scratch on 26B tokens with and without LayerSkip. 4 6 8 10 12 14 16 18 20 Exit Layer 2 4 6 8 10 12 14 16 18 20 Number of Speculations Tokens Per Second 48 60 72 84 96 108 120 132 (a) Llama2 7B CNN-DM Self-Speculation 2 4 6 8 10 12 14 16 Exit Layer 2 4 6 8 10 12 14 16 18 20 Number of Speculations Tokens Per Second 40 52 64 76 88 100 112 124 (b) Llama2 7B HumanEval Self-Speculation Figure 6: Self Speculation Decoding Parameters Sweep. 12630 9 Limitations • Our self-speculative decoding solution requires finetuning a model or pretraining it with our recipe, while the self-speculative decoding approach propoposed in Zhang et al. (2023) does not require changing a model’s weights. • The introduced hyperparameters, pmax for layer dropout, escale and R for early exit, requires tuning in order to avoid a drop in last layer accuracy. 
• When pretraining with layer dropout from scratch, increasing the learning rate is required to maintain accuracy, and tuning learning rate to get optimal accuracy could be tricky and time consuming. Acknowledgements We would like to thank: • Volker Seeker, Artem Korenev, and Ilia Kulikov for logistic support, • Fabian Gloeckle, Andrey Gromov, Francisco Massa, Daniel Haziza, Aaditya Singh, Karen Hambardzumyan, Nicola Cancedda, for discussions, • FAIR’s clusters’ support team members, especially, Henry Estela, Hongsheng Song, Shubho Sengupta, and Nabib Ahmed, for their help in maintaing our clusters, • Kamila Benzina, Carolyn Krol, Helen Klein, and Philippe Brunet for legal support. 12631 References Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, and Charles Sutton. 2021. Program synthesis with large language models. Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. 2020. Piqa: Reasoning about physical commonsense in natural language. In ThirtyFourth AAAI Conference on Artificial Intelligence. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc. Nicola Cancedda. 2024. Spectral filters, dark signals, and attention sinks. Charlie Chen, Sebastian Borgeaud, Geoffrey Irving, Jean-Baptiste Lespiau, L. Sifre, and John M. Jumper. 2023. Accelerating large language model decoding with speculative sampling. ArXiv, abs/2302.01318. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating large language models trained on code. Xilun Chen, Asish Ghoshal, Yashar Mehdad, Luke Zettlemoyer, and Sonal Gupta. 2020. Low-resource domain adaptation for compositional task-oriented semantic parsing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5090–5100, Online. Association for Computational Linguistics. Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. Boolq: Exploring the surprising difficulty of natural yes/no questions. In NAACL. 
Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the ai2 reasoning challenge. ArXiv, abs/1803.05457. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. Alexander Yom Din, Taelin Karidi, Leshem Choshen, and Mor Geva. 2023. Jump to conclusions: Shortcutting transformers with linear transformations. Maha Elbayad, Jiatao Gu, Edouard Grave, and Michael Auli. 2020. Depth-adaptive transformer. In In Proc. of ICLR. Angela Fan, Edouard Grave, and Armand Joulin. 2020. Reducing transformer depth on demand with structured dropout. In International Conference on Learning Representations. Kavita Ganesan. 2018. Rouge 2.0: Updated and improved measures for evaluation of summarization tasks. Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2020. The Pile: An 800gb dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027. Mor Geva, Avi Caciularu, Kevin Wang, and Yoav Goldberg. 2022. Transformer feed-forward layers build predictions by promoting concepts in the vocabulary space. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 30–45, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021a. Measuring massive multitask language understanding. In International Conference on Learning Representations. Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021b. Measuring mathematical problem solving with the math dataset. NeurIPS. Coleman Hooper, Sehoon Kim, Hiva Mohammadzadeh, Hasan Genc, Kurt Keutzer, Amir Gholami, and Sophia Shao. 2024. Speed: Speculative pipelined execution for efficient decoding. 12632 Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. 2016. Deep networks with stochastic depth. Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. Aniruddha Kembhavi, Minjoon Seo, Dustin Schwenk, Jonghyun Choi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Are you smarter than a sixth grader? textbook question answering for multimodal machine comprehension. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5376–5384. Denis Kocetkov, Raymond Li, Loubna Ben Allal, Jia Li, Chenghao Mou, Carlos Muñoz Ferrandis, Yacine Jernite, Margaret Mitchell, Sean Hughes, Thomas Wolf, Dzmitry Bahdanau, Leandro von Werra, and Harm de Vries. 2022. The stack: 3 tb of permissively licensed source code. Preprint. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association of Computational Linguistics. Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. 
RACE: Large-scale ReAding comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 785– 794, Copenhagen, Denmark. Association for Computational Linguistics. Yaniv Leviathan, Matan Kalman, and Yossi Matias. 2023. Fast inference from transformers via speculative decoding. In Proceedings of the 40th International Conference on Machine Learning, ICML’23. JMLR.org. Zechun Liu, Changsheng Zhao, Forrest Iandola, Chen Lai, Yuandong Tian, Igor Fedorov, Yunyang Xiong, Ernie Chang, Yangyang Shi, Raghuraman Krishnamoorthi, Liangzhen Lai, and Vikas Chandra. 2024. Mobilellm: Optimizing sub-billion parameter language models for on-device use cases. Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. 2022. A convnet for the 2020s. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? a new dataset for open book question answering. In Conference on Empirical Methods in Natural Language Processing. Ramesh Nallapati, Bowen Zhou, Cicero Nogueira dos santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-tosequence rnns and beyond. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don’t give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. Priyadarshini Panda, Abhronil Sengupta, and Kaushik Roy. 2016. Conditional deep learning for energyefficient and enhanced pattern recognition. Mary Phuong and Marcus Hutter. 2022. Formal algorithms for transformers. ArXiv, abs/2207.09238. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S. Gordon. 2011. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In Logical Formalizations of Commonsense Reasoning, Papers from the 2011 AAAI Spring Symposium, Technical Report SS-11-06, Stanford, California, USA, March 21-23, 2011. AAAI. Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, and Gabriel Synnaeve. 2023. Code llama: Open foundation models for code. Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Winogrande: An adversarial winograd schema challenge at scale. Siddharth Samsi, Dan Zhao, Joseph McDonald, Baolin Li, Adam Michaleas, Michael Jones, William Bergeron, Jeremy Kepner, Devesh Tiwari, and Vijay Gadepally. 2023. From Words to Watts: Benchmarking the Energy Costs of Large Language Model Inference. arXiv e-prints, page arXiv:2310.03003. Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019. Social IQa: Commonsense reasoning about social interactions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4463– 4473, Hong Kong, China. 
Association for Computational Linguistics. Tal Schuster, Adam Fisch, Jai Gupta, Mostafa Dehghani, Dara Bahri, Vinh Q. Tran, Yi Tay, and Donald Metzler. 2022. Confident adaptive language modeling. In Advances in Neural Information Processing Systems. 12633 Kyuhong Shim, Iksoo Choi, Wonyong Sung, and Jungwook Choi. 2021. Layer-wise pruning of transformer attention heads for efficient language modeling. In 2021 18th International SoC Design Conference (ISOCC), pages 357–358. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(56):1929–1958. PyTorch Team. 2024. gpt-fast. https://github.com/ mostafaelhoushi/gpt-fast. Surat Teerapittayanon, Bradley McDanel, and H. T. Kung. 2017. Branchynet: Fast inference via early exiting from deep neural networks. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023a. Llama: Open and efficient foundation language models. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023b. Llama 2: Open foundation and fine-tuned chat models. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc. Elena Voita, Javier Ferrando, and Christoforos Nalmpantis. 2023. Neurons in large language models: Dead, n-gram, positional. Elena Voita, Rico Sennrich, and Ivan Titov. 2019. The bottom-up evolution of representations in the transformer: A study with machine translation and language modeling objectives. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Haojun Xia, Zhen Zheng, Yuchao Li, Donglin Zhuang, Zhongzhu Zhou, Xiafei Qiu, Yong Li, Wei Lin, and Shuaiwen Leon Song. 2023. 
Flash-llm: Enabling cost-effective and highly-efficient large generative model inference with unstructured sparsity. Guangxuan Xiao, Ji Lin, Mickael Seznec, Hao Wu, Julien Demouth, and Song Han. 2023. SmoothQuant: Accurate and efficient post-training quantization for large language models. In Proceedings of the 40th International Conference on Machine Learning. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4791–4800, Florence, Italy. Association for Computational Linguistics. Jun Zhang, Jue Wang, Huan Li, Lidan Shou, Ke Chen, Gang Chen, and Sharad Mehrotra. 2023. Draft & verify: Lossless large language model acceleration via self-speculative decoding. Linfeng Zhang, Zhanhong Tan, Jiebo Song, Jingwei Chen, Chenglong Bao, and Kaisheng Ma. 2019. Scan: A scalable neural networks framework towards compact and efficient models. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc. Minjia Zhang and Yuxiong He. 2020. Accelerating training of transformer-based language models with progressive layer dropping. In Advances in Neural Information Processing Systems, volume 33, pages 14011–14023. Curran Associates, Inc. Xunyu Zhu, Jian Li, Yong Liu, Can Ma, and Weiping Wang. 2023. A survey on model compression for large language models. Tolga Çöplü, Marc Loedi, Arto Bendiken, Mykhailo Makohin, Joshua J. Bouw, and Stephen Cobb. 2023. A performance evaluation of a quantized large language model on various smartphones. 12634 A Appendix A.1 LayerSkip Hyperparameters The hyperparameters of LayerSkip training recipe: • Layer Dropout: – pmax: maximum dropout rate of last layer of the model, – S(t): layer dropout curriculum. We use either no curriculum S(t) = 1 for finetuning or continual pretraining, or an exponential curriculum, S(t) = Sexp(t) for pretraining from scratch, • Early Exit Loss: – escale: scalar scale of loss of earlier layers, – C(t, l): early exit loss curriculum, either rotational, Crot,R(t, l), or gradual, Cgrad(t, l) * R: is a dilation across layers for rotational early exit loss curriculum The hyperparameters of LayerSkip selfspeculative decoding inference: • E: layer to exit at during draft stage, • d: number of speculations, i.e., number of tokens generated during the draft stage autoregressively, that are then verified in parallel during the verification stage by the remaining layers. A.2 Additional Experiments In addition to pretraining from scratch and continual pretraining, we evaluate LayerSkip on finetuning on specific domain data in further experiments: • Finetuning on Code Data: start with pretrained Llama1 7B model Touvron et al. (2023a) and finetune on 5.2B tokens of CodeLlama Rozière et al. (2023) data mix. We use pmax = 0.1, escale = 1.0, Crot,R=16. • Finetuning on Task-Specific Dataset: start with a pretrained Llama 1.5B (24 layers) and finetune on TOPv2 Chen et al. (2020), a multidomain task-oriented compositional semantic parsing dataset. We post processed the dataset into a JSON format to be more aligned with code pre-training. We report our results on the TOPv2 evaluation set. We use pmax = 0.2, escale = 1.0, Cgrad. A.3 Experiment Details We provide details of training configuration and hyperparameters for each of our experiments in Table A1. 
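The hyperparameters listed in A.1 can be bundled into a small configuration object for bookkeeping; the structure below is our own convenience sketch, with defaults taken from the continual-pretraining setting reported for Llama2 7B in Section 5.

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class LayerSkipTrainConfig:
    p_max: float = 0.1                                                  # max layer-dropout rate (last layer)
    time_curriculum: Literal["none", "exponential"] = "none"            # S(t)
    e_scale: float = 0.2                                                # early exit loss scale
    exit_curriculum: Literal["rotational", "gradual"] = "rotational"    # C(t, l)
    R: int = 8                                                          # dilation of the rotational curriculum

@dataclass
class LayerSkipInferenceConfig:
    E: int = 8                                                          # layer to exit at in the draft stage
    d: int = 12                                                         # number of speculations per round

# Example: the Llama2 7B continual-pretraining recipe (p_max = 0.1, e_scale = 0.2, C_rot,R=8).
llama2_7b_cpt = LayerSkipTrainConfig(p_max=0.1, e_scale=0.2, exit_curriculum="rotational", R=8)
```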
When pretraining from scratch, layer dropout leads to higher accuracy when trained with a higher learning rate (Srivastava et al., 2014). Therefore, we show the learning rates of each experiment with and without layer dropout separately in Table A2.

Experiment | Model | Batch Size | Steps | GPUs
Continual Pretraining | Llama2 7B | 4 | 50 × 10^3 | 64 A100 80 GB
Continual Pretraining | Llama2 13B | 4 | 50 × 10^3 | 64 A100 80 GB
Pretraining from Scratch | Llama 1.5B | 4 | 50 × 10^3 | 32 A100 30 GB
Pretraining from Scratch | Llama2 7B | 4 | 50 × 10^3 | 32 A100 30 GB
Finetuning on Code Data | Llama1 7B | 4 | 10 × 10^3 | 32 A100 80 GB
Finetuning on Task-Specific Dataset | Llama 1.5B | 32 | 5.8 × 10^3 | 8 A100 80 GB
Table A1: Training hyperparameters and configuration of experiments.

Experiment | Model | Dropout | Initial Learning Rate
Continual Pretraining | Llama2 7B | ✓ | 3 × 10^-5
Continual Pretraining | Llama2 13B | ✓ | 2 × 10^-5
Pretraining from Scratch | Llama 1.5B | – | 4 × 10^-4
Pretraining from Scratch | Llama 1.5B | ✓ | 8 × 10^-4
Pretraining from Scratch | Llama2 7B | – | 3 × 10^-4
Pretraining from Scratch | Llama2 7B | ✓ | 8 × 10^-4
Finetuning on Code Data | Llama1 7B | – | 1 × 10^-4
Finetuning on Code Data | Llama1 7B | ✓ | 1 × 10^-4
Finetuning on Task-Specific Dataset | Llama 1.5B | – | 1 × 10^-4
Finetuning on Task-Specific Dataset | Llama 1.5B | ✓ | 1 × 10^-4
Table A2: Learning rates of experiments.

A.3.1 Model Architectures

We provide details of the architectures of the different models in Table A3.

Model | Dim | Heads | Layers | Context
Llama 1.5B | 2048 | 16 | 24 | 4096
Llama1 7B (Touvron et al., 2023a) | 4096 | 16 | 32 | 2048
Llama2 7B (Touvron et al., 2023b) | 4096 | 16 | 32 | 4096
Llama2 13B (Touvron et al., 2023b) | 5120 | 40 | 40 | 4096
Table A3: Model architectures.

A.3.2 Evaluation Tasks

We have evaluated our language models on a wide range of tasks. For the sake of the discussion in §6.1, we categorize the tasks into:
• "Classification" tasks, where the model responds with one out of a set of pre-defined answers, e.g., multiple-choice questions, or questions whose answers are either "True" or "False":
  – Common sense reasoning tasks:
    * BoolQ (Clark et al., 2019)
    * PIQA (Physical Interaction Question Answering) (Bisk et al., 2020)
    * SIQA (Social Interaction Question Answering) (Sap et al., 2019)
    * HellaSwag (Zellers et al., 2019)
    * Winogrande 1.1 (Sakaguchi et al., 2019)
    * ARC (AI2 Reasoning Challenge) (Clark et al., 2018): ARC Challenge and ARC Easy
    * OBQA (Open Book Question Answers) (Mihaylov et al., 2018)
    * COPA (Choice Of Plausible Alternatives) (Roemmele et al., 2011)
  – RACE (ReAding Comprehension dataset from Examinations) (Lai et al., 2017): RACE Middle and RACE High
  – MMLU (Massive Multitask Language Understanding) (Hendrycks et al., 2021a)
• "Generation" tasks, where the model responds with an open-ended sequence of tokens and we evaluate either the exact match of the tokens with a reference answer or, in the case of code, whether it builds or executes:
  – Question answering: NQ (Natural Questions) (Kwiatkowski et al., 2019) and TQA (Textbook Question Answering) (Kembhavi et al., 2017)
  – Mathematics: MATH (Hendrycks et al., 2021b) and GSM8K (Cobbe et al., 2021)
  – Code generation: HumanEval (Chen et al., 2021) and MBPP (Mostly Basic Python Problems Dataset) (Austin et al., 2021)

We also evaluate perplexity on held-out test sets of the following datasets: The Stack, a coding dataset (Kocetkov et al., 2022); Books (Gao et al., 2020); and Wikipedia.

A.4 Additional Results

A.4.1 Early Exit Results

Continual Pretraining
Table A4 zooms into the accuracies of the middle and last layers of the Llama2 7B and Llama2 13B continual pretraining experiments that are shown in Figure 5. It is noteworthy that some "classification" tasks, i.e., multiple-choice or true/false question tasks, maintain relatively decent accuracy at earlier layers of the baseline model, while open-ended "generation" tasks drop drastically.
Surprisingly, MMLU (Hendrycks et al., 2021a), which is considered a challenging task, only drops from 55.2% to 49.2% on the Llama2 13B baseline from the last to the middle layer. This could be because classification tasks are evaluated on generating one token only, while generation tasks are evaluated on the accuracy of many tokens, and an error in one token may have a compounding effect when generating later tokens. Moreover, classification tasks evaluate a token out of 2 or 4 possible outcomes, while generation tasks evaluate each token out of thousands of possible entries in the LLM's dictionary. We observe LayerSkip's significant importance on generation tasks: e.g., NaturalQuestions (Kwiatkowski et al., 2019) drops from 25.1% to 0% when exiting in the middle layers of Llama2 7B, but jumps to 4% when using LayerSkip. Figure A1 shows sample generations exiting at different layers for Llama2 7B and Llama2 13B with and without continual pretraining with LayerSkip.

Task | Llama2 7B last (layer 32) | Llama2 7B middle (layer 16) | Llama2 13B last (layer 40) | Llama2 13B middle (layer 20)
Eval perplexity ↓
Wikipedia | 4.32 / 4.3 | 1900 / 8.12 | 3.97 / 3.98 | 507 / 10.5
Selected Books | 1.60 / 1.06 | 4390 / 6.53 | 1.40 / 1.40 | 1170 / 11.9
The Stack | 2.15 / 2.14 | 968 / 2.99 | 2.05 / 2.06 | 65.8 / 3.71
Common sense reasoning tasks ↑ (multiple-choice / true-false questions)
BoolQ | 77.4 / 77.8 | 62.2 / 75.7 | 81.6 / 82.0 | 62.2 / 69.7
PIQA | 78.0 / 77.9 | 57.9 / 69.5 | 79.3 / 78.5 | 62.8 / 67.8
SIQA | 44.7 / 44.2 | 37.8 / 42.0 | 46.7 / 46.3 | 40.7 / 44.7
HellaSwag | 57.0 / 56.6 | 31.5 / 43.8 | 60.1 / 60.3 | 35.6 / 46.8
WinoGrande | 69.8 / 71.4 | 58.6 / 65.2 | 72.3 / 72.5 | 59.4 / 68.1
ARC-e | 76.5 / 76.5 | 38.6 / 57.5 | 79.4 / 79.2 | 48.8 / 61.1
ARC-c | 43.8 / 43.6 | 26.8 / 30.6 | 48.3 / 47.3 | 31.9 / 35.6
OBQA | 33.4 / 33.4 | 19.6 / 25.4 | 34.4 / 35.4 | 23.8 / 25.4
COPA | 90 / 88 | 68 / 79 | 91 / 93 | 73 / 82
Reading comprehension ↑ (multiple-choice questions)
RACE Middle | 58.2 / 57.4 | 34.0 / 51.1 | 62.0 / 60.7 | 40.9 / 55.1
RACE High | 42.9 / 42.2 | 28.0 / 37.6 | 44.9 / 44.5 | 31.8 / 39.3
MMLU ↑ (multiple-choice questions)
MMLU | 46.0 / 43.1 | 38.9 / 40.2 | 55.2 / 53.7 | 49.2 / 52.9
Question answering ↑ (open-ended answers)
NaturalQuestions | 25.1 / 23.2 | 0.0554 / 4.07 | 31.5 / 31.8 | 0.609 / 4.43
TriviaQA | 58.5 / 56.8 | 0.619 / 11.8 | 66.2 / 66.3 | 4.36 / 11.4
Mathematics ↑ (open-ended answers)
GSM8K | 14.3 / 12.2 | 0 / 2.05 | 29.3 / 27.4 | 0.0758 / 1.74
MATH | 3.22 / 3.16 | 0 / 0.96 | 5.06 / 5.16 | 0 / 0.46
Code generation ↑ (open-ended answers)
HumanEval | 13.4 / 15.9 | 0 / 4.88 | 18.9 / 18.3 | 0 / 3.05
MBPP | 21.0 / 22.4 | 0 / 7.20 | 26.4 / 29.0 | 0 / 3.40
Table A4: Evaluation of continual pretraining of Llama2 7B and Llama2 13B. Each cell shows Baseline / LayerSkip.

Pretraining from Scratch
Figure A3 shows sample generations exiting at different layers for Llama2 7B pretrained with and without LayerSkip.

Finetuning on Code Data
In Figure A4a, we present our results on two coding tasks and compare accuracy to Llama1 7B finetuned on the same number of code tokens without LayerSkip. For earlier layers, LayerSkip is clearly better than the baseline, with layer dropout combined with early exit loss showing a big improvement on one of the two tasks. For last-layer accuracy, LayerSkip with both layer dropout and early exit loss has almost the same accuracy as the baseline. Note that since this experiment finetuned on domain-specific data, we were able to increase e_scale to 1.0 (as opposed to e_scale = 0.1 or 0.2 in the previous two configurations).

Finetuning on Task-Specific Dataset
In Figure A4b, we compare the results of finetuning our Llama 1.5B model on the TOPv2 training set with and without LayerSkip.
In semantic parsing, correctness requires an exact match (EM) between the generated sequence and the annotated parse. We find that, when removing layers from the baseline model,
the model is not able to generate any complete or accurate parses.

[Figure A1: Early exit text generation examples for models continually pretrained with LayerSkip, contrasting Baseline and LayerSkip-LD+EE generations. Panel (a): Llama2 7B (32 layers) exiting at layer 20; panel (b): Llama2 13B (40 layers) exiting at layer 24. Prompts (shown in blue in the original figure) include "The capital of Egypt is", "The capital of New York State is", and short Python function stubs; incorrect phrases or words (factually or grammatically wrong, or hallucinations) are marked in red. At these early exits, the baseline degenerates into repeated tokens, while the LayerSkip model produces coherent text and code; with self-speculative decoding, the incorrect phrases are fixed by verifying with the remaining layers.]
However, with LayerSkip, early exit inference improves to 77% at layer 12. We notice a regression in the final layer, reducing accuracy by 3%. Again, as this configuration finetuned on task-specific data, we were able to set e_scale = 1.0.

A.4.2 Self-Speculative Decoding Results

Finetuning on Code Data
In Table A5, we evaluate our code-finetuned Llama1 7B on HumanEval using 12 speculations and exiting at layer 6 for self-speculation and early exit. The experiments were performed on NVIDIA A100 GPUs. We show a speedup of up to 1.82× with no accuracy drop.

Finetuning on Task-Specific Dataset
In Table A6, we show results for Llama 1.5B finetuned on the TOPv2 training set and evaluated on the TOPv2 test set. The experiments were performed on NVIDIA H100 GPUs. We report EM (exact match) on the full TOPv2 test set; for the latency experiments, we further sample 1,000 examples, using 8 speculations and generating the next 80 tokens with greedy decoding. With self-speculation, the model achieves a high token acceptance rate (E = 6: 76.0%, E = 12: 97.2%, E = 18: 98.9%), reaching a 2.0× speedup.

Generation | E | ROUGE-2 | Token Acc. | Tokens per Sec. | Speedup
Autoregressive | – | 0.0513 | – | 34 | 1.0×
Early Exit | 6 | 0.0035 | – | 170 | –
Self Speculative | 6 | 0.0513 | 45% | 62 | 1.82×
Table A5: Generation results on HumanEval for Llama 7B finetuned on code.

[Figure A2: Early exit evaluation of pretraining from scratch on 26B tokens; panel (a) Llama2 1.5B and panel (b) Llama2 7B. Each panel plots test perplexity (Wikipedia, Books, The Stack) and task performance (accuracy on PIQA, SIQA, HellaSwag, Winogrande 1.1, ARC Easy, COPA, RACE Middle/High; EM on NQ, TQA, GSM8K, MATH; pass@1 on HumanEval, MBPP) against the exit layer, for Baseline, LayerSkip-LD, LayerSkip-EE, and LayerSkip-LD+EE.]
[Figure A3: Early exit text generation examples for models pretrained from scratch on 26B tokens with and without LayerSkip. Panel (a): Llama2 7B (32 layers) exiting at layer 24, comparing Baseline, LayerSkip-LD, LayerSkip-EE, and LayerSkip-LD+EE continuations of the prompt "The capital of Egypt is" (blue: prompt; red: factually or grammatically wrong phrases or hallucinations). The baseline collapses into repetition, while the LayerSkip variants produce fluent, if partly hallucinated, text; with self-speculative decoding, the incorrect phrases are fixed by verifying with the remaining layers.]

A.5 Ablation Studies

KV Cache in Self-Speculation
In §4.3.3 we introduced the re-use of the KV cache as a method for improving model generation speed. We measure its effect in Table A7, following the same inference setup as described in §6.2. We find that the use of the KV cache is able to consistently save us 9–20 ms per token, depending on the task.

[Figure A4: Early exit evaluation of finetuning on domain-specific or task-specific data. Panel (a): finetuning Llama1 7B on code (pass@1 on HumanEval and MBPP vs. exit layer); panel (b): finetuning Llama 1.5B on the TOPv2 training set (accuracy vs. exit layer); curves for Baseline, LayerSkip-LD, LayerSkip-EE, and LayerSkip-LD+EE.]

Generation | E | EM | Token Acc. | Time per Token (ms) | Speedup
Autoregressive | – | 85.9% | – | 36 | 1.00×
Early Exit | 18 | 83.3% | – | 28 | –
Early Exit | 12 | 79.4% | – | 19 | –
Early Exit | 6 | 62.9% | – | 10 | –
Self Speculative | 18 | 82.9% | 98.9% | 29 | 1.24×
Self Speculative | 12 | 82.9% | 97.6% | 22 | 1.64×
Self Speculative | 6 | 82.9% | 76.0% | 18 | 2.0×
Table A6: Generation results on the TOPv2 task for Llama 1.5B finetuned on TOPv2 training data.

Generation | TOPv2 ms/t | CNN/DM ms/t
Self Speculation (E = 18) | 134 | 166
w/o KVQ reuse | 143 | 182
Self Speculation (E = 12) | 104 | 165
w/o KVQ reuse | 110 | 185
Table A7: Ablation on re-use of the KV cache and exit query cache. Results are presented on CPU inference.

Optimized Implementation
The self-speculative decoding performance results were based on HuggingFace Transformers (Wolf et al., 2020) in eager mode. We have developed another implementation on gpt-fast (Team, 2024) that optimizes performance using torch.compile(). We provide the prompt "Hello, my name is" to our continually pretrained Llama2 7B and measure the average tokens per second over 1,000 runs on a single NVIDIA A100 GPU. We also compare with regular speculative decoding, where the draft model is a 4-bit quantized model. We use the optimal number of speculations and early exit layer based on a sweep for both our self-speculative and speculative decoding solutions. The results are presented in Table A8.
We can see that:
• Our proposed self-speculative decoding solution consumes the same memory (total memory for weights, activations, and KV-cache) as the baseline autoregressive solution, and less than the standard speculative decoding solution that requires two models, as we re-use the earlier subset of layers of the model as the draft stage.
• Our proposed self-speculative decoding solution is faster than the standard one. This does not necessarily mean self-speculation is faster than speculation; more experiments with different-sized draft models are required to evaluate that.

Generation | Temperature | Draft | d | Total Memory (GB) | Tokens per Second | Speedup
Autoregressive | – | – | – | 13.90 | 108.52 | 1×
Speculative | 0.0 | Llama2 7B Int4 | 5 | 18.26 | 125.06 | 1.15×
Self-Speculative | 0.0 | E = 5 | 3 | 13.90 | 150.07 | 1.38×
Speculative | 0.6 | Llama2 7B Int4 | 5 | 19.30 | 122.05 | 1.12×
Self-Speculative | 0.6 | E = 4 | 3 | 13.90 | 133.98 | 1.23×
Table A8: Decoding performance evaluation on PyTorch gpt-fast of Llama2 7B continually pretrained with LayerSkip.

CPU Inference Experiments
We use our task-specific finetuning of Llama 1.5B to measure decoding performance on CPU as well, showing a near 2× speedup on CPU, presented in Table A9. We conduct these experiments using the first 100 samples from the TOPv2 test set, leveraging 7 speculations and generating the next 50 tokens with greedy decoding.

Generation | E | EM | Acceptance | Time per Token (ms)
Autoregressive | – | 85.39 | – | 165
Early Exit | 18 | 82.0 | – | 124
Early Exit | 12 | 77.2 | – | 84
Early Exit | 6 | 29.8 | – | 44
Self Speculation | 18 | 82.9 | 99 | 134
Self Speculation | 12 | 82.9 | 97 | 104
Self Speculation | 6 | 82.9 | 76 | 87
Table A9: Generation results on CPU for the TOPv2 task for a small Llama-like model finetuned on TOPv2 training data.

Scaling with Pretraining Tokens
In order to understand how the accuracy of the last and middle layers changes over time when pretraining from scratch, we ran three training experiments with different numbers of tokens on Llama 1.5B and show the results in Figure A5. Each experiment trained for 50,000 steps with a per-device batch size of 4 and a context window of 4096, but changed the number of GPUs to 32, 64, and 128. We plot the perplexity of a held-out split of The Stack dataset at the last layer (layer 24) and the middle layer (layer 12). As expected, perplexity at the last layer decreases as we train on more tokens. However, surprisingly, we discover that perplexity at the middle layer increases drastically by default during training, unless we apply early exit loss; layer dropout reduces the increase as well. This could open the door to more research on the dynamics of transformers and the evolution of embeddings in earlier layers, to understand why embeddings across layers are close to each other early in training but diverge drastically as training progresses. This could also present a motivation for our training recipe, which has a minimal drop in last-layer accuracy while significantly improving the accuracy of earlier layers.

[Figure A5: Perplexity on The Stack (Kocetkov et al., 2022) test set when pretraining Llama 1.5B from scratch with different numbers of tokens (32×, 64×, and 128× GPU configurations); panels show last-layer and middle-layer perplexity for Baseline, LayerSkip-LD, LayerSkip-EE, and LayerSkip-LD+EE.]

Layer Dropout Configurations
In Figure A6 we show that our layer dropout configuration leads to lower training loss compared to a constant layer dropout across all layers with the same average value.
[Figure A6: Average training loss over training steps using different layer dropout configurations. "Const" refers to equal dropout on all layers, equal to 0.0889, and "Exp" refers to dropout increasing exponentially from 0 at the first layer to 0.2 at the last layer. Both configurations have equivalent average dropout across all layers.]

A.6 Self Speculation Pseudo Code

Below we share pseudo code for implementing self-speculation.

    def self_speculate(model, input_tokens, num_speculations, max_tokens):
        """Self-speculative decoding.

        Arguments:
          model: decoder LLM with L layers that supports two main functions:
            * forward_early: computes inference with the first E layers of the
              transformer, saving the KV states and the exit-layer query cache (kvq)
            * forward_remainder: computes verification with the last L - E layers,
              re-using the kvq cache
          input_tokens: the input prompt sequence for the model
          num_speculations: the number of speculations (draft tokens) per iteration
          max_tokens: the maximum number of tokens to generate
        """
        # `output` contains the generations; `kvq_cache` is the KV cache for
        # generation, with the addition of the exit layers' query cache to
        # speed up verification.
        output = []
        kvq_cache = None

        # Continue with speculative generation until the maximum length is hit.
        while len(output) < max_tokens:
            # Produce `num_speculations` draft tokens autoregressively using
            # the first E layers of the model (the cache is updated in place).
            draft_tokens = []
            draft_input = input_tokens
            for _ in range(num_speculations):
                draft_token = model.forward_early(draft_input, kvq_cache)
                draft_tokens.append(draft_token)
                draft_input = [draft_token]

            # Verify all draft predictions with the remaining layers in a
            # single forward pass.
            verified_tokens = model.forward_remainder(input_tokens + draft_tokens, kvq_cache)

            # Find the accepted tokens by comparing the draft tokens against
            # the verified tokens.
            matched_tokens = match(draft_tokens, verified_tokens)
            output.extend(matched_tokens)

            # The last matched token does not have a cache entry yet and is
            # fed back as the next input.
            input_tokens = [matched_tokens[-1]]

            # The kvq cache needs to be cropped to remove the last token.
            kvq_cache = crop(kvq_cache, len(output) - 1)

        return output
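The sketch above leaves the match helper undefined. A minimal acceptance rule that is consistent with the description ("find the number of accepted tokens by comparing the matches between the draft and verified tokens") could look as follows; this is our own illustration, and the exact behaviour at the first mismatch (appending the verifier's token) is an assumption rather than the authors' implementation.

```python
def match(draft_tokens, verified_tokens):
    """Accept draft tokens up to the first disagreement with the verification
    pass; on a mismatch, take the verifier's token instead so that every
    iteration is guaranteed to make progress."""
    accepted = []
    for draft, verified in zip(draft_tokens, verified_tokens):
        if draft == verified:
            accepted.append(draft)
        else:
            accepted.append(verified)
            break
    else:
        # All drafts were accepted; the verification pass also predicts one
        # extra token beyond the last draft, which can be kept for free.
        if len(verified_tokens) > len(draft_tokens):
            accepted.append(verified_tokens[len(draft_tokens)])
    return accepted


# Example: the third draft is rejected and replaced by the verifier's token.
print(match([5, 7, 9], [5, 7, 2, 4]))  # -> [5, 7, 2]
```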
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12643–12655 August 11-16, 2024 ©2024 Association for Computational Linguistics Classist Tools: Social Class Correlates with Performance in NLP Amanda Cercas Curry MilaNLP Bocconi University [email protected] Giuseppe Attanasio Instituto de Telecomunicações Lisbon, Portugal [email protected] Zeerak Talat Mohamed Bin Zayed University of Artificial Intelligence [email protected] Dirk Hovy MilaNLP Bocconi University [email protected] Abstract The field of sociolinguistics has studied factors affecting language use for the last century. Labov (1964) and Bernstein (1960) showed that socioeconomic class strongly influences our accents, syntax and lexicon. However, despite growing concerns surrounding fairness and bias in Natural Language Processing (NLP), there is a dearth of studies delving into the effects it may have on NLP systems. We show empirically that NLP systems’ performance is affected by speakers’ SES, potentially disadvantaging less-privileged socioeconomic groups. We annotate a corpus of 95K utterances from movies with social class, ethnicity and geographical language variety and measure the performance of NLP systems on three tasks: language modelling, automatic speech recognition, and grammar error correction. We find significant performance disparities that can be attributed to socioeconomic status as well as ethnicity and geographical differences.1 With NLP technologies becoming ever more ubiquitous and quotidian, they must accommodate all language varieties to avoid disadvantaging already marginalised groups. We argue for the inclusion of socioeconomic class in future language technologies. 1 Introduction The relationship between social class and language was first examined by Labov (1964). He discovered that New Yorkers with greater socioeconomic status pronounce the /R/ sound after vowels, whereas individuals with lower socioeconomic position drop it. He quantified this observation by asking employees in various department shops (a proxy for socioeconomic rank) for things found on 1We make our artifacts available on our Github repository. Figure 1: Overview of the empirical framework. We use OpenSubtitles as truth-value data to calculate perplexity, grammaticality, and word-error-rate in Automatic Speech Recognition (ASR) for each episode. the fourth floor. Then he kept track of how many Rs were dropped in those two words. He discovered a clear anti-correlation between the store’s (and presumably the speakers’) socioeconomic standing and the quantity of dropped Rs: the higher the status, the fewer dropped Rs. Since Labov’s (1964) analysis of social stratification and language, linguistics has made concerted efforts to understand how different sociodemographic factors influence language production and perception, and how speakers use them to create identity (Rickford, 1986; Bucholtz and Hall, 2005; Eckert, 2012); using both naturally produced language and language from media like TV shows and films (Stamou, 2014) to study sociolects. Despite a considerable body of evidence demonstrating links between language and demographic characteristics (the “first wave” of sociolinguistic variation studies), comparatively few socio-demographic factors have been studied in the context of Natural Language Processing (NLP) technology (Cercas Curry et al., 2024). But NLP is a product and a tool for understanding other phe12643 nomena and must represent all language varieties. 
Existing research on socio-demographic characteristics has primarily concentrated on the signaling effect of certain linguistic variables like age, ethnicity, regional origin, and gender (Johannsen et al., 2015). Most of these publications focus on any bias toward the particular variable they study. However, NLP hardly interacts with the second and third waves of sociolinguistics, i.e., how variation 1) shapes local identity and 2) drives language development. To narrow this gap, we use NLP to focus on socioeconomic position. Using a dataset of 95K utterances from Englishlanguage television episodes and movies, we empirically investigate performance as a function of socioeconomic class. We study its effect on vocabulary, Automated Speech Recognition (ASR), language modelling, and grammatical error correction. In all applications, we see strong correlations. Movie characters are usually typecast to present as a certain class, providing a representative sample without the privacy issues associated with regular subjects. The authenticity of these representations is backed by work in sociolinguistics such as McHoul (1987) and Quaglio (2008). We discover that the performance of NLP tools is associated with socioeconomic class, geographic variety (US vs. UK), and race across all tasks. Our findings highlight an important lack of flexibility of NLP tools, and a more fundamental issue: What does it mean if NLP technologies only reliably work for a limited segment of society? Contributions • We construct a corpus of 95K utterances from television shows and movie scripts, coded for socioeconomic status, geography, and race. • We empirically show that socioeconomic status measurably impacts the performance of NLP systems across four measures. 2 Social Class and Language Social stratification is the grouping of people based on their socioeconomic status (SES), which is determined by factors such as income, education, wealth, and other characteristics. Different groups are distinguished in terms of power and prestige. There are various social stratification systems, such as the Indian caste system, indigenous American clans or tribes, and the Western hierarchical class system. The exact number of social strata is unknown and varies by country and culture. However, at least three strata are commonly used to refer to different groups in Western societies: upper, middle, and lower-class people. Other systems distinguish between blue collar and white collar jobs. Recently, the Great British Class Survey (GBCS, Savage et al., 2013) has taken an empirical approach to understanding the different social strata, and they propose a seven-level system for the United Kingdom. Their stratification is based on economic, social, and cultural capital: elite, established middle-class, technical middle-class, new affluent workers, traditional working class, emergent service workers, and precariat. They derive these classes from a survey conducted over British citizens that received more than 160K responses. Social class influences people’s daily lives by granting or limiting access to resources. Beyond power and prestige, social stratification has a significant impact on people. For example, lower socioeconomic status has been linked to poorer health outcomes and higher mortality. (Saydah et al., 2013). SES also affects language use from the very early stages of development. 
Bernstein (1960) theorised that language takes on a different role in families of different socioeconomic status, where middleclass parents encourage language learning to describe more abstract thinking leading to broader vocabularies. In working-class families, parents are limited to more concrete and descriptive concepts. Parents from lower SES tend to interact less with their children, with fewer open-ended questions than parents from higher SES, which shapes language development (Clark and Casillas, 2015). While Usategui Basozábal et al. (1992) show that education reduces this language gap, education has traditionally been one of the most important factors in determining social class and potential for upward social mobility. Given the well-documented effects of socioeconomic status on language development and use, it stands to reason that social class should be carefully considered as a variable in NLP. 3 Dataset Here, we want to assess the impact a user’s social class has on the performance of NLP systems. Although there have been concerted efforts to document wide varieties of English, existing datasets do not currently capture all social strata or other sociodemographic aspects sufficiently. For example, the National Speech Corpus (Clopper and Pisoni, 2006) provides education information 12644 for the talker and their parents but consists of almost exclusively white and educated speakers;2 the CORAAL corpus (Kendall and Farrington, 2018) on the other hand is focused on African American English (AAE), not capturing other ethnicities. To the best of our knowledge, no datasets are available that would suit our purposes: a large enough sample that is annotated with the speakers’ socieconomic background. We thus collect our own. We identify an initial sample of English language television series by selecting the 250 highestrated series on the International Movie Database (IMDB).3 We annotate each show according to attributes,4 i.e., class, race, gender, dialect, and geography of the main characters, as well as the genre and time period of the show. Three annotators familiar with the shows agree on the labels unanimously. We only consider shows that are scripted to ensure that the language production of each character is controlled by the script-writers and actors to produce a particular vernacular and to avoid collecting highly sensitive information about private individuals. We therefore exclude cartoons, documentaries. We further exclude science fiction and historical fiction, as these are likely to operate without strict adherence to existing social structures and language variation. After our filtering and selection, 63 shows remain in consideration. Of those, 34 are predominantly about white men, and 29 feature a mixture of white women and men. Only three predominantly feature female characters, and none black or lower-class female characters. Based on this initial sample, we identified additional shows to augment gaps (e.g., the lack of Black stories) in class and dialect. Because we want to measure the effect of class separately from gender, race and regional accent, we select the TV shows from the list representing each category. In our final selection of movies and television shows, we include a total of 19 shows that are based in the United States of America and the United Kingdom. To ensure racial diversity, we emphasise movies and shows that predominantly represent Black characters in addition to movies and 2Education is one aspect of social class. 
For a further discussion see Saunders (1990) and Savage et al. (2013). 3IMDB top 250 TV shows. 4Note that we are using ascribed characteristics rather than self-reported since we are dealing with fictional characters, however these two are found to be highly correlated based on linguistic features (Weirich and Simpson, 2018) shows that represent white characters.5 We include the U.S.-based shows The Wire and When They See Us (predominantly lower-class Black characters), Fargo (middle-class Black characters), The Fresh Prince of Bel-Air (upper-class Black characters). For the shows’ centre white characters, we include Trailer Park Boys (lower-class white), The Sopranos (lower-class white), Breaking Bad (middle-class white), and Arrested Development (upper-class white). The list of shows above, however, predominantly feature male characters. For this reason, we also include shows whose characters represent a wider array of genders. The shows that we include are Pose (lower-class Black and Latinx), Big Little Lies (middle-class white), and Sex and the City (upper middle-class white). Finally, to compare with other regional varieties of English, we include movies and shows from different regions in the United Kingdom. The movies and shows that we include are Train Spotting and T2 (lower-class white Scottish, predominantly male characters), Derry Girls (lower-class white Irish, primarily female), Downton Abbey (English), The IT Crowd (white, middle-class) and The Crown (upper-class white English). We report full details in Table 8, Appendix A. We collect the first season for each of the shows on our list, except for Fargo and The Wire. For Fargo, we collect the fourth season, which includes Black characters. For The Wire, we collect seasons 1 and 3, which feature Black characters but have a minority representation of white characters. We deliberately chose this approach to data selection from “artificial sources” since collecting class information is highly sensitive and can introduce privacy risks. Fictional characters, however, are often scripted into particular roles with explicit SES. A drawback of this methodology is the question of authenticity - the extent to which actors are representative of particular sociolects. Actors reportedly undergo significant training to learn different lects.6 By selecting highly rated shows, we hope to capture good, realistic performances. Moreover, TV show and film data is increasingly used in sociolinguistics (Stamou, 2014) and Quaglio (2008) found important similarities between real and TV show dialogue. In NLP, media data is broadly used 5At least one of the authors is familiar with every show and movie. We annotated them for race and class based on the setup and main characters. 6See for example, https://www.newyorker.com/ magazine/2009/11/09/talk-this-way 12645 Geography Class Episodes Utterances USA Low 13 20119 Low, middle 13 22947 Middle 7 4140 Middle, Upper 25 13221 Upper 15 8561 EN Middle 6 2271 Upper 3 1515 Upper, Low 7 5214 NE Low 7 3929 Middle 2 1238 Table 1: Summary of statistics by geographic variety and class. EN: England, NE: non-English U.K. in tasks like emotion analysis (Plaza-del Arco et al., 2024) and dialogue (Lison and Tiedemann, 2016). We therefore have reason to believe our dataset provides a reasonable proxy for real speakers. In total, our dataset contains 95K utterances, text and speech, from 19 TV shows and movies (see Table 1 for dataset statistics). 
3.1 Lexical Analysis

Bernstein (1960) showed that children from working-class families had significantly smaller vocabularies even when general IQ was controlled for, and Flekova et al. (2016) showed that there is significant lexical and stylistic variation between different social strata, with lexical features being stronger predictors of income than even age. We first assess whether our dataset has sufficient linguistic cues to capture lexical differences. We generally follow the methodology set out in Flekova et al. (2016); however, we model class as a categorical value. We calculate:

Surface: Turn length, mean word length per turn, ratio of words longer than five letters, and type-token ratio. Flekova et al. (2016) also measure the use of emojis, but this is not relevant to our analysis.

Readability Metrics: Readability metrics aim to measure the complexity of a text, generally based on the number of syllables per word and the number of words per sentence. Using Textstat,7 we calculate the Automatic Readability Index (ARI) (Senter and Smith, 1967), the Flesch-Kincaid Grade Level (FKG) (Kincaid et al., 1975), the Coleman-Liau Index (CLI) (Coleman and Liau, 1975), the Flesch Reading Ease (FRE) (Flesch, 1948), the LIX Index (LIX) (Anderson, 1983), and the Gunning-Fog Index (FOG) (Gunning, 1968). We normalise the text before calculating these metrics by lower-casing and removing punctuation except for periods, since these are used by CLI. Note that these metrics were designed to evaluate long-form text.

Syntax: Texts with more nouns and articles, as opposed to pronouns and adverbs, are considered more formal. We measure the ratio of each POS per turn using Stanza (Qi et al., 2020).

Style: Finally, to capture some notion of speaker style, we measure the number of abstract words,8 the ratio of hapax legomena (HL), and the number of named entities (NER).

7 https://github.com/textstat/textstat
8 https://onlymyenglish.com/list-of-abstract-nouns/

Feature | Class | Gender | Race | Geography
Length | -0.033 | -0.034 | -0.025 | 0.052
Chars | -0.437 | 0.243 | 0.203 | 0.155
>5 | 0.073 | -0.024 | 0.017 | -0.022
TTR | -0.019 | -0.047 | -0.046 | -0.085
FRE | -0.096 | 0.033 | 0.032 | 0.074
ARI | -0.214 | 0.068 | 0.059 | 0.110
CLI | 0.467 | -0.213 | -0.201 | -0.137
FKG | 0.007 | -0.035 | -0.029 | -0.005
FOG | 0.137 | -0.031 | -0.040 | -0.025
LIX | 0.106 | 0.013 | -0.007 | -0.065
NOUN | -0.181 | 0.076 | -0.035 | 0.163
VERB | -0.059 | 0.039 | -0.002 | 0.082
PROPN | -0.017 | -0.064 | -0.099 | -0.067
ADV | -0.066 | -0.068 | -0.103 | -0.037
ADJ | -0.051 | 0.044 | 0.021 | 0.076
DET | 0.037 | 0.059 | 0.024 | 0.026
INTJ | 0.121 | -0.079 | -0.093 | -0.044
PRON | -0.004 | 0.014 | 0.008 | 0.020
NER | 0.078 | -0.001 | 0.035 | -0.010
abst | 0.052 | -0.034 | -0.026 | -0.034
HL | -0.034 | 0.109 | 0.142 | 0.066
Table 2: Coefficients for the logistic regression model. For each class and type of feature, the strongest coefficient is in bold face.

Representation | Val | Test
Lexical | 0.290 | –
TF-IDF | 0.528 | 0.524
TF-IDF + Lexical | 0.297 | 0.290
Sentence Embed | 0.457 | 0.454
Table 3: F1 Macro for different representations on social class prediction. Best results in bold.

We run a multi-class logistic regression to understand whether the features are correlated with class. After normalisation, we run a logistic regression with social class as the predicted class and the features above as the predictive features.
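As a hedged illustration of this setup (our own sketch, not the authors' code: the placeholder data, the number of class strata, and the preprocessing details are assumptions), the multi-class regression and its macro-F1 evaluation could be run with scikit-learn as follows:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# X: one row per utterance with the surface, readability, syntax, and style
# features described above; y: the annotated social class label. Random
# placeholders are used here purely to make the sketch runnable.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 21))    # 21 features, as listed in Table 2
y = rng.integers(0, 5, size=1000)  # e.g., five class strata (assumption)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Normalise the features, then fit a logistic regression (multi-class is
# handled automatically by scikit-learn).
scaler = StandardScaler().fit(X_train)
clf = LogisticRegression(max_iter=1000)
clf.fit(scaler.transform(X_train), y_train)

# Macro F1 mirrors the metric reported in Table 3.
pred = clf.predict(scaler.transform(X_test))
print("macro F1:", f1_score(y_test, pred, average="macro"))
```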
In contrast with Flekova et al. (2016), we find our set of features only mildly predictive in this dataset. Although CLI and the number of characters have strong coefficients, the overall performance is very low: our best model achieves a macro F1 of 0.16, only slightly better than a most-frequent-label baseline (macro F1 of 0.07). In addition, we run the same regression for the other sociodemographic aspects: gender, race, and geographical origin. Overall, we find these less predictive, with F1 scores similar to the baseline, suggesting the features better capture differences caused by SES than by other sociodemographic factors. Table 2 shows the coefficients for each sociodemographic factor. Lexical features do not capture strong differences in SES, indicating one of two things: the features are better suited for written text (for example, many of them capture formality), and/or our dataset does not accurately capture linguistic differences. To assess whether linguistic cues in our dataset might help us differentiate between speakers of different SES in ways the previous features did not capture, we experiment with alternative text encodings such as TF-IDF and sentence embeddings (Reimers and Gurevych, 2019).9 The results are shown in Table 3. Logistic regression based on TF-IDF achieves an F1 of 0.52 (1.8× the lexical features). We hypothesise two reasons for the improvement over the lexical features: (1) there are linguistic differences that the metrics do not capture, or (2) the logistic regression model can better differentiate between TV shows. Given the existing literature in support of linguistic differences and the (albeit small) predictive power of the metrics, we ascribe the improvement to a mixture of both. With this in mind, we continue our experiments with user-facing NLP systems.

9 We use scikit-learn's TfidfVectorizer collecting uni- and bi-grams for TF-IDF, and https://huggingface.co/sentence-transformers/all-mpnet-base-v2.

4 Social Class and Speech

Beyond written language, our dataset affords testing whether speech models understand and process accent unevenly. Therefore, we test disparities in ASR word error rates across social classes. We calculate the WER for all shows and movies to measure how well ASR technologies capture different varieties of English in our sample.

Models: We use Wav2Vec2 (Baevski et al., 2020), XLS-R (Babu et al., 2021), and Whisper-medium (Radford et al., 2023) as implemented in transformers (Wolf et al., 2020).

Processing Pipeline: We use movies and TV shows which are available online. We first collect the audio and subtitles from OpenSubtitles10 and use them as the gold reference.11 Then, we use the ASR models to transcribe the audio. The timestamps from the model-generated transcription and the subtitles often do not match, and neither source provides information about which character is speaking. We therefore group all utterances from a single episode or movie together into one long string and calculate the WER using Jiwer (Morris et al., 2004). Ideally, we would like to separate by character to get a more fine-grained analysis. However, this approach would require us to keep mappings from utterance to character consistent across episodes. Despite several approaches, we were not able to achieve this goal with the computational power we had. Our analysis is therefore group-based, rather than character-based.

Results: Figure 2 shows the differences in word error rate by race and class in US TV shows. We find clear trends across models, with effects from both race and class: lower ASR error rates are associated with higher SES and with whiteness. These trends are stronger for the Wav2Vec2 model.

5 Language Modelling

Language modelling plays an essential role in downstream applications in NLP, so it is imperative that language models can accurately and equally represent different speakers' lects.
Language models are evaluated through perplexity, a measure of how expected a given sentence is: the higher the perplexity, the more unexpected. Perplexity can give us an indication of how well a model might perform in downstream tasks: Gonen et al. (2023) show that the lower the perplexity of a prompt, the better the prompt can perform the task. Here, we calculate perplexity as a measure of linguistic acceptability, that is, how 'expectable' a particular language variant might be to a given language model. Should models display differing perplexities for different groups, they would put such groups at a disadvantage.

Models: We experiment with two state-of-the-art base models, Llama 2 (Touvron et al., 2023) and Mistral-7B (Jiang et al., 2023). Moreover, we include Zephyr-7B (Tunstall et al., 2023), a Mistral-7B-based chat model, to test the effect of alignment.12 We calculate perplexity for each sentence in our dataset after lower-casing it and removing numbers and punctuation. We exclude turns shorter than five tokens, as they generally do not differentiate between classes.

Results: Table 4 shows the mean perplexity and standard deviation for each model and each class for the entire dataset. We do not find significant differences based on geographical location (U.K. vs. U.S.A.) across the models. We see small differences across groups based on class alone; however, because language is affected by other sociodemographic factors besides class, we also consider the effect of race and geographical variety. We calculate mean perplexity by SES in the U.K. and the U.S.A. Figure 3 shows that class and perplexity are correlated, and we find similar patterns across geographical dialects: higher SES leads to lower perplexity also in U.S.A.-based shows. Next, we consider U.S.-based shows. The breakdown by class and race is shown in Table 5. Once both factors are taken into account, a clearer picture emerges: lower SES leads to higher perplexity. To confirm that these differences are significant, we run a one-tailed Student's t-test and find significant differences between classes, particularly for white speakers, for both Mistral-7B and Zephyr-7B. In the case of Mistral, lower-class and lower-middle-class speakers have significantly higher perplexities than Mid-Upper (∗) and Middle class speakers (∗). In the case of Zephyr, the model shows higher perplexities for lower classes than it does for Middle (∗), Mid-Upper (∗∗), and Upper class speakers (∗∗).13 In sum, models show significantly higher perplexities for Lower class speakers than for higher-prestige groups.

10 https://www.opensubtitles.org/
11 Here, we manually checked a random sample to verify the quality. We henceforth assume that the subtitles are an accurate transcription of what is being said.
12 HuggingFace: meta-llama/Llama-2-7b, mistralai/Mistral-7B-v0.1, and HuggingFaceH4/zephyr-7b-beta.

[Figure 2: Word error rate for the different models ((a) Wav2Vec2, (b) Whisper), grouped by the speakers' SES and race, for US-based TV shows. The race in the chart refers to that of the majority of characters in the show.]
[Figure 3: Mean perplexity among U.K.-based shows by model.]

6 Grammar Correction

Finally, we consider the grammaticality of each sentence. We suspect that grammar features correlate strongly with class, as "correct" grammar is typically a hallmark of signalling higher SES. We calculate the edit distance between the original sentence and the corrected one. Lower edit distance means higher grammaticality.
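The exact edit-distance variant is not specified here; as a minimal, hedged sketch (assuming a token-level Levenshtein distance over whitespace tokens), the grammaticality measure could be computed as follows:

```python
def levenshtein(a, b):
    """Token-level Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, start=1):
        curr = [i]
        for j, y in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,             # deletion
                curr[j - 1] + 1,         # insertion
                prev[j - 1] + (x != y),  # substitution
            ))
        prev = curr
    return prev[-1]


# Example pair taken from Table 7 (Trainspotting).
original = "So, I'm off the skag.".split()
corrected = "So, I'm off the grid.".split()
distance = levenshtein(original, corrected)
print(distance, "token edit(s);", "corrected" if distance > 0 else "unchanged")
```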
Models: Since we are concerned with the potential end-user experience, we choose to evaluate the four most downloaded grammar-correction models on Hugging Face, since they will reach the largest audience. We use the following models for grammar correction:
• T5 Grammar Correction:14 A model based on the HappyTransformer,15 trained on the JFLEG dataset (Napoles et al., 2017).
• Gramformer:16 A seq2seq model fine-tuned on a dataset of WikiEdits.
• CoEdit-large:17 A Flan-T5 (Large) based model, finetuned on the CoEdit dataset.
• Grammar-synthesis-large:18 A fine-tuned version of Google's flan-t5-large for grammar correction, trained on an expanded version of the JFLEG dataset.

Results: Figure 4 shows the percentage of utterances that each model corrected, grouped by geographical variation. The Flan-T5-based model generates significantly more corrections than the other models, followed by CoEdit-large. Overall, U.K. utterances are corrected slightly more often by the models. Table 6 shows the mean edit distances between the original sentence extracted from the subtitles and the models' proposed corrections. We find little difference in grammar error correction for most models, even when controlling for geographical and racial differences. However, on further analysis, we find that the models generally produce corrections for relatively few utterances. Instead, we consider whether models are more likely to produce edits for different classes. In Figure 5 we plot the count of utterances with corrections per model and per class. From this, we can see a clear pattern: models produce corrections more frequently for speakers of lower SES. In addition, we notice that some of these corrections are often performed on in-group slang or regional linguistic phenomena; see Table 7.

13 Significance: ∗: p < 0.05, ∗∗: p < 0.01.
14 T5-based Grammar Correction
15 HappyTransformer
16 Gramformer
17 CoEdit
18 Flan-T5 Correction
19 "Skag" refers to heroin in Scottish slang.

Class | Mistral-7B (Mean / Std) | Zephyr-7B (Mean / Std) | Llama 2 (Mean / Std)
Low | 294.606 / 690.361 | 415.641 / 989.066 | 189.804 / 361.815
Low, middle | 442.756 / 911.300 | 649.462 / 1441.759 | 265.114 / 477.377
Middle | 252.729 / 536.903 | 353.667 / 822.553 | 177.900 / 398.560
Middle, Upper | 241.923 / 683.345 | 332.224 / 999.137 | 164.807 / 450.446
Upper | 288.324 / 693.994 | 399.854 / 1063.942 | 190.536 / 370.958
Upper, Low | 282.815 / 575.833 | 385.273 / 876.126 | 190.696 / 358.952
Table 4: Mean perplexity and standard deviation per class and model. Best mean result per model in bold.

Race | Class | Perplexity
Black | Low | 328.848
Black | Middle, Upper | 259.673
White | Low | 275.337
White | Low, middle | 342.865
White | Middle | 229.278
White | Middle, Upper | 205.585
White | Upper | 302.057
Table 5: Mean perplexity and standard deviation per class and race. Best results per race in bold.

Class | Happy | CoEdit | Flan-T5 | Gramf.
Low | 2.455 | 1.963 | 6.424 | 2.192
Mid-Low | 2.590 | 2.138 | 12.050 | 1.553
Middle | 2.068 | 1.785 | 6.117 | 1.653
Mid-Upper | 2.866 | 1.944 | 7.952 | 2.091
Upper | 1.122 | 1.212 | 5.518 | 0.808
% corrected | 19.76 | 35.94 | 66.42 | 19.11
Table 6: Mean edit distance between the subtitle and the grammar-corrected utterances generated by each model, and percentage of sentences with at least one correction.

[Figure 4: Proportion of utterances that are grammar-corrected per model and geographical language variety.]
[Figure 5: Distribution by class of utterances with at least one correction.]

7 Discussion

Following established work in sociolinguistics, our results support our hypothesis that NLP systems display biases towards different sociolects, beyond geographical and ethnic differences.
On the difference between life and art: The use of media (if not synthetic) data as a stand-in for real-world examples is not uncommon in NLP (e.g., Plaza-del Arco et al., 2024; Lison and Tiedemann, 2016). Our work presents a starting point to study an aspect of linguistic variation that has thus far received little attention in the NLP literature. To substantiate our call to action, the characters' language need not be an exact replica but a good approximation of a given sociolect. However, we do not claim that all media portrayals are accurate representations: TV shows and films are written by certain people, for specific audiences, and the quality of acting varies. Through our selection process, we selected shows that would not caricature certain groups, including TV shows like The Wire and Pose, which feature a cast representative of the characters. Nevertheless, these results should be validated with real speakers.

NLP as an enforcer of "standard" language: Native speakers of any language have different registers available to them, including when interacting with NLP systems. While this may help mitigate some of the potential harms shown in this paper, our position is that humans should not have to adapt their language to interact with technology, but rather technology should adapt to humans. By dint of their use, NLP systems are setting a standard of language: NLP systems are becoming more common, not only in social contexts calling for formal (i.e., standardised) language, or in fields like Social Science (Ziems et al., 2024), but in everyday scenarios. For example, one of the proposed use cases for the Gramformer is correcting text messages as a built-in feature for messaging apps, and language models inform systems like automatic correction. In Section 6, we show that grammar-correction models erase particularities of certain language varieties, like regional slang or norms. Likewise, LLMs like ChatGPT now boast hundreds of millions of users. Keeping in mind the findings from Gonen et al. (2023) and ours in Section 5, we may find that users of lower SES have a worse user experience with such systems. This widespread interaction with language technologies reinforces the lect of middle-class, white English speakers as the standard even in quotidian activities.

Show | Original | Corrected
The Crown | And what a bunch of ice-veined monsters my family are. | And what a bunch of ice-veined monsters my family is.
Pose | The tittiness of it all. | The quality of it all.
Trainspotting | So, I'm off the skag.19 | So, I'm off the grid.
Trailer Park Boys | We be dope when the new NWA come out, know what I'm sayin' | We will be happy when the new NWA comes out, know what I'm saying?
Table 7: Examples of grammar correction in different shows. The corrected examples are from the T5 Grammar Correction model.

Going forward: While reporting on the socioeconomic background of study participants and dataset contributors is more complicated than other factors such as age or ethnicity (as it is not a one-point measurement and it is culture-dependent), we encourage researchers to report on this to draw a more accurate picture of the language varieties NLP will capture and to better benchmark systems across sociolects.
Given the limitations of our study (most notably, the restriction to English and to group rather than individual scores; see the Limitations section below), we envision several possible avenues for future research in this area: (1) the findings of this study should be validated for other languages where such differences have been observed; (2) the collection of a (possibly larger) corpus of unaggregated speakers; and (3) while we find support from previous research such as Flekova et al. (2016) for social media data, the findings should also be validated for other data forms such as long-form written texts.

8 Related Work

Works in NLP considering social class in the context of language use are few and far between (Cercas Curry et al., 2024), mostly focusing on predicting social class from texts (e.g. Basile et al., 2019; Lampos et al., 2014). Nguyen et al. (2016) ascribe the dearth of work on bias and social class to the lack of self-reported explicit labels in online user profiles. The difficulty in obtaining such data hinders efforts to address this issue. More recently, some work has focused on representational harms with regards to social class: Curto et al. (2022) investigate bias against the poor in NLP in terms of the language that surrounds poor people, considering the associations of 'poor' and 'rich' in embeddings and language models; Kiritchenko et al. (2023) instead focus on aporophobia, a classist attitude stigmatising the poor, in the context of toxic language. To the best of our knowledge, we are the first to investigate how NLP systems perform with speakers from different social strata.

9 Conclusion

Social class plays an important role in people's identity construction, and is consequently strongly reflected in their language use. Despite its central status as a variable in sociolinguistics, there is so far little work in NLP engaging with social class and its impact. Our contributions are twofold: following methods from sociolinguistics, we use a dataset of 95K movie transcripts that we annotate for portrayed class, race, and geographical variety (UK vs. US). We use this data to show that linguistic class markers greatly affect the performance of various NLP tools. Our findings suggest that speakers of low-prestige sociolects experience lower application performance on a range of tasks. These results suggest that we should actively incorporate social class as a variable in NLP systems if we want to make them more equitable and fair. Our dataset also provides a starting point for more explorations of social class in NLP.

Limitations

Our study presents several limitations: First, we are limited to the study of class, region, and race in English. TV shows and films are biased in their representations of different identities—not all groups are (equally) represented, and those that are may be stereotyped into certain roles. For example, in the initial top-200 TV shows list, we did not find any TV shows about lower-class women, Latinxs, upper-class Non-English Britons, or non-white Britons. For this reason, we excluded a gender-based analysis. Moreover, in this list, the majority of TV shows about Black Americans revolve around the drug trade. Second, characters are not necessarily written by, nor do they perfectly depict, the group they represent, and this is potentially reflected in their language. To ensure high-quality portrayals, we sourced highly rated and realistic shows, ensuring characters were not caricatures of the groups they portrayed.
Having said that, a sociolinguistics review found that TV show and film data is increasingly common in the study of sociolects, particularly with regards to race and class (Stamou, 2014). More specifically, work in sociolinguistics has also defended the use of film data in conversation analysis (McHoul, 1987; Quaglio, 2008). Furthermore, TV shows like The Wire have been praised for their linguistic authenticity (Lopez and Bucholtz, 2017), reportedly primarily hiring Baltimore natives. For these reasons, we believe our study sufficiently captures the relevant linguistic phenomena. Despite our best efforts, we were unable to reliably separate the utterances by character across episodes. Instead of introducing more sources of potential mix-ups, we therefore estimate the metrics over all characters in a show. Although this is a limitation in terms of the accuracy of the results, we expect that character-level annotations would strengthen our findings rather than negate them. Generally speaking, each TV show focuses on a single social stratum. A single character belonging to a different stratum would add noise and make the classes less clearly defined, so we suspect our results would be more marked with character-level annotations, not less. We leave the character-based separation for future work. Finally, though our dataset is large in terms of utterances, it is limited in the number of characters. However, we present a case study hoping to motivate further research that may validate our findings.

Ethical Considerations

Our paper argues for fairness in the services provided by NLP systems for people of all socioeconomic backgrounds. However, we note that by showing measurable differences in language use across groups, we run the risk of profiling speakers. In our work, we used fictional characters to avoid this, but in order to ensure fairness of service, all peoples must be represented. Future work should ensure that this is done respectfully and with consent from any participants. Furthermore, measuring socioeconomic status is not a straightforward process. The class system used in this paper is fuzzy and tied to Western social structures that are not valid across the world. Future work should focus on measuring it in the most appropriate way by following established metrics and guidelines from economics and other fields, such as Savage et al. (2013). We urge any future work to follow ethical guidelines when it comes to dataset and system development.

Acknowledgements

A special thanks to Federico Bianchi, Lindsay Schaffer, and the reviewers for their helpful comments. Amanda Cercas Curry and Dirk Hovy were supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 949944, INTEGRATOR). They are members of the MilaNLP group and the Data and Marketing Insights Unit of the Bocconi Institute for Data Science and Analysis. Giuseppe Attanasio was supported by the Portuguese Recovery and Resilience Plan through project C645008882-00000055 (Center for Responsible AI) and by Fundação para a Ciência e Tecnologia through contract UIDB/50008/2020. He conducted part of the work as a member of the MilaNLP group at Bocconi University, Milan.

References

Jonathan Anderson. 1983. Lix and rix: Variations on a little-known readability index. Journal of Reading, 26(6):490–496.
Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, et al. 2021. XLS-R: Self-supervised cross-lingual speech representation learning at scale. arXiv preprint arXiv:2111.09296. Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in neural information processing systems, 33:12449–12460. Angelo Basile, Albert Gatt, and Malvina Nissim. 2019. You write like you eat: Stylistic variation as a predictor of social stratification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2583–2593, Florence, Italy. Association for Computational Linguistics. Basil Bernstein. 1960. Language and social class. The British journal of sociology, 11(3):271–276. Mary Bucholtz and Kira Hall. 2005. Identity and interaction: A sociocultural linguistic approach. Discourse studies, 7(4-5):585–614. Amanda Cercas Curry, Zeerak Talat, and Dirk Hovy. 2024. Impoverished language technology: The lack of (social) class in NLP. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 8675–8682, Torino, Italia. ELRA and ICCL. Eve V Clark and Marisa Casillas. 2015. First language acquisition. In The Routledge handbook of linguistics, pages 311–328. Routledge. Cynthia G Clopper and David B Pisoni. 2006. The nationwide speech project: A new corpus of american english dialects. Speech communication, 48(6):633– 644. Meri Coleman and Ta Lin Liau. 1975. A computer readability formula designed for machine scoring. Journal of Applied Psychology, 60(2):283. Georgina Curto, Mario Fernando Jojoa Acosta, Flavio Comim, and Begoña Garcia-Zapirain. 2022. Are AI systems biased against the poor? A machine learning analysis using Word2Vec and GloVe embeddings. AI & society, pages 1–16. Penelope Eckert. 2012. Three waves of variation study: The emergence of meaning in the study of sociolinguistic variation. Annual review of Anthropology, 41(1):87–100. Lucie Flekova, Daniel Preo¸tiuc-Pietro, and Lyle Ungar. 2016. Exploring stylistic variation with age and income on Twitter. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 313–319, Berlin, Germany. Association for Computational Linguistics. Rudolph Flesch. 1948. A new readability yardstick. Journal of applied psychology, 32(3):221. Hila Gonen, Srini Iyer, Terra Blevins, Noah Smith, and Luke Zettlemoyer. 2023. Demystifying prompts in 12652 language models via perplexity estimation. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 10136–10148, Singapore. Association for Computational Linguistics. Robert Gunning. 1968. The Technique of Clear Writing. McGraw-Hill Book Company, New York. Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral 7b. arXiv preprint arXiv:2310.06825. Anders Johannsen, Dirk Hovy, and Anders Søgaard. 2015. Cross-lingual syntactic variation over age and gender. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning, pages 103–112, Beijing, China. Association for Computational Linguistics. Tyler Kendall and Charlie Farrington. 2018. 
The corpus of regional african american language. Version, 6:1. J Peter Kincaid, Robert P Fishburne Jr, Richard L Rogers, and Brad S Chissom. 1975. Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel. Svetlana Kiritchenko, Georgina Curto Rex, Isar Nejadgholi, and Kathleen C. Fraser. 2023. Aporophobia: An overlooked type of toxic language targeting the poor. In The 7th Workshop on Online Abuse and Harms (WOAH), pages 113–125, Toronto, Canada. Association for Computational Linguistics. William Labov. 1964. The social stratification of English in New York city. Ph.D. thesis, Columbia University. Vasileios Lampos, Daniel Preo¸tiuc-Pietro, Sina Samangooei, Douwe Gelling, and Trevor Cohn. 2014. Extracting socioeconomic patterns from the news: Modelling text and outlet importance jointly. In Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science, pages 13–17, Baltimore, MD, USA. Association for Computational Linguistics. Pierre Lison and Jörg Tiedemann. 2016. Opensubtitles2016: Extracting large parallel corpora from movie and tv subtitles. Qiuana Lopez and Mary Bucholtz. 2017. “How my hair look?” Linguistic authenticity and racialized gender and sexuality on The Wire. Journal of Language and Sexuality, 6(1):1–29. Alec W McHoul. 1987. An initial investigation of the usability of fictional conversation for doing conversation analysis. Semiotica, 67(1-2):83–104. Andrew Cameron Morris, Viktoria Maier, and Phil Green. 2004. From WER and RIL to MER and WIL: Improved evaluation measures for connected speech recognition. In Interspeech 2004, pages 2765–2768. ISCA. Courtney Napoles, Keisuke Sakaguchi, and Joel Tetreault. 2017. JFLEG: A fluency corpus and benchmark for grammatical error correction. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 229–234, Valencia, Spain. Association for Computational Linguistics. Dong Nguyen, A. Seza Do˘gruöz, Carolyn P. Rosé, and Franciska de Jong. 2016. Survey: Computational sociolinguistics: A Survey. Computational Linguistics, 42(3):537–593. Flor Miriam Plaza-del Arco, Alba A. Cercas Curry, Amanda Cercas Curry, and Dirk Hovy. 2024. Emotion analysis in NLP: Trends, gaps and roadmap for future directions. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LRECCOLING 2024), pages 5696–5710, Torino, Italia. ELRA and ICCL. Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A python natural language processing toolkit for many human languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 101–108, Online. Association for Computational Linguistics. Paulo Quaglio. 2008. Television dialogue and natural conversation. Corpora and discourse, pages 189– 210. Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2023. Robust speech recognition via large-scale weak supervision. In International Conference on Machine Learning, pages 28492–28518. PMLR. Nils Reimers and Iryna Gurevych. 2019. SentenceBERT: Sentence embeddings using Siamese BERTnetworks. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics. John R Rickford. 1986. The need for new approaches to social class analysis in sociolinguistics. Language and communication, 6(3):215–221. Peter Saunders. 1990. Social class and stratification. Routledge. Mike Savage, Fiona Devine, Niall Cunningham, Mark Taylor, Yaojun Li, Johs Hjellbrekke, Brigitte Le Roux, Sam Friedman, and Andrew Miles. 2013. A new model of social class? findings from the bbc’s great british class survey experiment. Sociology, 47(2):219–250. 12653 Sharon H. Saydah, Giuseppina Imperatore, and Gloria L. Beckles. 2013. Socioeconomic status and mortality. Diabetes Care, 36(1):49–55. RJ Senter and Edgar A Smith. 1967. Automated readability index. AMRL-TR. Aerospace Medical Research Laboratories, pages 1–14. Anastasia G Stamou. 2014. A literature review on the mediation of sociolinguistic style in television and cinematic fiction: Sustaining the ideology of authenticity. Language and Literature, 23(2):118–140. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. Lewis Tunstall, Edward Beeching, Nathan Lambert, Nazneen Rajani, Kashif Rasul, Younes Belkada, Shengyi Huang, Leandro von Werra, Clémentine Fourrier, Nathan Habib, et al. 2023. Zephyr: Direct distillation of lm alignment. arXiv preprint arXiv:2310.16944. Elisa Usategui Basozábal et al. 1992. La sociolingüística de basil bernstein y sus implicaciones en el ámbito escolar. Revista de educación. Melanie Weirich and Adrian P Simpson. 2018. Gender identity is indexed and perceived in speech. PLoS One, 13(12):e0209226. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, and Diyi Yang. 2024. Can large language models transform computational social science? Computational Linguistics, 50(1):237–291. A Annotations per show Table 8 shows the annotated demographics for each season/film. 
Table 8: Annotated demographics for each movie or TV show. M denotes Men and F, women.
Show | Class | Gender | Race | Geography
Arrested Development | Upper | M, F | White | USA
Sex and the City | Middle, Upper | F | White | USA
The Fresh Prince of Bel-Air | Middle, Upper | M, F | Black | USA
Big Little Lies | Middle | F | White | USA
Breaking Bad | Low, middle | M | White | USA
The Wire (S01) | Low, middle | M | Black, white | USA
The Wire (S03) | Low, middle | M | Black, white | USA
Fargo (S04) | Low, middle | M, F | Black, white | USA
Trailer Park Boys | Low | M | White | USA
When They See Us | Low | M, F | Black | USA
The Sopranos | Low | M | White | USA
Pose | Low | F, Trans | Black, Latino | USA
The Crown | Upper | M, F | White | UK
Downton Abbey | Upper, Low | M, F | White | UK
The IT Crowd | Middle | M | White | UK
Shetland | Middle | M, F | White | Scotland (UK)
T2 Trainspotting | Low | M | White | Scotland (UK)
Trainspotting | Low | M | White | Scotland (UK)
Smother | Middle | M, F | White | Ireland (UK)
Derry Girls | Low | F | White | Ireland (NE)
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12656–12671 August 11-16, 2024 ©2024 Association for Computational Linguistics

ActionIE: Action Extraction from Scientific Literature with Programming Languages

Xianrui Zhong1∗, Yufeng Du1∗, Siru Ouyang1, Ming Zhong1, Tingfeng Luo1, Qirong Ho2, Hao Peng1, Heng Ji1, Jiawei Han1
1 University of Illinois Urbana-Champaign 2 MBZUAI
{xzhong23,yufengd4, siruo2, mingz5, tluo9, haopeng, hengji, hanj}@illinois.edu [email protected]
∗ Equal contribution.

Abstract

Extraction of experimental procedures from human language in scientific literature and patents into actionable sequences in robotics language holds immense significance in scientific domains. Such an action extraction task is particularly challenging given the intricate details and context-dependent nature of the instructions, especially in fields like chemistry, where reproducibility is paramount. In this paper, we introduce ACTIONIE, a method that leverages Large Language Models (LLMs) to bridge this divide by converting actions written in natural language into executable Python code. This enables us to capture the entities of interest and the relationships between actions, leveraging the features of programming languages. Utilizing linguistic cues identified by frequent patterns, ActionIE provides an improved mechanism to discern entities of interest. While our method is broadly applicable, we exemplify its power in the domain of chemical literature, wherein we focus on extracting experimental procedures for chemical synthesis. The code generated by our method can be easily transformed into robotics language, which is in high demand in scientific fields. Comprehensive experiments demonstrate the superiority of our method. In addition, we propose a graph-based metric to more accurately reflect the precision of extraction. We also develop a dataset to address the scarcity of scientific literature in existing datasets.

1 Introduction

Recently, the integration of Natural Language Processing (NLP) techniques into various scientific fields has achieved significant success (Wang et al., 2019; Soleimani et al., 2022; Song et al., 2023; Lai et al., 2023; Ouyang et al., 2024). Among the applications, extracting information from unstructured scientific literature has been one with growing significance (Guo et al., 2022; Zhong et al., 2023a,b). For example, chemists typically look through a wide range of publications to select candidate protocols for a given organic synthesis scenario, based on their own reading and repetitive trial-and-error procedures (Davies, 2019; Vaucher et al., 2021). Therefore, structured chemical data, including reaction formulae, chemical entities, and experiment conditions, facilitates effective utilization and automatic analysis, such as indexing and searching by keywords; discovering and analyzing relations between entities; clustering related objects and discovering potential patterns; automatically executing protocols; and predicting and optimizing experiment conditions.

Figure 1: An example of action extraction from literature that describes a sequence of chemical reaction actions. The text is drawn from Vaucher et al. (2020a).

Representatively, Figure 1 presents an example of a structured chemical experimental procedure, essential for guiding practitioners in their laboratory work (Vaucher et al., 2020b; Zeng et al., 2023).
This task involves extracting a sequence of chemical reaction actions from a scientific text passage, where each action is defined by an operation and its corresponding attributes. For instance, in the example "ADD EtOAc" shown in Figure 1, "ADD" represents the operation, and the chemical "EtOAc" is the attribute. However, the discovery of new chemical experimental procedures is scattered across unstructured scientific text and described in various writing styles, posing a significant challenge to the automatic creation of reaction action databases. Existing chemical databases, predominantly commercial ones such as Reaxys (Elsevier B.V., 2023), SciFinder (Chemical Abstracts Service (CAS), 2023), and Pistachio (NextMove Software, 2023), depend extensively on the manual contributions of domain experts. Analyzing, indexing, and utilizing scientific literature typically requires extensive and costly annotation or labeling by human experts. Moreover, this method is prone to errors due to the sheer volume of rapidly expanding scientific data. Despite the considerable manual effort, these databases prioritize storing information on the reactants, products, and reaction conditions, rather than the concrete sequences of chemical actions. This is primarily because manually designing these experiment procedures is both time-consuming and costly. To tackle this issue, various studies have employed text mining techniques (Hawizy et al., 2011; Swain and Cole, 2016; Vaucher et al., 2020b; Wang et al., 2022b; Zeng et al., 2023) to automatically extract structured information on procedures from unstructured text, leveraging the advancements in NLP field. However, extracting experimental procedures remains a challenging task. One major hurdle is the complexity and variability of scientific language, which often features intricate sentence structures, domain-specific terminology, abbreviations, and acronyms. These elements pose substantial difficulties for sequential tagging-based approaches. For example, as shown in Figure 1, a text describing a series of chemical reaction actions includes the "WASH" operation followed by three chemicals. While sequential tagging-based methods might recognize the chemical compounds, they often struggle to accurately identify the operations and associate them with their corresponding attributes. Furthermore, the scarcity of large, annotated datasets poses an additional obstacle to training deep learning models on chemical experimental data effectively. In this paper, we choose chemical experiment procedures as a case study, and explore the potential of large language models (LLMs) to extract structured data from the complex and domain-specific language in chemical papers and patents. We propose a novel approach that frames the procedure extraction task as a code generation problem, where we express the experimental procedures as a series of pre-defined operations, and utilize the unique features of coding, such as classes, inheritance, and types, to structure this information. Our method leverages the capabilities of LLMs in few-shot in-context learning, reducing the need for large amounts of annotated data, and accelerating the preparation process. Moreover, our proposed framework also offers an easy solution to generate protocols for different automated platforms by applying different language configurations.
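To make the class-based framing concrete, the sketch below shows what such a schema and its extraction target could look like. The operation names follow the pre-defined types used later in the paper, but the attribute names, the specific chemicals in the example, and the overall layout are illustrative assumptions; the authors' actual class file is the one given in their code-generation prompt (Figure 7), which is not reproduced here.

```python
# Illustrative sketch only: a possible class-based schema for chemical actions
# and the kind of code an LLM could be asked to emit for a short procedure.
# Attribute names and example values are assumptions, not the paper's prompt.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Action:
    """Base class shared by all pre-defined operations."""


@dataclass
class Add(Action):
    material: str
    amount: Optional[str] = None


@dataclass
class Wash(Action):
    materials: list[str] = field(default_factory=list)


@dataclass
class PH(Action):
    material: str
    ph: Optional[str] = None


# A hypothetical extraction target: an ordered sequence of operation instances.
procedure: list[Action] = [
    Add(material="EtOAc"),
    Wash(materials=["water", "brine", "NaHCO3"]),  # placeholder chemicals
    PH(material="HCl", ph="5"),
]
```

Expressing the target as ordinary Python objects is what lets inheritance and typed fields signal to the LLM which attributes each operation expects, and it keeps the output close to something an automation platform could execute.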
From the perspective of evaluation, we first pinpoint shortcomings within current evaluation metrics for the chemical action extraction task, and propose a novel metric based on graph matching that substantially improves correlation with human judgments. Existing benchmarks largely concentrate on patent documents, which are inherently well-structured. To more accurately meet the real-world demands of practitioners, we meticulously annotate a test set derived from chemistry literature, which offers a more comprehensive evaluation of model performance. Notably, our new benchmark is considerably more extensive than previous ones, with an average length of 770.8 characters compared to 158.2 characters, providing a testing environment that mirrors realistic scenarios more closely. Experimentally, our method ActionIE demonstrates consistent superiority over strong baseline models, both against traditional benchmarks and our newly established testbed.

Figure 2: Overview of ActionIE.

2 Related Work

The practice of using NLP in structured scientific data extraction has seen significant advancements, from utilizing traditional NLP techniques to integrating code generation methods into structure extraction, which is especially influenced by the growing capabilities of large language models (LLMs).

2.1 Action Extraction in Chemical Documents

The algorithms for action extraction in chemical texts evolve with the development of NLP. Earlier approaches, such as ChemDataExtractor (Swain and Cole, 2016) and ChemicalTagger (Hawizy et al., 2011), used part-of-speech tagging techniques to perform named entity recognition on chemical literature. These methods were fast and effective at extracting key information, but had limited capabilities at handling more complex sentence structures in patent documents. In recent years, the transformer structure has also been introduced to action extraction. Paragraph2Actions (Vaucher et al., 2020b) used a transformer-based encoder-decoder architecture trained on human-annotated data to generate action sequences. More recent advancements in NLP are led by pretrained LLMs. Wang et al. (2022b) finetuned a BERT model to perform named entity recognition on materials and extract synthesis actions on a dataset of solution-based inorganic materials synthesis. Zeng et al. (2023) finetuned both a T5 model and a GPT-3.5 model on a human-annotated dataset. While these transformer-based models excel in capturing the semantics of diverse scientific language, they rely on human-annotated datasets, which are created with extensive labor from domain experts and are prone to human errors. Also, these methods hard-code structure definitions inside their framework, and have to infer structure semantics based on the training data, which could lead to inaccuracies if the training data is not representative enough. Recently, some datasets have been constructed for different tasks in mining chemical patents. Fang et al. (2021) propose a corpus for evaluating anaphora resolution. He et al. (2020) build an evaluation benchmark in the brat format (Stenetorp et al., 2012) for action extraction in chemical patents, but the ground truth is not public.

2.2 Leveraging Programming Languages for Structure Extraction Tasks

With the overwhelming success of very large decoder-only language models (such as GPT-3 (Brown et al., 2020), GPT-3.5 and GPT-4 (OpenAI, 2023), PaLM 2 (Anil et al., 2023), Llama 2 (Touvron et al., 2023), etc.)
on a variety of NLP tasks, recent research has increasingly focused on the application of LLMs for scientific structure extraction tasks. Agrawal et al. (2022) demonstrated the power of zero-shot learning on GPT-3 for extracting information from clinical texts. Dunn et al. (2022) further performed chemical entity and relation extraction with a GPT-3 model finetuned on 500 input-output pairs. Zhong et al. (2023b) use GPT-4 to capture the roles of chemical entities in scientific text. On the other hand, large language models show noteworthy improvement in code generation. Codex (Tyers et al., 2023), finetuned from a GPT model, has shown remarkable abilities in code completion. Recent years have seen the application of GPT-based agents (Hong et al., 2023; Zhou et al., 2023; Wang et al., 2024), which leverage the reasoning and decision abilities of GPT models along with Chain-of-Thought approaches, in programming tasks. Among these developments of structure extraction and code generation, Code4Struct (Wang et al., 2022a) extracts structured event information from natural language using code generation. It aligns programming constructs, such as class definitions, inheritance, and functions, with the entity and event types of interest, utilizing both the structural and semantic information of coding.

Table 1: Models used for each module in ActionIE.
Module Name | Models
Pattern Mining | Flan-T5-Large & GPT-4-0613
Text Rephrasing | GPT-4-0613
Code Generation | GPT-4-0613
Code to Natural Language | Pre-defined Rules

3 ActionIE Framework

3.1 Task Formulation

Given a text T, we aim to extract all procedures (actions) P = {(o_1, a_1), ..., (o_n, a_n)}, o_i ∈ S, mentioned in T in sequence, where S is a set of pre-defined operation types and a_i denotes the pre-defined attributes of operation o_i. Note that rather than identifying the specific role a substance plays within a reaction, our task focuses on the category of attribute to which it belongs. Following prior work (Vaucher et al., 2020a), we set the pre-defined operation types as follows: Add, CollectLayer, Concentrate, Degas, DrySolid, DrySolution, Extract, Filter, MakeSolution, Microwave, Partition, PH, PhaseSeparation, Purify, Quench, Recrystallize, Reflux, SetTemperature, Sonicate, Stir, Triturate, Wait, Wash, Yield, FollowOtherProcedure, InvalidAction, OtherLanguage, and NoAction. Definitions for each action are described in Appendix A.

3.2 Action Extraction with Programming Languages

Previous methods either rely on a large number of rules and patterns provided by humans or train a model in a supervised way, which requires costly labelled data. In addition, the definitions of actions may change based on the needs of scientists. Under such circumstances, rules and patterns may need to be re-created by humans for unsupervised methods, and data may need to be relabelled for supervised methods. Driven by these drawbacks, and with the emergence of Large Language Models (LLMs), we propose to use LLMs to tackle this action extraction task, as they have demonstrated promising capabilities in information extraction, particularly in data-scarce scenarios. Naively, one may directly input a paragraph along with all definitions of actions and ask LLMs to extract the action information.

Figure 3: Rephrasing Example.

However, this approach poses some problems. The first is the well-known problem of hallucination in LLM generation (Huang et al., 2023).
LLMs may generate actions that are not in the pre-defined action set, since they may directly output the verb found in the paragraph as an action, or output an action outside the pre-defined set based on their own summarization. Furthermore, LLMs may output detailed action sequences when they should summarize some of the actions. For instance, the ground-truth action for the text "Add HCl to pH 5." (adding HCl until the pH of the liquid is 5) is "PH with HCl to pH 5.", while LLMs also include the "ADD" action, which results in "ADD HCl; PH with HCl to pH 5." This demonstrates that LLMs fail to understand the relationship between actions. In light of these limitations, we propose to reformulate the action extraction task as a code-writing problem for LLMs, in which we transform each action type into a Python class. This has a few advantages. First, the abstract nature of classes in programming languages and the relationships between classes, including "Inheritance" and "Composition", help LLMs better interpret the relationships between actions. Second, class variables in programming languages enable LLMs to understand what needs to be extracted for each action. Third, it is better suited to settings where the set of actions and the information of interest for each action may change: users can easily define the operations they want to extract and the attributes of each operation by simply modifying the Python class file that is fed to the LLMs. Finally, this minimizes the gap between natural language and robotics language, as it is more convenient to transform the Python code produced by our method into code that can be executed by robots. Figure 7 demonstrates the prompt we use for code generation.

3.3 External Information Guided Extraction

Text Rephrasing
Scientific literature may have its own writing style that is different from ordinary writing, particularly in chemistry literature. We propose to first use LLMs to rephrase the given paragraph, for two main reasons. First, rephrasing complex scientific texts into simpler language enhances their comprehensibility for large language models like GPT-4, which are pre-trained on general, non-scientific text sources. Second, it introduces domain knowledge encoded in the language model. This is exemplified in the case presented in Figure 3. The original paragraph (a) contains the phrase "m.p. 49 °C", which is often misinterpreted as the environment temperature. By leveraging LLMs for rephrasing, "m.p." is rephrased as "melting point", as shown in Figure 3, which leads to a correct extraction. In practice, we prompt GPT-4 to rephrase the input text, feeding in the mined patterns so that the structure of the rephrased text stays as close to the original as possible. Figure 6 shows the prompt we use for text rephrasing.

Pattern-guided Extraction
For humans, it is possible to identify certain action information even without any prior domain knowledge. Consider the example "Partition between water (100 mL) and ethyl acetate (100 mL)" (Vaucher et al., 2020a). We can identify that "water (100 mL)" and "ethyl acetate (100 mL)" are the chemicals involved in the "Partition" action. This can be accomplished with the guidance of linguistic cues, including the semantics of phrases and the structure of sentences. Motivated by this observation, we utilize frequent patterns in the text that indicate specific reaction actions as linguistic cues to guide LLMs to extract action information.
Taking the "PH" action as an example, we first replace all occurrences of chemicals, identified with CHEMDATAEXTRACTOR (Swain and Cole, 2016), with a special token "[Chemical]". Several seed patterns are created, such as "pH [pH] with [Chemical]", where [pH] indicates a pH value and [Chemical] indicates the chemical used for adjusting the pH. With a set of seed patterns for each action, we mine enriched patterns through 1) labeling all occurrences in the corpus with the seed patterns, 2) training a Flan-T5 model in a question-answering fashion, 3) re-labeling the corpus with the trained Flan-T5 model, and 4) selecting the most frequent patterns as the enriched patterns. GPT-4-0613 is also used to create a more diverse set of patterns. After merging the enriched patterns with the seed patterns to form a new seed set, we repeat the aforementioned process to mine more reliable patterns iteratively. The whole process is illustrated in Figure 4.

3.4 Extracted Action Evaluation

We observe that some actions are equivalent to each other; for instance, [MakeSolution] with A and B is equivalent to [Add] A; [Add] B, and sometimes the order of actions does not matter. Previous evaluation metrics consider neither the order of actions nor the equivalence between actions, and penalize mismatches. In order to take the order of actions and their equivalences into account, we propose a graph-based metric called GRAPH MATCHING SIMILARITY. Given a sentence t with n ∈ Z actions a_1, a_2, ..., a_n, and equivalence relations f : A → {A}, where A is a set of actions and a_i is an arbitrary action, we first construct its corresponding graph G. Details can be found in Algorithm 1. We first construct graphs for the ground-truth sentence and the sentence to be evaluated, denoted as G_gt and G_query. Then we find the maximal common subgraph G_sub in G_gt given G_query with the algorithm described in Hattori et al. (2003). Finally, we calculate the similarity score with Equation 1:

score = |G_sub ∩ G_query| / |G_sub ∪ G_query|    (1)

The evaluation against human judgements, compared with other metrics, can be found in Section 4.3.

Algorithm 1: Action Graph Construction
Input: sentence t = (a_1, a_2, ..., a_n); equivalence relations f : A → {A}
Output: graph G = (V, E)
procedure CONSTRUCTGRAPH(t, f)
    V = {a_1}
    for i ← 2 to n − 1 do
        V ∪ {a_i}; E ∪ {(a_i, a_{i−1}), (a_{i+1}, a_i)}
        if a_i ∈ D(f) then
            V ∪ {f(a_i)}
            E ∪ {(f(a_i), a_{i−1}), (a_{i+1}, f(a_i))}
        end if
    end for
    return G = (V, E)
end procedure

Figure 4: Pattern Enrichment Overview.

4 Experiments

4.1 Experimental Setup

Datasets
We evaluate the effectiveness of our method on two datasets. The first is the test set used in previous work (Vaucher et al., 2020a), which contains 352 texts related to experimental procedures for chemical synthesis. We refer to this dataset as PATENTACTION, as all of its paragraphs come from patent data. Dataset statistics can be found in Appendix B. The input is a paragraph from chemical literature which contains one or multiple actions. The output is a combination of pre-defined actions in natural language. This dataset is designed for evaluating action extraction in a chemical-literature setting based on chemists' needs. Since extracting action information from scientific literature is of the same significance as from patent data, we collaborate with chemists to construct a dataset called SCIENTIFICACTION: 100 long paragraphs are collected from ChemRxiv (Cambridge Open Engage, 2023).
The average length (in characters) of paragraphs in ScientificAction is 770.77, while the average length in PatentAction is only 158.24. SCIENTIFICACTION will be available at https://github.com/xianruizhong/ActionIE.

Baselines
We compare ActionIE with several state-of-the-art methods: Paragraph2Actions (Vaucher et al., 2020a), ChemTrans (Zeng et al., 2023), GALACTICA-6.7b (Taylor et al., 2022), and GPT-4 (OpenAI, 2023).

Implementation Details
We choose GPT-4-0613 (OpenAI, 2023) as the model for extraction, which supports up to 8,192 tokens. We use "google/flan-t5-large" (Raffel et al., 2020) for linguistic pattern extraction. GPT-4 (OpenAI, 2023) is accessed through the OpenAI API. For the parameters of GPT-4, we use sampling temperature t = 0 and set 500 as the maximum number of new tokens.

Evaluation Metrics for Natural Language
Following previous work, we use the BLEU score (Papineni et al., 2002) and Levenshtein similarity (Levenshtein et al., 1966) to evaluate the quality of extracted actions in natural language. Following previous work, the BLEU score is modified since the original BLEU score does not account for short sentences, which are common in the test data. The proposed GRAPH MATCHING SIMILARITY is also used for evaluation at the natural-language level.

Evaluation Metrics for Operation Level
To verify the quality of the extracted action sequences at the operation level, we use precision, recall, and F1 scores. The sets of operations in the ground truth and the output are compared, and the attributes are ignored. To better account for the order of operations, we employ SeqMatch-O (SM-O) proposed in Zeng et al. (2023), an evaluation metric for sequence matching at the operation level. For details of SeqMatch-O, please refer to Zeng et al. (2023).

Evaluation Metrics for Attribute Level
Following previous work, we leverage SeqMatch-A (SM-A) proposed in Zeng et al. (2023) to verify the quality of attribute-level extraction. For each matched position in SeqMatch-O, the Levenshtein similarity is calculated for each argument pair, and the average argument score is used rather than the original 1 in SM-O. Please refer to Zeng et al. (2023) for more details.

4.2 Experimental Results

Results for Extraction in Natural Language
The first part of Table 2 presents the results of extraction in natural language on the PatentAction dataset. ChemTrans cannot output natural-language action sequences; hence, its scores are not calculated.
Our proposed ACTIONIE significantly outperforms all baselines in Levenshtein similarity, and outperforms all baselines in BLEU except Paragraph2Actions, but still gets a very close score. GALACTICA-6.7b performs poorly as it is not designed for this task. GPT-4 demonstrates promising performance given its comparable scores with just 10 demonstrations.

Table 2: Overall experimental results. ChemTrans does not support outputting natural language, so only the operations are evaluated. Galactica-6.7b fails when the input is too long; therefore, its result is not reported.
Models | BLEU | Levenshtein Similarity | Precision | Recall | F1 | Graph Matching Similarity | SM-O | SM-A
Results for PatentAction (Avg Length: 158.24)
Supervised Methods:
Paragraph2Actions | 0.8511 | 0.8927 | 0.9017 | 0.9034 | 0.8985 | 0.8003 | 0.8893 | 0.8629
ChemTrans | – | – | 0.5927 | 0.4325 | 0.4866 | – | 0.4401 | –
Few-shot Methods (10-shot):
Galactica-6.7b | 0.0051 | 0.1336 | 0.3526 | 0.2705 | 0.2732 | 0.2921 | 0.1453 | 0.0534
GPT-4 | 0.4280 | 0.6822 | 0.7537 | 0.7758 | 0.7458 | 0.7923 | 0.7566 | 0.6633
ACTIONIE | 0.8237 | 0.9018 | 0.9126 | 0.9198 | 0.9101 | 0.8136 | 0.8880 | 0.8521
- Patterns | 0.6829 | 0.8070 | 0.8458 | 0.8220 | 0.8218 | 0.8074 | 0.8248 | 0.7583
Results for ScientificAction (Avg Length: 770.77)
Supervised Methods:
Paragraph2Actions | 0.4907 | 0.5380 | 0.8643 | 0.5933 | 0.6633 | 0.6391 | 0.5922 | 0.5118
ChemTrans | – | – | 0.9212 | 0.4583 | 0.5982 | – | 0.4924 | –
Few-shot Methods (10-shot):
Galactica-6.7b | – | – | – | – | – | – | – | –
GPT-4 | 0.4571 | 0.6625 | 0.7858 | 0.7175 | 0.7312 | 0.7574 | 0.6670 | 0.5137
ACTIONIE | 0.7808 | 0.8394 | 0.9236 | 0.8166 | 0.8584 | 0.8013 | 0.8277 | 0.7087
- Patterns | 0.7193 | 0.8160 | 0.8942 | 0.8033 | 0.8444 | 0.7980 | 0.8099 | 0.6757

Figure 5: Case Study.

The second part of Table 2 presents the results of extraction in natural language on ScientificAction, a more complex and challenging dataset than PatentAction. Paragraph2Actions is trained on patent data and does not generalize well to scientific literature. Sometimes, Paragraph2Actions only outputs the FollowOtherProcedure action and ignores other actions described in the input paragraph. Even GPT-4 receives higher scores in Levenshtein similarity, demonstrating better generalization than Paragraph2Actions. The ablation study highlights the significance of using patterns as linguistic cues: in all cases, we gain much improvement by utilizing the patterns.

Results for Operation-Level Extraction
The middle columns of Table 2 present the results of operation-level extraction. In the PatentAction dataset, ACTIONIE beats all baselines in precision, recall, and F1 scores, and has a very close score to Paragraph2Actions in SeqMatch-O (0.8880 vs 0.8893). In the ScientificAction dataset, ACTIONIE outperforms all baselines. Both Paragraph2Actions and ChemTrans are trained on patent data, and achieve high precision but low recall and F1 scores. As for the ablation study, ACTIONIE benefits significantly from the improvement provided by the patterns, which suggests that the patterns effectively help identify the actions.

Results for Attribute-Level Extraction
As listed in the last column of Table 2, in the PatentAction dataset, ACTIONIE outperforms all baselines except Paragraph2Actions, but still has a competitive score (0.8521 vs 0.8629). In the ScientificAction dataset, ACTIONIE surpasses all baselines by a substantial margin. Note that GPT-4 receives a slightly higher score than Paragraph2Actions, which further implies the limitations of supervised methods such as Paragraph2Actions and ChemTrans.
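Before examining how the metric correlates with human judgments (Section 4.3), the sketch below makes the Graph Matching Similarity computation from Section 3.4 concrete. It simplifies the paper's procedure in ways that should be read as assumptions: repeated action strings are merged into single nodes, each action maps to at most one equivalent rewrite, the maximal common subgraph of Hattori et al. (2003) is approximated by a plain label intersection, and |G| is taken to mean the number of nodes plus edges, which Equation 1 leaves implicit.

```python
# Approximate sketch of Graph Matching Similarity; not the paper's exact algorithm.
from typing import Dict, List, Set, Tuple

Graph = Tuple[Set[str], Set[Tuple[str, str]]]  # (nodes, directed edges)


def construct_graph(actions: List[str], equivalents: Dict[str, str]) -> Graph:
    """Simplified Algorithm 1: chain actions in order; add equivalent rewrites in parallel.

    Repeated action strings are merged here; an exact implementation may need
    instance-level node identities and set-valued equivalence relations.
    """
    nodes: Set[str] = set(actions)
    edges: Set[Tuple[str, str]] = set()
    for prev, curr in zip(actions, actions[1:]):
        edges.add((prev, curr))
    for i, act in enumerate(actions):
        if act in equivalents:  # insert the equivalent action alongside the original
            eq = equivalents[act]
            nodes.add(eq)
            if i > 0:
                edges.add((actions[i - 1], eq))
            if i + 1 < len(actions):
                edges.add((eq, actions[i + 1]))
    return nodes, edges


def size(g: Graph) -> int:
    # Assumption: |G| = number of nodes + number of edges.
    return len(g[0]) + len(g[1])


def graph_matching_similarity(gt: Graph, query: Graph) -> float:
    # Crude stand-in for the maximal common subgraph: intersect labeled nodes/edges.
    common: Graph = (gt[0] & query[0], gt[1] & query[1])
    union = len(common[0] | query[0]) + len(common[1] | query[1])
    return size(common) / union if union else 1.0
```

A faithful implementation would follow Algorithm 1 and the Hattori et al. heuristic exactly; the point here is only to show the shape of the computation, in which equivalent actions stop being penalized because both readings appear in the graph.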
4.3 Evaluation Metric Analysis

To better understand how well our proposed GRAPH MATCHING SIMILARITY metric aligns with human evaluation, we randomly sample 100 outputs produced by Paragraph2Actions, GPT-4, and ACTIONIE, which are then given a score by chemists from 1 to 5. We calculate three correlation coefficients: Pearson, Spearman, and Kendall's Tau. As shown in Table 3, the proposed GRAPH MATCHING SIMILARITY is better aligned with human judgements than BLEU and Levenshtein similarity.

Table 3: Metric Correlations with Human Judgements.
Metric Name | Pearson | Spearman | Kendall's Tau
BLEU | 0.1791 | 0.2427 | 0.2055
Levenshtein Similarity | 0.1742 | 0.2603 | 0.2179
Graph Matching Similarity | 0.3144 | 0.2976 | 0.3058

4.4 Case Study

We randomly sample an example from SCIENTIFICACTION and study the output of different methods (see Figure 5). Paragraph2Actions only outputs the FollowOtherProcedure action, and we have noticed that it consistently does so whenever the input mentions another procedure. While the model is supervised to do so, this is an unwanted behavior, since the output ignores any other actions mentioned in the text. ChemTrans only captures the YIELD action, though it includes many details of the reagent. However, ChemTrans would fail if we were also interested in the melting point (mp) of the product, given that it is a supervised method. It also misclassifies the product as a reagent. GPT-4 correctly extracts most of the actions and their attributes, but misses the first ADD action, and the order of its actions is wrong.

5 Conclusion and Future Work

In this paper, we propose ACTIONIE, a framework for extracting experimental action sequences from scientific literature. Our approach leverages the strength of LLMs by transforming the action extraction problem into a coding question for LLMs. Additionally, it incorporates text rephrasing and linguistic knowledge, which further improve the overall performance. To more accurately evaluate the extraction quality, we introduce a graph-based metric, GRAPH MATCHING SIMILARITY. We have also developed a dataset, SCIENTIFICACTION, to offset the lack of scientific literature in previous datasets. Experiments demonstrate that ACTIONIE outperforms state-of-the-art baselines and that GRAPH MATCHING SIMILARITY is more aligned with human judgements than previous evaluation metrics. For future developments, one exciting yet challenging direction is to explore different aspects of the extraction process more deeply and to integrate these parts into an automated workflow that transforms scientific papers into actionable experiments. This includes identifying relevant paragraphs from scientific papers that describe experimental procedures, creating a robotic system that runs the extracted chemical actions, and automated outcome validation.

Limitations

The limitations of this paper are stated as follows: 1. In our experiments, we use GPT-4 as the backbone model through OpenAI's API. Although ACTIONIE can be incorporated with other causal language models, the performance may change when using different language models. In addition, the performance might be affected by modifications to GPT-4, since its behavior may differ over time (OpenAI, 2023). Replacing the GPT-4 API with a static large language model such as Llama-2 (Touvron et al., 2023) could alleviate this issue, but this may require considerable computing resources, which are often limited. 2.
Although the dataset proposed in this paper is collected from scientific literature and is much longer than previous datasets, it is still shorter than a scientific paper. Extracting information from a full paper may not be possible if it is too long, given that current GPT-4 API has token limits. Integrating a text segmentation module may be one direction to solve this problem. Another direction may be deploying techniques that reduce the token limits (Bertsch et al., 2023). Acknowledgement We thank Professor Huimin Zhao, Xuan Liu, and Hongxiang Li from the University of Illinois Urbana-Champaign for their thoughtful discussions and expertise in chemistry. We sincerely appreciate the anonymous ARR reviewers for their helpful comments. Research was supported in part by the Molecule Maker Lab Institute: An AI Research Institutes program supported by NSF under Award No. 2019897, US DARPA KAIROS Program No. FA8750-19-21004 and INCAS Program No. HR001121C0165, National Science Foundation IIS-19-56151, and the Institute for Geospatial Understanding through an Integrative Discovery Environment (I-GUIDE) by NSF under Award No. 2118329. Any opinions, findings, and conclusions or recommendations expressed herein are those of the authors and do not necessarily represent the views, either expressed or implied, of National Science Foundation, DARPA or the U.S. Government. References Swamy R. Adapa, Gregory A. Hunter, Narmin E. Amin, Christopher Marinescu, Andrew Borsky, Elizabeth M. Sagatys, Said M. Sebti, Gary W. Reuther, Gloria C. Ferreira, and Rays H.Y. Jiang. 2023. Porphyrin overdrive rewires pan-cancer cell metabolism. bioRxiv. Monica Agrawal, Stefan Hegselmann, Hunter Lang, Yoon Kim, and David Sontag. 2022. Large language models are few-shot clinical information extractors. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 1998–2022, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, Guy GurAri, Steven Hand, Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jeffrey Hui, Jeremy Hurwitz, Michael Isard, Abe Ittycheriah, Matthew Jagielski, Wenhao Jia, Kathleen Kenealy, Maxim Krikun, Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, YaGuang Li, Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu, Frederick Liu, Marcello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam Moussalem, Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Parrish, Marie Pellat, Martin Polacek, Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, 12664 Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby, Ambrose Slone, Daniel Smilkov, David R. 
So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang, Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, and Yonghui Wu. 2023. Palm 2 technical report. Amanda Bertsch, Uri Alon, Graham Neubig, and Matthew R Gormley. 2023. Unlimiformer: Longrange transformers with unlimited length input. arXiv preprint arXiv:2305.01625. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS’20, Red Hook, NY, USA. Curran Associates Inc. Cambridge Open Engage. 2023. Chemrxiv: The preprint server for chemistry. https://chemrxiv. org. Accessed: 2023-12-15. Chemical Abstracts Service (CAS). 2023. Scifinder. https://scifinder.cas.org. Accessed: 2024-0115. Ian W. Davies. 2019. The digitization of organic synthesis. Nature, 570(7760):175–181. Alexander Dunn, John Dagdelen, Nicholas Walker, Sanghoon Lee, Andrew S. Rosen, Gerbrand Ceder, Kristin Persson, and Anubhav Jain. 2022. Structured information extraction from complex scientific text with fine-tuned large language models. Elsevier B.V. 2023. Reaxys. https://www.reaxys. com. Accessed: 2024-01-15. Biaoyan Fang, Christian Druckenbrodt, Saber A Akhondi, Jiayuan He, Timothy Baldwin, and Karin Verspoor. 2021. ChEMU-ref: A corpus for modeling anaphora resolution in the chemical domain. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1362–1375, Online. Association for Computational Linguistics. Jiang Guo, A. Santiago Ibanez-Lopez, Hanyu Gao, Victor Quach, Connor W. Coley, Klavs F. Jensen, and Regina Barzilay. 2022. Automated chemical reaction extraction from scientific literature. Journal of Chemical Information and Modeling, 62(9):2035–2045. PMID: 34115937. M. Hattori, Y. Okuno, S. Goto, and M. Kanehisa. 2003. Heuristics for chemical compound matching. Genome Inform, 14:144–153. Lezan Hawizy, David M. Jessop, Nico Adams, and Peter Murray-Rust. 2011. Chemicaltagger: A tool for semantic text-mining in chemistry. Journal of Cheminformatics, 3(1):17. Jiayuan He, Dat Quoc Nguyen, Saber A. Akhondi, Christian Druckenbrodt, Camilo Thorne, Ralph Hoessel, Zubair Afzal, Zenan Zhai, Biaoyan Fang, Hiyori Yoshikawa, Ameer Albahem, Lawrence Cavedon, Trevor Cohn, Timothy Baldwin, and Karin Verspoor. 2020. Overview of chemu 2020: Named entity recognition and event extraction of chemical reactions from patents. In Experimental IR Meets Multilinguality, Multimodality, and Interaction, pages 237–254, Cham. Springer International Publishing. Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang, Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu, and Jürgen Schmidhuber. 2023. Metagpt: Meta programming for a multi-agent collaborative framework. 
Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, and Ting Liu. 2023. A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions. Tuan M. Lai, Chengxiang Zhai, and Heng Ji. 2023. Knowledge-enhanced biomedical language models. In Journal of Biomedical Informatics. Vladimir I Levenshtein et al. 1966. Binary codes capable of correcting deletions, insertions, and reversals. In Soviet physics doklady, volume 10, pages 707–710. Soviet Union. NextMove Software. 2023. Pistachio: Structure to name. https://www.nextmovesoftware.com/ pistachio.html. Accessed: 2024-01-15. OpenAI. 2023. Gpt-4 technical report. Siru Ouyang, Zhuosheng Zhang, Bing Yan, Xuan Liu, Yejin Choi, Jiawei Han, and Lianhui Qin. 2024. Structured chemistry reasoning with large language models. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, page 311–318, USA. Association for Computational Linguistics. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. 12665 Amir Soleimani, Vassilina Nikoulina, Benoit Favre, and Salah Ait Mokhtar. 2022. Zero-shot aspectbased scientific document summarization using selfsupervised pre-training. In Proceedings of the 21st Workshop on Biomedical Language Processing, pages 49–62, Dublin, Ireland. Association for Computational Linguistics. Yu Song, Santiago Miret, and Bang Liu. 2023. MatSciNLP: Evaluating scientific language models on materials science language tasks using text-to-schema modeling. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3621–3639, Toronto, Canada. Association for Computational Linguistics. Pontus Stenetorp, Sampo Pyysalo, Goran Topi´c, Tomoko Ohta, Sophia Ananiadou, and Jun’ichi Tsujii. 2012. brat: a web-based tool for NLP-assisted text annotation. In Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 102–107, Avignon, France. Association for Computational Linguistics. Matthew C. Swain and Jacqueline M. Cole. 2016. Chemdataextractor: A toolkit for automated extraction of chemical information from the scientific literature. Journal of Chemical Information and Modeling, 56(10):1894–1904. PMID: 27669338. Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. 2022. Galactica: A large language model for science. 
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and finetuned chat models. Francis Tyers, Robert Pugh, and Valery Berthoud F. 2023. Codex to corpus: Exploring annotation and processing for an open and extensible machinereadable edition of the florentine codex. In Proceedings of the Workshop on Natural Language Processing for Indigenous Languages of the Americas (AmericasNLP), pages 19–29, Toronto, Canada. Association for Computational Linguistics. Alain C. Vaucher, Philippe Schwaller, Joppe Geluykens, Vishnu H. Nair, Anna Iuliano, and Teodoro Laino. 2021. Inferring experimental procedures from textbased representations of chemical reactions. Nature Communications, 12(1):2573. Alain C. Vaucher, Federico Zipoli, Joppe Geluykens, Vishnu H. Nair, Philippe Schwaller, and Teodoro Laino. 2020a. Automated extraction of chemical synthesis actions from experimental procedures. Nature Communications, 11(1):3601. Alain C. Vaucher, Federico Zipoli, Joppe Geluykens, Vishnu H. Nair, Philippe Schwaller, and Teodoro Laino. 2020b. Automated extraction of chemical synthesis actions from experimental procedures. Nature Communications, 11. Sheng Wang, Yuzhi Guo, Yuhong Wang, Hongmao Sun, and Junzhou Huang. 2019. Smiles-bert: Large scale unsupervised pre-training for molecular property prediction. In Proceedings of the 10th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics, BCB ’19, page 429–436, New York, NY, USA. Association for Computing Machinery. Xingyao Wang, Yangyi Chen, Lifan Yuan, Yizhe Zhang, Yunzhu Li, Hao Peng, and Heng Ji. 2024. Executable code actions elicit better llm agents. In arxiv. Xingyao Wang, Sha Li, and Heng Ji. 2022a. Code4struct: Code generation for few-shot structured prediction from natural language. arXiv preprint arXiv:2210.12810. Zheren Wang, Olga Kononova, Kevin Cruse, Tanjin He, Haoyan Huo, Yuxing Fei, Yan Zeng, Yingzhi Sun, Zijian Cai, Wenhao Sun, and Gerbrand Ceder. 2022b. Dataset of solution-based inorganic materials synthesis procedures extracted from the scientific literature. Scientific Data, 9. Ken Watanabe, Masayoshi Yuasa, Kida Tetsuya, Yasutake Teraoka, Noboru Yamazoe, and Kengo Shimanoe. 2010. High-performance oxygen-permeable membranes with an asymmetric structure using ba0.95la0.05feo3 perovskite-type oxide. Advanced materials (Deerfield Beach, Fla.), 22:2367–70. Zheni Zeng, Yi-Chen Nie, Ning Ding, Qian-Jun Ding, Wei-Ting Ye, Cheng Yang, Maosong Sun, Weinan E, Rong Zhu, and Zhiyuan Liu. 2023. 
Transcription between human-readable synthetic descriptions and machine-executable instructions: an application of the latest pre-training technology. Chem. Sci., 14:9360–9373. Ming Zhong, Siru Ouyang, Minhao Jiang, Vivian Hu, Yizhu Jiao, Xuan Wang, and Jiawei Han. 2023a. ReactIE: Enhancing chemical reaction extraction with weak supervision. In Findings of the Association for 12666 Computational Linguistics: ACL 2023, pages 12120– 12130, Toronto, Canada. Association for Computational Linguistics. Ming Zhong, Siru Ouyang, Yizhu Jiao, Priyanka Kargupta, Leo Luo, Yanzhen Shen, Bobby Zhou, Xianrui Zhong, Xuan Liu, Hongxiang Li, Jinfeng Xiao, Minhao Jiang, Vivian Hu, Xuan Wang, Heng Ji, Martin Burke, Huimin Zhao, and Jiawei Han. 2023b. Reaction miner: An integrated system for chemical reaction extraction from textual data. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 389–402, Singapore. Association for Computational Linguistics. Andy Zhou, Kai Yan, Michal Shlapentokh-Rothman, Haohan Wang, and Yu-Xiong Wang. 2023. Language agent tree search unifies reasoning acting and planning in language models. A Action Type We adopt the same action types as in the previous study, including 26 pre-defined action types. We include the detailed descriptions of action types in (Vaucher et al., 2020a) as a reference in Table 4 to help readers better understand the action types. B Dataset statistics The number of each action type mentioned in all 352 samples in PatentAction dataset are summarized in Table 5. C Prompt Figure 6 demonstrates the prompt for text rephrasing. Figure 7 represents the prompt for code generation. D Case Study on Different Scientific Domains In order to validate the effectiveness of our method on different scientific domains, we have conducted two additional case studies, one in biology and one in material science, to highlight the potential for future studies to extend ActionIE across different domains. The case study in biology can be found at Figure 8, and the case study in material science can be found at Figure 9. 12667 Figure 6: Prompt for Text Rephrasing. Figure 7: Prompt for Code Generation. 
12668 Action Type Description Add Add a substance to the reactor CollectLayer Select aqueous or organic fraction(s) Concentrate Evaporate the solvent (rotavap) Degas Purge the reaction mixture with a gas DrySolid Dry a solid DrySolution Dry an organic solution with a desiccant Extract Transfer compound into a different solvent Filter Separate solid and liquid phases MakeSolution Mix several substances to generate a mixture or solution Microwave Heat the reaction mixture in a microwave apparatus Partition Add two immiscible solvents for subsequent phase separation PH Change the pH of the reaction mixture PhaseSeparation Separate the aqueous and organic phases Purify Purification Quench Stop reaction by adding a substance Recrystallize Recrystallize a solid from a solvent or mixture of solvents Reflux Reflux the reaction mixture SetTemperature Change the temperature of the reaction mixture Sonicate Agitate the solution with sound waves Stir Stir the reaction mixture for a specified duration Triturate Triturate the residue Wait Leave the reaction mixture to stand for a specified duration Wash Wash (after filtration, or with immiscible solvent) Yield Phony action, indicates the product of a reaction FollowOtherProcedure The text refers to a procedure described elsewhere InvalidAction Unknown or unsupported action OtherLanguage The text is not written in English NoAction The text does not correspond to an actual action Table 4: Pre-defined action types used in this paper. Figure 8: Case study on biology literature. The text is collected from (Adapa et al., 2023). 12669 Action Type Total number of occurences Add 255 CollectLayer 37 Concentrate 54 Degas 1 DrySolid 12 DrySolution 22 Extract 34 Filter 34 MakeSolution 62 Microwave 0 Partition 5 PH 47 PhaseSeparation 4 Purify 24 Quench 8 Recrystallize 2 Reflux 7 SetTemperature 60 Sonicate 0 Stir 118 Triturate 3 Wait 19 Wash 45 Yield 37 FollowOtherProcedure 15 InvalidAction 11 OtherLanguage 2 NoAction 25 Table 5: PatentAction Dataset statistics. 12670 Figure 9: Case study on material science literature. The text is collected from (Watanabe et al., 2010). 12671
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12672–12684 August 11-16, 2024 ©2024 Association for Computational Linguistics A Community-Centric Perspective for Characterizing and Detecting Anti-Asian Violence-Provoking Speech Gaurav Verma Rynaa Grover Jiawei Zhou Binny Mathew Jordan Kraemer Munmun De Choudhury Srijan Kumar Georgia Institute of Technology, Anti-Defamation League {gverma, rgrover30, j.zhou}@gatech.edu [email protected], [email protected] [email protected], [email protected] Abstract Violence-provoking speech – speech that implicitly or explicitly promotes violence against the members of the targeted community, contributed to a massive surge in anti-Asian crimes during the COVID-19 pandemic. While previous works have characterized and built tools for detecting other forms of harmful speech, like fear speech and hate speech, our work takes a community-centric approach to studying anti-Asian violence-provoking speech. Using data from ∼420k Twitter posts spanning a 3-year duration (January 1, 2020 to February 1, 2023), we develop a codebook to characterize anti-Asian violence-provoking speech and collect a community-crowdsourced dataset to facilitate its large-scale detection using stateof-the-art classifiers. We contrast the capabilities of natural language processing classifiers, ranging from BERT-based to LLM-based classifiers, in detecting violence-provoking speech with their capabilities to detect anti-Asian hateful speech. In contrast to prior work that has demonstrated the effectiveness of such classifiers in detecting hateful speech (F1 = 0.89), our work shows that accurate and reliable detection of violence-provoking speech is a challenging task (F1 = 0.69). We discuss the implications of our findings, particularly the need for proactive interventions to support Asian communities during public health crises. 1 Introduction Online platforms struggle with various forms of information pollution, including but not limited to harmful speech and misinformation. These malicious phenomena often intertwine in complex ways, exacerbating real-world issues. A glaring example of this is the dramatic increase in hate crimes against Asian communities during the COVID-19 pandemic, which included physical assaults and verbal harassment (NYT, 2021a). Rumors about the virus’s origins coalesced with pre-existing prejudices, resulting in narratives that portrayed Asians as “uncivilized”, blamed them for the virus, and labeled them as “spies”. This intricate interplay between different forms of malicious content led to a 339% surge in anti-Asian crimes in 2021 (NBC, 2022). Such narratives have had a far-reaching impact, making community members afraid to engage in basic daily activities, from grocery shopping to using public transit (NYT, 2021b). Moreover, this increase in real-world attacks was not limited to one ethnic group but affected diverse Asian communities, including Chinese, Korean, Vietnamese, and Filipino Americans (NYT, 2021a). In light of the dramatic uptick in anti-Asian violence and ensuing community fear, the necessity to accurately identify violence-provoking speech — i.e., speech that could promote real-world violence against members of targeted communities (Benesch et al., 2021), becomes paramount. This differs from hateful speech, which is a more subjective form of expression that may not directly incite violence (Benesch et al., 2021). 
While both these phenomena share commonalities — being rooted in prejudice and derogation — violence-provoking speech constitutes a specific subset of hateful speech that explicitly or implicitly encourages acts of aggression. The higher severity of harm associated with violence-provoking speech (Scheuerman et al., 2021) calls for targeted approaches for its detection, beyond treating hate as a monolithic entity. Recognizing that the perception of what qualifies as violence-provoking is not universal but varies among the targeted communities, we adopt a community-centric approach. We leverage the “insider perspectives” (Kim et al., 2021) and the subjective lived experiences (Dredge et al., 2014) of community members to capture the nuances, slurs, and coded language that may be overlooked by outsiders. Our focus is particularly on Asian communities in the context of the COVID-19 pandemic, who were disproportionately impacted by violence-provoking speech leading to real-world 12672 harm. We address two key research questions: RQ1: What are the characteristics of violenceprovoking speech that targets Asian communities? How is anti-Asian violence-provoking speech different from anti-Asian hateful speech? RQ2: Can state-of-the-art natural language processing (NLP) approaches accurately detect violence-provoking speech? How do the detection abilities compare to that of hate speech detection? We address these research questions by developing and validating a codebook for identifying anti-Asian violence-provoking speech, while working with Anti-Defamation League, a leading nongovernmental organization that specializes in tackling real-world hate and extremism (RQ1). We then use the codebook to obtain crowd-sourced annotations for violence-provoking and hateful content from individuals who self-identify as Asian community members. Our dataset demonstrates high inter-rater agreement (Fleiss’ κ = 0.66 for violenceprovoking speech labels).1 We then use the annotated data to develop binary classifiers that are trained to distinguish (i) violence-provoking content from not-violenceprovoking content and (ii) hateful content from non-hateful content. We find that while the NLP approaches effectively detect hateful speech (F1 score = 0.89), it is relatively more challenging to detect violence-provoking speech (F1 score = 0.69) (RQ2), perhaps due to its nuanced and subjective nature that relies on victims’ own perceptions. We discuss possible reasons why detecting violence-provoking speech is challenging and the implications of lacking capabilities of the detectors. Additionally, we discuss how our developed approaches could aid in the development of moderation algorithms that employ tiered penalties for content that violate norms of varying severities. Finally, we highlight the need to develop traumainformed approaches to proactively support targeted communities. 2 Related Work We categorize relevant prior work into three categories: (i) studies focusing on different forms of harmful speech on online platforms, including fear speech and hateful speech, (ii) studies of anti-Asian content, and (iii) detection methods for different forms of harmful speech. 1Project webpage: https://claws-lab.github.io/ violence-provoking-speech/ Forms of harmful speech: Prior studies on harmful speech have mostly focused on instances of hate speech. Ezeibe (2021) show how hate speech is an often neglected driver for election violence. 
Another study by (Williams et al., 2020) has shown that online hate victimization is part of a wider process of harm that can begin on social media and then migrate to the physical world. Another form of harmful speech is fear speech. According to Buyse (2014), Fear speech is defined as an “expression aimed at instilling (existential) fear of a target (ethnic or religious) group.” While it cannot be pinpointed if fear speech can cause violence, it can lower the threshold for violence (Saha et al., 2021). Saha et al. (2023) study the prevalence of fear speech in a loosely moderated community (Gab.com) and observe that users posting fear speech are more influential as compared to those who post hate speech. The authors argue that this is mainly due to the nontoxic and argumentative nature of the speech posts. Violence-provoking speech is closely related to fear speech but is defined more specifically as speech that promotes violence. Closest to the concept of violence-provoking speech, Benesch (2012), define dangerous speech as an expression that has “a significant probability of catalyzing or amplifying violence by one group against another, given the circumstances in which they were made or disseminated.” In this work, we contextualize the framework by Benesch (2012) for anti-Asian violence-provoking speech by using a community-centric approach and aim to operationalize its large-scale detection using stateof-the-art classifiers. Anti-Asian hate: The outbreak of the COVID-19 pandemic has led to the spread of potentially harmful rhetoric, conspiracy theories, and hate speech towards several Asian communities. Tahmasbi et al. (2021) collected two large datasets from Twitter and Reddit(/pol/) and observed that COVID-19 was driving the rise of Sinophobic content on the web. While counterspeech users were actively engaged with hateful users, users were highly likely to become hateful after being exposed to the hateful rhetoric (He et al., 2021). In this work, we specifically focus on anti-Asian content that has the potential to provoke violence. Detection methods: ElSherief et al. (2021) propose a taxonomy of implicit hate speech and consider factors, including incitement to violence and intimidation. They also investigate the use of 12673 BERT-based classifiers for detecting implicit hate speech and discuss the underlying challenges like models struggling with coded hate symbols and entity framing. More recently, with advent of large language models (LLMs) like GPT-4, Matter et al. (2024) found good agreement between GPT-4 annotation’s and human coders in identifying violent speech. In this work, we aim to study if the capabilities of NLP classifiers also translate to antiAsian violence-provoking speech that is curated in a community-crowdsourced manner. 3 Study Overview Our study involves collecting a large-scale Twitter dataset that comprises anti-Asian content related to the COVID-19 pandemic. It is worth noting that given the harmful impact of violence-provoking speech, it is challenging to find examples that explicitly provoke violence, as they are frequently removed from the platform due to moderation efforts. However, instances of such speech still exist on major platforms. To overcome the challenge of collecting potentially violence-provoking speech examples from Twitter, we designed an elaborate pipeline to obtain a dense sample, which we then used for obtaining annotations from the Asian community members. As shown in Figure 1, there are 4 key parts to our study. 
The first part comprises data collection, where we collected about 420k Twitter posts containing anti-Asian keywords and COVID-19related keywords. We then used posts in subsequent parts of our study to answer our RQs: • To facilitate the effective development of the antiAsian violence-provoking speech codebook (RQ1), we prepared a set containing a reasonably substantial amount of potentially harmful content. The original ∼420k Twitter posts comprise a variety of topics, such as hateful expressions, counter to hateful expressions, or instances of someone sharing anecdotes about anti-Asian hate. However, since we are particularly interested in violence-provoking speech — speech that implicitly or explicitly promotes violence against the Asian community, we further filtered down the collected ∼420k Twitter posts to find a set of 4, 076 Twitter posts that is denser in violence-provoking content. This concentrated sample is essential for the manual analysis required in codebook development, without the impractical inundation of non-violence-provoking posts that risk overburdening annotators in a task that involves reviewing potentially traumatizing content. In Section 5, we describe how we used this subset of Twitter posts to develop a codebook for anti-Asian violence-provoking speech while emphasizing its distinction from hateful speech. • Our next goal was to collect annotations from community members to train, evaluate, and contrast classifiers for violence-provoking and hateful speech (RQ2). To ensure the wide applicability of the classifiers, the annotated Twitter posts should cover diverse Twitter posts, which may not be included in the 4, 076 posts filtered using the identified ‘dangerous’ keywords. At the same time, platform moderation makes a random sample from the ∼420k sparse in violence-provoking content. To this end, to curate a sample of Twitter posts with high diversity and a higher density of violenceprovoking posts, we used a prompt-based few-shot learning approach. Few-shot learning in natural language processing has not only demonstrated great effectiveness in terms of data-efficient accuracy (Schick and Schütze, 2021; Mozes et al., 2023) but also in terms of generalizability over out-of-domain distributions (Liu et al., 2022). Using the few-shot classifier, we selected a subset of 1, 000 of Twitter posts that are potentially dense in violence-provoking speech content. We then obtained community-centric speech annotations for these posts (Section 6), and used them to train and evaluate classifiers for violence-provoking and hateful speech detection (Section 7). We contrasted the classifiers’ capabilities and conducted error analysis to understand their shortcomings. 4 Anti-Asian COVID-19 Data Collection We started by collecting a large-scale dataset from Twitter using COVID-19 and anti-Asian keywords. We used the Twitter Academic API to collect data from January 1, 2020 to February 1, 2023, by querying using COVID-19 and anti-Asian keywords together. To consider a wide range of data, we expanded the set of keywords used in prior work. Keyword expansion strategies: The present study commenced with an initial set of 6 COVID-19 and 16 Anti-Asian keywords, adapted from He et al. (2021). We removed certain keywords like ‘ccpvirus’ as the focus of our work is on speech that targets Asians and not speech that may involve political factors. Based on the initial keyword set, we obtained 16 × 6 = 96 combinations of keywords. 
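For illustration, the keyword pairing that yields these 96 combinations can be sketched in a few lines of Python; the two lists below are abbreviated placeholders for the full keyword sets in Table 6, and the quoted-phrase query format is an assumption about how each pair is passed to the Twitter Academic API rather than the exact query strings used in this work:

from itertools import product

# Abbreviated, illustrative keyword lists; the full initial sets contain
# 16 anti-Asian and 6 COVID-19 keywords (Table 6 lists the expanded sets).
anti_asian_keywords = ["chink", "gook", "slant eye"]
covid_keywords = ["covid19", "coronavirus", "corona virus"]

# Each unique (anti-Asian, COVID-19) pair becomes one query; with the full
# initial lists this produces 16 x 6 = 96 combinations.
queries = [f'"{a}" "{c}"' for a, c in product(anti_asian_keywords, covid_keywords)]
print(len(queries), queries[0])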
For each unique combination of anti-Asian and 12674 Violence-Provoking Speech Codebook Violence-Provoking-Dense Sample (32 for each violence-provoking & not-violence-provoking class) Violence-Provoking-Dense Sample Violenceprovoking (N = 246) 3 annotators label each post for violenceprovoking and hateful content Classifiers for antiAsian hateful speech detection Classifiers for antiAsian violence-provoking speech detection Figure 1: Study Overview. Our study comprises 4 key parts: (i) collecting anti-Asian COVID-19 data from Twitter, (ii) developing and validating anti-Asian violence-provoking speech codebook, (iii) obtaining community-centric annotations, and (iv) training and evaluating detection classifiers on community-crowdsourced data. COVID-19 keywords, we queried Twitter to collect posts that contain both keywords. Since anti-Asian speech could have evolved since He et al. (2021) conducted their study, we adopted two strategies to further expand the list of anti-Asian keywords. (i) Word co-occurrence: We calculated the similarity scores between pairs of words based on their co-occurrence frequency. The intuition behind this approach is that anti-Asian words would co-occur frequently in the same post. For each initial keyword, two authors manually verified the top 5 cooccurring words and expanded the set. (ii) word2vec similarity: We trained a word2vec (Mikolov et al., 2013) model on the dataset collected using the initial set of keywords, setting the embedding dimension to 100 and the context window size to 5. We then computed the cosine similarity between the words in the vocabulary and the initial list of anti-Asian keywords. Two authors then manually verified the top five similar words for each initial anti-Asian keyword and expanded the set of keywords. We conducted this expansion process in a snowball fashion, repeating each approach five times to arrive at the final list of keywords. In each run, new keywords were identified using the keywords from the previous iteration, which were then manually verified to be relevant. We show the final list of 33 anti-Asian and COVID-19 keywords used for our study in Table 6 (Appendix). We did another round of data collection by taking unique combinations of COVID-19 and expanded anti-Asian keywords as queries. Finally, we curated a dataset comprising 418, 999 Twitter posts. 5 Development of Anti-Asian Violence-Provoking Speech Codebook The existing guidelines developed by the Dangerous Speech Project (Benesch et al., 2021) provide a general framework for identifying violenceprovoking speech. We followed this guideline and expanded it to develop a comprehensive codebook that allows empirical measurement and categorization. We grounded the defined sub-concepts in real data from Twitter and contextualized them in the community-targeting (i.e., anti-Asian) framework. To enable effective development of the codebook by qualitatively coding the data, we started by obtaining a subset of the ∼420k tweets by filtering based on ‘dangerous’ keywords so that it is potentially concentrated in violence-provoking speech. Preparing a concentrated sample for codebook development: To obtain dangerous keywords, we started with the example phrases mentioned in the practical guide provided by the Dangerous Speech Project (Benesch et al., 2021) and expanded the phrases by computing similarity scores with the phrases in our dataset. For similarity computation, we used word2vec embeddings fine-tuned on our corpus. 
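As a minimal sketch of the word2vec-based expansion (not the exact pipeline used in this work), the snippet below trains a gensim Word2Vec model with the settings reported above, 100-dimensional embeddings and a context window of 5, and retrieves the five nearest neighbours of each seed keyword for manual review; the tokenized corpus and the seed list are placeholders:

from gensim.models import Word2Vec

# Placeholder corpus: a list of tokenized posts standing in for the collected tweets.
tokenized_posts = [
    ["corona", "virus", "chink"],
    ["kungflu", "virus", "gook"],
]

# Settings reported above: 100-dimensional vectors and a context window of 5.
w2v = Word2Vec(sentences=tokenized_posts, vector_size=100, window=5,
               min_count=1, workers=4)

seed_keywords = ["chink", "gook"]  # illustrative subset of the initial keyword list
candidates = {}
for keyword in seed_keywords:
    if keyword in w2v.wv:
        # Top-5 neighbours by cosine similarity; these are then verified manually.
        candidates[keyword] = [w for w, _ in w2v.wv.most_similar(keyword, topn=5)]
print(candidates)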
Table 7 (Appendix) shows the initial dangerous keywords obtained from the Dangerous Speech Project and the expanded set after manually removing irrelevant and redundant phrases. It is worth noting that dangerous keywords include explicitly violence-provoking terms like ‘kill’ as well as implicit keywords that indicate dehumanization (like comparisons to ‘ants,’ ‘lice,’) and the use of words like ‘mercy,’ ‘charity,’ and ‘forgive’ to 12675 Annotators Violence-provoking Hateful Internal Annotators κ = 0.78 κ = 0.86 External Annotators κ = 0.69 κ = 0.82 Prolific Annotators κ = 0.66 κ = 0.72 Table 1: Codebook validation. Fleiss’ κ scores for violence-provoking and hateful labels. display virtuousness over the targeted community. Of the ∼420k Twitter posts, 4, 076 Twitter posts contained one or more dangerous phrases. We use this potentially rich sample in anti-Asian violenceprovoking content for codebook development. Developing violence-provoking speech codebook: Our theory-guided, data-centric approach was inspired by previous work in understanding minority stress (Saha et al., 2019) and credibility indicators (Zhou et al., 2023). To operationalize the framework proposed by Benesch et al. (2021) within the anti-Asian hate context and for empirical tasks, three of the authors first engaged in a discussion concerning its applicability and the potential need for expansions or alterations. The discussion was informed by the authors’ lived experiences, with some being members of the Asian community. Then, through an iterative process, the authors inductively developed categorical codes to characterize violence-provoking speech and deductively assessed the applicability. Two authors independently coded 180 randomly sampled instances that contained dangerous phrases and drafted concepts with definitions and examples. With a third author involved, the discrepancies in annotations were discussed, and drafted categories were combined and revised. We tested this revised codebook with the sample until no new themes emerged, resulting in a unified and coherent document that reflected the perspectives and consensus of all contributors. The codebook was then reviewed by all co-authors before being validated by members of the Asian community and experts in Anti-Defamation League (details in the following subsection). Table 5 in Appendix shows the final codebook to characterize anti-Asian violence-provoking speech. Validating the codebook: We validated the effectiveness of the codebook through pilot testing with both internal and external annotators. We randomly sampled 100 tweets from the entire Twitter dataset, and for each tweet, the annotators were asked if it contained violence-provoking speech or hateful content. We quantified the level of agreement between two annotators using Fleiss’ κ. Internal testing was done by the two researchers who drafted initial codebooks, which demonstrated substantial agreement for both categories (Table 1). Following this, to establish the external validity of the codebook, we recruited two graduate students who identified as members of the Asian community and had no other involvement in the presented research. They annotated the same 100 tweets, utilizing the codebook as a framework to determine the presence of violence-provoking and hateful content targeting the Asian community. Moreover, they were requested to provide a rationale outlining their assessment of the codebook’s efficacy. 
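For completeness, agreement scores of the kind reported in Table 1 can be computed with standard tooling; the sketch below is illustrative rather than the exact evaluation script, assumes three annotators per post, and encodes the binary judgments as 0/1:

import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows are annotated posts, columns are the three annotators;
# 1 = violence-provoking, 0 = not violence-provoking (toy values).
annotations = np.array([
    [1, 1, 1],
    [0, 0, 1],
    [0, 0, 0],
    [1, 1, 0],
])

# aggregate_raters converts per-rater labels into per-post category counts,
# the input format expected by fleiss_kappa.
counts, _ = aggregate_raters(annotations)
print(round(fleiss_kappa(counts, method="fleiss"), 2))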
The findings of the external validity evaluation revealed that the codebook exhibited strong external utility, underscored by the high Fleiss’ κ scores for both variables of interest (Table 1). We also incorporated suggestions from Anti-Defamation League2, a leading non-governmental organization that specializes in countering hate and extremism, to ensure that the codebook covers various aspects of anti-Asian violence-provoking speech. We used the feedback provided by the graduate students and the NGO partners to make minor changes to the codebook, primarily to rephrase certain definitions. 6 Community-Centric Annotations with a Violence-Provoking-Dense Sample With the validated codebook, we aimed to get community-centric annotations for 1, 000 Twitter posts that contained diverse examples and were also rich in violence-provoking content. To enable this, we trained a few-shot classification model to filter relevant posts, given the remarkable generalizability and accuracy of few-shot learning. Prompt-based few-shot learning for creating a dense violence-provoking sample for community-centric annotation: From the Twitter posts that were labeled while validating the codebook, we used a representative subset of labeled examples (a total of 64 examples, 32 were violence-provoking, and 32 were not) to train a binary classification model. More specifically, we used a prompt-based few-shot learning method called Pattern Exploiting Training (PET) (Schick and Schütze, 2021). This method converts the input data into cloze-style statements and finetunes a pre-trained language model to predict the missing token. We formulated the classification task using the following prompt: Is "<input 2https://www.adl.org/ 12676 sentence>" violence-provoking? [MASK]., with the [MASK] token serving as a placeholder that could be verbalized as “Yes” or “No” during model training. We used the pre-trained DeBERTa-v2-xxlarge language model (He et al., 2020) as the backbone and fine-tuned it for 200 iterations using a batch size of 16. The rest of the hyper-parameters were set to the values used in prior work (Schick and Schütze, 2021). All the language models in this work were trained on an NVIDIA Quadro RTX 8000; the cumulative training time amounted to about 24 GPU hours. We then used the trained model to predict labels and prediction scores over the entire ∼420k Twitter posts in our curated dataset. We then randomly selected 1000 Twitter posts that had a classification score of ≥0.9. Since these Twitter posts have a high probability score of being violence-provoking, the set of 1000 Twitter posts is likely to contain a higher density of violence-provoking posts than the entire set of ∼420k posts (labeling of 200 examples from the filtered set by the authors indicated that about 30% posts were violence-provoking). Furthermore, the generalizability of the few-shot learning approach helped in identifying violence-provoking posts that do not necessarily contain dangerous keywords. For instance, “these are not people of god, they are virus-ridden chinks who do not deserve our goodness" does not contain any dangerous keywords in our list (see Table 7) but was correctly identified to be violence-provoking by the few-shot learning approach. Next, we used these 1000 Twitter posts to obtain community-centric annotations. Obtaining community-centric annotations: We designed a custom interface to collect data from a crowd-sourcing platform called Prolific (https: //www.prolific.co/). 
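Returning to the few-shot filtering step described above, the cloze-style formulation can be made concrete with a short sketch. The snippet below only illustrates the scoring side, comparing the masked-token probabilities of the "Yes" and "No" verbalizers for the prompt given earlier; PET additionally fine-tunes the backbone on the 64 labeled examples, and treating each verbalizer as a single vocabulary item is a simplifying assumption:

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "microsoft/deberta-v2-xxlarge"  # backbone reported above
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name).eval()

def violence_provoking_score(post: str) -> float:
    # Cloze prompt with a [MASK] placeholder that is verbalized as "Yes"/"No".
    prompt = f'Is "{post}" violence-provoking? {tokenizer.mask_token}.'
    enc = tokenizer(prompt, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**enc).logits
    mask_idx = (enc.input_ids[0] == tokenizer.mask_token_id).nonzero().item()
    probs = logits[0, mask_idx].softmax(dim=-1)
    yes_id = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Yes")[0])
    no_id = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("No")[0])
    return float(probs[yes_id] / (probs[yes_id] + probs[no_id]))

# After PET fine-tuning, posts scoring at least 0.9 formed the pool from which
# the 1,000 posts sent for community annotation were sampled.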
We recruited participants who identified as Asians, were located in the United States or Canada, spoke fluent English, and were at least 18 years old. The participants were compensated at an hourly rate of 10 USD per hour. To ensure high-quality annotations, we only considered participants who had at least 100 submissions and had a minimum of 95% approval rate across their past submissions. The overall cost of obtaining annotations was about 760 USD. Our user-friendly interface required the eligible participants to read the consent form and acknowledge their consent and eligibility before being directed to the codebook that contained all the necessary definitions and examples (see Table 5 in Violenceprovoking (N = 246) Hateful but not violence-provoking (N=173) Figure 2: Category-wise distribution of the dataset obtained from community-centric annotations. ‘Data statistics’ in Section 6 provides additional details. Appendix). We instructed the participants to read the codebook thoroughly. Once the participants had confirmed reading the codebook, they were directed to an ‘Examples page’ where they were shown 5 different illustrative Twitter posts and were asked to provide annotations for the following questions: “Does this post contain violence-provoking speech that targets Asian community member(s)?" and “Does this post contain hateful speech that targets Asian community member(s)?"; for each of the questions the participants could respond in a ‘yes’/‘no’. Upon responding to the examples, the participants were shown the expected answers and the rationale for the expected categorization. The participants could revise their response to the example posts multiple times, and only after they had correctly answered the 5 examples were they directed to the annotations page. On the annotations page, the participants saw 25 posts, one by one, for which they provided answers to the questions above in yes/no. The annotations page also contained an abridged version of the codebook. The participants could discontinue the study at any point for any reason while still getting compensated for the time spent on the study. We obtained annotations from 3 different participants for each of the 1, 000 Twitter posts. In all, we received annotations from 120 unique participants, with a median age of 31, among which 54% identified as males and the rest as females. The country of birth for the participants was distributed across the United States, Canada, China, Korea, Philippines, Japan, and Vietnam. On average, a participant took 29 minutes to complete one session. Data statistics: We assigned final labels for violence-provoking and hateful speech based on majority annotations. In other words, if at least 2 out of 3 annotators considered a Twitter post to be either violence-provoking (or hateful), it was 12677 violence-provoking and hateful (N = 246) • fucken slanteyed pigs brought the virus in sa yet y’all post black hands. fucking racist weak pigs • they had it all along trust me, can’t trust the asian spies here. they helped spread the virus • fucking dog eaters, asian weasels don’t have balls to fight like a man that’s why the pandemic • clearly your a hater a democrate or maybe a communist like the communist chinese who launched lied threw a deadly virus at the world these chinks must pay for this evil jealous liars and there will be 4 moreee years trumpster.. 4 moreeee years trumpster • they don’t care about schools or kids! 
these are satanic baby eatin soulless traitors who are at war with america on our soil. bet if you lined em chinks up and shot em all this corona would be over yesterday Hateful but not violence-provoking (N = 173) • open them eyes up ugly chink bitch. you wish you looked as good as me #covid • they are giving them the virus and sending to africa. them chinks playing chess on us • manmade chinese virus is designed to kill anyone in the world who is not a chink • he didn’t kill them chinks, a virus did. chinks. • she don’t have chinky eyes, girl got big ass eyes just like her paw. she aint getting the virus Not violence-provoking and not hateful (N = 44) • this is why we hate wasians like him. white passing bitches like him don’t get called chink . • @user you people think he can make miracles. blame the chinese for the wuhan kung flu virus. what the hell would you do. liberals only know how to create riots looting and thugs • people do bad and when they get the same back they play victims. chinese 3 weeks ago wherever they existed were segregated, seen as virus carriers, treated as if they were sub humans things have since changed and the chinese are doing the same, and now we are all screaming with squinted eyes.. • when stupidity prevails over intelligence. now the deads and the spread of the chinese virus killing more people gona be responsibility of that federal stupid and slanted judge • it is a weapon that’s why china lied about it they don’t care about their people they killed 70 million to get in power so they released it on they’re none party members #covid #gooks Table 2: Qualitative examples belonging to different subsets of the data annotated by the community members. assigned as violence-provoking (or hateful). Out of 1000 Twitter posts, 246 tweets were classified as both hateful and violence-provoking, and 954 tweets were hateful but not necessarily violenceprovoking. We only considered a Twitter post to be not violence-provoking when it received no annotations (i.e., 0 out of 3) for the violence-provoking label to avoid any ambiguity. Adopting this definition, 173 Twitter posts were identified to be hateful but not violence-provoking. Similarly, if a Twitter post received 0 out of 3 annotations for being hateful, it was considered not hateful (n = 44). We present some qualitative examples from the community-annotated data in Table 2 and a visualization of the categorical distribution of data in Figure 2. Finally, we observed that the inter-annotator agreement between the annotators, as quantified by Fleiss’ κ, was 0.66 and 0.72 for violence-provoking and hateful labels, respectively (see Table 1), which are notably better than the agreement scores reported in prior related studies (Saha et al., 2021). Next, we use the curated dataset to develop classifiers that can detect violence-provoking and hateful speech by finetuning pre-trained language models for respective binary classification tasks. 7 Contrasting Hateful & ViolenceProvoking Speech Classifiers We now focus on developing state-of-the-art classifiers to assess their ability to detect violenceprovoking and hateful speech targeting Asian communities (RQ2). We evaluate and contrast both fine-tuned BERT-based language models as well as large language models like Mixtral (Jiang et al., 2024) for the following two tasks: (i) hateful and (ii) violence-provoking speech detection. 
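Before turning to the classifiers, the label-aggregation rule described above (a positive label for at least 2 of 3 votes, a negative label only for 0 of 3 votes) can be made explicit with a small helper; this is an illustrative reconstruction rather than the authors' exact script, and the treatment of the 1-of-3 case as ambiguous is an assumption:

from typing import List, Optional

def aggregate_label(votes: List[int]) -> Optional[bool]:
    """Aggregate three binary annotations (1 = label assigned, 0 = not assigned)."""
    assert len(votes) == 3
    if sum(votes) >= 2:    # majority: the post receives the label
        return True
    if sum(votes) == 0:    # unanimous absence: the post clearly lacks the label
        return False
    return None            # 1 of 3: treated here as ambiguous

print(aggregate_label([1, 1, 0]), aggregate_label([0, 0, 0]), aggregate_label([1, 0, 0]))
# True False None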
Hateful speech detection: We consider the 954 Twitter posts identified as hateful in our community-annotated dataset as positive examples for this task. For negative examples, we augment the 44 ‘not hateful’ examples in our dataset by adding the ‘neutral’ and ‘counter-hate’ Twitter posts curated by He et al. (2021), resulting in a total of 1861 ‘not hateful’ examples. We then split the dataset into an 80 : 20 ratio to create the training and validation set. In Table 3, we report the macro-averaged classification metrics, averaged over 5 different runs, for models that span fine-tuned language models (DistilBERT (Sanh et al., 2019), BERT (Devlin et al., 2018), and RoBERTa (Liu et al., 2019)), and zero-shot and few-shot LLMs – Flan T5-XL (Chung et al., 2022) and Mixtral-Ins (Jiang et al., 2024). 12678 Implementation details: The process of finetuning BERT-based language models involves replacing the “pre-training head” of the model with a randomly initialized “classification head”. The randomly initialized parameters in the classification head are learned by fine-tuning the model on classification examples while minimizing the crossentropy loss. To train the models, we use Adam optimizer (Kingma and Ba, 2014) with a learning rate initialized at 10−4, with a batch size of 16 and default hyper-parameters (Wolf et al., 2020). We used early stopping to stop training when the loss value on the validation set stops to improve for 3 consecutive epochs. For large language models, we carefully crafted the task-specific instructions (shown in Appendix A.1) to obtain zero-shot outputs and provided 16 randomly-sampled examples (8 per class) along with the instruction to experiment with the few-shot learning setting. Table 3 shows that all the classifiers are effective in identifying hateful speech, consistently demonstrating a F1 score greater than 0.80. Few-shot learning with Mixtral-Ins demonstrates the best performance among all the models, with an F1 score of 0.89. The performance of the RoBERTa-large model, albeit fine-tuned using all the training set, is also competitive. In effect, like prior studies (He et al., 2021), we find that state-of-the-art NLP classifiers can effectively discern anti-Asian hateful speech from not hateful speech. Violence-provoking speech detection: We consider 246 Twitter posts identified as violenceprovoking in our community-annotated dataset as positive examples from this task. For negative examples, we combine the hateful but not violenceprovoking examples and the not hateful from our dataset, the neutral and counter-hate examples from He et al. (2021), resulting in a total of 2034 negative examples. We again split the dataset into an 80 : 20 ratio to create the training and validation set. We show the classification results in Table 4. We adopt the same models and training strategies as those for developing hateful classifiers. Table 4 shows that RoBERTa-large is most effective in detecting violence-provoking speech and achieves an F1 score of 0.69. The performance of the classifier is lacking, especially in comparison to the hateful classifier, indicating the difficulty in detecting violence-provoking speech. 
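A minimal sketch of the fine-tuning recipe described above, Adam with an initial learning rate of 1e-4, batch size 16, cross-entropy loss, and early stopping after 3 epochs without validation improvement, is given below; the toy texts and the epoch cap are placeholders, and the snippet is indicative rather than the exact training code:

import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "roberta-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy stand-ins for the 80:20 train/validation splits described above.
train_texts, train_labels = ["an innocuous post", "a hateful post"], [0, 1]
val_texts, val_labels = ["another innocuous post"], [0]

def make_loader(texts, labels, batch_size=16):
    enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
    data = list(zip(enc["input_ids"], enc["attention_mask"], torch.tensor(labels)))
    return DataLoader(data, batch_size=batch_size)

train_loader = make_loader(train_texts, train_labels)
val_loader = make_loader(val_texts, val_labels)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # Adam, initial lr of 1e-4

best_val, patience, bad_epochs = float("inf"), 3, 0
for epoch in range(50):  # upper bound; early stopping usually ends training sooner
    model.train()
    for input_ids, attention_mask, labels in train_loader:
        loss = model(input_ids=input_ids, attention_mask=attention_mask,
                     labels=labels).loss  # cross-entropy from the classification head
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    model.eval()
    with torch.no_grad():
        val_loss = sum(model(input_ids=i, attention_mask=a, labels=l).loss.item()
                       for i, a, l in val_loader)
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:  # stop after 3 epochs without improvement
            break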
Additionally, few-shot learning using Mixtral-Ins demonstrates a notably subpar performance in comparison to fineModel # Parameters F1 Precision Recall Accuracy Random (uniform) – 0.5071 0.5151 0.5162 0.5147 Random (biased) – 0.5021 0.5023 0.5023 0.5295 DistilBERT-base-uncased 66M 0.8559 0.8489 0.8520 0.8620 BERT-large-uncased 340M 0.8833 0.8737 0.8818 0.8773 RoBERTa-large 354M 0.8897 0.8762 0.8871 0.8807 Flan T5-xl (Ins + n = 16) 3B 0.8192 0.8153 0.8254 0.8278 Mixtral-Ins (zero-shot) 8x7B 0.8642 0.8596 0.8613 0.8742 Mixtral-Ins (Ins + n = 16) 8x7B 0.8941 0.8838 0.8916 0.8893 Table 3: Classification performance of models trained to detect hateful speech. The values are averages of 5 experimental runs with different random seeds. tuned RoBERTa, with an F1 score of 0.62, indicating that a limited examples may not be enough to model the variability and subjectivity of violenceprovoking speech. Next, we perform an error analysis to understand the underlying challenges. Error Analysis of the Violence-Provoking Speech Classifier: We aim to discern the limited capabilities of the above classifiers to detect violence-provoking content. First, we note that the negative examples include hateful but not violence-provoking content, and counter- and neutral speech. While the model can distinguish violence-provoking content from counter- and neutral speech, it struggles to distinguish hateful but not violence-provoking content from violenceprovoking content. This is primarily because violence-provoking speech is a subset of hateful speech that involves more nuanced expressions.3 Violence-provoking and Not-violence-provoking examples demonstrate statistical similarities: Our experiments indicate that hateful but not violenceprovoking (n = 173) and violence-provoking (n = 246) speech demonstrate statistically indistinguishable sentiment scores (quantified using VADER (Hutto and Gilbert, 2014)), positive or negative emotions (quantified using ‘posemo’ and ‘negemo’ categories in LIWC (Tausczik and Pennebaker, 2010)), and swear words (quantified using ‘swear’ category). The p-values computed using a two-sample t-test with equal variances assumption were > 0.05 in all the cases. Lack of BERT-based models to effectively encode 3An alternative formulation for detecting violenceprovoking speech could have been a 3-way categorization of content into ‘violence-provoking’, ‘hateful’, and ‘other’ categories. However, we found that this formulation also leads to limited distinguishability between hateful and violence-provoking categories (macro F1 score of 0.63 with a RoBERTa-large classifier), with the majority of miscategorizations being among the two harmful categories. 12679 Model # Parameters F1 Precision Recall Accuracy Random (uniform) – 0.4302 0.5025 0.5059 0.5307 Random (biased) – 0.4745 0.4723 0.4775 0.8032 DistilBERT-base-uncased 66M 0.6772 0.6843 0.6790 0.8576 BERT-large-uncased 340M 0.6845 0.6912 0.6897 0.8603 RoBERTa-large 354M 0.6975 0.7007 0.6991 0.8689 Flan T5-xl (Ins + n = 16) 3B 0.5214 0.5317 0.5249 0.6556 Mixtral-Ins (zero-shot) 8x7B 0.5722 0.5612 0.5652 0.7104 Mixtral-Ins (Ins + n = 16) 8x7B 0.6213 0.5965 0.6241 0.7743 Table 4: Classification performance of models trained to detect violence-provoking speech. The values are averages of 5 runs with different random seeds. compositionality: Due to the lack of statistical differences in occurrence-based identifiers, effective modeling of the compositionality in language becomes crucial. 
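The statistical-similarity check summarized above can also be reproduced in a few lines; the snippet below is a hedged illustration that uses the VADER compound score as the sentiment measure and SciPy's two-sample t-test with equal variances, applied to placeholder lists rather than the annotated posts:

from scipy.stats import ttest_ind
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

# Placeholder lists standing in for the 173 hateful-but-not-violence-provoking
# posts and the 246 violence-provoking posts.
hateful_only_posts = ["example hateful post one", "example hateful post two"]
provoking_posts = ["example violence-provoking post", "another provoking post"]

hateful_scores = [analyzer.polarity_scores(p)["compound"] for p in hateful_only_posts]
provoking_scores = [analyzer.polarity_scores(p)["compound"] for p in provoking_posts]

# Two-sample t-test assuming equal variances, as in the analysis above;
# p > 0.05 would indicate the two groups are statistically indistinguishable.
t_stat, p_value = ttest_ind(hateful_scores, provoking_scores, equal_var=True)
print(t_stat, p_value)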
However, it has been demonstrated that pre-trained language models like RoBERTa and BERT struggle with compositional semantics (for instance, negation and Semantic Role Labeling) (Stali¯unait˙e and Iacobacci, 2020). We illustrate this using concrete examples in Appendix A.2. Our work highlights the challenges in detecting violence-provoking speech and the need for stronger language modeling capabilities beyond fine-tuned pre-trained language models or employing LLMs that learn with few examples. 8 Discussion and Conclusion We developed a comprehensive codebook to enable the conceptual identification of violence-provoking speech and distinguish it from more subjective hateful speech (RQ1). We then used the codebook to obtain annotations from Asian community members. The high inter-annotator agreement scores demonstrate the effectiveness of our codebook and the quality of the collected data. We then used the annotated data to train classifiers that can be used for detecting hateful and violence-provoking speech. We highlighted the lacking capabilities of NLP classifiers in effectively distinguishing violenceprovoking speech from hateful speech and conducted error analysis to aid future research (RQ2). Implications: We believe that the findings from our study can enable informed decision-making by different stakeholders: (a) policy-makers who are responsible for regulating harmful speech, including both hateful speech and violence-provoking speech, (b) practitioners who are responsible for algorithmic accountability of online platforms, and (c) the targeted Asian community members. Our work hints at the need for a tiered penalty system on online platforms that may allow for more nuanced and proportionate responses to varying types of harmful content, enhancing algorithmic accountability. Tiered penalties are also more fair as they align penalties with the severity of the offense, thereby offering a balanced deterrent that could encourage more thoughtful online interaction. Furthermore, our study underscores the importance of tailored, trauma-informed interventions (Han et al., 2021) to support targeted communities as there is a need to create more holistic and humane approach to protecting targeted communities. 9 Limitations & broader perspective Limitations and future work: It is important to be clear about the limitations of this study. Our study focuses on Twitter posts explicitly mentioning anti-Asian keywords in the context of COVID19. To avoid moderation penalties, users often refer to the targeted communities as ‘they’ or ‘them,’ which would have been skipped in our study. However, considering such posts requires additional data curation efforts and may lead to more noise in the samples and a lower density of violenceprovoking posts. In future work, we will conduct a user-profile level analysis to uncover the individuallevel traits and content exposure that may trigger violence-provoking expressions. We also intend to leverage the developed codebook and communitycrowdsourced dataset to develop more effective approaches to detect violence-provoking speech. Dataset and resources: The data collection and annotation for this study have been approved by the Institutional Review Board (IRB) at the researchers’ institution. The annotators were informed about the potentially hateful and violence-provoking nature of the content, targeting individuals from the same ethnicity (i.e., Asians), and had the agency to discontinue participation at any point. 
We anonymized all data, replaced all user mentions with ‘@user’, and rephrased all examples in the paper to avoid traceability. The data was stored on IRB-approved devices. The resources developed in this study are available at https://claws-lab.github.io/ violence-provoking-speech/. The existing resources (models, data, and software) we used are publicly available for research, and we abide by their terms of use. Broader social impact: The limitation of machine learning models in accurately distinguishing between violence-provoking and hateful speech could 12680 lead to false positives, unfairly penalizing benign users, if not used responsibly. Moreover, we condemn any reinforcement of harmful stereotypes or prejudices against the Asian community that could be inadvertently caused by the presented examples of Twitter posts. Some of the authors of this work identify as Asians, which enabled us to contextualize our findings and discussions with their lived experiences. Some of the content included in the paper could be offensive, especially to readers of Asian descent, for which the authors suggest caution to the readers. 10 Acknowledgements This research/material is based upon work supported in part by NSF grants CNS-2154118, ITE2137724, ITE-2230692, CNS2239879, Defense Advanced Research Projects Agency (DARPA) under Agreement No. HR00112290102 (subcontract No. PO70745), CDC, and funding from Microsoft and the American Foundation for Suicide Prevention (AFSP). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the position or policy of DARPA, DoD, SRI International, CDC, NSF, NIH, AFSP, and no official endorsement should be inferred. Gaurav is partially supported by the JP Morgan AI Research PhD Fellowship and the Snap Research Fellowship. We are thankful to Kefai (KeKe) Debebe (CDC) and Daniel A. Bowen (CDC) for their continuous feedback on this work. We are also grateful to Dr. Morgan Clark (ADL) for their support and feedback while developing the codebook. References Susan Benesch. 2012. Dangerous speech: A proposal to prevent group violence. Dangerous Speech Project. Susan Benesch, Cathy Buerger, Tonei Glavinic, Sean Manion, and Dan Bateyko. 2021. Dangerous Speech: A Practical Guide. Accessed: 2023-09-12. Antoine Buyse. 2014. Words of violence:" fear speech," or how violent conflict escalation relates to the freedom of expression. Hum. Rts. Q., 36:779. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805. Rebecca Dredge, John Gleeson, and Xochitl De la Piedad Garcia. 2014. Cyberbullying in social networking sites: An adolescent victim’s perspective. Computers in human behavior, 36:13–20. Mai ElSherief, Caleb Ziems, David Muchlinski, Vaishnavi Anupindi, Jordyn Seybolt, Munmun De Choudhury, and Diyi Yang. 2021. Latent hatred: A benchmark for understanding implicit hate speech. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 345–363, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Christian Ezeibe. 2021. Hate speech and election violence in nigeria. 
Journal of Asian and African Studies, 56(4):919–935. Hae-Ra Han, Hailey N Miller, Manka Nkimbeng, Chakra Budhathoki, Tanya Mikhael, Emerald Rivers, Ja’Lynn Gray, Kristen Trimble, Sotera Chow, and Patty Wilson. 2021. Trauma informed interventions: A systematic review. PloS one, 16(6):e0252747. Bing He, Caleb Ziems, Sandeep Soni, Naren Ramakrishnan, Diyi Yang, and Srijan Kumar. 2021. Racism is a virus: Anti-asian hate and counterspeech in social media during the covid-19 crisis. In ASONAM 2021, pages 90–94. Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. arXiv:2006.03654. Clayton Hutto and Eric Gilbert. 2014. Vader: A parsimonious rule-based model for sentiment analysis of social media text. In AAAI ICWSM, volume 8, pages 216–225. Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al. 2024. Mixtral of experts. arXiv preprint arXiv:2401.04088. Seunghyun Kim, Afsaneh Razi, Gianluca Stringhini, Pamela J Wisniewski, and Munmun De Choudhury. 2021. You don’t know how i feel: Insider-outsider perspective gaps in cyberbullying risk detection. In AAAI ICWSM, volume 15, pages 290–302. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv:1412.6980. Nelson F Liu, Ananya Kumar, Percy Liang, and Robin Jia. 2022. Are sample-efficient nlp models more robust? arXiv:2210.06456. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv:1907.11692. 12681 Daniel Matter, Miriam Schirmer, Nir Grinberg, and Jürgen Pfeffer. 2024. Close to human-level agreement: Tracing journeys of violent speech in incel posts with gpt-4-enhanced annotations. arXiv preprint arXiv:2401.02001. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv:1301.3781. Maximilian Mozes, Jessica Hoffmann, Katrin Tomanek, Muhamed Kouate, Nithum Thain, Ann Yuan, Tolga Bolukbasi, and Lucas Dixon. 2023. Towards agile text classifiers for everyone. arXiv:2302.06541. NBC. 2022. Anti-Asian hate crimes increased 339 percent nationwide last year, report says. Accessed: 2023-09-12. NYT. 2021a. Asian-Americans were targeted in nearly 3,800 hate incidents in the past year. Accessed: 202309-12. NYT. 2021b. Selling Anti-Asian Violence: Who is being attacked where. Accessed: 2023-09-12. Koustuv Saha, Sang Chan Kim, Manikanta D Reddy, Albert J Carter, Eva Sharma, Oliver L Haimson, and Munmun De Choudhury. 2019. The language of lgbtq+ minority stress experiences on social media. CSCW. Punyajoy Saha, Kiran Garimella, Narla Komal Kalyan, Saurabh Kumar Pandey, Pauras Mangesh Meher, Binny Mathew, and Animesh Mukherjee. 2023. On the rise of fear speech in online social media. PNAS, 120(11):e2212270120. Punyajoy Saha, Binny Mathew, Kiran Garimella, and Animesh Mukherjee. 2021. “short is the road that leads from fear to hate”: Fear speech in indian whatsapp groups. In WebConf 2021, pages 1110–1121. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv:1910.01108. Morgan Klaus Scheuerman, Jialun Aaron Jiang, Casey Fiesler, and Jed R Brubaker. 2021. A framework of severity for harmful content online. CSCW. 
Timo Schick and Hinrich Schütze. 2021. Exploiting cloze-questions for few-shot text classification and natural language inference. In EACL 2021, pages 255–269. Ieva Stali¯unait˙e and Ignacio Iacobacci. 2020. Compositional and lexical semantics in roberta, bert and distilbert: A case study on coqa. In EMNLP 2020, pages 7046–7056. Fatemeh Tahmasbi, Leonard Schild, Chen Ling, Jeremy Blackburn, Gianluca Stringhini, Yang Zhang, and Savvas Zannettou. 2021. “go eat a bat, chang!”: On the emergence of sinophobic behavior on web communities in the face of covid-19. In WebConf 2021. Yla R Tausczik and James W Pennebaker. 2010. The psychological meaning of words: Liwc and computerized text analysis methods. Journal of language and social psychology, 29(1). Matthew L Williams, Pete Burnap, Amir Javed, Han Liu, and Sefa Ozalp. 2020. Hate in the machine: Anti-black and anti-muslim social media posts as predictors of offline racially and religiously aggravated crime. The British Journal of Criminology, 60(1):93– 117. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2020. Transformers: State-of-the-art natural language processing. In EMNLP 2020: Sys. Demos., pages 38–45. Jiawei Zhou, Yixuan Zhang, Qianni Luo, Andrea G Parker, and Munmun De Choudhury. 2023. Synthetic lies: Understanding ai-generated misinformation and evaluating algorithmic and human solutions. In CHI 2023, pages 1–20. A Appendix A.1 Insruction for Mixtral-Ins and Flan-T5-XL We use the format depicted in the subsequent paragraphs for specifying the instructions and providing few-shot examples to the large language models. Hateful speech detection: We use the definition used by United Nations to guide the LLMs to detect hatespeech. The definition is available here: https://www.un.org/en/ hate-speech/understanding-hate-speech/ what-is-hate-speech. Violence-provoking speech detection: We use the definition based on our developed codebook, which is inspired by the work of Benesch et al. (2021) and operationalized in the context of anti-Asian Twitter posts during COVID-19; see Table 5 for the detailed codebook. A.2 Role of modeling compositionality in language for violence-provoking speech detection We demonstrate using qualitative examples how BERT-based models struggle to detect violenceprovoking speech, possibly, due to their lacking abilities to model compositionality in language. Consider the following examples: “these fucking chinks brutally eat dogs and all nasty animals that’s why squint all the time” is labeled as 12682 violence-provoking because it portrays the community as cruel, repulsive, and uncivilized but is incorrectly predicted to be ‘not violence-provoking’. However, “i hate fucking chinks they are just disgusting and don’t see without squinting” is labeled as hateful but incorrectly predicted to be ‘violenceprovoking’. Similarly, “they steal our jobs, make the economy suffer, spread diseases, and we welcome them with open hands. these chinks need to pay” (violence-provoking) and “the virus has triggered an economic downturn, millions killed and so many jobs lost. we should have closed our borders to stop the chinks in time” (not violence-provoking) are similar at a superficial level but convey different meanings that materialize due to the compositionality in language – however, they both are predicted to be ‘violence-provoking’ by the classifier. 
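One way to probe the failure modes illustrated above is to score such minimal pairs directly with the fine-tuned classifier and compare the predicted probabilities; the sketch below assumes a locally saved RoBERTa checkpoint (the path, the paraphrased example pair, and the index of the violence-provoking class are placeholders) and is intended for qualitative error analysis only:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder path to a checkpoint fine-tuned for violence-provoking speech detection.
checkpoint = "checkpoints/roberta-large-violence-provoking"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint).eval()

# A minimal pair of the kind discussed above (paraphrased placeholders).
pair = ("a post annotated as violence-provoking",
        "a superficially similar post annotated as hateful only")

enc = tokenizer(list(pair), padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = model(**enc).logits.softmax(dim=-1)
# Column 1 is assumed to correspond to the violence-provoking class.
print(probs[:, 1].tolist())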
You are an AI assistant that helps identify hateful speech targeting Asian community members. Use the following guidelines to identify whether a given Twitter post is hateful. Hate speech is defined as attacks or use of pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, colour, descent, gender or other identity factor. Consider the following examples: Tweet: <example 1> Label: <label 1> ... Tweet: <example N> Label: <label N> Only respond using the following strings: ‘not hateful’ or ‘hateful’. Do NOT respond with anything else. Tweet: <current example to label> Label: You are an AI assistant that helps identify violence-provoking speech targeting Asian community members. Use the following guidelines to identify whether a given Twitter post is violenceprovoking. Violence-provoking speech is defined as community-targeting speech that could lead to violence against the community member(s). It could either be explicit or implicit. Implicit forms of such speech indirectly provokes violence using elements like dehumanization, guilt attribution, threat construction, virtuetalk, and constructing future-bias against the community member(s). Violence-provoking speech targets the member(s) of the general public and not political entities, countries, or inanimate objects. Consider the following examples: Tweet: <example 1> Label: <label 1> ... Tweet: <example N> Label: <label N> Only respond using the following strings: ‘not violence-provoking’ or ‘violenceprovoking’. Do NOT respond with anything else. Tweet: <current example to label> Label: 12683 Concept Sub-concept Identifiers Definition Community-targeting violence-provoking speech could lead to violence against the community Direct violenceprovoking Explicit mention of violence Directly mention attacking the members of the targeted community and causing them physical harm Indirect violenceprovoking Dehumanization Dehumanization is the description of other people (in this case, Asians) as something other than human or less than human. This can involve likening them to bacteria, insects, or other repulsive or unwanted creatures Guilt Attribution Victims are often deemed guilty as a group, deserving collective punishment for the specific crimes of some of their “members.” Threat construction ... asserts that the in-group faces serious and often mortal threats from the victims to-be, which makes violence seem defensive, and therefore proper and necessary Prediction of Violence Violence is presented as inevitable and necessary as a way to protect the in-group from harm or annihilation Virtuetalk The valorization of violence by associating it with a range of praiseworthy characteristics, and the parallel denigration of resistance or non-participation as indicating a lack of proper character traits, a deplorable “weakness,” or a range of other deficiencies. Future-bias The confident anticipation of future goods that will be accrued through violence, and which are so extensive and so enduring in a relatively certain future that they easily outweigh the moral costs of victims’ deaths in the here and now. 
Targets member(s) of the general public and not political entities, organizations, countries, or inanimate objects The speech targets one or many members of the public that belong to Asian communities and not political entities, political personalities, countries, organizations, or inanimate objects like (applications, products, food items, etc.). Speech is aimed to harm and does not (i) counter violenceprovoking speech, (ii) share lived or witnessed experiences, and (iii) comprise news articles or reports of attacks The speech is intended to direct harm toward the Asian communities and does not mention narratives that include violence-provoking speech as a means to bring attention to them. This would exclude sharing lived or witnessed experiences and new articles that report incidents Table 5: Codebook for anti-Asian violence-provoking speech with definitions of underlying concepts and identifiers. We consider aspects related to both implicit and explicit forms of violence-provoking speech against Asian community members. Set Keywords COVID-19 ‘coronavirus’, ‘covid 19’, ‘covid-19’, ‘covid19’, ‘corona virus’, ‘virus’ Anti-Asian ‘chink’, ‘chinky’, ‘chonky’, ‘churka’, ‘cina’, ‘cokin’, ‘coolie’, ‘dink’, ‘niakoue’, ‘pastel de flango’, ‘slant’, ‘slant eye’, ‘slopehead’, ‘slope head’, ‘ting tong’, ‘yokel’, ‘pasteldeflango’, ‘slanteye’, ‘slitty’, ‘squinty’, ‘kungflu’, ‘gooks’, ‘churka’, ‘wuflu’, ‘antichinazi’, ‘slanty’, ‘kungfuflu’, ‘squint’, ‘gook’, ‘slanted’, ‘niakouee’, ‘chinks’ Table 6: Final set of keywords used for Twitter data collection. The original keywords are shown in black, and the expanded ones are in blue. Set Keywords Initial ‘to eat’, ‘to wash’, ‘cockroaches’, ‘microbes’, ‘parasites’, ‘yellow ants’, ‘logs’, ‘enemy morale’, ‘devils’, ‘satan’, ‘demons’, ‘weak’, ‘fanatic’, ‘mercy’, ‘savage’ Expanded ‘cut’, ‘sterilize’, ‘exorcise’, ‘purify’, ‘rein’, ‘feeble’, ‘peel’, ‘undermine’, ‘chew’, ‘bigot’, ‘weaken’, ‘termites’, ‘soak’, ‘germs’, ‘use’, ‘rough’, ‘viruses’, ‘resist’, ‘satan’, ‘cockroaches’, ‘launder’, ‘mercy’, ‘eradicate’, ‘defeat’, ‘demons’, ‘fanatic’, ‘brutal’, ‘parasites’, ‘ticks’, ‘exterminate’, ‘squash’, ‘bugs’, ‘stumps’, ‘the prince of darkness’, ‘vicious’, ‘vanquish’, ‘evil spirits’, ‘fire ants’, ‘chop’, ‘yellow ants’, ‘hellions’, ‘tapeworms’, ‘ogres’, ‘fortify’, ‘frail’, ‘the serpent’, ‘intolerant’, ‘pity’, ‘puritan’, ‘roaches’, ‘spunk’, ‘savor’, ‘delicate’, ‘ferocious’, ‘esprit de corps’, ‘monomaniac’, ‘devils’, ‘guts’, ‘pests’, ‘clemency’, ‘destabilize’, ‘kill’, ‘lice’, ‘helminths’, ‘vulnerable’, ‘ants’, ‘feast’, ‘savage’, ‘expel’, ‘primitive’, ‘gobble’, ‘fleas’, ‘beetles’, ‘to eat’, ‘consume’, ‘vermin’, ‘nibble’, ‘beasts’, ‘charity’, ‘mycoplasmas’, ‘rid’, ‘dissolve’, ‘leeches’, ‘creatures’, ‘zealot’, ‘antennae’, ‘erode’, ‘enemy morale’, ‘nematodes’, ‘microbes’, ‘destroy’, ‘fierce’, ‘defenseless’, ‘uncivilized’, ‘fundamentalist’, ‘extremist’, ‘gorge’, ‘diminish’, ‘leniency’, ‘powerless’, ‘purge’, ‘yeast’, ‘untamed’, ‘to wash’, ‘monsters’, ‘eliminate’, ‘rinse’, ‘spiders’, ‘mites’, ‘fiends’, ‘incapacitated’, ‘infirm’, ‘dispose’, ‘protozoa’, ‘devour’, ‘bark’, ‘fungi’, ‘wild’, ‘beef’, ‘stiffen’, ‘weak’, ‘impotent’, ‘forgiveness’, ‘banish’, ‘fragile’, ‘insects’, ‘virions’, ‘timbers’, ‘lucifer’, ‘barbaric’, ‘outwit’, ‘lower’, ‘pathogens’, ‘remove’, ‘reduce’, ‘bacteria’, ‘grace’, ‘devil’, ‘evil’, ‘adversary‘, ‘demon’ Table 7: Initial and expanded keywords used to find Twitter posts with a high concentration of violence-provoking expressions. 12684
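For reference, the few-shot template shown above (Appendix A.1) can be instantiated programmatically. The sketch below fills the violence-provoking template with placeholder demonstrations; how demonstrations are sampled and how the model is called are assumptions, only the wording of the template itself is taken from the paper.

```python
# A minimal sketch of instantiating the Appendix A.1 few-shot template.
HEADER = (
    "You are an AI assistant that helps identify violence-provoking speech targeting "
    "Asian community members. Use the following guidelines to identify whether a given "
    "Twitter post is violence-provoking.\n"
)

def build_prompt(definition: str, shots: list[tuple[str, str]], tweet: str) -> str:
    demos = "\n".join(f"Tweet: {t}\nLabel: {l}" for t, l in shots)
    return (
        f"{HEADER}{definition}\nConsider the following examples:\n{demos}\n"
        "Only respond using the following strings: 'not violence-provoking' or "
        "'violence-provoking'. Do NOT respond with anything else.\n"
        f"Tweet: {tweet}\nLabel:"
    )

prompt = build_prompt(
    "<definition derived from the codebook in Table 5>",
    [("<example tweet 1>", "violence-provoking"),
     ("<example tweet 2>", "not violence-provoking")],
    "<tweet to classify>",
)
print(prompt)
```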
2024
684
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12685–12695 August 11-16, 2024 ©2024 Association for Computational Linguistics

Retaining Key Information under High Compression Ratios: Query-Guided Compressor for LLMs

Zhiwei Cao1∗, Qian Cao2∗, Yu Lu2, Ningxin Peng2, Luyang Huang2, Shanbo Cheng2† and Jinsong Su1†
1School of Informatics, Xiamen University  2ByteDance Research
[email protected] {caoqian.95, luyu.ly, chengshanbo}@bytedance.com [email protected]
∗These authors contributed equally. This work was done when Zhiwei Cao was interning at ByteDance. †Corresponding author. Our code is available at https://github.com/DeepLearnXMU/QGC.

Abstract

The growing popularity of Large Language Models (LLMs) has sparked interest in context compression for LLMs. However, the performance of previous methods degrades dramatically as compression ratios increase, sometimes even falling to the closed-book level. This decline can be attributed to the loss of key information during the compression process. Our preliminary study supports this hypothesis, emphasizing the significance of retaining key information to maintain model performance under high compression ratios. As a result, we introduce the Query-Guided Compressor (QGC), which leverages queries to guide the context compression process, effectively preserving key information within the compressed context. Additionally, we employ a dynamic compression strategy. We validate the effectiveness of our proposed QGC on the Question Answering task, including the NaturalQuestions, TriviaQA, and HotpotQA datasets. Experimental results show that QGC can consistently perform well even at high compression ratios, which also offers significant benefits in terms of inference cost and throughput.

1 Introduction

The emergence of ChatGPT (Ouyang et al., 2022) and GPT-4 (OpenAI, 2023), along with other Large Language Models (LLMs) (Touvron et al., 2023a,b), has sparked a global sensation. The success of LLMs is closely tied to their long-context capabilities (Dong et al., 2022; Lewis et al., 2020), especially in the field of multi-document question answering. However, the utilization of long context also introduces challenges such as higher inference cost, longer latency, and inferior performance caused by redundant information (Jiang et al., 2023). Many efforts have been made to compress the long context by directly removing a certain percentage of less important words, such as LongLLMLingua (Jiang et al., 2023) and Selective-Context (Li et al., 2023). Another common method is to generate a text summary of the given context (Xu et al., 2023; Wang et al., 2023b). Unlike deleting or reordering words in the context, AutoCompressor (Chevalier et al., 2023) compresses long documents into multiple vectors as soft prompts, which are optimized with the full parameters of LLMs. However, our preliminary study shows that these methods have a common flaw: as the compression ratio increases, the compressed context fails to retain key information, resulting in a significant decrease in the performance of LLMs.

The key to solving this problem is the query, which defines what the key information is. We aim to preserve this query-related key information even at a high compression ratio. Specifically, we propose the Query-Guided Compressor (QGC) to fully utilize query information throughout each compression step.
We first feed the query and the documents together into a context encoder to learn query-guided document representations. We then compress these document representations into n-gram representations guided by the importance of each word in relation to the query. Subsequently, we propose to augment the n-gram representations by reviewing the query and document, which are finally aligned to the embedding space of the LLM. We further propose dynamically adjusting the compression ratio of each document based on its relevance to the query. Compared to previous methods, QGC has several advantages: 1) high compression ratios by retaining most query-related information during compression, 2) low training costs by optimizing the compressor only instead of finetuning the entire LLM, and 3) better semantic consistency by compressing the n-gram structure rather than deleting words.

We validate the effectiveness of QGC on the multi-document Question Answering task, including three datasets: NaturalQuestions, TriviaQA, and HotpotQA. Experimental results on the QA task indicate that, compared to LongLLMLingua, QGC exhibits a 2.75 times higher compression ratio and a 2.42 times higher throughput. Additionally, its accuracy has improved by an average of 5 points. We further investigated the loss of key information throughout the compression process. The findings reveal that under high compression ratios and high noise conditions, QGC only incurs a performance loss of about 10%, while LongLLMLingua suffers a loss of approximately 47%. This validates the effectiveness of QGC in retaining key information.

2 Preliminary Study

In this section, we first briefly formulate long-context compression for the Question Answering task, and then present an analysis of the key information loss in previous compression methods.

2.1 Task Formulation

Given an LLM input with augmented context $x = (x^{ins}, x^{d_1}, \ldots, x^{d_k}, \ldots, x^{d_K}, x^{q})$, which consists of the instruction $x^{ins}$, $K$ documents $\{x^{d_k}\}_{k=1}^{K}$, and the query $x^{q}$, the objective of context compression can be formulated as:

$$\min_{\tilde{x}} \; d\big(\mathrm{LLM}(y \mid x),\, \mathrm{LLM}(\tilde{y} \mid \tilde{x})\big), \quad (1)$$

where $y$ is the ground-truth answer and $\tilde{y}$ represents the output of the LLM with the compressed context $\tilde{x}$ as the input. $d(\cdot, \cdot)$ is a function measuring the distance between two distributions, such as the KL divergence. In this work, we focus on compressing the $K$ retrieved documents, which largely determine the length of the input.

2.2 Key Information Loss in Compression

We study the effectiveness of two representative methods, LongLLMLingua (Jiang et al., 2023) and AutoCompressor (Chevalier et al., 2023). We conduct experiments on the NaturalQuestions dataset (Liu et al., 2023) and use accuracy as the evaluation metric, which judges whether any correct answer appears in the LLM prediction.

[Figure 1: two panels, (a) Compression Ratio for LongLLMLingua and (b) Document Number for AutoCompressor; y-axis: Accuracy (%).] Figure 1: The accuracy of LongLLMLingua (Jiang et al., 2023) and AutoCompressor (Chevalier et al., 2023) with different compression ratios and numbers of documents on the NaturalQuestions test set, respectively. Closed-book denotes providing LLMs with the question only, and Oracle means using the question and the corresponding ground-truth documents as the input of the LLM. “w/ answer” means adding the golden answer to the compressed context.
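To make the objective in Eq. (1) concrete, the sketch below instantiates the distance d(·,·) as a token-level KL divergence between the target LLM's distributions over the gold answer under the original and the compressed context. The model name, the prompt strings, and the averaging choice are placeholders and assumptions, not the paper's exact setup.

```python
# A minimal sketch of d(LLM(y|x), LLM(y~|x~)) from Eq. (1) as a token-level KL divergence.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "meta-llama/Llama-2-7b-hf"  # assumption: any causal LM works for the sketch
tok = AutoTokenizer.from_pretrained(model_name)
lm = AutoModelForCausalLM.from_pretrained(model_name).eval()

def answer_logits(prompt: str, answer: str) -> torch.Tensor:
    """Logits at the positions that predict each answer token, shape [n_answer_tokens, vocab]."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    answer_ids = tok(answer, add_special_tokens=False, return_tensors="pt").input_ids
    ids = torch.cat([prompt_ids, answer_ids], dim=1)
    with torch.no_grad():
        logits = lm(ids).logits
    n = answer_ids.shape[1]
    return logits[0, -n - 1 : -1]

x = "Documents: <full retrieved documents>\nQuestion: Who got the first Nobel Prize in Physics?\nAnswer:"
x_tilde = "Documents: <compressed context>\nQuestion: Who got the first Nobel Prize in Physics?\nAnswer:"
y = " Wilhelm Conrad Röntgen"

p = F.log_softmax(answer_logits(x, y), dim=-1)        # LLM(y | x)
q = F.log_softmax(answer_logits(x_tilde, y), dim=-1)  # LLM(y | x~)
kl = F.kl_div(q, p, log_target=True, reduction="batchmean")  # KL(p || q), averaged over answer tokens
print(f"distance between original and compressed conditioning: {kl.item():.4f}")
```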
For LongLLMLingua, we apply LLaMA-2-7B-Chat (https://ai.meta.com/llama/) as the small language model for compression, and use LongChat-13B-16K (https://huggingface.co/lmsys/longchat-13b-16k) as the target LLM. We use the open-source AutoCompressor (https://github.com/princeton-nlp/AutoCompressors), which fine-tunes LLaMA-2-7B to compress context and generate answers. Here, we consider four settings:

• Closed-book. It takes the query as the LLM input with no additional documents.
• Oracle. The query and only the document containing the ground truth are used as inputs to the LLM.
• Base. Based on Oracle, we compress the document directly with various compression ratios for LongLLMLingua. However, since AutoCompressor is set to compress documents into fixed-length vectors, we change the compression ratio by adding external documents.
• Base w/ answer. We manually add key information to the compressed results by concatenating the answer with the compressed word sequence in LongLLMLingua. Note that this setting is impractical for AutoCompressor, where the compressed results are vectors that cannot be changed directly.

From Figure 1, we find that the performance of both methods degrades significantly with increasing compression ratios. As shown in Figure 1(a), the performance of LongLLMLingua decreases by 47% as the compression ratio increases from 1.53x to 3.44x. Even worse, the accuracy of LongLLMLingua at a 3.44x compression ratio is equivalent to that of the closed-book setting. The same findings are illustrated in Figure 1(b) for AutoCompressor. More importantly, we observe that adding key information to the compressed result can greatly alleviate the performance degradation that typically occurs at high compression ratios. Returning to Figure 1(a), the accuracy line fluctuates little as the compression ratio increases from 1.5x to 3.5x with the help of additional key information, which is a decrease of 3.87% compared to the former 47% with the loss of key information. These observations validate the need to preserve key information during compression, which motivates us to explore a better method to fully exploit query information for context compression.

3 Query-Guided Compression

As shown in Figure 2, we equip the LLM with the Query-Guided Compressor to compress long documents into a much shorter sequence of continuous representations, which are then concatenated with the corresponding instruction and query as the input for the LLM. In the following, we first introduce the architecture of the Query-Guided Compressor and then its training objective. Then, we propose a dynamic compression strategy that assigns higher compression ratios to irrelevant documents to further improve the compressed representations.

3.1 Compressor Architecture

Figure 3 illustrates the basic architecture of our Query-Guided Compressor. Using the compressor, we adopt the following steps to produce compressed representations of each document: 1) learning the query-aware document representations; 2) compressing the document representations into n-gram representations by weighted pooling; 3) augmenting the n-gram representations by reviewing the query and the entire document; 4) aligning the obtained representations into the embedding space of the LLM.

[Figure 2: the original input (instruction, Documents 1–N totalling roughly 3K tokens, and the query “Who got the first Nobel Prize in Physics?”) is replaced by the instruction, the compressed document representations (representation length ≈ 200 tokens), and the query, from which the LLM produces the answer “Wilhelm Conrad Röntgen”.] Figure 2: The framework of our method.
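As a rough illustration of the interface sketched in Figure 2, the compressed document representations are continuous vectors that can be spliced between the embedded instruction and the embedded query before being fed to the frozen target LLM. The model name is an assumption, and the random tensor stands in for the compressor output described in Section 3.1.

```python
# A minimal sketch of feeding compressed document representations to the target LLM.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "lmsys/longchat-13b-16k"  # assumption
tok = AutoTokenizer.from_pretrained(model_name)
llm = AutoModelForCausalLM.from_pretrained(model_name).eval()
embed = llm.get_input_embeddings()

def build_inputs(instruction: str, compressed_docs: torch.Tensor, query: str) -> torch.Tensor:
    """compressed_docs: [num_compressed_vectors, hidden_size] produced by the compressor."""
    ins_emb = embed(tok(instruction, return_tensors="pt").input_ids)[0]
    qry_emb = embed(tok(query, add_special_tokens=False, return_tensors="pt").input_ids)[0]
    return torch.cat([ins_emb, compressed_docs.to(ins_emb.dtype), qry_emb], dim=0).unsqueeze(0)

# ~200 compressed vectors standing in for ~3K document tokens
fake_compressed = torch.randn(200, llm.config.hidden_size)
inputs_embeds = build_inputs(
    "Write a high-quality answer for the given question using only the provided search results.",
    fake_compressed,
    "\nQuery: Who got the first Nobel Prize in Physics?\nAnswer:",
)
with torch.no_grad():
    out = llm.generate(inputs_embeds=inputs_embeds, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```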
Particularly, these four steps correspond exactly to the four key components of our compressor, which are all boxed in Figure 3. Note that we perform the above operations on each document, thus omitting the document index k for simplicity.

Query-Guided Context Encoder. In the first step, we feed the concatenation of the query $x^q$ and the document $x^d$ into the query-aware context encoder to learn the representations of the query and the document. The encoder consists of two Transformer encoder layers. Formally, these representations can be obtained in the following way:

$$[h^q; h^d] = \mathrm{ContextEncoder}([x^q; x^d]). \quad (2)$$

Here, $h^q = \{h^q_i\}_{i=1}^{N_q}$ and $h^d = \{h^d_i\}_{i=1}^{N_d}$ are the corresponding representation sequences of the query and the document, with lengths $N_q$ and $N_d$, respectively. By allowing the query and the document to see each other during encoding, we can facilitate the extraction of the key information relevant to the query in the document.

Query-Guided Pooling Layer. In the next step, we split the entire document into several n-grams and compress the information of each n-gram into a vector based on its correlation to the query. To this end, document representations are organized as follows:

$$h^d = [h^d_{G_1}, \ldots, h^d_{G_j}, \ldots, h^d_{G_{N_g}}] = [h^d_{1:n}, \ldots, h^d_{(j-1)\times n : j\times n}, \ldots, h^d_{N_d - n + 1 : N_d}], \quad (3)$$

where $G_j$ represents the indices of the $j$-th n-gram and $N_g = \frac{N_d}{n}$ is the number of n-grams. Then, we measure the weight of each token in $G_j$ by calculating its relevance with the mean representation $\bar{h}^q$ of the query tokens:

$$\bar{h}^q = \frac{1}{N_q} \sum_i h^q_i, \quad (4)$$

$$w_{i,G_j} = \frac{\exp s(\bar{h}^q, h^d_i)}{\sum_{i' \in G_j} \exp s(\bar{h}^q, h^d_{i'})}, \quad (5)$$

where $s(\cdot, \cdot)$ is the dot-product function, and $w_{i,G_j}$ represents the weight of the $i$-th token representation $h^d_i$ in the document, which belongs to the $j$-th n-gram. Finally, we acquire the compressed n-gram representations $\hat{h}^d_{G_j}$ as the weighted sum of token representations in the n-gram:

$$\hat{h}^d_{G_j} = \sum_{i \in G_j} w_{i,G_j} \cdot h^d_i. \quad (6)$$

Query-Document Reviewing Layer. To further prevent the loss of key information in compression, we introduce a novel reviewing module to refine the compressed n-gram representations by revisiting both the query and the document representations. Concretely, this module consists of two Transformer encoder layers; it takes the query representations $h^q$, the document representations $h^d$, and the compressed n-gram representations $\hat{h}^d$ as inputs, and outputs the improved document n-gram representations $\tilde{h}^d$:

$$\tilde{h}^d = \mathrm{ReviewingLayer}([h^q; h^d; \hat{h}^d]). \quad (7)$$

Semantic Alignment Layer. Since $\tilde{h}^d$ lies in a different embedding space from the inputs of the LLM, we use a fully-connected semantic alignment layer to map the n-gram representations into the embedding space of the LLM. The aligned n-gram representations $e^d$ can be formulated as follows:

$$e^d = W \cdot \tilde{h}^d + b, \quad (8)$$

where $W$ and $b$ are learnable parameters.
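A minimal, self-contained sketch of the four components above (Eqs. 2–8) is given below. The hidden sizes, head counts, the use of nn.TransformerEncoder, and the choice of keeping the last Ng output slots of the reviewing layer are illustrative assumptions; the paper initializes these layers from LLaMA-2-7B rather than training them from scratch.

```python
# A toy sketch of the QGC compressor architecture (Eqs. 2-8), not the released implementation.
import torch
import torch.nn as nn

class QueryGuidedCompressor(nn.Module):
    def __init__(self, d_model: int, llm_dim: int, nhead: int = 4, n: int = 4):
        super().__init__()
        make_encoder = lambda: nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True), num_layers=2)
        self.context_encoder = make_encoder()        # query-guided context encoder (Eq. 2)
        self.reviewing_layer = make_encoder()        # query-document reviewing layer (Eq. 7)
        self.align = nn.Linear(d_model, llm_dim)     # semantic alignment layer (Eq. 8)
        self.n = n

    def forward(self, x_q: torch.Tensor, x_d: torch.Tensor) -> torch.Tensor:
        """x_q: [1, Nq, d], x_d: [1, Nd, d] with Nd divisible by n -> [1, Nd // n, llm_dim]."""
        joint = self.context_encoder(torch.cat([x_q, x_d], dim=1))            # Eq. (2)
        h_q, h_d = joint[:, : x_q.size(1)], joint[:, x_q.size(1):]
        q_bar = h_q.mean(dim=1)                                               # Eq. (4)
        grams = h_d.view(1, -1, self.n, h_d.size(-1))                         # Eq. (3): n-grams
        w = torch.softmax(torch.einsum("bgnd,bd->bgn", grams, q_bar), dim=-1) # Eq. (5)
        h_hat = torch.einsum("bgn,bgnd->bgd", w, grams)                       # Eq. (6)
        reviewed = self.reviewing_layer(torch.cat([h_q, h_d, h_hat], dim=1))  # Eq. (7)
        h_tilde = reviewed[:, -h_hat.size(1):]   # assumption: keep the n-gram slots
        return self.align(h_tilde)                                            # Eq. (8)

compressor = QueryGuidedCompressor(d_model=64, llm_dim=128, nhead=4, n=4)
e_d = compressor(torch.randn(1, 8, 64), torch.randn(1, 120, 64))
print(e_d.shape)  # torch.Size([1, 30, 128])
```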
" ℎ$#" " ℎ$## " "% " Compressed Document Representations &" ℎ'! ℎ'! Figure 3: The structure of QGC. The first three layers use query q to guide document d encoding, pooling, and reviewing respectively. The last layer aligns document representations into the target LLM embedding space. 3.2 Compressor Training Unlike AutoCompressor (Chevalier et al., 2023), we fix the parameter of the LLM and only fine-tune the compressor. Through the above steps, each long document is compressed into a shorter sequence of continuous representations ed. Thus, the inputs of the LLM are finally formated as ex = (xins, ed1, ..., edk, ..., edK, xq). To avoid missing the key information during compression, we define the training objective of the compressor in the following way: L = LCE + LKL (9) = −log p(y|ex) + KL[p(y|x)||p(y|ex)], where KL[·||·] represents the Kullback–Leibler divergence. By introducing the KL loss, we encourage the LLM to generate the correct answer even with compressed representations as input. 3.3 Dynamically Compressing Strategy Due to the different importance of retrieved documents, we propose to dynamically adjust the compression ratios for different retrieved documents. Specifically, we assign the n-gram size nk for the k-th document based on the importance ranking: nk = ( min(2 · Ok, 16) Sk ≥ϵ ∞ Sk < ϵ , (10) where Sk and Ok is the score and rank of the k-th document acquired by the existing reranker, such as Contriever (Izacard et al., 2022a). ϵ is the score threshold for filtering low-score documents. Note that when the assigned n-gram size nk is set to ∞, the corresponding document will be discarded. 12688 Methods NaturalQuestions TriviaQA HotpotQA Acc CR TP EM CR TP F1 CR TP LongChat-13B Closed-book 34.84 36.07 22.19 Oracle 83.05 59.2x 60.61 42.2x Original Prompt 53.11 1.0x 48.70 1.0x 44.76 1.0x Reranker-based Methods Sentence-BERT (Reimers and Gurevych, 2020) 60.75 4.1x 0.137 48.89 4.5x 1.957 42.92 4.4x 1.930 BGE-Reranker (Xiao et al., 2023) 64.33 4.1x 0.138 47.71 4.5x 1.724 47.96 4.4x 1.689 Cond.PPL (Jiang et al., 2023) 65.91 4.1x 0.128 52.48 4.5x 1.287 49.82 4.3x 1.267 Compression-based Methods Selective-Context (Li et al., 2023) 35.44 2.5x 0.077 42.73 2.5x 0.465 29.68 2.6x 0.456 LongLLMLingua (Jiang et al., 2023)† 66.70 3.9x LongLLMLingua (Jiang et al., 2023) 67.01 4.1x 0.118 51.51 3.7x 0.724 45.43 3.8x 0.683 QGC 69.19 15.2x 0.356 57.72 7.9x 1.832 52.12 8.8x 1.849 LLaMA-2-7B Closed-book 32.35 30.70 10.54 Oracle 73.45 59.2x 57.68 42.2x Original Prompt 27.53 1.0x 49.47 1.0x 44.24 1.0x Reranker-based Methods Sentence-BERT (Reimers and Gurevych, 2020) 24.26 4.1x 0.133 49.49 4.5x 0.731 40.65 4.4x 0.752 BGE-Reranker (Xiao et al., 2023) 25.08 4.1x 0.130 48.69 4.5x 0.683 46.13 4.4x 0.724 Cond.PPL (Jiang et al., 2023) 27.87 4.1x 0.123 52.76 4.5x 0.602 47.84 4.3x 0.623 Compression-based Methods Selective-Context (Li et al., 2023) 31.79 2.6x 0.082 48.55 2.5x 0.303 28.21 2.6x 0.332 LongLLMLingua (Jiang et al., 2023) 41.13 4.1x 0.108 50.44 3.7x 0.432 39.87 3.8x 0.438 AutoCompressor (Chevalier et al., 2023) 49.23 13.9x 0.302 29.17 8.7x 0.823 29.02 8.1x 0.833 ICAE (Ge et al., 2023) 53.34 21.5x 48.91 10.2x 34.50 9.5x QGC 60.90 15.2x 0.313 57.46 7.9x 0.902 51.64 8.8x 0.927 QGC(ϵ = 0.42) 57.62 20.6x 57.11 10.9x 51.23 12.1x Table 1: Experimental results on three benchmark datasets. Acc = accuracy, EM = exact match, F1 = F1 score, CR = compression ratio, TP = throughput (examples/second). 
Closed-book, Oracle, and Original Prompt denote using the query only, the complete ground-truth documents, and all retrieved documents as inputs, respectively. † indicates that the results are directly cited from Jiang et al. (2023). 4 Experiments In this section, we conduct extensive experiments to investigate the effectiveness of QGC. Datasets & Evaluation Metric The experiments are carried out based on the three datasets: • NaturalQuestions We select the processed version (Liu et al., 2023) where each question has 20 related documents and only one of them contains the correct answer. We follow Liu et al. (2023) to use accuracy (Acc) as the evaluation metric, which judges whether the correct answer appears in the prediction. • TriviaQA We employ the adversarial Contriever (Izacard et al., 2022a) to retrieve the top 10 documents from all Wikipedia passages. Following Lewis et al. (2020), we use the Exact Match (EM) metric to evaluate the LLM prediction. • HotpotQA Different from the above two datasets, HotpotQA (Yang et al.) is a multihop dataset where the answer lies in more than one document. Specifically, each question has 10 related documents and two of them are ground-truth documents. Following Yang et al., we use the F1 score to measure the correctness of the LLM. Besides, we calculate the compression ratio (CR) 12689 for different methods, which is defined as the length rate of the original context to the compressed context. We also provide the inference throughput (TP) on a single A100-80G GPU, including compression and generation. Baselines Following (Jiang et al., 2023), we include two sets of methods as our baselines. 1) Reranker-based Methods. It simply uses a reranker method to sort documents based on importance and discards unimportant ones. We select the following reranker: Sentence-BERT (Reimers and Gurevych, 2020), BGE-Reranker (Xiao et al., 2023), and Cond.PPL proposed by Jiang et al. (2023) to measure the association between the query and documents. Then, we discard documents with low association until the compression ratio is met and sort the remaining documents according to the association from high to low. 2) Compression-based Methods. Compared with reranker-based methods, they further compress the sorted documents, retaining more information while satisfying a higher compression ratio. We select the following methods as our baselines: • Selective-Context (Li et al., 2023) It uses selfinformation estimated by an external language model to prune redundant words. • LongLLMLingua (Jiang et al., 2023) It is the state-of-the-art method for long context compression. It first uses a language model to quantify the importance of each document as its question-aware perplexity, and then designs a question-aware coarse-to-fine compression method to delete unimportant tokens. • AutoCompressor (Chevalier et al., 2023) It fine-tunes LLaMA-2-7B to recursively compress long context into summary vectors, which are used as soft prompts to generate the answer. We use the released AutoCompressorLlama-2-7B-6K for experiments. • ICAE (Ge et al., 2023) Similar to AutoCompressor, it generates compact and informative memory slots to represent the original context. We use the released ICAE model pre-trained on Llama-2-7B-Chat for experiments 5. 
5 https://github.com/getao/icae

Methods | Accuracy
QGC | 69.19
w/o query-guided context encoder | 50.36
w/o query-guided pooling layer | 55.34
w/o query-document reviewing layer | 64.14
w/o dynamically compressing strategy | 62.15
Table 2: The accuracy of the ablation study on the NaturalQuestions test set, where the target LLM is LongChat-13B.

Implementation Details We use LongChat-13B-16K and LLaMA-2-7B as the LLMs for evaluation, which are frozen during the optimization of QGC. To ensure stable and reproducible results, we employ greedy decoding and set the temperature to 0 in all experiments. Following Jiang et al. (2023), we use LLaMA-2-7B-Chat as the external language model for Selective-Context and LongLLMLingua. For QGC, both the query-guided context encoder and the query-document reviewing layer consist of two Transformer encoder layers. All these layers and word embeddings are initialized with LLaMA-2-7B, where the MLP parameters are all fixed during training. Our rationale behind this approach stems from our belief that the MLP plays a crucial role in knowledge retention, while our focus lies in adjusting the acquired knowledge based on the query. Thus, the trainable parameters in QGC are only 3.5% of LongChat-13B-16K. Besides the ground-truth document, we concatenate 1-4 random documents to build the long context. We also randomly set the n-gram size from the candidate list (4, 6, 8, 10) for each training batch to make the compressor more robust. We train QGC on downstream datasets for 15 epochs, using a learning rate of 5e-5 with the Adam optimizer and a batch size of 64. During inference, we use the Cond.PPL proposed by Jiang et al. (2023) to sort retrieved documents for all compression-based methods and QGC, and set ϵ to 0.35. Following Liu et al. (2023) and Bai et al. (2023), the maximum number of generated tokens is 100 for NaturalQuestions, and 32 for both TriviaQA and HotpotQA. All experiments are conducted on 8 NVIDIA A100 GPUs.

Main Results Table 1 reports the performance, compression ratios, and throughput of various methods or models on different datasets. Overall, QGC achieves higher compression ratios and greater throughput while achieving comparable or even better performance than LongLLMLingua. These results demonstrate that QGC can effectively compress context into shorter inputs. Specifically, the performance and compression ratio of the reranker-based methods are limited because no compression operation is used within the document.

[Figure 4: two panels, (a) Compression Ratio for QGC (x-axis: compression ratio 1.0–4.0; curves: LongLLMLingua, LongLLMLingua w/ answer, QGC) and (b) Document Number for QGC (1–5 docs; curves: Closed-book, Oracle, QGC); y-axis: Accuracy (%).] Figure 4: The accuracy of QGC with varying compression ratios and numbers of documents, respectively.

Compared to AutoCompressor and ICAE, our method achieves better accuracy with comparable compression ratios. Compared with LongLLMLingua, QGC achieves average +5.03 and +12.87 performance improvements when using LongChat-13B and LLaMA-2-7B as the target LLMs. On average, the compression ratio and throughput of QGC are 2.75 times and 2.47 times that of LongLLMLingua on all datasets and target LLMs, respectively.

Ablation Study To explore the effect of different components on QGC, we use LongChat-13B as the target LLM and introduce the following variants of QGC for the ablation study: 1) w/o query-guided context encoder. In this variant, the query and document are independently encoded; 2) w/o query-guided pooling layer.
When establishing this variant, we directly replace the weighted sum of token representations in each n-gram with their mean representation; 3) w/o query-document reviewing layer. This variant no longer refines the compressed representations of n-grams; 4) w/o dynamically compressing strategy. We fix the n-gram size to 4 for a comparable comparison.

As shown in Table 2, the absence of the query-document reviewing layer and the dynamically compressing strategy leads to 5.05 and 7.04 accuracy losses, respectively. A more substantial loss is observed after removing the query-guided context encoder and the query-guided pooling layer, resulting in significant accuracy drops of 18.83 and 13.85, respectively, highlighting the importance of employing the query to guide compression.

Methods | SST-2: Acc, CR | GSM8K: Acc, CR
Original Prompt | 92.4, 1.0x | 14.48, 1.0x
LongLLMLingua | – | 5.91, 3.9x
AutoCompressor | 94.2, 15.0x | 6.68, 13.6x
QGC | 94.8, 23.3x | 14.18, 13.4x
Table 3: Experimental results on SST-2 and GSM8K datasets, where the target LLM is LLaMA-2-7B.

5 Analysis

In this section, we conduct in-depth analyses to explore the performance of QGC in terms of key information loss, demonstration compression, detailed throughput, and reranker impact. All analyses are conducted on NaturalQuestions with LongChat-13B as the target LLM.

Key Information Loss in QGC As described in Section 2.2, previous methods dramatically lose key information as the compression ratio increases. For comparison, we experiment with QGC using the same setting. Compared to LongLLMLingua in Figure 4(a), the performance of QGC only decreases by 10% as the compression ratio increases from 1x to 4x, and is even comparable to that of LongLLMLingua containing the correct answer in the compressed result. As seen in Figure 4(b), we observe that the performance of QGC slightly degrades with more documents, which is only a 12% decrease with 4 documents (27% for AutoCompressor). These results demonstrate that QGC can effectively retain key information even in much longer context and higher compression ratio scenarios.

Demonstration Compression for In-Context Learning To further validate the effectiveness of QGC in a broader context, we conduct experiments on both the SST-2 and GSM8K datasets. We adopt the approach of previous studies (Chevalier et al., 2023; Wei et al., 2022), which utilize demonstrations as the documents, while maintaining consistency with their experimental setup. The results in Table 3 reveal notable insights. On the SST-2 dataset, our method surpasses AutoCompressor in both compression ratio and accuracy. Meanwhile, on the GSM8K dataset, our accuracy remains on par with the original prompt at the same compression ratio as AutoCompressor. This suggests that QGC strikes an excellent balance between model performance and compression ratio. These results showcase QGC’s proficiency in preserving information from demonstrations and fostering the in-context learning capacity of the target LLM.

[Figure 5: bar chart of compression and generation throughput (examples/s) for QGC (CR=20.6x, Acc=68.02), QGC (CR=15.2x, Acc=69.19), LongLLMLingua (CR=13.9x, Acc=52.89), and LongLLMLingua (CR=4.1x, Acc=67.01).] Figure 5: The accuracy, compression throughput, and generation throughput of QGC and LongLLMLingua.

Detailed Throughput Evaluation To evaluate the throughput of various methods or models, encompassing both compression and generation, we perform testing on a single A100-80G GPU.
The results presented in Figure 5 indicate that QGC is clearly higher than LongLLMLingua in both compression throughput and generation throughput. Moreover, by adjusting the hyperparameter ϵ (see Equation 10) to increase the compression ratio, QGC can achieve a higher compression ratio while minimizing the impact on LLM performance and further improving throughput. Furthermore, our higher compression ratios lead to shorter LLM inputs, which also significantly improves the generation throughput of the target LLM. As for LongLLMLingua, since it additionally introduces LLaMA-2-7B for compression, its compression throughput is significantly lower than ours. Besides, although LongLLMLingua can also improve the compression ratio by adjusting hyper-parameters, its performance will significantly drop, while QGC still maintains excellent performance.

Impact of Different Rerankers The compression ratio for each document is determined by its correlation with the query obtained by a reranker. Here, we analyze the impact of using different rerankers in this process. In addition to the three methods introduced in the reranker-based methods, we also include BM25 and Gzip (Jiang et al., b) for comparison. Experimental results are shown in Figure 6. It can be found that QGC performs better with more competitive rerankers. Besides, compared with directly using rerankers for compression, QGC not only achieves an average 2.65 times higher compression ratio but also maintains lossless or even improved performance.

[Figure 6: accuracy (%) of Base (avg. CR = 4.1x) and QGC (avg. CR = 15x) when using BM25, Gzip, SBERT, BGE-Reranker, and Cond.PPL as rerankers.] Figure 6: The performance of QGC using different rerankers. “Base” represents the performance of each reranker to be used for compression. The performance (Recall) of rerankers: Cond.PPL > BGE-Reranker > SBERT (Sentence-BERT) > Gzip > BM25.

6 Related Work

Long Context for LLMs Recently, there have been a lot of studies focusing on expanding the context length of LLMs (Press et al., 2021; Peng et al., 2023; Bertsch et al., 2023). Existing efforts primarily involve gradually increasing the window size during pre-training (Nijkamp et al., 2023), interpolating position embeddings (Chen et al., 2023), and modifying the attention mechanism (Ding et al., 2023). Unlike these works, we do not directly aim to expand the context window of LLMs. Hence, the QGC we propose can complement these techniques by enabling LLMs to access a broader context with reduced cost and shorter latency.

Retrieval-augmented LMs Combining LMs with a standalone retriever is gaining popularity for benefiting various knowledge-intensive tasks. Previous studies have achieved remarkable results in improving perplexity (Wang et al., 2023a), factual accuracy (Nakano et al., 2022), downstream task performance (Izacard et al., 2022b), and in-context learning (Huang et al., 2023). Besides, many works focus on coordinating LLMs and retrieved documents, such as reranking retrieved documents (Mao et al.) and discarding irrelevant documents (Mallen et al.). QGC is also a retrieval augmentation method for LLMs, which efficiently compresses the retrieved documents into shorter inputs while maintaining no significant performance degradation.

Context Compression With the growing context length in LLMs, the demand for higher efficiency, lower cost, and reduced latency has attracted much attention.
As a promising solution, compression techniques can be broadly categorized into two types: black-box compression (Xu et al., 2023) and white-box compression (Wang et al., 2023b). Black-box compression primarily involves token pruning based on different importance measures, such as self-information (Li et al., 2023) and the LLM perplexity (Jiang et al., a, 2023). On the other hand, white-box compression focuses on generating summarization or compressing the context into soft prompt through fine-tuning or Low-Rank Adaptation (LoRA). For instance, Wang et al. (2023b) autoregressively generates filtered content and fine-tunes target LLM to use it for generation. Mu et al. (2023) trains LLMs to compress instructions into concise key-value attention prefixes. Chevalier et al. (2023) recursively compresses lengthy text into summary vectors, while Ge et al. (2023) generates memory slots to represent the original context. Compared with the above-mentioned compression studies, QGC’s design fully takes into account the query, which leads to the enhanced retention of key information, higher compression ratios, higher throughput, and improved overall performance. 7 Conclusion and Future Work In this paper, we have presented a query-guided compressor QGC for LLMs to solve the loss of key information under high compression ratios. It consists of four essential components: query-guided context encoder, query-guided pooling layer, querydocument reviewing layer, and semantic alignment layer. In addition, we also propose a dynamically compressing strategy during inference. Extensive experiments on multi-document QA tasks demonstrate that QGC outperforms previous state-of-theart compression methods in both accuracy and compression ratios. Analyses reveal that this is primarily due to our retention of key information throughout the compression process. In the future, we aim to validate our approach on more advanced LLMs, while also expanding its application to additional tasks like document summarization. Besides, we will try to further improve our approach by combining previous studies (Zhang et al., a; Hu et al., 2022; Zhang et al., 2022, b). Limitations QGC is a white-box compressor that necessitates access to the internal parameters of LLMs, which restricts its applicability. Furthermore, we have solely validated the effectiveness of QGC on QA and ICL task, and its performance on other tasks that differ significantly from QA task, such as summarization, remains to be verified. Acknowledgements The project was supported by National Key R&D Program of China (No. 2022ZD0160501), National Natural Science Foundation of China (No. 62276219), and the Public Technology Service Platform Project of Xiamen (No. 3502Z20231043). We also thank the reviewers for their insightful comments. References Yushi Bai, Xin Lv, Jiajie Zhang, Hongchang Lyu, Jiankai Tang, Zhidian Huang, Zhengxiao Du, Xiao Liu, Aohan Zeng, Lei Hou, Yuxiao Dong, Jie Tang, and Juanzi Li. 2023. Longbench: A bilingual, multitask benchmark for long context understanding. arXiv preprint arXiv:2308.14508. Amanda Bertsch, Uri Alon, Graham Neubig, and Matthew R Gormley. 2023. Unlimiformer: Longrange transformers with unlimited length input. arXiv preprint arXiv:2305.01625. Shouyuan Chen, Sherman Wong, Liangjian Chen, and Yuandong Tian. 2023. Extending context window of large language models via positional interpolation. arXiv preprint arXiv:2306.15595. Alexis Chevalier, Alexander Wettig, Anirudh Ajith, and Danqi Chen. 2023. 
Adapting language models to compress contexts. arXiv preprint arXiv:2305.14788. Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, and Furu Wei. 2023. Longnet: Scaling transformers to 1,000,000,000 tokens. arXiv preprint arXiv:2307.02486. Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, and Zhifang Sui. 2022. A survey for in-context learning. arXiv preprint arXiv:2301.00234. Tao Ge, Jing Hu, Xun Wang, Si-Qing Chen, and Furu Wei. 2023. In-context autoencoder for context compression in a large language model. arXiv preprint arXiv:2307.06945. 12693 Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations. Jie Huang, Wei Ping, Peng Xu, Mohammad Shoeybi, Kevin Chen-Chuan Chang, and Bryan Catanzaro. 2023. Raven: In-context learning with retrieval augmented encoder-decoder language models. Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2022a. Unsupervised dense information retrieval with contrastive learning. Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane DwivediYu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022b. Atlas: Few-shot learning with retrieval augmented language models. Huiqiang Jiang, Qianhui Wu, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. a. LLMLingua: Compressing prompts for accelerated inference of large language models. In EMNLP 2023. Huiqiang Jiang, Qianhui Wu, Xufang Luo, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, and Lili Qiu. 2023. Longllmlingua: Accelerating and enhancing llms in long context scenarios via prompt compression. arXiv preprint arXiv:2310.06839. Zhiying Jiang, Matthew Yang, Mikhail Tsirlin, Raphael Tang, Yiqin Dai, and Jimmy Lin. b. “low-resource” text classification: A parameter-free classification method with compressors. In Findings of ACL 2023. Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. NeurIPS 2020. Yucheng Li, Bo Dong, Frank Guerin, and Chenghua Lin. 2023. Compressing context to enhance inference efficiency of large language models. In EMNLP 2023. Nelson F Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2023. Lost in the middle: How language models use long contexts. arXiv preprint arXiv:2307.03172. Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Daniel Khashabi, and Hannaneh Hajishirzi. When not to trust language models: Investigating effectiveness of parametric and non-parametric memories. In ACL 2023. Yuning Mao, Pengcheng He, Xiaodong Liu, Yelong Shen, Jianfeng Gao, Jiawei Han, and Weizhu Chen. Reader-guided passage reranking for open-domain question answering. In Findings of ACL 2021. Jesse Mu, Xiang Lisa Li, and Noah Goodman. 2023. Learning to compress prompts with gist tokens. arXiv preprint arXiv:2304.08467. Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Xu Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight, Benjamin Chess, and John Schulman. 2022. 
Webgpt: Browserassisted question-answering with human feedback. Erik Nijkamp, Tian Xie, Hiroaki Hayashi, Bo Pang, Congying Xia, Chen Xing, Jesse Vig, Semih Yavuz, Philippe Laban, Ben Krause, et al. 2023. Xgen-7b technical report. arXiv preprint arXiv:2309.03450. OpenAI OpenAI. 2023. Gpt-4 technical report. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. NeurIPS 2020. Bowen Peng, Jeffrey Quesnelle, Honglu Fan, and Enrico Shippole. 2023. Yarn: Efficient context window extension of large language models. arXiv preprint arXiv:2309.00071. Ofir Press, Noah A Smith, and Mike Lewis. 2021. Train short, test long: Attention with linear biases enables input length extrapolation. arXiv preprint arXiv:2108.12409. Nils Reimers and Iryna Gurevych. 2020. Making monolingual sentence embeddings multilingual using knowledge distillation. In EMNLP 2020. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023a. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. Boxin Wang, Wei Ping, Peng Xu, Lawrence McAfee, Zihan Liu, Mohammad Shoeybi, Yi Dong, Oleksii Kuchaiev, Bo Li, Chaowei Xiao, Anima Anandkumar, and Bryan Catanzaro. 2023a. Shall we pretrain autoregressive language models with retrieval? a comprehensive study. Zhiruo Wang, Jun Araki, Zhengbao Jiang, Md Rizwan Parvez, and Graham Neubig. 2023b. Learning to filter context for retrieval-augmented generation. arXiv preprint arXiv:2311.08377. 12694 Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. NeurIPS 2022. Shitao Xiao, Zheng Liu, Peitian Zhang, and Niklas Muennighoff. 2023. C-pack: Packaged resources to advance general chinese embedding. Fangyuan Xu, Weijia Shi, and Eunsol Choi. 2023. Recomp: Improving retrieval-augmented lms with compression and selective augmentation. arXiv preprint arXiv:2310.04408. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In EMNLP 2018. Biao Zhang, Deyi Xiong, Yubin Ge, Junfeng Yao, Hao Yue, and Jinsong Su. 2022. Aan+: Generalized average attention network for accelerating neural transformer. Journal of Artificial Intelligence Research. Biao Zhang, Deyi Xiong, Jinsong Su, Qian Lin, and Huiji Zhang. a. Simplifying neural machine translation with addition-subtraction twin-gated recurrent networks. In EMNLP 2018. Zhengyan Zhang, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, and Jie Zhou. b. MoEfication: Transformer feed-forward layers are mixtures of experts. In Findings of ACL 2022. A Instructions Used in QGC The following are the instructions we used after referring to the existing studies (Liu et al., 2023) and testing. • NaturalQuestions: Write a high-quality answer for the given question using only the provided search results(some of which might be irrelevant). 
• TriviaQA & HotpotQA: Using only the provided search results (some of which might be irrelevant), answer the following question with one or few words. 12695
2024
685
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12696–12717 August 11-16, 2024 ©2024 Association for Computational Linguistics COSMIC: Mutual Information for Task-Agnostic Summarization Evaluation Maxime DARRIN1,2,3,4 Philippe FORMONT1,2,4,5 Jackie Chi Kit CHEUNG2,3,7 Pablo PIANTANIDA1,2,4,6 1International Laboratory on Learning Systems, 2Mila - Quebec AI Institute 3McGill University 4Université Paris-Saclay, 5École de technologie supérieure (ETS) 6CNRS, CentraleSupélec 7Canada CIFAR AI Chair [email protected] [email protected] [email protected] [email protected] Abstract Assessing the quality of summarizers poses significant challenges—gold summaries are hard to obtain and their suitability depends on the use context of the summarization system. Who is the user of the system, and what do they intend to do with the summary? In response, we propose a novel task-oriented evaluation approach that assesses summarizers based on their capacity to produce summaries while preserving task outcomes. We theoretically establish both a lower and upper bound on the expected error rate of these tasks, which depends on the mutual information between source texts and generated summaries. We introduce COSMIC, a practical implementation of this metric, and demonstrate its strong correlation with human judgment-based metrics, as well as its effectiveness in predicting downstream task performance. Comparative analyses against established metrics like BERTScore and ROUGE highlight the competitive performance of COSMIC. 1 Introduction Assessing the quality of summarizers in different settings, tasks, and datasets is critical for better understanding these models and for studying their strengths and weaknesses. In many text generation scenarios, assessing model quality is arduous and resource-intensive, often necessitating human annotations and evaluations. Consequently, developing automatic metrics that align closely with human judgments is paramount (Graham and Baldwin, 2014; Tratz and Hovy, 2008; Giannakopoulos et al., 2008; Deutsch et al., 2021). Standard automatic evaluation methods for summarization rely on the idea that a good summary should have some (semantic) overlap with either a gold standard or the source text (El-Kassas et al., 2021; Allahyari et al., 2017). They leverage similarity metrics such as BLEU (Papineni et al., 2002) or ROUGE (Lin, 2004) to evaluate the quality of the generated summaries. However, these methods often do not correlate well with human judgements (Kryscinski et al., 2020; Kocmi et al., 2021). To enhance alignment with human judgment and capitalize on recent advancements in large language models, recent efforts have concentrated on learned metrics for scoring summaries (Zhang et al., 2020b; Rei et al., 2020; Liu et al., 2023). For instance, Clark et al. (2023) introduced six new learned metrics by finetuning a pretrained MT5 model (Xue et al., 2021) to predict human judgment along different axes. Another line of research focuses on reconstruction-based metrics, such as BLANC (Vasilyev et al., 2020) and Information Difference (Egan et al., 2021), the former evaluating the actual reconstruction error of the source text with and without a summary, and the latter evaluating the information gain obtained when conditioning the generative model on the summary. Standard evaluation methods, therefore, suffer from two major shortcomings. 
They rarely formally define a clear notion of quality and the methods used to evaluate said notion lack theoretical foundations. This leads to potential discrepancies between the intended evaluation and what is actually measured, leading to a lack of validity of the evaluation methods. We introduce a task-oriented evaluation setup where summaries are meant to allow an agent to perform downstream tasks without reading the longer source text. For example, if a political advisor drafts a briefing on a subject to enable a politician to make informed decisions, our evaluation considers the advisor successful if the decisions made using the summary align with those made using the initial source text (Pu et al., 2023; Vanderwende et al., 2007). Providing such a well-defined notion of success enables careful analysis of the validity of the evaluation for different possible use cases. In addition, it enables us to perform mechanical evaluations of our method, reducing the noise 12696 from the evaluation process. Furthermore, many summarization techniques typically operate under the assumption that the downstream task is unknown during the summarization process, yet an implicit notion of summary quality exists. However, this assumption is seldom explicitly addressed or substantiated. In our task-oriented approach, we provide an informationtheoretic rationale for the existence of such a metric. We demonstrate that it essentially involves assessing the mutual information (MI) between the source texts’ distribution and the summaries generated by a given summarizer. This paper does not aim to study the syntactic, semantic, and pragmatic aspects of features of summary information, which are essential for capturing the rich notion of information in human communication. For instance, we do not account for the summaries’ fluency, grammatical correctness, or coherence. Instead, our approach emphasizes how effectively, in the best-case scenario, the output of a summarizer can be used to perform downstream tasks. The results in this paper suggest that the formal definition of mutual information successfully achieves this goal. Our approach leverages embeddings to abstract the surface form of text, following recent studies by Pillutla et al. (2021) and Pimentel et al. (2023). We evaluate the quality of our approach in two ways. First, we show that summarizers that induce a summary distribution with higher MI with the source texts’ distribution are higher quality in the following sense — they tend to produce summaries that preserve outcomes on downstream tasks as compared to using the source texts. Second, we compare the MI to metrics trained on human judgments and show that it displays consistent correlations. Our results are consistent with and extend previous work that leverages MI to understand relationships in datasets (Ethayarajh et al., 2022; Bugliarello et al., 2020), predictive performance of representations (Sui et al., 2023) and tool to construct or evaluate representations (Kim et al., 2022), models and generative processes. Contributions. Our contributions are threefold: 1. A theoretical setting for summarizer evaluation. We frame the summarizer evaluation problem as a statistical inference problem and we derive a task-agnostic reference-free quality metric: the MI between the distribution of source texts and the distribution of the summarizer’s outputs. 2. A practical implementation of this metric: COSMIC. 
We propose a practical implementation of COSMIC using an MI estimator and sentence embeddings. (A plug-and-play Python library built on top of HF Transformers is available as supplementary material and will be released upon publication of this work.)

3. An experimental evaluation. We examine how well MI predicts the performance of downstream tasks in comparison to conventional metrics like BERTScore and BARTScore. Our findings demonstrate that MI is competitive with these metrics. Additionally, we illustrate its strong correlation with metrics trained to emulate human judgment.

2 Related Work

Assessing the quality of summaries poses a unique challenge due to the contextual nature of ‘quality’, influenced by factors such as the audience, topic, and intended purpose of the summary. Even expertly crafted human summaries considered "gold standard" in one context may be perceived as subpar in a different setting (Saziyabegum and Sajja, 2017; Indu and Kavitha, 2016).

Reference-free summary evaluation. Reference-free evaluation methods mostly rely on comparing the content of the summary with the content of the source text (Louis and Nenkova, 2013; El-Kassas et al., 2021), and they rely on common overlap metrics such as ROUGE (Lin, 2004), BLEU (Papineni et al., 2002) or BERTScore (Zhang et al., 2020b). However, most reference-free metrics show some important limitations (Deutsch et al., 2022): they lack theoretical grounding. Works such as BLANC (Vasilyev et al., 2020) and Information Difference (Egan et al., 2021) are closely related to ours. The former evaluates the actual reconstruction error of the source text with and without a summary, while the latter evaluates the information difference as estimated by the generative model. However, both lack the theoretical justification we introduce in Section 3 and only evaluate the information based on the generative model. While these methods appear intuitive, they lack theoretical support for their approaches, relying solely on empirical results and correlations with human evaluations. Thus, the results are often tied to a dataset and are affected by variance from the human evaluation. Conversely, we offer a theoretical framework applicable to any dataset and measure a tangible, well-defined success criterion: do the tasks yield the same output when performed on summaries as they do on the source texts?

Embedding-based evaluation. MAUVE (Pillutla et al., 2021) first proposed a new information-theoretic metric to compare two text distributions based on embedding clustering; more recent work showed that the crucial element was the clustering step (Pimentel et al., 2023). They show that while embeddings (and the clusters they form) do not capture fluency or grammatical correctness, they do grasp meaning and coherence, making them excellent tools for evaluation.

Dataset difficulty and MI. Measuring MI between the concepts and inputs in a dataset is not a novel idea. In fact, Ethayarajh et al. (2022) leverage the Arimoto information (Arimoto, 1971), rediscovered and dubbed V-usable information by Xu et al. (2020), to assess the difficulty of a dataset. Similarly, Bugliarello et al. (2020) evaluate the difficulty of translating from one language to another. Following this trend, Kim et al. (2022) reuse the point-wise MI between Gaussian distributions to evaluate text-to-image and image-to-text generative models.

Mutual information for summarization. The MI is a natural metric to optimize in summarization.
It has been used as a score to select the most informative or surprising sentences in extractive summarization (Padmakumar and He, 2021) or as an alternative objective for text decoding (van der Poel et al., 2022). In this paper, we revisit the use of MI, but between the distribution of source texts and the distribution of the summarizer.

3 A Task-Driven Evaluation Framework

3.1 Background: Probabilistic models for text summarization

We consider models for language summarization tasks that define a probability distribution over strings. More formally, these models are probability distributions over an output space S conditioned on an input text t, where S is the set of all possible strings that can be constructed from the vocabulary Ω:

S ≜ { BOS ∘ s ∘ EOS | s ∈ Ω* },

where BOS and EOS stand for special reserved beginning-of-sequence and end-of-sequence tokens, respectively, and Ω* denotes the Kleene closure of Ω. Today's models for language summarization are typically parameterized by encoder-decoder or decoder-only architectures with attention mechanisms and trainable weights θ. These models follow a local-normalization scheme, meaning that for all i > 0, pθ(· | s<i, t) defines a probability distribution over S̄ = S ∪ {EOS}. The probability of a sequence s = ⟨s0, s1, . . .⟩ can then be decomposed as

pθ(s|t) = ∏_{i=1}^{|s|} pθ(si | s<i, t),   (1)

where s<i = ⟨s0, . . . , si−1⟩ and s<1 = s0 ≜ BOS.

3.2 Background: Information Theory

Information theory (Cover and Thomas, 2006) provides several tools for analyzing data and their associated probability distributions, including entropy and MI. These metrics are typically defined based on a "true" probability distribution, denoted as p(c), or the joint probability density function p(t, s), which may not be known but governs the behavior of the random variables T and S. The fundamental concept in information theory is surprisal, defined as H(C = c) = −log p(c); its expected value is termed entropy:

H(C) = ∑_{c∈C} p(c) H(C = c).   (2)

Another important concept is the mutual information (MI) between two random variables:

I(T; S) = H(T) − H(T|S).   (3)

It captures the amount of information we gain about one random variable when observing a realization of the other. The data-processing inequality (Cover and Thomas, 2006) states that processing a random variable with a (possibly random) function f(·) can never increase its informativeness but only reduce its information content:

I(C; f(T)) ⩽ I(C; T).   (4)

The rate-distortion (RD) function of a discrete random variable C for a given distortion function ℓ(c, ĉ) is defined as (Csiszár, 1974, eq. (1.4)):

R_{C,ℓ}(D) ≜ min_{p(ĉ|c) : E[ℓ(C, Ĉ)] ⩽ D} I(C; Ĉ).   (5)

For further details, the reader is referred to Appendix A.2 and (Cover and Thomas, 2006).
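To make these definitions concrete, here is a small, self-contained numerical illustration (not part of the paper) of the entropy in (2), the mutual information in (3), and the data-processing inequality in (4) for a toy discrete joint distribution; the paper itself works with differential entropies of continuous embeddings (Section 4.2). The joint distribution below is arbitrary and purely illustrative.

```python
# A small numerical illustration of Eqs. (2)-(4): entropy, mutual information, and
# the data-processing inequality, for a toy discrete joint distribution. Purely
# illustrative; the paper works with differential entropies of continuous embeddings.
import numpy as np

def entropy(p):
    """Shannon entropy in nats, ignoring zero-probability outcomes."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return float(-np.sum(p[nz] * np.log(p[nz])))

def mutual_information(p_joint):
    """I(X; Y) = H(X) + H(Y) - H(X, Y) for a joint pmf given as a 2-D array."""
    p_joint = np.asarray(p_joint, dtype=float)
    return (entropy(p_joint.sum(axis=1)) + entropy(p_joint.sum(axis=0))
            - entropy(p_joint.ravel()))

# Toy joint distribution of a "text" variable T (rows) and a "summary" variable S (columns).
p_ts = np.array([[0.30, 0.05],
                 [0.05, 0.30],
                 [0.10, 0.20]])

# Post-processing the summary with a deterministic function f (here: collapsing both
# summary symbols into one) can only lose information, as in the data-processing inequality.
p_t_fs = p_ts.sum(axis=1, keepdims=True)
print(mutual_information(p_ts))    # I(T; S)  ~ 0.21 nats
print(mutual_information(p_t_fs))  # I(T; f(S)) = 0.0 <= I(T; S)
```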
[Figure 1 schematic: source texts T are mapped by task-agnostic summarization to summaries S; concepts c1(T), c2(T), c3(T) are extracted from the texts ("Concepts") and predictions ĉ1(S), ĉ2(S), ĉ3(S) are made from the summaries ("Predictions"); Pr[ĉ(S) ≠ c(T)] is bounded in terms of I(T; S) for all ĉ, c.]
Figure 1: The summarizer is expected to generate summaries S for a given distribution of source texts T, without prior knowledge of the specific application, e.g. predicting the concepts c1, c2, . . . . The objective is to assess the discrepancy incurred when predicting from the generated summaries instead of the original source texts. We demonstrate that, for any undisclosed task c, the likelihood of error is constrained within bounds determined by monotone functions of the MI.

3.3 A task-oriented evaluation setting

Most summarization methods operate under the assumption that the downstream task of interest is unknown during the generation process, relying instead on a generic notion of summary quality. We assume in this work that the goal is to perform similarly, in terms of classification error, on both the original texts T and the resulting summaries S.² Next, we formalize this evaluation metric based on the assumption of an unknown downstream task.

Let c : Ω* → {1, . . . , m} denote the target concept of interest, which can be extracted from the initial texts T by applying C ≜ c(T); similarly, let ĉ(S) denote the concept predicted from the summaries S generated according to the underlying model pθ(s|t). The quality of the summarizer with respect to the downstream task (unknown to the summarization model) can be assessed using the expected error rate, as illustrated in Figure 1.
In other words, it amounts to determining the average probability that the concept ĉ(S) extracted by the classifier from the summary differs from the original concept c(T) in the source text:

Pe(c, ĉ, θ) ≜ E_θ[ 1[c(T) ≠ ĉ(S)] ],   (6)

where the expectation is taken with respect to the joint distribution of text and summary (T, S), i.e., the source text distribution and the distribution over summaries induced by the summarizer.

²Other goals are possible, such as reducing the source text complexity or removing specific information. Our choice stems from the practical usability of the task-oriented definition for evaluation purposes.

Interestingly, it is not difficult to check by the data-processing inequality (4) that I(C; S) ⩽ I(C; T); thus, the summaries S output by the summarization model may not preserve all the information necessary to predict C from S, unless the summarization model is aware of the downstream task. However, identifying the set of relevant downstream tasks for various texts is challenging in practice. Consequently, (6) does not provide a satisfactory evaluation metric for summarization systems. This observation raises the central question studied in this work: How much information about any downstream task is preserved in the summaries?

4 A Task-Agnostic Quality Measure for Summarization

4.1 Theoretical Results

In this section, we rigorously motivate the evaluation of summarization systems by measuring the MI between texts and the resulting summaries. Assume the existence of a random variable T representing source texts, which follows an unknown distribution inherent to the source-text domain. Given a stochastic summarization system pθ(s|t) as defined in expression (1), we denote by S ∼ pθ(s|T) the random variable representing summaries generated by the summarization system for these source texts T. Consider the task c(T) that we intend to execute on the source texts, where C = c(T) represents the random variable denoting the outcomes of this task on the source texts. We assume that the same task can be performed on the summaries and let ĉ(S) denote the corresponding prediction. Intuitively, if we can accurately predict the outcome C from the summaries, then we say that the summarization system is high-quality, since it preserves the information necessary for performing the task. The next proposition frames our information-theoretic bounds on the performance of any arbitrary downstream task. In particular, it shows that the expected error rate Pe(c, ĉ, θ) as defined in (6) can be upper and lower bounded in terms of the MI between the texts and the summaries.

Proposition 1 (Information-theoretic bounds). Let C = c(T) denote the underlying concept variable and let ĉ(S) be the Bayes predictor of C observing the output summaries S, based on the underlying summarization model pθ(s|t). The expected error rate satisfies:

Pe(c, ĉ, θ) ⩽ 1 − κ exp(I(T; S)),   (7)
Pe(c, ĉ, θ) ⩾ R⁻¹_{C,ℓ01}(I(T; S)),   (8)

where κ ∈ (0, 1) is a constant which does not depend on the summaries, and R⁻¹_{C,ℓ01}(·) is the inverse of the rate-distortion function using ℓ01(c, ĉ) = 1[c ≠ ĉ]. Furthermore, the lower bound holds for an arbitrary loss ℓ(·, ·) measuring the disagreement between the concept and its predicted value:

inf_{ĉ(·)} E[ℓ(C, ĉ(S))] ⩾ R⁻¹_{C,ℓ}(I(T; S)).

Proof. The upper bound (7) relies on the fact that the predictor ĉ(S) is the optimal (Bayes) classifier, for which the expected error rate admits a well-known expression. The lower bound (8) uses the data-processing inequality, which implies that I(T; S) ⩾ I(c(T); ĉ(S)), together with the definition of the rate-distortion function evaluated at the loss ℓ01(c, ĉ), noticing that its expectation yields the expected error rate Pe(c, ĉ, θ) as defined in (6). For further details, see Appendix A.

Remark 1. The bounds in Proposition 1 imply that the expected error rate of any task predicted by observing the summaries is lower- and upper-bounded by functions of the MI between the text and the summaries, I(T; S). More precisely, the upper bound (7) is a monotonically decreasing function of the MI, while the lower bound (8) is a non-increasing function of the MI. The lower bound can be further simplified by evaluating the rate-distortion function, as shown in Appendix B. In other words, greater MI corresponds to improved expected prediction performance on the summaries. Conversely, when the MI is limited, the expected error rate is bounded away from zero. Interestingly, the arguments behind the bounds do not depend on the considered task, suggesting that the MI can be used as a task-agnostic metric to evaluate the quality of summaries.
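As a purely numerical illustration of the lower bound (8), and not part of the paper's implementation, the sketch below inverts the rate-distortion function for the special case treated in Appendix B: C uniform over m classes and the 0-1 loss, where the bound reduces to Fano's inequality. It returns the smallest error rate compatible with a given amount of mutual information (in nats); the values fed to it are hypothetical.

```python
# Numerical illustration of the lower bound (8) for the case of Appendix B:
# C uniform over m classes and the 0-1 loss, where the bound reduces to Fano's
# inequality  I >= log m - Hb(Pe) - Pe*log(m - 1).  This is a sketch under those
# assumptions, not part of the COSMIC implementation; MI values are in nats.
import numpy as np
from scipy.optimize import brentq

def binary_entropy(p):
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * np.log(p) - (1.0 - p) * np.log(1.0 - p)

def rate(pe, m):
    """R(Pe) = log m - Hb(Pe) - Pe*log(m-1), decreasing on [0, 1 - 1/m]."""
    return np.log(m) - binary_entropy(pe) - pe * np.log(m - 1)

def min_error_from_mi(mi, m):
    """Smallest error rate Pe compatible with mutual information `mi`, i.e. R^-1(mi)."""
    if mi >= np.log(m):
        return 0.0                # enough information: the bound no longer forces errors
    if mi <= 0.0:
        return 1.0 - 1.0 / m      # no information: chance-level error
    return brentq(lambda pe: rate(pe, m) - mi, 1e-12, 1.0 - 1.0 / m)

for mi in (0.1, 0.5, 1.0, 2.0):
    print(mi, round(min_error_from_mi(mi, m=10), 3))
```

For example, with m = 10 classes, an MI of 1 nat still forces a substantial error rate, while any MI above log 10 ≈ 2.3 nats makes this lower bound vacuous (equal to zero).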
In the next section, we propose a practical method to estimate the MI. In Section 6, we empirically show that it correlates well with the performance of downstream tasks and with other human judgment-based metrics, and we compare it to standard metrics.

4.2 Estimating MI from samples

While the MI captures an intuitive notion of information and is theoretically grounded, estimating it accurately is notoriously challenging. Therefore, following prior research (Ethayarajh et al., 2022; Xu et al., 2020), we estimate the Arimoto information (Arimoto, 1971) based on the KNIFE estimator (Pichler et al., 2022). Our method comprises three steps: (1) we project the source texts and the summaries into a continuous embedding space; (2) we fit a mutual information estimator on these embeddings; and (3) we estimate the mutual information between the source texts and the summaries using the fitted estimator. We report the details of our method in Algorithm 1.

Mutual information estimator. We rely on the KNIFE estimator (Pichler et al., 2022) to estimate the differential entropy of the embeddings and, from it, the mutual information. This estimator relies on Gaussian mixtures with K modes to fit the density function and induces a soft clustering, as used for text generation evaluation (Pillutla et al., 2021; Pimentel et al., 2023). We found experimentally that the number of modes K did not impact the performance significantly. We report the results for K = 4 (see Appendix D.1 for further details).

Embeddings. In order to obtain continuous representations of the source texts and the summaries, we mainly rely on the AnglE embedders (Li and Li, 2023), as they sit at the top of the MTEB leaderboard (Muennighoff et al., 2023) with a rather small model. In addition, we experimented with different embedders from the sentence-transformers library (Reimers and Gurevych, 2019), mainly the paraphrase and sentence-similarity embeddings. We found that our method was robust to the choice of embedder and thus report the results for the AnglE embedders.
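Algorithm 1 below states the abstract procedure. The following sketch is one possible instantiation, not the released COSMIC library: it embeds source texts and summaries with a sentence-transformers model and replaces the KNIFE estimator with a simple Gaussian-mixture plug-in estimate of I(T; S) = h(T) + h(S) − h(T, S). The model name, PCA dimension and number of mixture components are illustrative assumptions, not the paper's settings.

```python
# One possible instantiation of Algorithm 1 (a sketch, not the COSMIC library).
# KNIFE is replaced by a plug-in estimate I(T; S) ~= h(T) + h(S) - h(T, S), where each
# differential entropy is estimated with a Gaussian mixture fitted to (PCA-reduced)
# sentence embeddings. Model name, PCA dimension and K are illustrative choices.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def gmm_entropy(x, n_components=4, seed=0):
    """Plug-in differential entropy: -mean log-density under a fitted Gaussian mixture."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag",
                          random_state=seed).fit(x)
    return float(-gmm.score_samples(x).mean())

def estimate_mi(source_texts, summaries, n_pca=16, n_components=4):
    embedder = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative embedder choice
    e_t = embedder.encode(source_texts)                  # (N, d) source-text embeddings
    e_s = embedder.encode(summaries)                     # (N, d) summary embeddings
    e_t = PCA(n_components=n_pca).fit_transform(e_t)     # keep the densities well conditioned
    e_s = PCA(n_components=n_pca).fit_transform(e_s)
    joint = np.concatenate([e_t, e_s], axis=1)
    return (gmm_entropy(e_t, n_components) + gmm_entropy(e_s, n_components)
            - gmm_entropy(joint, n_components))

# mi = estimate_mi(list_of_source_texts, list_of_generated_summaries)
```

The PCA step is only a convenience to keep the mixture densities well conditioned on small samples; it is not part of Algorithm 1.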
Algorithm 1 Evaluating the performance of a summarizer using KNIFE and sentence transformers
Input: A dataset D_N = {(T_i, S_i)}_{i=1}^N
Input: A pre-trained embedder Emb
Output: An estimate of the mutual information between T and S
1: E_T ← {Emb(T_i)}_{i=1}^N
2: E_S ← {Emb(S_i)}_{i=1}^N
3: Î_N(T; S) ← KNIFE(E_T, E_S)
4: return Î_N(T; S)

5 Experimental Settings

While the bounds derived in Section 4.1 provide certain theoretical guarantees regarding the mutual information, their practical consequences must still be evaluated empirically. We present two evaluation methods. First, we show that MI effectively predicts whether performing a downstream task on the summaries would lead to the same outcome as performing it on the source text. This validates our task-oriented evaluation approach for summaries. Furthermore, we contrast our metric with metrics trained to emulate human judgments across various dimensions. Remarkably, we observe that even without any training, our metric closely aligns with human preferences.

Datasets. We select three summarization datasets for the English language: CNN/DailyMail (See et al., 2017; Hermann et al., 2015), XSum (Narayan et al., 2018) and MultiNews (Fabbri et al., 2019), and perform all evaluations on all three. We provide additional experiments in French and Spanish using the MLSUM (Scialom et al., 2020) and XLSUM (Hasan et al., 2021) datasets; we report mixed results, due to the lack of efficient multilingual embedders, in Section E.2.

Models. We evaluate numerous summarizers from the HuggingFace hub, relying on different backbones and pretraining methods and finetuned on different datasets. We conduct our experiments on the PEGASUS suite of models (Zhang et al., 2020a), on BART large models (Lewis et al., 2019) and on DistilBART models (Shleifer and Rush, 2020). We also evaluated the models presented in the SEAHORSE benchmark (Clark et al., 2023) through the generated summaries they provide, which include mT5 models (Xue et al., 2021) and different variants of the T5 (Raffel et al., 2023) and PaLM (Chowdhery et al., 2022) models. The reader is referred to Appendix C for model details.

Baseline metrics for quality assessment. For all summaries, we compute reference-free versions of ROUGE-L, BERTScore (Zhang et al., 2020b) and BARTScore (Yuan et al., 2021) between the source texts and the summaries. We average over the dataset to obtain a summarizer-level score that we can compare with the MI.

Human evaluation with SummEval. While the SummEval dataset (Fabbri et al., 2021) provides only a very limited number of human judgments, it contains enough aligned source texts and AI-generated summaries to evaluate the MI metric. We use these unannotated data to evaluate the MI and rank the models accordingly; we then compare this ranking with the human evaluation provided by the SummEval benchmark. We provide a comparison with previous metrics in Table 2. It is worth noting that the evaluation is not fair to our metric, as the others are evaluated only on the annotated data, while ours is evaluated on the unannotated data.

5.1 Downstream tasks

To demonstrate the efficacy of our metric in predicting downstream task performance on the summaries, the ideal scenario would involve generating summaries and tasking various individuals with different tasks across diverse contexts. However, executing this at a larger scale is impractical.
As an alternative, we suggest comparing the results of different algorithms (classifiers and embedders) applied to both the source texts and the generated summaries. A summarizer can be deemed effective if these algorithms yield consistent results when applied to both the summaries and the source texts.

Classification tasks. We selected four different tasks (sentiment analysis, policy classification, emotion classification and ChatGPT detection) and corresponding classifiers from the HuggingFace Hub, ran them on the source texts and the summaries, and compared their outputs. We report the expected error rate (6), i.e., the rate at which the classifier outputs a different label on the source text and on the summary.

Embedders. We compare the embeddings obtained for the source texts and the summaries from different models: the embeddings produced by the classifier models above, and paraphrase and sentence-similarity embeddings (Reimers and Gurevych, 2019). We show the correlation of our metrics with the cosine similarity between the embeddings of the source texts and those of the corresponding summaries. Since embedders are supposed to abstract the information in the texts, good summarization models should produce embeddings close to the source texts' embeddings.

5.2 Correlation With Learnt Metrics

There are not many available metrics trained on human judgment for summarization. We chose to evaluate our metric against the SEAHORSE metrics (Clark et al., 2023), as they are the most recent and provide interpretable scores.

Seahorse metrics. These metrics, learnt from human judgment, assess summaries along six axes: Main ideas, Attribution, Comprehensible, Grammar, Repetition and Conciseness. Main ideas and Attribution measure, respectively, whether the main ideas of the source text are present in the summary and whether the information in the summary does indeed come from the source text. Comprehensible and Grammar measure whether the summary is fluent and grammatically correct, while Repetition and Conciseness measure whether the summary is free of repetitions and concise. The Pr(Yes) metric corresponds to the average probability over the dataset that the SEAHORSE model predicts the answer Yes to the corresponding question. While we would not expect our MI metric to correlate with the Grammar or Comprehensible scores, it should strongly correlate with the Main ideas and Attribution scores, as these are proxies for the information contained in the summary.

Table 1: Correlation of common quality estimation metrics with performance on the downstream classification tasks. Sent. analysis stands for sentiment analysis, GPT det. for GPT detector, Policy for policy classification, Emotion for emotion classification, and Emb. for paraphrase embedding.

        Metric                Sent. analysis  GPT det.  Policy  Emotion  Emb.
Metric  I(S; T)                    0.63         0.59     0.58    0.56    0.81
Bas.    BERTScore Precision        0.53         0.68     0.47    0.46    0.70
        BERTScore Recall           0.59         0.66     0.74    0.54    0.42
        BLANC                      0.59         0.59     0.66    0.60    0.38
        SMART1                     0.55         0.63     0.47    0.47    0.28
        SMART2                     0.55         0.63     0.47    0.47    0.28
        SMARTL                     0.55         0.63     0.47    0.47    0.28
        BARTScore                  0.51         0.73     0.42    0.48    0.62
        BERTScore                  0.64         0.75     0.64    0.58    0.61
        ROUGE-L                    0.56         0.55     0.54    0.47    0.29
SH.     Attribution                0.49         0.71     0.37    0.49    0.62
        Main ideas                 0.33         0.37     0.60    0.38    0.47
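Before turning to the results, here is a sketch, not taken from the paper, of how the downstream-task evaluation of Section 5.1 can be run in practice: the same off-the-shelf classifier labels each source text and its summary, the expected error rate (6) is estimated as the fraction of disagreements, and system-level correlations such as those in Table 1 are computed with SciPy. The `model_name` argument is a placeholder for any of the classifiers mentioned above.

```python
# Sketch of the downstream-task evaluation of Section 5.1 (not the paper's code).
# The expected error rate (6) is estimated as the rate at which a fixed classifier
# assigns different labels to a source text and to its summary.
from transformers import pipeline
from scipy.stats import spearmanr

def disagreement_rate(source_texts, summaries, model_name):
    """Fraction of (text, summary) pairs on which the classifier changes its label."""
    clf = pipeline("text-classification", model=model_name)
    labels_src = [out["label"] for out in clf(source_texts, truncation=True)]
    labels_sum = [out["label"] for out in clf(summaries, truncation=True)]
    return sum(a != b for a, b in zip(labels_src, labels_sum)) / len(labels_src)

def system_level_correlation(metric_per_system, error_rate_per_system):
    """Correlate one metric value per summarizer (e.g. its estimated MI) with its
    empirical disagreement rate, as in the system-level analysis of Table 1."""
    return spearmanr(metric_per_system, error_rate_per_system).correlation
```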
6 Experimental Results

Correlation with downstream task performance. In Table 1, we report the correlation between the different metrics and the expected error rate for the different classification tasks, and with the dot product for the paraphrase embedding task. The MI is competitive with the common quality estimation metrics such as BERTScore and BARTScore. In addition, we report the correlations for the SEAHORSE metrics trained on human judgment to predict whether the main idea of the text is present in the summary and whether all elements of the summary are attributable to the source text.

Figure 2: Spearman correlation with human judgment estimated by the SEAHORSE metrics. As one would expect, the MI correlates with Attribution and Main ideas but not with Comprehensible, Grammar or Repetition.

Correlation with human-judgment-based metrics. As the SEAHORSE metrics are trained to mimic human judgment, we can use them to assess the behavior of the MI. We found consistent and expected correlations with the different SEAHORSE metrics (Figure 2). The mutual information correlates well with Attribution and Main ideas but not with Comprehensible, Grammar and Repetition. This is not surprising, as one would expect the mutual information to capture the amount of information in the summary that is attributable to the source text rather than the grammatical correctness or fluency of the summary. The high correlation with Conciseness is a rather surprising result. We believe this is because conciseness correlates with the strength of the summarizer, which in turn correlates with the MI between the source texts and the summaries: the stronger the language model, the better it encodes the source texts in a summary. It is also plausible that the learned metrics are flawed in some ways, hindering the results. We observe that the MI correlates more consistently with the Main ideas, Attribution and Conciseness scores than the common quality assessment metrics, e.g. BERTScore and BARTScore (see Figure 2). This suggests that, in addition to being a good predictor of the performance of downstream tasks, the MI is also a better predictor of human judgment of summary quality. Notably, the MI, a theoretical quantity derived from Shannon's MI, reproduces human judgment expectations without training on human judgment data.

Figure 3: Comparison of models of different sizes and strengths, finetuned on different datasets, in terms of the MI between their summaries and the source texts on the different evaluation datasets. The dotted lines highlight the highest MI reached for each dataset, and the grey area represents the number of parameters of the models.

Comparing summarizers with COSMIC. In Figure 3, we compare the MI of different models and report their sizes. We observe that OOD models, trained on Arxiv or medical data, perform very poorly, whereas in-distribution models such as BART display significantly higher MI. Interestingly, the size of the model does not seem to be a good predictor of MI. In Appendix F, we further use the mutual information to compare the informativeness of different summarizers with one another and to construct a hierarchy based on their mutual informativeness.

7 Discussion of Quantitative Results

Our findings reveal that various conventional metrics do not consistently align with effectiveness on downstream tasks. As illustrated in Figure 4, different metrics exhibit distinct behaviors and correlations with other metrics and with the performance of downstream tasks.

SEAHORSE Metrics. Notably, most SEAHORSE metrics demonstrate limited correlations with the effectiveness of downstream tasks. Unexpectedly, the Main ideas metric performs less effectively than the Attribution metric.
BERTScore. BERTScore displays good correlation with the task-preserving capabilities of the summaries but poor correlations with human judgments (real or estimated), whereas the mutual information is theoretically grounded and an all-around more consistent metric.

Table 2: Comparison of our method against many baselines on the SummEval human evaluation dataset. We report the system-level Kendall's Tau correlation with human judgments (Coher. = coherence, Cons. = consistency, Flu. = fluency, Rel. = relevance).

Metric                 Coher.  Cons.  Flu.  Rel.
Reference Dependent
ROUGE-1                  .35    .55   .53   .58
ROUGE-2                  .23    .60   .49   .43
ROUGE-L                  .12    .12   .26   .35
BLEU                     .22    .05   .33   .38
CHRF                     .35    .63   .56   .55
BERTScore                .33   -.03   .14   .20
MoverScore               .23   -.05   .26   .35
BLEURT                   .53    .20   .41   .47
SMS                      .27    .60   .36   .40
SMART-1                  .43    .67   .64   .67
SMART-2                  .42    .75   .63   .58
SMART-L                  .57    .57   .61   .73
Reference Free
PRISM                    .23    .60   .36   .37
T5-ANLI                  .25    .58   .54   .52
BARTScore                .35    .62   .49   .45
BARTScore+CNN            .55    .32   .59   .58
Q2                       .25    .75   .58   .45
RISE extMulti-News       .53    .73   .71   .70
RISE SamSUM              .53    .70   .68   .70
RISE CNN                 .53    .73   .75   .70
Ours
I(T; S)                  .23    .53   .47   .54

Behavior of MI. The MI displays very similar behavior to the metric trained to detect whether the ideas presented in the summary come from the source text (Attribution) (Figure 4), but it surprisingly does not follow the same behavior as the metric trained to detect whether the main idea is present (Main ideas). Coincidentally, the Main ideas metric does not correlate with the expected error rate of the classification tasks. This may be an artifact of the training of the SEAHORSE benchmark, a limitation of our metric, or it could indicate that answering the question "Is the content of the summary fully attributable to the content of the source text?" is more relevant for downstream tasks than "Is the main idea of the source text present in the summary?".

Independence of the metrics. While it seems that Attribution and the MI follow a similar trajectory in Figure 4, we find that the MI is not correlated with the Attribution metric (Figure 5). This suggests that these two metrics are independent and could be used in conjunction to evaluate summarizers. When it comes to more standard metrics, we find two clusters of metrics that are relatively independent of each other: one represented by BERTScore, which comprises the MI, and the other represented by BARTScore, which includes Attribution.

Figure 4: Correlation between the main metrics and different performance metrics for the different datasets. The MI (green) closely follows the behavior of the Attribution metric but is a better predictor of the performance of the downstream tasks (sentiment analysis, GPT detector, topic classification, etc.). By contrast, ROUGE scores do not display consistent correlations. For metrics to be considered effective, they should consistently exhibit positive correlations with each other. In Section E.3 we report the aggregated numerical results.

Figure 5: Correlation matrix illustrating the relationships among the various metrics, with metrics clustered by correlation similarity. This clustering indicates the degree of similarity between metrics in terms of how they correlate with one another. The goal is to have a diverse set of grounded yet independent metrics that assess different aspects of summarization quality.
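The clustering view of Figure 5 can be reproduced, in spirit, with a few lines of standard tooling. The sketch below (not the paper's code) computes pairwise Spearman correlations between per-summarizer metric scores and groups metrics by correlation similarity; the DataFrame layout (one row per summarizer, one column per metric) is an assumption made for illustration.

```python
# Sketch (not the paper's code) of the metric-independence analysis behind Figure 5:
# correlate metrics across summarizers, turn the correlation into a dissimilarity, and
# cluster metrics hierarchically. `scores` has one row per summarizer and one column
# per metric (e.g. "MI", "BERTScore", "BARTScore", "Attribution").
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def cluster_metrics(scores: pd.DataFrame, n_clusters: int = 2) -> dict:
    corr = scores.corr(method="spearman")       # metric-by-metric correlation matrix
    dist = 1.0 - corr.values                    # dissimilarity: 0 when perfectly correlated
    np.fill_diagonal(dist, 0.0)                 # guard against floating-point noise
    condensed = squareform(dist, checks=False)  # condensed form expected by linkage
    tree = linkage(condensed, method="average")
    labels = fcluster(tree, t=n_clusters, criterion="maxclust")
    return dict(zip(corr.columns, labels))      # metric name -> cluster id
```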
Correlations with Human Judgement. While the MI correlates less well with human judgments on the SummEval dataset than competitors such as SMART or RISE, it correlates more with downstream task performance (Table 1) while relying neither on gold summaries for comparison (SMART) nor on very large pre-trained models (RISE). As mentioned previously, the comparison with the other metrics is not completely fair, as the MI is evaluated on unannotated data while the other metrics are evaluated on the (limited) annotated data. Nonetheless, the MI displays on-par performance with the other metrics on the SummEval dataset and is a promising metric for summarizer evaluation overall.

8 Summary and Discussion

We introduced a task-oriented setting for summary evaluation in which we can derive a principled and clear notion of the quality of a summarizer: the expected risk of performing a task on a summary instead of the source text. We connected this risk theoretically to the mutual information between the source texts and the generated summaries, bounding the risk on both sides using the mutual information. Even though these bounds are task-agnostic and thus potentially loose, we demonstrated experimentally that the mutual information indeed correlates well with the risk. High mutual information indicates that a task performed on the summaries is likely to produce the same outcome as on the source texts. COSMIC is therefore theoretically grounded in a reasonable task-oriented scenario. Our ability to estimate this mutual information in practice, and its correlation with downstream task performance, further underscores its significance. Our proposed method extends beyond summarization systems and could also contribute to the broader field of multi-modal generation evaluation (Kim et al., 2022).

Acknowledgements

This work was granted access to the HPC resources of IDRIS under the allocation 2023AD011013290R2 made by GENCI.

9 Limitations & Ethical considerations

We have introduced a novel evaluation setting and a theoretically grounded metric for assessing summarizers, yet both have their limitations. Firstly, our setting assumes that the sole objective of a summary is to facilitate downstream task performance, and we define a good summary as one that preserves task outcomes. However, summaries can serve multiple purposes, such as aiding comprehension, acting as educational aids, or promoting the source text, which we do not account for in our approach.

While mutual information is theoretically grounded, it is not without flaws and fails to capture all nuances of the summarization task. It serves as a tool to evaluate a summarizer's informativeness, in contrast to other metrics lacking theoretical grounding. It is imperative to use mutual information in conjunction with other metrics to evaluate summaries comprehensively, as it solely addresses the informativeness of summaries about their source text. This metric does not assess grammaticality. Consequently, high mutual information values may arise from imperceptible artefacts that render the summary highly informative about the source text yet unintelligible to human readers. Moreover, our method indirectly evaluates mutual information in the continuous domain by assessing the mutual information between embeddings generated by a fixed language model. The choice of this model significantly impacts the mutual information estimate, as do the parameters of the estimation tool used.

References

Mehdi Allahyari, Seyedamin Pouriyeh, Mehdi Assefi, Saeid Safaei, Elizabeth D Trippe, Juan B Gutierrez, and Krys Kochut. 2017.
Text summarization techniques: a brief survey. arXiv preprint arXiv:1707.02268. Suguru Arimoto. 1971. Information-theoretical considerations on estimation problems. Information and control, 19(3):181–194. Emanuele Bugliarello, Sabrina J. Mielke, Antonios Anastasopoulos, Ryan Cotterell, and Naoaki Okazaki. 2020. It’s easier to translate out of english than into it: Measuring neural translation difficulty by crossmutual information. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways. Elizabeth Clark, Shruti Rijhwani, Sebastian Gehrmann, Joshua Maynez, Roee Aharoni, Vitaly Nikolaev, Thibault Sellam, Aditya Siddhant, Dipanjan Das, and Ankur P. Parikh. 2023. Seahorse: A multilingual, multifaceted dataset for summarization evaluation. T. M. Cover and J. A. Thomas. 2006. Elements of Information Theory, 2nd edition. Wiley, New York, NY. I Csiszár. 1974. On an extremum problem of information theory. Studia Scientiarum Mathematicarum Hungarica, 9(1):57–71. Daniel Deutsch, Tania Bedrax-Weiss, and Dan Roth. 2021. Towards question-answering as an automatic metric for evaluating the content quality of a summary. Transactions of the Association for Computational Linguistics, 9:774–789. Daniel Deutsch, Rotem Dror, and Dan Roth. 2022. On the limitations of reference-free evaluations of generated text. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10960–10977, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Nicholas Egan, Oleg Vasilyev, and John Bohannon. 2021. Play the shannon game with language models: A human-free approach to summary evaluation. Wafaa S El-Kassas, Cherif R Salama, Ahmed A Rafea, and Hoda K Mohamed. 2021. Automatic text summarization: A comprehensive survey. Expert systems with applications, 165:113679. Kawin Ethayarajh, Yejin Choi, and Swabha Swayamdipta. 2022. Understanding dataset difficulty with V-usable information. Alexander Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir Radev. 2019. Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model. In Proceedings of the 57th 12705 Annual Meeting of the Association for Computational Linguistics, pages 1074–1084, Florence, Italy. Association for Computational Linguistics. Alexander R. Fabbri, Wojciech Kry´sci´nski, Bryan McCann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. Summeval: Re-evaluating summarization evaluation. 
George Giannakopoulos, Vangelis Karkaletsis, George Vouros, and Panagiotis Stamatopoulos. 2008. Summarization system evaluation revisited: N-gram graphs. ACM Trans. Speech Lang. Process., 5(3). Yvette Graham and Timothy Baldwin. 2014. Testing for significance of increased correlation with human judgment. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 172–176, Doha, Qatar. Association for Computational Linguistics. Tahmid Hasan, Abhik Bhattacharjee, Md. Saiful Islam, Kazi Mubasshir, Yuan-Fang Li, Yong-Bin Kang, M. Sohel Rahman, and Rifat Shahriyar. 2021. XLsum: Large-scale multilingual abstractive summarization for 44 languages. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4693–4703, Online. Association for Computational Linguistics. Karl Moritz Hermann, Tomás Kociský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In NIPS, pages 1693–1701. M Indu and KV Kavitha. 2016. Review on text summarization evaluation methods. In 2016 international conference on research advances in integrated navigation systems (RAINS), pages 1–4. IEEE. Jin-Hwa Kim, Yunji Kim, Jiyoung Lee, Kang Min Yoo, and Sang-Woo Lee. 2022. Mutual information divergence: A unified metric for multimodal generative models. Tom Kocmi, Christian Federmann, Roman Grundkiewicz, Marcin Junczys-Dowmunt, Hitokazu Matsushita, and Arul Menezes. 2021. To ship or not to ship: An extensive evaluation of automatic metrics for machine translation. In Proceedings of the Sixth Conference on Machine Translation, pages 478–494, Online. Association for Computational Linguistics. Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9332–9346, Online. Association for Computational Linguistics. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2019. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. CoRR, abs/1910.13461. Xianming Li and Jing Li. 2023. Angle-optimized text embeddings. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023. G-eval: Nlg evaluation using gpt-4 with better human alignment. Annie Louis and Ani Nenkova. 2013. Automatically assessing machine summary content without a gold standard. Computational Linguistics, 39(2):267– 300. Niklas Muennighoff, Nouamane Tazi, Loïc Magne, and Nils Reimers. 2023. Mteb: Massive text embedding benchmark. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don’t give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. ArXiv, abs/1808.08745. Vishakh Padmakumar and He He. 2021. Unsupervised extractive summarization using pointwise mutual information. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311–318. 
Georg Pichler, Pierre Colombo, Malik Boudiaf, Günther Koliander, and Pablo Piantanida. 2022. A differential entropy estimator for training neural networks. Krishna Pillutla, Swabha Swayamdipta, Rowan Zellers, John Thickstun, Sean Welleck, Yejin Choi, and Zaid Harchaoui. 2021. Mauve: Measuring the gap between neural text and human text using divergence frontiers. Tiago Pimentel, Clara Meister, and Ryan Cotterell. 2023. On the usefulness of embeddings, clusters and strings for text generator evaluation. Xiao Pu, Mingqi Gao, and Xiaojun Wan. 2023. Is summary useful or not? an extrinsic human evaluation of text summaries on downstream tasks. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2023. Exploring the limits of transfer learning with a unified text-to-text transformer. Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2685–2702, Online. Association for Computational Linguistics. 12706 Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. Alfréd Rényi. 1961. On measures of entropy and information. Saiyed Saziyabegum and PS Sajja. 2017. Review on text summarization evaluation methods. Indian J. Comput. Sci. Eng, 8(4):497500. Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier, Benjamin Piwowarski, and Jacopo Staiano. 2020. Mlsum: The multilingual summarization corpus. arXiv preprint arXiv:2004.14900. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointergenerator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073– 1083, Vancouver, Canada. Association for Computational Linguistics. Sam Shleifer and Alexander M. Rush. 2020. Pre-trained summarization distillation. Ce Sui, Xiaosheng Zhao, Tao Jing, and Yi Mao. 2023. Evaluating summary statistics with mutual information for cosmological inference. Stephen Tratz and Eduard H Hovy. 2008. Summarization evaluation using transformed basic elements. In TAC. Citeseer. Liam van der Poel, Ryan Cotterell, and Clara Meister. 2022. Mutual information alleviates hallucinations in abstractive summarization. Lucy Vanderwende, Hisami Suzuki, Chris Brockett, and Ani Nenkova. 2007. Beyond sumbasic: Task-focused summarization with sentence simplification and lexical expansion. Information Processing & Management, 43(6):1606–1618. Text Summarization. Oleg Vasilyev, Vedant Dharnidharka, and John Bohannon. 2020. Fill in the BLANC: Human-free quality estimation of document summaries. In Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems, pages 11–20, Online. Association for Computational Linguistics. Yilun Xu, Shengjia Zhao, Jiaming Song, Russell Stewart, and Stefano Ermon. 2020. A theory of usable information under computational constraints. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mt5: A massively multilingual pre-trained text-to-text transformer. Weizhe Yuan, Graham Neubig, and Pengfei Liu. 2021. Bartscore: Evaluating generated text as text generation. 
In Advances in Neural Information Processing Systems, volume 34, pages 27263–27277. Curran Associates, Inc. Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. 2020a. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020b. Bertscore: Evaluating text generation with bert. 12707 A Proof of the Upper and Lower Bounds A.1 Proof of the upper bound on the expected error rate We begin by noticing that 1 − X c∈C p2(c|s) = 1 −E [p(C|S)|S = s] ⩾1 −E  max c∈C p(c|s) S = s  = 1 −max c∈C p(c|s). (9) By taking the expectation over S at both sides and using the well-known relationship with the Bayes error, we obtain the following inequality: Pe(c, bc, θ) = 1 −E  max c∈C p(c|S)  ⩽1 −E "X c∈C p2(c|S) # . (10) Similarly, it is possible to derive a lower bound: X c∈C p2(c|s) = p2(c⋆|s) + X c̸=c⋆ p2(c|s) ⩾  max c∈C p(c|s) 2 , (11) where c⋆(s) = arg maxc∈C p(c|s). By taking the expectation over S at both sides, we obtain: v u u tE "X c∈C p2(c|S) # ⩾E   sX c∈C p2(c|S)   ⩾E  max c∈C p(c|S)  = 1 −Pe(c, bc, θ). (12) Let us denote the second order Rényi’s entropy (Rényi, 1961) conditioned on s as follow: H2(C|s) ≜−1 2 log X c∈C p2(c|s) ! , (13) and thus, X c∈C p2(c|s) = exp (−2H2(C|s)) . (14) By replacing (14) in (10) and in (11), we obtain 1 − q Es∼pS [exp (−2H2(C|s))] ⩽Pe(c, bc, θ) ⩽1 −Es∼pS [exp (−2H2(C|s))] . (15) Since log is a concave function: log X c∈C p2(c|s) ! ⩾ X c∈C p(c|s) log p(c|s), (16) we have that H2(C|s) ⩽− X c∈C p(c|s) log p(c|s) ≜H(C|s), (17) 12708 where H(C|s) indicates the Shannon entropy conditioned to the given observation s. Replacing (17) in the upper bound of (15) yields Pe(c, bc, θ) ⩽1 −Es∼S [exp (−H(C|s))] ⩽1 −exp (−H(C|S)) , (18) ⩽1 −exp (−H(T|S)) , (19) ≡1 −κ exp I(T; S)  , (20) where (18) follows from the fact that the negative exponential function is convex ; (19) follows by Data-Processing Inequality (Cover and Thomas, 2006) since C ≜c(T) and thus C ↔T ↔S form a Markov Chain and H(T|S) denotes the differential entropy of the text T given the summary S ; and (20) follows by an appropriate definition of the constant 0 < κ < 1 which does not depend on the summary random variable S. This concludes the proof of the desired upper bound. A.2 Review of the Distortion-Rate Function The rate-distortion (RD) function of a random variable C for a given distortion function ℓ(·, ·) is defined as (Csiszár, 1974, eq. (1.4)) RC,ℓ(D) ≜ inf p(bc|c): E[ℓ(C, bC)]⩽D I(C; bC). (21) For convenience, we assume that inf bc ℓ(c, bc) = 0, ∀c. Furthermore, we suppose that there exists D > 0 such that RC,ℓ(D) is finite. We denote the infimum of those D by Dmin and Rmax ≜RC,ℓ(Dmin) (or, more precisely, Rmax ≜limD→Dmin+ R(D)). The following properties (see (Csiszár, 1974, Lem. 1.1)) of the RD function will be used. Theorem 1. The RD function RC,ℓ(D) is a non-increasing convex function of D on the interval (Dmin, ∞). It is monotonically decreasing on the interval (Dmin, Dmax) and constant with RC,ℓ(D) = Rmin on [Dmax, ∞) (here Dmax = ∞and Dmin = 0 are possible). The inverse function R−1 C,ℓ(r) is well defined on (Rmin, Rmax) and is monotonically decreasing. It is known as the distortion rate (DR) function of the random variable C for the given distortion function ℓ(·, ·). 
A.3 Proof of the lower bound on the average of a general loss For any suitably loss or evaluation metric denoted by ℓ(c, bc), the quality of the predicted concept bc(S), which is based on the random summary S, compared to a desired target concept C ≜c(T) from the original text T, can be expressed by the average loss E[ℓ(c(T), bc(S))] with respect to the joint distribution of the source text and its summary (T, S). From Data-Processing Inequality and the definition of the RD function (21), the following proposition provides a lower bound on the performance of any arbitrary predictor bc(S) of the target concept C: I(T; S) ⩾I(c(T); bc(S)) (22) ⩾ inf p(bc|c) : E[ℓ(C, bC)]⩽E[ℓ(c(T),bc(S))] I(C; bC) (23) = RC,ℓ(E[ℓ(c(T), bc(S))]) , (24) where the inequality in (22) follow from Data-Processing since c(X) ↔X ↔S ↔bc(S) form a Markov Chain ; and (23) follows by the definition of the RD function (21). 12709 • For E[ℓ(c(T), bc(S))] ∈(Dmin, Dmax), we can invert the RD function (21), and thus we obtain from (22) the fundamental bound R−1 C,ℓ(I(T; S)) ⩽E[ℓ(c(T), bc(S))] or, equivalently, E[ℓ(c(T), bc(S))] ≥R−1 C,ℓ I(T; S)  , (25) which holds for any predictor bc(S) and thus, for the one minimizing the left-hand size of (25). • For E[ℓ(c(T), bc(S))] < Dmin equation (22) reduces to I(T; S) ⩾+∞which shows that to achieve an expected distortion below Dmin the random variables (T, S) must have a joint distribution that is not absolutely continuous with respect to the product of their marginal distributions. • For E[ℓ(c(T), bc(S))] ⩾Dmax we obtain the trivial bound I(T; S) ⩾0. Remark 2. Inequality (25) shows that for arbitrary random concept c(T) about the text to be inferred with bc(S) using the random summary S generated from T, the expected loss of any predictor bc(·) is lower bounded by a monotonically decreasing function of the mutual information between T and S. Thus, irrespective of the precise formulation of the loss function or the task defined by c(·) for execution on the summary, maximizing the mutual information I(T; S) stands as a requisite condition for achieving commendable inference performance. Our result suggests that estimating mutual information of a given summarizer can be a good proxy to assess its quality in the sense of preserving relevant information about concepts. B Rate-Distortion Bound for Classification Tasks We assume that C is uniformly distributed on the finite set C ≜{1, 2, . . . , m}, and we use the Hamming distortion function defined by ℓ(c, bc) ≜ ( 0, if c = bc 1, else . Note that the expected distortion equals the expected error rate, i.e., Pe(c, bc, θ) ≜ inf bc: Ω∗→C Pr C ̸= bc(S)  = inf bc: Ω∗→C E[ℓ C, bc(S)  ] . (26) The RD function is given by (Cover and Thomas, 2006, Problem 10.5) (not solved there) R(D) = ( log m −Hb(D) −D log(m −1), if 0 ⩽D ⩽1 −1 m 0, if 1 −1 m < D . (27) Here, Hb(D) is the binary entropy function of D, i.e., Hb(D) ≜−D log(D) −(1 −D) log(1 −D). Inserting (26) and (27) into (22), we obtain I(T; S) ⩾log m −Hb Pe(c, bc, θ)  −Pe(c, bc, θ) log(m −1) , which is the well known Fano’s inequality (Cover and Thomas, 2006, Th. 2.10.1). This can be put into the form of the general lower bound (25): Pe(c, bc, θ) ≥R−1 C,ℓ01 I(T; S)  . However, a closed-form expression of R−1 C,ℓ01(I) is not available. The function is plotted for different m in Figure 6. C Model Specifications All the evaluated models are listed with their characteristics in Table 3. 
12710 0 0.5 1 1.5 2 2.5 0 0.2 0.4 0.6 0.8 I R−1 C,ℓ01(I) m = 2 m = 4 m = 10 Figure 6: R(I) for different values of m. Table 3: Summary of the models we benchmarked with their name on the Huggingface hub, size and performance metrics. Size ROUGE-L BERTScore BARTScore M. I. Attr. Rep. Compr. Conc. Gram. I(T, S) H(T|S) H(S|T) Model Dataset Falconsai/medical_summarization cnn_dailymail 60 M 0.17 0.16 -1.70 0.33 0.79 0.46 0.76 0.42 0.30 54.09 -14.60 -7.24 multi_news 60 M 0.09 0.02 -2.06 0.22 0.74 0.55 0.70 0.36 0.34 45.53 -11.49 -2.29 xsum 60 M 0.31 0.24 -1.69 0.33 0.77 0.36 0.74 0.33 0.34 49.30 -15.25 -6.26 Falconsai/text_summarization cnn_dailymail 60 M 0.15 0.17 -1.48 0.36 0.82 0.64 0.81 0.44 0.38 53.97 -14.49 -8.22 multi_news 60 M 0.10 0.06 -1.67 0.23 0.80 0.60 0.75 0.38 0.38 46.50 -12.46 -4.00 xsum 60 M 0.31 0.29 -1.53 0.33 0.81 0.58 0.77 0.35 0.37 50.71 -16.67 -8.84 sshleifer/distilbart-xsum-12-1 xsum 221 M 0.08 0.03 -3.27 0.29 0.35 0.86 0.90 0.22 0.80 46.52 -12.49 17.01 multi_news 221 M 0.03 -0.09 -3.08 0.17 0.42 0.82 0.77 0.23 0.68 44.13 -10.10 16.22 cnn_dailymail 221 M 0.04 -0.04 -3.16 0.21 0.41 0.87 0.83 0.22 0.67 49.70 -10.20 15.21 sshleifer/distilbart-cnn-6-6 cnn_dailymail 229 M 0.16 0.22 -1.35 0.47 0.85 0.92 0.85 0.55 0.50 55.31 -15.80 -11.84 multi_news 229 M 0.10 0.15 -1.37 0.34 0.86 0.90 0.84 0.52 0.57 49.34 -15.31 -8.99 xsum 229 M 0.32 0.35 -1.30 0.47 0.84 0.91 0.90 0.49 0.60 52.81 -18.77 -13.27 sshleifer/distilbart-xsum-6-6 cnn_dailymail 229 M 0.04 -0.02 -2.88 0.38 0.55 0.96 0.96 0.41 0.87 49.19 -9.68 12.37 multi_news 229 M 0.03 -0.05 -2.75 0.33 0.57 0.95 0.94 0.44 0.88 44.26 -10.23 10.02 xsum 229 M 0.09 0.06 -2.95 0.46 0.46 0.97 0.97 0.38 0.91 47.07 -13.02 13.67 sshleifer/distilbart-cnn-12-3 xsum 255 M 0.31 0.32 -1.58 0.49 0.74 0.82 0.94 0.47 0.64 51.50 -17.45 -11.65 cnn_dailymail 255 M 0.18 0.24 -1.36 0.48 0.82 0.87 0.93 0.54 0.60 55.21 -15.71 -13.15 multi_news 255 M 0.10 0.13 -1.49 0.36 0.82 0.86 0.92 0.53 0.66 48.32 -14.28 -9.03 sshleifer/distilbart-cnn-12-6 cnn_dailymail 305 M 0.18 0.25 -1.31 0.52 0.84 0.91 0.93 0.58 0.62 55.51 -16.00 -13.79 multi_news 305 M 0.11 0.15 -1.40 0.39 0.84 0.89 0.93 0.57 0.69 48.54 -14.51 -9.60 xsum 305 M 0.31 0.33 -1.50 0.54 0.76 0.91 0.96 0.51 0.71 51.65 -17.58 -12.11 sshleifer/distill-pegasus-xsum-16-4 xsum 369 M 0.08 0.05 -2.88 0.44 0.45 0.97 0.97 0.36 0.91 46.65 -12.57 16.31 multi_news 369 M 0.03 -0.06 -2.60 0.28 0.56 0.95 0.93 0.39 0.86 43.70 -9.68 16.49 cnn_dailymail 369 M 0.04 -0.04 -2.92 0.27 0.50 0.96 0.95 0.32 0.85 46.35 -6.84 13.27 sshleifer/distill-pegasus-cnn-16-4 xsum 369 M 0.23 0.28 -1.49 0.47 0.81 0.84 0.94 0.49 0.70 51.32 -17.29 -6.70 multi_news 369 M 0.08 0.11 -1.44 0.33 0.84 0.82 0.90 0.51 0.69 48.03 -13.99 -4.63 cnn_dailymail 369 M 0.13 0.19 -1.40 0.48 0.83 0.81 0.92 0.54 0.64 54.69 -15.19 -8.70 facebook/bart-large-cnn cnn_dailymail 406 M 0.18 0.25 -1.19 0.49 0.86 0.95 0.94 0.59 0.67 55.54 -16.05 -13.55 multi_news 406 M 0.10 0.15 -1.30 0.37 0.86 0.95 0.94 0.58 0.73 48.71 -14.69 -9.45 xsum 406 M 0.32 0.34 -1.31 0.52 0.81 0.96 0.97 0.53 0.74 52.36 -18.33 -13.06 google/pegasus-multi_news multi_news 570 M 0.06 0.02 -2.51 0.48 0.69 0.93 0.79 0.52 0.58 46.71 -12.68 -4.24 xsum 570 M 0.27 0.20 -2.58 0.53 0.49 0.94 0.85 0.39 0.72 49.01 -15.00 -10.98 google/pegasus-arxiv xsum 570 M 0.14 -0.22 -3.52 0.11 0.32 0.25 0.43 0.13 0.28 43.38 -9.33 0.79 google/pegasus-large cnn_dailymail 570 M 0.20 0.25 -1.21 0.26 0.87 0.66 0.84 0.43 0.51 53.73 -14.21 -9.28 multi_news 570 M 0.07 0.12 -1.52 0.18 0.82 0.82 0.72 0.37 0.42 47.86 -13.81 -5.25 xsum 
570 M 0.27 0.31 -1.19 0.24 0.88 0.86 0.89 0.37 0.70 47.92 -13.89 -4.18 google/pegasus-multi_news cnn_dailymail 570 M 0.16 0.16 -2.26 0.47 0.63 0.94 0.83 0.43 0.65 53.89 -14.39 -13.12 google/pegasus-arxiv cnn_dailymail 570 M 0.10 -0.26 -3.30 0.08 0.35 0.26 0.45 0.12 0.30 44.37 -4.86 -0.28 multi_news 570 M 0.05 -0.27 -3.59 0.06 0.32 0.36 0.46 0.12 0.39 39.87 -5.82 1.21 D Mutual Information Estimation with KNIFE D.1 Predictive mutual information The estimation of mutual information is widely acknowledged to be challenging, and in practical scenarios, we often resort to approximating it with a proxy measure known as Arimoto information (Arimoto, 1971) or recently rediscovered as predictive mutual information (Xu et al., 2020). Instead of computing the mutual information in the general case, it is computed under computational constraints enforced by a class 12711 Figure 7: PCA was performed on the embeddings of the source texts and summaries for three datasets under consideration. It is evident from the plots that the embeddings do not exhibit a Gaussian distribution but rather resemble a mixture of Gaussians. This characteristic makes the Gaussian estimator of MI unsuitable for our purposes. of predictive functions. Definition 1 (Predictive conditional entropies). Let T and S be two random variables respectively over Ω∗and a class of functions F: hF(T | S) = inf f∈F ETS[−log f[S](T)], hF(T | ∅) = inf f∈F ET[−log f[∅](T)]. For the sake of brevity, we denote hF(T | ∅) by hF(T). Definition 2 (Predictive F-information). IF(S →T) ≜hF(T) −hF(T | S), (28) IF(T →S) ≜hF(S) −hF(S | T). (29) If F represents the set of all possible functions, then we expect IF(T →S) = IF(S →T) = IF(S; T). However, due to computational limitations imposed by F, these estimators are not symmetrical. Therefore, we have two options for estimating the mutual information. Remark 3. The predictive mutual information is asymmetric with respect to S and T. In our context, we opt for using the predictive mutual information IF(S →T) to gauge the degree of information preservation about the source texts through the summarization process. Thus, we define bI(T; S) ≡IF(S →T). Experimentally, we observed that the predictive mutual information IF(T →S) did not yield consistent outcomes. Further details are provided in Appendix A. We leverage this asymmetry in Appendix F. Mutual information estimator. We utilize the KNIFE estimator (Pichler et al., 2022) to estimate the predictive mutual information between continuous random variables. This estimator defines F as the class of Gaussian Mixtures with K modes, introducing a soft-clustering approach for text generation evaluation (Pillutla et al., 2021; Pimentel et al., 2023). Through experimentation, we observed that varying the number of modes K did not notably affect the results. Therefore, we present our findings with K = 4. Initially, we examine the applicability of the basic Gaussian estimator of mutual information proposed in (Kim et al., 2022). However, we find it unsuitable for our scenario since the embeddings of both the source texts and summaries exhibit multimodal distributions, as illustrated in Figure 7. Instead, we find the KNIFE estimator (Pichler et al., 2022) to be better suited for our context as it is designed to estimate mixtures of Gaussians. 12712 Figure 8: Correlation between the source texts entropies as estimated per KNIFE and the answers to the SEAHORSE benchmark. We observe that the entropy of the source texts correlates negatively with the answers. 
E Comparison Across Datasets

One might seek to compare models evaluated on different datasets. However, the mutual information, in its current form, is not suitable for such comparisons as it relies on the entropy of the dataset. In Figure 8, we demonstrate that for the SEAHORSE benchmark, there are significant variations in the entropies of the source texts, introducing bias into the comparison of mutual informations estimated as I(T; S) = H(T) − H(T|S). To address this issue, we propose normalizing the mutual information by the entropy of the dataset. We show that the normalized version of the mutual information correlates with the responses to the questions of the SEAHORSE benchmark (cf. Figure 9).

Remark 4. While the SEAHORSE benchmark contains similar texts for all models, each model has been evaluated on a different sample. For instance, in the English subset, only 91 samples out of the 10 000 are common to all models. This introduces biases into the evaluation of the mutual information.

We observed that variations in the source text datasets significantly affect the estimation of mutual information. Smaller models tend to exhibit higher source text entropies, leading to misleading comparisons. To address this, we suggest computing the normalized MI between the source texts and summaries, defined as follows:

\[ \text{Normalized MI} \;\triangleq\; \frac{I(T;S)}{H(T)} \;=\; 1 - \frac{H(T \mid S)}{H(T)}, \tag{30} \]

where H(T|S) ⩽ H(T). In Figure 9, we observed weak correlations between this normalized MI and the human judgments reported in the SEAHORSE benchmark. This discrepancy might arise from the evaluation of different datasets for each model, suggesting that the normalized MI might not be the most suitable normalization method. Hence, comparing models evaluated on different datasets should be avoided for now. However, when evaluated on the same datasets, MI correlates well with the metrics trained on the SEAHORSE benchmark. This indicates that MI is a promising tool for evaluating summarizers.

Figure 9: Correlation between the normalized mutual information and the answers to the SEAHORSE benchmark.

Table 4: Correlation between MI, ROUGE, the Seahorse metrics, and the probability of success of the classification tasks, grouped by dataset, for non-trivial decoding strategies. SH. stands for Seahorse metrics and CT. for classification tasks.

Metric | I(S; T) | Attribution | BERTScore | Main ideas | ROUGE-L
SH. Attribution 0.39 1.00 0.78 0.82 0.38
SH. Comprehensible 0.10 0.38 0.49 0.47 0.08
SH. Conciseness 0.50 0.81 0.83 0.90 0.46
SH. Grammar 0.14 0.28 0.39 0.43 0.04
SH. Main ideas 0.51 0.82 0.90 1.00 0.50
SH. Repetition -0.31 -0.17 -0.14 -0.12 -0.57
CT. Topic 0.33 0.19 0.28 0.26 0.65
CT. Emotions 0.10 0.06 0.09 0.08 0.37
CT. Sentiment Analysis -0.04 -0.09 -0.10 -0.15 -0.00
CT. GPT Detector 0.51 0.32 0.49 0.42 0.72
CT. Policy 0.69 0.64 0.72 0.74 0.57
Emb. MPNET 0.69 0.64 0.75 0.76 0.57
Emb. all-MiniLM 0.65 0.65 0.74 0.75 0.53
Emb. Paraphrase 0.22 0.28 0.22 0.21 -0.07
Common ROUGE-L 0.54 0.38 0.51 0.50 1.00
Common BERTScore 0.50 0.78 1.00 0.90 0.51

E.1 SummEval dataset (Fabbri et al., 2021)

The SummEval dataset is well-suited for our task due to its inclusion of both summaries and corresponding source texts for identical documents. However, its limited size, comprising only 1700 samples, renders it insufficient for estimating mutual information.

E.2 Additional languages

We performed additional experiments in French, German and Spanish using a multilingual embedder to evaluate the mutual information. We obtained mixed results.
While the overall trends are similar, the lack of good multilingual embedders certainly hinders the results we can hope to obtain. It is a clear limit of our work since our method is highly dependent on the existence of a viable embedder for the text distribution at hand. E.3 Full results F Deciphering Summarizer Hierarchy We proposed to evaluate the mutual information I(T; S), where S ∼pθ(s|t) being a summarizer – in our case, a finetuned language model– and T is the random variable of source texts. If we have two summarizers pθ(s|t) and qϕ(s|t), we can evaluate the mutual information I(Sp →Sq), where Sp ∼pθ(s|t) and Sq ∼qϕ(s|t). The mutual information here indicates how much information about Sq conveys about Sp and vice-versa. Interestingly, this observation enables us to build a hierarchy of 12714 Table 5: Correlation between MI and ROUGE, and Seahorse metrics and probability of success of the classifcation task, grouped by datasets for non-trivial decoding strategies. SH. stands for Seahorse metrics and CT. for classification tasks. Metric I(S; T) Attribution Main ideas BARTScore BERTScore ROUGE-L SH. Attribution 0.56 1.00 0.26 0.95 0.75 0.42 Comprehensible 0.11 0.07 0.42 0.10 0.02 -0.37 Conciseness 0.80 0.67 0.81 0.66 0.67 0.24 Grammar -0.24 -0.35 0.16 -0.34 -0.30 -0.50 Main ideas 0.77 0.26 1.00 0.23 0.45 0.28 Repetition 0.01 -0.23 0.39 -0.23 -0.08 -0.34 CT. Sentiment analysis 0.65 0.68 0.34 0.68 0.70 0.54 GPT detector 0.73 0.83 0.39 0.86 0.89 0.65 Policy classification 0.83 0.40 0.69 0.46 0.77 0.71 Emotion classification 0.72 0.68 0.40 0.72 0.75 0.58 Emb. Emb. Paraphrase 0.79 0.76 0.60 0.74 0.69 0.29 Common ROUGE-L 0.55 0.42 0.28 0.45 0.70 1.00 BERTScore 0.79 0.75 0.45 0.82 1.00 0.70 BARTScore 0.57 0.95 0.23 1.00 0.82 0.45 summarizers. Some summarizers produce very informative summaries that can be used to predict the ones from other models while being so informative that other summaries cannot provide enough information to build them. We build the directed graph of the predictive power of the summaries of each model on the summaries of other models. A model’s average outgoing mutual information is the average mutual information between this model’s summaries and other models’ summaries. A model’s average incoming mutual information is the average mutual information between its summaries and those of other models. OOD models. Underperforming models, which were trained on disparate data distributions such as Arxiv or medical summarization, generally display low mutual information with other models and prove challenging to predict from conventional specialized systems (see FFigure 10). This outcome is unsurprising, given that these models exhibit significantly divergent behavior compared to others. Consequently, their outputs offer minimal insight into the outputs of other models. Strong models. Robust models like distilBart and Bart demonstrate high informativeness regarding other models, while also posing challenges for prediction (refer to Figure 10 and Figure 3). This outcome is anticipated, given that robust models can encapsulate significantly more information within their summaries compared to other models. As a result, their summaries prove valuable for predicting the outputs of weaker summarizers. However, these summaries are also considerably challenging to predict from the perspectives of those weaker models. 
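A minimal sketch of how this hierarchy can be built from pairwise mutual information estimates between the summary embeddings of each pair of models; the graph construction below is our illustration of the procedure, not the exact code behind Figure 10.

import numpy as np

def summarizer_hierarchy(emb_by_model, mi_fn, **mi_kwargs):
    """emb_by_model: dict mapping a model name to the embeddings of its summaries
    (one row per document, aligned across models).
    mi_fn(target_emb, predictor_emb): scalar MI estimate, e.g. the predictive_mi
    sketch given in Appendix D.
    Returns the pairwise MI table plus, for each model, its average outgoing MI
    (how informative its summaries are about the others, the node color in Figure 10)
    and its average incoming MI (how well the others predict it, the border color)."""
    names = list(emb_by_model)
    mi = {a: {b: mi_fn(emb_by_model[b], emb_by_model[a], **mi_kwargs)
              for b in names if b != a} for a in names}
    outgoing = {a: float(np.mean(list(mi[a].values()))) for a in names}
    incoming = {b: float(np.mean([mi[a][b] for a in names if a != b])) for b in names}
    return mi, outgoing, incoming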
Figure 11: Correlation with the SEAHORSE metrics on the CNN DailyMail dataset is measured by Pr(Yes), which represents the average probability across the dataset that the SEAHORSE model predicts the answer "Yes" to the corresponding question.

Figure 10: The predictive power of each model's summaries on the summaries of other models is depicted in the visualization. The central color denotes the average predictive power of that summarizer regarding the others, while the border color indicates the average predictive power of the other summarizers concerning that summarizer. A red center and blue border signify high informativeness, indicating a summarizer that is highly informative and difficult to predict. Conversely, a node with a blue center and red border implies low informativeness about the other summarizers but easy predictability by them.

Figure 12: As one would expect, the performance of the classification tasks on the summaries increases with the decoding size, as it allows the model to pack more information into the summary.

G Ablations

G.1 Decoding size

Impact of the length of the summary. The longer a summary, the more likely a downstream classifier is to produce the same output on the source text and on the summary. However, this trend is not always verified for weaker or OOD models. In Figure 12 and Figure 11, we can observe that the Pegasus model fine-tuned on arXiv papers tends to be less informative, even when generating extended summaries, when applied to the CNN-DailyMail dataset. This shows that the mutual information captures more than just the length of the summary: it also reflects its actual informativeness.

H Negative Results

H.1 Pointwise mutual information

While the mutual information gives good insights about a summarizer, the pointwise mutual information, computed for each pair of source text and summary, did not result in interesting correlations with the downstream tasks. Previous work has shown that it is a sound metric when the generative model itself is used to compute the mutual information (Bugliarello et al., 2020; Ethayarajh et al., 2022); however, in our scenario we fit an ad-hoc mutual information estimator.
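For completeness, the per-pair quantity in question can be written, under the fitted densities \(\hat{f}\) (our notation; averaging it over the evaluated pairs recovers the mutual information estimate):

\[
\operatorname{pmi}(t, s) \;=\; \log \hat{f}(t \mid s) \,-\, \log \hat{f}(t) \;=\; \log \frac{\hat{f}(t, s)}{\hat{f}(t)\,\hat{f}(s)} .
\]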
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12718–12736 August 11-16, 2024 ©2024 Association for Computational Linguistics EUROPA: A Legal Multilingual Keyphrase Generation Dataset Olivier Salaün, Frédéric Piedboeuf, Guillaume Le Berre David Alfonso Hermelo, Philippe Langlais RALI, DIRO, Université de Montréal, Canada {olivier.salaun, frederic.piedboeuf, guillaume.le.berre, philippe.langlais}@umontreal.ca [email protected] Abstract Keyphrase generation has primarily been explored within the context of academic research articles, with a particular focus on scientific domains and the English language. In this work, we present EUROPA, a dataset for multilingual keyphrase generation in the legal domain. It is derived from legal judgments from the Court of Justice of the European Union (EU), and contains instances in all 24 EU official languages. We run multilingual models on our corpus and analyze the results, showing room for improvement on a domain-specific multilingual corpus such as the one we present. 1 Introduction Keyphrases are short phrases that describe a text, and have been shown to be useful for many applications, from document indexing (Medelyan and Witten, 2006) to opinion mining (Berend, 2011). While heavily researched for the STEM (science, technology, engineering, and mathematics) domains, there has been little investigation on its application to law. This is surprising, as keywords can reduce the workload of legal experts by allowing them to get the gist of lengthy documents (Mandal et al., 2017; Sakiyama et al., 2023; Cérat et al., 2023). The scope of this work is to assess to what extent keyphrases can be automatically generated in the legal domain. Our contributions are the following: (1) We collected and curated EUROPA, the first open benchmark for legal KPG (keyphrase generation), spanning 24 languages and extracted from real-world European judgments.1 (2) We provide in-depth analysis of our corpus, highlighting its differences compared to other existing corpora. (3) We report performances achieved by multilingual generative models on this benchmark and 1The dataset can be downloaded from the Hugging Face hub at https://huggingface.co/datasets/NCube/ europa while evaluation scripts and model outputs are available at https://github.com/rali-udem/europa. point out areas where performances could be improved. 2 Related Work While English KPG has been studied fairly extensively in technical and academic domains, our work bridges two less common fields : legal KPG and multilingual KPG. In this section, we first review the literature surrounding keyphrase extraction (KPE) and generation before covering legal KPE and KPG. For a more complete survey on KPG, we refer to Xie et al. (2023). 2.1 Keyphrase Extraction and Generation Keyphrases may be either present in the input (could be retrieved with purely extractive methods) or absent from it (needing abstractive methods to generate them). 
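Concretely, the present/absent distinction is decided by string matching against the input document; the snippet below is a deliberately simplified illustration (evaluation pipelines, including ours, lowercase and stem both sides before matching, cf. Section 5.1), and the helper names are ours.

def is_present(keyphrase: str, document: str) -> bool:
    # Simplified check: case-insensitive verbatim occurrence in the input text.
    # Actual evaluation scripts usually stem both sides before this test.
    return keyphrase.lower() in document.lower()

def absent_ratio(keyphrases: list[str], document: str) -> float:
    # Share of target keyphrases not found in the input, i.e. roughly the quantity
    # reported as "% absent KPs" in Table 3.
    return sum(not is_present(kp, document) for kp in keyphrases) / max(len(keyphrases), 1)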
Many extractive methods have been suggested and used over the years, either using unsupervised heuristic rules and pre-trained machine learning (ML) models to detect and extract keyphrases (Liu et al., 2010; Gollapalli and Caragea, 2014; Meng et al., 2017; Yu and Ng, 2018) or using ML/deep learning (DL) classifiers trained on labelled data to learn to identify them (Yang et al., 2018; Chen et al., 2018; Wang et al., 2018; Basaldella et al., 2018; Alzaidy et al., 2019; Sun et al., 2019; Mu et al., 2020; Sahrawat et al., 2020). Extractive methods are however incomplete as, for most domains, a large portion of the target keyphrases are absent from the document, hence an increased interest in generative models in more recent works. While research on KPG initially used seq-to-seq models in the form of RNNs (Meng et al., 2017, 2019, 2021; Yuan et al., 2020a), modern KPG uses mostly fine-tuning of pre-trained transformer models (Vaswani et al., 2017), such as BART (Lewis et al., 2020), which showed itself to be as efficient as previous, more complex RNN models (Chowdhury et al., 2022). Current English state-of-the-art 12718 results are currently obtained by using BART models which are further pre-trained for scientific KPG (KeyBART by Kulkarni et al. (2022) and SciBART by Wu et al. (2022)). KPG has been conducted on limited domains, such as academic and STEM papers (Hulth, 2003a; Nguyen and Kan, 2007; Kim et al., 2010; Meng et al., 2017; Krapivin et al., 2009) or news (Gallina et al., 2019; Koto et al., 2022), although the need for KPG extends beyond those domains. Of particular interest is KPG for long documents, which has benefited from a few studies, especially in recent years. Ahmad et al. (2021) proposed a two-step approach of this problem, where they first select salient sentences before using those to generate keyphrases, while Garg et al. (2022) tested multiple ways of including information from the main body (generated summary, citation sentences, random sentences, etc), showing that adding a generated summary was the most efficient strategy. Mahata et al. (2022) proposed a corpus for scientific KPG using the whole document instead of only the title and abstract, which is what is currently used in KPG. They do not however run any baseline on that corpus. 2.2 Keyphrase Generation in the Context of Legal Domain Legal documents are more complex and longer than those commonly used in NLP, as illustrated by the European Court of Human Rights corpus (Chalkidis et al., 2019). This motivates a high demand from legal professionals for automatic digests of such documents (Bendahman et al., 2023), mostly from the angle of legal summarization. This task, more widespread than legal KPG, was first formalized as an extractive task in which the most salient segments from the document are returned as a summary (Farzindar and Lapalme, 2004a,b; Saravanan and Ravindran, 2010; Polsley et al., 2016; Aumiller et al., 2022). Similarly to KPG tasks, abstractive summarization models based on transformer architecture (Vaswani et al., 2017) became more widespread than extractive ones. However, open legal summarization benchmarks such as BigPatent (Sharma et al., 2019), Multi-LexSum (Shen et al., 2022), and EUR-Lex-Sum (Aumiller et al., 2022), are still scarce. Legal KPG may be considered as a specialized type of summarization but, to the best of our knowledge, no open benchmark is available for such a task. 
Moreover, most existing works have been focusing on keyphrase extraction (Le et al., 2013; Audich et al., 2016; Mandal et al., 2017; Daojian et al., 2019), ignoring absent keyphrases. The very first legal KPG experiments addressing abstractive keyphrases were conducted by Cérat et al. (2023) and Sakiyama et al. (2023), but within a monolingual setting and with no public dataset release. This is a major issue as generative models are particularly data-greedy. Furthermore, languagewise, most of the open legal datasets are in English (Chalkidis et al., 2023) and, when it comes to existing open multilingual legal benchmarks (Chalkidis et al., 2021; Savelka et al., 2021; Aumiller et al., 2022; Niklaus et al., 2023), none are related to KPG. Overall, our contribution aims at fulfilling these gaps by providing an open multilingual dataset for legal keyphrase generation. 3 The Creation of the EUROPA Dataset Cases in the Court of Justice of the European Union (CJEU), which aims at ensuring the consistent interpretation and application of the EU law across all EU institutions (Court of Justice of the European Union, 2023), can be processed in any of the 24 EU official languages, depending on the Member State involved, making drafting and translation pivotal and complex tasks. Stakeholders and judges communicate in their respective languages and rely on lawyer-linguists for document exchange. Once a judgment is rendered, it is translated into other EU languages to ensure consistency in the judgment text, keyphrases, and their legal implications (Domingues, 2017). Through private correspondence, the CJEU explained that keyphrases, drafted by the Registry and completed by the reporting judge’s cabinet or the advocate general, aim at providing concise case descriptions. Their meticulous construction establishes them as high-quality gold references for KPG evaluation. 3.1 Data Collection & Processing For the sake of clarity, we define a judgment as a collection of documents which refer to the same ruling provided in up to 24 languages. Each judgment has a unique CELEX identifier that remains the same for every language version available. Therefore, all language versions with the same CELEX ID are semantically and legally equivalent, and can be considered as parallel. Within a judgment, each language version is named an instance, a pair 12719 Input text (fr) La demande de décision préjudicielle porte sur l’interprétation de l’article 48 du règlement (CEE) n° 1408/71 du Conseil, du 14 juin 1971, relatif à l’application des régimes de sécurité sociale aux travailleurs salariés, aux travailleurs non salariés et aux membres de leur famille qui se déplacent à l’intérieur de la Communauté (JO L 149, p. 2) [...] KPs (fr) Assurance vieillesse – Travailleur ressortissant d’un État membre – Cotisations sociales – Périodes différentes – États membres différents – Calcul des périodes d’assurance – Demande de pension – Résidence dans un État tiers Input text (de) Das Vorabentscheidungsersuchen betrifft die Auslegung von Art. 48 der Verordnung (EWG) Nr. 1408/71 des Rates vom 14. Juni 1971 zur Anwendung der Systeme der sozialen Sicherheit auf Arbeitnehmer und Selbständige sowie deren Familienangehörige, die innerhalb der Gemeinschaft zu- und abwandern (ABl. L 149, S. 2) [...] 
KPs (de) Altersversicherung – Arbeitnehmer mit der Staatsangehörigkeit eines Mitgliedstaats – Sozialbeiträge – Unterschiedliche Zeiten – Unterschiedliche Mitgliedstaaten – Berechnung der Versicherungszeiten – Rentenantrag – Wohnort in einem Drittland Input text (en) This reference for a preliminary ruling concerns the interpretation of Article 48 of Council Regulation (EEC) No 1408/71 of 14 June 1971 on the application of social security schemes to employed persons, to self-employed persons and to members of their families moving within the Community (OJ, English Special Edition 1971 (II), p. 416) [...] KPs (en) Old-age insurance – Worker who is a national of a Member State – Social security contributions – Separate periods – Different Member States – Calculation of periods of insurance – Application for a pension – Residence in a non-Member State Table 1: Example of a judgment available in 22 languages with the corresponding input text and target keyphrases (French, German and English pairs are shown above). During the KPG task, keyphrases are separated by semicolons instead of dashes to ensure consistency with other KPG datasets. comprising the target keyphrases and the input text from the judgment, with both being in the same language. For example, the judgment shown in Table 1 has 22 instances spanning 22 languages. In May 2023, we performed a first pass on the EUR-Lex database2, the main legal EU online database (Bernet and Berteloot, 2006), scraping 304 426 query results corresponding to 19 319 judgments released by the CJEU. These query results (each being a potential instance for our corpus) consist in small snippets (as the one in Figure 1) containing the multi-line title and meta information of the case including the document identifier, but not the judgment text. Then, we made a second pass in order to scrape the plain text of the judgment using the document identifier collected during the first pass. The plain text is available as PDF and/or HTML, but we used the latter for convenience. A total of approximately two weeks was required for collecting the query results snippets and the judgments’ HTML files in all 24 languages. 2https://eur-lex.europa.eu Judgment of the Court (Second Chamber) of 3 April 2008. John Doe v Raad van Bestuur van de Sociale Verzekeringsbank. Reference for a preliminary ruling: Rechtbank te Amsterdam - Netherlands. Old-age insurance - Worker who is a national of a Member State - Social security contributions - Separate periods - Different Member States Calculation of periods of insurance - Application for a pension - Residence in a non-Member State. Case C-331/06. ECLI identifier: ECLI:EU:C:2008:188 Form: Judgment Author: Court of Justice Date of document: 03/04/2008 Figure 1: Query result with multi-line title of a case containing target keyphrases in English (highlighted). The case is the same as the one from Table 1. Nonhighlighted text is excluded from targets as it has low value from KPG standpoint. 12720 An overview of the corpus collection and curation process is presented in the following paragraphs (more details in Appendix E). The structure of a case HTML file generally consists of a mix of keyphrases and meta-information at the top of the document followed by paragraphs that will be merged into the input text. Separating the top of the document from the paragraphs is crucial in order to ensure that the input text is not contaminated by target keyphrases. 
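The second scraping pass described above can be sketched as follows. The URL template, delay and error handling below are illustrative assumptions rather than the exact crawler we used, but the overall flow (one HTML document per CELEX identifier and language, stripped of its markup) is the same.

import time
import requests
from bs4 import BeautifulSoup

# Illustrative URL pattern only: the exact EUR-Lex endpoint and query format are assumptions here.
EURLEX_HTML_URL = "https://eur-lex.europa.eu/legal-content/{lang}/TXT/HTML/?uri=CELEX:{celex}"

def fetch_judgment_text(celex_id: str, lang: str = "EN", delay: float = 1.0):
    """Download one language version of a judgment and return its plain text (None if unavailable)."""
    time.sleep(delay)  # stay polite with the server
    resp = requests.get(EURLEX_HTML_URL.format(lang=lang, celex=celex_id), timeout=30)
    if resp.status_code != 200:
        return None
    soup = BeautifulSoup(resp.text, "html.parser")
    return soup.get_text(separator="\n")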
However, identifying keyphrases from the judgment plain text was infeasible as they are surrounded by quotation marks, symbols, and HTML tags that vary across languages and time. Therefore, the keyphrases were obtained from the small snippet multi-line title as the one in Figure 1. Still, in such snippet, the keyphrases are mixed with meta information about the case which we need to get rid of as they are redundant and of low value from a KPG standpoint (e.g. sequences such as “Reference for a preliminary ruling” followed by the referring court and the country are so frequent that they would bring noise during training). Using the BeautifulSoup library3, languagespecific regular expressions and domain-aware engineered heuristics, we filtered out the sequence of keyphrases that had the highest number of meaningful phrases. An additional sanity check was also performed to ensure that keyphrases are properly split (e.g. missing whitespace beside a phrase delimiter) and that the number of phrases remains consistent across languages for each judgment. Our approach thus ensures the quality of the target keyphrases that will be used in the KPG task during training and evaluation. For the sake of evaluation consistency with the existing KPG literature, keyphrases are lowercased and separated by semicolons. Another critical part of corpus preparation is the input text from the judgment HTML files. These raw documents are delicate to process as they begin with a mix of meta-information (e.g. stakeholders identities) and target keyphrases (which must be excluded from input text), followed by numbered paragraphs. Moreover, since the HTML tags are inconsistent and vary across years and languages, BeautifulSoup cannot be used to extract the input text from the judgment. Therefore, we manually designed several language-specific regular expressions to reliably split the documents by 3https://pypi.org/project/beautifulsoup4 matching delimiters in all languages (e.g. “Judgment”, “Sentencia”, “Urteil”). By doing so, only the paragraphs of the judgment are retained as a single input string, while the section containing superficial meta-information and target keyphrases is taken away, thus preventing any data leakage. This was confirmed by a manual examination of 100 random instances, equally distributed across the 24 languages. After removing instances with empty input or target texts, our final corpus is composed of 17 833 judgments, in 16 languages on average and spanning cases from 1957 to 2023. This amounts to a total of 284 957 instances (input/keyphrases pairs). Expectedly, these instances are unevenly distributed across all 24 languages, with languages from older EU Member States being more represented in the dataset. For instance, French (the most represented language) amounts to 17 461 instances (6.13% of all instances) while Croatian and Irish (a significant outlier) contain 5153 (1.81%) and 92 (0.03%) instances, respectively. Split Type Present Absent F1@5 F1@M F1@5 F1@M Random 30.4 41.5 15.3 18.1 Temporal 21.7 27.4 5.6 7.0 Table 2: Performance for mBART50 with chronological and random splits (weighted average scores). 3.2 Chronological Split It is a common practice to randomly split data in a NLP task (Gorman and Bedrick, 2019). However, training and evaluating a model on data from overlapping time periods with similar distributions fails to assess the actual model’s temporal generalization (Lazaridou et al., 2021). 
This is why legal NLP generally uses a chronological split of documents instead of combining random shuffle with random split (Chalkidis et al., 2019; Medvedeva et al., 2021; Chalkidis et al., 2021). In our case, we tried both splits with an mBART50 model. On the test set, the performance in terms of F1@M for present keyphrases differs by 14.1 percentage points (11.1 for absent keyphrases) in favour of the random split. Results in Table 2 confirm that a random split, by ignoring real-world temporal concept drifts, tends to overestimate true performance. This is consistent with Søgaard et al. (2021); Mu et al. (2023). Therefore, we choose a chronological split for a proper assessment: the training set covers judgments from 1957 to 2010 (131 076 instances), the validation set those from 2011 to 2015 (63 373 instances), and the test set those from 2016 to 2023 (90 508 instances).4 Full details about the distribution of documents across these splits are shown in Appendix F.

4 The temporal split is available at https://huggingface.co/datasets/NCube/europa and a random split version for those who wish it can be found at https://huggingface.co/datasets/NCube/europa-random-split

Table 3: Comparative table among different open KPG benchmarks. Avg. input length refers to the average number of tokens split by whitespaces. Top values are in bold font, second top values are underlined.

Benchmark | # inst. | Avg. input length | Avg. num. KPs | % absent KPs | Domain | # lang.
Inspec (Hulth, 2003b) 2000 116 9.6 30.0 Comp. Sc. 1 (en)
Krapivin (Krapivin et al., 2009) 2304 7696 5.3 23.5 Comp. Sc. 1 (en)
SemEval (Kim et al., 2010) 244 7795 15.4 18.3 Comp. Sc. 1 (en)
KP20k (Meng et al., 2017) 554k 146 5.3 48.7 Comp. Sc. 1 (en)
NUS (Nguyen and Kan, 2007) 211 6938 11.7 17.6 Science 1 (en)
pak2018 (Campos et al., 2020) 50 96 3.6 84.2 Science 1 (pl)
110-PT-BN-KP (Marujo et al., 2011) 110 301 24.4 1.3 News 1 (pt)
WikiNews (Bougouin et al., 2013) 100 268 9.6 4.8 News 1 (fr)
KPTimes (Gallina et al., 2019) 280k 774 5.0 55.0 News 1 (en)
Papyrus (Piedboeuf and Langlais, 2022) 30k 307 7.2 37.2 Academic 18
EUROPA (ours) 285k 5220 7.6 52.6 Legal 24

4 Dataset Analysis

As shown in Table 3, compared to previous works and KPG benchmarks, EUROPA covers more languages and is highly positioned in terms of number of instances, ratio of absent keyphrases, and average input length. The distribution of keyphrases in our dataset varies across languages due to the fact that the most recent judgments tend to have a higher number of keyphrases attached to them (the average number of keyphrases per language can be found in Appendix F). Consequently, instances in languages from the most recent Member States tend to be biased towards having more keyphrases.

Another consequence of the temporal evolution of the average number of keyphrases, coupled with the chronological split, is that EUROPA's validation and test sets contain more keyphrases per instance on average (8.3 and 10.5, respectively) compared to its training set (5.4 keyphrases on average). This creates a discrepancy between the sets that practitioners should be aware of. However, we advocate that a higher number of keyphrases in the test set of EUROPA will be beneficial to the evaluation of the competing models by providing a larger number of potential gold keyphrases, and by assessing the capacity of models to generalize across time when target keyphrases follow different patterns with respect to the past.
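For convenience, both the temporal split and the random split mentioned in footnote 4 can be loaded from the Hugging Face hub with the datasets library. The snippet below is a usage sketch only; split and column names should be checked against the dataset cards, as they are not spelled out here.

from datasets import load_dataset

# Chronological split: train 1957-2010, validation 2011-2015, test 2016-2023 (Section 3.2).
europa = load_dataset("NCube/europa")
# Alternative version with randomly shuffled judgments, for comparison with Table 2.
europa_random = load_dataset("NCube/europa-random-split")

print(europa)  # inspect the available splits and their sizes
# Assuming a standard "train" split name; field names are dataset-specific.
print(europa["train"][0])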
5 Models

As the majority of EUROPA's target keyphrases are absent from input documents, we focus on generative models, as extractive ones such as YAKE (Campos et al., 2020) are ill-suited here. Recent state-of-the-art models in English KPG rely heavily on pretrained models, such as KeyBART (Kulkarni et al., 2022) or SciBART (Beltagy et al., 2019), which are pre-trained and fine-tuned on a massive corpus of English scientific documents. However, such models are not available in a multilingual format, nor are they specific to the legal domain. We therefore choose to use mBART50 (Tang et al., 2020), a variant of mBART (Liu et al., 2020) with support for 50 languages instead of 25. We also test mT5 (Xue et al., 2021), which covers 101 languages.

Most models, including mBART50, have a maximum input sequence length typically set to 1024 tokens. While mT5's input length can be set arbitrarily, it was originally pretrained with a context of 1024. The main caveat is mT5's memory complexity, which increases quadratically with input length, hence a prohibitive computing cost. While there exist some models whose maximum input length reaches up to 16k tokens, such as Longformer (Beltagy et al., 2020), BigBird (Zaheer et al., 2020) or LongT5 (Guo et al., 2022), these remain computationally expensive to run with our current resources and are only suitable for English data. Therefore, similarly to Cérat et al. (2023), we implemented an mBART variant with LSG attention (Condevaux and Harispe, 2023) such that the maximum input length reaches 8192 tokens instead of 1024. Doing so makes the memory complexity increase linearly with respect to the input length, instead of quadratically as is the case with a traditional attention mechanism. All models are trained with early stopping and a maximum of 10 epochs, except mBART50-8k and mT5-large with only 5 epochs, as their training is more time-consuming. Details about the hyperparameters and training process are provided in Appendices A and D.

Table 4: Weighted and unweighted average F1@k scores over all languages (%). MAP refers to MAP@50 for all keyphrases combined. Detailed results per language are shown in Appendix G. Highest and second highest scores are in bold font and underlined, respectively. Each model is run once. (a) Due to a greater number of parameters and higher training costs, these models were trained during 5 epochs. (b) With a 6th training epoch, mT5-large outperforms mT5-base for some metrics by at most 0.2 percentage points.

Model | Weighted: F1 Present @5 @10 @M | Weighted: F1 Absent @5 @10 @M | Weighted: MAP@50 | Unweighted: F1 Present @5 @10 @M | Unweighted: F1 Absent @5 @10 @M | Unweighted: MAP@50
YAKE | 2.0 2.2 2.2 | 0.0 0.0 0.0 | 0.2 | 1.9 2.1 2.1 | 0.0 0.0 0.0 | 0.2
mT5-small | 11.6 7.7 17.4 | 2.6 1.7 4.1 | 4.3 | 11.1 7.4 16.6 | 2.5 1.6 3.9 | 4.1
mT5-base | 13.2 8.8 19.5 | 3.4 2.3 5.4 | 5.5 | 12.7 8.5 18.8 | 3.3 2.2 5.2 | 5.3
mT5-large (a,b) | 13.2 8.8 19.5 | 3.4 2.3 5.5 | 5.6 | 12.6 8.4 18.6 | 3.3 2.2 5.2 | 5.4
mBART50 | 21.7 14.6 27.4 | 5.6 3.8 7.0 | 11.4 | 20.8 14.0 26.3 | 5.3 3.6 6.7 | 10.9
mBART50-8k (a) | 23.9 16.3 29.3 | 5.8 3.9 7.4 | 12.3 | 23.0 15.6 28.2 | 5.6 3.8 7.1 | 11.8

5.1 Evaluation Protocol

As has become common practice in recent works on KPG (Meng et al., 2017; Garg et al., 2023; Shen and Le, 2023; Chen and Iwaihara, 2023), model evaluation relies on two F1 measures: F1@k (k = {5, 10}) and F1@M, calculated separately for present and absent keyphrases. F1@M is computed using the entirety of the generated keyphrases while F1@k is computed using the best k generated keyphrases (by truncating if necessary).
In both cases, the number of target keyphrases remains untouched. F1@k is calculated using only the top k best-scoring keyphrases among the model's predictions, hence an upper bound below 1 whenever the number of target keyphrases exceeds k, which occurs in most of EUROPA's instances. This is why F1@M tends to better reflect the model performance, as all candidate keyphrases are taken into account without being truncated. For more details about these metrics, we refer to Yuan et al. (2020b). Following the literature, target and predicted keyphrases are lowercased and stemmed (e.g. Meng et al. (2017) applied the Porter stemmer for English) before conducting an exact match. Stemming is a critical step without which a candidate keyphrase could be erroneously considered wrong because of the morphological nature of the language. That is why we applied stemming to all languages so as to treat them as fairly as possible. For most of them, the Snowball stemmer (Porter, 2001) was used. In addition to F1 measures, we also computed MAP@50 (Mean Average Precision).

6 Results

F1@5, F1@10 and F1@M scores for present and absent keyphrases over all languages are reported in Table 4. Since the number of instances varies across languages, we report average scores that are weighted and unweighted. The former is an average score across all instances without taking language into consideration. Consequently, it can be highly influenced by high-resource languages covering more instances. The latter is an unweighted average among languages' scores. In other words, all languages have equal importance, thus reflecting the ability of a model to perform equally well in high- and low-resource languages. Unsurprisingly, the YAKE extractive model performs poorly, finding none of the absent keyphrases.

Table 5: Side-by-side F1@M scores comparison per language between mBART50 (1k) and mBART50-8k (8k).

Language | Present F1@M (1k / 8k) | Absent F1@M (1k / 8k)
French | 33.1 / 34.4 | 8.7 / 9.6
German | 33.5 / 35.8 | 4.9 / 5.0
English | 29.5 / 31.7 | 5.0 / 5.1
Italian | 30.5 / 33.3 | 3.7 / 3.6
Dutch | 31.9 / 34.4 | 3.5 / 3.7
Greek | 14.8 / 15.7 | 10.6 / 11.1
Danish | 30.4 / 32.6 | 3.2 / 2.6
Portuguese | 27.6 / 30.1 | 5.5 / 6.5
Spanish | 29.2 / 31.8 | 4.8 / 4.8
Swedish | 33.1 / 33.9 | 4.2 / 5.0
Finnish | 27.5 / 28.9 | 11.6 / 13.1
Lithuanian | 34.9 / 32.9 | 4.2 / 4.7
Estonian | 29.5 / 31.0 | 4.7 / 4.5
Czech | 26.4 / 29.5 | 13.7 / 14.6
Hungarian | 14.3 / 15.3 | 10.4 / 11.3
Latvian | 34.7 / 33.1 | 3.5 / 3.6
Slovene | 25.6 / 29.2 | 11.1 / 11.8
Polish | 26.3 / 29.7 | 11.7 / 13.2
Maltese | 24.7 / 27.0 | 7.7 / 7.1
Slovak | 25.2 / 24.9 | 15.7 / 16.0
Romanian | 30.7 / 33.8 | 5.4 / 5.7
Bulgarian | 29.0 / 29.2 | 3.4 / 3.5
Croatian | 8.6 / 15.5 | 4.1 / 3.8
Irish | 0.0 / 2.2 | 0.0 / 0.0

With a fixed input length of 1024 tokens, the three mT5 variants dramatically underperform mBART50. This is surprising as mT5 covers twice as many languages as mBART50. Also, as mT5-large has 59% more parameters than mBART50, it would have been expected to outperform the latter, but the results reveal otherwise. When comparing mBART50 with mBART50-8k, the increase in the maximum context length brings significant improvement across all metrics. While an average gain of around 2% (for F1@M over present phrases) may not seem significant, KPG evaluation often greatly underestimates the true performance of the models, due to the difficulty of correctly evaluating whether a keyphrase is correct or not (Wu et al., 2023). As such, the improvement shown by the mBART50-8k model is significant and reflected in the generated keyphrases, thus emphasizing the benefits of enlarging the maximum input length.
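Before turning to per-language results, the scoring protocol of Section 5.1 can be illustrated with the simplified re-implementation below. The reported numbers are computed with Meng et al. (2017)'s evaluation scripts; the helper names and the deduplication choice are ours, and the function is applied separately to present and absent keyphrases.

from nltk.stem.snowball import SnowballStemmer

def _normalize(phrase: str, stemmer: SnowballStemmer) -> str:
    return " ".join(stemmer.stem(tok) for tok in phrase.lower().split())

def f1_at_k(predicted: list[str], targets: list[str],
            language: str = "english", k=None) -> float:
    """F1@k keeps only the top-k predictions; k=None corresponds to F1@M.
    The target list is never truncated."""
    stemmer = SnowballStemmer(language)
    preds = [_normalize(p, stemmer) for p in (predicted if k is None else predicted[:k])]
    preds = list(dict.fromkeys(preds))  # drop duplicate predictions, keep order
    golds = {_normalize(t, stemmer) for t in targets}
    matches = sum(p in golds for p in preds)
    precision = matches / len(preds) if preds else 0.0
    recall = matches / len(golds) if golds else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0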
Still, Table 5 shows that input context enlargement does not have uniform effects across languages. For instance, for present keyphrases, some languages get small gains (Greek, Swedish), and some degrade (Lithuanian, Latvian). For high-resource languages such as English and Italian, performance improves on present phrases, but seems stagnant on absent ones. For low-resource languages, Croatian has the most dramatic improvement on present phrases while Irish gets one score above 0 with mBART50-8k. However, the performance for these languages still lags behind that of moderate-resource ones. This is understandable as these languages have almost no training instances in our temporal split setting, thus revealing the difficulty of conducting KPG for unseen languages.5

7 Analysis and Discussion

The first striking observation is that the mBART models, despite covering fewer languages than the mT5 variants, succeed in outperforming the latter. One explanation is that the mT5 small, base and large variants generated on average significantly fewer phrases per instance (2.1, 2.3 and 2.3, respectively) than mBART50 and mBART50-8k (5.5 and 6.0). Consequently, mT5 models are less likely to achieve high scores. The other noticeable result is that the mBART models even succeed in outperforming the mT5s in languages that only mT5 supports, such as Bulgarian or Greek. This suggests that Conneau et al. (2020)'s corpus used for pretraining mBART50 gave it the capacity to deal with many more languages than stated by Tang et al. (2020).

The improvement from mBART50 to mBART50-8k is significant and emphasizes the importance of a larger input length. Less than 14% of all instances fit into a 1024-token context window. Around 58% do when that length reaches 8k tokens. Building models that can efficiently generate keyphrases from larger documents is therefore crucial in order to achieve further progress (the length reaches 9160 tokens on average, and 17k at the 90th percentile, with the mBART50 tokenizer).6

5 With a random split dataset, the F1@M achieved by mBART50 for Croatian/Irish reaches 46.9/20.5 for present keyphrases and 13.9/10.2 for absent ones.
6 We tried a sliding-window-based model that was not conclusive, hence the need to find better ways to capture context.

Moreover, KPG models struggle with longer keyphrases (counted in whitespace-separated tokens). In the case of mBART50-8k, a matched target keyphrase contains on average 3.1 tokens, while an unmatched target phrase contains 5.7 tokens. Unmatched generated keyphrases contain 7.7 tokens on average. Upon manual inspection of generated keyphrases, most models succeed in generating simple phrases made of up to 3 terms, but they indeed struggle with longer noun phrases that refer to very specific or technical concepts such as "Market for antidepressant medicines (citalopram)" in Table 6. Also, although some candidate keyphrases are still relevant from a reader's standpoint, they are penalized by the exact matching evaluation approach, even though stemming is applied. For instance, the output "Concept of ‘restrictions of competition by object’" could be a decent generation for the target "Restriction by object" despite the lack of matching.

Table 6: Example of generated keyphrases with an instance in English. Blue phrases are exact-matched targets, purple ones are candidates which could be relevant but are not rewarded by the exact match evaluation approach. We added comments in grey with the number of times some targets were matched. Absent keyphrases are in italics.

Reference: Appeal [5] – Competition [5] – Agreements, decisions and concerted practices [3] – Pharmaceutical products – Market for antidepressant medicines (citalopram) – Settlement agreements concerning process patents concluded between a manufacturer of originator medicines holding those patents and manufacturers of generic medicines – Article 101 TFEU [1] – Potential competition – Restriction by object – Characterisation – Calculation of the amount of the fine
mT5-small: Appeal – Competition – Regulation (EC) No 1/2003
mT5-base: Appeal – Competition – Agreements, decisions and concerted practice
mT5-large: Appeal – Competition – Regulation (EC) No 1/2003
mBART50: Appeal – Competition – Agreements, decisions and concerted practices – Non-contractual liability of the Community – Guidelines on the application of Article 101 TFEU – Principle of proportionality – Obligation to state the reasons on which the decision is based
mBART50-8k: Appeal – Competition – Article 101 TFEU – Agreements, decisions and concerted practices – Antidepressant medicinal products – Article 23(2)(a) of Regulation (EC) No 1/2003 – Article 23(2)(a) of Regulation (EC) No 1/2003 – Concept of ‘restrictions of competition by object’ – Reduction of the amount of the fine

In order to mitigate this issue, we evaluated the models again with a semantic matching metric (Wu et al., 2023) allowing us to compare predictions and targets without having to apply any sort of post-processing or stemming (the tool is however only available for English). Results in Table 7 have a correlation of 0.99 with the F1@M scores obtained for English for both present and absent keyphrases. This seems consistent with the ranking among models observed previously in Table 4: mBART50-based models outperform mT5-based ones, and mBART50-8k is the front-runner.

Table 7: Semantic matching scores for present and absent keyphrases in English.

Model | Present | Absent
YAKE 0.335 0.155
mT5-small 0.399 0.268
mT5-base 0.422 0.283
mT5-large 0.436 0.270
mBART50 0.519 0.442
mBART50-8k 0.548 0.479

8 Conclusion

In this paper, we present EUROPA, a novel and open multilingual keyphrase generation dataset in the legal domain. We believe this dataset can help alleviate two current shortcomings of the keyphrase generation task: the lack of data in domains other than STEM; and the lack of multilingual non-English datasets. Our dataset is available at https://huggingface.co/datasets/NCube/europa.
We furthermore provide an analysis of EUROPA with key statistics, thus giving insights into the particularities of our dataset. Finally, we run multiple models in various settings in order to give an initial point of comparison for future works on EUROPA. Our corpus also highlights the need to efficiently capture larger input contexts, and will be a suitable testbed for models designed to do so.

Limitations

Low Resource Languages: The choice of a chronological split, though enabling a realistic KPG performance assessment with a quasi-equal amount of test instances for each language (except Irish), results in dramatic differences in available training data across languages. This is particularly true for the latest official EU languages, such as Croatian and Irish. One possible approach for mitigating this issue in future works would be to complement the training data with other legal documents released by other EU institutions. A random split version of our dataset is available for those who would like an identical distribution of languages across training, validation and test sets (https://huggingface.co/datasets/NCube/europa-random-split).

Computing Cost Scalability: The architecture of most of the models used here is derived from experiments in which the text input length rarely exceeds 1024 tokens. This is a major caveat when deploying such models on real-world data with significantly longer documents, such as the legal documents in our dataset, which often exceed several thousand tokens. Moreover, dealing with documents whose language is not supported by the tokenizer mechanically increases the number of tokens. This is a concern because running and training transformer models with higher maximum input sequence lengths is costly both in terms of time and GPU memory. The computing burden was already emphasized by Aumiller et al. (2022) in the context of legal summarization of EU documents and by Sakiyama et al. (2023) for legal KPG in Portuguese.

Ethics Statement

Copyright: The Publications Office of the European Union gave us written confirmation that cases of the Court of Justice of the European Union can be used for commercial and non-commercial purposes, as stated in the European Commission decision released on 12 December 2011 (European Commission, 2011). Our models are implemented with Wolf et al. (2020)'s transformers library licensed under Apache 2.0, while evaluation is performed with Meng et al. (2017)'s tools available under the MIT license. Therefore, one is granted permission to modify and distribute the licensed material.

Personal Data Protection: According to the CJEU personal data protection policy, the Court anonymizes personal information upon request of a party, a referring court or tribunal, or upon its own volition. In that case, anonymity is granted throughout the entire procedure according to Article 95 of the Rules of Procedure of the Court of Justice. It must be emphasized that anonymity must be requested at the earliest stage of the proceedings. Once the decision is drafted and released, there is no a posteriori opt-out option, due to the transparency principle with which the CJEU must comply. Moreover, we have written confirmation from the Publications Office of the EU that we are allowed to share and redistribute our corpus made from documents publicly available on EU platforms.

Acknowledgements

We would like to thank the European Union for granting us access to EUR-Lex data (http://eur-lex.europa.eu, © European Union, 1998-2023).
We are also grateful to Benjamin Cérat and Di Wu for their advice on some coding implementations. References Wasi Ahmad, Xiao Bai, Soomin Lee, and Kai-Wei Chang. 2021. Select, extract and generate: Neural keyphrase generation with layer-wise coverage attention. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1389–1404, Online. Association for Computational Linguistics. Rabah Alzaidy, Cornelia Caragea, and C Lee Giles. 2019. Bi-lstm-crf sequence labeling for keyphrase extraction from scholarly documents. In The world wide web conference, pages 2551–2557. Dhiren A Audich, Rozita Dara, and Blair Nonnecke. 2016. Extracting keyword and keyphrase from online privacy policies. In 2016 Eleventh International Conference on Digital Information Management (ICDIM), pages 127–132. IEEE. Dennis Aumiller, Ashish Chouhan, and Michael Gertz. 2022. EUR-lex-sum: A multi- and cross-lingual dataset for long-form summarization in the legal domain. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 7626–7639, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Marco Basaldella, Elisa Antolli, Giuseppe Serra, and Carlo Tasso. 2018. Bidirectional lstm recurrent neural network for keyphrase extraction. In Digital Libraries and Multimedia Archives: 14th Italian Research Conference on Digital Libraries, IRCDL 2018, Udine, Italy, January 25-26, 2018, Proceedings 14, pages 180–187. Springer. Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical 12726 Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615– 3620, Hong Kong, China. Association for Computational Linguistics. Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150. Nihed Bendahman, Karen Pinel-Sauvagnat, Gilles Hubert, and Mokhtar Boumedyen Billami. 2023. Quelles évolutions sur cette loi? entre abstraction et hallucination dans le domaine du résumé de textes juridiques. In 18ème Conférence en Recherche d’Information et Applications (CORIA 2023), pages 18–36. ATALA. Gábor Berend. 2011. Opinion expression mining by exploiting keyphrase extraction. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 1162–1170, Chiang Mai, Thailand. Asian Federation of Natural Language Processing. Hélène Bernet and Pascale Berteloot. 2006. Eur-lex: A multilingual on-line website for european union law. International Review of Law Computers & Technology, 20(3):337–339. Adrien Bougouin, Florian Boudin, and Béatrice Daille. 2013. TopicRank: Graph-based topic ranking for keyphrase extraction. In Proceedings of the Sixth International Joint Conference on Natural Language Processing, pages 543–551, Nagoya, Japan. Asian Federation of Natural Language Processing. Ricardo Campos, Vítor Mangaravite, Arian Pasquali, Alípio Jorge, Célia Nunes, and Adam Jatowt. 2020. Yake! keyword extraction from single documents using multiple local features. Information Sciences, 509:257–289. Ilias Chalkidis, Ion Androutsopoulos, and Nikolaos Aletras. 2019. Neural legal judgment prediction in English. 
In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4317–4323, Florence, Italy. Association for Computational Linguistics. Ilias Chalkidis, Manos Fergadiotis, and Ion Androutsopoulos. 2021. MultiEURLEX - a multi-lingual and multi-label legal document classification dataset for zero-shot cross-lingual transfer. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6974–6996, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Ilias Chalkidis, Nicolas Garneau, Catalina Goanta, Daniel Katz, and Anders Søgaard. 2023. LeXFiles and LegalLAMA: Facilitating English multinational legal language model development. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 15513–15535, Toronto, Canada. Association for Computational Linguistics. Bin Chen and Mizuho Iwaihara. 2023. Enhancing keyphrase generation by bart finetuning with splitting and shuffling. In Pacific Rim International Conference on Artificial Intelligence, pages 305–310. Springer. Jun Chen, Xiaoming Zhang, Yu Wu, Zhao Yan, and Zhoujun Li. 2018. Keyphrase generation with correlation constraints. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4057–4066, Brussels, Belgium. Association for Computational Linguistics. Md Faisal Mahbub Chowdhury, Gaetano Rossiello, Michael Glass, Nandana Mihindukulasooriya, and Alfio Gliozzo. 2022. Applying a generic sequence-tosequence model for simple and effective keyphrase generation. arXiv preprint arXiv:2201.05302. Charles Condevaux and Sébastien Harispe. 2023. Lsg attention: Extrapolation of pretrained transformers to long sequences. In Advances in Knowledge Discovery and Data Mining: 27th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2023, Osaka, Japan, May 25–28, 2023, Proceedings, Part I, pages 443–454. Springer. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440– 8451, Online. Association for Computational Linguistics. Court of Justice of the European Union. 2023. Annual report 2022. Benjamin Cérat, Olivier Salaün, Noreddine Ben Jillali, Marc-André Morissette, Isabela Pocovnicu, Emma Elliot, and François Harvey. 2023. LexKey: A Keyword Generator for Legal Documents. In Proceedings of the Sixth Workshop on Automated Semantic Analysis of Information in Legal Text (ASAIL 2023), June 23, 2023, Braga, Portugal. Zeng Daojian, Tong Guowei, Dai Yuan, Li Feng, Han Bing, and Xie Songxian. 2019. Keyphrase extraction for legal questions based on a sequence to sequence model. Journal of Tsinghua University (Science and Technology), 59(4):256–261. Joana Sousa Domingues. 2017. The multilingual jurisprudence of the court of justice and the idea of uniformity in european union law. UNIO–EU Law Journal, 3(2):125–138. European Commission. 2011. Commission Decision of 12 December 2011 on the reuse of Commission documents. Atefeh Farzindar and Guy Lapalme. 2004a. Legal text summarization by exploration of the thematic structure and argumentative roles. In Text Summarization 12727 Branches Out, pages 27–34, Barcelona, Spain. Association for Computational Linguistics. 
Atefeh Farzindar and Guy Lapalme. 2004b. Letsum, an automatic legal text summarizing. In Legal knowledge and information systems: JURIX 2004, the seventeenth annual conference, volume 120, page 11. IOS Press. Ygor Gallina, Florian Boudin, and Béatrice Daille. 2019. Kptimes: A large-scale dataset for keyphrase generation on news documents. In Proceedings of the 12th International Conference on Natural Language Generation, pages 130–135. Krishna Garg, Jishnu Ray Chowdhury, and Cornelia Caragea. 2022. Keyphrase generation beyond the boundaries of title and abstract. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 5809–5821, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Krishna Garg, Jishnu Ray Chowdhury, and Cornelia Caragea. 2023. Data augmentation for low-resource keyphrase generation. In Findings of the Association for Computational Linguistics: ACL 2023, pages 8442–8455, Toronto, Canada. Association for Computational Linguistics. Sujatha Das Gollapalli and Cornelia Caragea. 2014. Extracting keyphrases from research papers using citation networks. In Proceedings of the AAAI conference on artificial intelligence, volume 28. Kyle Gorman and Steven Bedrick. 2019. We need to talk about standard splits. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2786–2791, Florence, Italy. Association for Computational Linguistics. Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, and Yinfei Yang. 2022. LongT5: Efficient text-to-text transformer for long sequences. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 724– 736, Seattle, United States. Association for Computational Linguistics. Anette Hulth. 2003a. Improved automatic keyword extraction given more linguistic knowledge. In Proceedings of the 2003 conference on Empirical methods in natural language processing, pages 216–223. Anette Hulth. 2003b. Improved automatic keyword extraction given more linguistic knowledge. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 216–223. Su Nam Kim, Olena Medelyan, Min-Yen Kan, and Timothy Baldwin. 2010. SemEval-2010 task 5 : Automatic keyphrase extraction from scientific articles. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 21–26, Uppsala, Sweden. Association for Computational Linguistics. Fajri Koto, Timothy Baldwin, and Jey Han Lau. 2022. Lipkey: A large-scale news dataset for absent keyphrases generation and abstractive summarization. In Proceedings of the 29th International Conference on Computational Linguistics, pages 3427–3437. Mikalai Krapivin, Aliaksandr Autaeu, and Maurizio Marchese. 2009. Large dataset for keyphrases extraction. Mayank Kulkarni, Debanjan Mahata, Ravneet Arora, and Rajarshi Bhowmik. 2022. Learning rich representation of keyphrases from text. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 891–906, Seattle, United States. Association for Computational Linguistics. Angeliki Lazaridou, Adhi Kuncoro, Elena Gribovskaya, Devang Agrawal, Adam Liska, Tayfun Terzi, Mai Gimenez, Cyprien de Masson d’Autume, Tomas Kocisky, Sebastian Ruder, et al. 2021. Mind the gap: Assessing temporal generalization in neural language models. Advances in Neural Information Processing Systems, 34:29348–29363. Tho Thi Ngoc Le, Minh Le Nguyen, and Akira Shimazu. 2013. Unsupervised keyword extraction for japanese legal documents. 
In JURIX, pages 97–106. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880, Online. Association for Computational Linguistics. Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726–742. Zhiyuan Liu, Wenyi Huang, Yabin Zheng, and Maosong Sun. 2010. Automatic keyphrase extraction via topic decomposition. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 366–376, Cambridge, MA. Association for Computational Linguistics. Ilya Loshchilov and Frank Hutter. 2018. Decoupled weight decay regularization. In International Conference on Learning Representations. Debanjan Mahata, Navneet Agarwal, Dibya Gautam, Amardeep Kumar, Swapnil Parekh, Yaman Kumar Singla, Anish Acharya, and Rajiv Ratn Shah. 2022. Ldkp: A dataset for identifying keyphrases from long scientific documents. arXiv preprint arXiv:2203.15349. 12728 Arpan Mandal, Kripabandhu Ghosh, Arindam Pal, and Saptarshi Ghosh. 2017. Automatic catchphrase identification from legal court case documents. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pages 2187–2190. Luis Marujo, Márcio Viveiros, and João Paulo da Silva Neto. 2011. Keyphrase cloud generation of broadcast news. 12th Annual Conference of the International Speech Communication Association 2011 (INTERSPEECH 2011), pages 2404–2407. Olena Medelyan and Ian H Witten. 2006. Thesaurus based automatic keyphrase indexing. In Proceedings of the 6th ACM/IEEE-CS joint conference on Digital libraries, pages 296–297. Masha Medvedeva, Ahmet Üstün, Xiao Xu, Michel Vols, and Martijn Wieling. 2021. Automatic judgement forecasting for pending applications of the european court of human rights. In ASAIL/LegalAIIA@ ICAIL. Rui Meng, Xingdi Yuan, Tong Wang, Peter Brusilovsky, Adam Trischler, and Daqing He. 2019. Does order matter? an empirical study on generating multiple keyphrases as a sequence. arXiv preprint arXiv:1909.03590. Rui Meng, Xingdi Yuan, Tong Wang, Sanqiang Zhao, Adam Trischler, and Daqing He. 2021. An empirical study on neural keyphrase generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4985–5007, Online. Association for Computational Linguistics. Rui Meng, Sanqiang Zhao, Shuguang Han, Daqing He, Peter Brusilovsky, and Yu Chi. 2017. Deep keyphrase generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 582–592, Vancouver, Canada. Association for Computational Linguistics. Funan Mu, Zhenting Yu, LiFeng Wang, Yequan Wang, Qingyu Yin, Yibo Sun, Liqun Liu, Teng Ma, Jing Tang, and Xing Zhou. 2020. Keyphrase extraction with span-based feature representations. arXiv preprint arXiv:2002.05407. Yida Mu, Kalina Bontcheva, and Nikolaos Aletras. 2023. It’s about time: Rethinking evaluation on rumor detection benchmarks using chronological splits. 
In Findings of the Association for Computational Linguistics: EACL 2023, pages 736–743, Dubrovnik, Croatia. Association for Computational Linguistics. Thuy Dung Nguyen and Min-Yen Kan. 2007. Keyphrase extraction in scientific publications. In International conference on Asian digital libraries, pages 317–326. Springer. Joel Niklaus, Veton Matoshi, Pooja Rani, Andrea Galassi, Matthias Stürmer, and Ilias Chalkidis. 2023. LEXTREME: A multi-lingual and multi-task benchmark for the legal domain. arXiv preprint arXiv:2301.13126. Frédéric Piedboeuf and Philippe Langlais. 2022. A new dataset for multilingual keyphrase generation. Advances in Neural Information Processing Systems, 35:38046–38059. Seth Polsley, Pooja Jhunjhunwala, and Ruihong Huang. 2016. CaseSummarizer: A system for automated summarization of legal texts. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: System Demonstrations, pages 258–262, Osaka, Japan. The COLING 2016 Organizing Committee. M.F. Porter. 2001. Snowball: A language for stemming algorithms. Dhruva Sahrawat, Debanjan Mahata, Haimin Zhang, Mayank Kulkarni, Agniv Sharma, Rakesh Gosangi, Amanda Stent, Yaman Kumar, Rajiv Ratn Shah, and Roger Zimmermann. 2020. Keyphrase extraction as sequence labeling using contextualized embeddings. In Advances in Information Retrieval: 42nd European Conference on IR Research, ECIR 2020, Lisbon, Portugal, April 14–17, 2020, Proceedings, Part II 42, pages 328–335. Springer. Kenzo Sakiyama, Rodrigo Nogueira, and Roseli A. F. Romero. 2023. Automated keyphrase generation for brazilian legal information retrieval. In 2023 International Joint Conference on Neural Networks (IJCNN), pages 1–8. M Saravanan and Balaraman Ravindran. 2010. Identification of rhetorical roles for segmentation and summarization of a legal judgment. Artificial Intelligence and Law, 18(1):45–76. Jaromir Savelka, Hannes Westermann, Karim Benyekhlef, Charlotte S Alexander, Jayla C Grant, David Restrepo Amariles, Rajaa El Hamdani, Sébastien Meeùs, Aurore Troussel, Michał Araszkiewicz, et al. 2021. Lex rosetta: transfer of predictive models across languages, jurisdictions, and legal domains. In Proceedings of the eighteenth international conference on artificial intelligence and law, pages 129–138. Eva Sharma, Chen Li, and Lu Wang. 2019. BIGPATENT: A large-scale dataset for abstractive and coherent summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2204–2213, Florence, Italy. Association for Computational Linguistics. Lingyun Shen and Xiaoqiu Le. 2023. An enhanced method on transformer-based model for one2seq keyphrase generation. Electronics, 12(13):2968. Zejiang Shen, Kyle Lo, Lauren Yu, Nathan Dahlberg, Margo Schlanger, and Doug Downey. 2022. Multilexsum: Real-world summaries of civil rights lawsuits at multiple granularities. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track. 12729 Anders Søgaard, Sebastian Ebert, Jasmijn Bastings, and Katja Filippova. 2021. We need to talk about random splits. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1823–1832, Online. Association for Computational Linguistics. Zhiqing Sun, Jian Tang, Pan Du, Zhi-Hong Deng, and Jian-Yun Nie. 2019. Divgraphpointer: A graph pointer network for extracting diverse keyphrases. 
In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 755–764. Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. 2020. Multilingual translation with extensible multilingual pretraining and finetuning. arXiv preprint arXiv:2008.00401. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. Yanan Wang, Qi Liu, Chuan Qin, Tong Xu, Yijun Wang, Enhong Chen, and Hui Xiong. 2018. Exploiting topic-based adversarial neural network for cross-domain keyphrase extraction. In 2018 IEEE International Conference on Data Mining (ICDM), pages 597–606. IEEE. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Di Wu, Wasi Uddin Ahmad, and Kai-Wei Chang. 2022. Pre-trained language models for keyphrase generation: A thorough empirical study. arXiv preprint arXiv:2212.10233. Di Wu, Da Yin, and Kai-Wei Chang. 2023. Kpeval: Towards fine-grained semantic-based evaluation of keyphrase extraction and generation systems. arXiv preprint arXiv:2303.15422. Binbin Xie, Jia Song, Liangying Shao, Suhang Wu, Xiangpeng Wei, Baosong Yang, Huan Lin, Jun Xie, and Jinsong Su. 2023. From statistical methods to deep learning, automatic keyphrase prediction: A survey. Information Processing & Management, 60(4):103382. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483–498, Online. Association for Computational Linguistics. Min Yang, Yuzhi Liang, Wei Zhao, Wei Xu, Jia Zhu, and Qiang Qu. 2018. Task-oriented keyphrase extraction from social media. Multimedia Tools and Applications, 77:3171–3187. Yang Yu and Vincent Ng. 2018. Wikirank: Improving keyphrase extraction based on background knowledge. arXiv preprint arXiv:1803.09000. Xingdi Yuan, Tong Wang, Rui Meng, Khushboo Thaker, Peter Brusilovsky, Daqing He, and Adam Trischler. 2020a. One size does not fit all: Generating and evaluating variable number of keyphrases. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7961–7975. Xingdi Yuan, Tong Wang, Rui Meng, Khushboo Thaker, Peter Brusilovsky, Daqing He, and Adam Trischler. 2020b. One size does not fit all: Generating and evaluating variable number of keyphrases. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7961–7975, Online. Association for Computational Linguistics. Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, et al. 2020. 
Big bird: Transformers for longer sequences. In NeurIPS. A Hyperparameters While the batch size was set at 16 (with the help of gradient accumulation when needed), a grid search was performed for the learning rate over the following values: 1 × 10^-6, 5 × 10^-6, 1 × 10^-5, 5 × 10^-5, 1 × 10^-4. For both mBART50 models, we settled on a learning rate of 1 × 10^-5, which we coupled with the AdamW (Loshchilov and Hutter, 2018) optimizer of PyTorch. The same applies to the mT5 variants, but with a learning rate of 1 × 10^-4. All other optimizer hyperparameters were left at their default values. Once the learning rate was determined, we also added 2500 warmup steps and a scheduler with linear decay. The objective function was the cross-entropy loss. For mBART models, we applied mixed precision (FP16), while bfloat16 (BF16) was used for mT5 models. The maximum number of epochs was set to 10, and the model achieving the lowest loss on the validation set was used for the test set. Patience was set to 5 epochs. In the case of large models (mBART-8k, mT5-large), the maximum number of epochs was set to 5 due to much longer training epochs. For generation, we used beam search with 10 beams and returned the best sequence, with a maximum output length of 256 tokens. The hardware consisted of an RTX 4090 and two A5000 graphics cards, each with approximately 24 GB of VRAM. Inference only required a single card for all models, including those with the highest number of parameters. Training for either mBART-8k or mT5-large was performed on 2 GPUs for up to 62 hours overall. For the other models, training lasted up to 28 hours on a single GPU (more details in Table 10). Inference on the test set took up to 34 hours, depending on the model. Given the time required to generate keyphrases at inference, the results are based on a single run. B Language Support per Model mBART50 by Tang et al. (2020) and mT5 by Xue et al. (2021), though presented as multilingual models, do not cover the same languages, and both only partially support the official EU languages, as shown in Table 8. Although mT5 (whose tokenizer is the same for all variant sizes) supports roughly twice as many languages as mBART50, the vocabularies of the two tokenizers are roughly the same size: 250 112 and 250 054 tokens, respectively. Such a large vocabulary is a computational burden at inference time, especially during decoding. By design, the mBART50 tokenizer adds to input and output sequences a token specifying the source and target languages (e.g. [en_XX] for English). As not all EU languages are covered by mBART50, we manually added special tokens to the tokenizer for all unsupported languages (e.g. [bg_BG] for Bulgarian). Following the mBART50 tokenization process, a language token is added to both the input and output sequences. Unlike translation tasks, our multilingual KPG task is such that, for each instance, the input document and the target keyphrases are in the same language. Therefore, the language token remains the same in the input and output within the same instance. For mT5, such language prefixes are not required. However, applying the mBART50 language tokens to mT5-small brought an improvement of around 1 percentage point across F1@k metrics, which motivated us to use the mBART50 language prefixes for the rest of the mT5 models as well.
Model | mBART50 | mT5
Total lang. | 52 | 101
Unsupported EU lang. | bg, da, el, ga, hu, mt, sk | hr
Table 8: Language Coverage by Multilingual Models
C Number of Parameters per Model Model # param.
(millions) google/mt5-small 172 google/mt5-base 390 google/mt5-large 973 facebook/mbart-large-50 611 mBART-8k 626 Table 9: Number of parameters (in millions) for each model. Each model name corresponds to its identifier on Wolf et al. (2020)’s Hugging Face platform, expect mBART-8k which we produced by applying LSG attention (Condevaux and Harispe, 2023) to mBART. D Training and Inference Times Model Num. of training epochs Total training time Total inference time mT5-small 10 11h 7h mT5-base 10 28h 9h mT5-large* 5 53h 16h mBART50 10 25h 31h mBART50-8k* 5 61h 34h Table 10: Total training and inference times for each model. Total training time corresponds to the duration of all training epochs, including computation of loss on the validation set. Total inference time refers to the duration required for generating keyphrases over the entire test set. All measurements correspond to a single run. *Due to greater number of parameters and higher training costs, these models were trained during 5 epochs. 12731 E Details About the Data Preprocessing Figure 2: Diagram illustrating the data collection procedure. Target keyphrases and input text are extracted from separate pieces of HTML that are linked by a common case identifier. After preprocessing, they are merged into a single instance. The same process is repeated in every language. 12732 F Additional Statistics Set # judg. # inst. Avg. num. KPs % present KPs % absent KPs Train (1957-2010) 10 003 131 076 5.4 45.6 54.4 Valid. (2011-2015) 3 391 63 373 8.3 45.1 54.9 Test (2016-2023) 4 439 90 508 10.5 51.7 48.3 Table 11: Statistics on the training, validation, and test sets of EUROPA. # judg. and # inst. stand for the numbers of judgments (judgment is a ruling released in several languages) and instances (an instance is a pair with input text and target keyphrases both in a single language). Language Lang. ISO code Official EU lang. since Train set size Valid. set size Test set size Avg. num. KPs % absent KPs Avg. KPs length Avg. input length French fr 1958 9692 3338 4431 7.0 51.6 5.5 5651 German de 1958 9028 2817 3962 6.9 47.8 4.6 4877 English en 1973 9047 2826 3950 6.8 46.8 5.6 5539 Italian it 1958 8978 2831 3946 6.9 53.2 5.6 5402 Dutch nl 1958 9066 2809 3875 6.8 50.5 5.0 5374 Greek el 1981 8945 2833 3917 6.9 73.9 5.3 3894 Danish da 1973 9018 2702 3886 6.8 52.2 4.8 5039 Portuguese pt 1986 8347 2801 3913 7.1 56.0 5.8 5763 Spanish es 1986 8462 2807 3932 7.0 49.4 6.1 6154 Swedish sv 1995 6736 2788 3903 7.5 45.8 4.8 5392 Finnish fi 1995 6874 2806 3886 7.5 56.5 4.0 4139 Lithuanian lt 2004 3598 2828 3908 8.5 47.1 4.8 4674 Estonian et 2004 3621 2810 3912 8.4 42.4 4.0 4363 Czech cs 2004 3621 2804 3913 8.4 54.9 5.1 5167 Hungarian hu 2004 3625 2805 3912 8.4 55.1 4.8 5138 Latvian lv 2004 3627 2814 3896 8.5 47.1 4.8 4825 Slovene sl 2004 3568 2800 3863 8.4 48.7 4.8 5187 Polish pl 2004 3526 2732 3918 8.4 63.8 5.1 5385 Maltese mt 2004 3394 2739 3905 8.5 58.1 5.1 5185 Slovak sk 2004 3337 2803 3894 8.5 56.9 5.1 5233 Romanian ro 2007 2484 2812 3916 8.7 51.5 5.8 6085 Bulgarian bg 2007 2480 2822 3873 8.7 47.0 5.8 5827 Croatian hr 2013 2 1246 3905 10.1 46.0 5.2 6367 Irish ga 2022 0 0 92 11.9 48.6 6.5 10101 Overall 131 076 63 373 90 508 7.6 52.6 5.1 5220 Table 12: Distribution of documents across languages and splits. High-volume languages are those spoken by the earliest EU Member States (e.g. France and Germany are among the Founding States of the EU). Avg. KPs length refers to the average number of words per keyphrase (split at whitespaces). Avg. 
input length refers to the average number of tokens in the input text (split at whitespaces). 12733 G Scores per Language for each Model Model Bulgarian * Croatian † Czech Danish * Dutch F1@5 F1@M F1@5 F1@M F1@5 F1@M F1@5 F1@M F1@5 F1@M YAKE 1.8 1.9 2.6 2.4 3.0 2.8 0.6 1.0 2.1 2.5 mT5-small 13.3 19.9 1.5 2.4 9.5 15.3 15.1 21.5 15.4 22.4 mT5-base 14.7 21.5 3.9 6.1 10.9 17.2 16.8 23.9 16.3 23.5 mT5-large 14.6 21.6 3.3 5.1 11.2 17.8 16.6 23.4 16.4 23.4 mBART50 23.0 29.0 5.8 8.6 20.0 26.4 25.0 30.4 25.5 31.9 mBART50-8k 23.7 29.2 11.8 15.5 23.2 29.5 27.6 32.6 28.3 34.4 Model English Estonian Finnish French German F1@5 F1@M F1@5 F1@M F1@5 F1@M F1@5 F1@M F1@5 F1@M YAKE 2.7 3.4 1.9 2.5 1.1 1.2 2.4 2.5 2.4 3.1 mT5-small 15.0 20.5 13.2 18.4 9.3 14.8 15.4 23.0 16.7 23.8 mT5-base 16.1 21.9 15.5 21.3 12.2 19.1 16.6 24.3 17.8 24.9 mT5-large 16.4 22.2 15.6 21.5 12.0 19.0 16.7 24.5 17.9 25.2 mBART50 24.9 29.5 25.5 29.5 19.1 27.5 26.6 33.1 28.7 33.5 mBART50-8k 27.8 31.7 27.2 31.0 21.1 28.9 28.8 34.4 31.1 35.8 Model Greek * Hungarian * Irish * Italian Latvian F1@5 F1@M F1@5 F1@M F1@5 F1@M F1@5 F1@M F1@5 F1@M YAKE 1.0 0.9 2.1 1.9 0.7 1.0 2.9 3.0 2.4 2.8 mT5-small 6.3 11.0 7.4 11.3 0.2 0.2 13.8 21.0 12.7 18.7 mT5-base 8.3 14.2 9.1 13.7 1.3 1.8 14.8 21.9 15.0 21.4 mT5-large 7.7 13.2 9.0 13.6 0.0 0.0 14.1 20.9 14.0 20.2 mBART50 9.3 14.8 10.0 14.3 0.0 0.0 23.8 30.5 28.4 34.7 mBART50-8k 10.7 15.7 10.9 15.3 1.4 2.2 27.2 33.3 27.8 33.1 Model Lithuanian Maltese * Polish Portuguese Romanian F1@5 F1@M F1@5 F1@M F1@5 F1@M F1@5 F1@M F1@5 F1@M YAKE 1.6 1.7 0.3 0.4 2.3 2.2 2.5 2.6 1.7 1.7 mT5-small 12.8 18.7 10.1 15.8 8.7 14.3 7.2 11.4 13.7 20.2 mT5-base 16.2 23.3 11.3 17.7 10.7 17.6 8.4 13.0 14.0 20.5 mT5-large 15.2 21.8 11.4 17.7 10.9 17.9 8.5 13.2 14.7 21.6 mBART50 28.8 34.9 18.8 24.7 19.1 26.3 21.6 27.6 24.9 30.7 mBART50-8k 27.9 32.9 21.5 27.0 21.6 29.7 24.6 30.1 28.2 33.8 Model Slovak * Slovene Spanish Swedish F1@5 F1@M F1@5 F1@M F1@5 F1@M F1@5 F1@M YAKE 1.6 1.7 2.3 2.6 2.3 2.5 2.0 2.9 mT5-small 10.7 17.2 9.9 14.7 14.3 20.9 15.3 21.4 mT5-base 11.4 18.2 12.1 17.6 14.9 21.8 17.2 24.2 mT5-large 12.0 19.1 12.7 18.3 15.4 22.6 16.9 23.7 mBART50 17.8 25.2 20.4 25.6 23.7 29.2 28.2 33.1 mBART50-8k 18.8 24.9 23.9 29.2 26.4 31.8 29.3 33.9 Table 13: Present keyphrases prediction results for the multilingual setting. Languages with a star (*) are unsupported by mBART50. Croatian with a dagger (†) is unsupported by mT5. 
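For readers unfamiliar with the metrics reported in Tables 13 and 14, the sketch below illustrates one common way of computing present-keyphrase F1@M in the keyphrase generation literature: predictions and references are stemmed, a keyphrase counts as "present" if it appears in the stemmed input text, and F1 is computed over exact matches against all predictions. This is a generic illustration under those assumptions, with names of our choosing, not necessarily the exact protocol behind the tables.

```python
# Generic sketch of present-keyphrase F1@M with stemming (assumed protocol).
from nltk.stem import PorterStemmer  # English-only illustration; a multilingual
                                     # setup would need per-language stemmers.

stemmer = PorterStemmer()

def stem_phrase(phrase):
    # Lowercase and stem each token so that surface variants match.
    return " ".join(stemmer.stem(tok) for tok in phrase.lower().split())

def present_f1_at_m(predicted, gold, source_text):
    """predicted: model keyphrases in output order; gold: reference keyphrases;
    source_text: the input document. All arguments are plain strings or lists of strings."""
    stemmed_text = stem_phrase(source_text)
    gold_present = {stem_phrase(k) for k in gold if stem_phrase(k) in stemmed_text}
    pred_present = {stem_phrase(k) for k in predicted if stem_phrase(k) in stemmed_text}
    if not gold_present or not pred_present:
        return 0.0
    matches = len(pred_present & gold_present)
    if matches == 0:
        return 0.0
    precision = matches / len(pred_present)
    recall = matches / len(gold_present)
    return 2 * precision * recall / (precision + recall)
```

F1@5 follows the same logic but truncates (or pads) the prediction list to five keyphrases before matching.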
12734 Model Bulgarian * Croatian † Czech Danish * Dutch F1@5 F1@M F1@5 F1@M F1@5 F1@M F1@5 F1@M F1@5 F1@M YAKE 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 mT5-small 0.9 1.6 1.2 1.7 6.0 9.5 0.8 1.3 0.9 1.4 mT5-base 1.6 2.8 2.4 3.7 6.1 9.6 1.4 2.2 1.7 2.8 mT5-large 1.7 2.9 1.7 2.6 6.5 10.3 1.2 2.0 1.8 3.0 mBART50 2.6 3.4 3.6 4.1 11.0 13.7 2.5 3.2 2.8 3.5 mBART50-8k 2.5 3.5 3.2 3.8 11.7 14.6 1.9 2.6 2.9 3.7 Model English Estonian Finnish French German F1@5 F1@M F1@5 F1@M F1@5 F1@M F1@5 F1@M F1@5 F1@M YAKE 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 mT5-small 1.0 1.8 1.0 2.0 4.3 6.5 2.6 4.5 1.2 2.0 mT5-base 2.5 4.2 1.8 3.5 5.6 8.2 3.0 5.1 2.3 3.9 mT5-large 2.3 4.0 1.9 3.6 5.4 7.9 3.0 5.0 2.3 3.9 mBART50 3.7 5.0 3.0 4.7 10.2 11.6 7.0 8.7 3.6 4.9 mBART50-8k 3.8 5.1 3.0 4.5 11.4 13.1 7.7 9.6 3.6 5.0 Model Greek * Hungarian * Irish * Italian Latvian F1@5 F1@M F1@5 F1@M F1@5 F1@M F1@5 F1@M F1@5 F1@M YAKE 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 mT5-small 6.2 9.0 6.0 9.2 0.0 0.0 0.9 1.6 0.6 1.1 mT5-base 7.0 10.2 6.0 9.5 0.0 0.0 1.6 2.8 1.6 2.6 mT5-large 6.9 10.0 6.5 10.3 0.0 0.0 1.6 2.7 1.5 2.5 mBART50 8.7 10.6 8.0 10.4 0.0 0.0 3.0 3.7 2.7 3.5 mBART50-8k 9.2 11.1 8.9 11.3 0.0 0.0 2.8 3.6 2.8 3.6 Model Lithuanian Maltese * Polish Portuguese Romanian F1@5 F1@M F1@5 F1@M F1@5 F1@M F1@5 F1@M F1@5 F1@M YAKE 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 mT5-small 1.1 1.9 3.2 4.9 3.6 5.4 2.5 4.1 1.4 2.4 mT5-base 2.0 3.4 3.9 6.1 5.0 7.5 3.5 5.7 2.4 4.0 mT5-large 2.1 3.5 4.0 6.3 4.9 7.2 3.3 5.4 2.5 4.1 mBART50 3.1 4.2 6.0 7.7 9.9 11.7 4.4 5.5 4.2 5.4 mBART50-8k 3.5 4.7 5.4 7.1 11.2 13.2 5.1 6.5 4.4 5.7 Model Slovak * Slovene Spanish Swedish F1@5 F1@M F1@5 F1@M F1@5 F1@M F1@5 F1@M YAKE 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 mT5-small 6.2 9.8 4.9 8.3 1.4 2.4 1.1 1.9 mT5-base 6.8 10.7 5.6 9.4 2.1 3.5 2.3 3.9 mT5-large 7.0 11.0 6.1 10.3 2.2 3.6 2.0 3.3 mBART50 12.8 15.7 8.4 11.1 3.7 4.8 3.3 4.2 mBART50-8k 13.0 16.0 8.6 11.8 3.6 4.8 3.7 5.0 Table 14: Absent keyphrases prediction results for the multilingual setting. Languages with a star (*) are unsupported by mBART50. Croatian with a dagger (†) is unsupported by mT5. 12735 H Random Split As discussed previously, we perform a temporal split on the data to better reflect the reality of temporal concept shifting as well as new languages being introduced in the EU. However, a random split might be better in some cases, such as if we train a model for short-time use, and are willing to recollect data regularly to insure performance remains as predicted. In short, a random split can only procure guarantee about its performance as long as the real world data remains similar to the training data, while a temporal split give a view of the performance while accounting for changes in the real world data. Still, to better allow practitioners to use data as they desire, we released a random split (https: //huggingface.co/datasets/NCube/europa-random-split). The training, validation and test sets have respectively 10 003, 3 391 and 4 439 judgments. These figures are the same as in the chronologically split dataset. However, the number of instances per set in the random split setting differs since each set has the same distribution in terms of languages. Consequently, each set contains 159 306, 53 943 and 71 708 instances respectively. The results achieved by mBART50 are shown in Table 15. As expected, results are overall much higher since all training, validation and testing sets have the same distribution in terms of languages and vocabulary. 
We want to emphasize once again however that due to the fast changing nature of legal NLP, these performances are only valid for a short time after the data collection. Language F1 Present F1 Absent MAP @5 @10 @M @5 @10 @M @50 Weighted Avg. 30.4 20.0 41.5 15.3 10.3 18.1 20.6 Unweighted Avg. 30.6 20.2 41.3 15.2 10.3 17.9 20.7 French 30.2 19.7 41.2 14.9 9.8 17.8 20.2 German 30.0 19.8 40.3 14.2 9.3 18.4 19.8 English 32.8 22.0 43.6 9.9 6.5 13.3 19.0 Italian 27.7 18.2 38.7 16.3 10.7 20.0 19.4 Dutch 31.1 20.2 43.0 12.9 8.5 15.9 18.9 Greek 12.6 7.8 21.7 15.9 11.1 17.9 12.0 Danish 27.8 18.3 37.5 13.9 9.3 18.0 18.2 Portuguese 27.4 17.7 39.5 14.5 9.6 16.9 17.8 Spanish 30.1 19.7 41.2 14.1 9.2 17.2 19.2 Swedish 34.8 23.4 44.7 10.5 7.1 13.7 21.1 Finnish 28.2 17.7 42.4 17.9 12.3 21.0 20.1 Lithuanian 38.5 26.0 49.0 12.1 8.2 14.8 24.0 Estonian 40.2 28.0 48.1 9.8 6.6 12.4 24.8 Czech 28.9 18.6 41.8 24.4 16.9 26.6 24.5 Hungarian 27.1 17.2 40.3 18.4 12.5 21.6 20.1 Latvian 38.7 26.2 48.8 11.4 7.7 14.1 23.4 Slovenian 33.4 22.6 43.3 17.6 11.9 21.1 24.0 Polish 27.4 17.2 41.0 24.3 17.4 25.6 24.3 Maltese 28.1 17.9 40.5 18.3 12.6 20.5 19.8 Slovak 31.3 20.0 45.0 22.0 15.2 24.0 24.2 Romanian 35.2 23.5 46.5 19.4 13.2 21.5 25.6 Bulgarian 35.4 23.8 46.3 11.2 7.6 13.1 21.9 Croatian 40.8 28.6 46.9 13.7 9.3 13.9 26.4 Irish 15.9 11.5 20.5 7.7 5.1 10.2 8.9 Table 15: Results on a random split of the data, per language, achieved by mBART50. 12736
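For readers who want to work with the released random split, a minimal loading sketch is given below. It relies only on the Hugging Face datasets library and the repository name quoted above; the split and field names are assumptions to be checked against the dataset card.

```python
# Hedged sketch: load the released random split of EUROPA from the Hugging Face Hub.
# The split names follow the train/validation/test description above; the exact
# column names are assumptions and should be verified on the dataset card.
from datasets import load_dataset

europa_random = load_dataset("NCube/europa-random-split")
print(europa_random)                 # expected splits: train / validation / test
first = europa_random["train"][0]
print(sorted(first.keys()))          # e.g. input text, target keyphrases, language
```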
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12737–12752 August 11-16, 2024 ©2024 Association for Computational Linguistics GLIMPSE: Pragmatically Informative Multi-Document Summarization of Scholarly Reviews ∗Maxime DARRIN1,2,3,4 ∗Ines AROUS2,3 Pablo PIANTANIDA1,2,4,5 Jackie Chi Kit CHEUNG2,3,6 1International Laboratory on Learning Systems, 2MILA - Quebec AI Institute 3McGill University 4Université Paris-Saclay 5CNRS, CentraleSupélec, 6Canada CIFAR AI Chair [email protected] [email protected] [email protected] [email protected] Abstract Scientific peer review is essential for the quality of academic publications. However, the increasing number of paper submissions to conferences has strained the reviewing process. This surge poses a burden on area chairs who have to carefully read an ever-growing volume of reviews and discern each reviewer’s main arguments as part of their decision process. In this paper, we introduce GLIMPSE, a summarization method designed to offer a concise yet comprehensive overview of scholarly reviews. Unlike traditional consensus-based methods, GLIMPSE extracts both common and unique opinions from the reviews. We introduce novel uniqueness scores based on the Rational Speech Act framework to identify relevant sentences in the reviews. Our method aims to provide a pragmatic glimpse into all reviews, offering a balanced perspective on their opinions. Our experimental results with both automatic metrics and human evaluation show that GLIMPSE generates more discriminative summaries than baseline methods in terms of human evaluation while achieving comparable performance with these methods in terms of automatic metrics. 1 Introduction Peer review is the standard process for evaluating researchers’ work submitted to conferences or academic journals across all fields. Its primary function is to maintain quality standards for academic publications and provide authors with constructive feedback on their work. Its effectiveness is currently being challenged by a significant surge in the number of submissions. Conferences in computer science, such as the International Conference on Learning Representations (ICLR) and the Association for Computational Linguistics (ACL), among others, regularly receive thousands of submissions. *Equal contribution. For instance, the number of submissions to ICLR and ACL has increased fivefold since 2017, reaching 3407 and 3378 submissions, respectively, in 2022. The reviewing process is inherently a tool for scientific communication with at least two target audiences: the authors of a paper and the area chair. The former should get feedback on their work, whereas the latter has to synthesize the salient points from all the reviews as part of their decision process. The area chair has to extract from the reviews the overall sentiment about the paper while gathering the common ideas and unique arguments raised by the reviewers. An efficient highlighting mechanism could help area chairs in their decision-making process, thus reducing their workload. Several methods have been developed to generate summaries from reviews in various domains (Bražinskas et al., 2020; Chu and Liu, 2019). Many of these methods identify salient segments in reviews based on the discussed topics and the sentiment polarity (Zhao and Chaturvedi, 2020; Li et al., 2023a; Amplayo et al., 2021a), or using centralitybased metrics (Ge et al., 2023; Liang et al., 2021). 
However, these techniques fall short in the peer review domain. Indeed, they are designed to generate a consensus opinion summary, reflecting common opinions without identifying divergent and unique ones. This poses a challenge for the peer review domain since area chairs are concerned with both common and divergent opinions among reviewers. In this paper, we recognize one of the underlying communication goals in the reviewing/metareviewing process: to convey the review’s main points to the area chair concisely. The distillation of such salient information in a concise message has been a long-standing focus of study within the pragmatics domain. One of the most influential probabilistic approaches to pragmatics is the Rational Speech Act (RSA). It formulates the communication problem akin to a "reference game", with the 12737 goal of associating each item in a set with the most informative yet concise utterance that distinguishes it from others. We take inspiration from this reference-game scenario to introduce the discriminative summarization task. The goal is to generate a summary for each review that highlights commonalities, differences, and unique perspectives, thereby distinguishing one review from others of the same submission. This framework closely aligns with the challenge of summarizing academic reviews, distilling key discriminative insights from lengthy reviews, thus contextualizing each review in relation to others. To this end, we map the discriminative summarization task to a reference game and propose GLIMPSE, a novel pragmatically informative summarization method for scholarly reviews. At the technical level, we leverage the RSA model, a framework for pragmatic modeling rooted in Bayesian inference that solves the reference game setting (Frank and Goodman, 2012). We define two novel RSA-based scores that measure the informativeness and the uniqueness of opinions in scholarly reviews. We use these scores to rank utterances describing a review and aggregate them to compose a “glimpse” of all reviews. We conducted extensive experiments on a realworld peer review dataset from the ICLR conference collected over a four-year time period. We compare GLIMPSE performance to state-of-the-art methods in multi-document summarization. We design extractive and abstractive variants of our framework and shed light on their properties. Our results show that GLIMPSE generates informative and concise summaries. To the best of our knowledge, we are the first to cast the multi-document summarization problem as a reference game and adopt RSA to identify common and divergent opinions in reviews. Contributions: Overall, we make the following key contributions: 1. We propose a new setting for multi-document summarization: discriminative summarization and cast it as a reference game problem. 2. We propose new informativeness and uniqueness scores for multi-document summarization based on the RSA model of human communication. 3. We conduct empirical evaluation to demonstrate that we extract more discriminative summaries than consensus-based summarizers in both automatic and human evaluation. 2 Related Work Unsupervised Opinion Summarization. Our work is related to unsupervised opinion summarization, where several methods have been developed to summarize reviews of products or hotels. These methods can be divided into two categories: abstractive and extractive. Abstractive summarization aims to generate a coherent summary that reflects salient opinions in input reviews. 
For instance, MeanSum (Chu and Liu, 2019) relies on an autoencoder architecture, in which an aggregated representation of the input reviews is fed to a decoder that generates review-like summaries. Another approach consists of using a hierarchical variational autoencoder model (Bražinskas et al., 2020) to generate summaries that represent dominant opinions within reviews. Other methods rely on modeling fine-grained information within reviews. This includes disentangling aspects and sentiments (Amplayo et al., 2021b; Wang and Wan, 2021; Suhara et al., 2020; Ke et al., 2022), topics (Xu and Lapata, 2020) and contrastive opinions (Iso et al., 2022; Carenini and Cheung, 2008). The absence of attributability in these methods presents a challenge in the peer review domain due to the need for transparency and context. In the peer review process, area chairs might seek to trace the source of the synthesized opinions to assess their validity and relevance. Our method addresses the challenge of attributability by associating a summary with each review, ensuring transparency and allowing area chairs to trace and understand the synthesized opinions in context. In contrast to abstractive summarization, which entails generating new text, extractive summarization aims to extract significant phrases directly from the input reviews. A fundamental method in extractive summarization involves clustering review segments and iteratively extracting the central elements of these segments to form a summary. Various techniques depend on a model to acquire representations for review sentences. These representations are then utilized by an inference algorithm, often in an encoder-decoder architecture, to choose sentences for summarization (Li et al., 2023a; Amplayo et al., 2021a; Angelidis et al., 2021; Gu et al., 2022; Angelidis and Lapata, 2018). Current extractive methods concentrate on amalgamating common opinions found across reviews. However, we contend that such common opinions hold less significance in scholarly contexts. Instead, 12738 Solid theoretical results are provided to confirm the doubly robustness of the treatment estimator and outcome estimator. This paper is well-written, however, I still have some concerns about the contributions. [...] Why the VCnet designed to be dependent of treatment information? The dependence is not theoretically discussed in this paper. The experimental results are not convincing for me. Only two baselines are compared with the proposed method. [...] But the design of VCnet needs the numerical results to confirm its effectiveness for ADRF. This paper is to develop a varying coefficient neural network to estimate average dose-response curve (ADRF). Although this paper has several interesting results, the paper is full of many typos and small errors. The current paper needs substantial improvement. The introduction section is not well written since the logic does not flow very smooth.. In the proof of all theorems, there are some obvious mistakes inside. This problem is well-motivated - estimating dose-response is a challenging and practically important problem. The paper is well written. It explained complex ideas in the semi- parametric literature clearly. The comparison against existing works is clear. The theory, as far as I can tell, is solid. It improves the existing results in targeted regularization and can be adapted to analyze one step TMLE. Figure 1: Illustration of our proposed RSA-based scores applied to real-world scholarly reviews. 
We consider each sentence in a review as a candidate summary. The most common opinions in the reviews are highlighted in blue whereas the unique ones are highlighted in red using our RSA-based scores. we advocate for a pragmatic approach that situates each review relative to others, emphasizing unique, common, and divergent ideas (Mani and Bloedorn, 1999; Wan et al., 2011). Summarization in Scientific Peer Review The task most closely related to ours in the scholarly domain is the meta-review generation task. Various strategies have been developed for this task. For instance, MetaGen (Bhatia et al., 2020) generates an extractive summary draft of reviews, then uses a fine-tuned model for decision prediction, and generates an abstractive meta-review based on both the draft and the predicted decision. Similarly, (Li et al., 2023b) leverage the hierarchical relationships within reviews and metadata, such as reviewers’ confidence and rating, to generate a meta-review along with the acceptance decision. Recently, (Zeng et al., 2023) proposed to prompt a large language model in a guided and iterative manner to generate a meta-review. Our task is different because our goal is to support area chairs by generating a summary of reviewers’ opinions and highlighting areas of divergence. Predicting the meta-reviewer’s decision and generating a metareview based on the predicted decision is out of the scope of our work as we believe it is not desirable to do so. The metareview evaluation should ultimately remain in the hands of a human expert as it involves scientific expertise and reasoning about the evaluated paper. Rational Speech Act (RSA) framework. The Rational Speech Act theory (Frank and Goodman, 2012) is the most influential probabilistic approach to pragmatics (Qing and Franke, 2015; Degen, 2023). It formulates communication between a pragmatic speaker and a listener as probabilistic reasoning, where the speaker’s goal is to choose the utterance that is both short and informative with respect to the speaker’s intended referent (Degen, 2023). As a result, the RSA framework aims to effectively identify the most informative utterance from multiple potential options in a given context. It has been applied to various tasks, including image captioning (Ou et al., 2023; Cohn-Gordon et al., 2018), translation (Cohn-Gordon and Goodman, 2019), dialogue (Kim et al., 2020, 2021), and text generation (Shen et al., 2019), among others. However, there is little research on applying the RSA framework to multi-document summarization. In our study, we tailor the RSA framework for multidocument summarization tasks, aiming to produce summaries that capture various perspectives while minimizing redundancy. 3 Discriminative Multi-Document Summarization (D-MDS) Most approaches to multi-document summarization prioritize generating consensus-based summaries by emphasizing redundant opinions across reviews. However, in domains like peer review, where including unique and divergent opinions is crucial for a comprehensive summary, the conventional focus on common opinions becomes less relevant. This shift in emphasis is particularly pertinent in scholarly review summarization, where the objective is to convey the various opinions and distinctive viewpoints expressed by reviewers. With this objective in mind, we introduce the discriminative 12739 multi-document summarization task, inspired by the reference game setting. 
Our aim is to furnish the meta-reviewer with a summary for each review, enabling them to swiftly identify the source review based on the summary content. Formally, we define the discriminative multi-document summarization problem as follows: Definition 1 (The discriminative multi-document summarization problem). Let N = {d_1, · · · , d_N} be a set of documents. For each document d_i, we suppose we have K candidate summaries, and we form K = {s_{i,j}}_{1 ≤ i ≤ N, 1 ≤ j ≤ K}, the set of all candidates. The goal is to select the most informative summary from that set for each document d_i ∈ N. These candidates can be generated using various summarization methods (see Sec. 5.1). 4 Problem Formulation and Pragmatic Summarization In this section, we formulate the discriminative multi-document summarization problem as a reference game. We then apply the Rational Speech Act (RSA) framework, a probabilistic approach to pragmatics that tackles reference games, to address this summarization problem. 4.1 D-MDS Problem as a Reference Game In a reference game setting (Frank and Goodman, 2012), a speaker and a listener are given a set of objects. The speaker provides a description of a target among the set of objects, and the listener aims to select the correct target given the speaker's description. The speaker's goal is to describe the target using one of its properties in an informative yet concise manner. Similarly, in discriminative multi-document summarization, our goal is to develop a summarizer that provides a concise yet informative description of each review within a set of reviews. Given this similarity, we formally map the D-MDS problem to a reference game setting as follows. Definition 2 (D-MDS as a reference game). Let O ≜ {d_1, · · · , d_N} be the set of documents and C ≜ {s_{i,j}}_{1 ≤ i ≤ N, 1 ≤ j ≤ K} the set of candidate summaries. Let M : O × C → [0, 1] be a truth matrix that indicates the likelihood of a candidate summary s being a summary of a document d. In standard reference games, the truth matrix is boolean, as the properties of an object are either true or false. In our setting, we approximate the truth matrix using a pre-trained summarization language model LM to score the likelihood of each candidate summary s being associated with a document d: M(d, s) ≈ LM(s|d). 4.2 The Pragmatic Summarizer One popular framework to efficiently tackle reference games is the Rational Speech Act (RSA) model of human communication (Frank and Goodman, 2012), which models optimal communication between pragmatic agents and provides a formal framework to build pragmatic speakers and listeners. Therefore, we propose two novel RSA-based scores to select informative and unique opinions for review summarization, which we discuss in detail in Sec. 5.2. In what follows, we present how we adapt RSA to our summarization setting. The RSA framework posits that both the speaker and the listener maximize the utility of their communication, and that each assumes the other is rational. They iteratively adapt to each other to reach a common understanding of the context. Formally, we define the utility of communicating the summary s for a document d to a listener L as: V_L(s, d) = log L(d|s) − Cost(s), (1) where L(d|s) denotes the conditional probability of guessing the object d — the document/review in our case — upon receiving s — a short summary — according to the listener L, and Cost(s) is the cost of transmitting the summary s. In most cases, the cost is assumed to be 0, and we measure the informativeness, which is defined by the conditional probability L(d|s).
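To make this concrete, the sketch below assumes the truth matrix has already been filled with log-likelihoods log LM(s|d) from a pre-trained summarizer and runs the literal listener and the speaker-listener iteration formalized next in Definitions 3-5 (with Cost(s) = 0 and a rationality of 1), together with the two scores of Sec. 5.2. It is a minimal NumPy illustration, not the authors' implementation; the function and variable names are ours.

```python
# Minimal sketch of the RSA recursion of Sec. 4.2 and the scores of Sec. 5.2.
# Assumes loglik[i, j] = log LM(s_j | d_i) has been computed beforehand with a
# pre-trained summarization model and is finite; names and defaults are ours.
import numpy as np
from scipy.special import logsumexp

def rsa_scores(loglik, n_iter=3):
    # Literal listener L0(d|s): normalize over documents for each candidate (Eq. 2).
    log_listener = loglik - logsumexp(loglik, axis=0, keepdims=True)
    for _ in range(n_iter):
        # Pragmatic speaker S_t(s|d): softmax over candidates of the utility
        # log L_{t-1}(d|s), with Cost(s) = 0 (Eq. 3).
        log_speaker = log_listener - logsumexp(log_listener, axis=1, keepdims=True)
        # Pragmatic listener L_t(d|s): renormalize the speaker over documents (Eq. 4).
        log_listener = log_speaker - logsumexp(log_speaker, axis=0, keepdims=True)
    # RSA-Speaker score: most informative candidate for each document (Eq. 5).
    best_candidate = np.argmax(log_speaker, axis=1)
    # Uniqueness score: KL divergence of the listener from uniform, per candidate (Eq. 6).
    n_docs = loglik.shape[0]
    listener = np.exp(log_listener)
    uniqueness = np.sum(listener * (log_listener + np.log(n_docs)), axis=0)
    return best_candidate, uniqueness
```

Candidates with a low uniqueness score behave like the common opinions highlighted in blue in Figure 1, while high-scoring ones correspond to the unique opinions highlighted in red; either can be surfaced directly as highlights or combined in the templated summaries of Sec. 5.3.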
Pragmatic reasoning is modeled as an iterative process that starts from a literal listener, i.e. a listener who has no assumption about the speaker. We denote it with L0(d|s) and define it using a pre-trained language model LM: Definition 3 (Literal listener). L0(d|s) = LM(s|d) / Σ_{d∈O} LM(s|d). (2) The pragmatic speaker adapts to a listener under the cost constraint by maximizing the communication utility (Zaslavsky et al., 2021). We define it as follows: Definition 4 (Pragmatic speaker). For a given target document d, the speaker's distribution over the summaries is defined as the softmax of the utility scores with a listener L_{t−1}, where L0 is the literal listener given in Eq. 2: S_t(s|d) = exp(V_{L_{t−1}}(d, s)) / Σ_{s′} exp(V_{L_{t−1}}(d, s′)). (3) The pragmatic listener evaluates the summary provided by the pragmatic speaker and identifies the document that is most likely to be its source. Definition 5 (Pragmatic listener). For a given target summary s, the pragmatic listener's distribution over the documents is defined using the pragmatic speaker S_t from Eq. 3 as follows: L_t(d|s) = S_t(s|d) / Σ_{d′} S_t(s|d′). (4) We iterate between the pragmatic speaker (Eq. 3) and the listener (Eq. 4), recursively adapting the listener to the speaker for an empirically defined number T of iterations. 5 GLIMPSE Framework Our framework comprises three steps: 1) generating candidates through either abstractive or extractive summarization techniques, 2) selecting candidates using the RSA-based scores defined in Sec. 5.2, and 3) composing a summary using the selected candidates and a template. 5.1 Candidate Generation The candidate summaries can be sentences extracted from the reviews, abstractive summaries sampled from a summarization model conditioned on a review, or any other generated summary. We consider both the extractive setting, where the candidates are sentences extracted from the reviews, and the abstractive setting, where a language model generates the candidate summaries. We specifically focus on the extractive setting, as we want to ensure attribution of the summary to the source review. 5.2 Informativeness and Uniqueness Scores We propose two RSA-based scores derived from the RSA speaker and listener in our setting (Sec. 4.2): a pragmatic-speaker-based score and a uniqueness score. The pragmatic-speaker-based score identifies the most discriminative utterance to refer to a source document. The uniqueness score measures the extent to which a candidate summary represents a common — or unique — idea in the source documents. Pragmatic-speaker-based score. We directly score the summaries for a document according to the pragmatic speaker's distribution. In our case, this process assesses the informativeness of a summary within the set of candidates. Hence, we define the pragmatic-speaker-based score to be the argmax of the speaker (Eq. 3): RSA-Speaker(d) ≜ arg max_{s∈C} S_t(s|d). (5) In our experiments, we refer to the summarizer leveraging this score as GLIMPSE-Speaker. Uniqueness score. In our context, the RSA listener is designed to pragmatically infer a document given a summary. We propose quantifying the listener's uncertainty regarding a particular summary given the following intuition: if the listener is uncertain, then the summary could apply to many documents; conversely, if the listener is confident, the summary can only be associated with a single document. Example 1. Consider the following reviews. • Review 1: This paper is well-written. However, the theoretical part lacks clarification.
• Review 2: This paper is well-written. I believe it should be accepted. Given the sentence "The paper is well-written," a listener cannot accurately deduce the source review. However, when provided with the sentence "I believe it should be accepted," the listener can easily identify that the source review is Review 2. We propose to measure this uncertainty by comparing the listener’s probability distribution defined in Eq. 4 with the uniform distribution U over the documents: Unique ≜DKL(L(·| s)∥U). (6) High values of uniqueness score indicate the uniqueness of the candidate summary or its divergence from other candidates while lower values indicate that the candidate summary is common to multiple source documents. Intuitively, it measures how far the listener distribution conditioned to a summary is far from the uniform distribution. The GLIMPSE variant using this score is referred to as GLIMPSE-Unique. These two scores can be used to identify unique or common opinions across the different documents, as shown in Figure 1, or to select the most informative summary among a set of candidates. While, in this work, we use the scores to compose overall summaries, they can be used as standalone tools for content highlighting. 12741 5.3 GLIMPSE for Standard Multi-Document Summarization (MDS) The standard Multi-Document Summarization (MDS) task aims to generate a single summary for multiple documents. Generally, this summary is built to reflect the consensus among the documents. In order to provide a comparison with previous work, we generate summaries by concatenating the three most common with the three most unique candidate summaries identified through our RSAbased scores defined in Eq. 5 and Eq. 6. The most unique candidates are selected either using the RSA speaker score (Eq. 5) or using the uniqueness score (Eq. 6). Summaries utilizing the former are referred to as GLIMPSE-Speaker, while those utilizing the latter are referred to as GLIMPSE-Unique. This method ensures a balanced representation of both common and unique opinions. Example 2. Given the reviews in Figure 1, we can compose two summaries using our simple template: • GLIMPSE-Speaker: This paper is wellwritten, however, I still have some concerns about the contributions. The experimental results are not convincing I really enjoyed reading it. The introduction section is not well written since the logic does not flow very smooth. Why the VCnet designed to be dependent of treatment information? • GLIMPSE-Unique: This paper is wellwritten, however, I still have some concerns about the contributions. The experimental results are not convincing I really enjoyed reading it. The introduction section is not well written since the logic does not flow very smooth. This paper is to develop a varying coefficient neural network to estimate average dose-response curve (ADRF). Why the VCnet designed to be dependent of treatment information? 6 Experimental Setup In this section, we outline the experimental setup used to evaluate the performance of GLIMPSE. We present the datasets, baselines, evaluation metrics, and implementation details. Datasets. We collect data from the ICLR conference, which provides open access to reviews and meta-reviews for all submissions through OpenReview1. Our dataset contains 28062 reviews for 8428 submissions to the ICLR conference from 2017 until 2021. Evaluation. The task of collecting reference summaries of scholarly reviews is challenging due to their length and domain specificity. 
We identify an alternative method to collect scholarly summaries by considering meta-reviews. Meta-reviews convey mainly the decision of the meta-reviewer and, depending on their style, may summarize the reviews. We only use these “summary-like” metareviews instead of those that solely convey area chairs’ decisions. We apply heuristic rules to extract “summary-like” meta-reviews and perform manual verification. We set criteria for identifying them, including length, reference to at least one reviewer, and semantic similarity measured by cosine similarity with all reviews. We obtained 226 summary-like meta-reviews that we use for evaluation. 6.1 Evaluation Metrics Evaluation against gold standards. For comprehensiveness and following common practices, we include ROUGE evaluations of the summaries against a gold standard. In our case, we assume that the area chair’s motivations for their decision provide a reasonable comparison. However, this evaluation may be limited, because ROUGE scores have a low correlation with human judgments according to various studies (Kryscinski et al., 2020; Kocmi et al., 2021). Discrimininativeness. Following (Ou et al., 2023; Cohn-Gordon et al., 2018), we measure the discriminativeness of a summary, i.e., whether an evaluative listener can identify the source review based on the summary content. We construct our evaluative listener by comparing the summary and the reviews using cosine similarity of their paraphrase embeddings (Reimers and Gurevych, 2019). Learned metrics. We evaluated the generated summaries using the SEAHORSE metrics trained on human judgment (Clark et al., 2023). They assess the summaries along six axes: coverage, attribution, comprehensibility, grammar, repetition and conciseness. Coverage and attribution, respectively, measure if the main ideas of the source text are present in the summary and verify the accuracy of information attribution. Comprehensibility and grammar assess the summary’s overall fluency and 1https://openreview.net/ 12742 grammatical correctness, while repetition and conciseness evaluate the absence of redundant utterances and the summary conciseness. In the multidocument summarization setting, we evaluate the summary generated as discussed in Sec. 5.3. High coverage across all reviews suggests that the summary effectively captures the main ideas of all the reviews. 6.2 Comparison Methods We compare our framework with baseline summarization methods: 1) Latent Semantic Analysis (LSA) (Steinberger and Jezek, 2004) extracts sentences from a document based on latent topics. 2) LexRank (Erkan and Radev, 2004), is a graph algorithm that uses TF-IDF to calculate weights for sentence segments and selects segments near the center as the summary2. 3) Quantized Transformer space (QT) proposes a clustering interpretation of the quantized space for extractive summarization (Angelidis et al., 2021). 4) PlanSum (Amplayo and Lapata, 2021) is an abstractive summarization method that incorporates content planning in a summarization model. 5) Convex Aggregation for Opinion Summarization (COOP) (Iso et al., 2021) is an abstractive summarization method that leverages embedding representation to condition summary generation. 6) Llama 7b Instruct3, is a Large Language Model that generates abstractive summaries via prompting. In addition, we report the performance of the random selection of sentences from the documents. 
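As a concrete illustration of this evaluative listener, the following sketch scores a set of summaries against the reviews of the same submission with paraphrase sentence embeddings and counts how often the most similar review is the true source. The checkpoint name is an assumption (the paper only states that paraphrase embeddings in the sense of Reimers and Gurevych, 2019 are used), and the function is ours, not the authors' code.

```python
# Hedged sketch of the embedding-based discriminativeness metric of Sec. 6.1:
# a summary counts as discriminative when its source review is the most
# similar review under cosine similarity of paraphrase embeddings.
import torch
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-MiniLM-L6-v2")  # assumed checkpoint

def discriminativeness(summaries, reviews):
    """summaries[i] is the candidate summary generated for reviews[i],
    all belonging to the same submission."""
    summary_emb = model.encode(summaries, convert_to_tensor=True)
    review_emb = model.encode(reviews, convert_to_tensor=True)
    sims = util.cos_sim(summary_emb, review_emb)      # (n_summaries, n_reviews)
    predicted_source = sims.argmax(dim=1).cpu()
    hits = predicted_source == torch.arange(len(summaries))
    return hits.float().mean().item()
```

Averaging this quantity over submissions gives a discriminativeness score of the kind reported in Figure 2; a per-character variant would further normalize it by summary length.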
We also compare our method with generative models in terms of discriminativeness by evaluating the perplexity of candidate summaries conditioned on the source text. Candidate summary generation. We generate candidate summaries by extracting sentences from reviews in the extractive setting. For the abstractive setting, we use pre-trained language models, such as PEGASUS (Zhang et al., 2020) or BART summarizers (Lewis et al., 2020), to generate summaries for each review. 7 Experimental Results We present the results for both the discriminative and the standard multi-document summarization tasks. We compare our method with the baselines 2LSA and LexRank are implemented using github.com/ miso-belica/sumy 3https://huggingface.co/togethercomputer/ Llama-2-7B-32K-Instruct 0.0 0.2 0.4 0.6 0.8 1.0 Llama 7b Instruct GLIMPSE-Unique LexRank LSA Random LM Perplexity GLIMPSE-Speaker Discriminativeness 0.76 0.66 0.57 0.51 0.46 0.37 0.34 (a) Discriminativeness. 0.000 0.005 0.010 0.015 0.020 GLIMPSE-Speaker GLIMPSE-Unique Random LexRank LSA LM Perplexity Llama 7b Instruct Norm. Discriminativeness 0.035 0.019 0.005 0.004 0.002 0.001 0.001 (b) Discriminativeness per character. Figure 2: Discriminativeness for all the baselines and our methods in extractive mode (GLIMPSE-Unique (Extr.), GLIMPSE-Speaker (Extr.)), and a strong abstractive method (Llama 7b Instruct). and evaluate the quality of the generated summaries using automatic metrics and a human evaluation. 7.1 Discriminative Summarization In this setting, our goal is to evaluate the discriminativeness of the generated summaries by gauging a listener’s ability to identify the source review from the generated summary. Discriminativeness evaluation. In Figure 2, we report the discriminativeness scores for extractive methods alongside Llama 7b Instruct, a strong abstractive summarization baseline. We find that Llama 7b Instruct achieves the highest performance compared with other methods (76% discriminativeness); followed by GLIMPSE-Unique in the extractive setting (66% discriminativeness). These results suggest that our method achieves high discriminativeness while being more cost-effective compared to recent large language models in terms of memory footprint and runtime. Surprisingly, we find that selecting candidates using the perplexity of common abstractive summarizers, such as BART (LM 12743 70 60 50 40 30 20 10 0 LM Perplexity 0.35 0.40 0.45 0.50 0.55 0.60 0.65 0.70 0.75 Discriminativeness Pareto front Method GLIMPSE-Unique GLIMPSE-Speaker LM Perplexity Generation Extractive facebook_bart-large-cnn google_pegasus-large google_pegasus-arxiv Figure 3: Trade-off between discriminativeness of the generated summaries and their fluency (measured as the log-likehood of the summaries under the generative model). The Pareto frontier shows the best trade-off between the two metrics. Perplexity), yields results worse than a Random selection (37% vs 46%). Contrary to our expectations, GLIMPSE-Speaker has the lowest performance compared with baseline methods in terms of discriminativeness. We hypothesize that this result can be attributed to a phenomenon known as codebooking, where the RSA speaker overly tailors its output to the RSA listener, disregarding the semantic load of the summary (Arumugam et al., 2022; Wang et al., 2021). Conciseness and information density. 
When evaluating properties of the generated summaries, we observed a significant variation in the length of the generated summaries ranging from 76 on average for GLIMPSE-Speaker to around 2000 for Llama. We account for these differences by evaluating the "Discriminativeness per character" in Figure 2b. Interestingly, we find that the advantage of Llama vanishes completely, while GLIMPSE-Speaker and GLIMPSE-Unique notably outperform all baseline methods. This result suggests that GLIMPSE methods produce very succinct yet very discriminative summaries. This is key for the task at hand since the goal is to alleviate the strain on area chairs. Discriminativenes versus fluency. In Figure 3, we plot the discriminativeness (in terms of discriminativenes) against the language model perplexity (as a proxy to fluency). We highlight the Pareto frontier, i.e. the points for which we cannot gain on a given axis without losing on the other. Along this frontier, we observe a trade-off between discriminativeness and fluency (Figure 3). This outcome is expected, as the RSA scores are designed to select less common utterances to improve discriminativeness. Human evaluation of uniqueness. Since the main goal of the D-MDS task is to highlight the most unique information from each review, we asked human evaluators to identify the source review for each summary (See Appendix A for details about the human evaluation task). We show that GLIMPSE selects more informative and unique ideas than other summarization methods such as LLMs (Llama 7b Instruct) or abstractive multidocument summarizers as illustrated in Tab. 2. These results are consistent with the evaluation reported in Figure 2. They indicate that utterances selected with GLIMPSE-Unique are shorter and more discriminative compared with those selected with baseline methods, suggesting its potential as an effective highlighting mechanism for peer review. 7.2 Overall Summary Quality Using ROUGE scores, we compare the overlap between the generated summaries and the summarylike metareviews. We also evaluate them using the SEAHORSE metrics. We present the results in Tab. 1. Comparison to gold standards. Overall, the summaries generated using both our methods and the baselines exhibit minimal overlap with the metareviews. We find that our method achieves comparable performance with baseline methods in terms of ROUGE score. This result is likely influenced by the nature of metareviews, which extends beyond synthesizing reviews to justify a paper’s acceptance decision. Coverage and conciseness. We evaluate the coverage and conciseness of our generated summaries with the baseline methods. We measure the coverage by assessing if the main ideas of a review are present in the summary using the SEAHORSE metrics (cf. Tab. 1). We observe that the summaries generated with GLIMPSE are more concise and yield a significantly better coverage of the main ideas of the documents than baseline methods. This result is consistent with the discriminativeness results presented in (Figure 2b and Sec. 7.1). Using GLIMPSE to select excerpts leads to denser summaries than baseline methods. Fluency. Similarly to our findings in the discriminative summarization setting, we find that summaries generated by our method tend to be less fluent than those generated by the baselines. We posit that this is due to our method’s tendency to 12744 Rouge-1 Rouge-2 Rouge-L Cov. Attr. Gram. Conc. Repet. 
Method Extractive Random 0.32 ±0.05 0.06 ±0.02 0.15 ±0.03 0.09 ±0.04 0.48 ±0.07 0.58 ±0.12 0.15 ±0.04 0.94 ±0.04 LSA 0.38 ±0.06 0.08 ±0.03 0.16 ±0.03 0.14 ±0.09 0.57 ±0.09 0.59 ±0.15 0.19 ±0.07 0.94 ±0.07 LexRank 0.38 ±0.06 0.09 ±0.03 0.17 ±0.03 0.18 ±0.12 0.56 ±0.09 0.60 ±0.16 0.21 ±0.08 0.95 ±0.06 QT 0.21 ±0.06 0.04 ±0.02 0.12 ±0.03 0.09 ±0.08 0.35 ±0.11 0.13 ±0.05 0.12 ±0.07 0.80 ±0.14 GLIMPSE-Speaker 0.22 ±0.06 0.04 ±0.02 0.11 ±0.03 0.09 ±0.05 0.36 ±0.09 0.36 ±0.13 0.13 ±0.05 0.89 ±0.08 GLIMPSE-Unique 0.27 ±0.06 0.06 ±0.03 0.13 ±0.03 0.13 ±0.07 0.39 ±0.09 0.38 ±0.13 0.15 ±0.06 0.89 ±0.07 Abstractive PlanSum 0.25 ±0.08 0.06 ±0.02 0.14 ±0.04 0.07 ±0.04 0.21 ±0.07 0.32 ±0.11 0.13 ±0.07 0.37 ±0.36 Coop 0.36 ±0.05 0.08 ±0.02 0.19 ±0.02 0.08 ±0.09 0.26 ±0.12 0.51 ±0.23 0.09 ±0.07 0.26 ±0.31 Llama 7b Instruct 0.39 ±0.06 0.09 ±0.03 0.18 ±0.02 0.23 ±0.12 0.63 ±0.11 0.49 ±0.16 0.25 ±0.09 0.79 ±0.26 GLIMPSE-Speaker 0.33 ±0.05 0.07 ±0.03 0.15 ±0.02 0.32 ±0.10 0.44 ±0.06 0.53 ±0.10 0.27 ±0.06 0.90 ±0.08 GLIMPSE-Unique 0.34 ±0.04 0.07 ±0.02 0.16 ±0.02 0.33 ±0.10 0.44 ±0.07 0.54 ±0.10 0.27 ±0.06 0.84 ±0.11 Table 1: Comparison to metareview motivations using ROUGE scores and estimated human judgment using the SEAHORSE metrics for all baselines and our templated summaries compared against each document independently. Cov. stands for Main ideas, Attr. for Attribution, Gram. for Grammar, Compr. for Comprehensible, Conc. for Conciseness, and Repet. for Repetition. The best value in each column is in bold. Method Discriminativeness Avg. Summary Length GLIMPSE 93.75% 111 Llama 85.18% 1920 MDS 0% 268 Table 2: Accuracy of annotators guessing the review from which a summary is generated for our method and two baselines and the average of summary lengths generated by each method. select rarer utterances, aiming to enhance discriminativeness, and thus follows the same trade-off observed in Figure 3. 8 Summary and Concluding Remarks We have introduced a discriminative summarization framework for multi-document summarization and suggested a pragmatic summarization approach inspired by the Rational Speech Act model (RSA) of human communication. Our findings demonstrate the effectiveness of RSA-based scores in capturing unique, common, and divergent opinions among reviewers. This paves the way for the development of tools to aid area chairs in evaluating reviews. Our method yields more informative summaries compared to existing approaches and is suitable for extractive summarization, ensuring the interpretability of the generated summaries. Acknowledgments This work was granted access to the HPC resources of IDRIS under the allocation 2023AD011013290R2 made by GENCI. Ines Arous is supported by a grant from the former Twitter, Inc. In addition, we acknowledge material support from NVIDIA Corporation in the form of computational resources provided to Mila. 9 Limitations & Ethical Considerations We adopt summarization evaluation methods common in the research area, but the validity of these methods needs further investigation. For example, measuring the overlap between the templated summaries and the metareview motivations is limited by the appropriateness of the motivation as a gold standard and by the use of ROUGE scores, which have well-known limitations. The SEAHORSE trained metrics are also working in a somewhat out-of-distribution setting as they have not been specifically validated on scientific content. 
Since our method’s main strength is to highlight unique and common ideas in a given text to help the reader get the most salient points in context, a natural follow-up work will be to conduct a task-based human evaluation of the highlighting it provides. Still, extractive summarization methods are notably sensitive to the sentence segmentation process, which can occasionally result in peculiar outcomes. For instance, a brief sentence containing nonsensical content might erroneously emerge as the most informative segment of the summary simply because it is unique among the reviews. Similarly, filler sentences such as "Here are some comments" could be mistakenly identified as the most common ideas in the corpus. Although extractive summarization ensures attribution, it is constrained by these factors, unlike abstractive summarization methods, which, while prone to potential hallucinations, circumvent such pitfalls. As with any automated tool, its deployment carries inherent risks, particularly if used without due caution in critical decision-making processes, such as determining the acceptance or rejection of scholarly work. Indeed, unexpected biases could lead 12745 to discrimination and unfair presentation of the reviewers’ points. It is important to note that our approach is not designed to fully automate decisionmaking or replace the review summarization process altogether. Rather, we advocate for its use as a supplementary tool, particularly in its extractive configuration, to aid in identifying salient points within reviews. References Reinald Kim Amplayo, Stefanos Angelidis, and Mirella Lapata. 2021a. Aspect-Controllable Opinion Summarization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6578–6593, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Reinald Kim Amplayo, Stefanos Angelidis, and Mirella Lapata. 2021b. Unsupervised Opinion Summarization with Content Planning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14):12489– 12497. Reinald Kim Amplayo and Mirella Lapata. 2021. Informative and Controllable Opinion Summarization. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2662–2672, Online. Association for Computational Linguistics. Stefanos Angelidis, Reinald Kim Amplayo, Yoshihiko Suhara, Xiaolan Wang, and Mirella Lapata. 2021. Extractive Opinion Summarization in Quantized Transformer Spaces. Transactions of the Association for Computational Linguistics, 9:277–293. Place: Cambridge, MA Publisher: MIT Press. Stefanos Angelidis and Mirella Lapata. 2018. Summarizing Opinions: Aspect Extraction Meets Sentiment Prediction and They Are Both Weakly Supervised. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3675–3686, Brussels, Belgium. Association for Computational Linguistics. Dilip Arumugam, Mark K. Ho, Noah D. Goodman, and Benjamin Van Roy. 2022. On Rate-Distortion Theory in Capacity-Limited Cognition & Reinforcement Learning. ArXiv:2210.16877 [cs]. Chaitanya Bhatia, Tribikram Pradhan, and Sukomal Pal. 2020. MetaGen: An academic Meta-review Generation system. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1653–1656, Virtual Event China. ACM. Arthur Bražinskas, Mirella Lapata, and Ivan Titov. 2020. Unsupervised Opinion Summarization as CopycatReview Generation. 
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5151–5169, Online. Association for Computational Linguistics. Giuseppe Carenini and Jackie C. K. Cheung. 2008. Extractive vs. NLG-based Abstractive Summarization of Evaluative Text: The Effect of Corpus Controversiality. In Proceedings of the Fifth International Natural Language Generation Conference, pages 33–41, Salt Fork, Ohio, USA. Association for Computational Linguistics. 12746 Eric Chu and Peter Liu. 2019. MeanSum: A Neural Model for Unsupervised Multi-Document Abstractive Summarization. In Proceedings of the 36th International Conference on Machine Learning, pages 1223–1232. PMLR. ISSN: 2640-3498. Elizabeth Clark, Shruti Rijhwani, Sebastian Gehrmann, Joshua Maynez, Roee Aharoni, Vitaly Nikolaev, Thibault Sellam, Aditya Siddhant, Dipanjan Das, and Ankur Parikh. 2023. SEAHORSE: A Multilingual, Multifaceted Dataset for Summarization Evaluation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9397–9413, Singapore. Association for Computational Linguistics. Reuben Cohn-Gordon and Noah Goodman. 2019. Lost in Machine Translation: A Method to Reduce Meaning Loss. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 437–441, Minneapolis, Minnesota. Association for Computational Linguistics. Reuben Cohn-Gordon, Noah Goodman, and Christopher Potts. 2018. Pragmatically Informative Image Captioning with Character-Level Inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 439–443, New Orleans, Louisiana. Association for Computational Linguistics. Judith Degen. 2023. The Rational Speech Act Framework. Annual Review of Linguistics, 9(1):519–540. _eprint: https://doi.org/10.1146/annurev-linguistics031220-010811. G. Erkan and D. R. Radev. 2004. LexRank: Graphbased Lexical Centrality as Salience in Text Summarization. Journal of Artificial Intelligence Research, 22:457–479. Michael C. Frank and Noah D. Goodman. 2012. Predicting Pragmatic Reasoning in Language Games. Science, 336(6084):998–998. Publisher: American Association for the Advancement of Science. Suyu Ge, Jiaxin Huang, Yu Meng, and Jiawei Han. 2023. FineSum: Target-Oriented, Fine-Grained Opinion Summarization. In Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining, WSDM ’23, pages 1093–1101, New York, NY, USA. Association for Computing Machinery. Nianlong Gu, Elliott Ash, and Richard Hahnloser. 2022. MemSum: Extractive Summarization of Long Documents Using Multi-Step Episodic Markov Decision Processes. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6507–6522, Dublin, Ireland. Association for Computational Linguistics. Hayate Iso, Xiaolan Wang, Stefanos Angelidis, and Yoshihiko Suhara. 2022. Comparative Opinion Summarization via Collaborative Decoding. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3307–3324, Dublin, Ireland. Association for Computational Linguistics. Hayate Iso, Xiaolan Wang, Yoshihiko Suhara, Stefanos Angelidis, and Wang-Chiew Tan. 2021. Convex Aggregation for Opinion Summarization. 
In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3885–3903, Punta Cana, Dominican Republic. Association for Computational Linguistics. Wenjun Ke, Jinhua Gao, Huawei Shen, and Xueqi Cheng. 2022. ConsistSum: Unsupervised Opinion Summarization with the Consistency of Aspect, Sentiment and Semantic. In Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, WSDM ’22, pages 467–475, New York, NY, USA. Association for Computing Machinery. Hyunwoo Kim, Byeongchang Kim, and Gunhee Kim. 2020. Will I Sound Like Me? Improving Persona Consistency in Dialogues through Pragmatic SelfConsciousness. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 904–916, Online. Association for Computational Linguistics. Hyunwoo Kim, Byeongchang Kim, and Gunhee Kim. 2021. Perspective-taking and Pragmatics for Generating Empathetic Responses Focused on Emotion Causes. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2227–2240, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Tom Kocmi, Christian Federmann, Roman Grundkiewicz, Marcin Junczys-Dowmunt, Hitokazu Matsushita, and Arul Menezes. 2021. To Ship or Not to Ship: An Extensive Evaluation of Automatic Metrics for Machine Translation. In Proceedings of the Sixth Conference on Machine Translation, pages 478–494, Online. Association for Computational Linguistics. Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the Factual Consistency of Abstractive Text Summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9332–9346, Online. Association for Computational Linguistics. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising Sequence-to-Sequence Pretraining for Natural Language Generation, Translation, and Comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880, Online. Association for Computational Linguistics. 12747 Haoyuan Li, Somnath Basu Roy Chowdhury, and Snigdha Chaturvedi. 2023a. Aspect-aware Unsupervised Extractive Opinion Summarization. In Findings of the Association for Computational Linguistics: ACL 2023, pages 12662–12678, Toronto, Canada. Association for Computational Linguistics. Miao Li, Eduard Hovy, and Jey Han Lau. 2023b. Towards Summarizing Multiple Documents with Hierarchical Relationships. ArXiv:2305.01498 [cs]. Xinnian Liang, Shuangzhi Wu, Mu Li, and Zhoujun Li. 2021. Improving Unsupervised Extractive Summarization with Facet-Aware Modeling. In Findings of the Association for Computational Linguistics: ACLIJCNLP 2021, pages 1685–1697, Online. Association for Computational Linguistics. Inderjeet Mani and Eric Bloedorn. 1999. Summarizing Similarities and Differences Among Related Documents. Information Retrieval, 1(1):35–67. Jiefu Ou, Benno Krojer, and Daniel Fried. 2023. Pragmatic Inference with a CLIP Listener for Contrastive Captioning. ArXiv:2306.08818 [cs]. Ciyang Qing and Michael Franke. 2015. Variations on a Bayesian Theme: Comparing Bayesian Models of Referential Reasoning. In Henk Zeevat and Hans-Christian Schmitz, editors, Bayesian Natural Language Semantics and Pragmatics, pages 201–220. Springer International Publishing, Cham. Nils Reimers and Iryna Gurevych. 2019. 
SentenceBERT: Sentence Embeddings using Siamese BERTNetworks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992, Hong Kong, China. Association for Computational Linguistics. Sheng Shen, Daniel Fried, Jacob Andreas, and Dan Klein. 2019. Pragmatically Informative Text Generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4060–4067, Minneapolis, Minnesota. Association for Computational Linguistics. J. Steinberger and Karel Jezek. 2004. Using Latent Semantic Analysis in Text Summarization and Summary Evaluation. Yoshihiko Suhara, Xiaolan Wang, Stefanos Angelidis, and Wang-Chiew Tan. 2020. OpinionDigest: A Simple Framework for Opinion Summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5789– 5798, Online. Association for Computational Linguistics. Xiaojun Wan, Houping Jia, Shanshan Huang, and Jianguo Xiao. 2011. Summarizing the differences in multilingual news. In Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval, SIGIR ’11, pages 735–744, New York, NY, USA. Association for Computing Machinery. Ke Wang and Xiaojun Wan. 2021. TransSum: Translating Aspect and Sentiment Embeddings for SelfSupervised Opinion Summarization. In Findings of the Association for Computational Linguistics: ACLIJCNLP 2021, pages 729–742, Online. Association for Computational Linguistics. Rose E. Wang, Julia White, Jesse Mu, and Noah D. Goodman. 2021. Calibrate your listeners! Robust communication-based training for pragmatic speakers. ArXiv:2110.05422 [cs]. Yumo Xu and Mirella Lapata. 2020. Coarse-to-Fine Query Focused Multi-Document Summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3632–3645, Online. Association for Computational Linguistics. Noga Zaslavsky, Jennifer Hu, and Roger P. Levy. 2021. A Rate–Distortion view of human pragmatic reasoning? In Proceedings of the Society for Computation in Linguistics 2021, pages 347–348, Online. Association for Computational Linguistics. Qi Zeng, Mankeerat Sidhu, Hou Pong Chan, Lu Wang, and Heng Ji. 2023. Scientific Opinion Summarization: Meta-review Generation with Checklist-guided Iterative Introspection. ArXiv:2305.14647 [cs]. Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. 2020. PEGASUS: pre-training with extracted gap-sentences for abstractive summarization. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of ICML’20, pages 11328–11339. JMLR.org. Chao Zhao and Snigdha Chaturvedi. 2020. WeaklySupervised Opinion Summarization by Leveraging External Information. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):9644–9651. Number: 05. 12748 A Human evaluation We recruited expert human annotators in academia to evaluate the informativeness of the produced summaries (already enrolled PhD and Master students in computer science and machine learning). They were compensated based on their skills and location (Montréal, Quebec) at a rate of 30 CAD per hour. A.1 Task description The annotators were presented a generated summary and they had to guess which review was input to the summarizer to produce said summary. 
They were given four choices: one per review, plus the option to say that it is hard to guess. Figure 4 presents the instructions given to the annotators, and Figure 5 shows an example of the actual task.

A.2 Human evaluation statistics
Eight annotators performed a total of 89 evaluations. They spent on average less than an hour on the task, with some dedicated annotators spending almost two hours and performing up to 20 evaluations (Figure 6).

B Impact of the generation method
Abstractive summarization. GLIMPSE can rely on different generation methods (either extractive selection or different generative summarizers). In Figure 7, we compare the informativeness induced by the different generative methods. BART appears to produce the most promising pool of candidates.

Figure 4: Instructions given to the annotators to perform the discriminative summarization task.
Figure 5: Example of the task.
Figure 6: Statistics of the annotators involved. (a) Distribution of the time spent by the annotators. (b) Number of samples examined by each worker. (c) Time spent by the reviewer per method evaluated.
Figure 7: Informativeness of the summaries selected using different scoring methods (GLIMPSE-Unique, GLIMPSE-Speaker) and the baseline LM Perplexity from different sets of candidates. Candidates are either generated by different generative models (facebook_bart-large-cnn, google_pegasus-large, google_pegasus-arxiv) or simply extracted as sentences from the document.
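To make Appendix B concrete, the following minimal sketch shows one way to obtain a pool of abstractive candidates per review from the checkpoints named in Figure 7 (facebook/bart-large-cnn, google/pegasus-large, google/pegasus-arxiv). It illustrates candidate generation only, not the authors' GLIMPSE scoring pipeline, and the review text is a toy placeholder.

from transformers import pipeline

# Checkpoints named in Figure 7, treated here as interchangeable candidate generators.
CHECKPOINTS = [
    "facebook/bart-large-cnn",
    "google/pegasus-large",
    "google/pegasus-arxiv",
]

def candidate_pool(review_text: str, max_length: int = 60) -> dict:
    """Generate one abstractive candidate per checkpoint for a single review."""
    pool = {}
    for name in CHECKPOINTS:
        # Re-loading each model per call is wasteful; done here only to keep the sketch short.
        summarizer = pipeline("summarization", model=name)
        output = summarizer(review_text, max_length=max_length, min_length=10, do_sample=False)
        pool[name] = output[0]["summary_text"]
    return pool

# Hypothetical usage on a toy review.
review = ("The paper proposes a pragmatic summarization method. "
          "The experiments are convincing but limited to a single domain.")
for model_name, candidate in candidate_pool(review).items():
    print(model_name, "->", candidate)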
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12753–12776 August 11-16, 2024 ©2024 Association for Computational Linguistics Peacock: A Family of Arabic Multimodal Large Language Models and Benchmarks Fakhraddin Alwajih El Moatez Billah Nagoudi Gagan Bhatia Abdelrahman Mohamed Muhammad Abdul-Mageed The University of British Columbia & Invertible AI {muhammad.mageed@}ubc.ca;invertible.ai Abstract Multimodal large language models (MLLMs) have proven effective in a wide range of tasks that require complex reasoning and linguistic comprehension. However, due to a lack of high-quality multimodal resources in languages other than English, success of MLLMs remains relatively limited to English-based settings. This poses significant challenges in developing comparable models for other languages, even those with large speaker populations, such as Arabic. To alleviate this challenge, we introduce a comprehensive family of Arabic MLLMs, dubbed Peacock, with strong vision and language capabilities. Through comprehensive qualitative and quantitative analysis, we demonstrate the solid performance of our models on various visual reasoning tasks and further show their emerging dialectal potential. Additionally, we introduce Henna, a new benchmark specifically designed for assessing MLLMs on aspects related to Arabic culture, setting the first stone for culturallyaware Arabic MLLMs. The GitHub repository for the Peacock project is available at https: //github.com/UBC-NLP/peacock. 1 Introduction Empowered by progress in large language models (LLMs) and foundation models of other modalities, multimodal large language models (MLLMs) now have a remarkable understanding (Alayrac et al., 2022; Li et al., 2023e; Dai et al., 2023; Liu et al., 2023b,a; Zhu et al., 2023). For example, they can handle various complex reasoning tasks spanning from visual question answering to understanding sarcastic comics (Achiam et al., 2023; Yang et al., 2023). These capabilities, however, are mostly seen in models serving the English language. This leaves behind the majority of the world’s languages, furthering an already acute technological divide. We alleviate this challenge for Arabic, a diverse Instance Attributes Instance Identity Instance Interaction Instance Location Instances Counting Scene Understanding Spatial Relation Visual Reasoning 30 40 50 60 70 InstrucBlip-AraLLaMA InstrucBlip-AceGPT LLaVA-AraLLaMA LLaVA-AceGPT mBLIP-mT0 Figure 1: Comparison between the performance of Peacock and mBlip models on SEED-Benchmark dimensions. collection of languages and dialects with a native population of more than 400 million speakers. More concretely, we draw inspiration from English counterparts (Dai et al., 2023; Liu et al., 2023b) to present a robust family of Arabic MLLMs with powerful vision and language capabilities. Our models adopt the approach of integrating an image encoder with an Arabic text decoder. In our experimental setup, we explore two popular directions for aligning the vision and the language components: one involves employing a fully connected layer as a projection head on top of the vision encoder (Liu et al., 2023b), while the other utilizes a Q-former transformer (Dai et al., 2023). All models are trained in two stages, a pretraining stage and an instruction fine-tuning stage. For the first stage, we curate high-quality pretraining data from publicly available English datasets. 
We translate these datasets into Arabic and apply a carefully designed pipeline to ensure the quality of our training data. Similarly, we curate and translate an instruction fine-tuning dataset which is essential for achieving reasoning and conversational capabilities. 12753 We showcase the performance of our models across different tasks such as visual question answering (VQA) and visual reasoning. Our models perform much better than a multilingual baseline mBlip (Geigle et al., 2023) on different tasks and datasets, and we set the first comprehensive Arabic vision-language benchmark to facilitate future work in this area. Finally, we demonstrate the promising capabilities of our Peacock models in interacting in dialectal Arabic by conducting a case study on the Egyptian dialect. When fine-tuned on a small set of Egyptian dialect data, our models exhibit an interesting level of proficiency in their dialectal responses when prompted in the same dialect. We hope this acts as a spark for future works in dialectal Arabic vision language models. To summarize, our contributions in this paper are as follows: (1) We introduce a suite of Arabic MLLMs, dubbed Peacock, capable of instruction following and visual reasoning, in addition to their intriguing dialectal affinity. For developing Peacock, we use existing vision and language models. We also offer a new language model, AraLLaMA, based on LLaMA2-7B (Touvron et al., 2023). (2) We introduce a diverse collection of Arabic translated datasets carefully curated for the training and evaluation of Arabic MLLMs. (3) We adapt the popular LLaVA (Liu et al., 2023b) benchmark and SEED-Bench (Li et al., 2023d) for Arabic MLLMs evaluation. (4) We present Henna, a benchmark designed to measure model capabilities in interpreting images related to Arabic culture. The rest of this paper is organized as follows: In Section 2, we provide an overview of related work. Section 3 introduces Peacock, our family of MLLMs. In Section 4, we describe our evaluation strategies and benchmarks. In Section 5, we present our experiments, human evaluation, and a comprehensive analysis of our models. We conclude in Section 6. 2 Related Work 2.1 Multimodal Large Language Models Progress in MLLMs is largely dependent on advances in LLMs. Refer to Appendix A.2 for more details on LLM-related works. The common trend in recent MLLMs involves integrating an LLM as their text decoder alongside a vision encoder for image understanding. Several approaches were proposed for aligning the vision encoder with the text decoder. Flamingo (Alayrac et al., 2022) and Otter (Li et al., 2023c), for example, blend a vision encoder with a resampler and a cross-gated attention layer, reducing the computational load in vision-text cross-attention, and enhancing instruction optimization. While BLIP-2 (Li et al., 2023e) and InstructBLIP (Dai et al., 2023), combine a vision encoder with a Q-former and a linear layer, streamlining the cross-modality projection and utilizing learnable query vectors for feature extraction. LLaVA (Liu et al., 2023b,a), on the other hand, pairs a vision encoder with multilayer perceptron (MLP), retaining all visual tokens for comprehensive visual information processing. Finally, the simplest form, illustrated by models such as Fuyu (Bavishi et al., 2023) and OtterHD (Li et al., 2023a), relies solely on a linear layer, operating as basic decoder-only transformers without specialized vision encoders. 
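To make the connector designs surveyed above concrete, the following is a minimal, hypothetical PyTorch sketch of the simplest alignment module: a projection that maps frozen vision-encoder features into the LLM embedding space, either as a single linear head or as a small MLP. The dimensions are illustrative and not taken from any specific model.

import torch
import torch.nn as nn

class VisionToLLMProjector(nn.Module):
    """Projects vision-encoder features into the LLM token-embedding space."""

    def __init__(self, vision_dim: int, llm_dim: int, use_mlp: bool = True):
        super().__init__()
        if use_mlp:
            # Two-layer MLP connector (LLaVA-style).
            self.proj = nn.Sequential(
                nn.Linear(vision_dim, llm_dim),
                nn.GELU(),
                nn.Linear(llm_dim, llm_dim),
            )
        else:
            # Single linear projection head (BLIP-2/InstructBLIP-style, after the Q-former).
            self.proj = nn.Linear(vision_dim, llm_dim)

    def forward(self, vision_features: torch.Tensor) -> torch.Tensor:
        # vision_features: (batch, num_visual_tokens, vision_dim)
        return self.proj(vision_features)  # (batch, num_visual_tokens, llm_dim)

# Illustrative dimensions only.
projector = VisionToLLMProjector(vision_dim=1024, llm_dim=4096)
fake_features = torch.randn(2, 576, 1024)   # e.g., 576 visual tokens per image
visual_embeddings = projector(fake_features) # ready to prepend to the LLM input embeddings
print(visual_embeddings.shape)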
This diversity in design showcases the innovative approaches in integrating vision and language in MLLMs. 2.2 Visual Instruction Tuning Following the success of instruction tuning in LLMs, recent works in MLLMs transitioned to visual instruction tuning. MULTIINSTRUCT (Xu et al., 2022) pioneered this transition by creating a multi-modal instruction tuning benchmark dataset that transforms 62 different multi-modal tasks into a unified sequence-to-sequence format. Building on this, LLaVA (Liu et al., 2023b) leveraged GPT4’s adeptness in understanding multimodal textual representations to reformulate image-text pairs into an instruction-following format. Similarly, MIMIC-IT (Li et al., 2023b) focused on generating instruction-response pairs using multi-modal incontext information and a variety of visual scenes. Most recently, M3IT (Li et al., 2023f) converted traditional vision-language tasks into a unified visionto-text framework through manual instruction writing and dataset pre-processing. This includes tasks such as captioning, visual question answering, visual conditioned generation, reasoning, and classification. In their comprehensive survey, Yin et al. (2023) provide an extensive overview of MLLMs, including an evaluation of their performance and capabilities. This paper serves as a valuable resource for researchers interested in the field of MLLMs. 2.3 Arabic MLLMs The majority of research in Arabic MLLMs focuses mainly on image captioning (ElJundi et al., 2020; Attai and Elnagar, 2020; Afyouni et al., 2021; 12754 Vision Encoder Q-Former LLM Fully Connected LORA Input image Queries Instruction Output text Figure 2: Peacock InstructBLIP architecture: Integrates instruction-specific visual features using QFormer and a frozen pretrained image encoder. Vision Encoder LLM MLP LORA Input image Instruction Output text Figure 3: Peacock LLaVA architecture: Combines a pretrained frozen vision encoder with trained Arabic LLMs via an MLP bridge. Emami et al., 2022; Eddin Za’ter and Talafha, 2022; Elbedwehy and Medhat, 2023; Mohamed et al., 2023). Other areas, VQA for example, remain largely unexplored. This is primarily due to scarcity of publicly available datasets in these areas. As far as we know, the only significant work in Arabic VQA is by Kamel et al. (2023) and explores closedform VQA without attempting generative VQA. We also know of no native Arabic datasets for either image captioning or VQA, with two exceptions: AraCOCO (Mohamed et al., 2023) for image captioning, which is mainly used for evaluation, and AVQA (Kamel et al., 2023) for VQA, which was automatically generated from MSCOCO for Arabic VQA. In many works, translations of either MSCOCO or Flickr8k are utilized (ElJundi et al., 2020; Afyouni et al., 2021; Mohamed et al., 2023). 3 Peacock 3.1 Architectures The Peacock family is designed based on the vision components of two architectures, that of InstructBlip (Dai et al., 2023) and LLaVA1.5 (Liu et al., 2023a). For language, our models are integrated with one of two powerful Arabic LLMs, AceGPT (Huang et al., 2023)1 and a new model based on LLaMA2-7B, dubbed AraLLaMA, that we further pretrain on a large Arabic dataset and fine-tune using diverse instructions. Our motivation behind introducing AraLLaMA is to create a model with strong knowledge of the Arabic language and 1In all our experiments, we use the AceGPT-7B-chat. We also limit ourselves to LLMs with 7B parameters due to computational constraints. culture. 
More information about AraLLaMA and how it compares to AceGPT is in Appendix A.4. InstructBlip-Based Peacock. Here, our models consist of four key components: (1) A vision encoder based on the ViT (ViT/G-14) model (Dosovitskiy et al., 2020), operating at a resolution of 224×224 and employing a patch size of 14. (2) A Querying Transformer (Q-former) (Li et al., 2023e), designed to link the pretrained vision encoder with the LLM, using the BERT base model (Devlin et al., 2018) as its foundation. (3) A linear layer projector, tasked with aligning the output of the Q-former with the LLM embedding space. (4) An LLM, incorporating one of the two forenamed models, AceGPT or AraLLaMA, both of which are derivatives of the LLaMA2 architecture enhanced for Arabic. Figure 2 illustrates this architecture. LLaVA-Based Peacock. For this setting, models are structured around three primary components: (1) A vision encoder employing the CLIP-Large model (Radford et al., 2021), capable of processing images at a resolution of 336x336 and a patch size of 14, converting these images into 576 tokens. (2) A two-layer MLP projector that aligns the output of the visual and language modalities. (3) And either AceGPT or AraLLaMA. The architecture is shown in Figure 3 3.2 Pretraining Our models are trained in two stages, a pretraining stage and an instruction fine-tuning stage. The pretraining stage aims to train the alignment module, which projects the visual and textual features into a common embedding space. The models are trained 12755 using our carefully curated text-image pairs dataset. In the case of InstructBlip-based models, only the projection layer, which is the alignment module, is trainable. In contrast, the Q-former, vision encoder and language model parameters are frozen. Meanwhile, for the LLaVA-based models, only the MLP connector is the trainable part, with the CLIP encoder and LLM being frozen. 3.3 Visual Instruction Fine-tuning After the pretraining stage, the model will only be capable of generating simple captions and descriptions of an image. To give the models the ability to function on tasks requiring visual reasoning and engage in an intelligible visual conversation, we further fine-tune them using instruction datasets. To keep computational costs manageable, we employ the parameter-efficient fine-tuning technique LoRA (Hu et al., 2021). Similar to the previous stage, in addition to the LoRA parameters, only the linear layer is trainable in the case of InstructBlip models, while for LLaVA models, we fine-tune the MLP and apply LoRA to the LLM, following the LLaVA 1.5 training scheme (Liu et al., 2023a). We provide in Table A.7 the number of parameters for the main components of each model in Appendix A.5. 4 Datasets and Benchmarks 4.1 Translation and Filtering Pipeline A significant challenge for Arabic MLLMs is lack of available resources, which is due to the difficulty of retrieving relevant Arabic image-text pairs from the internet at scale and absence of suitable image-text relevance filtering methods similar to that of CLIP (Radford et al., 2021)2. To address this resource gap, we introduce a careful translateand-filter pipeline for converting publicly available image-text datasets into Arabic without losing data quality. To this end, we adopt the latest version of Google translate API (Google Cloud). We follow Mohamed et al. (2023) in further assuring high quality of acquired translations using a multilingual sentence embedding model LaBSE (Feng et al., 2020). 
We calculate the similarity between the embeddings of the original and translated sentences (questions and answers), retaining translations that meet a minimum similarity threshold of 80% or greater for both the question and the answer. (Footnote 2: CLIP was used in filtering many English web-scraped large-scale datasets (Ordonez et al., 2011; Sharma et al., 2018; Changpinyo et al., 2021).) Figure 4 demonstrates the filtering pipeline. We provide details about our datasets and translation method in Appendix A.3, and sample translations illustrating variations in quality ranging from good to moderate to poor in Figure A.9 (also in Appendix A.3).

Figure 4: Our data filtering pipeline. After translating the data through the Google Cloud API, we obtain the embeddings of both the original and translated samples using the multilingual sentence embedding model LaBSE. For each sample, we calculate the cosine similarity between the two extracted embeddings and reject samples under an 80% threshold.

4.2 Pretraining Data
Aligning with recent work showing that the quality of LLM pretraining data is more important than its quantity (Gunasekar et al., 2023; Lee et al., 2023), we curate a high-quality text-image pairs dataset collected from publicly available sources. Specifically, we utilize LCS-558K (Liu et al., 2023b) and COCO (Lin et al., 2014) as our pretraining data. LCS-558K encompasses 558k text-image pairs carefully curated from three datasets: LAION (Schuhmann et al., 2021), Conceptual Captions (Sharma et al., 2018), and SBU (Ordonez et al., 2011). COCO is a high-quality dataset comprising 118k images, covering 80 different objects, with five captions per image, all human-annotated. As stated in Section 4.1, all the datasets are translated into Arabic using the Google API and further filtered based on their semantic similarity with the original English text.

4.3 Instruction Fine-tuning Data
For the second training stage, we curate another dataset that follows the instruction tuning paradigm, as in Liu et al. (2023b). Concretely, the model is asked to respond to a specific instruction or question for each image in the dataset. The first dataset we include is the multi-modal instructions dataset by Liu et al. (2023b). It comprises 150k samples covering conversations, detailed image descriptions, and complex reasoning instructions and responses. This dataset was created using GPT-4 (Achiam et al., 2023), and the images were taken from the COCO dataset. Additionally, we incorporate the VQAv2 dataset (Goyal et al., 2017) after transforming it into the same instruction-and-response format. To incorporate further diverse instructions, we utilize 60k multiple-choice questions extracted from the LLaVA 1.5 mixed instruction dataset (Liu et al., 2023a). This exposes the model to different scenarios, giving it better generalization capabilities. Similar to the pretraining stage, all the datasets are translated using the Google API and filtered following our data-cleaning pipeline.

4.4 Evaluation Benchmarks
SEED-Bench. SEED-Bench (Li et al., 2023d) consists of 19K multiple-choice questions, each meticulously annotated by humans. These questions cover 12 evaluation dimensions, addressing the comprehension of both image and video modalities. This study exclusively focuses on the image-only subset of SEED-Bench, comprising 14K multiple-choice questions.
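As a rough illustration of the Section 4.1 translate-and-filter step, the sketch below scores an English-Arabic pair with LaBSE and keeps it only if both the question and the answer clear the 80% cosine-similarity threshold. It assumes the sentence-transformers release of LaBSE, treats the Google-translated text as already available, and is not the authors' implementation.

from sentence_transformers import SentenceTransformer, util

# LaBSE as released through sentence-transformers; the paper does not specify the exact package.
labse = SentenceTransformer("sentence-transformers/LaBSE")

def keep_translation(english: str, arabic: str, threshold: float = 0.80) -> bool:
    """Keep a translated sample only if its cross-lingual similarity meets the threshold."""
    embeddings = labse.encode([english, arabic], convert_to_tensor=True, normalize_embeddings=True)
    similarity = util.cos_sim(embeddings[0], embeddings[1]).item()
    return similarity >= threshold

# Hypothetical question-answer pair: both sides must pass the filter.
sample = {
    "question_en": "What color is the bus?", "question_ar": "ما لون الحافلة؟",
    "answer_en": "The bus is red.", "answer_ar": "الحافلة حمراء.",
}
accepted = (keep_translation(sample["question_en"], sample["question_ar"])
            and keep_translation(sample["answer_en"], sample["answer_ar"]))
print("accepted" if accepted else "rejected")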
SEED-Bench is translated via our translation and filtering pipeline as described in Section 4.1. LLaVA-Bench. The LLaVA-Bench (Liu et al., 2023b) comprises 30 images, which the authors randomly select from the COCO-Val-2014 dataset. For each image, three questions are generated, resulting in 90 instances. These questions fall into three categories: conversational, detailed description, and complex reasoning. This benchmark evaluates the model’s performance across conversation, description, and reasoning, using GPT-4 scoring. Henna Benchmark. As Arabic culture may be underrepresented in current English MLLMs datasets, we develop Henna, a new benchmark for testing purposes only. Henna comprises attractions, food, events, and other Arabic-relevant objects, consisting of 1,132 samples that have been manually curated and reviewed to ensure quality and relevance. More details about how we create Henna and how we use it for evaluation are in Section 5.2.4. 5 Experiments 5.1 Implementation Details In the first training stage, we use the 916k imagetext pairs described in Section 4.2 to train Peacock models. The pretraining phase spans three epochs with a batch size of 32 and a learning rate of 1e-3. As previously described, all model parameters are kept frozen except for the projection layer in the case of InstructBlip-based models and the MLP connector for LLaVA-based models. During the second training phase, we utilize the instructions dataset introduced in Section 4.3. The models are further fine-tuned for three epochs with a batch size of eight and a learning rate of 2e-5. As mentioned before, only the introduced LoRA parameters are trainable, with the addition of the projection layer in the case of InstructBlip-based models and MLP connector for LLaVA-based models. For the training objective, we follow the language modeling approach where the model predicts the next text token given previously predicted text tokens and the visual features. Concretely, our goal is to maximize the probability of the next token or, for mathematical convenience, minimize the negative log-likelihood. The loss is calculated only on the response generated by the model. The instructions and visual tokens are skipped during these calculations. 5.2 Results and Discussion We evaluate our suite of models on a range of typical vision-language tasks and benchmarks. In addition, we show our models’ performance on our novel Arabic cultural dataset, Henna. Since this is the first work on Arabic MLLMs, we adapt popular benchmarks in the literature to our case. Mainly, these are a VQA-tasks benchmark, LLaVABench (Liu et al., 2023b), and SEED-Bench (Li et al., 2023d). We also evaluate the performance on Henna benchmark and conduct a case study focusing on the Egyptian dialect. This establishes the first comprehensive benchmark for future works in Arabic MLLMs. We further compare our models with the multilingual mBlip model (Geigle et al., 2023) as a baseline for completeness. The mBlip model is trained on 96 languages, including Arabic. 5.2.1 General VQA In general VQA tasks, the challenge involves answering textual questions about images, requiring learning and integrating visual and textual information. This demonstrates a deep understanding of the interconnectedness between the two modalities. To evaluate performance in general VQA, our validation process includes three datasets: VQAv2 (Goyal et al., 2017), OKVQA (Marino et al., 2019), and GQA (Hudson and Manning, 2019). 
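Before turning to the evaluation, the response-only objective described in Section 5.1 can be illustrated with a short, hypothetical PyTorch sketch: labels at visual and instruction positions are set to the ignore index, so the next-token cross-entropy is computed over response tokens only. The shapes and the mask below are toy placeholders, not the authors' implementation.

import torch
import torch.nn.functional as F

IGNORE_INDEX = -100  # positions with this label do not contribute to the loss

def response_only_loss(logits: torch.Tensor, input_ids: torch.Tensor,
                       response_mask: torch.Tensor) -> torch.Tensor:
    """Next-token negative log-likelihood computed only where response_mask is True.

    logits:        (batch, seq_len, vocab)
    input_ids:     (batch, seq_len) token ids (visual placeholders + instruction + response)
    response_mask: (batch, seq_len) True for response tokens, False elsewhere
    """
    labels = input_ids.clone()
    labels[~response_mask] = IGNORE_INDEX          # skip visual and instruction positions
    shift_logits = logits[:, :-1, :].contiguous()  # each position predicts the next token
    shift_labels = labels[:, 1:].contiguous()
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
        ignore_index=IGNORE_INDEX,
    )

# Toy shapes for illustration only.
batch, seq_len, vocab = 2, 16, 32000
loss = response_only_loss(
    torch.randn(batch, seq_len, vocab),
    torch.randint(0, vocab, (batch, seq_len)),
    torch.arange(seq_len).expand(batch, seq_len) >= 10,  # pretend the last positions are the response
)
print(loss.item())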
Notably, evaluation of English VQA tasks is typically performed through online platforms by submitting results. However, this option is unavailable for Arabic-translated data because these platforms only support English and the original datasets. Since the test sets do not contain ground-truth labels, we evaluate on held-out validation sets. We follow Geigle et al. (2023) in using "exact match accuracy with open generation" to evaluate our models' output. The metric considers an answer correct if it matches any of the ground-truth answers. Table 1 shows model accuracy on these datasets under the zero-shot setting for both the filtered and unfiltered (All) versions of the datasets.

Architecture | LLM | VQAv2 (All) | VQAv2 (Filtered) | OKVQA (All) | OKVQA (Filtered) | GQA (All) | GQA (Filtered)
Baseline
mBlip | mT0-XL-5B | 38.55 | 50.8 | 8.59 | 18.18 | 35.95 | 50.45
mBlip | BLOOMZ-7B | 41.00 | 55.7 | 11.87 | 23.30 | 38.55 | 54.85
Ours
InstructBlip | AraLLaMA | 44.55 | 56.15 | 20.97 | 29.77 | 42.60 | 58.05
InstructBlip | AceGPT | 39.00 | 51.20 | 10.69 | 16.82 | 37.00 | 57.60
LLaVA | AraLLaMA | 40.85 | 52.45 | 14.79 | 25.57 | 33.45 | 49.75
LLaVA | AceGPT | 41.45 | 56.65 | 15.14 | 26.36 | 33.27 | 52.20
Table 1: The zero-shot performance of our Peacock models against mBlip on the dev sets of different VQA datasets. Models are evaluated on exact match with the open-generation metric, where an answer is considered correct if it matches any ground-truth answer. The baseline is mBlip with different LLMs (mT0-XL-5B and BLOOMZ-7B).

It is evident from Table 1 that the top-performing Peacock model, InstructBlip with the AraLLaMA LLM, significantly outperforms the best version of mBlip, which integrates the BLOOMZ-7B LLM, by an average margin of 4.5 points. A comparative analysis of all models also reveals significant performance improvements when only the filtered high-quality data is included. This enhancement is consistently observed across all models and tasks, highlighting the crucial role of data quality in the effectiveness of these models. Furthermore, we observe that the choice of the underlying LLM has a significant impact on performance. This is the case if we compare integrating AraLLaMA versus AceGPT in our overall MLLMs. Specifically, the InstructBlip model integrated with AraLLaMA demonstrates superior performance across all tasks and datasets when using either filtered or unfiltered (All) data in training. This performance disparity is likely attributable to the inherent differences in how these LLMs handle Arabic, with AraLLaMA being more effective due to its extensive training and its ability to align with visual information. In addition, it is worth noting that the performance of Peacock models varies considerably depending on the task, with a general trend of models performing better on the VQAv2 task than on OKVQA and GQA. Such variations can be attributed to each task's inherent complexities and specific requirements, including the sophistication of the presented questions and the nature of the required visual understanding.

5.2.2 LLaVA-Bench
For evaluation using LLaVA-Bench, we follow the method of Liu et al. (2023b). Table 2 displays our models' performance across the three metrics of the LLaVA-Bench.

Architecture | LLM | Conv | DD | CR | Avg
mBlip | BLOOMZ-7B | 55.26 | 47.89 | 55.43 | 52.90
InstructBlip | AraLLaMA | 84.56 | 80.00 | 82.11 | 82.27
InstructBlip | AceGPT | 73.28 | 61.40 | 72.67 | 69.13
LLaVA | AraLLaMA | 75.62 | 65.01 | 72.33 | 71.07
LLaVA | AceGPT | 77.81 | 68.85 | 73.89 | 73.61
Table 2: Performance of Peacock models and mBlip on LLaVA-Bench, scored by GPT-4. Conv: Conversation. DD: Details Description. CR: Complex Reasoning.
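The GPT-4 scoring behind Table 2 follows the LLaVA-Bench protocol of Liu et al. (2023b); the sketch below only illustrates the general judge-style recipe with the openai client. The judging prompt, the 1-10 scale parsing, and the example Arabic strings are illustrative assumptions, not the prompt used in the paper.

import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

JUDGE_TEMPLATE = (
    "You are rating an assistant's answer to a question about an image.\n"
    "Question: {question}\nReference answer: {reference}\nAssistant answer: {answer}\n"
    "Reply with a single integer score from 1 to 10."
)

def judge_score(question: str, reference: str, answer: str, model: str = "gpt-4") -> int:
    """Ask a judge model for a 1-10 quality score (illustrative prompt, simplified parsing)."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": JUDGE_TEMPLATE.format(
            question=question, reference=reference, answer=answer)}],
    )
    match = re.search(r"\d+", response.choices[0].message.content)
    return int(match.group()) if match else 0

# Hypothetical usage for one Arabic benchmark item.
score = judge_score(
    question="ما الذي يظهر في الصورة؟",
    reference="تظهر الصورة فيلًا رماديًا يقف بجانب شجرة.",
    answer="فيل رمادي يقف قرب شجرة.",
)
print(score)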
Despite the limited data and resources, this suggests a burgeoning capability for multi-modal comprehension in Arabic. Under the same training conditions, the integration of InstructBlip with AraLLaMA notably excels within the Peacock suite. It achieves an average score of 82.27 on the GPT-4 scale, a significant 9.4 margin over the LLaVA model combined with AceGPT. As shown in Table 2, all Peacock models surpass the mBlip-BLOOMZ-7B baseline in the three metrics of the LLaVA-Bench. 5.2.3 SEED-Bench For our third benchmark, we adapt the SEEDBench for Arabic and use it to evaluate our models. Table 3 and Figure 1 present an evaluation of Peacock models across a broad spectrum of visual understanding dimensions within SEED-Bench, 12758 Architecture LLM IA II IN IL IC SU SR VR mBlip mT0-XL-5B 42.04 42.76 59.79 43.35 42.09 59.37 38.20 60.42 InstructBlip AraLLaMA 49.91 55.33 58.76 43.25 45.85 65.52 38.20 68.88 AceGPT 49.16 55.43 56.7 43.46 45.69 66.72 36.99 71.60 LLaVA AraLLaMA 41.98 48.66 46.39 38.75 39.72 59.34 36.83 64.95 AceGPT 31.10 38.61 43.30 35.89 25.50 45.57 31.20 51.06 Table 3: Evaluation of mBlip and Peacock models on SEED-Bench across various attributes: Instance Attributes (IA), Instance Identity (II), Instance Interaction (IN), Instance Location (IL), Instances Counting (IC), Scene Understanding (SU), Spatial Relation (SR), and Visual Reasoning (VR). where a diverse range of performance efficiencies is observed. LLaVA-AraLLaMA emerges as a particularly robust model, excelling in visual reasoning and scene understanding with accuracy scores of 68.88% and 65.52%, respectively. However, it displays weaknesses in spatial relations and instance location. Mirroring this trend, LLaVA-AceGPT showcases strengths in scene understanding and visual reasoning (66.72% and 71.6%, respectively), but marginally underperforms in instance interaction and spatial relations compared to LLaVAAraLLaMA. In contrast, InstructBlip-AraLLaMA, while proficient in scene understanding and Visual Reasoning (59.34% and 64.95%), falls short in Instance Attributes and Counting, resulting in a lower overall accuracy of 46.43%. InstructBlipAceGPT, the model with the most modest performance, achieves its best results in visual reasoning and instance interaction (51.06% and 43.3%), but struggles significantly with instance counting and scene understanding. In contrast to mBlip, which outperforms Peacock models only in one dimension (instance interaction) and achieves the same score in the spatial relation dimension as InstructBlip-AraLLaMA. This comparative analysis underscores the superiority of LLaVA-based models in the Peacock family on SEED-Bench, especially those with AraLLaMA, over InstructBlip models in most tasks. This could be attributed to the capability of AraLLaMA in understanding and ability of align information coming from the vision encoder on the one hand and the input questions about the input image, on the other hand. Meanwhile, the InstructBlip models, particularly those with AceGPT LLM, reveal limitations in broader visual understanding tasks. The marked variation in performance between AraLLaMA and AceGPT within the same model base highlights the significant impact of language model selection on visual task performance, Architecture LLM Helpfulness Relevance Accuracy Level of Details mBlip mT0-XL-5B 34.11 39.15 35.11 20.74 InstructBlip AraLLaMA 62.34 68.97 49.68 49.83 Table 4: Evaluation of InstructBlip-AraLLaMA against mBlip-mt0-xl models on Henna, using GPT-4. 
offering valuable insights into the inherent abilities (and limitations) of contemporary MLLMs.

5.2.4 Henna Benchmark
Henna was developed to establish a standard for evaluating Arabic MLLMs on elements particularly related to Arabic culture, such as food, customs, and landmarks. The dataset was created by prompting GPT-4V (OpenAI, 2023) to generate descriptions of images based on questions, while providing it with relevant Wikipedia context. Images were carefully selected to represent the culture of 11 Arab countries. To achieve this, we selected images from Wikipedia and the corresponding articles to create the context for GPT-4V during the generation process. The images represent a range of countries, including Algeria, Egypt, Iraq, Jordan, Morocco, Palestine, Saudi Arabia, Syria, Tunisia, the United Arab Emirates, and Yemen. We identified ten top attractions from each country within categories such as traditional food and cuisine, local customs, historical monuments and sites, common activities and lifestyles, and architectural styles and notable buildings. Figure A.10 demonstrates selected examples from the dataset's images. For each attraction, we used GPT-4V to generate ten questions. Each image was accompanied by its Wikipedia article to provide comprehensive context. This approach yielded a minimum of ten images per country, resulting in a total of 1,132 question-answer pairs across all countries. An example of the dataset generation process is illustrated in Figure 5, and Figure A.11 demonstrates four randomly selected images with a generated question-answer pair for each. Moreover, Figure A.13 shows examples of questions and answers generated by the Henna pipeline from the image depicted in Figure A.12. These questions and answers were translated into English to provide a qualitative assessment of the dataset's quality.

Figure 5: Dataset generation example using GPT-4V. The figure shows, for one attraction in Yemen, the input image, the Arabic Wikipedia excerpt used as context, the English generation prompt, and the ten generated Arabic question-answer pairs (Q1-A10). For each site, an image and its corresponding Wikipedia article were used to provide GPT-4V with rich contextual information, and the model then generated ten contextually relevant questions and answers per image.

The evaluation process utilizes GPT-4 to assess each model's responses based on four criteria: Helpfulness, Relevance, Accuracy, and Level of Details. Each criterion is rated on a scale from one to ten, with higher scores indicating better responses. The evaluation involves GPT-4 reviewing a question and its correct answer in Arabic, followed by the model's response, which is then rated according to the aforementioned criteria. The results are formatted as a JSON object with keys corresponding to each criterion. Figure A.17 illustrates examples where GPT-4's evaluations varied in quality, showing high, medium, and low evaluations for different models' responses. An example of the prompt used in the evaluation is shown in Figure A.18. The leading model from the Peacock suite was evaluated against the multilingual model mBlip following our benchmark. The data presented in Table 4 demonstrates the superiority of the InstructBlip model paired with AraLLaMA, setting a benchmark for future models in terms of their ability to comprehend and recognize aspects of Arabic culture. Figure 6 shows an example response from Peacock along with a response from GPT-4V, illustrating the practical application of these findings. This structured evaluation method, where GPT-4 serves both as the subject and the evaluator, facilitates a quantitative analysis of the model's performance in understanding and responding to visual questions in Arabic.

5.2.5 Qualitative Analysis
In our qualitative analysis of Peacock models, we select two random samples from each question type previously described in Section 5.2.2, totaling six samples. Figure A.23 displays the answers by all Peacock models to these six questions, accompanied by their corresponding images. For the conversation-type questions, one direct question involves asking about the color of an elephant. While InstructBlip integrated with AceGPT fails to provide the correct answer, all other models succeed. In the second conversation example, the LLaVA-based models are unable to answer a question about counting donuts. In the detail-type questions, all models provide answers that are closely related to the details of the objects in the images, albeit with some hallucinations. For the complex-type questions, all models provide subjective answers, which, despite offering slightly different conclusions about the image, can still be considered correct. In summary, InstructBlip integrated with AraLLaMA provides accurate and more helpful answers for most of the three types of questions.

5.2.6 Case Study with Egyptian Dialect
Attention to dialectal Arabic in the NLP research community has been insufficient to date, with a complete absence when it comes to MLLMs.
Addressing this gap, we conduct the first study on the capabilities of MLLMs in generating dialectal Arabic, focusing on the Egyptian dialect. Out of the box, our fine-tuned models were able to understand questions posed in the Egyptian dialect but responded in MSA. Following this observation, we had a professional Egyptian translator transform a subset of 1k random samples from our instruction tuning dataset into the Egyptian dialect. This small dataset is then used to further fine-tune our InstructBlip-based Peacock models following the previously mentioned experimental setup. Surprisingly, as seen in Figure 7, our Peacock models are capable of generating solid answers in the Egyptian dialect when instructed on this small sample, while keeping their MSA fluency intact. This could be due to the fact that our LLMs have seen dialectal Egyptian Arabic during their pretraining. To provide a measurable evaluation, we further transform 20 samples from our instruction tuning evaluation set into the Egyptian dialect. Using these samples, we appoint four native Egyptian speakers to anonymously score the responses of our models against GPT-4. The evaluation was based on two criteria: the accuracy of the model's response to the question (rated on a scale from 1 to 10) and the authenticity of the Egyptian dialect (rated on a scale from 1 to 10). To ensure transparency, the answers from the models were anonymized before being presented to the annotators. As shown in Figure 8, Peacock models exhibited greater closeness to the Egyptian dialect than GPT-4V, even when the latter was specifically instructed to respond in the Egyptian dialect. On the other hand, our dialectal models lag slightly in the accuracy of the answers, which we assume can be alleviated by providing sufficient training data, a task we leave for future work. More details and examples on the case study are provided in Appendix A.8.

Figure 6: Examples of responses from Peacock and GPT-4V regarding an image related to Yemeni culture. The question and both responses are in MSA.
Figure 7: Both Peacock and GPT-4V accurately respond to a question in the Egyptian dialect. While GPT-4V provides a slightly more detailed answer, it does so in MSA. In contrast, Peacock's response is in the same Egyptian dialect as the question.
Figure 8: Human evaluation results on the accuracy and authenticity of model responses to questions about images in the Egyptian dialect. "IB-AraLLaMA" denotes our InstructBlip-AraLLaMA model, and "IB-AceGPT" refers to our InstructBlip-AceGPT model.
"GPT-4V DI" is the GPT-4V model explicitly instructed to respond in the Egyptian dialect."GPT-4V" represents the GPT-4V model, which is given a question in the Egyptian dialect, similar to how Peacock models are instructed. 6 Conclusion In this work, we present the family of Peacock models. Peacock demonstrates significant advancements in Arabic MLLMs, showcasing remarkable abilities in interpreting visual data in Arabic language. These models bridge the gap in multimodal understanding capabilities for Arabic and Egyptian dialects by introducing a suite of models, with various reasoning skills, accompanied by a diverse collection of datasets and benchmarks carefully prepared. This includes our Henna benchmark, designed to assess MLLM tasks focused on the Arabic culture. The development of Peacock sets strong baselines and a new benchmark for future work in Arabic MLLMs, highlighting the importance of high-quality data processing and the selection of language models for multimodal task performance. 12761 7 Limitations We identify a number of limitations for our work, as follows: • Peacock models have demonstrated remarkable abilities in interpreting visual data in Arabic. However, these models can struggle with object hallucination, where the generated descriptions or answers may include references to objects that do not exist in the input image, along with unnecessary details. • Additionally, translation errors can significantly impact the model’s performance and propagate through the training data. We have identified several such errors in the model’s responses. For example, the English word ’sitting’ typically indicates the location of an object. However, the Google API often mistranslates it to suggest that the object is lying down, as seen in the translation of ’The train is sitting at the station’ to é¢jÖÏ@ ú ¯ ËAg. PA¢®Ë@, where ËAg. inaccurately implies that the train is lying down. • Moreover, the Peacock model’s capabilities are further limited in recognizing text within images. This limitation stems from the fact that our training datasets do not include imagetext pairs. 8 Ethics Statement Energy Efficiency. Our Peacock models, like many large MLLMs, require significant pretraining time and are not energy-efficient. We acknowledge this critical issue and support continued research towards developing energy-efficient models. Data. Our pretraining datasets are translated from publicly available English data, encompassing diverse genres, communities and varieties. Our Peacock models demonstrate potential in applications involving several Arabic varieties, serving broad populations. Human Annotation. Three authors of this paper, all Arabic native speakers with PhD degrees and extensive NLP experience, conducted the human annotation. They are full-time employees of our research group, with data annotation as part of their job duties. No IRB review was necessary as the project used publicly available data without requiring access to any private accounts. Applications. While Peacock, like many MLLMs, can be misused, it also holds promise for beneficial applications in education, health, and more. Responsible deployment and use are crucial to maximizing its positive impact. It would also help keep Arabic varieties in use in written form in the digital age. AI Usage. ChatGPT was used to corect grammar in some early stages of the paper writing by some of the authors. This utilization was strictly for the purpose of enhancing the linguistic precision. 
Our research team independently carried out the fundamental research, analysis, development, and paper writing. References Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774. Imad Afyouni, Imtinan Azhar, and Ashraf Elnagar. 2021. Aracap: A hybrid deep learning architecture for arabic image captioning. Procedia Computer Science, 189:382–389. Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. 2022. Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35:23716–23736. Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. 2023. Palm 2 technical report. arXiv preprint arXiv:2305.10403. Anfal Attai and Ashraf Elnagar. 2020. A survey on arabic image captioning systems using deep learning models. In 2020 14th International Conference on Innovations in Information Technology (IIT), page 114–119. IEEE. Rohan Bavishi, Erich Elsen, Curtis Hawthorne, Maxwell Nye, Augustus Odena, Arushi Somani, and Sa˘gnak Ta¸sırlar. 2023. Fuyu-8b: A multimodal architecture for ai agents. https://www.adept.ai/ blog/fuyu-8b. Accessed: 2024-01-01. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. 2021. Conceptual 12m: Pushing webscale image-text pre-training to recognize long-tail visual concepts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3558–3568. 12762 ChatGPT. 2024. Openai. www.openai.com. Version GPT-4. Zhihong Chen, Shuo Yan, Juhao Liang, Feng Jiang, Xiangbo Wu, Fei Yu, Guiming Hardy Chen, Junying Chen, Hongbo Zhang, Li Jianquan, et al. 2023. Multilingualsift: Multilingual supervised instruction fine-tuning, july 2023b. URL https://github. com/FreedomIntelligence/MultilingualSIFT. git. Marta R Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, et al. 2022. No language left behind: Scaling human-centered machine translation. arXiv preprint arXiv:2207.04672. Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. 2023. Instructblip: Towards general-purpose vision-language models with instruction tuning. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. Muhy Eddin Za’ter and Bashar Talafha. 2022. Benchmarking and improving arabic automatic image captioning through the use of multi-task learning paradigm. arXiv e-prints. Ibrahim Abu El-Khair. 2016. 
1.5 billion words arabic corpus. arXiv preprint arXiv:1611.04033. Samar Elbedwehy and T Medhat. 2023. Improved arabic image captioning model using feature concatenation with pre-trained word embedding. Neural Computing and Applications, page 1–17. Obeida ElJundi, Mohamad Dhaybi, Kotaiba Mokadam, Hazem M Hajj, and Daniel C Asmar. 2020. Resources and end-to-end neural network models for arabic image captioning. In VISIGRAPP (5: VISAPP), page 233–241. Jonathan Emami, Pierre Nugues, Ashraf Elnagar, and Imad Afyouni. 2022. Arabic image captioning using pre-training of deep bidirectional transformers. In Proceedings of the 15th International Conference on Natural Language Generation, page 40–51. Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang. 2020. Languageagnostic bert sentence embedding. arXiv preprint arXiv:2007.01852. Gregor Geigle, Abhay Jain, Radu Timofte, and Goran Glavaš. 2023. mblip: Efficient bootstrapping of multilingual vision-llms. arXiv preprint arXiv:2307.06930. Google Cloud. Google translation api. https:// cloud.google.com/translate. Accessed: 2023-12-01. Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6904–6913. Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, et al. 2023. Textbooks are all you need. arXiv preprint arXiv:2306.11644. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685. Huang Huang, Fei Yu, Jianqing Zhu, Xuening Sun, Hao Cheng, Dingjie Song, Zhihong Chen, Abdulmohsen Alharthi, Bang An, Juncai He, Ziche Liu, Zhiyi Zhang, Junying Chen, Jianquan Li, Benyou Wang, Lian Zhang, Ruoyu Sun, Xiang Wan, Haizhou Li, and Jinchao Xu. 2023. Acegpt, localizing large language models in arabic. Drew A Hudson and Christopher D Manning. 2019. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6700–6709. Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral 7b. arXiv preprint arXiv:2310.06825. Karima Kadaoui, Samar M Magdy, Abdul Waheed, Md Tawkat Islam Khondaker, Ahmed Oumar ElShangiti, El Moatez Billah Nagoudi, and Muhammad Abdul-Mageed. 2023. Tarjamat: Evaluation of bard and chatgpt on machine translation of ten arabic varieties. arXiv preprint arXiv:2308.03051. Sarah M Kamel, Shimaa I Hassan, and Lamiaa Elrefaei. 2023. Vaqa: Visual arabic question answering. Arabian Journal for Science and Engineering, pages 1–21. Ariel N Lee, Cole J Hunter, and Nataniel Ruiz. 2023. Platypus: Quick, cheap, and powerful refinement of llms. arXiv preprint arXiv:2308.07317. Bo Li, Peiyuan Zhang, Jingkang Yang, Yuanhan Zhang, Fanyi Pu, and Ziwei Liu. 2023a. Otterhd: A highresolution multi-modality model. arXiv preprint arXiv:2311.04219. Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Fanyi Pu, Jingkang Yang, Chunyuan Li, and Ziwei Liu. 2023b. Mimic-it: Multi-modal in-context instruction tuning. 
arXiv preprint arXiv:2306.05425. Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu. 2023c. Otter: A 12763 multi-modal model with in-context instruction tuning. arXiv preprint arXiv:2305.03726. Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, and Ying Shan. 2023d. Seed-bench: Benchmarking multimodal llms with generative comprehension. arXiv preprint arXiv:2307.16125. Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. 2023e. Blip-2: Bootstrapping language-image pretraining with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597. Lei Li, Yuwei Yin, Shicheng Li, Liang Chen, Peiyi Wang, Shuhuai Ren, Mukai Li, Yazheng Yang, Jingjing Xu, Xu Sun, Lingpeng Kong, and Qi Liu. 2023f. M3it: A large-scale dataset towards multimodal multilingual instruction tuning. Shenggui Li, Hongxin Liu, Zhengda Bian, Jiarui Fang, Haichen Huang, Yuliang Liu, Boxiang Wang, and Yang You. 2023g. Colossal-ai: A unified deep learning system for large-scale parallel training. In Proceedings of the 52nd International Conference on Parallel Processing, page 766–775. Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In Computer Vision– ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740–755. Springer. Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. 2023a. Improved baselines with visual instruction tuning. Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2023b. Visual instruction tuning. Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. 2019. Ok-vqa: A visual question answering benchmark requiring external knowledge. In Proceedings of the IEEE/cvf conference on computer vision and pattern recognition, pages 3195–3204. Abdelrahman Mohamed, Fakhraddin Alwajih, El Moatez Billah Nagoudi, Alcides Alcoba Inciarte, and Muhammad Abdul-Mageed. 2023. Violet: A vision-language model for arabic image captioning with gemini decoder. arXiv preprint arXiv:2311.08844. El Moatez Billah Nagoudi, Muhammad Abdul-Mageed, AbdelRahim Elmadany, Alcides Alcoba Inciarte, and Md Tawkat Islam Khondaker. 2022. Jasmine: Arabic gpt models for few-shot learning. arXiv preprint arXiv:2212.10755. El Moatez Billah Nagoudi, AbdelRahim Elmadany, Muhammad Abdul-Mageed, Tariq Alhindi, and Hasan Cavusoglu. 2020. Machine generation and detection of arabic manipulated and fake news. In Proceedings of the Fifth Arabic Natural Language Processing Workshop, pages 69–84. Lorenz Nigst, Maxim Romanov, Sarah Bowen Savant, Masoumeh Seydi, and Peter Verkinderen. 2020. Openiti: A machine-readable corpus of islamicate texts. nd http://doi. org/10.5281/zenodo, 4075046. OpenAI. 2023. Gpt-4v(ision) system card. Vicente Ordonez, Girish Kulkarni, and Tamara Berg. 2011. Im2text: Describing images using 1 million captioned photographs. Advances in neural information processing systems, 24. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. 
Learning transferable visual models from natural language supervision. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485–5551. Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. 2021. Laion-400m: Open dataset of clipfiltered 400 million image-text pairs. arXiv preprint arXiv:2111.02114. Neha Sengupta, Sunil Kumar Sahu, Bokang Jia, Satheesh Katipomu, Haonan Li, Fajri Koto, Osama Mohammed Afzal, Samta Kamboj, Onkar Pandit, Rahul Pal, et al. 2023. Jais and jais-chat: Arabic-centric foundation and instruction-tuned open generative large language models. arXiv preprint arXiv:2308.16149. Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2556–2565. Pedro Javier Ortiz Suárez, Benoît Sagot, and Laurent Romary. 2019. Asynchronous pipeline for processing huge corpora on medium to low resource infrastructures. In 7th Workshop on the Challenges in the Management of Large Corpora (CMLC-7). LeibnizInstitut für Deutsche Sprache. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti 12764 Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems, 30. Wikipedia contributors. 2024. Wikipedia: The Free Encyclopedia. https://www.wikipedia.org/. [Online; accessed 2024-01-15]. Zhiyang Xu, Ying Shen, and Lifu Huang. 2022. Multiinstruct: Improving multi-modal zero-shot learning via instruction tuning. arXiv preprint arXiv:2212.10773. Zhengyuan Yang, Linjie Li, Kevin Lin, Jianfeng Wang, Chung-Ching Lin, Zicheng Liu, and Lijuan Wang. 2023. The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421, 9(1). Shukang Yin, Chaoyou Fu, Sirui Zhao, Ke Li, Xing Sun, Tong Xu, and Enhong Chen. 2023. A survey on multimodal large language models. arXiv preprint arXiv:2306.13549. Imad Zeroual, Dirk Goldhahn, Thomas Eckart, and Abdelhak Lakhouaja. 2019. Osian: Open source international arabic news corpus-preparation and integration into the clarin-infrastructure. In Proceedings of the Fourth Arabic Natural Language Processing Workshop, page 175–182. Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. 2023. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592. 
A Appendices 12765 Appendices We provide an addition organized as follows: • Image Attribution A.1. • Large Language Models A.2. • Translation and Filtering A.3. • AraLLaMA A.4. • Models Parameters A.5. • Henna Generation Pipeline Examples A.6. • Qualitative Analysis Examples A.7. • Case Study with Egyptian Dialect Details and Examples A.8. A.1 Image Attribution All figures presented in this document are sourced from the COCO dataset and Wikipedia (Wikipedia contributors, 2024), unless otherwise specified. A.2 Large Language Models The field of LLMs has seen significant growth due to increased data availability and computational capabilities. Initial models such as BERT (Devlin et al., 2018) and T5 (Raffel et al., 2020), along with decoder-focused models like GPT (Radford et al., 2019), utilized the Transformer architecture (Vaswani et al., 2017) to achieve remarkable results in various NLP tasks. The advent of GPT3 (Brown et al., 2020) marked a shift towards decoder-only structures, emphasizing auto-regressive decoding for prediction generation. Following models, including PaLM (Anil et al., 2023), expanded the scope of parameters and dataset sizes. Meanwhile, models like InstructGPT (Ouyang et al., 2022), mistral (Jiang et al., 2023), ChatGPT (ChatGPT, 2024) integrated fine-tuning and reinforcement learning to enhance conversational abilities (Ouyang et al., 2022). More recently, Direct Preference Optimization (DPO) technique has been developed to finetune language models (LMs) to align with human preferences (Rafailov et al., 2023). The advancements in LLMs were not limited to the English language only, recent works in Arabic LLMs showed promising performance. (Nagoudi et al., 2022) presented Jasmine, a strong Arabic text decoder capable of performing Arabic text generation and classification tasks. While (Huang et al., 2023; Sengupta et al., 2023) followed (ChatGPT, 2024) instructions fine-tuning and introduced language decoders based on LLaMA2 (Touvron et al., 2023) capable of following instructions and holding a coherent conversation in Arabic. These advancements, coupled with contributions from the opensource community, have established new standards and created new opportunities for research in the NLP field. A.3 Translation and Filtering Dataset Type All Filtered Train Dev Test Train Dev Test COCO2017 Captioning 590k 25k – 527k 22k – LLaVA Pretrain (LCS) Captioning 558k – – 389k – – LLaVA instruct 150k Instructions 271k – – 204k – – VQAv2 VQA 440k 214k – 351k 172k – OKVQA VQA 9k 4k – 7k 3k – GQA VQA 938k 132k – 638k 89k – SEED-Bench Benchmark – – 14k – – 7k LLaVA-Bench Benchmark – – 90 – – 90 Henna-Bench Benchmark – – – – – 1k Table A.5: Breakdown of our publicly released pretraining and instruction finetuning Arabic datasets. Only the filtered datasets highlighted in green and cyan are used for pretraining and instruction finetuning, respectively. English Arabic Translation Similarity A sidewalk that has a microwave sitting on itرصيف به ميكروويف0.305786 A elephant reaching out with some thing with its nose.فيل يمد يده بشيء بأنفه. 0.493334 A man standing in front of a clock.رجل يقف أمام الساعة. 0.966210 Figure A.9: Three examples illustrating the variations in translation quality from English to Arabic, ranging from good to moderate to poor, as indicated by the similarity scores. In the literature, ElJundi et al. (2020) found that Google translates API translations to be suboptimal, while Mohamed et al. 
(2023) showed that No Language Left Behind (NLLB) project (Costajussà et al., 2022) performs better than the free Google translate API integrated into Google Sheets. However, our investigation reveals that the latest Google Translate API has superior performance and provides higher-quality translations than NLLB. This observation aligns with the recent findings of Kadaoui et al. (2023). To evaluate the effectiveness of filtering in our study, we randomly selected a subset of 250 samples from the VQAv2 dataset and performed human annotations to assess the accuracy of question-andanswer (Q&A) translations in both filtered and unfiltered (All) segments. The analysis revealed that out of the 250 samples in the unfiltered segment, 35 were incorrectly translated. In contrast, within 12766 the filtered segment, which comprised 207 samples after the application of a filtering mechanism, only 7 were incorrectly translated. The filter effectively removed 43 samples, of which 31 were inaccurately translated, indicating that the majority of removed items were indeed problematic, while 12 were accurately translated. Precision Recall F1 Score Accuracy 72.09% 88.57% 79.49% 93.52% Table A.6: Performance metrics for the filtering process To quantify the effectiveness of the filtering process, we calculated several metrics as shown in Table A.6. These results, presented in Table A.6, demonstrate that the filtering approach substantially enhances the quality of the dataset by effectively identifying and removing incorrectly translated Q&A pairs. Table A.5 presents the statistics of the translated data before and after the filtering process. As the table indicates, the reduction ratio due to filtering is approximately 20% A.4 AraLLaMA Pretraining Data. AraLLaMA, is an autoregressive pretrained language model based on LLaMA27B. We further train it on a large and diverse Arabic dataset, including all categories of Arabic, namely Classical Arabic (CA), Dialectal Arabic (DA), and MSA. This data is aggregated from various sources: AraNewsv2 (Nagoudi et al., 2020), El-Khair (ElKhair, 2016), Gigaword,3 OSCAR (Suárez et al., 2019), OSIAN (Zeroual et al., 2019), Wikipedia Arabic, and Hindawi Books.4 We also derived ArabicWeb22 (A) and (B) from the open source Arabic text 2022.5 Training Strategy. LLaMA-2’s original vocabulary wasn’t specifically optimized for Arabic and had a limited selection of Arabic words (it includes only 28 Arabic alphabet), leading to inadequate comprehension of Arabic language data. The initial step to address this involved expanding LLaMA2’s vocabulary. Expanding the vocabulary significantly enhanced the efficiency of encoding string sequences and enriched these sequences with more meaningful information, which was especially beneficial for document-level understanding and en3LDC Catalog Link 4OpenITI corpus (v1.6) (Nigst et al., 2020). 5ArabicText-2022 data coding (Li et al., 2023g). However, the limited amount of data available for continual pre-training meant that a substantial increase in vocabulary could introduce words or phrases that lack practical significance, posing a challenge in learning them effectively from the pre-training dataset and negatively affecting the model’s performance. Moreover, a larger vocabulary size would increase the number of embedding-related parameters, which could impact training efficiency. 
Therefore, after conducting numerous experiments and considering the balance between training quality and efficiency, we increased the vocabulary size from the original 32,000 words in LLaMA-2 to 60,000 for the Arabic version. With the expanded vocabulary, the next step involved initializing the embeddings for the new vocabulary based on the original LLaMA-2 (7B) model. To ensure a smooth transition of capabilities from the original LLaMA-2 to the Arabic LLaMA-2 while keeping the English proficiency unaffected in the initial phase, we used a mean initialization method for the new embeddings, utilizing the weights from the original LLaMA-2. This approach not only preserved the English language capabilities but also facilitated the effective transfer of these capabilities to the Arabic language model, enabling LLaMA-2 to function efficiently in both English and Arabic. Moreover, 30 GB of English and Arabic data, was utilized during pre-training. Instruction Fine-tuning. To enhance capabilities of our AraLLaMA we instruct-tuning it on three datasets: Alpaca-GPT4, Evol-instruct, ShareGPT extracted from MultilingualSIFT datasets (Chen et al., 2023). A.5 Models Parameters Model LLM Vision Encoder Q-Former Projection Layer(s) LLM Total InstructBlip AraLLaMA 986M ❄ 186M ❄ 3M ✓✧ 6.968B ✧ 8.142B InstructBlip AceGPT 986M ❄ 186M ❄ 3M ✓✧ 6.738B ✧ 7.913B LLaVA AraLLaMA 304M ❄ — 21M ✓ 6.967B ✧ 7.292B LLaVA AceGPT 304M ❄ — 21M ✓ 6.738B ✧ 7.063B Table A.7: Number of parameters for the main components of the models. ❄denotes frozen parameters during the pretraining and finetuning stages, ✓indicates trainable parameters trained from scratch in the pretraining stage, and ✧represents LoRA finetuning parameters in the instruction finetuning stage. In Table A.7, we provide details of the number of parameters for the main components of the Peacock models, including the distinction between trainable and non-trainable parameters for both the pertaining and instruction finetuning stages. These 12767 distinctions are vital for understanding the model’s structure and optimization strategy. A.6 Henna Generation Pipeline Examples This appendix section includes Figures A.10, A.11, A.13, A.17, and A.18 described in Section 5.2.4. A.7 Qualitative Analysis Example In this section, we provide an example of qualitative analysis as described in Section 5.2.5, illustrated in Figure A.23. A.8 Case Study with Egyptian Dialect Details and Examples Figure A.24 displays three randomly selected examples of question-answer pairs translated from MSA into the Egyptian dialect. Figure A.25 presents four examples—three from Henna and one from LLaVA-Bench—along with the correct MSA answers and responses from two models: InstructBlip-AraLLaMA and InstructBlip-AceGPT , demonstrating the Peacock model’s proficiency in responding in the Egyptian dialect. In Figure A.26, we highlight the superiority of Peacock over GPT-4 when posed with questions in the Egyptian dialect, where GPT-4 responds in MSA while Peacock provides answers in the Egyptian dialect. Conversely, when both models are queried in MSA, they reply in MSA. 12768 Figure A.10: This collection of images showcases a curated subset selected from Henna dataset, representing 11 Arab countries, and capturing the essence of traditional food, local customs, historical monuments, everyday activities, and distinctive architecture that characterize the diverse and rich heritage of each region. 
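As a concrete illustration of the vocabulary expansion and mean-initialization strategy described for AraLLaMA in Appendix A.4 above, the snippet below shows one possible way to grow a LLaMA-2 embedding table from 32,000 to roughly 60,000 entries and initialize the new rows from the original weights. This is a minimal, hypothetical sketch and not the authors' released code: the model and tokenizer paths are placeholders, and "mean initialization" is interpreted here as averaging the original LLaMA-2 embeddings of the sub-pieces that each new Arabic token maps to under the original tokenizer.

```python
# Hypothetical sketch of vocabulary expansion with mean-initialized new embeddings.
# Paths, and the exact averaging scheme, are assumptions for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
old_tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")          # 32,000 pieces
new_tok = AutoTokenizer.from_pretrained("path/to/arabic-extended-tokenizer")  # ~60,000 pieces

old_vocab_size = len(old_tok)
new_vocab_size = len(new_tok)

# Grow the input and output embedding matrices; the added rows start out random.
base.resize_token_embeddings(new_vocab_size)
in_emb = base.get_input_embeddings().weight    # shape: (new_vocab_size, hidden)
out_emb = base.get_output_embeddings().weight  # LM head (tied or separate)

with torch.no_grad():
    for new_id in range(old_vocab_size, new_vocab_size):
        piece = new_tok.convert_ids_to_tokens(new_id)
        text = new_tok.convert_tokens_to_string([piece])
        # Re-tokenize the new piece with the ORIGINAL tokenizer and average the
        # corresponding original embeddings (one reading of "mean initialization").
        old_ids = old_tok(text, add_special_tokens=False).input_ids
        if not old_ids:
            old_ids = list(range(old_vocab_size))  # fall back to the global mean
        in_emb[new_id] = in_emb[old_ids].mean(dim=0)
        out_emb[new_id] = out_emb[old_ids].mean(dim=0)
```

Because the original 32,000 rows are left untouched, this style of initialization keeps the English capability intact while giving the new Arabic pieces a sensible starting point for the continual pre-training stage described above.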
Figure A.11: Demonstration of four randomly selected images, each accompanied by question and answer pairs generated by GPT-4.

Figure A.12: Sample image from Henna for the Giza Pyramids in Egypt.

Figure A.13: Ten questions and answers generated using the image from A.12 with the Henna generation pipeline. All questions and answers have been translated into English to demonstrate the quality of the generated data.

Figure A.14: High-scoring evaluations: The answer from InstructBlip-AraLLaMA is accurate, earning a perfect score from GPT-4. mBlip's response is also rated as reasonable.

Figure A.15: Moderate-scoring evaluations: Here, InstructBlip-AraLLaMA's response, while accurate, does not match the gold standard, resulting in a moderate GPT-4 score. mBlip's scoring reflects a proper evaluation.

Figure A.16: Low-scoring evaluations: InstructBlip-AraLLaMA's response is incorrect, leading to a low GPT-4 score. Conversely, mBlip is awarded a moderate score, indicating a more accurate response.

Figure A.17: Henna evaluation samples showing GPT-4's scoring range for responses from different models. Sub-figures illustrate the spectrum of scoring from high to low, based on the accuracy and relevance of the model-generated answers to the benchmark questions.

Figure A.18: The prompt used to guide GPT-4 to evaluate Henna:
"Given the following question in Arabic: {question}, and the correct answer in Arabic: {correct_answer}, please rate the following aspects of the model's response: helpfulness, relevance, accuracy, and level of detail. Rate and score each aspect on a scale of 1 to 10, where a higher score indicates better overall quality of the model's response. The model's response is enclosed within triple backticks. Format your feedback as a JSON object, using 'helpfulness', 'relevance', 'accuracy', and 'level_of_details' as keys. If the information is not available, use "unknown" as the value. Model response: '''{model_response}'''"

Figure A.19: InstructBlip-AraLLaMA examples.

Figure A.20: InstructBlip-AceGPT examples.

Figure A.21: LLaVA-AraLLaMA examples.

Figure A.22: LLaVA-AceGPT examples.

Figure A.23: Selected examples from the LLaVA benchmark include two from the conversational type, two from the detail type, and two from the complex question type. The top-left responses are from InstructBlip-AraLLaMA, the top-right responses from InstructBlip-AceGPT, the bottom-left responses from LLaVA-AraLLaMA, and the bottom-right responses from LLaVA-AceGPT.

Figure A.24: Three examples of question-answer pairs translated from MSA to the Egyptian dialect.

Figure A.25: Comparative responses from InstructBlip-AraLLaMA and InstructBlip-AceGPT models alongside the correct MSA answers for four selected examples, three from Henna and the last one from LLaVA-Bench, demonstrating the models' proficiency in the Egyptian dialect.

Figure A.26: A demonstration of the Peacock model's advantage over GPT-4V in responding to queries in the Egyptian dialect, with the former responding in the dialect and the latter in MSA. A secondary example shows both models replying in MSA when queried in MSA.
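For readers who want to reproduce the Henna scoring procedure, the evaluation prompt in Figure A.18 can be wired into a simple judging loop. The sketch below is illustrative rather than the paper's actual implementation: it assumes the OpenAI Python client, a generic "gpt-4" judge identifier, and that the judge returns well-formed JSON; error handling for malformed replies is omitted.

```python
# Illustrative sketch (not the authors' code) of scoring answers with a GPT-4 judge
# using the evaluation prompt from Figure A.18. Model name and client setup are assumptions.
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

EVAL_TEMPLATE = (
    "Given the following question in Arabic: {question}, and the correct answer "
    "in Arabic: {correct_answer}, please rate the following aspects of the model's "
    "response: helpfulness, relevance, accuracy, and level of detail. Rate and score "
    "each aspect on a scale of 1 to 10, where a higher score indicates better overall "
    "quality of the model's response. The model's response is enclosed within triple "
    "backticks. Format your feedback as a JSON object, using 'helpfulness', "
    "'relevance', 'accuracy', and 'level_of_details' as keys. If the information is "
    'not available, use "unknown" as the value. '
    "Model response: '''{model_response}'''"
)

def judge(question: str, correct_answer: str, model_response: str) -> dict:
    """Ask the judge model to score one answer and return the parsed JSON scores."""
    prompt = EVAL_TEMPLATE.format(
        question=question,
        correct_answer=correct_answer,
        model_response=model_response,
    )
    reply = client.chat.completions.create(
        model="gpt-4",  # placeholder; the exact judge version is not pinned here
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return json.loads(reply.choices[0].message.content)

def average_scores(score_dicts: list[dict]) -> dict:
    """Average each criterion over a benchmark, skipping 'unknown' values."""
    keys = ("helpfulness", "relevance", "accuracy", "level_of_details")
    means = {}
    for k in keys:
        vals = [d[k] for d in score_dicts if isinstance(d.get(k), (int, float))]
        means[k] = sum(vals) / len(vals) if vals else None
    return means
```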
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1266–1279 August 11-16, 2024 ©2024 Association for Computational Linguistics TaSL: Continual Dialog State Tracking via Task Skill Localization and Consolidation Yujie Feng1, Xu Chu2,3,4, Yongxin Xu2,3, Guangyuan Shi1, Bo Liu1, Xiao-Ming Wu1∗ 1Department of Computing, The Hong Kong Polytechnic University, Hong Kong S.A.R. 2School of Computer Science, Peking University, Beijing, China 3Key Laboratory of High Confidence Software Technologies, Ministry of Education, Beijing, China 4Center on Frontiers of Computing Studies, Peking University, Beijing, China [email protected], [email protected] Abstract A practical dialogue system requires the capacity for ongoing skill acquisition and adaptability to new tasks while preserving prior knowledge. However, current methods for Continual Dialogue State Tracking (DST), a crucial function of dialogue systems, struggle with the catastrophic forgetting issue and knowledge transfer between tasks. We present TaSL, a novel framework for task skill localization and consolidation that enables effective knowledge transfer without relying on memory replay. TaSL uses a novel group-wise technique to pinpoint task-specific and task-shared areas. Additionally, a fine-grained skill consolidation strategy protects task-specific knowledge from being forgotten while updating shared knowledge for bi-directional knowledge transfer. As a result, TaSL strikes a balance between preserving previous knowledge and excelling at new tasks. Comprehensive experiments on various backbones highlight the significant performance improvements of TaSL over existing state-of-the-art methods. The source code1 is provided for reproducibility. 1 Introduction With the rising popularity of conversational digital assistants, it is imperative for dialogue systems to integrate new services while sustaining proficiency in prior tasks seamlessly. Traditional research, often conducted within specific domains offline, falls short in adaptability to new scenarios (Ni et al., 2023). Retraining pre-trained language models (PLMs) from scratch is both challenging and resource-intensive (Liu et al., 2023), highlighting the necessity for efficient continual learning (CL) approaches in dialogue systems (Ke and Liu, 2022). Dialogue state tracking (DST), crucial for taskoriented dialogue systems, dynamically updates ∗Corresponding author. 1https://github.com/WoodScene/TaSL Figure 1: Conceptual illustration of TaSL. By identifying task-relevant areas across both previously accumulated and current tasks, we can consolidate the taskspecific and task-shared parameters to facilitate efficient knowledge transfer and mitigate forgetting. (domain, slot, value) triplets to capture user intentions precisely. The urgent demand for advancing DST models to accommodate emerging services has catalyzed the development of the Continual DST task (Cho et al., 2023). An effective Continual DST system must address the issue of catastrophic forgetting (McCloskey and Cohen, 1989), where a model’s proficiency in old tasks diminishes after learning new ones. It should also promote knowledge transfer (KT) (Ke et al., 2021) across domains2 to enhance end-task performances. Knowledge transfer includes forward transfer, which improves new task performance using knowledge from previous tasks, and backward transfer, which enhances performance on previous tasks after learning a new relevant task. 
Striking a balance between retaining previous knowledge and excelling in new tasks is vital for success. However, current Continual DST methods (Madotto et al., 2020; Liu et al., 2021; Cho et al., 2023; Feng et al., 2024) mainly focus on mitigating forgetting through memory replay or regularization, overlooking the advantages of KT that can be derived from the inherent correlations between different DST domains. Task correlation in Continual DST is quite ev2In this work, different domains in Continual DST are equivalent to different tasks in CL. 1266 ident. For instance, domains like “Hotel” and “Restaurant” share semantically similar slots, such as “area” and “bookday”, highlighting the need for models to identify and handle common information types. The similarity in these domain-shared slots is crucial for enabling KT. However, learning domain-specific slots like “food” for “Restaurant” could introduce unique information that disrupts the retention of previously acquired knowledge, leading to catastrophic forgetting. To address these challenges, we introduce Task Skill Localization and Consolidation (TaSL), a framework designed to improve KT between tasks without relying on memory replay. This is achieved by identifying and consolidating the importance distribution of model parameters across tasks. TaSL initially employs a group-wise importanceaware skill localization technique that utilizes gradient trajectories to pinpoint tiny regions in the model that store crucial knowledge for the current task. By comparing the importance distribution with those of previous tasks, we can differentiate between task-specific and task-shared parameters, as illustrated in Figure 1. Our innovative skill consolidation phase then categorically integrates weights from previous tasks with the current one, enabling effective KT while minimizing forgetting. In detail, the importance-aware skill localization method employs a new group-wise metric to compute importance scores, effectively quantifying the significance of each “skill unit”3 for the current task. Our approach, focusing on parameter space rather than dataset-driven categorization of domain-shared and domain-specific slots, offers a more robust solution that accurately identifies taskspecific and task-shared knowledge, overcoming inaccuracies caused by dataset noise. Our skill consolidation stage, then based on a fine-grained model averaging strategy, effectively manages different types of knowledge. We enable forward KT to new tasks by starting with a model initialized with weights from previously fine-tuned tasks, thus using past knowledge to improve learning for new tasks without restrictions. For backward KT, we merge knowledge from both current and past accumulated tasks into localized task-shared skill units, thereby enhancing their capability. To prevent catastrophic forgetting, we 3The definition of skill units is model-specific. For example, the Query matrix within the self-attention layer may be regarded as a skill unit. For detailed definitions, please refer to Appendix B. consolidate the integrity of skill units containing previous task-specific knowledge, ensuring they remain unaffected by new task learning. Through extensive experiments on different parameter-level backbones (from 60M to 7B), TaSL exhibits superior performance in mitigating forgetting and showcases remarkable capabilities for KT, significantly outperforming state-of-the-art methods. 
Our main contributions include: • We propose a novel task skill localization and consolidation (TaSL) framework for CL. • We develop new group-wise skill localization and fine-grained skill consolidation techniques. • Extensive evaluation on continual DST tasks shows TaSL effectively enables knowledge transfer, resulting in a 3.1% absolute increase in Avg. JGA and an 8.8% absolute boost in BWT metrics compared to previous SOTA methods. 2 Related Work 2.1 Continual Dialogue State Tracking Continual Learning (CL) in task-oriented dialogue systems focuses on perpetually integrating knowledge from data streams for future application. Three kinds of CL methods have been developed. Architecture-based methods propose dynamically adding model weights when learning new data (Geng et al., 2021; Lu et al., 2021b; Yang et al., 2023). Replay-based methods store and replay some training samples from previous tasks (Hou et al., 2019; Lu et al., 2021a; Xu et al., 2023a). Regularization-based methods employ additional loss functions to solidify new knowledge (Li and Hoiem, 2017; Xu et al., 2023b). In the realm of Continual DST, pioneering efforts by Madotto et al. (2020) and Liu et al. (2021) have leveraged these CL strategies to set benchmark performance using PLMs. The DST-EGQA approach by Cho et al. (2023) reformulates the DST task to an example-guided question-answering task, aiming to align distribution shifts across different domains to mitigate forgetting. However, these methods overlook DST task correlations that could enhance knowledge transfer. The recent Continual Prompt Tuning (CPT) method by Zhu et al. (2022) attempts knowledge transfer via domain-specific soft prompts but depends on inefficient memory replay and extensive retraining. This dataset-driven approach is inefficient and lacks robustness. Our TaSL innovates by distinguishing between domain-specific and domain-shared knowledge 1267 within the parameter space, then leveraging the skill consolidation process for effective knowledge transfer and forgetting mitigation. 2.2 Task Skill Localization Research indicates that model parameters contribute unevenly to performance (Michel et al., 2019). Panigrahi et al. (2023) introduced the concept of “skill localization” to identify crucial parameters within PLMs, suggesting that fine-tuning critical parameters nearly matches the effect of full fine-tuning. However, their method requires additional time for identifying and retraining key parameters post-fine-tuning, lowering efficiency. Drawing inspiration from the pruning community, previous studies have used gradient-based metrics to identify important parameters during fine-tuning. Sensitivity-based scoring (Sanh et al., 2020; Zhang et al., 2022b) assesses the impact on training loss and sensitivity smoothing, as applied by Zhang et al. (2022a), eliminates unnecessary parameters for more efficient fine-tuning. However, these approaches, focusing on individual parameter importance, often lead to element-wise pruning with huge computational and storage burdens (Feng et al., 2023a). Based on these advances, we introduce a new importance-aware skill localization method, for the first time, that distinguishes between task-specific and shared parameters to mitigate forgetting in CL. 3 Proposed Method: TaSL Problem Formulation In continual DST, we aim to train a model f : X × T →Y across a sequence of dialogue domains T1, . . . , TK. Each dialogue domain has its dataset Dk for Tk. This model predicts the target y based on input x and task Tk ∈T . 
The notation fk refers to the model after training on task Tk, while ˆfk denotes the model after averaging for ˆfk−1 and fk. Within a given task Tk, a dialogue with M turns of interaction between the system and the user is denoted as XM = {(A1, U1) , (A2, U2) . . . , (AM, UM)}, where A and U represent the system’s response and the user’s input, respectively. Each task Tk is associated with a predefined slot set4 S = {S1, . . . , SJ}, where J is the total number of slots. The objective of DST is 4To specify the domain to which a slot belongs, a slot is defined as the concatenation of the specific domain and the slot name, e.g., “<restaurant-area>”. to predict the dialogue state Bm based on the dialogue context Xm. The dialogue state is a collection of (slot, value) pairs, expressed as Bm = {(S1, V m 1 ) , . . . , (SJ, V m J )}, where V m J is the value for slot SJ at turn m. DST involves training a model f : Xm ⊕Sj →V m j , with ⊕denotes simple text concatenation. Overview TaSL includes two key components: (i) Skill Localization, utilizing a new group-wise importance metric to accurately identify the importance distribution of parameters across tasks, and (ii) Skill Consolidation, which employs a novel fine-grained model averaging strategy to integrate model weights from both current and past tasks for effective knowledge transfer. Figure 2 provides a comprehensive overview of TaSL, with the following subsections detailing each component. 3.1 Importance-aware Skill Localization To address the substantial computational and storage demands imposed by previous parameter-level importance calculation methods (Konishi et al., 2023), we propose a new group-wise metric for evaluating the importance of each skill unit u: I(u) = 1 d1 × d2 d1 X i=1 d2 X j=1 s(wij) (1) where wij denotes the trainable parameters, and d1 × d2 represents the total parameter count in a skill unit u. I(u) measures the collective importance of all parameters within each skill unit, where higher values signify increased importance. The function s(·) is a designated importance function for individual parameters, defined as the magnitude of the gradient-weight product: I (wij) = wij∇wijL (2) This approximates the loss change when a parameter is zeroed out. If removing a parameter has a significant influence, then the model is sensitive to it, and we should retain it (Liang et al., 2021). However, sensitivity in Eq. (2) may not reliably indicate importance (Zhang et al., 2022b). This metric, calculated from a sampled mini-batch, suffers from variability due to stochastic sampling and training dynamics, introducing large uncertainty in estimating sensitivity. To mitigate this, we apply sensitivity smoothing and uncertainty quantification (Zhang et al., 2022a): ¯I(t) (wij) = α1 ¯I(t−1) (wij) + (1 −α1) I(t) (wij) (3) 1268 Figure 2: Overview of TaSL. Step 1: We compute the importance scores of skill units for the current task Tk using our importance-aware skill localization method during fine-tuning. Step 2: Based on a fine-grained model averaging strategy, the skill consolidation method merges the model ˆfk−1, which accumulates knowledge of all previous tasks, with the current task’s model fk. The integration is guided by the importance distributions of skill units across various tasks. We then update the cumulative importance scores for all skill units until task Tk using Eq. (6). This process is designed to be iteratively repeated with the introduction of each subsequent task. 
\bar{U}^{(t)}(w_{ij}) = \alpha_2 \bar{U}^{(t-1)}(w_{ij}) + (1 - \alpha_2) \left| I^{(t)}(w_{ij}) - \bar{I}^{(t)}(w_{ij}) \right|    (4)

where α_1 and α_2 are smoothing factors and t is the iteration number. \bar{I}^{(t)} is the sensitivity smoothed by an exponential moving average, and \bar{U}^{(t)} is an uncertainty term quantified by the local variation between I^{(t)} and \bar{I}^{(t)}. Importance is then defined as the product of \bar{I}^{(t)} and \bar{U}^{(t)}, providing a more accurate importance assessment for s(·):

s^{(t)}(w_{ij}) = \bar{I}^{(t)}(w_{ij}) \cdot \bar{U}^{(t)}(w_{ij})    (5)

Calculating current-task importance scores. To compute the importance score of each skill unit for task T_k, we employ Eq. (1) during fine-tuning. The model f with n skill units is denoted as U = {u_1, . . . , u_n}, and their importance scores for task T_k are denoted by I(U_k) ∈ R^n. The detailed computation process is provided in Algorithm 1.

Computing accumulated importance scores for previous tasks. After computing the importance scores of each skill unit on the current task T_k, it is essential to compare them with the scores from all previously learned tasks in order to distinguish between task-specific and task-shared parameters. To avoid the inefficiency of storing scores for each past task, we aggregate the importance scores from all prior tasks into a cumulative score for tasks up to T_{k-1}. This allows the accumulated scores to be refined iteratively without separately saving past task scores. The skill units with these cumulative scores up to T_{k-1} are denoted as \hat{U}_{k-1}, calculated using:

I(\hat{U}_{k-1}) = \beta \, \mathrm{Norm}(I(\hat{U}_{k-2})) + (1 - \beta) \, \mathrm{Norm}(I(U_{k-1}))    (6)

where β ∈ [0, 1] and Norm(·) normalizes importance scores to the [0, 1] range, resolving discrepancies across models. The initial scores I(\hat{U}_1) are set equal to I(U_1). The importance distribution of skill units up to task T_{k-1} is then combined with that of the current task T_k to guide the skill consolidation process.

3.2 Skill Consolidation
After skill localization, the next phase consolidates this knowledge into a unified model. This requires a model averaging approach that weighs several factors to optimize task performance. Traditional coarse-grained model averaging assumes that all model weights are equally important for the training task (Kirkpatrick et al., 2017; Eddine Marouf et al., 2023), which can be written in the following iterative form:

\hat{f}_k = \lambda \hat{f}_{k-1} + (1 - \lambda) f_k    (7)

However, this method may overemphasize weights that are irrelevant to the current task, contaminating previously acquired task-specific knowledge and leading to forgetting. To counteract this, we introduce a fine-grained averaging strategy that focuses on skill units rather than the entire model. Our approach distinguishes between task-shared and task-specific skill units, applying weighted averaging to the parameters within each skill unit according to its category. We initially set importance thresholds δ using quantiles to select the top 20% of skill units based on importance scores.

Algorithm 1 Importance-aware Skill Localization
Input: Training dataset D_k for task T_k; total training iterations T; hyperparameters α_1, α_2.
for t = 1, . . . , T do
    Sample a mini-batch from D_k and compute the gradient ∇L;
    Compute the sensitivity I(w_{ij}) via Eq. (2);
    Update \bar{I}^{(t)} via Eq. (3) and \bar{U}^{(t)} via Eq. (4);
end for
Compute the importance score I(u_i^k) for each skill unit u_i^k by Eq. (1), for i = 1, . . . , n.
Output: f_k and importance scores I(U_k) for U_k.
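To make the localization procedure concrete, the following is a minimal PyTorch-style sketch of Algorithm 1 and the cumulative update of Eq. (6). It is an illustrative reconstruction, not the authors' released code: the class and function names are ours, each named 2-D weight matrix stands in for one skill unit (the paper's exact unit definitions are given in Appendix B), and min-max scaling is assumed for Norm(·).

```python
import torch


class SkillLocalizer:
    """Sketch of Algorithm 1: smoothed sensitivity (Eq. 3), uncertainty (Eq. 4)
    and their product (Eq. 5), tracked per parameter and finally averaged
    within each skill unit (Eq. 1)."""

    def __init__(self, alpha1: float = 0.85, alpha2: float = 0.85):
        self.alpha1, self.alpha2 = alpha1, alpha2
        self.smoothed, self.uncertainty = {}, {}

    @torch.no_grad()
    def step(self, model: torch.nn.Module) -> None:
        """Call once per training iteration, after loss.backward()."""
        for name, p in model.named_parameters():
            if p.grad is None or p.dim() != 2:
                continue  # in this sketch, every 2-D weight matrix is one skill unit
            sens = (p * p.grad).abs()                                        # Eq. (2)
            i_prev = self.smoothed.get(name, torch.zeros_like(sens))
            u_prev = self.uncertainty.get(name, torch.zeros_like(sens))
            i_t = self.alpha1 * i_prev + (1 - self.alpha1) * sens            # Eq. (3)
            u_t = self.alpha2 * u_prev + (1 - self.alpha2) * (sens - i_t).abs()  # Eq. (4)
            self.smoothed[name], self.uncertainty[name] = i_t, u_t

    def unit_scores(self) -> dict:
        """Group-wise importance I(u): Eq. (5) averaged over the unit as in Eq. (1)."""
        return {name: (self.smoothed[name] * self.uncertainty[name]).mean().item()
                for name in self.smoothed}


def accumulate(prev_scores: dict, curr_scores: dict, beta: float = 0.7) -> dict:
    """Eq. (6): cumulative importance up to the previous task.
    Min-max scaling is an assumption for Norm(.)."""
    def norm(scores):
        lo, hi = min(scores.values()), max(scores.values())
        return {k: (v - lo) / (hi - lo + 1e-12) for k, v in scores.items()}
    p, c = norm(prev_scores), norm(curr_scores)
    return {k: beta * p[k] + (1 - beta) * c[k] for k in p}
```

During fine-tuning on D_k, `step(model)` would be called once per iteration after the backward pass; at the end of training, `unit_scores()` yields I(U_k), and `accumulate` merges it into the running scores for \hat{U}_{k-1}.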
A skill unit u_i^k is deemed important (denoted (u_i^k)^+) if its score I(u_i^k) is above δ_k, and unimportant ((u_i^k)^-) otherwise. Our fine-grained averaging strategy customizes the parameter combination for each skill unit based on its importance under different tasks, as follows:

\hat{u}_i^k =
\begin{cases}
\gamma \hat{u}_i^{k-1} + (1-\gamma) u_i^k, & \text{if } (\hat{u}_i^{k-1})^+, (u_i^k)^+ \\
\hat{u}_i^{k-1}, & \text{if } (\hat{u}_i^{k-1})^+, (u_i^k)^- \\
u_i^k, & \text{if } (\hat{u}_i^{k-1})^-, (u_i^k)^+ \\
\frac{1}{2}(\hat{u}_i^{k-1} + u_i^k), & \text{if } (\hat{u}_i^{k-1})^-, (u_i^k)^-
\end{cases}    (8)

This strategy adjusts the parameters within each skill unit element-wise according to the unit's relevance to previous and current tasks, using the hyperparameter γ to control their relative influence. When a skill unit u_i is significant for both past and present tasks (case 1), we integrate the newly acquired knowledge into this task-shared skill unit to enable backward KT. If a skill unit u_i is crucial solely for previous tasks (case 2), we leave the knowledge within this previous-task-specific skill unit untouched, preventing the contamination of historical knowledge with task-irrelevant information. Conversely, for a skill unit important only to the current task (case 3), since the model f_k is trained from the initialization \hat{f}_{k-1}, the historically learned knowledge is already exploited to improve the current task, enabling forward KT; we therefore keep the parameters of this current-task-specific skill unit intact, preserving the knowledge essential for excelling at the new task. We adopt straightforward averaging for units not pertinent to either task (case 4).

Skill consolidation is performed before starting a new task in CL, and the averaged model is used to initialize the subsequent task. Only the importance scores of \hat{U}_{k-1} and U_k are retained between tasks, starting with \hat{U}_1 = U_1 estimated from f_1 on D_1. A detailed implementation of the TaSL algorithm can be found in the Appendix (Algorithm 2).

4 Experiments and Analysis

Dataset. We use the continual learning setup for DST proposed by Zhu et al. (2022), which uses 15 single domains from the Schema-Guided Dialog (SGD) dataset (Rastogi et al., 2020). We aggregate our results over the same five domain orders to make the most reliable comparisons with prior work; comparing results under the same orders is crucial, as results can vary significantly depending on the chosen domains and their order. More details about data statistics, task selection, and orderings can be found in Appendix A.

Evaluation Protocol. We evaluate DST performance using the widely adopted Joint Goal Accuracy (JGA) metric (Wu et al., 2019), which indicates the percentage of turns for which all slot values are correctly predicted. We denote by a_{j,i} the JGA on the test set of task T_i right after training on task T_j. The performance of Continual DST is assessed using three metrics from Zhu et al. (2022):

Avg. JGA = \frac{1}{K} \sum_{i=1}^{K} a_{K,i}    (9)

FWT = \frac{1}{K-1} \sum_{i=2}^{K} a_{i-1,i}    (10)

BWT = \frac{1}{K-1} \sum_{i=1}^{K-1} \left( a_{K,i} - a_{i,i} \right)    (11)

Avg. JGA is the average JGA across all tasks after training on the final task T_K. Forward Transfer (FWT) evaluates a model's generalization ability by measuring its averaged zero-shot performance on each task before training on it. Backward Transfer (BWT) assesses the impact that learning subsequent tasks has on a previously learned task; negative BWT indicates that the model has lost some previously acquired knowledge.
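As a quick reference, the three metrics can be computed directly from the K × K matrix of a_{j,i} scores; a small sketch is given below (the function name and the NumPy-based layout are our own, not part of the original evaluation code).

```python
import numpy as np


def continual_dst_metrics(a: np.ndarray) -> dict:
    """Compute Avg. JGA (Eq. 9), FWT (Eq. 10) and BWT (Eq. 11).

    `a` is a K x K matrix where a[j, i] is the JGA on the test set of task
    T_{i+1} right after training on task T_{j+1} (0-based indices).
    """
    K = a.shape[0]
    avg_jga = a[K - 1, :].mean()                                   # Eq. (9)
    fwt = np.mean([a[i - 1, i] for i in range(1, K)])              # Eq. (10)
    bwt = np.mean([a[K - 1, i] - a[i, i] for i in range(K - 1)])   # Eq. (11)
    return {"Avg. JGA": avg_jga, "FWT": fwt, "BWT": bwt}
```

For example, a 2 × 2 matrix [[90, 10], [70, 85]] yields Avg. JGA = 77.5, FWT = 10, and BWT = -20.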
Method                                   Memory-Free   Avg. JGA     FWT          BWT
Fine-tuning (Madotto et al., 2020)       ✓             44.1 ± 0.9   8.3 ± 1.0    -36.6 ± 3.9
EWC (Kirkpatrick et al., 2017)           ✓             47.9 ± 1.1   8.4 ± 0.9    -38.1 ± 4.1
AdapterCL (Madotto et al., 2020)         ✓             49.8 ± 1.7   --           --
DST-EGQA (Cho et al., 2023)              ✓             55.5 ± 3.5   23.6 ± 2.1   -19.1 ± 4.2
RoS (Feng et al., 2024)                  ✓             59.0 ± 3.9   25.5 ± 2.0   -17.9 ± 3.7
TaSL (ours)                              ✓             62.1 ± 2.0   26.6 ± 1.5   -9.1 ± 2.2
Replay (Madotto et al., 2020)            ✗             58.6 ± 3.5   10.9 ± 0.5   -3.2 ± 2.3
CPT (Zhu et al., 2022)                   ✗             61.2 ± 2.5   13.7 ± 0.8   0.5 ± 0.4
DST-EGQA (Cho et al., 2023)              ✗             68.9 ± 0.3   22.5 ± 1.8   -5.9 ± 1.9
RoS (Feng et al., 2024)                  ✗             72.1 ± 0.8   26.7 ± 2.0   -2.6 ± 1.5
CPT Multi-task (Zhu et al., 2022)        --            64.0 ± 1.9   --           --
DST-EGQA Multi-task (Cho et al., 2023)   --            74.2 ± 1.8   --           --
RoS Multi-task (Cho et al., 2023)        --            76.3 ± 0.3   --           --

Table 1: CL results of various methods, all utilizing the same T5-small backbone, on 15 different tasks from the SGD dataset. Means and standard deviations are reported across five domain permutations. The last rows provide the multi-tasking results, which serve as an upper bound. Our memory replay-free TaSL outperforms the previous best method, RoS, by achieving a 3.1% absolute improvement in Avg. JGA and an 8.8% absolute increase in BWT. Additionally, TaSL exceeds the performance of the majority of memory-replay methods and nearly matches the upper bound of the CPT multi-task method.

Baselines. We evaluate our method against the following Continual DST baselines:
Fine-tuning: continuously fine-tune the backbone model on new task data.
Replay: randomly save |M| samples from the training set of each previous task T_i in memory M_i and jointly train the model on the new task data D_k and the memory M_{<k}.
EWC: maintain a memory, but leverage it to compute the Fisher information matrix for regularization (Kirkpatrick et al., 2017).
AdapterCL: freeze the pre-trained model and independently train a residual Adapter (Houlsby et al., 2019) for each task (Madotto et al., 2020).
Continual Prompt Tuning (CPT) (Zhu et al., 2022): freeze the backbone model and continually train soft prompts with memory-guided knowledge transfer in both forward and backward directions.
DST-EGQA (Cho et al., 2023): reformulate DST as a QA task and mitigate forgetting with retrieval-augmented in-context learning.
RoS (Feng et al., 2024): use knowledge distillation to enhance the meta-reasoning ability of student models, thereby mitigating forgetting.

Training Details. We experiment with four backbones of different parameter scales: T5-small (Raffel et al., 2020), T5-base, Flan-T5-large (Chung et al., 2022), and LLaMA-7B (Touvron et al., 2023). For the T5-series models, we perform full fine-tuning over all parameters. For LLaMA-7B, we adopt a Parameter-Efficient Fine-Tuning technique, specifically Low-Rank Adaptation (LoRA) (Hu et al., 2021), to expedite training. For TaSL, we set the hyperparameters α_1 and α_2 in Eq. (3) and Eq. (4) to 0.85, and set β in Eq. (6) and γ in Eq. (8) to 0.7. The memory size per task |M| is kept at 50, in line with previous studies. Detailed training settings are provided in Appendix B.

In what follows, we compare TaSL with the baselines in Sec. 4.1 and present a comprehensive ablation study in Sec. 4.2. We then delve deeper into the reasons behind the success of our importance-aware skill localization (Sec. 4.3) and skill consolidation (Sec. 4.4) techniques and derive insightful findings from this analysis.

4.1 Main Results
The overall CL results of the different methods with the same T5-small backbone are summarized in Table 1.

TaSL demonstrates superior CL performance through effective knowledge transfer.
Unlike vanilla fine-tuning, which suffers from catastrophic forgetting, our approach yields a substantial improvement in Avg. JGA, increasing it from 44.1% to 62.1%, and shows marked gains in both FWT and BWT. TaSL not only exceeds the CPT method, which relies on memory replay, raising Avg. JGA from 61.2% to 62.1%, but also establishes a new SOTA across all metrics against the top memory-free baseline, RoS: we increase Avg. JGA from 59.0% to 62.1% (a 3.1% absolute improvement) and raise BWT from -17.9% to -9.1%, exceeding it by more than 8% and displaying robust backward KT capabilities. Additionally, TaSL obtains the highest FWT scores across all baselines under various conditions. Remarkably, without relying on memory replay, our method nearly matches the performance of DST-EGQA with memory replay, particularly in BWT, where the gap is only 3.2% (-9.1% vs. -5.9%). Moreover, our Avg. JGA score closely approaches the upper bound set by the CPT multi-task strategy at 64%, underscoring the effectiveness of our fine-grained model averaging strategy, which carefully accounts for domain-shared and domain-specific parameters.

Figure 3: Performance of TaSL w/ different backbones.

TaSL consistently demonstrates superior performance across various backbones. To further substantiate the effectiveness of our framework, we conducted experiments with backbones of various parameter scales, illustrated in Figure 3, which highlights performance gains with increasing model size. Notably, TaSL achieves a breakthrough by recording a positive BWT score on LLaMA-7B without employing any memory replay techniques. Across different backbones, our method consistently outperforms traditional approaches. For instance, on Flan-T5-large, TaSL significantly boosts Avg. JGA from 56% to 74%, and also achieves the most substantial improvements in both FWT and BWT, rising from 33% to 43% and improving from -28% to -13%, respectively. These results further validate the generality of our proposed framework.

Fine-grained model averaging can effectively mitigate catastrophic forgetting. To rigorously evaluate our model's effectiveness in counteracting forgetting, we analyzed its performance on the initial task after training on subsequent tasks.

Figure 4: Performance trajectory of Task 1 during the Continual DST learning process.

Figure 4 illustrates that our method results in a notably slower forgetting rate, manifesting as a nearly 8% average decrease in performance after training on the last task. This contrasts sharply with the vanilla backbones, which display a substantial performance reduction of 20% on average, underscoring our method's superior capacity to mitigate forgetting. Moreover, an intriguing observation is the improvement in performance on Task 1 after training on Task 3, highlighting our model's effective backward KT ability.

4.2 Ablation Study
In this section, we assess the effects of importance-aware skill localization and fine-grained skill consolidation; the results are discussed below. For hyperparameter sensitivity, see Appendix D.

Various importance scoring methods for skill localization. Our method calculates importance scores by Eq. (1). As shown in Table 2, we explore alternative importance scoring approaches: (i) modifying s(·) in Eq. (1) to include only the sensitivity, as in Eq. (2); and (ii) using the absolute gradient, |∇_{w_ij} L|, for importance assessment (Michel et al., 2019).
The results indicate that using moving averages for importance scoring outperforms the alternatives: the other two variants lead to maximum performance decreases of 3.26%, 2.24%, and 4.11% on the three metrics, respectively. This highlights the value of accurate skill localization for improving model performance.

Method                   Avg. JGA   FWT     BWT
vanilla T5-small         44.10      8.32    -36.63
s(·) = I(·)              60.48      24.39   -10.81
s(·) = |∇_{w_ij} L|      58.82      24.80   -13.22
TaSL (ours)              62.08      26.63   -9.11

Table 2: Ablation study. Evaluating the impact of importance scoring variations on skill localization.

Fine-grained vs. coarse-grained model averaging for skill consolidation. We compared our fine-grained averaging strategy against two coarse-grained strategies: (i) Weight-Ensemble, which averages weights uniformly as per Eq. (7), and (ii) Exponential Moving Average (EMA) (Szegedy et al., 2016), which applies a running average of the parameters at each fine-tuning iteration. Results are detailed in Table 3. Weight-Ensemble significantly improves upon the vanilla model, highlighting the benefits of coarse-grained averaging for Continual DST. EMA generally surpasses Weight-Ensemble but falls short of our fine-grained approach due to its overuse of averaging: frequent parameter adjustments within the same task can lead to less optimal outcomes. Our method averages weights only after each task, which also enhances computational efficiency.

Method                   Avg. JGA   FWT     BWT
vanilla T5-small         44.10      8.32    -36.63
Weight-Ens.              53.23      21.73   -18.28
EMA                      52.56      22.27   -16.80
Fine-grained (ours)      62.08      26.63   -9.11

Table 3: Ablation study. Comparing coarse- and fine-grained model averaging methods on skill consolidation.

4.3 Visualization of Skill Units
We visualize the distribution of importance scores for the skill units across tasks and models, as shown in Figure 5, leading to several critical insights:
• There is a noticeable variation in the importance of skill units for the same task, with important skill units in LLaMA-7B making up about 20% of all trainable LoRA parameters.
• The distribution of important skill units is task-dependent, indicating both task-shared and task-specific parameters, which confirms TaSL's validity.
• Lower layers, nearer to the input, are more crucial for the DST task than upper layers.
• Within each layer, the importance of the attention layer, especially the V (value) and O (output) matrices, consistently exceeds that of the Q (query) and K (key) matrices and the MLP layer.

Figure 5: Visualization of importance scores for skill units across different backbone models and tasks.

Seq. length      Method           Task 1   Task 2   Task 3
2 (LLaMA-7B)     Upper bound      92.3     86.1     --
                 Fine-tuning      73.1     86.1     --
                 Coarse-grained   82.3     71.3     --
                 TaSL (ours)      86.9     83.3     --
3 (T5-small)     Upper bound      89.2     81.5     64.4
                 Fine-tuning      53.1     62.0     64.4
                 Coarse-grained   58.5     64.6     43.7
                 TaSL (ours)      80.0     74.1     63.8

Table 4: Analysis of knowledge balancing across old and new tasks. All results are reported in JGA (%).

4.4 Improved Balance in Knowledge Transfer
This section evaluates the effectiveness of our fine-grained model averaging method in achieving the best balance between preserving previous knowledge and excelling at new tasks in CL, compared to coarse-grained approaches. Table 4 shows that, for a sequence of two tasks, vanilla fine-tuning on LLaMA-7B results in a notable decline in performance on the historical Task 1 (from 92.3% to 73.1%), indicating considerable forgetting. Coarse-grained averaging mitigates this to an extent, reducing the decline to 82.3%, but hurts new-task performance, which drops to 71.3%.
Our method effectively lessens forgetting (improving to 86.9%) while also maintaining better performance on Task 2, with less than a 3% reduction. As tasks increase to three, our method more effectively compensates for losses on new tasks with gains on historical tasks. Vanilla fine-tuning on T5small results in a combined 55.6% drop in Tasks 1 and 2, while our approach only shows a 16.6% decrease and the loss on task 3 is less than 1% due to TaSL’s effective KT ability. 1273 5 Conclusion In this paper, we introduce a novel TaSL method to enhance Continual DST performance by facilitating effective knowledge transfer across tasks. Our approach leverages an innovative importanceaware skill localization technique and a skill consolidation strategy to differentiate between domainspecific and domain-shared parameters, mitigating forgetting. Comprehensive experiments showcase our method’s exceptional ability to balance preserving past knowledge and excelling in new tasks. Limitations TaSL excels at precisely distinguishing between task-specific and shared parameters through importance-aware skill localization. However, the current importance scoring criteria, relying on firstorder gradients, may lack precision. The Hessian matrix often captures the actual importance, but computing these second-order gradients incurs significant computational costs. Therefore, future improvements should focus on developing more accurate and efficient skill localization methods. In addition, in skill consolidation, the challenge lies in better integrating model parameters. Under our fine-grained model averaging strategy (Eq. (8)), selecting different weighted combinations could impact the overall performance. Although we investigated the model’s sensitivity to various hyperparameter settings (Appendix D), with results showing stable and consistently strong performance across different combinations, there is still potential for further improvement. For instance, developing adaptive methods to select optimal weights or devising more efficient model averaging strategies could further enhance model performance. Acknowledgments We thank the anonymous reviewers for their valuable feedback. This research was partially supported by the grant of HK ITF ITS/359/21FP. References Hyundong Cho, Andrea Madotto, Zhaojiang Lin, Khyathi Raghavi Chandu, Satwik Kottur, Jing Xu, Jonathan May, and Chinnadhurai Sankar. 2023. Continual dialogue state tracking via example-guided question answering. arXiv preprint arXiv:2305.13721. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416. Imad Eddine Marouf, Subhankar Roy, Enzo Tartaglione, and Stéphane Lathuilière. 2023. Weighted ensemble models are strong continual learners. arXiv e-prints, pages arXiv–2312. Yujie Feng, Bo Liu, Xiaoyu Dong, Zexin Lu, Li-Ming Zhan, Xiao-Ming Wu, and Albert YS Lam. 2024. Continual dialogue state tracking via reason-of-select distillation. In Findings of the Association for Computational Linguistics: ACL 2024. Yujie Feng, Zexin Lu, Bo Liu, Liming Zhan, and XiaoMing Wu. 2023a. Towards llm-driven dialogue state tracking. arXiv preprint arXiv:2310.14970. Yujie Feng, Jiangtao Wang, Yasha Wang, and Xu Chu. 2022. Spatial-attention and demographic-augmented generative adversarial imputation network for population health data reconstruction. IEEE Transactions on Big Data. 
Yujie Feng, Jiangtao Wang, Yasha Wang, and Xu Chu. 2023b. Towards sustainable compressive population health: a gan-based year-by-year imputation method. ACM Transactions on Computing for Healthcare, 4(1):1–18. Yujie Feng, Jiangtao Wang, Yasha Wang, and Sumi Helal. 2021. Completing missing prevalence rates for multiple chronic diseases by jointly leveraging both intra-and inter-disease population health data correlations. In Proceedings of the Web Conference 2021, pages 183–193. Binzong Geng, Fajie Yuan, Qiancheng Xu, Ying Shen, Ruifeng Xu, and Min Yang. 2021. Continual learning for task-oriented dialogue system with iterative network pruning, expanding and masking. arXiv preprint arXiv:2107.08173. Saihui Hou, Xinyu Pan, Chen Change Loy, Zilei Wang, and Dahua Lin. 2019. Learning a unified classifier incrementally via rebalancing. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 831–839. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In International Conference on Machine Learning, pages 2790–2799. PMLR. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685. 1274 Zixuan Ke and Bing Liu. 2022. Continual learning of natural language processing tasks: A survey. arXiv preprint arXiv:2211.12701. Zixuan Ke, Bing Liu, Nianzu Ma, Hu Xu, and Lei Shu. 2021. Achieving forgetting prevention and knowledge transfer in continual learning. Advances in Neural Information Processing Systems, 34:22443– 22456. James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 114(13):3521–3526. Tatsuya Konishi, Mori Kurokawa, Chihiro Ono, Zixuan Ke, Gyuhak Kim, and Bing Liu. 2023. ParameterLevel Soft-Masking for Continual Learning. In Proc. of ICML. Zhizhong Li and Derek Hoiem. 2017. Learning without forgetting. IEEE transactions on pattern analysis and machine intelligence, 40(12):2935–2947. Chen Liang, Simiao Zuo, Minshuo Chen, Haoming Jiang, Xiaodong Liu, Pengcheng He, Tuo Zhao, and Weizhu Chen. 2021. Super tickets in pre-trained language models: From model compression to improving generalization. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6524–6538. Bo Liu, Liming Zhan, Zexin Lu, Yujie Feng, Lei Xue, and Xiao-Ming Wu. 2023. How good are large language models at out-of-distribution detection? arXiv preprint arXiv:2308.10261. Qingbin Liu, Pengfei Cao, Cao Liu, Jiansong Chen, Xunliang Cai, Fan Yang, Shizhu He, Kang Liu, and Jun Zhao. 2021. Domain-lifelong learning for dialogue state tracking via knowledge preservation networks. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2301–2311. Zexin Lu, Keyang Ding, Yuji Zhang, Jing Li, Baolin Peng, and Lemao Liu. 2021a. Engage the public: Poll question generation for social media posts. 
In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 29–40. Zexin Lu, Jing Li, Yingyi Zhang, and Haisong Zhang. 2021b. Getting your conversation on track: Estimation of residual life for conversations. In 2021 IEEE Spoken Language Technology Workshop (SLT), pages 1036–1043. IEEE. Andrea Madotto, Zhaojiang Lin, Zhenpeng Zhou, Seungwhan Moon, Paul Crook, Bing Liu, Zhou Yu, Eunjoon Cho, and Zhiguang Wang. 2020. Continual learning in task-oriented dialogue systems. arXiv preprint arXiv:2012.15504. Michael McCloskey and Neal J Cohen. 1989. Catastrophic interference in connectionist networks: The sequential learning problem. In Psychology of learning and motivation, volume 24, pages 109–165. Elsevier. Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? Advances in neural information processing systems, 32. Jinjie Ni, Tom Young, Vlad Pandelea, Fuzhao Xue, and Erik Cambria. 2023. Recent advances in deep learning based dialogue systems: A systematic survey. Artificial intelligence review, 56(4):3055–3155. Abhishek Panigrahi, Nikunj Saunshi, Haoyu Zhao, and Sanjeev Arora. 2023. Task-specific skill localization in fine-tuned language models. arXiv preprint arXiv:2302.06600. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485–5551. Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2020. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 8689–8696. Victor Sanh, Thomas Wolf, and Alexander M Rush. 2020. Movement pruning: adaptive sparsity by finetuning. In Proceedings of the 34th International Conference on Neural Information Processing Systems, pages 20378–20389. Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2818–2826. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. Chien-Sheng Wu, Andrea Madotto, Ehsan Hosseini-Asl, Caiming Xiong, Richard Socher, and Pascale Fung. 2019. Transferable multi-domain state generator for task-oriented dialogue systems. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 808–819. 1275 Yongxin Xu, Xu Chu, Kai Yang, Zhiyuan Wang, Peinie Zou, Hongxin Ding, Junfeng Zhao, Yasha Wang, and Bing Xie. 2023a. Seqcare: Sequential training with external medical knowledge graph for diagnosis prediction in healthcare data. In Proceedings of the ACM Web Conference 2023, pages 2819–2830. Yongxin Xu, Kai Yang, Chaohe Zhang, Peinie Zou, Zhiyuan Wang, Hongxin Ding, Junfeng Zhao, Yasha Wang, and Bing Xie. 2023b. Vecocare: visit sequences-clinical notes joint learning for diagnosis prediction in healthcare data. 
In Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, IJCAI-23, pages 4921–4929. Kai Yang, Yongxin Xu, Peinie Zou, Hongxin Ding, Junfeng Zhao, Yasha Wang, and Bing Xie. 2023. Kerprint: local-global knowledge graph enhanced diagnosis prediction for retrospective and prospective interpretations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 5357–5365. Qingru Zhang, Minshuo Chen, Alexander Bukharin, Pengcheng He, Yu Cheng, Weizhu Chen, and Tuo Zhao. 2022a. Adaptive budget allocation for parameter-efficient fine-tuning. In The Eleventh International Conference on Learning Representations. Qingru Zhang, Simiao Zuo, Chen Liang, Alexander Bukharin, Pengcheng He, Weizhu Chen, and Tuo Zhao. 2022b. Platon: Pruning large transformer models with upper confidence bound of weight importance. In International Conference on Machine Learning, pages 26809–26823. PMLR. Qi Zhu, Bing Li, Fei Mi, Xiaoyan Zhu, and Minlie Huang. 2022. Continual prompt tuning for dialog state tracking. arXiv preprint arXiv:2203.06654. A Dataset Statistics Here, we offer a detailed description of the dataset used in Continual DST. Table 5 displays the number of slots for each of the 15 services used in our experiments and the count of samples in the training, validation, and test sets. Table 6 illustrates the training sequence for these 15 tasks in the context of continual learning. B Implementation Details Definition of Skill Units In our Importanceaware Skill Localization technique, to circumvent the high computational resource demands of parameter-level localization, we introduced a novel group-wise metric, redefining the skill unit u as the basic element for computing importance scores. However, the division of skill units varies across different backbone models due to variations in parameter quantities and architectural designs (e.g., decoder-only and encoder-decoder architectures). The specific distinctions are as follows: • Encoder-decoder architecture backbones. For these backbones (Feng et al., 2021), such as T5small, T5-base, and T5-large, we perform fullparameter fine-tuning during training. For organizational simplicity, we divided the models based on the different functionalities within the transformer blocks, as depicted in Table 7. For instance, in T5-small, with both encoder and decoder comprising 6 transformer blocks each, the total comes to 131 skill units for T5-small, 257 skill units for T5-base, and 558 for T5-large. • Decoder-only architecture backbone. The LLaMA-7B model we utilized falls into this category (Feng et al., 2023b). Leveraging Parameterefficient Fine-tuning Techniques (PEFT) to expedite training, we treat the matrices A and B in LoRA as individual basic skill units. And we add LoRA adapters to attention layers in LLaMA. Each layer, as detailed in Table 8, comprises 8 distinct skill units. Given that LLaMA-7B consists of 32 layers, it is thereby segmented into 256 skill units in total. Model training details For different backbones, we utilized the following hyperparameters: • T5-small (60M), T5-base (220M), and FLANT5-lare (780M): Training was conducted with a learning rate of 3e-4, batch size of 8, maximum input length of 512, maximum target length of 128, and 5 epochs. • LLaMA (7B): Utilizing LORA for efficiency, with a learning rate of 3e-4, batch size of 128, a cutoff length of 512, and 5 epochs. Lora settings were r = 8, alpha = 16, dropout = 0.05, targeting modules [[q_proj,k_proj,v_proj,o_proj]]. 
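For reference, the LoRA setup described above corresponds roughly to the following PEFT configuration; this is an illustrative sketch rather than the authors' exact training script, and the checkpoint path is a placeholder.

```python
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM

# Placeholder checkpoint name; substitute the LLaMA-7B weights you have access to.
base = AutoModelForCausalLM.from_pretrained("path/to/llama-7b")

lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                    # rank reported above
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # the LoRA A/B matrices are the trainable skill units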
For testing, settings included temperature = 0.02, top_p = 0, top_k = 1, num_beams = 1, max new tokens = 128. Experiments are carried out using 2 NVIDIA A100 with 80GB memory. Results are averaged across five different task orders and include the standard error in the tables and plots provided (Feng et al., 2022). C Additional Results To further validate TaSL’s effectiveness in more complex continual learning scenarios, we have conducted additional experiments to verify its performance on transitioning between different datasets, 1276 Task ID Service # Slots # Dialogs # Samples Avg. tokens Train Dev Test Train Dev Test Context Query 30 services_4 5 86 13 25 680 97 208 154 49 31 flights_1 10 560 80 160 4680 667 1379 168 10 32 services_3 5 131 19 38 959 143 290 143 54 33 flights_3 8 65 10 19 420 75 116 133 79 34 trains_1 7 58 9 17 415 67 117 131 76 35 homes_2 8 62 9 18 424 56 139 140 89 36 rentalcars_2 6 77 11 23 631 91 185 157 61 37 restaurants_1 9 256 37 74 2098 297 581 153 10 38 music_1 6 68 10 20 468 73 142 118 61 39 hotels_4 7 80 12 23 559 99 141 134 72 40 media_2 5 32 4 10 215 29 71 112 59 41 hotels_3 6 90 13 26 737 100 193 157 64 42 rentalcars_3 7 44 7 13 332 55 99 148 72 43 hotels_1 7 99 14 29 868 105 250 161 71 44 homes_1 7 244 35 70 1829 282 540 159 81 Table 5: Statistics of the 15 services we used in experiments. Task order Tasks’ IDs in order Order1 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 Order2 39 33 36 42 40 37 38 34 32 35 41 31 30 44 43 Order3 30 41 38 31 43 39 40 33 34 44 37 36 32 35 42 Order4 43 40 44 38 30 37 31 39 32 35 41 34 33 36 42 Order5 30 33 44 31 38 32 42 40 37 43 36 39 41 35 34 Table 6: Five task orders of all our 15 tasks experiments. specifically from SGD to MultiWoz. This involved including another 5 distinct domains from the MultiWoz 2.1 dataset, in addition to the 15 domains in SGD, resulting in a total of 20 domains (i.e., tasks). Table 9 presents the performance of various methods, utilizing T5-small as the backbone. The findings align with the observations from Table 1, although there is a noticeable decrease in the efficacy of all evaluated methods, due to the significant discrepancies in data distribution across the datasets examined. As a memory-free method, our TaSL still significantly outperforms the strongest baseline (i.e., the memory-free version of DSTEGQA) and even surpasses some memory-based methods like “Replay”. These findings demonstrate TaSL’s robustness and effectiveness, showcasing its capability to handle complex continual learning scenarios. D Sensitivity Analysis for Hyperparameters The proposed framework incorporates three key hyperparameters, including the α for computing importance scores in Equations (3) and (4), the β for calculating cumulative importance scores in Equation (6), and the γ for performing weighted averaging within skill units as outlined in Equation (8). Our analysis aims to assess the impact of varying these hyperparameters on our method’s performance, testing on the T5-small backbone model. As evidenced in Table 10, we determine that the optimal setting for α is 0.55. An α value too low results in a performance decline, indicating that the calculated importance scores are not sufficiently accurate. Furthermore, as depicted in the results of Tables 11 and 12, we also find that β and γ values within a normal range do not significantly affect performance. 
However, excessively high or low values for β and γ may skew the model towards favoring either past or current task knowledge, thereby disrupting the desired balance. 1277 Block Type Skill Unit Name Encoder SelfAttention.q.weight SelfAttention.k.weight SelfAttention.v.weight SelfAttention.0.weight layer.0.layer_norm.weight DenseReluDense.wi.weight DenseReluDense.wo.weight layer.1.layer_norm.weight Decoder SelfAttention.q.weight SelfAttention.k.weight SelfAttention.v.weight SelfAttention.0.weight SelfAttention.relative_attention_bias.weight layer.0.layer_norm.weight layer.1.EncDecAttention.q.weight layer.1.EncDecAttention.k.weight layer.1.EncDecAttention.v.weight layer.1.EncDecAttention.o.weight 1.layer_norm.weight Table 7: Definition of skill units for encoder-decoder architecture backbones at each transformer block. Block Type Skill Unit Name Decoder self_attn.q_proj.lora_A.default.weight self_attn.q_proj.lora_B.default.weight self_attn.k_proj.lora_A.default.weight self_attn.k_proj.lora_B.default.weight self_attn.v_proj.lora_A.default.weight self_attn.v_proj.lora_B.default.weight self_attn.o_proj.lora_A.default.weight self_attn.o_proj.lora_B.default.weight Table 8: Definition of skill units for decoder-only architecture backbones at each transformer block. Nonetheless, the model’s performance remains relatively stable across most conditions, indicating a low sensitivity to hyperparameter variations. About the selection of threshold for important skill units, the Table 13 below shows the model’s performance with varying thresholds δ on T5small. It can be seen that setting a high threshold (50%) reduces model effectiveness by categorizing less significant skill units as important, which can contaminate historical knowledge and lead to forgetting. Conversely, a 1% threshold still maintains strong performance owing to our effective skill consolidation approach, which effectively preserves task-specific knowledge and prevents forgetting. Considering that the heatmap in Figure 5 displays approximately 20% of the area in darker shades, signifying greater importance, we opted for a 20% threshold to differentiate between important and unimportant skill units. Method Memory-Free Avg. JGA FWT BWT Fine-tuning ✓ 20.1 6.6 -53.1 DST-EGQA 40.5 18.4 -37.1 TaSL (ours) 49.9 22.0 -23.8 Replay ✗ 47.2 7.3 -16.0 DST-EGQA 51.2 18.5 -21.9 Table 9: Cross-dataset performance of TaSL. α1,α2 avg. JGA FWT BWT fine-tuning 41.6 9.6 -36.7 0.15 61.8 29.7 -10.7 0.35 61.2 30.1 -12.3 0.55 62.8 28.6 -9.5 0.85 60.7 28.9 -10.3 0.95 61.7 30.0 -10.6 Table 10: Performance comparisons of TaSL (using T5small as the backbone) equipped with different α at task order 1. E Detailed Algorithm In this section, we provide the detailed implementation of TaSL algorithm (see Algorithm 2). 1278 β avg. JGA FWT BWT fine-tuning 41.6 9.6 -36.7 0.1 61.8 29.6 -11.7 0.3 61.5 28.4 -11.4 0.5 62.3 29.5 -10.2 0.7 60.7 28.9 -10.3 0.9 58.2 30.2 -13.0 Table 11: Performance comparisons of TaSL (using T5small as the backbone) equipped with different β at task order 1. γ avg. JGA FWT BWT fine-tuning 41.6 9.6 -36.7 0.1 60.1 28.2 -12.1 0.3 61.7 28.8 -11.2 0.5 63.0 28.4 -10.5 0.7 60.7 28.9 -10.3 0.9 61.7 27.4 -11.5 Table 12: Performance comparisons of TaSL (using T5small as the backbone) equipped with different γ at task order 1. Importance Thresholds δ Avg. 
JGA FWT BWT 1% 62.0 26.3 -9.4 5% 63.4 25.8 -10.1 10% 62.2 26.2 -9.5 20% 62.1 26.6 -9.1 30% 62.7 26.5 -10.0 40% 60.9 24.6 -10.2 50% 54.8 23.4 -10.3 Table 13: Performance comparisons of TaSL (using T5small as the backbone) equipped with different δ. Algorithm 2 TaSL Input: Dataset Dk for task k = 1, . . . , K; initial pre-trained model f0; hyperparameters β, γ. 1: # sequential tasks. 2: for task k = 1, . . . , K do 3: Get fk and calculate Uk by Algorithm (1); 4: Calculate δk based on I(Uk); 5: if k = 1 then 6: # initialization at beginning task. 7: ˆf1 ←f1, ˆU1 ←U1, ˆδ1 ←δ1; 8: else 9: # fine-grained model averaging. 10: for skill unit i = 1, . . . , n do 11: Calculate ˆuk i by (8); 12: end for 13: Get the averaged model ˆfk based on ˆUk; 14: Calculate accumulated importance score I( ˆUk) according to (6); 15: Calculate ˆδk based on I( ˆUk). 16: end if 17: end for 1279
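To connect Algorithm 2 with the components above, the following is a compact Python sketch of the per-task loop, including the fine-grained averaging rule of Eq. (8). It is a simplified reconstruction under several assumptions: `finetune` and `localize` are placeholders for standard training and for Algorithm 1, each named parameter stands in for one skill unit, and the Norm(·) step of Eq. (6) is omitted.

```python
import copy
import numpy as np
import torch


def top_quantile_mask(scores: dict, q: float = 0.80) -> dict:
    """Mark the top (1 - q) fraction of skill units as important (threshold delta)."""
    threshold = np.quantile(list(scores.values()), q)
    return {name: score > threshold for name, score in scores.items()}


@torch.no_grad()
def consolidate(prev_model, curr_model, prev_imp, curr_imp, gamma: float = 0.7):
    """Fine-grained averaging of Eq. (8), applied per skill unit
    (here: per named parameter). prev_imp / curr_imp are boolean masks."""
    merged = copy.deepcopy(curr_model)
    prev_params = dict(prev_model.named_parameters())
    for name, p in merged.named_parameters():
        if name not in prev_imp or name not in curr_imp:
            continue
        p_prev = prev_params[name]
        if prev_imp[name] and curr_imp[name]:        # task-shared: backward transfer
            p.copy_(gamma * p_prev + (1 - gamma) * p)
        elif prev_imp[name]:                         # old-task-specific: keep history
            p.copy_(p_prev)
        elif curr_imp[name]:                         # new-task-specific: keep current
            pass
        else:                                        # irrelevant to both: plain average
            p.copy_(0.5 * (p_prev + p))
    return merged


def tasl_loop(model, tasks, finetune, localize, beta: float = 0.7, gamma: float = 0.7):
    """Sequential tasks as in Algorithm 2: fine-tune, localize skills,
    consolidate, and carry the merged model into the next task."""
    merged, cum_scores = None, None
    for dataset in tasks:
        start = merged if merged is not None else model
        curr = finetune(copy.deepcopy(start), dataset)   # f_k, initialised from the merged model
        scores = localize(curr, dataset)                 # I(U_k) via Algorithm 1
        if merged is None:                               # first task: merged model is f_1
            merged, cum_scores = curr, scores
        else:
            merged = consolidate(merged, curr,
                                 top_quantile_mask(cum_scores),
                                 top_quantile_mask(scores), gamma)
            cum_scores = {n: beta * cum_scores[n] + (1 - beta) * scores[n]
                          for n in scores}               # Eq. (6), Norm(.) omitted
    return merged
```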
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12777–12797 August 11-16, 2024 ©2024 Association for Computational Linguistics Generating Coherent Sequences of Visual Illustrations for Real-World Manual Tasks João Bordalo1 Vasco Ramos1 Rodrigo Valério1 Diogo Glória-Silva1 Yonatan Bitton2 Michal Yarom2 Idan Szpektor2 Joao Magalhaes1 1NOVA LINCS, NOVA School of Science and Technology, Portugal 2Google Research [email protected], [email protected] Abstract Multistep instructions, such as recipes and howto guides, greatly benefit from visual aids, such as a series of images that accompany the instruction steps. While Large Language Models (LLMs) have become adept at generating coherent textual steps, Large Vision and Language Models (LVLMs) are less capable of generating accompanying image sequences. The most challenging aspect is that each generated image needs to adhere to the relevant textual step instruction, as well as be visually consistent with earlier images in the sequence. To address this problem, we propose an approach for generating consistent image sequences, which integrates a Latent Diffusion Model (LDM) with an LLM to transform the sequence into a caption to maintain the semantic coherence of the sequence. In addition, to maintain the visual coherence of the image sequence, we introduce a copy mechanism to initialise reverse diffusion processes with a latent vector iteration from a previously generated image from a relevant step. Both strategies will condition the reverse diffusion process on the sequence of instruction steps and tie the contents of the current image to previous instruction steps and corresponding images. Experiments show that the proposed approach is preferred by humans in 46.6% of the cases against 26.6% for the second best method. In addition, automatic metrics showed that the proposed method maintains semantic coherence and visual consistency across the sequence of visual illustrations.1 1 Introduction When humans undertake a task with numerous intricate steps, merely reading a step description is limiting, leaving the user to imagine and infer some of the more nuanced details (Choi et al., 2022). Complementing the textual step instructions with 1https://novasearch.github.io/ generating-coherent-sequences/ Figure 1: The properties of the elements in illustrations should remain coherent throughout the whole sequence. images enhances the user experience by better communicating and representing the text semantics and ideas (Serafini, 2014). Although prompt-based image generation has advanced significantly (Betker et al., 2023; Rombach et al., 2022; Saharia et al., 2022), state-of-the-art (SOTA) models such as Latent Diffusion Models (LDMs) (Rombach et al., 2022) still struggle when generating image sequences to accompany textual instruction steps (Lu et al., 2023). The challenge lies in effectively combining two key aspects: (a) accurately portraying the actions outlined in the step instructions, and (b) ensuring coherence between successive images to avoid confusing the user. Existing storytelling approaches (Feng et al., 2023; Pan et al., 2022; Rahman et al., 2023) operate mostly on linear storytelling and use synthetic cartoon datasets with explicit sequence information, i.e., the textual prompts describe the images appropriately and have no implicit co-references. 
These aspects limit the applicability of existing methods to real-world scenarios (Figure 1), where there is a lack of informative prompts accompanying images, and dependencies between prompts are not necessarily linear. In this paper, we explore the generation of image sequences within two domains: recipe instructions, and Do It Yourself (DIY) guides, both showing increasing online consumption (Bausch et al., 2021; 12777 Brimble, 2020; Sarpong et al., 2020; Quader, 2022). In these domains, accuracy and coherence are of utmost importance to ensure that the result of all manual actions is correct, and that the user is correctly guided to the target output, Figure 1. These domains contain (i) complex sequential manual tasks of detailed actions, (ii) coherence requirements for the images accompanying the sequence step descriptions, and (iii) a non-linear sequential structure, where steps may be related to earlier steps–not necessarily the previous step. To tackle these challenges, we propose to extend Latent Diffusion Models (Rombach et al., 2022), with an LLM decoder to semantically condition the reverse diffusion process in the sequence of steps and a copy mechanism to select the best LDM initialisation. The image generation process is conditioned on the current step and the previous steps, to increase semantic coherence. In addition, our method initializes the reverse diffusion process with a latent vector iteration copied from a previous generation process to ensure the visual coherence of the generated image. Through this dual attendance to past textual and visual items in the sequence, we aim to achieve semantic coherence, which pertains to the presence and persistence of objects in consecutive images, and visual coherence, which aims to ensure the consistency of backgrounds and visual object properties across successive images. Extensive automatic and manual evaluations confirmed that our model outperforms strong baselines in terms of the overall quality of the generated sequence of illustrations in the cooking and DIY domains. 2 Related Work Methods to generate sequences of images, conditioned on textual input, have been explored in the story visualization and story continuation tasks. Story Visualization aims at generating a coherent sequence of images, based on a multi-sentence paragraph or a series of captions forming a narrative (Li et al., 2019; Pan et al., 2022; Maharana et al., 2022). Story Continuation is a variant of Story Visualization, in which the generated sequence is initiated by a source image. In both tasks, generating every image independently of other images in the sequence leads to low visual coherence. In contrast, editing the previous image like in (Cong et al., 2022; Fu et al., 2020) will lead to insuficiently diverse images. Several works addressed these tasks. Pan et al. (2022) proposed AR-LDM, a method to tackle Story Visualization and Continuation using a history-aware autoregressive latent diffusion model (Rombach et al., 2022), which encodes the history of caption-image pairs into a multimodal representation that guides the LDM denoising process. Despite the intuitive idea, the computational complexity of the conditioning network makes this approach too costly. Additionally, AR-LDM still shows room for improvement in terms of coherence. Feng et al. (2023) noted that AR-LDM conditions the current generation on all historic frames and captions equally, despite not all frames being similarly related. 
To tackle this limitation, they proposed ACM-VSG, a method that selectively adopts historical text-image data for the generation of the new image. The adaptive encoder automatically finds the relevant historical text-image pairs via CLIP similarity. A key difference between ARLDM and ACM-VSG is the computational cost: while AR-LDM fine-tunes CLIP, BLIP, and the LDM, ACM-VSG trains only the cross-attention module, at a much lower cost. Rahman et al. (2022) used the full history of U-net latent vectors from all segments of the sequence, averaging these historic latent vectors in a cross-attention layer that is merged with the existing one in the LDM pipeline. In this method, image generation is conditioned on the text and on the entire set of latent vectors, with limited awareness of the visual coherence between segments. The above approaches focus on two synthetic cartoon datasets (Kim et al., 2017; Gupta et al., 2018), with limited characters and scenes. Among these, (Pan et al., 2022) shows limited performance when applied to real-world sequences, and (Feng et al., 2023; Rahman et al., 2023) do not evaluate their solutions in a real-world scenario. Additionally, these datasets have textual descriptions of the associated images, which are rarely available for real-world complex tasks – the focus of this paper. Finally, it is relevant to note the conceptual relation to the visual storytelling problem (Huang et al., 2016) to generate a text story from a sequence of images (Wang et al., 2020; Hsu et al., 2021) and to the extraction of news storylines (Marcelino et al., 2019, 2021). While, conceptually similar, both tasks can be seen as the inverse of the problem addressed in the current paper. 12778 3 Illustrating Real-World Manual Tasks We consider a set of manual tasks D, where each task TS ∈D is composed of a sequence of n stepby-step instructions, TS = {(s1, v1), ..., (sn, vn)}. A task step (si, vi), consists of a natural language instruction si, and its corresponding visual instruction vi. Given the sequence of steps {s1, . . . , sn}, our goal is to generate a sequence of images {v1, . . . , vn}, in which vi visually represents step si. A step si may be dependent on any number of previous steps, in a non-linear sequential structure (Donatelli et al., 2021). To generate each image accurately, the model needs to condition its output not only on si but also on previous steps {s1, . . . , si−1}; this way, context is preserved even when steps are ambiguous or lack information, e.g. “Add two eggs and mix”. In addition to previous step instructions, we also need to condition on previously generated images, {v1, ..., vi−1}, to maintain the visual aspects that are only introduced in the images, such as object properties and background artefacts not mentioned in the text. 4 Sequential Latent Diffusion Model The latent diffusion model proposed by Rombach et al. (2022) transforms the diffusion process into a low-dimensional latent space through an encoder z = E(v) and recovers the real image with a decoder v = D(z). The complete model is also conditioned on an input y by augmenting the U-Net backbone with a cross-attention layer to support the encoded input τθ(y). The conditional LDM is learned with the loss, LLDM = EE(v),y,ϵ,t h ∥ϵ−ϵθ(zt, t, τθ(y))∥2 2 i , (1) where ϵ ∼N(0, 1), t is the denoising iteration, and the conditioning encoder τθ(y) uses the entire set of tokens in y to condition the U-Net denoising process in the LDM backbone. Eq. 
1 evidences how the conditional LDM is designed to generate one image v at a time. More relevant to our problem is the fact that the latent vectors zt are independent across different image generations, since the reverse diffusion process iterates from T to 1 starting with a new random seed zT for every new image generation. 4.1 Sequence Context Decoder Generally, textual step descriptions describe what the user should do at a specific step of their manFigure 2: Example captions generated by InstructBLIP. In the "No Context" example, the model only receives the image. In the "Current Step as Context" example, the model receives the image plus the "Original Step". ual task. These descriptions do not make accurate captions of the accompanying step images as they often contain information that is not visually representable, such as temporal information, "Cook for 10 minutes", or multiple actions, "Chop the rosemary, dice the carrots, and peel the cucumber." Additionally, it is also common for steps to not be self-contained, as they depend on the previous step descriptions for context. To overcome this, for each step si, we use a decoder-only model, φ, which we call Sequence Context Decoder, to transform the step and its context into a visual caption ci which describes the contents of the image vi. To ensure the generated captions are contextually relevant, we adopt a middle-ground approach and consider the target step si and a context window of w steps. Formally, we define the decoder ci = φ(si, {si−1, . . . si−w}) (2) to generate a contextual caption ci from its step description si and context. The decoder φ(·) is trained similarly to an image caption generator, but instead of receiving images as input, it receives the step and its context. By training the model to output image captions for the original images that we are trying to replicate, the model learns to generate texts that are more appropriate as image generation prompts. The objective now is learning how to generate better image generation prompts, instead of training the image generation module. To train the decoder model φ(·), we generated contextual captions for each image in dataset D using InstructBLIP (Li et al., 2022). To achieve richer and contextualized captions, we prompted InstructBLIP with additional context, conditioning the caption on the recipe steps, in addition to the image. Figure 2 shows example captions generated by the model, given a real data point in the dataset. 12779 Figure 3: The proposed method uses the sequence context decoder to maintain semantic coherence. The reverse diffusion process uses a conditioning seed zi T that is copied from a previous step and iteration zj k. See Equation 3. 4.2 Sequence Conditioned Reverse Diffusion To maintain visual coherence among images in the sequence, we need to condition the current generation on the previous images. Image-toimage (Meng et al., 2022) generation follows this principle, but new images are too strongly influenced by previous ones and do not correctly integrate the new aspects present in the step description. Following the rationale of conditioning every reverse diffusion process on previous processes, we propose to leverage latent vector iterations from early reverse diffusion processes. This leads us to the final formulation of the proposed method, LSLDM(si) = EE(vi),si,ϵ,t h ∥ϵ −ϵθ(zi t, t, τθ(ci = φ(si, {si−1, . . . 
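To make Eq. (2) concrete, below is a minimal sketch of how the Sequence Context Decoder φ could be queried with a context window of w previous steps. The prompt template, the helper name, and the checkpoint path are illustrative assumptions, not the exact fine-tuned Alpaca-7B setup described in Section 5.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder identifier; the paper fine-tunes an Alpaca-7B model for this role.
tokenizer = AutoTokenizer.from_pretrained("path/to/context-decoder")
decoder = AutoModelForCausalLM.from_pretrained("path/to/context-decoder")


def contextual_caption(step: str, previous_steps: list[str], window: int = 2) -> str:
    """c_i = phi(s_i, {s_{i-1}, ..., s_{i-w}}): turn a step plus its context
    into a visual caption suitable as an image-generation prompt."""
    context = " ".join(previous_steps[-window:])
    prompt = (f"Context: {context}\n"
              f"Step: {step}\n"
              "Describe the image that illustrates this step:")
    inputs = tokenizer(prompt, return_tensors="pt")
    output = decoder.generate(**inputs, max_new_tokens=64, do_sample=False)
    # Drop the prompt tokens and return only the generated caption.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)
```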
, si−w})))∥2 2 i (3) where for each si a new reverse diffusion process starts with a conditioning seed zi T copied from a previous step sj with j < i and a latent vector iteration k corresponding to the latent vector zj k, as illustrated in Figure 3. Next, we describe the details of how these two variables are determined. 4.2.1 Random and Fixed Seeds To ground the generation, a straightforward method is to set a fixed seed for every step i in the sequence, zi T = cte. By fixing the initial seed, we aim to improve the coherence between generated images. The first two columns in Figure 5 demonstrate this approach. We observed a greater homogeneity between generated images when using a fixed seed. Hence, all step illustrations share the same random seed, to achieve more coherent scenes and backgrounds but without the capacity to select the optimal starting seed. 4.2.2 Conditioned Initialisation While using a fixed seed can improve the results, we argue that a better solution is achieved by using latent vectors from previous reverse diffusion processes. In particular, latent vectors that have already been semantically conditioned on past steps. Figure 3 shows how the latent vector representations zi t evolve with increasing iterations, until they arrive at the final image vi = D(zi 0) for step si. These latent representations already contain meaningful information about the image (Mao et al., 2023), which could be leveraged to improve the coherence of the following generations. To achieve this, we need to carefully select which step to 12780 choose, to use as input seed for the next step image generation. A step si may be dependent on any previous step j < i, {si−1, . . . , s1}. To select the optimal initialization of the reverse diffusion process for step si, we start by determining the most similar step sj as the arg max j sim(si, sj ∈{si−1, . . . s1}) (4) where sim(·) represents CLIP text similarity. If this similarity score is above a predefined threshold η, we use sj to extract the latent vector. If no step sj has a similarity score above η, we generated image vi with the shared random seed. The reverse diffusion process progressively iterates over the latent vectors towards the final image. This means that conditioning the reverse diffusion process on latent vector iterations from a later iteration, i.e. a highly denoised latent vector, would force the resulting image to be very close to the previous one. To decide how strongly we want to condition vi on the step sj, we select the kth latent vector iteration as k = n · (sim(si, sj) −η)/(1.0 −η) (5) where n is the maximum number of reverse diffusion iterations that we consider. This brings us to the target reverse iteration vector zj k which will be used as a starting seed zi T in Eq 3 when calculating LSLDM(si). Figure 3 illustrates the whole process: the proposed method captures the visual aspects that should be in the image, and the linked denoising latent vector provide the seed to generate a step image. 5 Experimental Methodology Dataset. We collected a dataset consisting of publicly available manual tasks in the recipes domain from AllRecipes. We also considered DIY manual tasks from WikiHow, in an out-of-domain evaluation. Each manual task has a title, a description, a list of ingredients/resources, and a sequence of step-by-step instructions, which may or may not be illustrated. 
Since we want to illustrate the steps of a task, we focus on manual tasks which are fully illustrated, as we can use these images as groundtruth for training and evaluating our methods. In total, we used 1100 tasks, with an average of 5.06 steps per task, resulting in 5562 individual steps. We found that recipes with long steps descriptions or many steps, were difficult to tackle and produced worse results. This pointed towards a refinement of the dataset, so that our approach could better focus on the issue of coherence, instead of tackling other problems. When a step description is too long, it contains too much information, often with multiple actions, which is hard to represent in a single image. Adding to this issue, the CLIP (Radford et al., 2021) text encoder, used in the Stable Diffusion (Rombach et al., 2022) model, truncates the input text at 77 tokens. We filtered the recipes that had steps that were too long. A second problem arises when a recipe has too many steps, as it is difficult to produce coherent illustrations over such a long sequence of steps. To mitigate this concern, we limited the number of steps in a recipe from 4 to 6 steps. In a final stage of refining the dataset, we removed any steps that did not contain actions which we could illustrate, such as steps merely saying “Enjoy!”. Contextual Caption Generation. As previously described, we provide InstructBLIP (Li et al., 2022) with additional context to produce contextualized captions. We experimented with different context lengths and decided to rule out experiments that gave InstructBLIP the full context, i.e., all previous step instructions, as this led to very long outputs that often repeated irrelevant information from the input context, instead of describing the image. We addressed the issue of long input prompts, by using a context window, as described in Section 4.1. This allows us to give the model additional context while mitigating possible errors in the generated training caption. When generating the image captions used to train the Sequence Context Decoder, we produced two sets of captions: long and short. The prompt to InstructBLIP consists of the additional context window followed by "Given the steps, give a short description of the image. Do NOT make assumptions, say only what you see in the image.". We generate captions with a window of 2 steps– short captions–and with a window of 3 steps–long captions. Finally, we generate a long and a short caption for every image, vi in the dataset. Model Details. To train the sequence context decoder model, we fine-tuned an Alpaca-7B model for 10 epochs on a single A100 40Gb GPU. We used a cross-entropy loss and a cosine learning rate 12781 scheduler, starting at 1e−5. The batch size was set to 2, with a gradient accumulation step of 4. The dataset had a total of 5562 step-caption pairs, from which we used 80% for training and 180 examples for testing. We used a frozen Stable Diffusion 2.1 (Rombach et al., 2022) for image generation. Human Annotations. To evaluate our models we ran three annotation jobs, and, in all cases, annotators were allowed to provide feedback on the generation errors. See Appendix D for details. In the first job, annotators inspected 30 sequences of images generated with different methods (withheld from annotators) and selected the 3 best sequences, out of a total of 5 sequences. Besides this selection, we provided an additional No good sequence label, for when no sequence of images was of good quality. 
Model Details. To train the sequence context decoder model, we fine-tuned an Alpaca-7B model for 10 epochs on a single A100 40GB GPU. We used a cross-entropy loss and a cosine learning rate scheduler, starting at 1e−5. The batch size was set to 2, with a gradient accumulation step of 4. The dataset had a total of 5562 step-caption pairs, from which we used 80% for training and 180 examples for testing. We used a frozen Stable Diffusion 2.1 (Rombach et al., 2022) for image generation.

Human Annotations. To evaluate our models we ran three annotation jobs, and, in all cases, annotators were allowed to provide feedback on the generation errors. See Appendix D for details. In the first job, annotators inspected 30 sequences of images generated with different methods (withheld from annotators) and selected the 3 best sequences, out of a total of 5 sequences. Besides this selection, we provided an additional No good sequence label, for when no sequence of images was of good quality. The second job aimed at obtaining finer-grained annotations for the two best-performing methods. Annotators compared the proposed method against the second best method (Latent T-1) and were asked to select the preferred one or to indicate a tie. After these two initial human annotation jobs, we decided to compare the proposed method to the real-world image sequences and asked annotators to rate the two sequences. This is by far the most challenging setting, as real-world image sequences have the natural visual coherence that we wish to achieve.

Automatic Metrics. To assess image sequences generated by the proposed method, we measured the semantic correctness and the sequence coherence with CLIPScore (Hessel et al., 2021) and with the recently proposed DreamSim metric (Fu et al., 2023), respectively. DreamSim measures the similarity between two images in terms of their foreground objects and semantic content, while also being sensitive to colour and layout. This is particularly well suited to our task, as we wish to maintain visual coherence across the entire sequence.

6 Results and Discussion

In this section, we start by comparing the proposed method to latent diffusion models, SD2.1 (Rombach et al., 2022), and non-LDM models, i.e., aMUSEd (Patil et al., 2024) and DALL·E Mini (Dayma et al., 2021), with automatic metrics. For the ablation studies, we experimented with using (1) a random seed for all steps of the sequence, (2) a fixed seed for all steps of the sequence, (3) a fixed latent vector iteration from the previous step, represented by Latent k, where k is the fixed iteration, and (4) the proposed method that selects a latent vector iteration z_k^j from the previous denoising steps as a starting seed.

Figure 4: Automatic evaluation of image sequences. CLIPScore (Hessel et al., 2021) measures the alignment between the step and the image. DreamSim (Fu et al., 2023) measures the similarity between visual illustrations in the sequence.

6.1 Automatic Evaluation
Figure 4 shows that the proposed method can improve the coherence of image sequences (as measured by DreamSim), while maintaining the same text-to-image generation capabilities (as measured by CLIPScore). This result is particularly important because it clearly shows that it is possible to maintain key visual and semantic traits from specific iterations of the reverse-diffusion process, an LDM property that we leverage in this paper.
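To make the sequence-level evaluation concrete, here is a rough sketch of how such scores could be computed. It uses the CLIP model from transformers as a stand-in for both metrics: a CLIPScore-style text-image alignment and a mean pairwise image similarity as a crude proxy for DreamSim, whose official implementation is not reproduced here.

```python
import itertools
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def step_alignment(steps: list[str], images: list[Image.Image]) -> float:
    """CLIPScore-style text-image alignment, averaged over the sequence."""
    batch = proc(text=steps, images=images, return_tensors="pt", padding=True, truncation=True)
    txt = F.normalize(model.get_text_features(input_ids=batch["input_ids"],
                                              attention_mask=batch["attention_mask"]), dim=-1)
    img = F.normalize(model.get_image_features(pixel_values=batch["pixel_values"]), dim=-1)
    return (txt * img).sum(-1).clamp(min=0).mean().item()

@torch.no_grad()
def sequence_coherence(images: list[Image.Image]) -> float:
    """Mean pairwise image-embedding similarity, a rough stand-in for DreamSim."""
    if len(images) < 2:
        return 1.0
    pix = proc(images=images, return_tensors="pt")["pixel_values"]
    emb = F.normalize(model.get_image_features(pixel_values=pix), dim=-1)
    pairs = list(itertools.combinations(range(len(images)), 2))
    return torch.stack([(emb[a] * emb[b]).sum() for a, b in pairs]).mean().item()
```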
6.2 Sequence Generation Results

Recipes Domain. To validate our initial hypothesis, we start by analysing the performance of the proposed method in the recipes domain. Our goal is to assess how conditioning the reverse diffusion process of each step affects the overall results. Table 1 provides the complete set of results across all competing methods. It is clear how using latent vector iterations and random seeds supports the intuition behind our method: a manual task is composed of continuous and independent actions, which should be conditioned by latent vector iterations and by random seeds, respectively. Building on this observation, we calibrated the η parameter of the proposed method using human evaluation, as reported in Table 2.

Method        Best (%)   Second Best (%)   Third Best (%)
Random seed   17.70      41.20             13.00
Fixed seed    29.40      17.70             33.30
Latent T-1    33.30      17.70             37.04
Latent T-2    17.70      23.50             16.70
Img-to-Img     2.00       0.00              0.00
Table 1: Annotation results for the evaluation of the various methods of maintaining visual coherence. Annotators picked No Good Sequence in 18.99% of the sequences; we report the results for the remaining 81.01%.

η      Best (%)   Second Best (%)   Third Best (%)
0.70   19.20      12.80             14.90
0.65   12.80      14.90             25.50
0.60   19.20      23.40             21.30
0.55   14.90      23.40             21.30
0.50   34.04      25.53             17.02
Table 2: Annotation results for the sequences generated with different threshold values, η. Annotators picked No Good Sequence in 20.34% of the generations; we show results for the remaining 79.66%.

To compare the proposed method to the best performing method (Latent T-1), we asked annotators to select the best sequence out of two side-by-side sequences (see Appendix D for details about the annotation instructions). Results reported in Table 3 show that in 46.7% of the cases human annotators preferred our method over the alternative, and in 10.0% of the cases the two methods were equally good. According to the annotations, the proposed method was equal to or better than the second best baseline with an agreement of 70% across the full set of tasks and annotators. This confirms our hypothesis and supports the importance of selectively conditioning the denoising process on the previously generated steps of the sequence.

Method                   Recipes (seen)   DIY (unseen)
Proposed method (wins)   46.67            30.00
Second best (wins)       26.67            23.33
Tie                      10.00            16.67
No good sequence         16.67            30.00
Table 3: Annotation results of the comparison between our proposed method and the winning method from Table 1 (Latent T-1).

DIY Domain. To assess the generalisation of the proposed method, we evaluated its performance in an unseen domain: DIY tasks. Our human annotation study shows that the transition from generating recipe images to DIY tasks yields promising results. Results in Table 3 show that in 30.0% of the tasks, neither method produced satisfactory results. For the tasks that were correctly illustrated, we see that annotators preferred our method in 30.0% of the tasks, compared to 23.3% for the second-best approach. Additionally, 16.7% of comparisons resulted in a tie between the two methods. Although we see limitations in this domain, the results show that our approach is capable of generalizing to an unseen domain, producing satisfactory image sequences.

Generated vs Ground-Truth Sequences. We compared the quality of generated image sequences against the ground-truth sequences and asked human annotators to rate each sequence on a 5-point Likert scale. This is a particularly challenging setting because the real image sequences are photos taken by humans in a real-world setting, where sequence coherence is naturally captured. Table 4 shows that our method achieves over 60% of the ground-truth score, with the ground-truth sequences only 0.42 points below the maximum score.

Method            Average Rating
Proposed method   2.93 ± 1.14
Ground-truth      4.58 ± 0.79
Table 4: Human annotation results for the comparison of the proposed method with ground-truth images.

Qualitative Analysis. To illustrate how different conditioning methods affect the quality of generated sequences, we present several examples in Figure 5 and in Appendix E.2.
In Figure 5, we can see that by using a fixed seed for all steps, we are able to preserve the background and add objects for each specific step. We can also see that the image-to-image method conditions the generations too strongly. Figure 5 also shows that our method is capable of preserving and recalling key visual artefacts from several steps back in the sequence. We believe this is a distinctive and fundamental feature of the proposed method.

Figure 5: Examples of recipe illustrations with different methods for maintaining visual coherence.
Figure 6: Examples of DIY illustrations with different methods for maintaining visual coherence.

In the DIY out-of-domain experiment, Figure 6 shows a strong generalization to tasks involving simple object utilisation, such as using a broom for cleaning or a brush for painting. While fixed latent vector iteration methods show some memorization capability, the image-to-image generations are too biased towards previous generations, and random seeds lead to very diverse generations. In this unseen domain, the proposed method encounters considerable challenges when tasked with more intricate activities, like performing a car’s oil change with its complex mechanical components, or engaging in tasks that involve philosophical or introspective elements; see Appendix E.2 for visual examples. It is worth noting that these limitations are inherited from the core image generation method, which struggles to handle fine details.

6.3 Sequence Conditioned Reverse Diffusion
To better understand the strength of our visual coherence hypothesis, we conducted an experiment where all the images of the sequence are generated from the latent vectors of a fixed iteration from the previous task step. Specifically, we use the latent vector iteration k, with k ∈ {1, 2, 5, 10, 20, 49}, of the previous task step as the starting point of the current reverse diffusion process. Our empirical analysis of these generations, from which some examples can be seen in Figure 7 in the appendix, confirms our initial hypothesis, showing that using latent vector iterations from previous steps provides a good result in many cases. Additionally, this experiment evidences the strength of latent vector iterations as conditioning signals in a reverse diffusion process. It also shows that later iterations add a very strong bias to the generation, emphasising the importance of selecting the best conditioning seed from previous reverse diffusion iterations and processes. An extreme case of this phenomenon is observed in the image-to-image method in Figure 5.
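This experiment relies on capturing intermediate latents during one reverse diffusion run and feeding one of them back as the starting latent of the next run. A sketch of how this could be wired up with the diffusers callback API is shown below; the checkpoint name, helper names, and the use of callback_on_step_end are our assumptions about a possible implementation, not the authors' code.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

def generate_step_image(prompt: str, init_latents: torch.Tensor | None = None,
                        shared_seed: int = 1234, num_steps: int = 50):
    """Run one reverse diffusion process while caching every intermediate latent."""
    cached = []

    def keep_latents(pipeline, step_index, timestep, callback_kwargs):
        cached.append(callback_kwargs["latents"].detach().clone())
        return callback_kwargs

    out = pipe(
        prompt,
        num_inference_steps=num_steps,
        # Reuse a latent from a previous step's denoising as the starting point ...
        latents=init_latents,
        # ... or fall back to the shared random seed when no similar previous step exists.
        generator=None if init_latents is not None
        else torch.Generator("cuda").manual_seed(shared_seed),
        callback_on_step_end=keep_latents,
        callback_on_step_end_tensor_inputs=["latents"],
    )
    return out.images[0], cached   # cached[k] plays the role of z_k^j for later steps
```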
6.4 Sequence Context Decoder
We assessed the capacity of the sequence context decoder to maintain semantic coherence by manually annotating the decoder output for six settings with different context lengths and caption lengths (Table 5). We considered three context lengths: the shortest one considers only the current step, while the longest includes two steps and a caption. For the captions, we used both short and long captions, as detailed in Section 5.

Sequence Context          Captions   Avg. Rating
{s_n}                     short      3.00
{s_n}                     long       3.23
{s_n, c_{n−1}}            short      3.68
{s_n, c_{n−1}}            long       3.56
{s_n, s_{n−1}, c_{n−2}}   short      3.41
{s_n, s_{n−1}, c_{n−2}}   long       3.35
Table 5: Sequence Context Decoder results for different context lengths, configurations and caption type.

Table 5 shows the results for the different evaluation settings. These results indicate that the model attains the best semantic coherence with a context window of 2 and with short captions. We observed that captions generated with short contexts tend to lack some information, while captions generated with longer contexts introduced too much information, which was often noisy. This is aligned with a recent study (Liu et al., 2023) that highlights the fact that current large language models do not robustly make use of information in long input contexts. These results indicate that additional context needs to be carefully considered and curated, as the model is not able to filter out excess information. In Appendix E.1 we provide more insight into the performance of the decoder.

7 Conclusions
In this paper, we addressed the problem of illustrating complex manual tasks and proposed a framework for generating a sequence of images that illustrate the manual task. The framework is composed of a novel sequence context decoder that preserves the semantic coherence of a sequence of actions by transforming it into a visual caption. The full sequence illustration framework is completed by a sequence conditioned reverse diffusion process that uses a latent vector iteration from a past image generation process to maintain visual coherence. Automatic and human evaluations in the target domain demonstrated the strong performance of the framework in generating coherent sequences of visual instructions for manual tasks. The proposed method was preferred by humans in 46.6% of the cases against 26.6% for the second best method. In addition, automatic measures also confirmed that our method maintains visual and semantic coherence while illustrating a complex manual task. The generalization to unseen domains was successfully validated in the DIY domain with positive results.

8 Risks and Limitations
While we conducted a thorough set of experiments and validations, we acknowledge that more experiments could shed more light on some aspects. First, we did not experiment with larger LLMs to discover the impact of scaling. Second, we also did not consider that steps may depend on multiple previous steps. Third, we did not consider contexts larger than 3 steps. Finally, in terms of risks, we acknowledge that our work could potentially be used to generate false information.

Acknowledgements
We thank the anonymous reviewers for their valuable comments and suggestions. This work was partially supported by Amazon Alexa Prize TaskBot, by a Google Research Gift and by NOVA LINCS ref. UIDB/04516/2020 (https://doi.org/10.54499/UIDB/04516/2020).

References
Cleber Lemes Bausch, Gabriel Sperandio Milan, Ana Paula Graciola, Luciene Eberle, and Suélen Bebber. 2021. The covid-19 pandemic and the changes in consumer habits and behavior. Revista Gestão e Desenvolvimento, 18(3):3–25. James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, et al. 2023. Improving image generation with better captions. Computer Science. https://cdn.openai.com/papers/dall-e-3.pdf. Lucy Brimble. 2020. More than 70% of adults use social media for recipes instead of cookbooks, survey finds. Independent UK. Jason Ingyu Choi, Saar Kuzi, Nikhita Vedula, Jie Zhao, Giuseppe Castellucci, Marcus Collins, Shervin Malmasi, Oleg Rokhlenko, and Eugene Agichtein. 2022. Wizard of tasks: A novel conversational dataset for solving real-world tasks in conversational settings.
In Proceedings of the 29th International Conference on Computational Linguistics, COLING 2022, Gyeongju, Republic of Korea, October 12-17, 2022, pages 3514–3529. International Committee on Computational Linguistics. Gaoxiang Cong, Liang Li, Zhenhuan Liu, Yunbin Tu, Weijun Qin, Shenyuan Zhang, Chengang Yan, Wenyu Wang, and Bin Jiang. 2022. Ls-gan: Iterative language-based image manipulation via long and short term consistency reasoning. In Proceedings of the 30th ACM International Conference on Multimedia, MM ’22, page 4496–4504, New York, NY, USA. Association for Computing Machinery. Boris Dayma, Suraj Patil, Pedro Cuenca, Khalid Saifullah, Tanishq Abraham, Phúc Le Khac, Luke Melas, and Ritobrata Ghosh. 2021. Dall·e mini. Lucia Donatelli, Theresa Schmidt, Debanjali Biswas, Arne Köhn, Fangzhou Zhai, and Alexander Koller. 2021. Aligning actions across recipe graphs. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6930– 6942, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Zhangyin Feng, Yuchen Ren, Xinmiao Yu, Xiaocheng Feng, Duyu Tang, Shuming Shi, and Bing Qin. 2023. Improved visual story generation with adaptive context modeling. In Findings of the Association for Computational Linguistics: ACL 2023, pages 4939– 4955, Toronto, Canada. Association for Computational Linguistics. Stephanie Fu, Netanel Tamir, Shobhita Sundaram, Lucy Chai, Richard Zhang, Tali Dekel, and Phillip Isola. 2023. Dreamsim: Learning new dimensions of human visual similarity using synthetic data. Tsu-Jui Fu, Xin Wang, Scott Grafton, Miguel Eckstein, and William Yang Wang. 2020. SSCR: Iterative language-based image editing via self-supervised counterfactual reasoning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4413–4422, Online. Association for Computational Linguistics. Tanmay Gupta, Dustin Schwenk, Ali Farhadi, Derek Hoiem, and Aniruddha Kembhavi. 2018. Imagine this! scripts to compositions to videos. In European Conference on Computer Vision. Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. 2021. Clipscore: A referencefree evaluation metric for image captioning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 711 November, 2021, pages 7514–7528. Association for Computational Linguistics. Chi-yang Hsu, Yun-Wei Chu, Ting-Hao Huang, and Lun-Wei Ku. 2021. Plot and rework: Modeling storylines for visual storytelling. In Findings of the Association for Computational Linguistics: ACLIJCNLP 2021, pages 4443–4453, Online. Association for Computational Linguistics. Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. Lora: Low-rank adaptation of large language models. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net. Ting-Hao Kenneth Huang, Francis Ferraro, Nasrin Mostafazadeh, Ishan Misra, Aishwarya Agrawal, Jacob Devlin, Ross Girshick, Xiaodong He, Pushmeet Kohli, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh, Lucy Vanderwende, Michel Galley, and Margaret Mitchell. 2016. Visual storytelling. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1233–1239, San Diego, California. 
Association for Computational Linguistics. Kyung-Min Kim, Min-Oh Heo, Seong-Ho Choi, and Byoung-Tak Zhang. 2017. Deepstory: Video story qa by deep embedded memory networks. Dongxu Li, Junnan Li, Hung Le, Guangsen Wang, Silvio Savarese, and Steven C. H. Hoi. 2022. Lavis: A library for language-vision intelligence. Yitong Li, Zhe Gan, Yelong Shen, Jingjing Liu, Yu Cheng, Yuexin Wu, Lawrence Carin, David Carlson, and Jianfeng Gao. 2019. Storygan: A sequential conditional gan for story visualization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6329–6338. Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2023. Lost in the middle: How language models use long contexts. 12786 Yujie Lu, Pan Lu, Zhiyu Chen, Wanrong Zhu, Xin Eric Wang, and William Yang Wang. 2023. Multimodal procedural planning via dual text-image prompting. Adyasha Maharana, Darryl Hannan, and Mohit Bansal. 2022. Storydall-e: Adapting pretrained text-to-image transformers for story continuation. In Computer Vision – ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXVII, page 70–87, Berlin, Heidelberg. SpringerVerlag. Jiafeng Mao, Xueting Wang, and Kiyoharu Aizawa. 2023. Guided image synthesis via initial image editing in diffusion model. In Proceedings of the 31st ACM International Conference on Multimedia, MM 2023, Ottawa, ON, Canada, 29 October 2023- 3 November 2023, pages 5321–5329. ACM. Gonçalo Marcelino, David Semedo, André Mourão, Saverio Blasi, João Magalhães, and Marta Mrak. 2021. Assisting news media editors with cohesive visual storylines. In Proceedings of the 29th ACM International Conference on Multimedia, MM ’21, page 3257–3265, New York, NY, USA. Association for Computing Machinery. Gonçalo Marcelino, David Semedo, André Mourão, Saverio Blasi, Marta Mrak, and Joao Magalhaes. 2019. A benchmark of visual storytelling in social media. In Proceedings of the 2019 on International Conference on Multimedia Retrieval, ICMR ’19, page 324–328, New York, NY, USA. Association for Computing Machinery. Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. 2022. Sdedit: Guided image synthesis and editing with stochastic differential equations. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In NeurIPS. Xichen Pan, Pengda Qin, Yuhong Li, Hui Xue, and Wenhu Chen. 2022. Synthesizing coherent story with auto-regressive latent diffusion models. Suraj Patil, William Berman, Robin Rombach, and Patrick von Platen. 2024. amused: An open MUSE reproduction. CoRR, abs/2401.01808. Shams Bin Quader. 2022. How the central sydney independent musicians use pre-established ‘online diy’to sustain their networking during the covid-19 pandemic. The Journal of International Communication, 28(1):90–109. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. 
Learning transferable visual models from natural language supervision. Tanzila Rahman, Hsin-Ying Lee, Jian Ren, Sergey Tulyakov, Shweta Mahajan, and Leonid Sigal. 2022. Make-a-story: Visual memory conditioned consistent story generation. Tanzila Rahman, Hsin-Ying Lee, Jian Ren, Sergey Tulyakov, Shweta Mahajan, and Leonid Sigal. 2023. Make-a-story: Visual memory conditioned consistent story generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 2493–2502. Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. Highresolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10684–10695. Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. 2022. Photorealistic text-to-image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487. David Sarpong, George Ofosu, David Botchie, and Fintan Clear. 2020. Do-it-yourself (diy) science: The proliferation, relevance and concerns. Technological Forecasting and Social Change, 158:120127. Frank Serafini. 2014. Reading the visual: An introduction to teaching multimodal literacy. Teachers College Press. Ruize Wang, Zhongyu Wei, Piji Li, Qi Zhang, and Xuanjing Huang. 2020. Storytelling from an image stream using scene graphs. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):9185–9192. 12787 A Model training We chose Alpaca-7B as our base model, as it is an open-source instruction-tuned model, and there are implementations using LoRA (Hu et al., 2022), which reduces the computational cost during training. We encourage future work to study the impacts of scaling the LLM. We experimented with different hyperparameters, namely different learning rates, and learning rate schedulers, to find the most suitable ones for our problem. We point out that we are using the loss of the models as the main criterion for future experiments, as we do not have an automatic metric for evaluating model behaviour. The loss is not always indicative of the model’s performance in the task at hand, as we verified empirically. Training Details Base Model Alpaca-7B Training Time ≈10h Epochs 10 Loss Function Cross-Entropy Weight Decay 0.01 Model Max Length 400 Batch Size 2 Gradient Accumulation Steps 4 Effective Batch Size 8 Learning Rate 1e−05 Learning Rate Scheduler Cosine Optimizer AdamW Adam β1 0.900 Adam β2 0.999 Adam ϵ 1e−08 LoRA LoRA Rank 8 LoRA α 32 LoRA Dropout 0.1 Table 6: Training parameters for the best model. Since the cosine scheduler has greater variability between runs, due to different number of epochs leading to different loss curves, we decided to run further experiments with the next best-performing model. We experimented with varying the weight decay parameter but found that there were no significant differences in the loss curves for the three weight decay values. For the aforementioned runs, we used β1 = 0.900 and β2 = 0.999 as the AdamW optimizer’s hyperparameters. As a final test, we changed these to the values proposed by Ouyang et al. (2022), β1 = 0.900 and β2 = 0.950. We did not see any improvement in the loss curve. Based on these results, we fine-tuned our Alpaca7B models for 10 epochs on a single A100 40Gb GPU. We used a cross-entropy loss, a cosine learning rate scheduler, starting at 1e−5. 
Our batch size was 2, with a gradient accumulation step of 4, leading to an effective batch size of 8. The dataset had a total of 5562 examples; we used 80% for training and the remaining for evaluation. Table 6 summarizes the training information for our best-performing model.
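Putting the hyperparameters from Table 6 together, a fine-tuning setup along these lines could be expressed with Hugging Face transformers and peft roughly as follows. The base checkpoint identifier and the toy dataset are placeholders; the authors' actual training script is not shown in this paper, so treat this as an assumption-laden sketch.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model

BASE = "chavinlo/alpaca-native"  # placeholder Alpaca-7B checkpoint, not necessarily the authors' weights

tokenizer = AutoTokenizer.from_pretrained(BASE)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

model = get_peft_model(
    AutoModelForCausalLM.from_pretrained(BASE),
    LoraConfig(r=8, lora_alpha=32, lora_dropout=0.1, task_type="CAUSAL_LM"),  # LoRA rows of Table 6
)

# Tiny stand-in for the 5562 step-caption pairs; labels equal inputs for causal LM training.
enc = tokenizer(["Step: Boil water. Caption: A pot of water on a stove."],
                truncation=True, max_length=400, padding="max_length")
train_dataset = [{"input_ids": i, "attention_mask": m, "labels": i}
                 for i, m in zip(enc["input_ids"], enc["attention_mask"])]

args = TrainingArguments(
    output_dir="seq-context-decoder",
    num_train_epochs=10,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,   # effective batch size of 8
    learning_rate=1e-5,
    lr_scheduler_type="cosine",
    weight_decay=0.01,
    adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8,
)

Trainer(model=model, args=args, train_dataset=train_dataset, tokenizer=tokenizer).train()
```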
B Fixed Latent Iteration Generation
As mentioned in Section 6.3, we conducted an empirical analysis of generations using a fixed latent vector iteration from the previous step. This analysis helped us understand how different latent vector iterations impact the generation of the following image. An example from this analysis is shown in Figure 7.

C Negative Prompts
We found problems which were not related to the prompts or the concepts we were trying to generate, but general problems in image generation, e.g., tiled images or deformed hands. In order to reduce some of these common problems, present in Stable Diffusion generations, we used negative prompts. A negative prompt steers the generation away from the concepts present in it. This string of text is added to the end of the original prompt. In the negative prompt, we included undesirable concepts such as human or hands, and also included some additional concepts, following common practices. Our final negative prompt follows:
negative_prompts = ["hands", "human", "person", "cropped", "deformed", "cut off", "malformed", "out of frame", "split image", "tiling", "watermark", "text"].

D Human Annotations
In this section, we present examples of the annotation tasks. The human annotation pool consisted of 3 PhD students and 5 MSc students. 25% of the annotators were women and 85% were men.

Figure 7: Maintaining visual coherence through the use of different memory latent vectors.
Figure 8: Annotation of the comparison between visual coherence methods.

Figure 8 shows the annotation task to choose the best visual coherence maintaining method. The annotators saw 5 sequences: Random Seed, Fixed Seed, Latent 1, Latent 2, and Image-to-Image. They were then asked to pick the best, second best, and third best sequences. They could also indicate that there were no good sequences, by checking the No good sequence checkbox. Additionally, they could leave an observation in the appropriate text area.

Figure 9: Annotation of the comparison between different heuristic thresholds.

Figure 9 shows the annotation task to tune the threshold of our method. The annotators had to pick between 5 sequences, generated with different values of the threshold: 0.50, 0.55, 0.60, 0.65, 0.70. They were asked to pick the best, second best, and third best sequences. They could also indicate that there were no good sequences, by checking the No good sequence checkbox. They could also leave an observation in the appropriate text area.

Figure 10: Annotation of the comparison between our method and the best visual coherence method.

Figure 10 shows the annotation task to choose between our method and the winning visual coherence maintaining method. The annotators saw 2 sequences, one generated with Latent 1 and another with our method. They had to choose the winning sequence, or deem them equivalent. If there was no good sequence, they could check the No good sequence checkbox. They could also leave an observation.

Figure 11 shows the annotation task to rate sequences generated with our method and the ground-truth images, on a scale of 1 to 5. Additionally, the annotators could select that there was no good sequence, or leave an observation. Figure 12 shows the annotation guidelines for the task to rate sequences generated with our method and the ground-truth images.

Figure 11: Annotation of the comparison between our method and the ground-truth images.
Figure 12: Annotation guidelines for the comparison between our method and the ground-truth images.

E Failure Analysis of Extra Examples

E.1 Sequence Context Decoder
We further analysed the errors of the best method and present the results in Table 7. Hallucinations occurred in 3.9% of the generations, and the LLM copied the input into the output in 7.2% of the cases. Finally, it is interesting to see that the input was too complex in 6.2% of the cases, i.e., describing more actions than what is possible to depict in the image.

Error type                       %
Hallucinations                   3.9%
Complex step with many actions   6.2%
Copied input                     7.2%
Table 7: The contribution of each error type to the overall performance.

E.2 Qualitative Analysis
Table 8 shows some example generations from the Sequence Context Decoder, each highlighting a particular behaviour of the model. In Example 1, we can see the model correctly identifying the ingredients from the context, caption_{n−2}, going two steps back, and integrating them in the final output. It also recognizes the plate from step_n as the object containing the ingredients. This shows the potential of giving the model additional context to generate richer prompts. In Example 2, we can see that, despite being able to maintain the red apples, the model makes no explicit reference to their state, chopped up. This is still a limitation, which may lead to a wrongful representation of intact apples. In Example 3, we want to highlight two main aspects of the generation: we can see the model adding the bowl of soup from the context to the prompt, maintaining semantic coherence. We can also see that the model kept lime juice. This is correct from the point of view of the task at hand, but shows the lack of understanding of what can be perceived in an image. We reason that this knowledge should come from the pretraining of the model, and not from our fine-tuning to this task. Example 4 shows a depiction that is mostly correct, but misses a step of the sequence. The representation of the saucepan with black tea in it is plausible, but step_n indicates the saucepan should be removed from the heat. Finally, in Example 5, we see a very long step, with various actions. In this case, we consider it plausible for the model to pick one of these actions. This is a better result than attempting to represent them all, which would lead to an inadequate prompt. Despite this, this specific generation lacks some context, as the word vegetables is generic; it is important for the generated prompts to be specific, containing the ingredients mentioned in the context.

E.3 Challenges of Sequential Image Synthesis
To analyse the off-the-shelf behaviour of Stable Diffusion in the task at hand, we devised a few generation baselines. It is important to understand how the generation model used the text in the prompts, and where and how it failed. To do this, we assessed the performance of three image generation baselines in our target problem: (i) the current step instruction, (ii) the current step instruction plus the previous step instruction, and (iii) the current step instruction plus all previous step instructions.
Isolated errors. We found problems which appeared consistently throughout different generations. These problems were not tied to the prompts or the concepts we were trying to generate, but were shortcomings of the image generation model. Among such examples were tiled images and deformed objects, hands, and faces.

Representing Long Prompts. We found that long prompts were detrimental to the generations. These models are not prepared to represent many actions or objects, with some context being lost in the generation. Furthermore, these models fail to illustrate distant relationships in the texts (Figure 1). Hence, training an LDM in this setting is not realistic due to the limitations of existing models. First, the amount of required data is not easily obtained; second, current LDMs are not designed to support long inputs; and third, they do not support the notion of sequential generation.

Example 1: The Sequence Context Decoder was able to dereference the objects from caption_{n−2} and create the correct caption for step_n.
Caption_{n−2}: A wooden cutting board with ingredients for a salad including avocado, crab meat, and grapes
Step_{n−1}: Dress all ingredients except the arugula with one fourth cup dressing, dress the arugula separately with two tablespoon dressing.
Step_n: Plate the arugula first. Combine the rest of the ingredients, and add on top.
Output: A plate of salad with avocado, crab meat, and grapes on top.

Example 2: The state of objects is not always preserved in the generated caption.
Caption_{n−1}: A purple plastic cup is sitting on a wooden table. Inside the cup, there is a mixture of black pepper, olive oil, brown sugar, curry powder, salt, and white vinegar. There are also some sultanas floating in the mixture. The cup is covered with plastic wrap.
Step_n: Get White Rice in a big bowl. Add in Celery and Scallion. In order to add a bit of color and some sweetness, chop up Red Apple and dust that up. Toss them together.
Output: A bowl of white rice is being prepared with celery, scallions, and red apples

Example 3: Some objects are correctly added to the generated caption.
Caption_{n−1}: In the image, there is a pot of soup on the stove with chicken and vegetables in it. There is also a person’s hand reaching into the pot to add more ingredients to the soup.
Step_n: Serve with White Rice on the bottom and Lime Wedge on the side. Or with tortillas for dipping into the soup.
Output: In the image, there is a bowl of soup with a tortilla in it. The soup has chicken, vegetables, and lime juice in it.

Example 4: Sequence of actions was not fully captured by the model.
Step_{n−1}: Bring Water to boil in a medium saucepan.
Step_n: Once water has boiled, remove saucepan from heat and add the Black Tea. Steep for 15 minutes.
Output: a saucepan with black tea in it on a stove top

Example 5: When steps provide plenty of context, using a single step is sufficient.
Step_n: Halve fennel, remove stalk, and cut into fine strips. Finely dice onion. Peel garlic. Clean zucchini and bell pepper and cut into bite-sized pieces. Combine vegetables with rosemary, thyme, and some of the olive oil in a plastic bag. Close, shake, and marinate for approx. 20 min.
Output: A woman is cutting up vegetables on a cutting board with a knife and a peeler.

Table 8: Qualitative analysis of the Sequence Context Decoder results.
Figure 13: Examples of recipe illustrations with different methods for maintaining visual coherence.
Figure 14: Examples of recipe illustrations with different methods for maintaining visual coherence.
Figure 15: Examples of recipe illustrations with different methods for maintaining visual coherence.
Figure 16: Examples of recipe illustrations with different methods for maintaining visual coherence.
Figure 17: Examples of recipe illustrations with different methods for maintaining visual coherence.
Figure 18: Examples of recipe illustrations with different methods for maintaining visual coherence.
Figure 19: Examples of task illustrations with different methods for maintaining visual coherence.
Figure 20: Examples of task illustrations with different methods for maintaining visual coherence.
Figure 21: Example of a task that is very challenging to illustrate. We can see how the generated images still capture some of the more challenging elements of the steps, such as "Make a note of the longitude and latitude" in step 4, with the images showing a pen.
Figure 22: Examples of task illustrations with different methods for maintaining visual coherence.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12798–12823 August 11-16, 2024 ©2024 Association for Computational Linguistics Cheetah : Natural Language Generation for 517 African Languages Ife Adebaraξ,⋆AbdelRahim Elmadanyξ,⋆Muhammad Abdul-Mageedξ,Ω,λ ξThe University of British Columbia ΩMBZUAI λ Invertible AI {ife.adebara,a.elmadany,muhammad.mageed}@ubc.ca Abstract Low-resource African languages pose unique challenges for natural language processing (NLP) tasks, including natural language generation (NLG). In this paper, we develop Cheetah, a massively multilingual NLG language model for African languages. Cheetah supports 517 African languages and language varieties, allowing us to address the scarcity of NLG resources and provide a solution to foster linguistic diversity. We demonstrate the effectiveness of Cheetah through comprehensive evaluations across six generation downstream tasks. In five of the six tasks, Cheetah significantly outperforms other models, showcasing its remarkable performance for generating coherent and contextually appropriate text in a wide range of African languages. We additionally conduct a detailed human evaluation to delve deeper into the linguistic capabilities of Cheetah. The findings of this study contribute to advancing NLP research in low-resource settings, enabling greater accessibility and inclusion for African languages in a rapidly expanding digital landscape. The GitHub repository for the Cheetah project is available at https://github.com/UBC-NLP/Cheetah. 1 Introduction The linguistic diversity present in African languages poses unique challenges for NLG systems. With over 2, 000 languages spoken across the African continent (Eberhard et al., 2021), the need for effective NLG solutions that can accommodate this rich linguistic ecosystem cannot be over-emphasized. This is especially important because traditional NLG approaches have primarily focused on high-resource languages, such as English and French due to the availability of large-scale datasets and resources. Consequently, low-resource ⋆Authors contributed equally. Figure 1: Cheetah is trained on 517 African languages and language varieties across 14 language families. The languages are domiciled in 50 of 54 African countries and are written in six different scripts. languages, including numerous African languages, have been marginalized in NLG research and development. Developing robust NLG systems for the diverse needs of African communities is challenging due to the scarcity of extensive language datasets, limited linguistic research, and variations across these languages. To address these challenges, recent advancements in language modeling and transfer learning techniques have shown promise in supporting NLG in low-resource languages. Pretrained language models, such as GPT-3 (Radford et al., 2018, 2019; Brown et al., 2020), mT5 (Xue et al., 2021), and mT0 (Muennighoff et al., 2022), have demonstrated remarkable capabilities in understanding and generating human-like text. These models capture the statistical regularities and syntactic structures of the languages they are trained on, making them valuable starting points for supporting NLG in low-resource settings. In this paper, we present a pioneering work on NLG in African languages by introducing Cheetah: a novel language model (LM) specifically designed to support 517 African languages and language varieties. 
To the best of our knowledge, Cheetah supports the largest number of African languages and language varieties. Leveraging a vast 12798 corpus of text data collected from diverse sources, Cheetah learns intricate linguistic information that characterizes each African language. The contributions of this work are three fold. First, we address the scarcity of NLG resources for African languages by providing a comprehensive language model that covers a wide range of languages spoken on the continent. Second, we demonstrate the efficacy of our approach through extensive evaluations across six downstream task clusters. Each cluster includes multiple languages, showcasing the model’s ability to generate coherent and contextually appropriate text in different African languages. Third, we perform fine grained human analysis of Cheetah using a controlled machine translation (MT) test set. This uncovers model behaviour that is not visible with automatic metrics. By supporting NLG in African languages, we foster linguistic diversity, empower African communities to express themselves in their native languages, and bridge the digital divide. This paper serves as a foundational step towards promoting Afrocentric NLP (Adebara and Abdul-Mageed, 2022) that prioritizes inclusivity and cultural preservation in language technology, emphasizing the importance of catering to the unique linguistic needs of diverse populations. The rest of the paper is organized as follows: In Section 2, we discuss related work. In Section, 4 we describe AfroNLG, the benchmark we create for evaluation. We provide details of Cheetah in Section 3. We present performance of Cheetah in Section 5 and compare it to other multilingual models. We present controlled test sets in Section 5.1. We conclude in Section 6, and outline a number of limitations and use cases for our work in Section 7 and Section 8. 2 Literature Review One of the challenges in NLG is to generate coherent and semantically meaningful text. Various approaches have been proposed, including templatebased (Becker, 2002; Van Deemter et al., 2005), rule-based (Dušek and Jurˇcíˇcek, 2015; van Miltenburg et al., 2020), and statistical approaches (Li et al., 2016). More recently, deep learning approaches (Sutskever et al., 2014) including the transformer model (Vaswani et al., 2017) have achieved SoTA results in various NLG tasks such as text summarization (Shi et al., 2021) and machine translation (Vaswani et al., 2017). While these models have shown impressive results, they often require a large amount of training data and computing resources. However, only a few African languages have benefited from these advancements due to inadequate data. To address this issue, researchers have proposed transfer learningbased approaches, where a pretrained model is finetuned for a specific NLG task. Transfer learning (Raffel et al., 2020; He et al., 2022; Ruder et al., 2019) has enabled the use of low-resource languages on various NLP tasks. Due to lack of adequate (or good quality) pretraining data (Kreutzer et al., 2021), transfer learning is often the most accessible method for only a few low-resource languages leaving behind a vast majority of extremely low-resource languages. This is because about 90% of the world’s languages is claimed to be either leftbehinds, in that it is probably impossible to build NLP resources for them, or scraping-bys with no labelled datasets (Joshi et al., 2020). 
For the leftbehinds, labelled and unlabelled data are unavailable and even transfer learning approaches are beyond reach while the scraping-by languages have no labelled data with which to evaluate model performance. 2.1 Language Models Only a few African languages have benefited from the recent advancement of language models (LM) due to inadequate data sizes. We now describe encoder-decoder LMs that support NLP tasks in African languages. We describe these under two broad headings: massively multilingual models and African models. We summarize the models and African languages they cover in Table 1. Multilingual Models: The massively multilingual models such as mBART (Liu et al., 2020), MT0 (Muennighoff et al., 2022), and mT5 (Xue et al., 2021) are trained on several languages. However, in most cases, only a few African languages are represented. Among the mentioned models, mT0 is pretrained on the highest number of African languages (n=13). African Models. Adelani et al. (2022) use pretrained T5, mT5, and mBART models and develop AfriByT5, AfriMT5, AfriMBART respectively to investigate machine translation in zero-shot and out-of-domain settings. The authors experiment on 17 African languages and demonstrate that further pretraining is effective for adding new languages to pretrained models. Jude Ogundepo et al. 12799 (2022) train AfriTeVa, an encoder-decoder language model from scratch on ten African languages and English using similar training objectives like T5 model. AfriTeVa-V2 (Oladipo et al., 2023) has enhanced support for 16 African languages with improved quality pretraining data. 1 African Natural Language Understanding. Several works attempt to improve the performance on African NLU tasks by proposing multilingual and African-dedicated models such as mBERT (Devlin et al., 2019), XLM-R (Conneau et al., 2020), AfriBERTa (Ogueji et al., 2021), AfroLM (Dossou et al., 2022), Afro-XLM-R (Alabi et al., 2022), KINYaBERT (Nzeyimana and Niyongabo Rubungo, 2022), and SERENGETI (Adebara et al., 2023). 2.2 Benchmarks Multiple benchmarks have been developed for NLG. However, only a few of Africa’s 2, 000 languages have been supported to date. In most cases, the benchmarks support only the machine translation task. We provide a brief overview under two headings: African and multilingual. We summarize key information about each benchmark in Table C.1. African Benchmarks. AfroMT (Reid et al., 2021a) is a multilingual machine translation benchmark. It consists of translation tasks between English and eight African languages — Afrikaans, Xhosa, Zulu, Rundi, Sesotho, Swahili, Bemba, and Lingala. Menyo-20k (Adelani et al., 2021) is an MT evaluation benchmark for English-Yorùbá. Multilingual with African Languages. FLoRES-200 (Costa-jussà et al., 2022; Guzmán et al., 2019) is an evaluation benchmark that provides MT evaluation support in 200 languages including 52 African languages. GEM (Gehrmann et al., 2021, 2022) referenced as “living" benchmark, comprises of 40 tasks and supports 52 languages including 10 African languages. NLLB Seed Data (Costa-jussà et al., 2022) is a set of professionally-translated sentences sampled from Wikipedia. It consists of around six thousand sentences in 39 languages which include 8 African language. Similarly, NLLB Multi Domain (Costa-jussà et al., 2022) is an MT evaluation benchmark made from a set of 1After we finished our work, we became aware of a new version of AfriTeva, AfriTevaV2 (Oladipo et al., 2023) that was released only recently (December, 2023). 
We plan to evaluate AfriTevaV2 in our camera-ready version of this work. professionally-translated sentences in the news and health domains. It consists of approximately 3, 000 sentences in each domain and supports 8 languages including 2 African languages. Toxicity-200 (Costa-jussà et al., 2022) is an evaluation benchmark to evaluate the presence of toxic items in the MT text. It provides support for 50 African languages. XGLUE (Liang et al., 2020) is a cross-lingual, multi-task benchmark created with multilingual and bilingual corpora. It supports 19 languages and one African language, i.e., Swahili. 3 Cheetah 3.1 Pretraining Data We are guided by three main principles in developing this data: quality, linguistic diversity, and coverage. Quality. Developing NLP technologies for low resource languages poses a significant challenge due to the limited availability of high-quality training data. To address this issue, we undertook the task of manually curating a diverse corpus spanning multiple domains, including news articles, health documents, religious texts, legal documents, and social media feeds. This manual curation approach was necessary because there were no existing datasets available for the majority of the languages we aimed to support, and we wanted to ensure the utilization of reliable and high-quality data. Coverage. In all, we train Cheetah using a 42G multi-domain corpus across 517 African languages and language varieties. The languages are spoken in 50 of 54 African countries and they are written with five scripts. This provides support to at least 500M Africans. Linguistic Diversity. The inclusion of languages from various domains, geographical regions, and linguistic typologies, along with the utilization of reliable data sources, contributes to enhancing the robustness and quality of Cheetah. Our data consists of languages from 14 language families in Africa written in five different orthographies. Furthermore, our data spans languages with a vast array of exotic linguistic features including tone, vowel and consonant harmony, reduplication, word orders, and word classes. More details about pretraining is in Appendix A. 3.2 Implementation Details Vocabulary. We use SentencePiece (Kudo and Richardson, 2018) to encode text as WordPiece 12800 Category LM Lang/Total African Languages Families Multilingual MBART 3/50 afr, swh, yor. 2 MT0 14/101 afr, amh, hau, ibo, lin, mlg, nyj, orm, sot, 4 sna, som, swh, xho, yor, and zul MT5 12/101 afr, amh, nya, hau, ibo, mlg, sna, som, swh, xho, yor, and zul 3 African AfriVeTa 10/10 gaz, amh, Gahuza, hau, ibo, pcm, som, swa, tir, and yor. 3 AfriMT5 17/17 bam, bbj, ewe, fon, hau, ibo, lug, luo, pcm, mos, swa, tsn, twi, wol, yor, zul. 3 AfriByT5 17/17 bam, bbj, ewe, fon, hau, ibo, lug, luo, pcm, mos, swa, tsn, twi, wol, yor, zul. 3 AfriMBART 17/17 afr, amh, nya, hau, orm, som, swh, xho. 3 Cheetah 517/517 Includes 517 African languages. 14 Table 1: Comparing with available encoder-decoder models with African languages represented. Lang/Total. describe the number of African languages comparing with the covered languages in the pretrained language models. Families. describes the number of covered language families. tokens (Sennrich et al., 2016) with 250K WordPieces. We also include data covering the ten top spoken languages globally: Arabic, English, French, German, Greek, Italian, Portuguese, Russian, Spanish, and Turkish. We use Wikipedia dumps for these ten languages. We use 1M sentences for each language. 
However, we only include it in the vocabulary. Models Architecture. We pretrain Cheetah using the encoder-decoder architecture (Xue et al., 2021). Each of the encoder and decoder components is similar in size and configuration to T5, with 12 layers each with 12 attention heads, and 768 hidden units for the base model. In total, this results in a model with ∼580 million parameters. We provide further details in Table B.1. Objective. We use an unsupervised (denoising) objective. The main idea is to feed the model with masked (corrupted) versions of the original sentence, and train it to reconstruct the original sequence. The denoising objective (Xue et al., 2021) works by randomly sampling and dropping out 15% of tokens in the input sequence. All consecutive spans of dropped-out tokens are then replaced by a single sentinel token. Pretraining Procedure For pretraining Cheetah, we use a learning rate of 0.01, a batch size of 1, 024 sequences, and a maximum sequence length of 1, 024. We pretrain each model for 1M steps. We train our models on Google Cloud TPU with 128 cores (v3 −128) from TensorFlow Research Cloud (TFRC).2 4 AfroNLG Benchmark We create AfroNLG, a multi-lingual, multi-task benchmark comprising 67 test sets across six task 2https://sites.research.google/trc/about/ clusters. AfroNLG includes cloze tasks, machine translation, paraphrase, question answering, summarization, and title generation. It supports 527 languages, including 517 African languages and language varieties and the top 10 world languages. To the best of our knowledge, this is the most extensive benchmark till date for African languages. Table C.1 shows, how our benchmark compares to others. We provide the details of each task cluster and datasets in what follows. For detailed statistics about the task clusters, refer to Appendix C. Cloze Test. In order to comprehensively evaluate Cheetah across all the languages it was pretrained on, we employ cloze-tasks as our evaluation approach and perform two cloze tasks experiments. These tasks assess the model’s ability to fill in missing information. In the first cloze task, which we henceforth call mask-one, we randomly mask only one token in each sentence. In the second clozetask, which we call mask-at-least-one, we randomly mask at least one token and not more than 10% of the tokens in each sentence. For each of the 517 languages, we construct a cloze-task dataset comprising 200 data points for each language in the Train set, 100 examples for each language in the Test set, and 50 data points for each language in the Dev set. We ensure that there is no overlap between the data used for the cloze tasks and the pretraining data. We show an example of our cloze task in Figure C.1. Machine Translation. We include only datasets pertaining African languages in our benchmark. In selecting the languages for our MT benchmark, we strive to keep datasets that have been used in any published machine translation task. This allows us to cover a diverse set of languages and compare our models to existing SoTA across a large number of 12801 language pairs. Our benchmark thus contains data from Afro-MT3 (Reid et al., 2021b), Lafand-MT4 (Adelani et al., 2022), PidginUNMT5 (Ogueji and Ahia, 2019), and SALT6 (Akera et al., 2022). The datasets we consider make up 35 language pairs. Paraphrase. A paraphrase task aims to create semantically similar and fluent paraphrases given an input text (Chen et al., 2023; Palivela, 2021). 
We use the TaPaCo dataset (Scherrer, 2020) for our paraphrase generation benchmark. TaPaCo is a freely available paraphrase corpus for 73 languages extracted from the Tatoeba database. The dataset has four African languages: Afrikaans, Berber (a macro-language), Amazigh, and Kirundi. Question Answering. The QA task aims to provide answers to questions based on a knowledge base also referred to as contexts. We use TYDIA7 QA dataset (Clark et al., 2020). The dataset has a primary task and a gold passage task. In our benchmark, we only include the gold passage task, where a correct answer is predicted from a passage containing one answer, similar to the existing reading comprehension task. Summarization. Summarization is the task of generating an abridged version of a text, while capturing the salient ideas and the intended information from the original text (Nallapati et al., 2016; King et al., 2022). We use the subset of XLSum (Hasan et al., 2021), an abstractive summarization dataset, that consists of African languages including Amharic, Hausa, Igbo, Kirundi, Oromo, Pidgin, Somali, Swahili, Tigrinya, and Yorùbá. We also develop new test sets using data we crawled from the web, which are non-overlapping with XLSum. Specifically, we crawl data from BBC and Voice of Africa (webpages) for Hausa, Ndebele, and Swahili. Title Generation. The title generation task returns a single sentence title for a given article. Similar to the summarization task, we use XL-SUM to create a news title generation dataset. We also collect a new test set for title generation across 15 languages. The dataset comprises ∼6, 000 BBC and Voice of Africa articles, non-overlapping with XL-Sum, and is particularly useful for zero-shot title generation. 3https://github.com/machelreid/afromt 4https://github.com/masakhane-io/lafand-mt 5https://github.com/keleog/PidginUNMT 6https://github.com/SunbirdAI/salt 7https://github.com/google-research-datasets/tydiqa 5 Evaluation and Results We evaluate Cheetah on six task clusters of AfroNLG benchmark and compare to performance on mT0, mT5, Afri-MT5, and AfriTeVa. We report results in Table 2. For all models, we finetune on the training data split (Train) for 20 epochs with an early stopping of 5 epochs, learning-rate of 5e −5, batch size of 16, and sequence length of 512. All experiments were performed on 4 GPUs (Nvidia V100). We report the results of each experiment as an average of three runs, each with a different seed.8 We show evaluation results per language and provide information of model performance next. Cloze Test. Cheetah outperforms all other models on both cloze tasks as in Table 2. We show the results for each language that is supported by the models compared in Table D.1 and Table D.2. The performance of all models on mask-one is better than the performance on mask-at-least-one, reflecting how increasing the number of masked tokens makes the task more challenging. It is also important to mention that since evaluation is based on BLEU it does not reflect correct synonyms that each model may have generated to replace the masked tokens. Machine Translation. Cheetah sets a new SOTA on 23 tasks surpassing previous models. The mT0 and AfriTEVA models also demonstrate strong performance on six languages. Notably, pairs with French as the source language tend to yield the lowest BLEU scores, indicating relatively lower translation quality. 
On the other hand, the language pair involving English to Nigerian Pidgin, specifically on LafandMT and PidginUNMT, showcases the highest BLEU scores. We assume that the similarity between the Nigerian Pidgin and English contributes favourably to translation quality in these scenarios. We also report CHRF and CHRF++ results in Table C.4 and Table C.5 in the Appendix. Paraphrase. In the three paraphrase tasks, Cheetah demonstrates remarkable superiority over all other models. Specifically, we achieve an impressive ROUGE score of 46.0 on the Berber paraphrase task, surpassing the second-best model by a margin of approximately two points. Question Answering. In the task of question answering, mT0 exhibits superior performance compared to other models. While mT5 achieves the second-highest performance, Cheetah attains the third-highest performance in this task. 8Specifically, we use seed values 41, 1512, and 20235. 12802 Cluster Task Metric mT0 mT5 Afri-MT5 AfriTeVa Cheetah Machine Translation (MT) English →Afrikaans Bleu 20.38±0.3 12.35±1.1 7.12±2.67 7.75±1.67 19.72±0.75 English →Bemba Bleu 19.19±0.3 12.28±0.48 11.73±12.3 20.5±0.87 18.9±1.22 English →Lingala Bleu 15.98±1.16 14.12±0.56 14.32±12.74 13.88±1.04 9.64±1.11 English →Rundi Bleu 12.26±0.47 8.82±0.43 9.57±0.42 7.83±1.04 10.54±0.54 English →Sesotho Bleu 11.04±1.2 12.74±0.75 10.0±1.79 10.76±1.4 13.3±1.38 English →Swahili Bleu 10.59±1.84 9.33±0.58 3.08±0.57 7.24±0.46 11.08±0.61 English →Xhosa Bleu 10.04±0.98 8.25±0.7 3.86±1.35 7.5±0.32 12.34±0.51 English →Zulu Bleu 17.65±1.86 17.97±1.69 1.9±1.11 13.45±1.81 19.49±1.16 English →Hausa Bleu 5.06±0.21 4.96±0.16 0.85±0.04 7.32±0.00 9.22±0.08 English →Igbo Bleu 13.05±0.17 11.57±0.23 1.12±0.09 12.34±0.23 16.75±0.26 English →Luganda Bleu 2.17±2.77 3.33±0.35 0.09±0.01 4.21±0.77 9.75±0.01 English →N. Pidgin Bleu 33.17±0.28 32.65±0.19 2.39±0.23 9.39±0.18 32.64±0.14 English →Swahili Bleu 22.04±2.89 23.2±0.23 2.79±0.08 22.39±0.28 28.11±0.14 English →Zulu Bleu 6.83±0.29 0.58±1.37 0.4±0.03 4.45±0.37 11.75±0.38 English →Twi Bleu 3.4±0.12 1.23±0.03 0.03±0.0 1.68±0.94 4.64±0.13 English →Yoruba Bleu 5.42±0.85 2.58±3.1 0.04±0.0 3.63±4.01 7.83±0.14 English →Zulu Bleu 10.28±0.49 1.31±2.26 0.14±0.03 3.8±4.2 12.13±0.1 French →Bambara Bleu 2.0±2.6 0.37±0.19 0.15±0.01 3.18±0.18 3.06±0.27 French →Ghomálá’ Bleu 0.4±0.09 0.33±0.01 0.07±0.0 0.96±0.01 0.28±0.25 French →Ewe Bleu 0.7±0.35 0.31±0.36 0.09±0.07 0.84±0.16 3.47±0.03 French →Fon Bleu 0.69±0.31 0.8±0.13 1.52±0.06 1.73±0.53 1.29±0.16 French →Moore Bleu 0.27±0.06 0.12±0.05 0.19±0.02 0.47±0.04 1.66±0.86 French →Wolof Bleu 4.02±0.12 0.3±0.05 0.11±0.01 3.08±0.25 3.01±0.07 English →N. 
Pidgin (UNMT) Bleu 27.44±0.26 23.42±1.61 7.05±1.37 22.54±0.84 26.56±0.04 Acholi →English Bleu 16.41±0.08 11.16±4.77 4.9±0.11 8.37±8.12 19.33±0.1 Acholi →Lugbara Bleu 2.57±0.21 1.48±1.31 2.44±0.37 8.29±0.14 7.21±0.69 Acholi →Luganda Bleu 3.64±0.07 1.74±0.12 0.92±0.01 5.53±0.34 8.03±0.38 Acholi →Nyankore Bleu 2.17±0.14 0.79±0.51 0.46±0.03 4.26±0.54 5.1±0.14 Acholi →Ateso Bleu 1.64±2.34 1.94±0.25 4.9±0.11 7.74±0.33 6.33±0.6 English →Lugbara Bleu 6.19±6.33 8.38±0.49 5.93±0.22 10.95±0.32 11.61±0.28 English →Luganda Bleu 12.08±0.03 10.58±0.25 2.59±0.73 12.41±0.35 17.12±0.16 English →Nyankore Bleu 6.46±0.08 5.69±0.02 1.4±0.39 7.88±0.18 9.04±0.24 English →Ateso (salt) Bleu 10.24±0.06 8.28±0.19 4.91±0.59 11.64±0.49 11.12±0.38 Lugbara →Ateso Bleu 2.21±0.35 1.5±0.2 2.22±0.15 6.67±0.32 3.68±0.31 Luganda →Lugbara Bleu 3.96±0.57 2.61±0.12 3.44±0.32 8.05±0.23 7.99±0.47 Luganda →Ateso Bleu 4.47±0.08 3.01±0.16 2.5±0.22 8.17±0.18 8.13±0.33 Nyankore →Lugbara Bleu 3.45±0.29 2.1±0.32 2.6±0.29 7.5±0.09 7.29±0.09 Nyankore →Luganda Bleu 8.54±0.17 6.91±0.23 2.01±0.25 6.77±6.73 6.25±10.26 Nyankore →Ateso Bleu 3.33±0.11 2.25±0.23 2.12±0.4 6.27±0.12 6.36±0.4 Paraphrase Multilingual Bleu 41.79±0.28 41.75±0.21 34.72±0.51 43.02±1.25 43.23±0.09 Berber Bleu 44.84±0.31 44.03±0.24 36.08±0.83 **46.41±0.71 46.0±0.27 Kabyle Bleu 25.91±0.13 25.32±0.46 11.56±0.73 16.06±14.79 26.27±0.56 Question Answering QA Swahili F1 79.84±0.19 72.04±0.54 0 62.64±0.78 71.98±1.18 Summarization Multilingual RougeL 22.31±0.12 22.23±0.04 5.34±0.48 18.97±0.06 24.86±0.02 Amharic RougeL 13.81±0.04 13.09±0.03 4.4±1.07 8.29±0.51 15.09±0.1 Igbo RougeL 18.9±0.73 13.22±0.46 14.24±0.39 16.05±0.49 17.36±0.43 Oromo RougeL 11.28±0.03 10.51±0.07 3.52±0.49 7±1.73 14.53±0.1 Rundi RougeL 19.63±0.01 18.02±0.13 11.82±0.39 16.13±0.03 22.57±0.04 Swahili RougeL 26.38±0.02 24.81±0.11 15.07±0.17 21.59±0.13 29.05±0.13 Yoruba RougeL 21.57±0.05 20.06±0.12 13.52±0.18 17.3±0.11 22.49±0.0 Hausa RougeL 26.46±0.06 25.76±0.02 19.96±0.26 25.19±0.11 30.07±0.31 Nigerian Pidgin RougeL 26.54±0.05 25.79±0.1 14.28±1.23 20.29±0.12 27.08±0.02 Somali RougeL 20.69±0.08 19.21±0.06 13.62±0.81 19.27±0.18 23.92±0.04 Tigrinya RougeL 15.84±0.13 13.93±0.11 6.53±0.42 10.07±0.09 16.88±0.12 Title Generation Multilingual Bleu 6.53±0.02 6.65±0.08 0.1±0.02 5.2±0.02 7.52±0.07 Amharic Bleu 3.13±0.23 2.65±0.68 0.34±0.14 2.31±0.14 4.34±0.34 Igbo Bleu 6.95±0.13 6.9±0.22 0.77±0.12 4.61±0.14 8.47±0.07 Oromo Bleu 1.1±1.84 2.66±0.19 0.21±0.06 1.54±0.17 3.26±0.21 Rundi Bleu 4.4±0.28 4.13±0.22 0.84±0.07 3.33±0.23 6.05±0.5 Swahili Bleu 9.1±0.23 9.31±0.11 1.22±0.09 7.01±0.09 10.59±0.6 Yoruba Bleu 6.8±0.16 7.23±0.59 0.34±0.05 5.04±2.0 7.97±0.32 Hausa Bleu 8.11±0.24 7.3±0.34 2.59±0.01 6.69±0.18 8.48±0.23 Nigerian Pidgin Bleu 6.75±0.6 3.96±4.3 0.89±0.02 4.72±0.84 6.22±0.28 Somali Bleu 3.37±0.21 3.31±0.16 0.38±0.11 2.82±0.47 5.25±0.14 Tigrinya Bleu 2.99±0.1 2.94±1.09 0.7±0.18 1.92±0.26 5.1±0.05 Cloze-task Mask-one Bleu 13.61±0.91 8.18±3.94 0.00±0.00 8.36±3.42 13.98±0.32 Mask-at-least-one Bleu 2.36±0.11 2.66±0.09 0.93±0.12 0.68±0.09 7.07±0.09 AfroNLG Score 12.56 11.05 5.15 10.84 14.25 Table 2: Average performance of finetuned African and multilingual models across three runs on AfroLNG benchmark test sets. 12803 Summarization. Cheetah sets a new SOTA on 11 languages, outperforming other models by an average margin of at least three points. Detailed results can be found in Table 2. Title Generation. On the Title generation task, Cheetah sets a new SOTA on 11 languages. We report results in Table 2. 
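As a point of reference for the finetuning recipe summarized at the start of this section (20 epochs, early stopping with a patience of 5 epochs, learning rate 5e−5, batch size 16, sequence length 512, and three seeds), the sketch below shows how such a setup can be expressed with the Hugging Face Seq2SeqTrainer. It is a minimal illustration rather than the exact training scripts used for the results in Table 2; the checkpoint name and dataset objects are placeholders.

```python
# Minimal finetuning sketch mirroring the setup in Section 5 (illustrative only;
# the model name and datasets are placeholders, not the exact Cheetah training code).
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          EarlyStoppingCallback, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

SEEDS = [41, 1512, 20235]        # three runs; scores are averaged as in Table 2
MODEL_NAME = "google/mt5-base"   # placeholder; swap in mT0, AfriTeVa, Cheetah, etc.

def finetune_one_seed(train_dataset, dev_dataset, seed):
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)
    args = Seq2SeqTrainingArguments(
        output_dir=f"checkpoints-seed{seed}",
        num_train_epochs=20,               # 20 epochs, cut short by early stopping
        learning_rate=5e-5,
        per_device_train_batch_size=16,
        evaluation_strategy="epoch",
        save_strategy="epoch",
        load_best_model_at_end=True,       # required by EarlyStoppingCallback
        metric_for_best_model="eval_loss",
        seed=seed,
    )
    trainer = Seq2SeqTrainer(
        model=model,
        args=args,
        train_dataset=train_dataset,       # examples tokenized to max length 512
        eval_dataset=dev_dataset,
        tokenizer=tokenizer,
        callbacks=[EarlyStoppingCallback(early_stopping_patience=5)],
    )
    trainer.train()
    return trainer
```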
5.1 Investigating linguistic capabilities

In order to further test the utility of our models, we use grammar templates to construct test data in English. We use nine linguistic rules and 19 lexical items to generate 152 sentences. Next, we use the finetuned models to translate from source to target and manually evaluate the quality of the generated outputs. We design new evaluation metrics, faithfulness and fluency, for the manual evaluation (see Section 5.2). A detailed description follows.

Grammar templates. We use grammar templates (McCoy et al., 2019) developed with context-free grammars (CFGs) on the source side to construct controlled test sets in English. We use CFGs on the source side alone because constituents and constituent order differ across languages. We adopt this method for two reasons. First, utilizing grammar templates provides a standardized framework that ensures that the same grammatical phenomena are tested consistently. By employing a uniform approach, we can effectively isolate and evaluate specific linguistic features, facilitating a more rigorous and meaningful comparison of language model performance. Second, grammar templates exhibit a high degree of flexibility, allowing for easy modification and extension to encompass a wide range of linguistic phenomena. This adaptability not only facilitates the incorporation of new linguistic features but also enables our test sets to evolve with the dynamic landscape of natural language processing research. Other alternatives to templates include using parsed corpora (Bender et al., 2011) or naturally occurring sentences. For the languages we explore, there are no good-quality parsers, making automatic parsing inaccessible for this analysis. Furthermore, when a corpus is parsed automatically, the likelihood of encountering parsing errors escalates with the intricacy of the sentence structure (Bender et al., 2011; Marvin and Linzen, 2018). Conversely, if the test set exclusively comprises sentences with accurate gold parses, sourcing an ample quantity of instances showcasing syntactic complexities becomes an arduous task (Marvin and Linzen, 2018). Furthermore, the utilization of naturally occurring sentences introduces potential complications that might confound the interpretation of experiments (Ettinger et al., 2018). The templates include transitive and intransitive structures, negative and affirmative structures, and structures with gender and number. Table E.1 provides examples of generated sentences using the templates; a small illustrative grammar is also sketched at the end of this subsection. The entire generated grammar is available in our GitHub repository.9

9 generatedsentences

Inference. We test our finetuned machine translation models on three language pairs using the generated dataset. This allows us to evaluate how much linguistic information the models have acquired during pretraining and finetuning. Specifically, we use the English→Hausa, English→Swahili, and English→Yorùbá models based on mT0, mT5, AfriTeVa, and Cheetah that were finetuned on the Lafand-MT dataset. We do not include Afri-MT5 in this analysis because it has very low scores across several tasks, as shown in Table 2. Notably, Hausa, Swahili, and Yorùbá have distinct typologies, and the performance of each model on each language gives further insight into performance across varying typological features. Table E.2 shows a number of linguistic differences between the three languages. This method can be generalized to any African language.
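To make the template-based construction concrete, the toy context-free grammar below, written with NLTK, generates English source sentences in the spirit of the templates described above (transitive/intransitive, affirmative/negative). The rules and lexical items are illustrative assumptions, not the exact nine rules and 19 lexical items used to produce the 152 test sentences.

```python
# Toy CFG for generating controlled English source sentences (illustrative only;
# not the exact grammar used to build the 152-sentence test set).
from nltk import CFG
from nltk.parse.generate import generate

grammar = CFG.fromstring("""
S       -> NP_SUBJ VP
VP      -> V_I | V_I_NEG | V_T NP_OBJ | V_T_NEG NP_OBJ
NP_SUBJ -> 'Sara' | 'Fred'
NP_OBJ  -> 'Lagos' | 'them'
V_I     -> 'left' | 'slept'
V_I_NEG -> 'did' 'not' 'leave' | 'did' 'not' 'sleep'
V_T     -> 'left' | 'treated'
V_T_NEG -> 'did' 'not' 'leave' | 'did' 'not' 'treat'
""")

sentences = [" ".join(tokens) for tokens in generate(grammar)]
print(len(sentences), "sentences, e.g.:", sentences[:3])
```

Each generated English sentence is then translated by every finetuned model, and the outputs are scored manually as described in Section 5.2.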
5.1.1 Linguistic Details

Morphology. Morphologically, both Hausa and Swahili are classified as agglutinative languages (Jaggar, 2017; Dryer and Haspelmath, 2013), characterized by the systematic addition of prefixes, suffixes, and affixes to root words or stems. This process imparts precise grammatical meanings, encompassing tense, case, mood, person, number, and more. Conversely, Yorùbá exhibits an analytic structure, relying on word order and discrete function words to denote grammatical relationships, with minimal use of inflections or affixes. The following are examples from the generated data for (1) Hausa, (2) Swahili, and (3) Yorùbá, respectively.

(1) a. Bai barshi ba
       neg.masculine leave at-all
       'he did not leave him'
    b. Bata barshi ba
       neg.feminine leave at-all
       'she did not leave him'

(2) a. Ha-ku-mu-a-cha
       3pl.sg.sub-neg-3pl.sg.obj-leave
       'He did not leave him'
    b. Ha-ku-mu-a-cha
       3pl.sg.sub-neg-3pl.sg.obj-leave
       'She did not leave him'

(3) a. Òhun ò kúrò lÓ̩dÒ̩ è̩
       3pl.sg.sub neg leave from 3pl.sg.obj
       'He did not leave him'
    b. Òhun ò kúrò lÓ̩dÒ̩ è̩
       3pl.sg.sub neg leave from 3pl.sg.obj
       'She did not leave him'

Phonology. In terms of phonology, Yorùbá and Hausa are tonal languages, where pitch distinctions contribute to word differentiation. However, Hausa features a relatively simpler tone system compared to Yorùbá, and in most cases tone is not marked in Hausa orthography; only dictionaries and pedagogical materials indicate tone in text. Yorùbá, on the other hand, has three tones, and indicating tones in the orthography significantly reduces ambiguity (Adebara and Abdul-Mageed, 2022). Swahili, in contrast, is devoid of tones altogether.

5.2 Human evaluation

To evaluate the effectiveness of each model across different languages, we assess the generated output's faithfulness and fluency using a five-point Likert scale. Faithfulness measures how accurately a model's output corresponds to the input sentence, while fluency assesses the grammatical coherence and plausibility of the generated output. We use both metrics because a model can produce coherent output that is not faithful to the input sentence. This way, while faithfulness penalizes a model for outputs that are not true to the input or that include additional unnecessary information, fluency complements the evaluation by crediting output that is nevertheless well formed. For each grammar category, we report the average Likert score for each language and across the different models.

5.3 Annotation

We annotated each model's output for faithfulness and fluency. For Hausa and Yorùbá, two expert annotators evaluated the model's output for faithfulness and fluency. We ensured that each annotator has native-speaker competency in reading and writing (and some had a linguistics background). We gave specific annotation instructions (see Section E in the Appendix) to ensure the values are not assigned arbitrarily. We also ensured that the annotators did not know who created which model, to prevent any biases. We report the Cohen's Kappa agreement scores in Table E.3 (Appendix). For Swahili, only one annotator made it to the final annotation task, since we could not acquire high-quality annotations from other annotators. The Swahili annotator who did the final annotation is a university lecturer with a Ph.D. in linguistics.

5.4 Fluency and Faithfulness Performance

We report the distribution of faithfulness and fluency scores across all models and languages in Figure 2.
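The aggregation described in Section 5.2, an average Likert score per grammar category, language, and model, amounts to a simple grouped mean over the annotations. The sketch below illustrates one way to compute it; the record fields and ratings are hypothetical.

```python
# Sketch of averaging Likert ratings per (model, language, grammar category);
# field names and values are hypothetical, for illustration only.
from collections import defaultdict
from statistics import mean

ratings = [  # one record per annotated model output
    {"model": "Cheetah", "lang": "hau", "category": "intransitive",
     "faithfulness": 5, "fluency": 4},
    {"model": "mT0", "lang": "hau", "category": "intransitive",
     "faithfulness": 3, "fluency": 4},
    # ... one entry for every annotated output
]

def average_scores(records, metric):
    groups = defaultdict(list)
    for r in records:
        groups[(r["model"], r["lang"], r["category"])].append(r[metric])
    return {key: mean(values) for key, values in groups.items()}

print(average_scores(ratings, "faithfulness"))
print(average_scores(ratings, "fluency"))
```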
Figure 2: Faithfulness and fluency for Hausa, Swahili, and Yorùbá

Overall, Cheetah produces more faithful and more fluent outputs than other models on all languages. We now go on to provide a brief analysis of model performance. More details are in Appendix F.

Intransitives. In the case of Hausa examples, all three models can generate intransitive sentences with varying levels of fluency and faithfulness, with Cheetah outperforming the other models. In the context of Swahili, errors primarily relate to tense, possibly because Swahili has an agglutinative structure and the models may lack exposure to a comprehensive range of grammatical features during training. In the case of Yorùbá, all models consistently incorporate at least one object in each intransitive case. This could be because intransitive sentences in Yorùbá lack a clear direct object, making it challenging for machine translation models to select the accurate translation. Additionally, some intransitive phrases in Yorùbá can be polysemous, further complicating the translation process. We report the distribution of scores in Figure G.1.

Transitives. Cheetah demonstrates the capability to provide three distinct semantic senses for the polysemous transitive verb "treated", whereas the other models typically produce only a single semantic interpretation. For Swahili, certain instances exhibit the deletion or simplification of object markers in an ungrammatical manner. Figure G.2 shows the distribution of model performance on transitives.

Negative. mT0, mT5, and AfriTeVa have a tendency to output the negation of the antonym of the verb in each sentence rather than the negation of the verb. Cheetah also makes similar mistakes about 5% of the time.

Affirmative. The models generally perform better on the affirmative examples than on the negated examples. However, in the context of Hausa, mT5, mT0, and AfriTeVa consistently output the antonym of the negated verb. For instance, the models return "Sara left" rather than "Sara did not leave". In the Swahili examples, we also find cases of double negation (which is not grammatically correct in Swahili). We show the distribution of results in Figure G.5 and Figure G.4.

Gender/Agreement. Yorùbá does not distinguish gender, yet Cheetah uses Arábìrin (female) before every occurrence of the name "Sara" to indicate that the name has a high probability of being feminine (see Figure G.3). However, "Fred" is not annotated this way. For Hausa, which requires agreement between the gender of the noun and the verb, we find Cheetah outperforming both mT0 and mT5 significantly. AfriTeVa, however, has very low accuracy in the context of gender. Furthermore, mT0, mT5, and Cheetah return connotations of love and relationships for each example where a male and a female pronoun co-occur, cross-lingually.

6 Conclusion

We introduced Cheetah, a massively multilingual language model designed for African natural language generation. We also propose a new African language generation benchmark, dubbed AfroNLG, that is both sizeable and diverse. We evaluate Cheetah on AfroNLG, comparing it to four other models: two massively multilingual models and two dedicated to African languages. The performance of Cheetah surpasses that of all other models we evaluate. This is demonstrated by its AfroNLG score, which is the highest among all the models we compare (Table 2). Furthermore, Cheetah outperforms all other models across 48 out of 65 test sets spanning six task clusters.
We further analyze our model's robustness to lexical complexity and carry out a human evaluation to inspect the model's performance on a controlled test set. Again, our results underscore the superiority of our model.

7 Limitations

We identify the following limitations for our work:

1. The limitations of our language model include the limited scope of our evaluation. Future work should focus on increasing the subset of languages evaluated manually in order to ensure quality. We believe automatic analyses are not sufficient for the development of models that get deployed in particular applications.

2. Another limitation is related to our inability to perform extensive analysis of biases and hateful speech present in our pretraining data. Again, this is due to relatively restricted access to native speakers (and even automated tools) to perform this analysis. As a result, we cannot fully ensure that our models are free from biases and socially undesirable effects. Therefore, it is important that these models be used with care and caution, and be analyzed for biases and socially undesirable effects before use.

3. Additionally, due to the unavailability of sufficient computing resources, we were unable to evaluate larger multilingual language models.

8 Ethics Statement and Wider Impacts

Cheetah aligns with Afrocentric NLP, where the needs of African people are taken into consideration when developing technology. We believe Cheetah will not only be useful to speakers of the languages supported, but also to researchers of African languages such as anthropologists and linguists. We discuss below some use cases for Cheetah and offer a number of broad impacts.

1. Cheetah aims to address the lack of access to technology in about 90% of the world's languages, which automatically discriminates against native speakers of those languages. More precisely, it does so by focusing on Africa. To the best of our knowledge, Cheetah is the first massively multilingual PLM developed for African languages and language varieties. A model with knowledge of 517 African languages is by far the largest to date for African NLP.

2. Cheetah enables improved access to important information for the African community in Indigenous African languages. This is especially beneficial for people who may not be fluent in other languages. This will potentially connect more people globally.

3. Cheetah affords opportunities for language preservation for many African languages. To the best of our knowledge, Cheetah covers languages that have not been used for any NLP task until now. We believe that it can help encourage continued use of these languages in several domains, as well as trigger future development of language technologies for many of these languages.

4. Although LMs are useful for a wide range of applications, they can also be misused. Cheetah is developed using publicly available datasets that may carry biases. Although we strive to perform analyses and diagnostic case studies to probe the performance of our models, our investigations are by no means comprehensive, nor do they guarantee the absence of bias in the data. In particular, we do not have access to native speakers of most of the languages covered. This hinders our ability to investigate samples from each (or at least the majority) of the languages.

5. We emphasize the ethical use of these models.
Acknowledgments We acknowledge support from Canada Research Chairs (CRC), the Natural Sciences and Engineering Research Council of Canada (NSERC; RGPIN2018-04267), the Social Sciences and Humanities Research Council of Canada (SSHRC; 4352018-0576; 895-2020-1004; 895-2021-1008), Canadian Foundation for Innovation (CFI; 37771), Digital Research Alliance of Canada,10 and UBC Advanced Research Computing-Sockeye.11 References Julien Abadji, Pedro Javier Ortiz Suárez, Laurent Romary, and Benoît Sagot. 2021. Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus. Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-9) 2021. Limerick, 12 July 2021 (Online-Event), pages 1 – 9, Mannheim. LeibnizInstitut für Deutsche Sprache. Ife Adebara and Muhammad Abdul-Mageed. 2022. Towards afrocentric NLP for African languages: Where we are and where we can go. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10https://alliancecan.ca 11https://arc.ubc.ca/ubc-arc-sockeye 12807 3814–3841, Dublin, Ireland. Association for Computational Linguistics. Ife Adebara, AbdelRahim Elmadany, Muhammad Abdul-Mageed, and Alcides Alcoba Inciarte. 2023. Serengeti: Massively multilingual language models for africa. David Adelani, Jesujoba Alabi, Angela Fan, Julia Kreutzer, Xiaoyu Shen, Machel Reid, Dana Ruiter, Dietrich Klakow, Peter Nabende, Ernie Chang, Tajuddeen Gwadabe, Freshia Sackey, Bonaventure F. P. Dossou, Chris Emezue, Colin Leong, Michael Beukman, Shamsuddeen Muhammad, Guyo Jarso, Oreen Yousuf, Andre Niyongabo Rubungo, Gilles Hacheme, Eric Peter Wairagala, Muhammad Umair Nasir, Benjamin Ajibade, Tunde Ajayi, Yvonne Gitau, Jade Abbott, Mohamed Ahmed, Millicent Ochieng, Anuoluwapo Aremu, Perez Ogayo, Jonathan Mukiibi, Fatoumata Ouoba Kabore, Godson Kalipe, Derguene Mbaye, Allahsera Auguste Tapo, Victoire Memdjokam Koagne, Edwin Munkoh-Buabeng, Valencia Wagner, Idris Abdulmumin, Ayodele Awokoya, Happy Buzaaba, Blessing Sibanda, Andiswa Bukula, and Sam Manthalu. 2022. A few thousand translations go a long way! leveraging pre-trained models for African news translation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3053–3070, Seattle, United States. Association for Computational Linguistics. David Adelani, Dana Ruiter, Jesujoba Alabi, Damilola Adebonojo, Adesina Ayeni, Mofe Adeyemi, Ayodele Esther Awokoya, and Cristina España-Bonet. 2021. The effect of domain and diacritics in Yoruba– English neural machine translation. In Proceedings of Machine Translation Summit XVIII: Research Track, pages 61–75, Virtual. Association for Machine Translation in the Americas. Benjamin Akera, Jonathan Mukiibi, Lydia Sanyu Naggayi, Claire Babirye, Isaac Owomugisha, Solomon Nsumba, Joyce Nakatumba-Nabende, Engineer Bainomugisha, Ernest Mwebaze, and John Quinn. 2022. Machine translation for african languages: Community creation of datasets and models in uganda. Jesujoba O. Alabi, David Ifeoluwa Adelani, Marius Mosbach, and Dietrich Klakow. 2022. Adapting pretrained language models to African languages via multilingual adaptive fine-tuning. In Proceedings of the 29th International Conference on Computational Linguistics, pages 4336–4349, Gyeongju, Republic of Korea. International Committee on Computational Linguistics. Tilman Becker. 2002. 
Practical, template–based natural language generation with TAG. In Proceedings of the Sixth International Workshop on Tree Adjoining Grammar and Related Frameworks (TAG+6), pages 80–83, Universitá di Venezia. Association for Computational Linguistics. Emily M. Bender, Dan Flickinger, Stephan Oepen, and Yi Zhang. 2011. Parser evaluation over local and non-local deep dependencies in a large corpus. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 397– 408, Edinburgh, Scotland, UK. Association for Computational Linguistics. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS’20, Red Hook, NY, USA. Curran Associates Inc. Samuel Cahyawijaya, Genta Indra Winata, Bryan Wilie, Karissa Vincentio, Xiaohong Li, Adhiguna Kuncoro, Sebastian Ruder, Zhi Yuan Lim, Syafri Bahar, Masayu Khodra, Ayu Purwarianti, and Pascale Fung. 2021. IndoNLG: Benchmark and resources for evaluating Indonesian natural language generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8875–8898, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Jiaao Chen, Derek Tam, Colin Raffel, Mohit Bansal, and Diyi Yang. 2023. An Empirical Survey of Data Augmentation for Limited Data Learning in NLP. Transactions of the Association for Computational Linguistics, 11:191–211. Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020. TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages. Transactions of the Association for Computational Linguistics, 8:454–470. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440– 8451, Online. Association for Computational Linguistics. Marta R Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, et al. 2022. No language left behind: Scaling human-centered machine translation. arXiv preprint arXiv:2207.04672. 12808 Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics. Bonaventure F. P. Dossou, Atnafu Lambebo Tonja, Oreen Yousuf, Salomey Osei, Abigail Oppong, Iyanuoluwa Shode, Oluwabusayo Olufunke Awoyomi, and Chris Chinenye Emezue. 2022. 
Afrolm: A self-active learning-based multilingual pretrained language model for 23 african languages. Matthew S. Dryer and Martin Haspelmath, editors. 2013. WALS Online. Max Planck Institute for Evolutionary Anthropology, Leipzig. Ondˇrej Dušek and Filip Jurˇcíˇcek. 2015. Training a natural language generator from unaligned data. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 451–461, Beijing, China. Association for Computational Linguistics. David M Eberhard, F Simons Gary, and Charles D Fennig (eds). 2021. Ethnologue: Languages of the world. Twenty-fourth edition, Dallas, Texas: SIL International. Allyson Ettinger, Ahmed Elgohary, Colin Phillips, and Philip Resnik. 2018. Assessing composition in sentence vector representations. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1790–1801, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Sebastian Gehrmann, Tosin Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Anuoluwapo Aremu, Antoine Bosselut, Khyathi Raghavi Chandu, Miruna-Adriana Clinciu, Dipanjan Das, Kaustubh Dhole, Wanyu Du, Esin Durmus, Ondˇrej Dušek, Chris Chinenye Emezue, Varun Gangal, Cristina Garbacea, Tatsunori Hashimoto, Yufang Hou, Yacine Jernite, Harsh Jhamtani, Yangfeng Ji, Shailza Jolly, Mihir Kale, Dhruv Kumar, Faisal Ladhak, Aman Madaan, Mounica Maddela, Khyati Mahajan, Saad Mahamood, Bodhisattwa Prasad Majumder, Pedro Henrique Martins, Angelina McMillan-Major, Simon Mille, Emiel van Miltenburg, Moin Nadeem, Shashi Narayan, Vitaly Nikolaev, Andre Niyongabo Rubungo, Salomey Osei, Ankur Parikh, Laura Perez-Beltrachini, Niranjan Ramesh Rao, Vikas Raunak, Juan Diego Rodriguez, Sashank Santhanam, João Sedoc, Thibault Sellam, Samira Shaikh, Anastasia Shimorina, Marco Antonio Sobrevilla Cabezudo, Hendrik Strobelt, Nishant Subramani, Wei Xu, Diyi Yang, Akhila Yerukola, and Jiawei Zhou. 2021. The GEM benchmark: Natural language generation, its evaluation and metrics. In Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021), pages 96–120, Online. Association for Computational Linguistics. Sebastian Gehrmann, Abhik Bhattacharjee, Abinaya Mahendiran, Alex Wang, Alexandros Papangelis, Aman Madaan, Angelina Mcmillan-major, Anna Shvets, Ashish Upadhyay, Bernd Bohnet, Bingsheng Yao, Bryan Wilie, Chandra Bhagavatula, Chaobin You, Craig Thomson, Cristina Garbacea, Dakuo Wang, Daniel Deutsch, Deyi Xiong, Di Jin, Dimitra Gkatzia, Dragomir Radev, Elizabeth Clark, Esin Durmus, Faisal Ladhak, Filip Ginter, Genta Indra Winata, Hendrik Strobelt, Hiroaki Hayashi, Jekaterina Novikova, Jenna Kanerva, Jenny Chim, Jiawei Zhou, Jordan Clive, Joshua Maynez, João Sedoc, Juraj Juraska, Kaustubh Dhole, Khyathi Raghavi Chandu, Laura Perez Beltrachini, Leonardo F . R. Ribeiro, Lewis Tunstall, Li Zhang, Mahim Pushkarna, Mathias Creutz, Michael White, Mihir Sanjay Kale, Moussa Kamal Eddine, Nico Daheim, Nishant Subramani, Ondrej Dusek, Paul Pu Liang, Pawan Sasanka Ammanamanchi, Qi Zhu, Ratish Puduppully, Reno Kriz, Rifat Shahriyar, Ronald Cardenas, Saad Mahamood, Salomey Osei, Samuel Cahyawijaya, Sanja Štajner, Sebastien Montella, Shailza Jolly, Simon Mille, Tahmid Hasan, Tianhao Shen, Tosin Adewumi, Vikas Raunak, Vipul Raheja, Vitaly Nikolaev, Vivian Tsai, Yacine Jernite, Ying Xu, Yisi Sang, Yixin Liu, and Yufang Hou. 2022. 
GEMv2: Multilingual NLG benchmarking in a single line of code. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 266–281, Abu Dhabi, UAE. Association for Computational Linguistics. Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc’Aurelio Ranzato. 2019. The FLORES evaluation datasets for low-resource machine translation: Nepali–English and Sinhala– English. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6098–6111, Hong Kong, China. Association for Computational Linguistics. Tahmid Hasan, Abhik Bhattacharjee, Md. Saiful Islam, Kazi Mubasshir, Yuan-Fang Li, Yong-Bin Kang, M. Sohel Rahman, and Rifat Shahriyar. 2021. XLsum: Large-scale multilingual abstractive summarization for 44 languages. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4693–4703, Online. Association for Computational Linguistics. Junxian He, Chunting Zhou, Xuezhe Ma, Taylor BergKirkpatrick, and Graham Neubig. 2022. Towards a unified view of parameter-efficient transfer learning. In International Conference on Learning Representations. 12809 Philip J. Jaggar. 2017. The Hausa “Grade 5” verb: Morphosyntactic preliminaries, 1 edition, pages 18– 27. Harrassowitz Verlag. Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6282–6293, Online. Association for Computational Linguistics. Odunayo Jude Ogundepo, Akintunde Oladipo, Mofetoluwa Adeyemi, Kelechi Ogueji, and Jimmy Lin. 2022. AfriTeVA: Extending ?small data? pretraining approaches to sequence-to-sequence models. In Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing, pages 126–135, Hybrid. Association for Computational Linguistics. Daniel King, Zejiang Shen, Nishant Subramani, Daniel S. Weld, Iz Beltagy, and Doug Downey. 2022. Don’t say what you don’t know: Improving the consistency of abstractive summarization by constraining beam search. In Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM), pages 555–571, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics. Julia Kreutzer, Isaac Caswell, Lisa Wang, Ahsan Wahab, Daan van Esch, Nasanbayar Ulzii-Orshikh, Allahsera Tapo, Nishant Subramani, Artem Sokolov, Claytone Sikasote, Monang Setyawan, Supheakmungkol Sarin, Sokhar Samb, Benoît Sagot, Clara Rivera, Annette Rios, Isabel Papadimitriou, Salomey Osei, Pedro Ortiz Suárez, Iroro Orife, Kelechi Ogueji, Andre Niyongabo Rubungo, Toan Q. Nguyen, Mathias Müller, André Müller, Shamsuddeen Hassan Muhammad, Nanda Muhammad, Ayanda Mnyakeni, Jamshidbek Mirzakhalov, Tapiwanashe Matangira, Colin Leong, Nze Lawson, Sneha Kudugunta, Yacine Jernite, Mathias Jenny, Orhan Firat, Bonaventure F. P. Dossou, Sakhile Dlamini, Nisansa de Silva, Sakine Çabuk Ballı, Stella Biderman, Alessia Battisti, Ahmed Baruwa, Ankur Bapna, Pallavi Baljekar, Israel Abebe Azime, Ayodele Awokoya, Duygu Ataman, Orevaoghene Ahia, Oghenefego Ahia, Sweta Agrawal, and Mofetoluwa Adeyemi. 2021. Quality at a glance: An audit of web-crawled multilingual datasets. arXiv preprint arXiv:2103.12028. 
Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66–71, Brussels, Belgium. Association for Computational Linguistics. Aman Kumar, Himani Shrotriya, Prachi Sahu, Amogh Mishra, Raj Dabre, Ratish Puduppully, Anoop Kunchukuttan, Mitesh M. Khapra, and Pratyush Kumar. 2022. IndicNLG benchmark: Multilingual datasets for diverse NLG tasks in Indic languages. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 5363–5394, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Xiao Li, Kees van Deemter, and Chenghua Lin. 2016. Statistics-based lexical choice for NLG from quantitative information. In Proceedings of the 9th International Natural Language Generation conference, pages 104–108, Edinburgh, UK. Association for Computational Linguistics. Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fenfei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, Xiaodong Fan, Ruofei Zhang, Rahul Agrawal, Edward Cui, Sining Wei, Taroon Bharti, Ying Qiao, Jiun-Hung Chen, Winnie Wu, Shuguang Liu, Fan Yang, Daniel Campos, Rangan Majumder, and Ming Zhou. 2020. XGLUE: A new benchmark dataset for cross-lingual pre-training, understanding and generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6008–6018, Online. Association for Computational Linguistics. Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726–742. Rebecca Marvin and Tal Linzen. 2018. Targeted syntactic evaluation of language models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192–1202, Brussels, Belgium. Association for Computational Linguistics. Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428–3448, Florence, Italy. Association for Computational Linguistics. Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng-Xin Yong, Hailey Schoelkopf, Xiangru Tang, Dragomir Radev, Alham Fikri Aji, Khalid Almubarak, Samuel Albanie, Zaid Alyafeai, Albert Webson, Edward Raff, and Colin Raffel. 2022. Crosslingual generalization through multitask finetuning. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Ça˘glar Gulçehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pages 280–290, Berlin, Germany. Association for Computational Linguistics. 12810 Antoine Nzeyimana and Andre Niyongabo Rubungo. 2022. KinyaBERT: a morphology-aware Kinyarwanda language model. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5347– 5363, Dublin, Ireland. Association for Computational Linguistics. Kelechi Ogueji and Orevaoghene Ahia. 2019. 
Pidginunmt: Unsupervised neural machine translation from west african pidgin to english. arXiv preprint arXiv:1912.03444. Kelechi Ogueji, Yuxin Zhu, and Jimmy Lin. 2021. Small data? no problem! exploring the viability of pretrained multilingual language models for lowresourced languages. In Proceedings of the 1st Workshop on Multilingual Representation Learning, pages 116–126, Punta Cana, Dominican Republic. Association for Computational Linguistics. Akintunde Oladipo, Mofetoluwa Adeyemi, Orevaoghene Ahia, Abraham Owodunni, Odunayo Ogundepo, David Adelani, and Jimmy Lin. 2023. Better quality pre-training data and t5 models for African languages. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 158–168, Singapore. Association for Computational Linguistics. Hemant Palivela. 2021. Optimization of paraphrase generation and identification using language models in natural language processing. International Journal of Information Management Data Insights, 1(2):100025. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67. Machel Reid, Junjie Hu, Graham Neubig, and Yutaka Matsuo. 2021a. AfroMT: Pretraining strategies and reproducible benchmarks for translation of 8 african languages. In Conference on Empirical Methods in Natural Language Processing (EMNLP), Punta Cana, Dominican Republic. Machel Reid, Junjie Hu, Graham Neubig, and Yutaka Matsuo. 2021b. AfroMT: Pretraining strategies and reproducible benchmarks for translation of 8 African languages. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1306–1320, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Sebastian Ruder, Matthew E. Peters, Swabha Swayamdipta, and Thomas Wolf. 2019. Transfer learning in natural language processing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Tutorials, pages 15–18, Minneapolis, Minnesota. Association for Computational Linguistics. Yves Scherrer. 2020. TaPaCo: A corpus of sentential paraphrases for 73 languages. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 6868–6873, Marseille, France. European Language Resources Association. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, Berlin, Germany. Association for Computational Linguistics. Tian Shi, Yaser Keneshloo, Naren Ramakrishnan, and Chandan K. Reddy. 2021. Neural abstractive text summarization with sequence-to-sequence models. ACM/IMS Trans. Data Sci., 2(1). Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS’14, page 3104–3112. MIT Press. 
Kees Van Deemter, Emiel Krahmer, and Mariët Theune. 2005. Real versus template-based natural language generation: A false opposition? Comput. Linguist., 31(1):15–24. Emiel van Miltenburg, Chris van der Lee, Thiago CastroFerreira, and Emiel Krahmer. 2020. Evaluation rules! on the use of grammars and rule-based systems for NLG evaluation. In Proceedings of the 1st Workshop on Evaluating NLG Evaluation, pages 17–27, Online (Dublin, Ireland). Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17, page 6000–6010, Red Hook, NY, USA. Curran Associates Inc. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 12811 483–498, Online. Association for Computational Linguistics. 12812 Appendices A Pretraining Data We provide details of our pretraining data below: Religious Domain. Our religious data is taken from online Bibles, Qurans, and data crawled from the Jehovah’s witness website. We also include religious texts from the book of Mormon. News Domain. We collect data from online newspapers (Adebara and Abdul-Mageed, 2022) and news sites such as Voice of America, Voice of Nigeria, BBC, Global voices, and DW news sites. We collect local newspapers from 27 languages from across Africa. Government Documents. We collect government documents South African Centre for Digital Language Resources (SADiLaR), and the Universal Declaration of human rights (UDHR) in multiple languages. Health Documents. We collect multiple health documents from the Department of Health, State Government of Victoria, Australia. We collect documents in Amharic, Dinka, Harari, Oromo, Somali, Swahili, and Tigrinya. Existing Corpora. We collect corpora available on the web for different African languages, including from Project Gutenberg for Afrikaans, South African News data. for Sepedi and Setswana, OSCAR (Abadji et al., 2021) for Afrikaans, Amharic, Somali, Swahili, Oromo, Malagasy, and Yoruba. We also used Tatoeba for Afrikaans, Amharic, Bemba, Igbo, Kanuri, Kongo, Luganda, Malagasy, Sepedi, Ndebele, Kinyarwanda, Somali, Swahili, Tsonga, Xhosa, Yoruba, and Zulu; Swahili Language Modelling Data for Swahili; Ijdutse corpus for Hausa; Data4Good corpora for Luganda, CC-100 for Amharic, Fulah, Igbo, Yoruba, Hausa, Tswana, Lingala, Luganada, Afrikaans, Somali, Swahili, Swati, North Sotho, Oromo, Wolof, Xhosa, and Zulu; Afriberta-Corpus for Afaan / Oromo, Amharic, Gahuza, Hausa, Igbo, Pidgin, Somali, Swahili, Tigrinya and Yoruba; mC4 for Afrikaans, Amharic, Hausa, Igbo, Malagasy, Chichewa, Shona, Somali, Sepedi, Swahili, Xhosa, Yoruba and Zulu. B Models C AfroNLG Benchmark We report statistics of AfroNLG benchmark in Table C.2 and C.1 respectively. Figure C.1: Examples from the mask-one and mask-atleast-one cloze task data. C.1 CHRF and CHRF++ Results D Cloze Task Results We provide results on the performance of each model on individual languages. We use a dash ’-’ to indicate that a specific model does not support a language. 
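For reference, corpus-level BLEU, chrF, and chrF++ scores of the kind reported in Table 2 and Tables C.4, C.5, D.1, and D.2 can be computed with the sacrebleu library roughly as follows. This is a sketch with placeholder outputs and references, not the exact evaluation scripts.

```python
# Sketch of corpus-level metric computation with sacrebleu (placeholder data).
from sacrebleu.metrics import BLEU, CHRF

hypotheses = ["model output one", "model output two"]   # one per test example
references = [["reference one", "reference two"]]       # one reference stream

bleu    = BLEU()              # BLEU, as in Table 2 and Tables D.1-D.2
chrf    = CHRF()              # chrF, as in Table C.4
chrf_pp = CHRF(word_order=2)  # chrF++, as in Table C.5

print(bleu.corpus_score(hypotheses, references).score)
print(chrf.corpus_score(hypotheses, references).score)
print(chrf_pp.corpus_score(hypotheses, references).score)
```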
E Annotation We gave the following annotation rules to our annotators: Faithfulness refers to how close to the English sentence the model output is. It should be annotated with values between 1 and 5. Faithfulness should be evaluated independently of the fluency of the model output. Below are some detailed explanations for the scale for faithfulness: • Give a value 1 if model output is not related to the source sentence. • Give a value 2 if the model output is the opposite of the source sentence. • Give a value 3 if the model output is somewhat related to the source sentence. It should have some words or phrases that make it related to the source. • Give a value 4 if the model output is closely related but changes the meaning slightly (e.g difference in gender, number etc) • Give a value 5 if the model output is an exact translation Fluency is how grammatically correct the model is. Faithfulness and fluency should be judged independently. That is, even if the output is not faithful, don’t use it to determine the fluency score and vice versa. Here are some detailed explanations on how to assign the values: • Give a value 1 if model output is completely ungrammatical and nonsensical. 12813 Model Size Params No._heads No._layers D_model Vocab S._Len B. Size #Train_Steps #Langs #A.Langs mT0 base 580M 12 12 768 ∼250k 1024 1024 UNK 101 13 mT5 base 580M 12 12 768 250K 1024 1024 1M 101 13 AfriMT5 base 580M UNK UNK UNK UNK UNK 2048 UNK 17 17 AfriTeVa base 229M 12 12 768 40K 512 256 500K 10 10 Cheetah base 580M 12 12 768 250K 1024 1024 1M 527 517 Table B.1: Parameters of Cheetah compared with other models. Category Benchmark Reference Task Lang/Total Datasets Tasks Multilingual FLoRES200 (Costa-jussà et al., 2022) 52/200 MT Wiki 1 GEMv1 (Gehrmann et al., 2021) DRG, DT, RES, TS, SMP 10/52 18 13 GEMv2 (Gehrmann et al., 2021) DRG, DT, PPH, QA, RES, TS, SLG, SMP, TS 10/52 50 9 IndicNLG (Kumar et al., 2022) BG, HG, SUM, PARA, QA 0/11 5 5 IndoNLG (Cahyawijaya et al., 2021) SUM, QA, Chit-Chat 0/3 5 3 NLLB M.D. (Costa-jussà et al., 2022) MT 2/8 Wiki 1 NLLB S.D. (Costa-jussà et al., 2022) MT 2/8 Wiki 1 Toxicity200 (Costa-jussà et al., 2022) MT 50/200 Wiki 1 XGLUE (Liang et al., 2020) NER, POS, MLQA, PAWS-X, XLNI, NC, QADSM, WPR, QAM, QG, NTG 1/19 19 11 African AfroMT (Reid et al., 2021a) MT 8/8 5 1 Menyo-20k (Adelani et al., 2021) MT 1/2 6 1 AfroNLG Our Work Cloze, CS, MT, QA, TG, SUM, PARA 517/527 67 7 Table C.1: A Comparison of AfroNLG with other multilingual Benchmarks. MT: Machine translation, QA: Question Answering, CS: Code-Switching, TG: Title Generation, SUM: Summarization, PARA: Paraphrase, NER: Named Entity Recognition, POS: Part-Of-Speech Tagging, MLQA: Multilingual Question Answering, PAWS-X: Parallel Aggregated Word Scrambling for Cross-Lingual Understanding, XNLI: Cross-Lingual Natural Language Interference, NC: News Classification, QADSM: Query-AD Matching, WPR: Web Page Ranking, QAM: QA Matching, NTG: News Title Generation, BG: WikiBio Biography Generation, and HG: Headline Generation. SD: Seed Data, MD: Multi Domain. DRG: Dialogue Response Generator, DT: Data-to-Text, RES: Reasoning, TS: Text Summarization, SMP: Text Simplification, PPH: Paraphrase, SLG: Slide Generation • Give a value 2 if the model output is reasonable but includes some foreign words or gibberish. • Give a value 3 if the model output contains some grammatical phrases but also contains some ungrammatical phrases. 
• Give a value 4 if the model output is almost grammatical (but may have a few errors like spelling mistakes) • Give a value 5 if the model output is very fluent and sounds looks like what a native speaker will say. To prepare for the official annotation process, each evaluator annotated 10 random samples for the language pair they were assigned to. Following the individual evaluations, we reviewed any annotation errors and inconsistencies in assessments and assigned another random pair of 10 samples. In the second phase, we removed evaluators for which the quality of their annotations were deemed of a poor quality. F Fluency and Faithfulness Performance We report the distribution of faithfulness and fluency scores across all models and languages in Figure 2. Overall, Cheetah produces more faithful and more fluent outputs than other models on all languages. We now go on to provide detailed analysis of model performance. Intransitives In the case of Hausa examples, all three models manage to produce intransitive examples. However, Cheetah consistently appends objects to these intransitive examples. This inclination to add objects might stem from biases within the data used for pretraining or finetuning Cheetah. Nevertheless, it is worth noting that Cheetah outperforms other models by generating more fluent and more faithful Hausa outputs. In the Swahili context, all models successfully generate intransitive translations, with model errors primarily related to tense. This performance discrepancy in Swahili can be attributed to its agglutinative structure, with models potentially lacking exposure to a comprehensive range of grammatical features during pretraining 12814 Dataset Pairs Train Dev Test Lafand eng-hau 5,866 1,301 1,501 eng-ibo 6,945 1,457 1,412 eng-lug 4,076 1,501 1,501 eng-pcm 4,791 1,485 1,565 eng-swa 30,783 1,792 1,836 eng-tsn 2,101 1,343 1,501 eng-twi 3,338 1,285 1,501 eng-yor 6,645 1,545 1,559 eng-zul 3,540 1,462 1,001 fra-bam 3,014 1,501 1,501 fra-bbj 2,233 1,134 1,431 fra-ewe 2,027 1,415 1,564 fra-fon 2,638 1,228 1,580 fra-mos 2,494 1,493 1,575 fra-wol 3,361 1,507 1,501 AfroMT eng-afr 25,799 3,226 3,226 eng-bem 12,043 1,506 1,506 eng-lin 17,679 2,211 2,210 eng-run 12,475 1,560 1,560 eng-sot 28,844 3,607 3,606 eng-swa 28,084 3,511 3,512 eng-xho 26,091 3,263 3,262 eng-zul 29,127 3,641 3,642 PidginUNMT eng-pcm 1,682 211 211 SALT All-pairs 20,006 2,501 2,502 Table C.2: Statistics of the MT data in our benchmark. All-pairs each have the same size of data. They include ach-eng, ach-lgg, ach-lug, ach-nyn, ach-teo, ach-teo, eng-lgg, eng-lug, eng-nyn, eng-teo, lgg-teo, lug-lgg, lug-teo, nyn-lgg, nyn-lug, and nyn-teo or finetuning. In the context of Yorùbá, all models consistently incorporate at least one object in each intransitive case. Notably, mT0 generates an output without an object approximately 5.88% of the time. This may be because intransitive sentences inherently lack a clear direct object, making it more challenging for machine translation models to grasp context and select the accurate translation. In certain instances, some intransitive phrases can be polysemous, further complicating the translation process. Intransitive English verbs do not always retain their intransitive nature in Yorùbá. Furthermore, transitives with optional/truncated objects tend to have a compulsory object in Yorùbá. This phenomenon potentially contributes to the models’ tendency to append objects to intransitive Yorùbá phrases. 
For instance, whereas the intransitive "slept" in "John slept" maps to the intransitive form "John sùn" in Yorùbá, the intransitive verb "prayed" in "John prayed" becomes "John gbàdúrà", a transitive verb in Yorùbá. On the other hand, the transitive verb "ate" in "John ate" has an optional/truncated object in English but becomes "John jẹun", a transitive with an obligatory object. In Yorùbá, both "ate" and "prayed" are transitive verbs that require an object. They are derived from "jẹ" (eat) and "oúnjẹ" (food), which give rise to "jẹun", and from "gbà" (collect) and "àdúrà" (prayer), resulting in "gbàdúrà", respectively.

Transitives. In the context of transitives, Cheetah stands out as the top-performing model across all three languages, as illustrated in Figure 2. Cheetah demonstrates the capability to provide three distinct semantic senses for the polysemous transitive verb "treated", whereas the other models typically produce only a single semantic interpretation. In Swahili examples, certain instances exhibit the deletion or simplification of object markers in an ungrammatical manner. For a visual representation of the annotation of intransitive sentences in Yorùbá, please refer to Figure G.3. Figure G.2 shows the distribution of model performance on transitives.

Negative. In the context of Yorùbá, all models are able to produce the correct negation marker, including the correct tone marks. The tone patterns on negation markers may vary based on the context of words before and after the negation marker, and it was interesting to see these variations in the models' outputs. Despite this, mT0, mT5, and AfriTeVa have a tendency to output the negation of the antonym of the verb in each sentence rather than the negation of the verb. Cheetah also makes similar mistakes about 5% of the time.

Gender/Agreement. We find interesting cases of gender in the model's output. For example, whereas Yorùbá grammar does not distinguish gender, Cheetah uses Arábìrin (female) before every occurrence of the name "Sara" to indicate that the name has a high probability of being feminine (see Figure G.3). It is important to mention that "Fred" is not annotated this way. For Hausa, which requires agreement between the gender of the noun and the verb, we find Cheetah outperforming both mT0 and mT5 significantly. AfriTeVa, however, has very low accuracy in the context of gender. Furthermore, mT0, mT5, and Cheetah return connotations of love and relationships for each example where a male and a female pronoun co-occur, cross-lingually.

Number. Cheetah significantly outperforms all three models in accurately assigning appropriate number markers. We also find that when translating the word "you" into Hausa, Swahili, or Yorùbá, all four models use either singular or plural forms. We assume that this is due to the fact that the second person in English (i.e., "you") can be either singular or plural, while each of these languages has a different word for the singular and plural forms.
12815 Task Cluster Test Set Source Train Dev Test Cloze test 517 languages Ours 103,400 25,850 51,700 Paraphrase Multilingual†† (Scherrer, 2020) 22,390 2,797 2,794 Berber 17,607 2,200 2,200 Kabyle 4,441 555 555 Question Answering Swahili (Clark et al., 2020) 49,881 499 n/a Summarization Multilingual† (Hasan et al., 2021) 63,040 7,875 7875 Amharic 5,761 719 719 Igbo 4,183 522 522 Oromo 6,063 757 757 Rundi 5,746 718 718 Swahili 7,898 987 987 Yorùbá 6,350 793 793 Hausa 6,418 802 802 Nigerian Pidgin 9,208 1,151 1,151 Somali 5,962 745 745 Tigrinya 5,451 681 681 Multilingual⋆† Ours 428 Title Generation Multilingual† (Hasan et al., 2021) 63,040 7,875 7875 Amharic 5,761 719 719 Igbo 4,183 522 522 Oromo 6,063 757 757 Rundi 5,746 718 718 Swahili 7,898 987 987 Yorùbá 6,350 793 793 Hausa 6,418 802 802 Nigerian Pidgin 9,208 1,151 1,151 Somali 5,962 745 745 Tigrinya 5,451 681 681 Multilingual⋆ Ours 5899 Table C.3: Statistics of the data in our benchmark. †† includes amh, ber, kab, run. † has amh, ibo, orm, run, swa, yor, hau, pcm, som, and tir. ⋆† is a newly created summarization test set including ‘hau’, ‘nde’ (zero-shot), and ‘swa’. ⋆ is a newly created test set across 15 languages: ‘amh’, ‘gag’ (zero-shot), ‘hau’, ‘ibo’, ‘pcm’, ‘som’, ‘swa’, ‘tir’, ‘yor’, ‘kin’ (zero-shot), ‘afr’, ‘mlg’ (zero-shot), ‘orm’, ‘nde’ (zero-shot), ‘sna’(zero-shot) 12816 Task Metric mT0 mT5 afri-mt5 AfriTeVa Cheetah Translate English to Afrikaans Chrf 26.97±4.75 26.11±4.12 14.66±8.79 20.75±4.02 39.88±0.81 Translate English to Bemba Chrf 10.27±0.89 6.39±1.96 20.23±13.97 9.94±10.05 15.76±0.19 Translate English to Rundi Chrf 21.51±1.39 17.56±3.13 24.91±3.59 31.58±2.33 28.65±3.55 Translate English to Sesotho Chrf 21.08±3.54 12.08±10.91 23.75±4.77 29.57±1.61 29.05±2.41 Translate English to Swahili Chrf 23.26±0.16 20.35±4.87 24.60±0.2 20.5±4.88 37.24±0.04 Translate English to Xhosa Chrf 27.44±3.1 25.88±4.94 34.97±2.49 20.25±15.35 33.45±0.21 Translate English to Zulu Chrf 27.12±3.49 21.54±2.16 37.8±1.41 25.39±16.55 43.75±0.11 Translate English to Hausa Chrf 28.53±0.26 27.65±0.53 19.99±0.42 31.68±0.29 34.9±0.32 Translate English to Igbo Chrf 40.31±0.17 37.18±0.34 22.01±0.7 33.24±0.23 44.37±0.31 Translate English to Luganda Chrf 25.94±2.41 23.33±0.31 15.57±1.45 24.16±2.55 36.22±0.09 Translate English to N. Pidgin Chrf 63.49±0.05 63.9±0.1 24.79±0.68 53.76±0.01 62.95±0.17 Translate English to Swahili Chrf 50.52±3.33 51.76±0.12 21.00±0.7 44.84±0.33 56.36±0.15 Translate English to Setswana Chrf 30.89±0.36 16.62±0.22 13.17±1.73 23.75±0.45 35.87±0.64 Translate English to Twi Chrf 23.56±0.24 15.8±1.29 12.74±1.33 17.47±3.26 25.89±0.2 Translate English to Yoruba Chrf 19.41±1.97 16.51±0.38 11.49±0.29 20.62±0.36 25.09±0.07 Translate English to Zulu Chrf 35.4±1.27 16.13±7.84 15.04±1.1 12.75±0.56 38.81±0.21 Translate French to Bambara Chrf 16.49±0.39 7.44±1.12 10.16±1.58 19.41±0.53 19.91±0.05 Translate French to Ghomálá’ Chrf 8.3±0.76 6.53±0.57 6.72±3.75 13.16±0.4 8.57±3.15 Translate French to Ewe Chrf 10.19±2.32 5.46±3.02 6.96±3.02 13.44±1.64 21.6±0.22 Translate French to Fon Chrf 5.67±2.65 6.09±0.72 5.82±1.58 11.88±1.83 12.71±0.41 Translate French to Moore Chrf 7.86±1.43 5.16±2.20 7.79±0.97 11.42±0.7 12.34±0.56 Translate French to Wolof Chrf 17.55±0.2 3.15±0.12 11.26±1.91 17.58±0.44 16.67±0.21 Translate English to N. 
Pidgin (pidginUNMT) Chrf 41.83±0.17 37.12±0.77 21.65±1.33 39.04±0.50 40.2±0.17 Translate Acholi to English Chrf 39.12±0.1 33.07±5.49 21.65±1.33 34.19±0.06 42.17±0.05 Translate Acholi to Lugbara Chrf 25.05±0.85 20.61±5.92 28.71±0.34 34.01±0.29 32.31±1.11 Translate Acholi to Luganda Chrf 22.13±0.63 25.75±0.02 24.31±0.1 32.77±0.68 37.34±0.47 Translate Acholi to Nyankore Chrf 27.52±0.45 20.03±3.88 24.50±0.02 32.39±0.92 35.0±0.33 Translate Acholi to Ateso Chrf 26.0±1.99 22.16±1.63 28.33±0.01 35.37±0.61 34.62±1.05 Translate English to Lugbara Chrf 38.84±0.01 37.12±0.77 39.11±0.01 38.94±0.3 40.2±0.17 Translate English to Luganda Chrf 43.71±0.08 41.05±0.19 35.34±1.11 43.14±0.22 49.38±0.02 Translate English to Nyankore Chrf 40.43±0.21 38.38±0.13 36.8±0.07 40.36±0.17 43.67±0.32 Translate English to Ateso (salt) Chrf 41.98±0.13 38.91±0.05 39.76±1.35 42.1±0.42 42.96±0.48 Translate Lugbara to Ateso Chrf 22.67±1.51 20.47±0.7 28.13±0.58 34.3±0.64 29.04±0.3 Translate Luganda to Lugbara Chrf 28.65±1.5 25.74±0.5 30.87±0.12 34.26±0.24 34.94±0.6 Translate Luganda to Ateso Chrf 31.74±0.22 27.66±0.64 34.04±0.01 37.19±0.07 39.05±0.49 Translate Nyankore to Lugbara Chrf 27.47±0.45 24.63±0.76 15.01±0.01 33.17±0.21 33.2±0.19 Translate Nyankore to Luganda Chrf 39.34±0.14 37.34±0.16 35.26±0.13 40.48±0.63 45.29±0.01 Translate Nyankore to Ateso Chrf 28.6±0.11 24.64±1.05 30.69±0.16 34.37±0.14 35.52±0.64 Average 28.07 23.88 22.62 28.77 34.08 Table C.4: Performance of various models on MT data using CHRF 12817 Task Metric mT0 mT5 afri-mt5 AfriTeVa Cheetah Translate English to Afrikaans Chrf++ 22.86±3.74 22.32±2.80 11.62±6.72 17.27±2.91 34.02±0.7 Translate English to Bemba Chrf++ 9.04±0.79 5.46±1.78 23.65±1.87 7.85±7.45 13.9±0.13 Translate English to Rundi Chrf++ 18.06±1.16 14.41±2.53 20.36±2.88 25.39±1.57 23.94±3.03 Translate English to Sesotho Chrf++ 17.34±3.09 10.2±8.75 19.31±3.94 23.85±1.43 23.9±2.03 Translate English to Swahili Chrf++ 18.5±0.31 16.28±4.48 19.42±2.2 16.16±3.93 30.6±0.11 Translate English to Xhosa Chrf++ 21.34±2.66 19.96±4.05 26.94±1.92 15.76±11.49 27.0±1.01 Translate English to Zulu Chrf++ 21.14±2.6 17.32±3.17 28.97±1.14 19.29±12.69 40.97±1.10 Translate English to Hausa Chrf++ 25.98±0.27 25.22±0.5 18.28±0.41 28.56±0.22 32.23±0.29 Translate English to Igbo Chrf++ 37.82±0.15 34.8±0.32 20.25±0.68 29.89±0.22 41.87±0.31 Translate English to Luganda Chrf++ 23.15±2.19 20.74±0.36 13.43±1.28 20.27±2.21 33.12±0.08 Translate English to N. Pidgin Chrf++ 60.57±0.15 60.12±0.07 23.85±0.64 49.72±0.36 59.74±0.18 Translate English to Swahili Chrf++ 47.67±3.33 48.95±0.13 19.01±1.69 40.84±0.31 53.67±0.15 Translate English to Setswana Chrf++ 29.02±0.35 14.87±0.16 11.77±1.61 21.25±0.36 34.05±0.64 Translate English to Twi Chrf++ 21.25±0.22 13.63±1.18 11.7±1.13 15.39±3.02 23.96±0.2 Translate English to Yoruba Chrf++ 18.41±1.89 15.47±0.4 10.19±0.25 18.99±0.27 24.1±0.06 Translate English to Zulu Chrf++ 30.99±1.13 13.86±6.85 11.34±2.1 10.58±0.77 34.31±0.2 Translate French to Bambara Chrf++ 15.75±0.36 6.8±0.97 10.2±1.41 18.28±0.49 19.65±0.14 Translate French to Ghomálá’ Chrf++ 7.0±0.77 5.64±0.44 5.84±3.04 11.13±0.34 7.28±2.83 Translate French to Ewe Chrf++ 9.09±2.21 4.75±2.76 6.56±3.19 11.72±1.4 20.53±0.23 Translate French to Fon Chrf++ 5.24±2.33 5.57±0.63 5.28±1.38 10.94±1.93 11.76±0.45 Translate French to Moore Chrf++ 7.08±1.33 4.63±2.02 7.18±0.79 10.31±0.64 11.2±0.54 Translate French to Wolof Chrf++ 16.27±0.24 2.65±0.11 10.23±1.73 15.73±0.33 15.58±0.19 Translate English to N. 
Pidgin (pidginUNMT) Chrf++ 42.12±0.18 37.67±1.64 22.53±1.31 28.38±0.98 39.58±0.49 Translate Acholi to English Chrf++ 37.96±0.1 27.18±0.36 28.24±0.38 31.83±0.07 41.06±0.06 Translate Acholi to Lugbara Chrf++ 23.41±0.84 19.57±5.04 27.18±0.36 31.45±0.29 30.68±1.02 Translate Acholi to Luganda Chrf++ 25.67±0.34 19.59±0.56 21.52±0.02 28.52±0.63 33.93±0.48 Translate Acholi to Nyankore Chrf++ 24.02±0.41 17.35±3.35 21.38±0.23 27.73±0.84 31.04±0.29 Translate Acholi to Ateso Chrf++ 23.65±1.87 20.07±1.53 25.81±0.04 31.56±0.57 31.83±0.99 Translate English to Lugbara Chrf++ 36.83±0.03 38.3±0.13 37.29±0.12 34.3±0.77 35.85±0.01 Translate English to Luganda Chrf++ 40.1±0.06 37.56±0.19 32.18±1.05 38.28±0.2 45.82±0.04 Translate English to Nyankore Chrf++ 35.93±0.18 34.07±0.12 32.59±0.05 34.88±0.15 39.17±0.33 Translate English to Ateso (salt) Chrf++ 37.98±0.11 38.93±0.01 36.83±1.23 37.85±0.4 39.87±0.47 Translate Lugbara to Ateso Chrf++ 20.55±1.38 18.54±0.65 25.6±0.64 30.48±0.59 26.43±0.32 Translate Luganda to Lugbara Chrf++ 26.79±1.49 23.94±0.48 29.13±0.11 31.56±0.24 33.04±0.58 Translate Luganda to Ateso Chrf++ 28.94±0.22 25.11±0.59 31.26±0.01 33.18±0.05 35.99±0.45 Translate Nyankore to Lugbara Chrf++ 22.89±0.73 25.75±0.44 12.07±0.11 30.54±0.2 31.35±0.2 Translate Nyankore to Luganda Chrf++ 35.7±0.12 33.73±0.15 31.99±0.07 35.74±0.54 41.63±0.0 Translate Nyankore to Ateso Chrf++ 26.03±0.08 22.35±0.98 28.05±0.09 30.53±0.13 32.65±0.62 Average 25.58 21.67 20.50 25.16 31.24 Table C.5: Performance of various models on MT data using CHRF++ 12818 ISO MT0 MT5 AfriMT5 AfriTeVa Cheetah afr 0 0 20.45 amh 0 0 0 0 bam 0 0 bbj 5.21 0 8.45 ewe 0 0 fon 0 0 hau 0 0 0 0 13.41 ibo 0 0 0 0 0 lin 0 25.35 lug 0 0 luo 0 9.35 mos 0 14.53 mlg 0 0 15.65 nya 7.64 nyj orm 0 0 pcm 0 0 10.10 sna 0 0 0 som 0 0 0 10.39 sot 4.69 15.23 swa 0 0 7.02 swh tir 6.33 tsn 0 0 twi 0 0 wol 0 0 xho 0 0 6.92 yor 0 3.61 0 0 6.42 zul 0 0 0 8.05 Table D.1: Bleu scores for mask-one cloze task on the union of languages represented in the four models we compare Cheetah with. Red describes zero-shot performance greater than 0. ISO MT0 MT5 AfriMT5 AfriTeVa Cheetah afr 0 0 0 amh 0 0 0 0 bam 0 0 bbj 0 0 ewe 0 0 fon 0 0 hau 0 0 0 6 ibo 0 0 0 8 lin 0 0 lug 0 0 luo 0 0 mos 0 0 mlg 0 0 nya 0 0 12 nyj orm 0 0 0 pcm 0 0 0 sna 0 0 0 som 0 0 0 4 sot 10 swa 0 0 12 swh tir 0 0 0 tsn 0 0 twi 0 0 wol 0 0 xho 0 0 0 6 yor 0 0 0 0 0 zul 0 0 0 0 Table D.2: Bleu scores for mask-at-least-one cloze task on the union of languages represented in the four models we compare Cheetah with. Category Example Intransitive He left Intransitive + Negation We did not leave Transitive You left Lagos Transitive + Negation She did not leave them Table E.1: Some examples of sentences generated with the templates Lang. Family # Tone Gender Morphology Hausa Afro-Asiatic Two Two Isolating Swahili N.C. Bantu None Five Agglutinative Yourba N.C. Non-Bantu Three None Isolating Table E.2: Some linguistic differences between Hausa, Swahili, and Yoruba. N.C. refers to Niger-Congo hau yor Model Faith. Flu. Faith. Flu. mT0 90.54 97.62 96.57 93.92 mT5 93.51 96.48 82.23 81.10 AfriTeVa 87.27 96.94 88.56 84.73 Cheetah 96.61 97.26 87.11 92.64 Table E.3: Kappa scores for Faithfulness (i.e., Faith.) and Fluency (i.e., Flu.) across the four models and three languages we evaluate. 
G Results on Quality Evaluation
Figure G.1: Faithfulness and fluency for Intransitives in Hausa, Swahili, and Yorùbá
Figure G.2: Faithfulness and fluency for Transitives in Hausa, Swahili, and Yorùbá
Figure G.3: Performance on some intransitive examples in the Yorùbá test set. The correct words have no highlights, plausible words or phrases are highlighted with yellow ink while wrong words and phrases are highlighted with grey highlights. We use plausible to refer to words or phrases that can be used in place of the gold or which give additional information.
Figure G.4: Faithfulness and fluency for Intransitives + Negation in Hausa, Swahili, and Yorùbá
Figure G.5: Faithfulness and fluency for Transitives + Negation in Hausa, Swahili, and Yorùbá
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12824–12840 August 11-16, 2024 ©2024 Association for Computational Linguistics
TAPERA: Enhancing Faithfulness and Interpretability in Long-Form Table QA by Content Planning and Execution-based Reasoning
Yilun Zhao1 Lyuhao Chen2 Arman Cohan1,3 Chen Zhao4
1Yale University 2Zhejiang University 3Allen Institute for AI 4NYU Shanghai
Abstract
Long-form Table Question Answering (LFTQA) requires systems to generate paragraph-long, complex answers to questions over tabular data. While large language model (LLM)-based systems have made significant progress, they often hallucinate, especially when the task involves complex reasoning over tables. To tackle this issue, we propose a new LLM-based framework, TAPERA, for LFTQA tasks. Our framework uses a modular approach that decomposes the whole process into three sub-modules: 1) a QA-based Content Planner that iteratively decomposes the input question into sub-questions; 2) an Execution-based Table Reasoner that produces an executable Python program for each sub-question; and 3) an Answer Generator that generates a long-form answer grounded in the program output. Human evaluation results on the FETAQA and QTSUMM datasets indicate that our framework significantly improves over strong baselines in both accuracy and truthfulness, as our modular framework is better at table reasoning and the long-form answer is always consistent with the program output. Our modular design further provides transparency, as users are able to interact with our framework by manually changing the content plans. § https://github.com/yilunzhao/TaPERA
1 Introduction
Long-form question answering (LFQA) presents a unique challenge in natural language processing (NLP), demanding models that not only comprehend the long context but also maintain factual accuracy during answer generation (Fan et al., 2019; Xu et al., 2023a). A prevalent and concerning issue is hallucination, where models generate answers that are coherent yet factually incorrect or irrelevant to the input context. This problem undermines the trustworthiness of NLP systems, posing a barrier to their real-world application in domains where reliability is paramount, such as finance and healthcare. The hallucination issues become even more pronounced when the task involves reasoning over tables, e.g., long-form table question answering (LFTQA) (Nan et al., 2022b; Zhao et al., 2023d).
Figure 1: An example of long-form table QA. Answering complex questions with an end-to-end LLM-based system poses hallucination challenges (highlighted in red). We propose a modular framework that produces a QA-based plan first, followed by generating an answer conditioned on this plan (through generating executable Python programs as an intermediate step).
As illustrated in Figure 1, given the complex question and numerous data points in a table, the system must be capable of understanding the relationships within the data and performing human-like reasoning over the tabular content to compose the paragraph-long answer.
This paper introduces TAPERA, aiming to enhance the trustworthiness of LLM-based methods in long-form TAble QA. As depicted in Figure 2, TAPERA employs a modular approach comprising three components: (1) QA-based Content Planner: Given a complex question, this module generates and iteratively updates a QA-based content plan. It controls the step-by-step reasoning process to shape the final answer content. (2) Execution-based Table Reasoner: For each sub-question in the QA-based plan, this module generates executable programs (i.e., Python programs) to extract and process question-relevant information from tables. (3) Answer Generator: Given the sub-question and the executed output from the table reasoner, this module composes a natural language sub-answer, ensuring consistency with the reasoner's output. Moreover, once the entire QA-based plan is finalized, it generates the final answer based on the plan and all sub-answers.
TAPERA features several core advancements: (1) Modularity. The complex task is decomposed into multiple easier sub-tasks, which can be solved with more tailored solutions (e.g., reasoning-enhanced modules). (2) Faithfulness. The table reasoner first generates an executable Python program as an external tool to output question-relevant facts; the answer generator is then applied to ensure consistency between the generated answer and the program output, which improves the faithfulness of the generated answer. (3) Interpretability. The system allows users to easily identify which sub-plan or reasoning step leads to an undesired final answer. Additionally, our user study demonstrates the potential of applying user interaction to modify the generated plan to control and improve the final answer.
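To make this division of labor concrete, the following minimal sketch shows how the three modules could be wired together. All function names and signatures here are hypothetical illustrations rather than the authors' implementation; each stub would be realized by prompting an LLM, with the reasoner additionally executing the generated program.

```python
# Illustrative-only sketch of the three-module decomposition; names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class QAPlan:
    sub_questions: list                               # ordered sub-questions from the planner
    sub_answers: dict = field(default_factory=dict)   # sub-question index -> sub-answer text

def content_planner(question: str, table: str) -> QAPlan:
    """Decompose the complex question into step-by-step sub-questions (Section 4.1)."""
    return QAPlan(sub_questions=[])   # placeholder: would prompt an LLM

def table_reasoner(sub_question: str, table: str) -> str:
    """Generate a Python program for the sub-question, execute it, return its output (Section 4.2)."""
    return ""                         # placeholder: would prompt an LLM and run the program

def answer_generator(sub_question: str, program_output: str) -> str:
    """Verbalize the execution output as a sentence-long sub-answer (Section 4.3)."""
    return ""                         # placeholder: would prompt an LLM

def compose_final_answer(question: str, plan: QAPlan) -> str:
    """Fuse the finalized plan and its sub-answers into a long-form answer."""
    return ""                         # placeholder: would prompt an LLM

def tapera(question: str, table: str) -> str:
    plan = content_planner(question, table)
    for i, sub_question in enumerate(plan.sub_questions):
        plan.sub_answers[i] = answer_generator(sub_question, table_reasoner(sub_question, table))
    # The full framework additionally refines the plan over several iterations
    # before composing the final answer (see Figure 2 and Algorithm 1).
    return compose_final_answer(question, plan)
```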
We experiment with the TAPERA framework on the FETAQA (Nan et al., 2022b) and QTSUMM (Zhao et al., 2023d) datasets. Human evaluation results demonstrate that TAPERA achieves significant improvements in terms of both accuracy and faithfulness. Specifically, TAPERA surpasses 1) the second-best system in comprehensiveness, i.e., Blueprint (Narayan et al., 2023), by 5.4%; and 2) the second-best system in faithfulness, i.e., Dater (Ye et al., 2023), by 5.0%. Our human evaluation results further indicate the inherent limitations in aligning automated metrics and human preference, consistent with previous work (Xu et al., 2023b). In addition, we conduct comprehensive human studies, where users are allowed to interact with the system by modifying and improving the QA-based plans generated by the content planner. The results reveal that using such user-controlled plans can further improve model performance on challenging questions.
Our contributions are summarized below:
• We propose TAPERA, a novel modular approach that combines a QA-based content planner and an execution-augmented reasoning module, aiming to enhance the faithfulness and interpretability of LLM-based systems in LFTQA.
• Human evaluation results on the FETAQA and QTSUMM datasets demonstrate that TAPERA achieves significant improvements. Moreover, we conduct a detailed analysis of each module.
• Our user study reveals that user interaction with the plans generated by the QA-based content planner can further refine and improve the model's performance, highlighting the potential of user-driven customization in LLM applications.
Figure 2: An illustration of TAPERA. Top: The workflow of our modular framework. The QA-based content planner first generates a plan with QA pairs, then Steps 1-3 fill in the plan with sub-answers, and Step 4 finalizes or refines the plan. Bottom: A running example for the first iteration of the QA-based plan generation and refinement.
2 Related Work
Reasoning over Tabular Data Table reasoning helps non-expert users interact with complex tabular data and has received significant attention. It covers several tasks, such as table question answering (Pasupat and Liang, 2015; Iyyer et al., 2017; Zhong et al., 2017; Zhao et al., 2022a; Nan et al., 2022b; Zhao et al., 2023a,b,f), table fact verification (Chen et al., 2020b; Gupta et al., 2020), and table-to-text generation (Chen et al., 2020a; Cheng et al., 2022b; Nan et al., 2022a; Zhao et al., 2023c). Prior work on table reasoning mainly pre-trains models with joint table-text data and then fine-tunes them on specific table-relevant tasks (Herzig et al., 2020; Liu et al., 2022b; Zhao et al., 2022b; Liu et al., 2022a; Jiang et al., 2022; Cheng et al., 2022a; Xie et al., 2022), and recent work shows that LLMs also achieve competitive results (Touvron et al., 2023; OpenAI, 2023; Chen, 2023; Zhao et al., 2023e; Tang et al., 2024). To further advance the capabilities of LLMs in table QA tasks, research has primarily focused on two avenues: (1) Enhancing LLMs with code execution, which utilizes code-optimized LLMs to generate executable programs to answer questions over tabular data (Cheng et al., 2023; Nan et al., 2023; Kong et al., 2024) or to locate relevant table regions conditioned on a given question (Ye et al., 2023; Patnaik et al., 2024). While this approach is effective for deriving a single fact from tables for short-form answers, it struggles in LFTQA tasks that require more complex information synthesis. (2) Question decomposition, which breaks down questions into more fine-grained components, thereby simplifying the table reasoning required (Ye et al., 2023; Nan et al., 2023). However, existing methods rely on single-round question decomposition, which might not always provide sufficient evidence to address the main question in LFTQA.
Content Planning for Text Generation Content planning is effective in capturing and controlling both the content and structure of generated text, particularly for longer outputs (You et al., 2023; Narayan et al., 2023; Huot et al., 2023). The process of plan-based text generation typically involves two distinct stages: planning an outline, and subsequently filling in the specific details. Previous work on table-to-text generation typically represents plans as a sequence of phrase keywords, events, and their interactions (Puduppully et al., 2018; Su et al., 2021; Puduppully and Lapata, 2021; Li et al., 2023), aiming to improve the coherence and narrative flow of generated text. However, such plan representations are not ideally suited for LFTQA tasks, especially in the era of LLMs. This is because the primary objective of LFTQA is to extract and present question-relevant information accurately, rather than creating a narrative flow – an aspect where LLMs already excel (Goyal et al., 2023). This work adapts the idea of QA-based plans (Narayan et al., 2023; Huot et al., 2023; Liu et al., 2023a), utilizing question-answer pairs as intermediate plans for LFTQA tasks. Our work extends previous work on QA-based plans to table-relevant tasks and uses LLMs to iteratively generate and refine the plans based on feedback (i.e., answers) from the table reasoning module.
Automatic Evaluation of LFQA One primary challenge for LFQA lies in accurately evaluating the long-form generation. Recent work (Krishna et al., 2021; Xu et al., 2023b; Krishna et al., 2023; Liu et al., 2023b) indicates that no existing automatic text generation metrics are predictive of human preference judgments. We find the same issue when evaluating TAPERA on LFTQA; therefore, we mainly use human evaluation for our results.
3 Long-form Table QA Task
In this section, we first formulate the LFTQA task, and then discuss the datasets used in our work.
3.1 Problem Formulation
The LFTQA task can be formulated as follows. The input is a user query Q and a table T. The table T = W ∪ {T_{i,j} | i ≤ R_T, j ≤ C_T} has R_T rows and C_T columns, with W being the table title and T_{i,j} being the content in the (i, j)-th cell. The task is to generate a paragraph-long answer Y = (y_1, y_2, ..., y_n) given the query Q and the source table T:
Y = \arg\max_{Y} \prod_{i=1}^{n} P(y_i \mid y_{<i}, Q, T; \theta), \qquad (1)
where θ denotes the parameters of a neural text generation model, and y_i denotes the i-th token in the generated answer. (Footnote 1: In the query-focused summarization task (Dang, 2006), the answer Y is also referred to as the summary of the user query.)
3.2 Evaluated Datasets
We conduct experiments and analysis on the test sets of FETAQA (Nan et al., 2022b) and QTSUMM (Zhao et al., 2023d). Table 1 illustrates the basic data statistics of these two datasets.
Property FETAQA QTSUMM
Table Source Wikipedia Wikipedia
Unique Tables 1,942 424
Avg. Rows per Table 14.2 12.0
Avg. Columns per Table 5.7 6.7
Avg. Table Title Length 5.4 7.4
Avg. Query Length 14.0 22.3
Avg. Summary Length 23.3 67.8
Test Set Size (# QA Pairs) 2,003 1,078
Table 1: Basic statistics of the FETAQA and QTSUMM test sets used in our experiments.
• FETAQA is a challenging dataset designed for long-form question answering over tables, with answers averaging 18.9 words. It requires models to fetch multiple entities from the given Wikipedia table, aggregate and reason over these entities, and structure the inferred information to produce a coherent long-form answer.
• QTSUMM challenges text generation models to perform human-like reasoning and analysis on Wikipedia-sourced tables to produce tailored paragraph-length answers (i.e., summaries). In contrast to the FETAQA dataset, QTSUMM has longer output lengths, with answers averaging 68.0 words.
4 Long-form Table QA with TAPERA
In this section, we discuss our proposed framework TAPERA for LFTQA (Figure 2). Solving LFTQA requires complex table reasoning and consistent generation, both of which are challenging for existing LLM-based systems, and these challenges are more pronounced when they are entangled. Therefore, we propose a modular framework (Andreas et al., 2016) that decomposes the complex task into multiple sub-tasks, and adopt multiple sub-modules to tackle each sub-task. More specifically, our framework contains three main components: a QA-based content planner, an execution-based table reasoner, and an answer generator (Algorithm 1). Given a complex question Q, we generate a QA-based plan that includes a sequence of sub-questions (Section 4.1). Then, for each sub-question, we first apply the table reasoner to generate an executable program to obtain question-relevant facts as program output (Section 4.2). Subsequently, the answer generator derives a sub-answer from the program output (Section 4.3). However, due to the complexity of the task, we expect that our framework might fail in a single attempt (Narayan et al., 2023). Thus, we adopt an iterative refinement approach that repeats the above process until the content planner finalizes a confident QA-based plan. The answer generator then generates the final answer based on the finalized plan and sub-answers.
Algorithm 1: QA-based Plan Generation and Refinement using the TAPERA Framework
1: Input: Complex question Q, table T
2: Output: Final answer A
3: P0 = {q0,0, ..., q0,n}  ▷ Content planner initializes a plan (Section 4.1)
4: i ← 0  ▷ Initialize iteration counter
5: while Pi is not finalized do  ▷ Iterative refinement
6:   for every sub-question qi,j in Pi do
7:     eoi,j ← Table reasoner generates a program over T to obtain qi,j-relevant execution output (Section 4.2)
8:     ai,j ← Answer generator generates the sub-answer using (qi,j, eoi,j) (Section 4.3)
9:   end for
10:  Pi ← Content planner updates a new plan based on Pi (Section 4.1)
11:  i ← i + 1
12: end while
13: A ← Answer generator produces a final answer (Section 4.3)
14: return A
4.1 QA-based Content Planner
Previous work on QA-based content generation (You et al., 2023; Narayan et al., 2023; Huot et al., 2023) mainly relies on task-specific models. However, these models often struggle to generalize to complex reasoning tasks. We instead adopt LLMs (OpenAI, 2023; Touvron et al., 2023) as the content planner for QA-based content plan generation and introduce an iterative refinement pipeline to enhance plan generation. We detail the workflow of the content planner as follows:
Plan Generation Given a complex question Q over a table T, the content planner decomposes the question into a series of step-by-step sub-questions. This results in an initial plan P0 (with empty answers). The finalized plans for FETAQA and QTSUMM contain an average of 2.6 and 4.7 QA pairs, respectively.
Iterative Plan Refinement The LLMs might not faithfully generate a consistent plan in a single attempt.
Additionally, a sub-question might depend on the incorrect answer (from table reasoner and answer generator) to the previous subquestion, causing cascading errors. Therefore iterative refinement is necessary for consistency and accuracy. Specifically, in each iteration, after getting all sub-answers, we use CoT prompting to instruct LLMs to 1) determine whether the plan is finalized, and 2) refine and generate a revised plan Pi+1. If the LLMs determine that the current plan Pi is comprehensive enough to answer the given complex question, we terminate the iteration. 4.2 Execution-Based Table Reasoner Given a sub-question in the current plan, the LLMs might not generate an accurate answer end-to-end. We therefore propose execution-based table reasoner that generates executable programs as an intermediate reasoning step to enhance table reasoning. Unlike recent work in table-relevant tasks (Cheng et al., 2023; Nan et al., 2023; Kong et al., 2024) that uses SQL queries as programs, we use Python programs instead. This is mainly because 1) SQL queries are primarily designed to interact with structured data in relational databases but not most tables in the wild; 2) while SQL is efficient for straightforward data retrieval and basic operations, it becomes less intuitive and user-friendly when dealing with complex table reasoning tasks. Executable Program Generation Given a subquestion and a table, we prompt execution-based table reasoner to generate a Python program. However, in practice, LLMs are likely to generate programs that are not executable. In such cases, we apply SELF-DEBUG prompting methods (Chen et al., 2023), providing LLMs with execution error message to instruct it to regenerate a program. 4.3 Answer Generator The objective of LLM-based answer generator is two-fold: 1) to provide a sub-answer corresponding to the execution output from table reasoner in each plan-refinement iteration, and 2) to generate the final answer based on the finalized QA-based plan. The design principle of answer generator is not to perform new reasoning over tables, which may introduce hallucinations. Instead, we focus on transforming the output from table reasoner into an long-form answer that presents coherent narratives, ensuring the content consistency between the program and the long-form answer. Generating Sub-Answer from Execution Output We first execute the generated Python program over the table to produce short-form answers, then we prompt answer generator to generate a sentencelong answer. Generating Final Answer Based on Plan Once the QA-based plan is finalized (Section 4.1), the answer generator is applied to generate the final answer based on the plan and all generated answers for corresponding sub-questions. 5 Experiment We next discuss the baselines, human and automated evaluations, implementation details and main experimental results. 5.1 Baseline Systems We implement the following baseline systems: 12828 • Zero-/One-shot Direct Answering. We prompt LLMs to directly output the final answer based on the given table and question. • Chain-of-Thought (Wei et al., 2022) prompts LLMs to generate the reasoning chain in textual format before answering the question. • Blueprint (Narayan et al., 2023) employs a finetuned model to generate a QA-based plan for long-form text generation. In our experiment, we adapt the Blueprint method by prompting LLMs to first generate the QA-based plan in a singleround, and then prompt them again to generate the final answer based on the plan. 
• REFACTOR (Zhao et al., 2023d) employs rule-based methods to extract several questionrelevant facts from the table. These facts are then concatenated into the input context for the LLMs. • Dater (Ye et al., 2023) prompts LLMs to break down the table into smaller, question-relevant segments, and then to decompose complex questions into sub-question-answer lists using intermediate SQL queries. Finally, it assembles these elements to compose the final answer. We also develop three variants of TAPERA to research the effectiveness of each component: • TAPERA without Table Reasoner. In this setting, we remove the component of execution-based table reasoner from TAPERA, while keeping the rest the same. • TAPERA without Iterative Plan Refinement. In this setting, content planner is only applied to generate the initial plan, with each pending sub-answer fulfilled by table reasoner and answer generator in each iteration. • TAPERA without Sub-Answer Generation. In this setting, we directly use the execution output from table reasoner as the sub-answer. 5.2 Human Evaluation For human evaluation, we assess the following two dimensions: • Faithfulness: A good answer should correctly answer the given question. It should not contain any unfaithful or hallucinated text. • Comprehensiveness: A good answer should provide all the necessary information to answer the question. Moreover, it should avoid details that are consistent with tabular data yet irrelevant to the given question (Potluri et al., 2023). Each generated answer was scored from 1 (worst) to 5 (best) for each criteria, with the final score averaged across different evaluators. It is worth noting that we do not evaluate aspects of fluency and coherence. This is because our preliminary study has found that all the evaluated GPTbased systems are capable of generating coherent and grammatically correct answers. This finding is also corroborated in the QTSUMM paper. 5.3 Automated Evaluation Following QTSUMM, we adopt following popular automated evaluation metrics: • BLEU (Papineni et al., 2002) computes the geometric average of the precision over output text’s n-grams. We use SacreBLEU (Post, 2018) for BLEU score calculation. • ROUGE (Lin and Hovy, 2003) measures the word overlap between the candidate and reference text. We reported F1 score for ROUGE-L. • METEOR (Banerjee and Lavie, 2005) is based on a generalized concept of unigram matching between the generated text and reference. • BERTScore (Zhang et al., 2020) measures the similarity between the reference and generated text using contextual word embeddings. • TAPAS-Acc (Liu et al., 2022a) is a referencefree metric that uses TAPAS (Herzig et al., 2020) fine-tuned on the TabFact dataset (Chen et al., 2020b) as backbone to evaluate the faithfulness of generation. However, since TabFact only contains single-sentence statements, the reliability of using TAPEX-Acc for evaluating long-form generated text remains questionable. • AutoACU (Liu et al., 2023c) is a referencebased evaluation system. The A2CU first extracts atomic content units (ACUs) from the generation and then evaluates them against reference. A3CU is an accelerated version of A2CU that directly computes the similarity between two text without extracting ACUs, but with the similar evaluation target. We use F1 score of A3CU for evaluation. 5.4 Implementation Details We use gpt-3.5-turbo-1106 as the backbone for all the evaluated systems. 
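For concreteness, a single call to this backbone, combined with the SELF-DEBUG-style retry that the table reasoner uses when a generated program fails to execute (Section 4.2), might be implemented roughly as in the following sketch. The prompt wording and helper names are illustrative assumptions rather than the exact prompts used in this work; the decoding settings match those reported next.

```python
# Hedged sketch: querying the GPT-3.5 backbone and retrying program generation
# with the execution error message, in the spirit of SELF-DEBUG (Section 4.2).
# Prompt wording and helper names are illustrative, not the paper's exact prompts.
from openai import OpenAI

client = OpenAI()

def call_backbone(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-1106",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
        top_p=1.0,
        max_tokens=512,
    )
    return response.choices[0].message.content

def generate_program_with_self_debug(sub_question: str, table_markdown: str,
                                     max_retries: int = 3) -> str:
    prompt = (f"Table (Markdown):\n{table_markdown}\n\n"
              f"Write an executable Python program that answers: {sub_question}")
    program = call_backbone(prompt)
    for _ in range(max_retries):
        try:
            exec(program, {})        # try to run the candidate program
            return program           # executable: keep it
        except Exception as error:   # feed the error message back and regenerate
            program = call_backbone(
                f"{prompt}\n\nYour previous program raised:\n{error}\nPlease return a fixed program."
            )
    return program
```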
For LLM hyperparameter settings, we set temperature as 0.7, Top P as 1.0, and maximum output length as 512. Following QTSUMM, we represent tabular data in Markdown format. In practice, we have found that GPT-* 12829 System Avg. Answer Length Comprehensiveness Faithfulness FETAQA QTSUMM FETAQA QTSUMM FETAQA QTSUMM Ground Truth 23.3 67.8 0-shot 25.6 84.0 3.86 3.70 3.84 3.37 1-shot 29.3 83.0 3.80 3.64 3.91 3.46 CoT 19.4 38.2 3.08 2.96 3.79 3.54 Blueprint 20.2 60.8 3.94 3.72 3.77 3.57 REFACTOR 32.9 86.4 3.84 3.54 3.89 3.65 Dater 22.0 43.3 3.75 3.61 3.92 3.76 ours 23.1 52.2 4.10 3.99 4.18 4.01 wo. Iterative Plan Refinement 18.3 44.1 4.06 (-0.04) 4.04 (+0.05) 3.95 (-0.23) 3.74 (-0.27) wo. Table Reasoner 21.0 56.2 3.93 (-0.17) 3.88 (-0.11) 4.11 (-0.07) 3.82 (-0.19) wo. Sub-answer Conversion 20.7 49.7 4.08 (-0.02) 3.91 (-0.08) 4.07 (-0.11) 3.94 (-0.07) Table 2: Human evaluation results (Likert Scale Scoring) on the aspects of comprehensiveness and faithfulness. Eight human evaluators were engaged to assess 250 predictions from each listed system. An exception was made for TAPERA on QTSUMM, where we evaluated all outputs to aid the user analysis discussed in Section 6. We use the faithfulness-level score on QTSUMM as the ranking indicator of model performance. All the evaluated systems apply gpt-3.5-turbo-1106 as the backbone. ∗: The official implementation does not support the QTSUMM dataset; hence, results reported here are from our independent reproduction. System BL R-L ME BS TAcc ACU Dater 16.6 35.2 35.5 82.9 83.2 36.3 Blueprint 17.6 38.3 45.7 88.3 86.5 48.1 CoT 19.3 39.0 47.2 85.9 92.3 51.9 0-shot Direct 20.1 39.7 49.7 89.9 90.5 54.2 REFACTOR 19.9 39.5 48.8 91.2 86.7 54.7 1-shot Direct 20.5 40.2 49.5 90.3 89.4 55.2 TAPERA 14.6 33.0 33.2 88.7 76.6 37.7 wo. Table Reasoner 14.7 34.4 33.5 88.9 78.8 39.8 wo. Iterative Plan Refine. 14.6 34.5 34.3 88.8 80.4 40.0 wo. Sub-answer Conv. 14.5 34.7 34.2 90.0 78.2 40.4 Table 3: Automated evaluation results on the QTSUMM test sets. BL denotes BLEU, R-L denotes ROUGE-L, BS denotes BERTScore, ME denotes METEOR, TAcc denotes TAPAS-Acc. We use ACU as the ranking indicator of model performance. The automated evaluation results on FETAQA are shown in Table 7 in Appendix. models can effectively understand this table format. To ensure fair comparison, for all modules in the baseline systems and TAPERA, we use a one-shot sample if in-context samples are required in the prompts (the first one provided in the official implementation when multiple samples are available). The prompt for each module of TAPERA can be found in the Appendix. 5.5 Main Experimental Results Table 2 illustrates the results of human evaluation; and Table 3 and Table 7 show the automated evaluation results for QTSUMM and FETAQA, respectively. We draw the following conclusions: Effectiveness of TAPERA framework Among the evaluated systems, TAPERA achieves the best performance in both faithfulnessand comprehensiveness-level human evaluation, demonstrating the effectiveness of the entire framework. Specifically, TAPERA improves the best baseline in comprehensiveness, Blueprint (which is the worst among baselines in faithfulness) by 5.4%; Similarly, TAPERA improves the best baseline in faithfulness, Dater (one of the worst among baselines in comprehensiveness) by 5.0%. Moreover, the removal of any component from TAPERA leads to a degradation in both comprehensiveness and faithfulness, which underscores the necessity of each module in the system. 
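The reference-based scores reported in Table 3 and Table 7 can be reproduced with standard open-source packages; the snippet below is one possible implementation of three of them (BLEU via SacreBLEU, ROUGE-L, and BERTScore), not the paper's exact evaluation script, and the variable names are illustrative.

```python
# One possible way to compute three of the automated metrics in Table 3
# (BLEU via SacreBLEU, ROUGE-L, BERTScore); not the paper's exact script.
import sacrebleu
from rouge_score import rouge_scorer
from bert_score import score as bert_score

def automatic_scores(predictions, references):
    """predictions/references: parallel lists of answer strings."""
    bleu = sacrebleu.corpus_bleu(predictions, [references]).score

    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
    rouge_l = sum(
        scorer.score(ref, pred)["rougeL"].fmeasure
        for pred, ref in zip(predictions, references)
    ) / len(predictions)

    _, _, f1 = bert_score(predictions, references, lang="en")
    return {"BLEU": bleu, "ROUGE-L": rouge_l, "BERTScore-F1": f1.mean().item()}
```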
Mismatch between automated evaluation and human evaluation Despite receiving the best scores in human evaluation, TAPERA achieves relatively low scores under all automated evaluation metrics. This significant disparity between human evaluation and automated evaluation is consistent with prior work (Xu et al., 2023b), suggesting that future work should focus on developing more reliable automatic evaluation methods. 6 Human Analysis of TAPERA To obtain a deeper insight into the effectiveness and potential of TAPERA, we conduct a human analy12830 Question Type Representative Question Description Open-ended Questions Summarize the basic information of the episode(s) written by Damon Lindelof These questions require a comprehensive summary and often have initial plans that lack clarity or specificity. Subjective Analysis How did the performance of Tom Brady in terms of passing yards during the Regular Season 2011 compare with other quarterbacks listed in 2011? These questions involve personal opinions or interpretations, often comparing and contrasting different elements. Ambiguous Queries Who were the top three scorers for the 1961-62 Michigan Wolverines men’s basketball team and how many points did they score? Queries that are not clearly defined or have multiple interpretations, leading to vague or inaccurate responses. Segmentation Errors What was the overall success of the songs performed in non-English languages in terms of placement and points secured, and were there any notable exceptions to this trend? These involve errors in dividing or categorizing the question properly, often leading to partial or incorrect analysis. Table 4: Case studies on four categories of errors from the QA-based content plan generation component. Having human-in-the loop plan generation is a promising direction for improving on these categories. Error Type Representative Question Explanation Missing or Incorrect Answer in Complex Structure Questions What are the top three highest mountains in Japan in terms of their elevation in meters and to which prefectures do they belong? This error occurs due to the model’s inability to accurately rank items based on a given table or misunderstand the “top three” requirement, leading to omission or incorrect inclusion. Misunderstanding the Query Who were the top three scorers for the 1961-62 Michigan Wolverines men’s basketball team and how many points did they score? The model identifies the top scorers but fails to provide detailed information (individual scores) as requested, indicating a gap in understanding the full scope of the question. Misunderstanding the Table How did the team with the lowest conference (Conf.) rank perform in terms of their overall record, Conf. record, PPG, and PAG, and who was their head coach and MVP? The model incorrectly understands data or key information presented in the table. This typically occurs when the model parses the table and misinterprets numerical values, rankings, or categorical data, leading to incorrect conclusions or predictions. Inability to Retrieve Relevant Information Which game had the highest and lowest attendance and what could possibly explain this variance? This error occurs when the model cannot access or process part of the required information, indicating a limitation in data retrieval or hypothesis generation. Calculation Error What is the average duration at the peak for the songs listed in the Adult Contemporary chart from 1961 to 2011? 
This error is due to incorrect numerical processing or arithmetic operations by the program, reflecting a flaw in computational accuracy. Table 5: Case studies on five categories of errors from the reasoning and answer generation modules. Having a better reasoning-enhanced components (or having stronger LLM) can mitigate these errors. sis on the QTSUMM dataset. Specifically, we collect examples where TAPERA’s output received a human evaluation score of 3 or lower in either faithfulness or comprehensiveness (234 out of 1,078 examples). For these erroneous examples, we provide human evaluators with the question, table, and finalized QA-based plan. The evaluators are asked to identify which module (i.e., content planner, table reasoner, answer generator, or a combination thereof) is responsible for the errors. Among the 234 filtered examples, 180 and 77 are marked as errors attributable to the table reasoner and content planner modules, respectively. Meanwhile, only 12 are marked as errors attributable to answer generator, demonstrating that GPT-3.5 performs well in generating answers that are consistent with the execution results or plans. Table 4 and Table 5 present a comprehensive error analysis of table reasoner and content planner, respectively. In the following subsections, we discuss our analysis of these two modules. 6.1 Analysis of Table Reasoner The modularity feature of TAPERA allows us to update the backbone of each component with more powerful LLMs (i.e., gpt-4-1106-preview). We investigate whether using more powerful LLMs as the backbone of table reasoner can effectively 12831 Human-Evaluation Criteria Loss Tie Win Faithfulness 15 20 42 Comprehensiveness 7 11 59 Table 6: Number of losses, ties, and wins for answers generated based on the user-modified plan compared to the original answers. We use the 77 QTSUMM examples marked as errors attributable to the content planner for evaluation. The display order of the two answers is randomly shuffled to evaluators during the evaluation. handle those erroneous examples made by a GPT3.5. Specifically, for 180 erroneous examples, we repeat the same procedure with the GPT-4-based table reasoner, providing it with the same table and sub-questions that led to errors with GPT-3.5. We then manually check the execution results of the GPT-4-based table reasoner. We find that 82 out of 180 cases are successfully handled by GPT-4. This significant improvement showcases the vast potential of TAPERA with the continuous advancements in LLMs. 6.2 User Study of Content Planner The plan-conditioned answer generation offers transparency and interpretability, as it enables users to identify which sub-plan or reasoning step leads to an undesired final answer. In real-world scenarios, it would be beneficial to develop systems that can satisfy users’ customized information needs through user-model interaction. Therefore, we conduct a user study where human evaluators are provided with content planner-caused erroneous examples. They are then asked to improve the plan by modifying the sub-questions (but not the subanswers). Then we use the user-modified plan as the initial plan (with all sub-answers set as pending) and apply the same iterative plan-refinement procedure to obtain the answer. This setting ensures that the quality of new answer is based solely on the plan itself, rather than table reasoner. We then manually check the quality of the newly generated answers by comparing them with the original ones. 
The human evaluation results show that user interaction with the content plan can improve answer generation in both comprehensiveness and faithfulness, as illustrated in Table 6. 6.3 Automated Evaluation Case Analysis To gain a deeper insight into the failure cases of automated evaluation systems for LFTQA, we conducted detailed human analyses by exploring the scenarios where automated evaluations fail. Specifically, we randomly sampled 100 model output pairs from TAPERA and 1-shot Direct on QTSUMM, where TAPERA received lower scores from at least 4 out of 6 metrics but achieved better results in human evaluations. We chose the 1-shot Direct system for comparison because it demonstrated the best performance in automated evaluations, as shown in Table 3. We meticulously analyzed these failure cases and identified four common scenarios where existing automated metrics tend to fall short: (1) system reveals additional reasoning-intensive information; (2) ground-truth answer is redundant; (3) original question is ambiguous; and (4) ground-truth answer is low-quality and misaligned the question. Detailed explanations for each failure scenarios are provided in Table 8 in Appendix. 7 Conclusion This work proposes a new modular based framework TAPERA for LFTQA, which decomposes the complex QA task into three sub-tasks: QAbased content plan generation, execution-based table reasoning and consistent answer generation. Human evaluation on two benchmark datasets indicate that TAPERA outperforms strong baselines in both comprehensiveness and faithfulness. This is because TAPERA has stronger table reasoning capacities, and our final answer is consistent with the table reasoning module outputs. We believe this work provides insights for future work in advancing controllable text generation in complex tasks that are often entangled with reasoning challenges. Limitations and Future Work There are still some limitations in our work: (1) Due to computational resource constraints, we mainly test our framework on GPT series LLMs. This is because our preliminary analysis indicates that few-shot learning with open-source LLMs (e.g., Llama-2 7B) has low performance on LFTQA and struggles to generate executable programs. Future work could explore on fine-tuning these opensource LLMs on LFTQA (Zha et al., 2023; Zhang et al., 2024; Zhuang et al., 2024). (2) Our human study indicates that TAPERA is significantly more faithful over all baselines, and the final answer is always consistent with the program output. However, our answer generator module is based on prompting LLMs and cannot avoid hallucination 12832 in theory. Replacing answer generator with toolaugmented LLMs (Schick et al., 2023) is a promising direction to further tackle hallucination. (3) The findings and conclusions of this paper are mainly based on human evaluation, as none of the current automatic evaluation metrics are predictive of human preference judgments. A possible future direction is to explore fine-grained evaluation metrics that focus on different aspects of generated long-form answers. Moreover, we believe that LLM-based evaluation can also be explored for the LFTQA task (Liu et al., 2023d; Min et al., 2023; Jiang et al., 2024). Acknowledgement Chen Zhao is supported by Shanghai Frontiers Science Center of Artificial Intelligence and Deep Learning, NYU Shanghai. We are grateful to the Google TRC program for their support. We would also like to thank the anonymous reviewers and area chairs for constructive feedback. 
References Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Neural module networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 39–48. Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72, Ann Arbor, Michigan. Association for Computational Linguistics. Wenhu Chen. 2023. Large language models are few(1)shot table reasoners. In Findings of the Association for Computational Linguistics: EACL 2023, pages 1120–1130, Dubrovnik, Croatia. Association for Computational Linguistics. Wenhu Chen, Jianshu Chen, Yu Su, Zhiyu Chen, and William Yang Wang. 2020a. Logical natural language generation from open-domain tables. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7929– 7942, Online. Association for Computational Linguistics. Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, and William Yang Wang. 2020b. Tabfact: A large-scale dataset for table-based fact verification. In International Conference on Learning Representations. Xinyun Chen, Maxwell Lin, Nathanael Schärli, and Denny Zhou. 2023. Teaching large language models to self-debug. Zhoujun Cheng, Haoyu Dong, Ran Jia, Pengfei Wu, Shi Han, Fan Cheng, and Dongmei Zhang. 2022a. FORTAP: Using formulas for numerical-reasoningaware table pretraining. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1150– 1166, Dublin, Ireland. Association for Computational Linguistics. Zhoujun Cheng, Haoyu Dong, Zhiruo Wang, Ran Jia, Jiaqi Guo, Yan Gao, Shi Han, Jian-Guang Lou, and Dongmei Zhang. 2022b. HiTab: A hierarchical table dataset for question answering and natural language generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1094–1110, Dublin, Ireland. Association for Computational Linguistics. Zhoujun Cheng, Tianbao Xie, Peng Shi, Chengzu Li, Rahul Nadkarni, Yushi Hu, Caiming Xiong, Dragomir Radev, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, and Tao Yu. 2023. Binding language models in symbolic languages. In The Eleventh International Conference on Learning Representations. Hoa Trang Dang. 2006. DUC 2005: Evaluation of question-focused summarization systems. In Proceedings of the Workshop on Task-Focused Summarization and Question Answering, pages 48–55, Sydney, Australia. Association for Computational Linguistics. Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. 2019. Eli5: Long form question answering. arXiv preprint arXiv:1907.09190. Tanya Goyal, Junyi Jessy Li, and Greg Durrett. 2023. News summarization and evaluation in the era of gpt-3. Vivek Gupta, Maitrey Mehta, Pegah Nokhiz, and Vivek Srikumar. 2020. INFOTABS: Inference on tables as semi-structured data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2309–2324, Online. Association for Computational Linguistics. Jonathan Herzig, Pawel Krzysztof Nowak, Thomas Müller, Francesco Piccinno, and Julian Eisenschlos. 2020. TaPas: Weakly supervised table parsing via pre-training. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4320–4333, Online. 
Association for Computational Linguistics. Fantine Huot, Joshua Maynez, Shashi Narayan, Reinald Kim Amplayo, Kuzman Ganchev, Annie Priyadarshini Louis, Anders Sandholm, Dipanjan Das, and Mirella Lapata. 2023. Text-blueprint: An interactive platform for plan-based conditional generation. In Proceedings of the 17th Conference of 12833 the European Chapter of the Association for Computational Linguistics: System Demonstrations, pages 105–116, Dubrovnik, Croatia. Association for Computational Linguistics. Mohit Iyyer, Wen-tau Yih, and Ming-Wei Chang. 2017. Search-based neural structured learning for sequential question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1821– 1831, Vancouver, Canada. Association for Computational Linguistics. Dongfu Jiang, Yishan Li, Ge Zhang, Wenhao Huang, Bill Yuchen Lin, and Wenhu Chen. 2024. Tigerscore: Towards building explainable metric for all text generation tasks. Zhengbao Jiang, Yi Mao, Pengcheng He, Graham Neubig, and Weizhu Chen. 2022. OmniTab: Pretraining with natural and synthetic data for few-shot tablebased question answering. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 932–942, Seattle, United States. Association for Computational Linguistics. Kezhi Kong, Jiani Zhang, Zhengyuan Shen, Balasubramaniam Srinivasan, Chuan Lei, Christos Faloutsos, Huzefa Rangwala, and George Karypis. 2024. Opentab: Advancing large language models as opendomain table reasoners. In The Twelfth International Conference on Learning Representations. Kalpesh Krishna, Erin Bransom, Bailey Kuehl, Mohit Iyyer, Pradeep Dasigi, Arman Cohan, and Kyle Lo. 2023. Longeval: Guidelines for human evaluation of faithfulness in long-form summarization. arXiv preprint arXiv:2301.13298. Kalpesh Krishna, Aurko Roy, and Mohit Iyyer. 2021. Hurdles to progress in long-form question answering. arXiv preprint arXiv:2103.06332. Liang Li, Ruiying Geng, Chengyang Fang, Bing Li, Can Ma, Binhua Li, and Yongbin Li. 2023. Planthen-seam: Towards efficient table-to-text generation. In Findings of the Association for Computational Linguistics: EACL 2023, pages 205–219, Dubrovnik, Croatia. Association for Computational Linguistics. Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 150–157. Ao Liu, Haoyu Dong, Naoaki Okazaki, Shi Han, and Dongmei Zhang. 2022a. PLOG: Table-to-logic pretraining for logical table-to-text generation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 5531– 5546, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Danyang Liu, Mirella Lapata, and Frank Keller. 2023a. Visual storytelling with question-answer plans. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 5800–5813, Singapore. Association for Computational Linguistics. Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, and Jian-Guang Lou. 2022b. TAPEX: Table pre-training via learning a neural SQL executor. In International Conference on Learning Representations. Yixin Liu, Alex Fabbri, Pengfei Liu, Yilun Zhao, Linyong Nan, Ruilin Han, Simeng Han, Shafiq Joty, Chien-Sheng Wu, Caiming Xiong, and Dragomir Radev. 
2023b. Revisiting the gold standard: Grounding summarization evaluation with robust human evaluation. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4140–4170, Toronto, Canada. Association for Computational Linguistics. Yixin Liu, Alexander Fabbri, Yilun Zhao, Pengfei Liu, Shafiq Joty, Chien-Sheng Wu, Caiming Xiong, and Dragomir Radev. 2023c. Towards interpretable and efficient automatic reference-based summarization evaluation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 16360–16368, Singapore. Association for Computational Linguistics. Yixin Liu, Alexander R. Fabbri, Jiawen Chen, Yilun Zhao, Simeng Han, Shafiq Joty, Pengfei Liu, Dragomir Radev, Chien-Sheng Wu, and Arman Cohan. 2023d. Benchmarking generation and evaluation capabilities of large language models for instruction controllable summarization. Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Koh, Mohit Iyyer, Luke Zettlemoyer, and Hannaneh Hajishirzi. 2023. FActScore: Fine-grained atomic evaluation of factual precision in long form text generation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12076–12100, Singapore. Association for Computational Linguistics. Linyong Nan, Lorenzo Jaime Flores, Yilun Zhao, Yixin Liu, Luke Benson, Weijin Zou, and Dragomir Radev. 2022a. R2D2: Robust data-to-text with replacement detection. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 6903–6917, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Linyong Nan, Chiachun Hsieh, Ziming Mao, Xi Victoria Lin, Neha Verma, Rui Zhang, Wojciech Kry´sci´nski, Hailey Schoelkopf, Riley Kong, Xiangru Tang, Mutethia Mutuma, Ben Rosand, Isabel Trindade, Renusree Bandaru, Jacob Cunningham, Caiming Xiong, Dragomir Radev, and Dragomir Radev. 2022b. FeTaQA: Free-form table question answering. Transactions of the Association for Computational Linguistics, 10:35–49. 12834 Linyong Nan, Ellen Zhang, Weijin Zou, Yilun Zhao, Wenfei Zhou, and Arman Cohan. 2023. On evaluating the integration of reasoning and action in llm agents with database question answering. Shashi Narayan, Joshua Maynez, Reinald Kim Amplayo, Kuzman Ganchev, Annie Louis, Fantine Huot, Anders Sandholm, Dipanjan Das, and Mirella Lapata. 2023. Conditional generation with a questionanswering blueprint. Transactions of the Association for Computational Linguistics, 11:974–996. OpenAI. 2023. Gpt-4 technical report. ArXiv, abs/2303.08774. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1470– 1480, Beijing, China. Association for Computational Linguistics. Sohan Patnaik, Heril Changwal, Milan Aggarwal, Sumit Bhatia, Yaman Kumar, and Balaji Krishnamurthy. 2024. CABINET: Content relevance-based noise reduction for table question answering. 
In The Twelfth International Conference on Learning Representations. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191, Brussels, Belgium. Association for Computational Linguistics. Abhilash Potluri, Fangyuan Xu, and Eunsol Choi. 2023. Concise answers to complex questions: Summarization of long-form answers. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9709–9728, Toronto, Canada. Association for Computational Linguistics. Ratish Puduppully, Li Dong, and Mirella Lapata. 2018. Data-to-text generation with content selection and planning. In AAAI Conference on Artificial Intelligence. Ratish Puduppully and Mirella Lapata. 2021. Data-totext generation with macro planning. Transactions of the Association for Computational Linguistics, 9:510– 527. Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761. Yixuan Su, David Vandyke, Sihui Wang, Yimai Fang, and Nigel Collier. 2021. Plan-then-generate: Controlled data-to-text generation via planning. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 895–909, Punta Cana, Dominican Republic. Association for Computational Linguistics. Xiangru Tang, Yiming Zong, Jason Phang, Yilun Zhao, Wangchunshu Zhou, Arman Cohan, and Mark Gerstein. 2024. Struc-bench: Are large language models really good at generating complex structured data? Hugo Touvron, Louis Martin, Kevin R. Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Daniel M. Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony S. Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel M. Kloumann, A. V. Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, R. Subramanian, Xia Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zhengxu Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and fine-tuned chat models. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems. Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I. Wang, Victor Zhong, Bailin Wang, Chengzu Li, Connor Boyle, Ansong Ni, Ziyu Yao, Dragomir Radev, Caiming Xiong, Lingpeng Kong, Rui Zhang, Noah A. Smith, Luke Zettlemoyer, and Tao Yu. 2022. UnifiedSKG: Unifying and multi-tasking structured knowledge grounding with text-to-text language models. 
In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 602–631, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Fangyuan Xu, Yixiao Song, Mohit Iyyer, and Eunsol Choi. 2023a. A critical evaluation of evaluations for long-form question answering. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 12835 pages 3225–3245, Toronto, Canada. Association for Computational Linguistics. Fangyuan Xu, Yixiao Song, Mohit Iyyer, and Eunsol Choi. 2023b. A critical evaluation of evaluations for long-form question answering. arXiv preprint arXiv:2305.18201. Yunhu Ye, Binyuan Hui, Min Yang, Binhua Li, Fei Huang, and Yongbin Li. 2023. Large language models are versatile decomposers: Decomposing evidence and questions for table-based reasoning. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’23, page 174–184, New York, NY, USA. Association for Computing Machinery. Haoxuan You, Rui Sun, Zhecan Wang, Long Chen, Gengyu Wang, Hammad Ayyubi, Kai-Wei Chang, and Shih-Fu Chang. 2023. IdealGPT: Iteratively decomposing vision and language reasoning via large language models. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 11289–11303, Singapore. Association for Computational Linguistics. Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, Tao Zhang, Chen Zhou, Kaizhe Shou, Miao Wang, Wufang Zhu, Guoshan Lu, Chao Ye, Yali Ye, Wentao Ye, Yiming Zhang, Xinglong Deng, Jie Xu, Haobo Wang, Gang Chen, and Junbo Zhao. 2023. Tablegpt: Towards unifying tables, nature language and commands into one gpt. Tianshu Zhang, Xiang Yue, Yifei Li, and Huan Sun. 2024. Tablellama: Towards open large generalist models for tables. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations. Yilun Zhao, Yunxiang Li, Chenying Li, and Rui Zhang. 2022a. MultiHiertt: Numerical reasoning over multi hierarchical tabular and textual data. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6588–6600, Dublin, Ireland. Association for Computational Linguistics. Yilun Zhao, Hongjun Liu, Yitao Long, Rui Zhang, Chen Zhao, and Arman Cohan. 2023a. Knowledgemath: Knowledge-intensive math word problem solving in finance domains. Yilun Zhao, Yitao Long, Hongjun Liu, Linyong Nan, Lyuhao Chen, Ryo Kamoi, Yixin Liu, Xiangru Tang, Rui Zhang, and Arman Cohan. 2023b. Docmath-eval: Evaluating numerical reasoning capabilities of llms in understanding long documents with tabular data. Yilun Zhao, Linyong Nan, Zhenting Qi, Rui Zhang, and Dragomir Radev. 2022b. ReasTAP: Injecting table reasoning skills during pre-training via synthetic reasoning examples. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9006–9018, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Yilun Zhao, Zhenting Qi, Linyong Nan, Lorenzo Jaime Flores, and Dragomir Radev. 2023c. LoFT: Enhancing faithfulness and diversity for table-to-text generation via logic form control. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 554– 561, Dubrovnik, Croatia. 
Association for Computational Linguistics. Yilun Zhao, Zhenting Qi, Linyong Nan, Boyu Mi, Yixin Liu, Weijin Zou, Simeng Han, Ruizhe Chen, Xiangru Tang, Yumo Xu, Dragomir Radev, and Arman Cohan. 2023d. QTSumm: Query-focused summarization over tabular data. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 1157–1172, Singapore. Association for Computational Linguistics. Yilun Zhao, Haowei Zhang, Shengyun Si, Linyong Nan, Xiangru Tang, and Arman Cohan. 2023e. Investigating table-to-text generation capabilities of large language models in real-world information seeking scenarios. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track, pages 160–175, Singapore. Association for Computational Linguistics. Yilun Zhao, Chen Zhao, Linyong Nan, Zhenting Qi, Wenlin Zhang, Xiangru Tang, Boyu Mi, and Dragomir Radev. 2023f. RobuT: A systematic study of table QA robustness against human-annotated adversarial perturbations. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6064– 6081, Toronto, Canada. Association for Computational Linguistics. Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2sql: Generating structured queries from natural language using reinforcement learning. Alex Zhuang, Ge Zhang, Tianyu Zheng, Xinrun Du, Junjie Wang, Weiming Ren, Stephen W. Huang, Jie Fu, Xiang Yue, and Wenhu Chen. 2024. Structlm: Towards building generalist models for structured knowledge grounding. A Appendix 12836 Content Planning (Question Decomposition) ### Instruction: Given a complex question about a table, your task is to assess the number of distinct facts the question seeks. If the question encompasses more than one fact, it necessitates deconstruction into a series of sequential sub-questions. This process involves breaking down the complexity to address each fact individually. On the other hand, if the question is focused on a single fact, your response should involve presenting the question as is, without further decomposition. ### Example Question: Which song earned the highest points in the Bundesvision Song Contest 2010, and how many points did it secure? ### Example Response (in numbered list format): 1. Which song earned the highest points in the Bundesvision Song Contest 2010? 2. How many points did the song secure? ### Target Question: {question} ### Target Response (in numbered list format): 1. Figure 3: Given a complex question over the given table, the content planner decomposes the question into a series of step-by-step sub-questions. System BL R-L ME BS TAcc ACU Blueprint 28.7 51.2 58.0 90.2 84.1 53.0 0-shot Direct 28.8 50.1 56.6 89.1 87.8 53.3 Chain-of-Thought 28.2 51.0 56.9 85.7 88.4 53.6 1-shot Direct 27.4 50.3 57.2 87.9 83.8 54.2 REFACTOR 26.2 53.6 57.2 87.4 90.2 54.8 Dater 29.8 54.0 59.4 88.2 86.3 56.8 TAPERA 29.5 53.4 58.2 86.1 87.2 55.2 wo. Table Reasoner 28.8 53.1 57.7 87.2 86.9 55.0 wo. Sub-answer Conv. 28.2 52.9 58.0 88.1 85.4 55.9 wo. Iterative Plan Refine. 28.7 53.4 56.2 85.9 86.3 55.3 Table 7: Automated evaluation results on the FETAQA test sets. BL denotes BLEU, R-L denotes ROUGE-L, BS denotes BERTScore, ME denotes METEOR, TAcc denotes TAPAS-Acc. We use ACU as the ranking indicator of model performance. 12837 Executable Python Program Generation ### Instruction: Given the table and a related question, your task is to develop an executable Python program to process the table data and provide the answer. 
The program should be designed to analyze the table contents and return the answer in string format if it is directly available within the table. If the table does not contain the necessary information to answer the question, the program should be programmed to return ‘None’. ### Example: Question: Which country has won the OGAE Video Contest multiple times? Table Title: OGAE Video Contest - Winners Table: "header": ["Year", "Country", "Video", "Performer", "Points", "City"], "rows": [ ["2003", "France", "\"Fan\"", "Pascal Obispo", "122", "Turkey Istanbul"], ["2004", "Portugal", "\"Cavaleiro Monge\"", "Mariza", "133", "France Fontainebleau"], ["2005", "Ukraine", "\"I Will Forget You\"", "Svetlana Loboda", "171", "Portugal Lisbon"], ["2006", "Italy", "\"Contromano\"", "Nek", "106", "Turkey Izmir"], ["2007", "Russia", "\"LML\"", "Via Gra", "198", "Italy Florence"], ["2008", "Russia", "\"Potselui\"", "Via Gra", "140", "Russia Moscow"] ] ### Example Response: ```python def multiple_winners_in_OGAE_contest(table): from collections import Counter # Extract the column for countries country_column_index = table["header"].index("City") countries = [row[country_column_index] for row in table["rows"]] # Count the victories for each country victory_count = Counter(countries) # Filter and return countries with multiple victories multiple_victories = [country for country, count in victory_count.items() if count > 1] return ', '.join(multiple_victories) if multiple_victories else None ``` ### Target: Question: {question} Table Title: {table_title} Table: {table} ### Target Response: ```python Figure 4: Given a sub-question in QA-based plan and a table, the execution-based table reasoner generates an executable Python program as an intermediate reasoning step to enhance table reasoning. 12838 Sub-Answer Generation ### Instruction: Given a specific question along with its corresponding answer, your objective is to construct a comprehensive, well-structured response in a single, elongated sentence. The response should be formulated directly based on the provided answer, ensuring that it thoroughly addresses the query. ### Example QA: Question: Which country has won the OGAE Video Contest multiple times? Answer: Russia, France, and Belgium. ### Example Response: Russia, France, and Belgium have each won the OGAE Video Contest multiple times. ### Target QA: Question: {question} Answer: {answer} ### Target Response: Figure 5: Given the question and execution result of the generated Python program, the answer generator generate a sentence-long answer. Final Answer Generation ### Instruction: Given a question and a list of relevant facts, each containing part of the information needed to answer the question, your task is to synthesize these facts into a coherent, long sentence that fully addresses the question. The goal is to integrate the individual pieces of information from the list seamlessly, creating a comprehensive response that encapsulates all aspects of the question in a clear and concise manner. ### Example Question: Which country has won the OGAE Video Contest multiple times, and what were the corresponding years, winning songs and points scored? ### Example QA-based Plan: Sub-Question 1: Which country has won the OGAE Video Contest multiple times? Sub-Answer 1: Russia, France, and Belgium have each won the OGAE Video Contest multiple times. Sub-Question 2: What were the corresponding years of their victories? 
Sub-Answer 2: Russia won the contest in 2007, 2008, and 2009, France in 2003, 2011, and 2014, and Belgium in 2013 and 2017. Sub-Question 3: What were the winning songs for each country? Sub-Answer 3: Russia’s winning songs were “LML” in 2007, “Potselui” in 2008, and “Karma” in 2009; France’s were “Fan” in 2003, “Lonely Lisa” in 2011, and “Tourner dans le vide” in 2014; Belgium’s were “Papaoutai” in 2013 and “Mud Blood” in 2017. ### Example Response: Russia, France, and Belgium have each won the OGAE Video Contest multiple times. Specifically, Russia triumphed in 2007 with "LML" scoring 198 points, in 2008 with "Potselui" scoring 140 points, and in 2009 with "Karma" scoring 142 points. France secured victories in 2003 with "Fan" scoring 122 points, in 2011 with "Lonely Lisa" scoring 96 points, and in 2014 with "Tourner dans le vide" scoring 141 points. Belgium won in 2013 with "Papaoutai" scoring 144 points and again in 2017 with "Mud Blood" scoring 184 points. ### Target Question: {question} ### Target QA-based Plan: {qa_plan} ### Target Response: Figure 6: The answer generator generates the final answer based on the finalized QA-based plan. 12839 Failure Scenarios Example Explanation System reveals additional reasoning-intensive information Question: Compare the profit information of Sinopec Group and Apple in 2012. Ground-truth answer: Sinopec Group earned $6,205 million in profit in 2012, while Apple earned $57,501 million in profit, which is much more than Sinopec Group. TaPERA excels at integrating additional, reasoningintensive information not present in the ground-truth answers, as demonstrated by the inclusion of “approximately 9 times greater than that of Sinopec Group” in the example. This capability is attributed to its effective content planning through a QA-based plan and enhanced reasoning mechanisms facilitated by the Table Reasoner. Nonetheless, the capacity of current evaluation metrics to capture such supplementary information is limited. Ground truth answer is redundant Question: How many aircraft models were first introduced between 1980 and 1985, and what are their build years? Ground Truth: The number of aircraft models that were initially launched within the timeframe extending from the year 1980 up to and including the year 1985 amounts to a total of three distinct models. Specifically, the construction years for these models are identified as the years 1978, 1979, and 1971 respectively. We found that the answers generated by TaPERA are more concise compared to the ground truth and other LLM-based models, yet they retain all essential information relevant to the question. This finding is supported by Table 2 in the manuscript. The “consistency” feature of TaPERA is key to this achievement, allowing it to produce answers that not only align with the QA-based plan but also avoid the inclusion of any irrelevant or redundant details. In real-world applications, this quality is particularly valued, as human users often prefer answers that are both concise and accurate. Question is ambiguous Question: How did countries that incorporate dual languages in their songs perform in this competition? Explanation: The question is ambiguous and can lead to multiple valid answers. In such cases, LLM-based systems tend to generate more words than TaPERA, increasing the likelihood of overlapping information between the model output and the ground truth. 
However, recent studies have shown that lengthy answers for long-form QA are not always necessary, and humans generally prefer concise answers that still fully address the question.
Ground truth answer is misaligned with the question. Explanation: The annotated ground truth fails to fully or accurately address the question.
Table 8: Failure case analysis of automated evaluation systems for the LFTQA task.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12841–12858 August 11-16, 2024 ©2024 Association for Computational Linguistics FinanceMATH: Knowledge-Intensive Math Reasoning in Finance Domains Yilun Zhao∗1 Hongjun Liu∗2,3 Yitao Long3 Rui Zhang4 Chen Zhao2,3 Arman Cohan1,5 1Yale University 2NYU Shanghai 3New York University 4Penn State University 5Allen Institute for AI github.com/yale-nlp/FinanceMath financemath-acl2024.github.io Abstract We introduce FinanceMATH, a novel benchmark designed to evaluate LLMs' capabilities in solving knowledge-intensive math reasoning problems. Compared to prior works, this study features three core advancements. First, FinanceMATH includes 1,200 problems with a hybrid of textual and tabular content. These problems require college-level knowledge in the finance domain for effective resolution. Second, we provide expert-annotated, detailed solution references in Python program format, ensuring a high-quality benchmark for LLM assessment. We also construct a finance-domain knowledge bank and investigate various knowledge integration strategies. Finally, we evaluate a wide spectrum of 51 LLMs with both Chain-of-Thought and Program-of-Thought prompting methods. Our experimental results reveal that the current best-performing system (i.e., GPT-4o) achieves only 60.9% accuracy using CoT prompting, leaving substantial room for improvement. Moreover, while augmenting LLMs with external knowledge can improve model performance (e.g., 47.5% → 54.5% for Gemini-1.5-Pro), their accuracy remains significantly lower than the estimated human expert performance of 92%. We believe that FinanceMATH can advance future research in the area of domain-specific knowledge retrieval and integration, particularly within the context of solving reasoning-intensive tasks. 1 Introduction Large language models (LLMs) have been increasingly recognized for their potential for complex problem-solving in real-world scenarios (OpenAI, 2023a; Touvron et al., 2023; Jiang et al., 2023). Solving math reasoning problems has emerged as a key method for assessing LLMs' capabilities (Roy
∗Equal Contribution
Question: In 2018, Company A had a passive equity ownership interest of 15% in Company B. By the close of 2018, Company A decided to increase its ownership in Company B to 50%, effective as of 1st January 2019, through a cash purchase. There have been no financial transactions between Company A and Company B. Based on the data in the following table with the financial statements for both companies, what would be the changes in the total liabilities for Company A under the proportionate consolidation method from 2018 to 2019?
Company A Company B
2018 2019 2018 2019
Revenue 5,000 7,000 2,000 2,500
Cost 2,000 2,300 1,200 1,300
Operating income 3,000 4,700 800 1,200
Net profit 1,650 2,300 460 820
Dividends paid 230 410
Total assets 4,000 6,000 1,000 1,100
Total liabilities 1,200 900 600 650
Equity 2,800 5,100 400 450
[Figure panel with the annotated knowledge term and mathematical definition; text truncated in extraction]
Model Output with Chain-of-Thought Prompting: First, we know from the table that the total liabilities for company A in 2018 is 1200. (...abbreviate…) Therefore, the final answer is 1,200
Model Output with Program-of-Thought Prompting: def solution(): A_liabilities_2018 = 1200 (…abbreviate) return change
Figure 1: An example of FinanceMATH.
To answer the given question, LLMs are required to comprehend specialized financial terms, such as “passive equity ownership interest” and “proportionate consolidation method”. Additionally, they must interpret tabular data within the question and accurately identify question-relevant data points in the table. and Roth, 2015; Amini et al., 2019; Cobbe et al., 2021; Chen et al., 2023c), as it demands both understanding contextual information and reasoning over complex logics. Recent advancements in LLMs have led to remarkable progress in solving fundamental math problems (Wei et al., 2022; Lewkowycz et al., 2022; Chen et al., 2023b; Wang et al., 2023; Luo et al., 2023a; Azerbayev et al., 2024). However, as illustrated in Table 1, existing math reasoning bench12842 Dataset Domain Level Source # Examples Table KnowledgeSolution Format Reasoning? Intensive? MAWPS (Koncel-Kedziorski et al., 2016) Math Elem. School Generated 3,320 ✗ ✗ Text ASDiv (Miao et al., 2020) Math Elem. School Internet 2,305 ✗ ✗ Math Equation SVAMP (Patel et al., 2021) Math Elem. School ASDiv 1,000 ✗ ✗ Math Equation Math23K (Wang et al., 2017) Math Elem. School Internet 23,162 ✗ ✗ Math Equation GSM8K (Cobbe et al., 2021) Math Middle School CrowdSource 8,500 ✗ ✗ Text MATH (Hendrycks et al., 2021) Math High School Competition 12,500 ✗ ✗ Text AQuA (Ling et al., 2017) Math College GMAT, GRE 100,000 ✗ ✗ Text MathQA (Amini et al., 2019) Math College AQuA 100,000 ✗ ✗ Math Equation MathQA-Python (Austin et al., 2021) Math College AQuA 23,914 ✗ ✗ Python Program MathVista (Lu et al., 2024) Math Elem. to College Internet+Expert 6,141 Few Few Text TabMWP (Lu et al., 2023) Math Middle School Textbooks 38,431 ✓ ✗ Text FinQA (Chen et al., 2021) Finance College Expert 8,281 ✓ ✗ Math Program TAT-QA (Zhu et al., 2021) Finance College Expert 16, 552 ✓ ✗ Text MultiHiertt (Zhao et al., 2022) Finance College Expert 10,440 ✓ ✗ Math Equation DocMath-Eval (Zhao et al., 2023a) Finance College Expert 5,974 ✓ Few Python Program TheoremQA (Chen et al., 2023c) STEM College Internet+Expert 800 ✗ ✓ Text FinanceMATH (ours) Finance College Internet+Expert 1,200 ✓ ✓ Python Program Table 1: Comparison between FinanceMATH and existing math reasoning datasets. FinanceMATH is distinguished by three unique characteristics: (1) Knowledge-Intensive: Problems necessitate domain-specific knowledge, complemented by a financial knowledge bank for research facilitation; (2) Table Reasoning: 40.2% of problems incorporate table information, requiring models to understand table structure as well as interpret and reason over tabular data; (3) Expert Annotation: Each problem is accompanied by a detailed, expert-annotated Python-formatted solution. Such solution annotation combines the explicitness of code execution with the descriptive power of natural language explanations in python comment format, offering a more effective and adaptable solution representation for complex math reasoning problems in FinanceMATH. marks typically do not require specialized domain knowledge. This becomes a notable shortcoming when considering practical applications of LLMs. Measuring progress in specialized areas such as finance and healthcare typically involves addressing domain-specific and knowledge-intensive problems, which goes beyond the scope of general mathematical reasoning. Recognizing this gap in the existing benchmarks, we focus on the finance domain. 
We chose this domain because, as illustrated in Figure 1, it often involves scenarios requiring not only basic mathematical skills but also a deep understanding of financial concepts (Yang et al., 2023; Xie et al., 2023; Wu et al., 2023). Additionally, the finance domain frequently employs tables to represent data (Zhu et al., 2021; Chen et al., 2021; Zhao et al., 2022; Li et al., 2022b,a; Zhao et al., 2023b,d), which adds another layer of complexity to knowledge-intensive problem solving. We introduce FinanceMATH, the first benchmark tailored for evaluating LLMs in the context of knowledge-intensive math reasoning in the finance domain. The dataset contains 1,200 problems that cover a broad range of finance subareas, with 40.2% of the problems necessitating data interpretation over tabular data. Each problem is accompanied by expert-annotated, Python-formatted solutions, providing a comprehensive reference for evaluating the LLMs' performance. Additionally, we collect and release a comprehensive knowledge bank, which includes detailed definitions and explanations for 864 financial terms and concepts, facilitating future research on improving knowledge-intensive problem-solving through knowledge retrieval. We evaluate a wide spectrum of open-source and proprietary LLMs, specifically, 51 models from 16 organizations. Notably, this includes math-specific (Luo et al., 2023a; Shao et al., 2024; Ying et al., 2024) and code-based (Guo et al., 2024; Luo et al., 2023b; AI@Mistral, 2024a; Lozhkov et al., 2024) LLMs, as well as mixture of experts (MoE) LLMs (Mistral.AI, 2023; Databricks, 2024). Two prompting methods, Chain-of-Thought (Wei et al., 2022) and Program-of-Thought (Chen et al., 2023b), are adopted for experiments. Our experimental results demonstrate a significant gap between existing LLMs and human experts. Specifically, the current best-performing system (i.e., GPT-4o) achieves only 60.9% accuracy with CoT prompting, which still lags far behind the human expert performance of 92% in the open-book setting. These results highlight the challenges of FinanceMATH, underscoring the need for further advancements in LLMs for knowledge-intensive problem-solving capabilities. Next, we investigate how to integrate domain-specific knowledge to enhance the problem-solving capabilities of LLMs. We investigate various popular knowledge integration strategies and reveal that including question-relevant knowledge in the prompt can consistently improve LLMs' performance. This provides insights for future work to develop more advanced knowledge-augmented strategies to realize higher performance gains. Our contributions are summarized below: • We propose FinanceMATH, the first knowledge-intensive math reasoning benchmark in finance domains, aimed at evaluating LLMs' abilities in knowledge-intensive math reasoning. • We conduct comprehensive evaluations using a diverse array of LLMs, uncovering a substantial performance gap between the best-performing LLM (i.e., GPT-4o) and human experts. • We present a detailed analysis of augmenting LLMs with various knowledge integration strategies. This provides valuable insights for future work in knowledge-intensive problem solving. 2 FinanceMATH Benchmark In this section, we describe the dataset construction process for FinanceMATH. We begin by constructing a knowledge bank that includes well-formulated definitions of 864 financial terms.
We then instruct expert annotators to use knowledge terms within the constructed knowledge bank to create knowledge-intensive questions with a hybrid of textual and tabular content. 2.1 Knowledge Bank Construction We construct a knowledge bank that covers a wide range of 864 knowledge terms in the finance domain. It simplifies the creation of knowledgeintensive questions by annotators and enables the exploration of various topics within domain knowledge. The knowledge bank includes financedomain-specific terms (e.g., “exchange rate” and “net present value”) collected from Wikipedia. Each knowledge term is accompanied with their corresponding textual definitions and, where applicable, mathematical formulas in python format. An example of included knowledge terms is illustrated in Figure 2. We detail the the main processes for knowledge bank construction as follows: Knowledge Term: Exchange Rate Definition: An exchange rate is the value or price of one country's currency in relation to another currency. It determines how much of one currency can be exchanged for another and can fluctuate regularly based on market conditions, import and export demand, inflation, and a host of other economic factors. Mathematical Formula: def exchange_rate(original_currency, new_currency): return original_currency / new_currency Figure 2: An example of knowledge terms “Exchange Rate” included in the constructed knowledge bank. Knowledge Collection To construct a knowledge bank, we first collect knowledge relevant to the finance domain from Wikipedia using “finance” and “economics” as key search terms. After collecting the raw financial data, we adopt comprehensive heuristics, embedding-based methods to remove duplicates. This procedure ensures the uniqueness of each knowledge term in our bank. Automatic Knowledge Formulation To enhance the adaptability and usability of the knowledge bank, we incorporate a two-step automatic knowledge formulation process, making each piece of collected knowledge standardized and distilled into a clear, concise format. The primary motivation for using automatic knowledge formulation is cost efficiency and effectiveness. We have observed that GPT-3.5 models are adept at handling this straightforward task with minimal bias, as this process does not involve the addition of extraneous knowledge. We first prompt GPT-3.5 to reformulate the gathered information for each financial term into a concise, paragraph-long textual definition. Since some financial terms come with mathematical definitions, we address the issue of varied formula formats in the original sources (e.g., LaTeX and HTML). We instruct GPT-4 to transform these formulas into a unified python program format. Figure 2 illustrates an example of knowledge terms collected in the knowledge bank. Knowledge Bank Update and Maintenance After formulating knowledge using LLMs, during the dataset annotation stage (Section 2.2), we dynamically update and maintain the constructed knowledge bank, incorporating new knowledge that, although not initially covered, is essential for answering the annotated questions. Additionally, we remove any duplicate entries identified by the annotators. We eventually collect 864 pieces of financial knowledge in the knowledge bank, with 57.4% of 12844 the terms including Python-formatted mathematical definitions. 2.2 FinanceMATH Question Annotation For each financial term in the knowledge bank, we instruct annotators to create a corresponding math reasoning question, if applicable. 
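To make the structure of the knowledge bank described above concrete, the following is a minimal sketch of how an entry such as the "Exchange Rate" term in Figure 2 could be represented and looked up in code. The KnowledgeTerm dataclass, the KNOWLEDGE_BANK dictionary, and the lookup helper are illustrative assumptions for exposition, not the released data format or the authors' tooling.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class KnowledgeTerm:
    """One entry of the finance-domain knowledge bank (illustrative structure)."""
    name: str                    # e.g., "Exchange Rate"
    definition: str              # paragraph-long textual definition
    formula: Optional[Callable]  # Python-formatted mathematical definition, if any

def exchange_rate(original_currency: float, new_currency: float) -> float:
    # Python-formatted formula, mirroring the example shown in Figure 2.
    return original_currency / new_currency

KNOWLEDGE_BANK: Dict[str, KnowledgeTerm] = {
    "exchange rate": KnowledgeTerm(
        name="Exchange Rate",
        definition=("An exchange rate is the value or price of one country's "
                    "currency in relation to another currency."),
        formula=exchange_rate,
    ),
}

def lookup(term: str) -> Optional[KnowledgeTerm]:
    """Case-insensitive lookup, e.g., when checking whether a term already exists in the bank."""
    return KNOWLEDGE_BANK.get(term.strip().lower())
```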
The answer to the composed question should be a numeric value. The annotators are required to adhere to the following guidelines for a successful question annotation: Question Annotation If the annotators choose to adapt questions from textbooks or the Internet instead of creating their own from scratch, they are asked to adhere to copyright and license regulations, avoiding data from sites prohibiting copy and redistribution. Furthermore, they are required not only to modify the surface-level description of the question but also to change the associated numeric values. In light of the emerging concerns about data contamination in LLMs (Shi et al., 2024; Deng et al., 2024a), we instruct annotators to conduct a Google search for each annotated question, ensuring that no similar question appears on the first page of the search results. Additionally, we recognize that many financial problems involve tables, as shown in Figure 1. Such tabular data plays a crucial role in thoroughly understanding financial problems, and it presents unique challenges for LLMs in terms of comprehension and interpretation. Therefore, we encourage and reward annotators to include tables that are relevant and accurately represent the data pertinent to the questions. Finally, out of 1,200 questions, 674 are marked as having been adapted from existing resources, and 482 are accompanied with tabular data. Identifying Question-relevant Knowledge After a question is annotated, annotators must identify 1-3 key financial concepts for answering this question. They then search for each term in our constructed knowledge bank. If the term is included, they verify its context and details for relevance. If a term is absent or with low-quality definition, annotators receive a bonus for documenting the term, providing a brief explanation or definition and outlining its relevance to the problem. These identified terms are subsequently added or updated in the knowledge bank, resulting in a total of 123 new inclusions and 47 revisions. 2.3 FinanceMATH Solution Annotation As illustrated in Table 1, existing math reasoning benchmarks typically represent solutions using text or mathematical equations. However, solutions in text format often lack the precision and unambiguous nature required for computational problemsolving. Solutions in mathematical equations are explicit, but less descriptive, as the semantic meaning associated with each numeric value in the equations can be ambiguous. Moreover, these two formats are less adaptable for use in automated systems due to variations in language and difficulties in semantic parsing and execution. To overcome these limitations, we use Python programs, starting with “def solution():”, to represent solutions. Such Python program combines the explicitness of code execution with the descriptive power of annotated comments, offering a more effective and adaptable solution representation for complex math reasoning problems. Specifically, annotators are required to first define variables with meaningful names at the beginning of the Python function. These variables correspond to the key elements or quantities mentioned in the textual or tabular content of questions. The annotators then proceed to write a sequence of Python statements that logically solve the problem, step by step. To ensure the accuracy and functionality of the Python-format solutions, our annotation interface automatically executes the Python function. 
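As a rough illustration of this execution check, the sketch below runs an annotated `def solution():` program and accepts it only if it executes without errors and returns a numeric value. It is a stand-in for exposition, not the authors' annotation-interface code; the function names and the toy example solution are assumptions.

```python
from typing import Optional, Tuple

def validate_solution(solution_code: str) -> Tuple[bool, Optional[float], str]:
    """Execute a Python-format solution and check that it returns an int or a float."""
    namespace: dict = {}
    try:
        # NOTE: a real pipeline should sandbox this call; exec on untrusted code is unsafe.
        exec(solution_code, namespace)    # defines solution() in an isolated namespace
        answer = namespace["solution"]()  # run the annotated program
    except Exception as err:              # any execution error leads to rejection
        return False, None, f"execution error: {err}"
    if isinstance(answer, bool) or not isinstance(answer, (int, float)):
        return False, None, f"invalid return type: {type(answer).__name__}"
    return True, float(answer), "ok"

# A toy solution in the style of the Python-format annotations (not tied to Figure 1).
example = """
def solution():
    liabilities_2018 = 1200
    liabilities_2019 = 900
    return liabilities_2019 - liabilities_2018
"""
print(validate_solution(example))  # (True, -300.0, 'ok')
```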
This execution checks that the return type of the answer is either a float or an int and verifies that there are no execution errors. 2.4 Data Quality Validation We conduct a comprehensive validation protocol to ensure the high quality of FinanceMATH. For each example, we first assign another annotator to validate whether: 1) the question is meaningful and grammatically correct, 2) the associated knowledge terms are accurately annotated and complete, 3) the Python-format solution is logically correct and easy to understand. Validators are asked to revise examples that do not meet these standards. We also report the human evaluation scores over 200 randomly sampled examples. As illustrated in Table 5 in the Appendix, FinanceMATH has a high annotation quality.
Property Value
Knowledge Bank
# Knowledge Terms 864
Textual Definition Length (Median/Avg) 47.1 / 49.7
% w. Mathematical Definition 57.4%
FinanceMATH Dataset
Question Length (Median/Avg) 54.0 / 61.8
% Questions with Table 482 (40.2%)
# Rows per Table (Median/Avg) 3.0 / 3.0
# Columns per Table (Median/Avg) 4.0 / 5.0
# Knowledge Terms per Example (Median/Avg) 2.5 / 2.4
# Math Operations in Python Solution (Median/Avg) 5.0 / 5.6
# Lines in Python Solution (Median/Avg) 6.0 / 6.6
Development Set Size 200
Test Set Size 1,000
Table 2: Basic statistics of the constructed knowledge bank and FinanceMATH dataset.
Figure 3: Topic distribution of FinanceMATH: Derivatives 366 (30.5%), Accounting 258 (21.5%), Quantitative 241 (20.1%), Market 133 (11.1%), Portfolio 91 (7.6%), Management 5.4%, Issuance 46 (3.8%).
2.5 Data Statistics and Dataset Release Table 2 describes the basic statistics of FinanceMATH, with topic-type distribution shown in Figure 3. We randomly divide the dataset into two subsets: development and test. The development set contains 200 examples and is intended for model development validation. The test set comprises the remaining 1,000 examples and is designed for standard evaluation. To prevent data contamination (Shi et al., 2024; Sainz et al., 2023; Deng et al., 2024b), the answer for the test set will not be publicly released. Instead, we develop and maintain an online evaluation platform, allowing researchers to evaluate models and participate in a leaderboard. Following recent LLM reasoning benchmarks (Chen et al., 2023c; Yue et al., 2024; Lu et al., 2024), the main evaluation of FinanceMATH is conducted under a zero-shot setting on the test set to assess LLMs' capabilities to generate accurate answers without fine-tuning or few-shot demonstrations on our benchmark. 2.6 Human-level Performance Evaluation To provide a rough but informative estimate of human-level performance by non-experts and experts on FinanceMATH, we randomly sampled 50 examples from the validation set. We enroll two experts, both with the CFA license, and two non-experts to individually solve these questions. We first evaluate their performance in a closed-book setting, where the evaluators do not have access to the internet or textbooks and are required to finish the 50 questions within three hours. The non-expert evaluators achieve accuracy of 54% and 62% (average 58%), and the expert evaluators achieve accuracy of 76% and 70% (average 73%). We then transition to an open-book setting, where the evaluators are asked to use the internet and textbooks to correct their initial errors. This setting is designed to assess how external knowledge resources could enhance human problem-solving abilities and accuracy.
The non-expert evaluators improved their accuracy to 86% and 82% (average 84%). Similarly, the expert evaluators improved the accuracy to 94% and 90% (average 92%). 3 Evaluated Systems This section discusses the investigated LLMs and prompting methods in our work. 3.1 Large Language Models We evaluate following LLMs on FinanceMATH: • General: GPT-3.5&4 (OpenAI, 2022, 2023a, 2024), Gemini-1.5 (Gemini, 2024), Claude3&3.5 (Anthropic, 2024), Llama-2&3&3.1 (Touvron et al., 2023; AI@Meta, 2024), Mistral (Jiang et al., 2023), Phi-3 (Abdin et al., 2024), Gemma-1&2 (Team et al., 2024), WizardLM2 (Xu et al., 2023), Yi-1.5 (01.AI, 2023), Qwen2 (Bai et al., 2023), Command R+ (Cohere, 2024b), Aya (Cohere, 2024a), and GLM-4 (GLM et al., 2024). • Math-specific: WizardMath (Luo et al., 2023a), DeepSeek-Math (Shao et al., 2024), Mathtral (AI@Mistral, 2024b), and InternLMMath (Ying et al., 2024). • Code-based: DeepSeek-Coder-V1 (Guo et al., 2024), WizardCoder (Luo et al., 2023b), Codestral (AI@Mistral, 2024a), DeepSeek-Coder-V2 (also MoE architecture, DeepSeek-AI (2024)), and StarCoder2 (Lozhkov et al., 2024). 12846 Chain-of-Thought Prompt [System Input]: You are a financial expert, you are supposed to answer the given question. You need to first think through the problem step by step, documenting each necessary step. Then you are required to conclude your response with the final answer in your last sentence as “Therefore, the answer is {final answer}”. The final answer should be a numeric value. [User Input]: Question: {question} Let’s think step by step to answer the given question. Figure 4: Example of zero-shot CoT prompt used. • Mixture of Experts (MoE): Mixtral (Mistral.AI, 2023), WizardLM-2 (MoE, Xu et al. (2023)), DeepSeek-V2 (DeepSeek-AI, 2024), and DBRX (Databricks, 2024). We select the most recent checkpoint available as of August 1, 2024. The details of each evaluated model, including the exact model version we used, are presented in Table 7 in Appendix. The experiments for open-sourced LLMs were conducted using vLLM framework (Kwon et al., 2023). For all the experiments, we set temperature as 1.0, Top P as 1.0, and maximum output length as 512. 3.2 Prompting Methods Following Chen et al. (2023c) and Lu et al. (2024), we evaluate two established prompting methods, with examples of prompt illustrated in Figure 4 and Figure 6 in the Appendix, respectively. Chain-of-Thought The CoT method (Wei et al., 2022; Kojima et al., 2022) instructs the LLMs to articulate a step-by-step reasoning process. This leads to a detailed explanation that culminates in the final answer. Program-of-Thought Different from CoT, the PoT method (Chen et al., 2023b) disentangles computation from the reasoning process by prompting the LLMs to generate a structured program to represent the reasoning process. The final answer is then derived by executing the generated program with an external calculator. 4 Experiments 4.1 Experiment Setup Final Answer Extraction For LLM with CoT prompting, we adopt the answer extraction pipeline from Lu et al. (2024) and Chen et al. (2023c) to identify the final answer from the model’s output. For LLM with PoT prompting, we first extract the generated python solution from the model’s output. If this python solution is executable, we execute it to obtain the final answer. Once we obtain the final answer from model’s output, we compare it with the ground-truth answer for accuracy measurement. 
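The paper adopts the answer extraction pipeline of Lu et al. (2024) and Chen et al. (2023c); the sketch below is only a simplified stand-in for how such a step could work. It pulls the last numeric value following the "the answer is" phrase requested by the zero-shot CoT prompt in Figure 4 and scores it against the ground truth with a small numeric tolerance (the regular expression and tolerance values are assumptions, not the evaluated pipeline).

```python
import math
import re
from typing import Optional

ANSWER_PATTERN = re.compile(r"the answer is\s*\$?\s*(-?[\d,]*\.?\d+)", re.IGNORECASE)

def extract_final_answer(model_output: str) -> Optional[float]:
    """Return the last numeric value stated after 'the answer is' in a CoT response."""
    matches = ANSWER_PATTERN.findall(model_output)
    if not matches:
        return None
    return float(matches[-1].replace(",", ""))

def is_correct(predicted: Optional[float], gold: float, rel_tol: float = 1e-2) -> bool:
    """Compare the extracted prediction with the ground truth, allowing small rounding differences."""
    if predicted is None:
        return False
    return math.isclose(predicted, gold, rel_tol=rel_tol, abs_tol=1e-4)

output = "Let's think step by step. ... Therefore, the answer is -300."
print(extract_final_answer(output))                      # -300.0
print(is_correct(extract_final_answer(output), -300.0))  # True
```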
Tabular Data Serialization Following previous work on table-relevant tasks (Chen, 2023; Zhao et al., 2023c), we use Markdown format to present tabular data in math reasoning problems. In our preliminary study, we discovered that GPT-* and Llama-3 models can effectively understand such table representations. 4.2 Main Results Table 3 and Table 8 in Appendix illustrate the performance of the evaluated LLMs using CoT and PoT prompting methods on the FinanceMATH test and development sets, respectively. The experimental results demonstrate that FinanceMATH poses significant challenges to current LLMs. Even the best-performing LLM, GPT-4o, performs much worse than human experts. Specifically, the accuracy of GPT-4o using the CoT prompting method stands at 60.9%, falling short of the 92% accuracy achieved by expert evaluators in the open-book setting. This gap highlights the critical need for further advancements in LLMs, especially in complex problem solving within specialized domains that are knowledge-intensive. Open-source LLMs still significantly lag behind the most advanced versions of the three major families of proprietary LLMs. However, the two DeepSeek-V2 models are an exception. They achieve performance levels close to those of the best-performing proprietary models. This indicates the potential of open-source LLMs to close the performance gap with proprietary models in the near future, given continued innovation and community collaboration. Additionally, the proprietary LLMs and code-specific models typically achieve comparable or better performance when using PoT prompting compared to CoT prompting. For math-specific LLMs, InternLM2-Math-Plus surpasses its
Model Size Notes Portfolio Derivatives Accounting Quantitative Corporate Management Economics Avg.
PoT CoT PoT CoT PoT CoT PoT CoT PoT CoT PoT CoT PoT CoT PoT CoT Close-book Expert 73.0 Non-Expert 58.0 Open-book Expert 92.0 Non-Expert 84.0 Proprietary LLMs GPT-4o 75.0 45.8 58.8 55.4 60.3 69.4 82.9 66.3 77.8 69.4 62.5 58.9 67.0 56.9 67.0 60.9 Claude-3.5-Sonnet 73.6 55.6 54.1 51.8 66.7 69.9 75.6 63.9 72.2 72.2 64.3 58.9 62.4 60.6 64.8 60.6 Claude-3-Opus 66.7 56.9 53.5 45.2 62.1 59.8 79.5 64.9 72.2 83.3 51.8 46.4 59.6 45.0 62.9 54.7 GPT-4-Turbo 59.7 38.9 49.8 42.2 50.7 64.8 72.2 56.6 61.1 50.0 57.1 44.6 50.5 47.7 56.2 50.9 Gemini-1.5-Pro 68.1 50.0 53.1 30.7 56.6 55.2 69.8 57.6 58.3 63.9 51.8 55.4 50.5 44.0 58.2 47.0 GPT-4o-Mini 65.3 36.1 46.9 29.7 48.4 47.5 69.3 46.8 50.0 38.9 57.1 41.1 51.4 45.9 54.3 40.3 Gemini-1.5-Flash 69.4 33.3 43.6 28.7 52.0 48.9 67.8 49.8 58.3 61.1 50.0 37.5 47.7 34.9 53.6 40.1 Claude-3-Sonnet 59.7 37.5 37.0 28.4 48.0 43.4 66.8 48.8 47.2 55.6 48.2 33.9 48.6 35.8 49.4 38.6 Claude-3-Haiku 34.7 31.9 19.8 26.4 33.8 43.4 44.9 41.5 41.7 44.4 25.0 33.9 36.7 30.3 32.0 35.1 GPT-3.5-Turbo 47.2 25.0 24.4 16.5 29.2 29.2 51.2 33.2 27.8 22.2 37.5 21.4 32.1 23.8 34.3 24.6 Open-source LLMs DeepSeek-V2 236B MoE 72.2 43.1 49.8 46.5 56.6 53.9 77.6 68.8 61.1 63.9 57.1 44.6 59.6 56.9 60.5 54.1 DeepSeek-Coder-V2 236B Code 65.3 44.4 50.2 43.6 58.9 56.6 77.1 67.3 52.8 72.2 51.8 50.0 57.8 53.2 59.7 53.8 Llama-3.1 405B 76.4 50.0 48.8 34.6 58.0 56.6 76.6 52.7 69.4 55.6 57.1 41.1 54.1 47.7 60.3 46.8 Llama-3.1 70B 62.5 38.9 39.3 29.7 47.5 53.0 65.8 50.7 61.1 58.3 44.6 44.6 44.0 36.7 49.8 42.4 Mistral-Large 123B 59.7 36.1 44.9 29.4 49.8 48.0 75.6 47.3 63.9 36.1 50.0 33.9 52.3 44.0 55.1 39.7 Qwen2 72B 41.7 30.6 30.7 23.4 38.8 48.0 52.7 42.0 44.4 50.0 39.3 33.9 38.5 36.7 39.6 36.1 Llama-3 70B 56.9 36.1 39.9 23.4 46.6 43.4 65.8 42.9 58.3 47.2 51.8 39.3 44.0 34.9 49.7 35.7 Phi-3-Medium 14B 40.3 31.9 31.4 23.1 36.1 47.0 54.2 42.4 41.7 50.0 39.3 32.1 35.8 28.4 39.0 35.0 Mixtral-8x22B 141B MoE 15.3 31.9 4.6 23.4 9.6 35.2 22.9 38.0 5.6 36.1 16.1 33.9 3.7 30.3 10.8 31.4 DeepSeek-Coder-V2-Lite 16B Code 38.9 36.1 19.1 21.1 23.3 31.5 49.3 42.4 33.3 30.6 26.8 26.8 26.6 26.6 29.4 30.1 Gemma-2 9B 30.6 34.7 19.1 19.8 31.5 34.7 43.4 37.6 25.0 33.3 33.9 25.0 27.5 26.6 29.6 29.3 Yi-1.5 9B 23.6 29.2 14.8 14.8 18.3 35.2 20.0 40.0 16.7 27.8 10.7 25.0 18.4 30.3 17.5 28.2 Yi-1.5 34B 19.4 25.0 16.2 16.8 18.3 35.2 23.9 36.6 19.4 19.4 23.2 28.6 18.4 28.4 19.2 27.5 WizardLM-2 141B MoE 30.6 23.6 18.5 17.2 26.0 33.3 40.0 32.2 30.6 30.6 25.0 33.9 25.7 29.4 27.0 27.0 Phi-3-Mini 3B 27.8 22.2 12.9 13.5 28.3 31.0 38.0 35.1 27.8 19.4 28.6 32.1 16.5 20.2 24.3 24.4 Mistral-Nemo 12B 31.9 25.0 13.5 13.2 12.8 25.1 32.7 31.2 16.7 27.8 14.3 23.2 19.3 24.8 19.4 22.7 DBRX 132B MoE 12.5 20.8 11.2 14.5 13.2 27.4 26.3 27.3 11.1 22.2 16.1 28.6 17.4 21.1 15.8 22.2 DeepSeek-Math 7B Math 0.0 22.2 0.3 12.2 0.9 19.6 2.4 30.7 0.0 30.6 1.8 26.8 0.9 22.9 1.0 21.0 Qwen2 7B 8.3 18.1 5.3 12.9 4.6 24.7 19.5 29.3 5.6 16.7 14.3 23.2 6.4 18.4 8.9 20.5 GLM-4 9B 27.8 20.8 13.2 11.6 25.1 25.6 36.1 23.4 22.2 25.0 19.6 28.6 21.1 19.3 23.1 20.0 C4AI Command R+ 104B 2.8 22.2 1.3 10.2 1.4 23.3 5.4 22.9 2.8 25.0 0.0 14.3 1.8 22.9 2.3 18.7 Mixtral-8x7B-v0.1 46B MoE 0.0 22.2 0.0 11.9 0.0 16.4 1.5 24.9 0.0 13.9 0.0 17.9 0.0 21.1 0.3 17.7 Llama-3.1 8B 22.2 18.1 12.9 10.2 14.6 14.2 35.1 28.3 19.4 16.7 21.4 19.6 16.5 22.0 19.6 17.4 Mathstral 7B Math 18.1 18.1 7.9 9.6 11.0 18.3 26.8 24.9 13.9 8.3 19.6 19.6 14.7 16.5 14.8 16.5 Codestral 22B Code 41.7 13.9 16.8 9.9 23.7 16.0 54.2 25.8 19.4 11.1 42.9 16.1 26.6 19.3 30.4 16.2 Llama-3 8B 13.9 11.1 12.2 7.6 17.8 18.3 26.3 20.0 
16.7 16.7 17.9 14.3 19.3 15.6 17.7 14.3 WizardLM-2 7B 23.6 8.3 6.6 6.9 11.4 18.7 22.0 19.0 11.1 8.3 16.1 8.9 12.8 14.7 13.4 13.1 WizardMath 7B Math 5.6 11.1 5.0 6.3 10.5 16.9 18.5 12.7 5.6 11.1 7.1 21.4 9.2 15.6 9.6 12.3 DeepSeek-V2-Lite 16B MoE 5.6 13.9 1.0 6.3 2.7 14.2 7.3 16.1 2.8 11.1 3.6 8.9 3.7 15.6 3.5 11.9 Mistral-v0.3 7B 1.4 13.9 1.3 4.6 1.4 15.1 6.8 15.6 2.8 8.3 0.0 10.7 2.8 11.9 2.6 11.1 Aya-23 35B 0.0 8.3 0.3 7.9 0.0 13.7 1.0 12.7 0.0 11.1 0.0 10.7 0.0 10.1 0.3 10.7 InternLM2-Math-Plus 7B Math 8.3 16.7 3.0 4.6 5.9 9.1 14.2 19.5 0.0 8.3 12.5 12.5 10.1 8.3 7.5 10.5 Llama-2 70B 13.9 8.3 3.6 6.6 14.2 14.2 12.2 12.2 5.6 13.9 8.9 5.4 10.1 11.9 9.5 10.3 InternLM2 7B 5.6 6.9 4.6 4.0 6.4 11.9 16.1 15.6 5.6 2.8 12.5 7.1 5.5 10.1 8.0 9.1 StarCoder2 15B Code 29.2 2.8 12.5 4.3 11.9 9.6 35.6 15.6 11.1 2.8 16.1 12.5 20.2 8.3 19.3 8.5 Gemma-1 7B 2.8 5.6 1.0 3.6 1.8 10.5 2.9 7.8 0.0 5.6 1.8 7.1 4.6 11.0 2.1 7.2 WizardCoder 33B Code 19.4 4.2 5.0 2.6 6.8 5.5 37.1 10.7 8.3 5.6 21.4 3.6 11.9 9.2 14.8 5.9 DeepSeek-Coder-V1 33B Code 12.5 4.2 2.0 3.0 6.4 5.0 15.6 8.3 5.6 5.6 10.7 7.1 8.3 5.5 7.8 5.2 Llama-2 7B 4.2 0.0 1.0 1.6 2.3 5.5 2.9 5.4 5.6 11.1 0.0 3.6 2.8 9.2 2.2 4.4 Aya-23 8B 1.4 1.4 0.3 2.3 0.0 4.6 0.0 5.8 0.0 2.8 0.0 0.0 0.0 8.3 0.2 4.0 Gemma-1 2B 0.0 0.0 1.3 2.0 3.2 6.4 5.8 5.4 8.3 0.0 0.0 0.0 1.8 5.5 2.8 3.7
Table 3: Results of Chain-of-Thought and Program-of-Thought prompting on the test set of FinanceMATH. We use average Accuracy using CoT prompting as the ranking indicator of model performance. Numbers underscored indicate that models with PoT prompting achieve better results than with CoT prompting.
backbone in CoT, improving from 9.1% to 10.5%. This demonstrates the effectiveness of instruction-tuning in enhancing math reasoning.
Figure 5: Calibrated results of Chain-of-Thought prompting on the development set with an external calculator for math computation. Performing complex math computations correctly is still challenging for LLMs, especially open-source ones.
4.3 Error Analysis To gain a deeper insight into the capabilities and limitations of open-source LLMs on our dataset, we conduct a comprehensive error analysis and case studies. The error analysis is based on 50 sampled failure cases of Llama-3-70B from the development set. We choose the Llama-3 model as the focus since many open-source models are developed using it as the backbone. We identify three common mistakes of current LLMs: (1) Misinterpretation of Required Knowledge (27/50): the model fails to accurately identify and interpret the domain-specific knowledge needed to answer a question correctly, leading to incorrect responses. Table 6 in Appendix illustrates an error example.
(2) Incorrect Math Computation (19/50): the mathematical computation in the intermediate or final step is incorrect, although the reasoning process is correct. (3) Table Misunderstanding (3/50): The model misinterprets the data within complexstructure tables. To separate computational abilities from final accuracy, we employed an external calculator (Inaba et al., 2023) for CoT outputs. Specifically, we used GPT-3.5-Turbo to extract single-line math expressions from the models’ textual responses and executed these expressions to obtain the final answers. Figure 5 illustrates the calibrated results of LLM CoT performance with an external calculator. It demonstrates that performing complex math computations correctly is still challenging for LLMs, especially open-source ones. 4.4 Program-of-Thought Analysis To better analyze the PoT prompting methods, we examine the execution rate of each LLM under PoT prompting, measuring how many of the generated Python programs are executable. Figure 8 in the Appendix illustrates the relationship between execution rate and accuracy across different models. It demonstrates that for models unable to consistently generate executable programs (i.e., models with an execution rate < 60%), their degraded performance when applying PoT prompting is attributable to the low execution rate. For instance, although Mistral-8×22B achieves competitive performance with CoT, it struggles to consistently generate executable Python solutions, leading to lower accuracy with the PoT prompting approach. Conversely, for LLMs capable of generating executable programs (i.e., models with an execution rate > 80%), the final answer accuracy is mainly attributed to the reasoning capabilities of the models. 5 Knowledge Augmentation Analysis In this section, we provide a comprehensive analysis to understand the performance of LLMs and the quality of knowledge incorporated into the input context, aiming to provide insights for future work on solving knowledge-intensive tasks. 5.1 Evaluated Knowledge-Augmented Method We develop and evaluate various knowledgeaugmented approaches. For each setting, we include the definition of question-relevant knowledge terms within the prompts (Figure 7 in Appendix). • Oracle: To investigate the headroom in knowledge augmentation, we use an oracle setting, where the ground-truth knowledge terms associated with the question (Section 2.2) are included. • LLM as Knowledge Base: Recent work (Petroni et al., 2019; Kang et al., 2023) demonstrates that LLMs themselves can effectively serve as knowledge bases. This approach is particularly valuable in scenarios where an external knowledge base is unavailable. We prompt LLMs to first identify the financial terms required to answer the question. They then generate definitions of each identified knowledge term using the inherent data memorization capabilities. • Knowledge Retrieval: We use the question as the retrieval query to the constructed knowledge bank. We investigate 1) BM25 as sparse retriever and 2) OpenAI Text Embedding V3 Large as dense retriever to retrieve the top-n questionrelevant knowledge terms from knowledge bank. • LLM-Instructed Knowledge Retrieval: While the method of using “LLM as Knowledge Base” can effectively identify the knowledge required to answer a question, it is likely to produce 12849 Setting Llama-3-70B Gemini-1.5-Pro wo. 
knowledge augmentation 31.5 47.5 LLM as Knowledge Base 29.0 (-2.5) 48.5 (+1.0) BM25 (n = 3) Vanilla Retrieval 30.0 (-1.5) 44.5 (-3.0) LLM as Retrieval Re-Ranker 32.0 (+0.5) 49.0 (+1.5) LLM-instructed Retrieval 32.5 (+1.0) 48.0 (+0.5) OpenAI Embedding-3-L (n = 3) Vanilla Retrieval 32.5 (+1.0) 49.0 (+1.5) LLM as Retrieval Re-Ranker 33.5 (+2.0) 50.5 (+3.0) LLM-instructed Retrieval 33.5 (+2.0) 52.0 (+4.5) Oracle 37.5 (+6.0) 54.5 (+7.0) Table 4: Results of Chain-of-Thought prompting approach under different knowledge augmentation settings on the development set of FinanceMATH. knowledge definitions that are not entirely accurate (Chen et al., 2023a; Peng et al., 2023). To address this unfaithfulness issue, we harness the power of external knowledge retrieval for obtaining more trustworthy knowledge definitions. Specifically, instead of using the original question as the retrieval query, we utilize each knowledge term along with its definition generated from the “LLM as Knowledge Base”. This approach provides a more informative and semantically similar basis for knowledge retrieval. • LLM as Retrieval Re-Ranker: Recent studies have demonstrated LLMs’ competitive capabilities in re-ranking retrieved candidates to output a more precise list (Sun et al., 2023). Therefore, in this setting, we first use retriever in “Knowledge Retrieval” to retrieve top-3n candidates. Subsequently, we prompt LLMs to select top-n most relevant knowledge terms from this candidate set. 5.2 Knowledge Augmentation Results As illustrated in Table 4, improving the questionrelevance of incorporated knowledge can consistently improve the LLMs’ performance. Specifically, LLMs equipped with retrieved knowledge from OpenAI Text Embedding consistently outperform those using retrieved knowledge from BM25, due to the more advanced retrieval capabilities of the former. Among different LLM-aided retrieval strategies, LLM-Instructed Knowledge Retrieval achieves the best performance, demonstrating the effectiveness of using refined queries for knowledge retrieval. Nevertheless, it is worth noting that even when incorporated with the ground-truth knowledge (i.e., the oracle setting), Gemini-1.5-Pro still performs much worse than human experts in close-book setting (i.e., 92.0%). This highlights the need for future work on developing more advanced domain-specific knowledge integration methods. 6 Related Work The development of general-purpose intelligent systems is significantly dependent on the foundational aspect of mathematical reasoning, a topic that has garnered considerable attention in the academic community. As illustrated in Table 1, researchers have proposed a wide spectrum of math reasoning datasets that cater to a variety of educational levels, ranging from elementary school to college (Koncel-Kedziorski et al., 2016; Wang et al., 2017; Amini et al., 2019; Miao et al., 2020; Patel et al., 2021; Cobbe et al., 2021; Hendrycks et al., 2021; Austin et al., 2021; Lu et al., 2023). However, these math reasoning benchmarks typically do not require specialized domain knowledge, a notable shortcoming when considering the practical applications of LLMs. Therefore, recent work has investigated the LLMs’ capabilities in knowledgeintensive problem solving. For example, Chen et al. (2023c) collected a theorem-driven questionanswering dataset, designed to evaluate AI models’ ability to apply theorems in solving challenging science problems. 
MMMU (Yue et al., 2024) and MathVista (Lu et al., 2024) include examples that require complex multimodal reasoning in expert domains. Different from this recent work, which focuses on benchmarking LLM performance, our work also constructs a finance-domain knowledge bank, investigating various knowledge integration strategies to enhance knowledge-intensive problem solving. Moreover, FinanceMATH also requires LLMs to understand and interpret tabular data in expert domains to solve the problems. 7 Conclusion This paper introduces FinanceMATH, a benchmark aimed at assessing LLMs in knowledge-intensive math reasoning. Our comprehensive evaluations of 51 LLMs, using both CoT and PoT prompting methods, identify significant areas where LLMs need to enhance their specialized knowledge for complex problem-solving in expert domains. Additionally, our knowledge augmentation analysis indicates that integrating domain-specific knowledge can improve LLMs’ problem-solving abilities. We believe this research provides valuable insights into advancing LLMs within expert domains. 12850 Limitations In this work, we propose FinanceMATH and conduct comprehensive analysis of different LLMs’ capabilities in solving knowledge-intensive math reasoning problems in finance domains. However, there are still some limitations: (1) Our method for extracting final answer from model output is still not perfect. In some cases, this methods fails to locate the answer, leading to the reported accuracy being an approximate lower bound. Moreover, as the extracted answer can be in a different format than the ground truth, we apply rule-based methods to measure the exact match between the two values, which could introduce around 2% errors based on our case studies. (2) In our experiment, we regard tables in the question as textual input. However, in real-world scenarios, tabular data might appear as images, where people cannot obtain its textual content directly. In these cases, OCR tools to extract table content (Du et al., 2020) or LLMs with vision capabilities (OpenAI, 2023b; Yue et al., 2024; Lu et al., 2024) may be required. (3) Due to computational resource constraints, we do not tune LLMs on a large-scale finance-domain data ourselves (Xie et al., 2023, 2024). However, we believe that training on finance data can help improve knowledge-intensive problem solving in finance domains. Acknowledgement We are grateful for the compute support provided by Microsoft Research’s Accelerate Foundation Models Research (AFMR) program. We would also like to thank the anonymous reviewers and area chairs for constructive discussions and feedback. Hongjun Liu and Chen Zhao are supported by Shanghai Frontiers Science Center of Artificial Intelligence and Deep Learning, NYU Shanghai. References 01.AI. 2023. Yi: Open-source llm release. Marah Abdin, Sam Ade Jacobs, Ammar Ahmad Awan, Jyoti Aneja, Ahmed Awadallah, Hany Awadalla, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Harkirat Behl, Alon Benhaim, Misha Bilenko, Johan Bjorck, Sébastien Bubeck, Martin Cai, Caio César Teodoro Mendes, Weizhu Chen, Vishrav Chaudhary, Parul Chopra, Allie Del Giorno, Gustavo de Rosa, Matthew Dixon, Ronen Eldan, Dan Iter, Amit Garg, Abhishek Goswami, Suriya Gunasekar, Emman Haider, Junheng Hao, Russell J. Hewett, Jamie Huynh, Mojan Javaheripi, Xin Jin, Piero Kauffmann, Nikos Karampatziakis, Dongwoo Kim, Mahoud Khademi, Lev Kurilenko, James R. 
Lee, Yin Tat Lee, Yuanzhi Li, Chen Liang, Weishung Liu, Eric Lin, Zeqi Lin, Piyush Madan, Arindam Mitra, Hardik Modi, Anh Nguyen, Brandon Norick, Barun Patra, Daniel Perez-Becker, Thomas Portet, Reid Pryzant, Heyang Qin, Marko Radmilac, Corby Rosset, Sambudha Roy, Olatunji Ruwase, Olli Saarikivi, Amin Saied, Adil Salim, Michael Santacroce, Shital Shah, Ning Shang, Hiteshi Sharma, Xia Song, Masahiro Tanaka, Xin Wang, Rachel Ward, Guanhua Wang, Philipp Witte, Michael Wyatt, Can Xu, Jiahang Xu, Sonali Yadav, Fan Yang, Ziyi Yang, Donghan Yu, Chengruidong Zhang, Cyril Zhang, Jianwen Zhang, Li Lyna Zhang, Yi Zhang, Yue Zhang, Yunan Zhang, and Xiren Zhou. 2024. Phi-3 technical report: A highly capable language model locally on your phone. AI@Meta. 2024. The llama 3 herd of models. AI@Mistral. 2024a. Codestral: Hello, world! AI@Mistral. 2024b. Mathstral model card. Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. 2019. MathQA: Towards interpretable math word problem solving with operation-based formalisms. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2357–2367, Minneapolis, Minnesota. Association for Computational Linguistics. Anthropic. 2024. Introducing the next generation of claude. Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. 2021. Program synthesis with large language models. arXiv preprint arXiv:2108.07732. Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen Marcus McAleer, Albert Q. Jiang, Jia Deng, Stella Biderman, and Sean Welleck. 2024. Llemma: An open language model for mathematics. Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, and Tianhang Zhu. 2023. Qwen technical report. arXiv preprint arXiv:2309.16609. 12851 Liang Chen, Yang Deng, Yatao Bian, Zeyu Qin, Bingzhe Wu, Tat-Seng Chua, and Kam-Fai Wong. 2023a. Beyond factuality: A comprehensive evaluation of large language models as knowledge generators. In The 2023 Conference on Empirical Methods in Natural Language Processing. Wenhu Chen. 2023. Large language models are few(1)shot table reasoners. In Findings of the Association for Computational Linguistics: EACL 2023, pages 1120–1130, Dubrovnik, Croatia. Association for Computational Linguistics. Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. 2023b. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. Transactions on Machine Learning Research. Wenhu Chen, Ming Yin, Max Ku, Pan Lu, Yixin Wan, Xueguang Ma, Jianyu Xu, Xinyi Wang, and Tony Xia. 2023c. TheoremQA: A theorem-driven question answering dataset. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 7889–7901, Singapore. Association for Computational Linguistics. 
Zhiyu Chen, Wenhu Chen, Charese Smiley, Sameena Shah, Iana Borova, Dylan Langdon, Reema Moussa, Matt Beane, Ting-Hao Huang, Bryan Routledge, and William Yang Wang. 2021. FinQA: A dataset of numerical reasoning over financial data. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3697–3711, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168. Cohere. 2024a. Cohere for ai launches aya 23, 8 and 35 billion parameter open weights release. Cohere. 2024b. Introducing command r+: A scalable llm built for business. Databricks. 2024. Introducing dbrx: A new state-of-theart open llm. DeepSeek-AI. 2024. Deepseek-v2: A strong, economical, and efficient mixture-of-experts language model. Chunyuan Deng, Yilun Zhao, Xiangru Tang, Mark Gerstein, and Arman Cohan. 2024a. Investigating data contamination in modern benchmarks for large language models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 8706–8719, Mexico City, Mexico. Association for Computational Linguistics. Chunyuan Deng, Yilun Zhao, Xiangru Tang, Mark Gerstein, and Arman Cohan. 2024b. Investigating data contamination in modern benchmarks for large language models. Yuning Du, Chenxia Li, Ruoyu Guo, Xiaoting Yin, Weiwei Liu, Jun Zhou, Yifan Bai, Zilin Yu, Yehua Yang, Qingqing Dang, et al. 2020. Pp-ocr: A practical ultra lightweight ocr system. arXiv preprint arXiv:2009.09941. Gemini. 2024. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. Team GLM, :, Aohan Zeng, Bin Xu, Bowen Wang, Chenhui Zhang, Da Yin, Diego Rojas, Guanyu Feng, Hanlin Zhao, Hanyu Lai, Hao Yu, Hongning Wang, Jiadai Sun, Jiajie Zhang, Jiale Cheng, Jiayi Gui, Jie Tang, Jing Zhang, Juanzi Li, Lei Zhao, Lindong Wu, Lucen Zhong, Mingdao Liu, Minlie Huang, Peng Zhang, Qinkai Zheng, Rui Lu, Shuaiqi Duan, Shudan Zhang, Shulin Cao, Shuxun Yang, Weng Lam Tam, Wenyi Zhao, Xiao Liu, Xiao Xia, Xiaohan Zhang, Xiaotao Gu, Xin Lv, Xinghan Liu, Xinyi Liu, Xinyue Yang, Xixuan Song, Xunkai Zhang, Yifan An, Yifan Xu, Yilin Niu, Yuantao Yang, Yueyan Li, Yushi Bai, Yuxiao Dong, Zehan Qi, Zhaoyu Wang, Zhen Yang, Zhengxiao Du, Zhenyu Hou, and Zihan Wang. 2024. Chatglm: A family of large language models from glm-130b to glm-4 all tools. Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao Bi, Y. Wu, Y. K. Li, Fuli Luo, Yingfei Xiong, and Wenfeng Liang. 2024. Deepseek-coder: When the large language model meets programming – the rise of code intelligence. Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the MATH dataset. Tatsuro Inaba, Hirokazu Kiyomaru, Fei Cheng, and Sadao Kurohashi. 2023. MultiTool-CoT: GPT-3 can use multiple external tools with chain of thought prompting. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1522–1532, Toronto, Canada. Association for Computational Linguistics. 
Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral 7b. arXiv preprint arXiv:2310.06825. Minki Kang, Seanie Lee, Jinheon Baek, Kenji Kawaguchi, and Sung Ju Hwang. 2023. Knowledgeaugmented reasoning distillation for small language models in knowledge-intensive tasks. In Thirtyseventh Conference on Neural Information Processing Systems. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. 12852 Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi. 2016. MAWPS: A math word problem repository. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1152–1157, San Diego, California. Association for Computational Linguistics. Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. 2023. Efficient memory management for large language model serving with pagedattention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles. Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. 2022. Solving quantitative reasoning problems with language models. Advances in Neural Information Processing Systems, 35:3843– 3857. Chenying Li, Wenbo Ye, and Yilun Zhao. 2022a. FinMath: Injecting a tree-structured solver for question answering over financial reports. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 6147–6152, Marseille, France. European Language Resources Association. Moxin Li, Fuli Feng, Hanwang Zhang, Xiangnan He, Fengbin Zhu, and Tat-Seng Chua. 2022b. Learning to imagine: Integrating counterfactual thinking in neural discrete reasoning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 57–69, Dublin, Ireland. Association for Computational Linguistics. Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word problems. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 158–167, Vancouver, Canada. Association for Computational Linguistics. 
Anton Lozhkov, Raymond Li, Loubna Ben Allal, Federico Cassano, Joel Lamy-Poirier, Nouamane Tazi, Ao Tang, Dmytro Pykhtar, Jiawei Liu, Yuxiang Wei, Tianyang Liu, Max Tian, Denis Kocetkov, Arthur Zucker, Younes Belkada, Zijian Wang, Qian Liu, Dmitry Abulkhanov, Indraneil Paul, Zhuang Li, WenDing Li, Megan Risdal, Jia Li, Jian Zhu, Terry Yue Zhuo, Evgenii Zheltonozhskii, Nii Osae Osae Dade, Wenhao Yu, Lucas Krauß, Naman Jain, Yixuan Su, Xuanli He, Manan Dey, Edoardo Abati, Yekun Chai, Niklas Muennighoff, Xiangru Tang, Muhtasham Oblokulov, Christopher Akiki, Marc Marone, Chenghao Mou, Mayank Mishra, Alex Gu, Binyuan Hui, Tri Dao, Armel Zebaze, Olivier Dehaene, Nicolas Patry, Canwen Xu, Julian McAuley, Han Hu, Torsten Scholak, Sebastien Paquet, Jennifer Robinson, Carolyn Jane Anderson, Nicolas Chapados, Mostofa Patwary, Nima Tajbakhsh, Yacine Jernite, Carlos Muñoz Ferrandis, Lingming Zhang, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, and Harm de Vries. 2024. Starcoder 2 and the stack v2: The next generation. Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, KaiWei Chang, Michel Galley, and Jianfeng Gao. 2024. Mathvista: Evaluating math reasoning in visual contexts with gpt-4v, bard, and other large multimodal models. In The Twelfth International Conference on Learning Representations. Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, and Ashwin Kalyan. 2023. Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning. In The Eleventh International Conference on Learning Representations. Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. 2023a. Wizardmath: Empowering mathematical reasoning for large language models via reinforced evol-instruct. arXiv preprint arXiv:2308.09583. Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. 2023b. Wizardcoder: Empowering code large language models with evolinstruct. arXiv preprint arXiv:2306.08568. Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su. 2020. A diverse corpus for evaluating and developing English math word problem solvers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 975–984, Online. Association for Computational Linguistics. Mistral.AI. 2023. Mixtral of experts: A high quality sparse mixture-of-experts. OpenAI. 2022. Chatgpt: Optimizing language models for dialogue. OpenAI. 2023a. Gpt-4 technical report. ArXiv, abs/2303.08774. OpenAI. 2023b. Gpt-4v(ision) system card. OpenAI. 2024. Hello gpt-4o. Arkil Patel, Satwik Bhattamishra, and Navin Goyal. 2021. Are NLP models really able to solve simple math word problems? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2080–2094, Online. Association for Computational Linguistics. 12853 Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou Yu, Weizhu Chen, and Jianfeng Gao. 2023. Check your facts and try again: Improving large language models with external knowledge and automated feedback. Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463–2473, Hong Kong, China. Association for Computational Linguistics. Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1743–1752, Lisbon, Portugal. Association for Computational Linguistics. Oscar Sainz, Jon Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, and Eneko Agirre. 2023. NLP evaluation in trouble: On the need to measure LLM data contamination for each benchmark. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 10776–10787, Singapore. Association for Computational Linguistics. Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, Y. K. Li, Y. Wu, and Daya Guo. 2024. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. Weijia Shi, Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, and Luke Zettlemoyer. 2024. Detecting pretraining data from large language models. In The Twelfth International Conference on Learning Representations. Weiwei Sun, Lingyong Yan, Xinyu Ma, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, and Zhaochun Ren. 2023. Is ChatGPT good at search? investigating large language models as re-ranking agents. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 14918–14937, Singapore. Association for Computational Linguistics. Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, Pouya Tafti, Léonard Hussenot, Pier Giuseppe Sessa, Aakanksha Chowdhery, Adam Roberts, Aditya Barua, Alex Botev, Alex CastroRos, Ambrose Slone, Amélie Héliou, Andrea Tacchetti, Anna Bulanova, Antonia Paterson, Beth Tsai, Bobak Shahriari, Charline Le Lan, Christopher A. Choquette-Choo, Clément Crepy, Daniel Cer, Daphne Ippolito, David Reid, Elena Buchatskaya, Eric Ni, Eric Noland, Geng Yan, George Tucker, George-Christian Muraru, Grigory Rozhdestvenskiy, Henryk Michalewski, Ian Tenney, Ivan Grishchenko, Jacob Austin, James Keeling, Jane Labanowski, Jean-Baptiste Lespiau, Jeff Stanway, Jenny Brennan, Jeremy Chen, Johan Ferret, Justin Chiu, Justin Mao-Jones, Katherine Lee, Kathy Yu, Katie Millican, Lars Lowe Sjoesund, Lisa Lee, Lucas Dixon, Machel Reid, Maciej Mikuła, Mateo Wirth, Michael Sharman, Nikolai Chinaev, Nithum Thain, Olivier Bachem, Oscar Chang, Oscar Wahltinez, Paige Bailey, Paul Michel, Petko Yotov, Rahma Chaabouni, Ramona Comanescu, Reena Jana, Rohan Anil, Ross McIlroy, Ruibo Liu, Ryan Mullins, Samuel L Smith, Sebastian Borgeaud, Sertan Girgin, Sholto Douglas, Shree Pandya, Siamak Shakeri, Soham De, Ted Klimenko, Tom Hennigan, Vlad Feinberg, Wojciech Stokowiec, Yu hui Chen, Zafarali Ahmed, Zhitao Gong, Tris Warkentin, Ludovic Peran, Minh Giang, Clément Farabet, Oriol Vinyals, Jeff Dean, Koray Kavukcuoglu, Demis Hassabis, Zoubin Ghahramani, Douglas Eck, Joelle Barral, Fernando Pereira, Eli Collins, Armand Joulin, Noah Fiedel, Evan Senter, Alek Andreev, and Kathleen Kenealy. 2024. Gemma: Open models based on gemini research and technology. Hugo Touvron, Louis Martin, Kevin R. 
Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Daniel M. Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony S. Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel M. Kloumann, A. V. Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, R. Subramanian, Xia Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zhengxu Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and fine-tuned chat models. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations. Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017. Deep neural solver for math word problems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 845–854, Copenhagen, Denmark. Association for Computational Linguistics. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le, 12854 and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems. Shijie Wu, Ozan Irsoy, Steven Lu, Vadim Dabravolski, Mark Dredze, Sebastian Gehrmann, Prabhanjan Kambadur, David Rosenberg, and Gideon Mann. 2023. Bloomberggpt: A large language model for finance. ArXiv, abs/2303.17564. Qianqian Xie, Weiguang Han, Zhengyu Chen, Ruoyu Xiang, Xiao Zhang, Yueru He, Mengxi Xiao, Dong Li, Yongfu Dai, Duanyu Feng, Yijing Xu, Haoqiang Kang, Ziyan Kuang, Chenhan Yuan, Kailai Yang, Zheheng Luo, Tianlin Zhang, Zhiwei Liu, Guojun Xiong, Zhiyang Deng, Yuechen Jiang, Zhiyuan Yao, Haohang Li, Yangyang Yu, Gang Hu, Jiajia Huang, Xiao-Yang Liu, Alejandro Lopez-Lira, Benyou Wang, Yanzhao Lai, Hao Wang, Min Peng, Sophia Ananiadou, and Jimin Huang. 2024. Finben: A holistic financial benchmark for large language models. Qianqian Xie, Weiguang Han, Xiao Zhang, Yanzhao Lai, Min Peng, Alejandro Lopez-Lira, and Jimin Huang. 2023. PIXIU: A comprehensive benchmark, instruction dataset and large language model for finance. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track. Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. 2023. Wizardlm: Empowering large language models to follow complex instructions. Hongyang Yang, Xiao-Yang Liu, and Christina Dan Wang. 2023. Fingpt: Open-source financial large language models. arXiv preprint arXiv:2306.06031. Huaiyuan Ying, Shuo Zhang, Linyang Li, Zhejian Zhou, Yunfan Shao, Zhaoye Fei, Yichuan Ma, Jiawei Hong, Kuikun Liu, Ziyi Wang, Yudong Wang, Zijian Wu, Shuaibin Li, Fengzhe Zhou, Hongwei Liu, Songyang Zhang, Wenwei Zhang, Hang Yan, Xipeng Qiu, Jiayu Wang, Kai Chen, and Dahua Lin. 2024. 
Internlmmath: Open math large language models toward verifiable reasoning. Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu Jiang, Weiming Ren, Yuxuan Sun, et al. 2024. Mmmu: A massive multi-discipline multimodal understanding and reasoning benchmark for expert agi. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9556–9567. Yilun Zhao, Yunxiang Li, Chenying Li, and Rui Zhang. 2022. MultiHiertt: Numerical reasoning over multi hierarchical tabular and textual data. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6588–6600, Dublin, Ireland. Association for Computational Linguistics. Yilun Zhao, Yitao Long, Hongjun Liu, Linyong Nan, Lyuhao Chen, Ryo Kamoi, Yixin Liu, Xiangru Tang, Rui Zhang, and Arman Cohan. 2023a. Docmath-eval: Evaluating numerical reasoning capabilities of llms in understanding long documents with tabular data. Yilun Zhao, Zhenting Qi, Linyong Nan, Boyu Mi, Yixin Liu, Weijin Zou, Simeng Han, Ruizhe Chen, Xiangru Tang, Yumo Xu, Dragomir Radev, and Arman Cohan. 2023b. QTSumm: Query-focused summarization over tabular data. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 1157–1172, Singapore. Association for Computational Linguistics. Yilun Zhao, Haowei Zhang, Shengyun Si, Linyong Nan, Xiangru Tang, and Arman Cohan. 2023c. Investigating table-to-text generation capabilities of large language models in real-world information seeking scenarios. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track, pages 160–175, Singapore. Association for Computational Linguistics. Yilun Zhao, Chen Zhao, Linyong Nan, Zhenting Qi, Wenlin Zhang, Xiangru Tang, Boyu Mi, and Dragomir Radev. 2023d. RobuT: A systematic study of table QA robustness against human-annotated adversarial perturbations. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6064– 6081, Toronto, Canada. Association for Computational Linguistics. Fengbin Zhu, Wenqiang Lei, Youcheng Huang, Chao Wang, Shuo Zhang, Jiancheng Lv, Fuli Feng, and TatSeng Chua. 2021. TAT-QA: A question answering benchmark on a hybrid of tabular and textual content in finance. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3277–3287, Online. Association for Computational Linguistics. 12855 A Appendix Annotation Quality %S ≥4 Question Fluency 98.0 Question Correctness 95.3 Knowledge Relevance 94.1 Textual Definition Fluency 93.0 Textual Definition Correctness 94.7 Math Formula Correctness 88.0 Final Answer Correctness 98.0 Python Solution Correctness 96.0 Variable Name Meaningfulness 87.7 Comment Comprehensiveness 83.8 Table 5: Human evaluation over 200 samples of FinanceMATH. Three internal evaluators were asked to rate the samples on a scale of 1 to 5 individually. We report percent of samples that have an average score ≥ 4 to indicate the annotation quality of FinanceMATH. Program-of-Thought Prompt Used [System Input]: You are a financial expert, you are supposed to generate a Python program to answer the given question. The returned value of the program is supposed to be the answer. 
Here is an example of the Python program: ‘‘‘python def solution(): # Define variables name and value ... # Do math calculation ... # return answer return answer ‘‘‘ [User Input]: Question: {question} Generate a Python program to answer the given question. Continue your output: ‘‘‘python def solution(): # Define variables name and value Figure 6: Example of zero-shot PoT prompt used. Chain-of-Thought Prompt [System Input]: You are a financial expert, you are supposed to answer the given question. You need to first think through the problem step by step, documenting each necessary step. Then you are required to conclude your response with the final answer in your last sentence as “Therefore, the answer is {final answer}”. The final answer should be a numeric value. [User Input]: Question: {question} Relevant Knowledge: {knowledge} Let’s think step by step to answer the given question. Figure 7: Example of zero-shot CoT prompt used for knowledge augmentation analysis in Section 5. [validation-69] What is the gross domestic product for country B in 2015, according to the provided data? Account Item ($ trillions) Consumption 20.0 Capital consumption allowance 1.9 Government spending 4.2 Imports 2.2 Gross private domestic investment 6.0 Exports 1.8 Model Output (..abbreviate..) Net Investment = Gross private domestic investment - Capital consumption allowance = 6.0 - 1.9 = 4.1 trillion Net Exports = Exports - Imports = 1.8 - 2.2 = -0.4 trillion (note: negative value indicates a trade deficit) We can plug in the values into the GDP formula: GDP = Consumption + Net Investment + Government Spending + Net Exports = 20.0 + 4.1 + 4.2 - 0.4 = 28.9 trillion Error Explanation Misinterpretation of the GDP components: The model incorrectly subtracts the capital consumption allowance to derive net investment, but GDP should use gross investment. Additionally, the math computation in the last step is incorrect Table 6: Example of knowledge misinterpretation error made by Llama-3-70B. 12856 Model Organization Size Notes Source GPT-4-Turbo OpenAI – gpt-4o-2024-05-13 GPT-4o OpenAI – gpt-4-turbo-2024-04-09 GPT-3.5-Turbo OpenAI – gpt-3.5-turbo-0125 Claude-3.5-Sonnet Anthropic – claude-3-5-sonnet-20240620 Claude-3-Opus Anthropic – claude-3-opus-20240229 Claude-3-Sonnet Anthropic – claude-3-sonnet-20240229 Claude-3-Haiku Anthropic – claude-3-haiku-20240307 Gemini-1.5-Pro Google – gemini-1.5-pro Gemini-1.5-Flash Google – gemini-1.5-flash Qwen2 Alibaba 7 & 72B Qwen/Qwen2-*B-Instruct Llama-2 Meta 7 & 70B meta-llama/Llama-2-*b-chat-hf Llama-3 Meta 8 & 70B meta-llama/Meta-Llama-3-*B-Instruct Llama-3.1 Meta 8 & 70B & 405B meta-llama/Meta-Llama-3.1-*B-Instruct Gemma-1 Google 2 & 7B google/gemma-b-it Gemma-2 Google 9B google/gemma-2-9b-it Mistral-v0.3 Mixtral AI 7B mistralai/Mistral-7B-Instruct-v0.3 Mistral-Nemo Mixtral AI 12B mistralai/Mistral-Nemo-Instruct-2407 Mistral-Large Mixtral AI 123B mistralai/Mistral-Large-Instruct-2407 Mathstral Mixtral AI 7B Math-Specific mistralai/Mathstral-7B-v0.1 Mixtral Mixtral AI 46 & 141B MoE mistralai/Mixtral-Instruct-v0.1 Codestral Mixtral AI 22B Code-Specific mistralai/Codestral-22B-v0.1 DeepSeek-Math DeepSeek 7B Math-Specific deepseek-ai/deepseek-math-7b-instruct DeepSeek-Coder-V1 DeepSeek 33B Code-Specific deepseek-ai/deepseek-coder-33b-instruct DeepSeek-V2 DeepSeek 16 & 236B MoE deepseek-ai/DeepSeek-V2-Lite-Chat. We use the official API provided by DeepSeek for deepseek-ai/DeepSeek-V2-Chat DeepSeek-Coder-V2 DeepSeek 16 & 236B MoE deepseek-ai/DeepSeek-Coder-V2-Lite-Instruct. 
We use the official API provided by DeepSeek for deepseek-ai/DeepSeek-Coder-V2-Instruct Yi-1.5 01 AI 9 & 34B 01-ai/Yi-1.5-34B-Chat Phi-3-Medium Microsoft 14B microsoft/Phi-3-medium-4k-instruct Phi-3-Mini Microsoft 3B microsoft/Phi-3-mini-4k-instruct GLM-4 THUDM 9B THUDM/glm-4-9b-chat DBRX Databricks 132B MoE databricks/dbrx-instruct C4AI Command R+ Cohere 104B CohereForAI/c4ai-command-r-plus InternLM2 InternLM 7B internlm/internlm2-chat-7b InternLM2-Math-Plus InternLM 7B Math-Specific internlm/internlm2-math-plus-7b WizardLM-2 WizardLM Team 7B lucyknada/microsoft_WizardLM-2-7B WizardMath WizardLM Team 7B Math-Specific WizardLMTeam/WizardMath-7B-V1.1 WizardCoder WizardLM Team 33B Code-Specific WizardLMTeam/WizardCoder-33B-V1.1 WizardLM-2 (MoE) WizardLM Team 141B MoE alpindale/WizardLM-2-8x22B Aya-23 Cohere 8 & 35B CohereForAI/aya-23-*B StarCoder2 BigCode 15B Code-Specific bigcode/starcoder2-15b-instruct-v0.1 Table 7: Details of the organization and model source (i.e., model version for proprietary models, and Huggingface model name for open-source models) for the LLMs evaluated in FinanceMATH. 12857 Model Size Notes Accounting Economics Derivatives Quantitative Management Portfolio Corporate Avg. PoT CoT PoT CoT PoT CoT PoT CoT PoT CoT PoT CoT PoT CoT PoT CoT Close-book Expert 73.0 Non-Expert 58.0 Open-book Expert 92.0 Non-Expert 84.0 Proprietary LLMs GPT-4o 75.0 45.8 58.8 55.4 60.3 69.4 82.9 66.3 77.8 69.4 62.5 58.9 67.0 56.9 67.0 60.9 Claude-3.5-Sonnet 73.6 55.6 54.1 51.8 66.7 69.9 75.6 63.9 72.2 72.2 64.3 58.9 62.4 60.6 64.8 60.6 Claude-3-Opus 66.7 56.9 53.5 45.2 62.1 59.8 79.5 64.9 72.2 83.3 51.8 46.4 59.6 45.0 62.9 54.7 GPT-4-Turbo 59.7 38.9 49.8 42.2 50.7 64.8 72.2 56.6 61.1 50.0 57.1 44.6 50.5 47.7 56.2 50.9 Gemini-1.5-Pro 68.1 50.0 53.1 30.7 56.6 55.2 69.8 57.6 58.3 63.9 51.8 55.4 50.5 44.0 58.2 47.0 GPT-4o-Mini 65.3 36.1 46.9 29.7 48.4 47.5 69.3 46.8 50.0 38.9 57.1 41.1 51.4 45.9 54.3 40.3 Gemini-1.5-Flash 69.4 33.3 43.6 28.7 52.0 48.9 67.8 49.8 58.3 61.1 50.0 37.5 47.7 34.9 53.6 40.1 Claude-3-Sonnet 59.7 37.5 37.0 28.4 48.0 43.4 66.8 48.8 47.2 55.6 48.2 33.9 48.6 35.8 49.4 38.6 Claude-3-Haiku 34.7 31.9 19.8 26.4 33.8 43.4 44.9 41.5 41.7 44.4 25.0 33.9 36.7 30.3 32.0 35.1 GPT-3.5-Turbo 47.2 25.0 24.4 16.5 29.2 29.2 51.2 33.2 27.8 22.2 37.5 21.4 32.1 23.8 34.3 24.6 Open-source LLMs DeepSeek-Coder-V2 236B Code 38.5 41.0 62.5 66.7 47.6 46.0 75.0 61.1 33.3 44.4 79.0 52.6 60.0 50.0 55.5 51.0 DeepSeek-V2 236B MoE 38.5 46.2 66.7 79.2 47.6 44.4 75.0 52.8 44.4 33.3 73.7 47.4 30.0 40.0 54.5 50.0 Llama-3.1 405B 41.0 51.3 54.2 54.2 46.0 33.3 80.6 47.2 33.3 11.1 68.4 31.6 40.0 50.0 53.5 41.5 Mistral-Large 123B 38.5 51.3 50.0 45.8 38.1 30.2 77.8 36.1 44.4 22.2 73.7 31.6 40.0 10.0 50.5 36.0 Llama-3.1 70B 25.6 38.5 41.7 50.0 33.3 19.0 66.7 47.2 55.6 22.2 68.4 47.4 30.0 30.0 43.0 35.0 Qwen2 72B 23.1 33.3 29.2 45.8 23.8 22.2 41.7 47.2 22.2 22.2 68.4 31.6 10.0 30.0 31.0 33.0 Llama-3 70B 33.3 38.5 37.5 45.8 33.3 22.2 66.7 33.3 33.3 22.2 68.4 31.6 30.0 30.0 43.0 31.5 Phi-3-Medium 14B 20.5 28.2 37.5 50.0 25.4 19.0 55.6 41.7 22.2 22.2 52.6 42.1 40.0 20.0 34.5 31.0 DeepSeek-Coder-V2-Lite 16B Code 23.1 28.2 25.0 29.2 27.0 23.8 50.0 33.3 11.1 22.2 42.1 31.6 10.0 20.0 30.0 27.5 Mixtral-8x22B 141B MoE 7.7 28.2 4.2 45.8 0.0 19.0 25.0 30.6 11.1 11.1 21.0 26.3 10.0 20.0 9.5 26.5 Yi-1.5 9B 10.3 30.8 20.8 25.0 11.1 14.3 36.1 30.6 33.3 22.2 26.3 26.3 10.0 20.0 19.0 23.5 Yi-1.5 34B 10.3 25.6 12.5 25.0 14.3 9.5 30.6 33.3 0.0 22.2 26.3 21.0 10.0 10.0 16.5 20.5 WizardLM-2 141B MoE 15.4 28.2 41.7 29.2 17.5 11.1 52.8 
19.4 0.0 11.1 47.4 15.8 10.0 40.0 28.0 20.0 Gemma-2 9B 23.1 20.5 25.0 29.2 20.6 11.1 36.1 33.3 11.1 0.0 42.1 26.3 10.0 10.0 25.5 20.0 GLM-4 9B 18.0 18.0 25.0 37.5 14.3 17.5 27.8 13.9 22.2 0.0 26.3 21.0 10.0 30.0 20.0 19.5 DBRX 132B MoE 12.8 28.2 16.7 20.8 4.8 9.5 13.9 16.7 11.1 33.3 21.0 21.0 0.0 20.0 11.0 18.5 Phi-3-Mini 3B 18.0 23.1 25.0 25.0 11.1 7.9 30.6 16.7 0.0 22.2 31.6 15.8 20.0 20.0 19.5 16.5 DeepSeek-Math 7B Math 0.0 18.0 0.0 20.8 0.0 12.7 2.8 13.9 0.0 11.1 0.0 31.6 0.0 10.0 0.5 16.5 Mathstral 7B Math 18.0 23.1 20.8 25.0 11.1 11.1 27.8 19.4 11.1 11.1 36.8 15.8 0.0 0.0 18.5 16.5 Llama-3.1 8B 23.1 12.8 20.8 33.3 7.9 6.4 38.9 27.8 11.1 11.1 31.6 15.8 20.0 10.0 21.0 16.0 Qwen2 7B 7.7 28.2 8.3 12.5 6.4 12.7 8.3 16.7 0.0 11.1 10.5 15.8 0.0 0.0 7.0 16.0 Mistral-Nemo 12B 5.1 20.5 12.5 25.0 4.8 9.5 41.7 16.7 11.1 11.1 47.4 21.0 10.0 0.0 17.0 15.5 C4AI Command R+ 104B 5.1 18.0 4.2 25.0 0.0 7.9 5.6 11.1 0.0 0.0 5.3 15.8 10.0 0.0 3.5 12.5 Codestral 22B Code 18.0 7.7 25.0 20.8 17.5 9.5 50.0 16.7 22.2 0.0 42.1 15.8 10.0 0.0 26.5 11.5 Mixtral-8x7B-v0.1 46B MoE 0.0 10.3 0.0 16.7 0.0 6.4 0.0 22.2 0.0 0.0 0.0 15.8 0.0 0.0 0.0 11.5 Llama-3 8B 18.0 10.3 20.8 0.0 11.1 7.9 25.0 13.9 11.1 11.1 26.3 21.0 0.0 0.0 17.0 9.5 WizardMath 7B Math 7.7 10.3 12.5 16.7 4.8 7.9 13.9 5.6 0.0 11.1 15.8 10.5 0.0 10.0 8.5 9.5 InternLM2-Math-Plus 7B Math 12.8 7.7 8.3 16.7 6.4 7.9 13.9 8.3 11.1 0.0 21.0 10.5 0.0 20.0 10.5 9.5 DeepSeek-V2-Lite 16B MoE 7.7 10.3 4.2 8.3 1.6 9.5 2.8 5.6 0.0 0.0 10.5 10.5 0.0 10.0 4.0 8.5 WizardLM-2 7B 12.8 5.1 16.7 12.5 6.4 6.4 25.0 11.1 0.0 0.0 21.0 15.8 0.0 0.0 13.0 8.0 Llama-2 70B 15.4 7.7 16.7 8.3 4.8 4.8 11.1 5.6 0.0 0.0 15.8 21.0 10.0 0.0 10.5 7.0 Aya-23 35B 0.0 5.1 0.0 12.5 0.0 6.4 0.0 5.6 0.0 0.0 0.0 15.8 0.0 0.0 0.0 7.0 Mistral-v0.3 7B 0.0 12.8 0.0 8.3 4.8 3.2 2.8 5.6 0.0 0.0 5.3 10.5 0.0 0.0 2.5 6.5 StarCoder2 15B Code 5.1 10.3 8.3 8.3 7.9 7.9 36.1 2.8 33.3 0.0 52.6 0.0 0.0 10.0 17.5 6.5 InternLM2 7B 7.7 7.7 16.7 4.2 3.2 7.9 11.1 2.8 0.0 0.0 26.3 5.3 0.0 0.0 9.0 5.5 DeepSeek-Coder-V1 33B Code 2.6 10.3 8.3 4.2 3.2 3.2 13.9 8.3 0.0 0.0 21.0 0.0 0.0 0.0 7.0 5.0 WizardCoder 33B Code 18.0 2.6 20.8 8.3 6.4 4.8 27.8 5.6 0.0 11.1 21.0 0.0 10.0 0.0 15.5 4.5 Aya-23 8B 0.0 10.3 0.0 8.3 0.0 3.2 0.0 0.0 0.0 0.0 0.0 5.3 0.0 0.0 0.0 4.5 Llama-2 7B 2.6 2.6 4.2 0.0 1.6 6.4 2.8 0.0 0.0 0.0 5.3 0.0 0.0 10.0 2.5 3.0 Gemma-1 2B 5.1 2.6 4.2 4.2 1.6 6.4 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 3.0 Gemma-1 7B 5.1 5.1 8.3 4.2 1.6 4.8 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.5 3.0 Table 8: Results of Chain-of-Thought and Program-of-Thought prompting on the development set of FinanceMATH. We select the most recent version as of July 5, 2024, for each model. We use average Accuracy using CoT prompting as the ranking indicator of model performance. Numbers underscored indicate that models with PoT prompting achieves better results than with CoT prompting. 
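As the Limitations section notes, the accuracies above depend on extracting a final answer from free-form model output and matching it against the ground truth with rule-based checks. The snippet below is a minimal, hypothetical sketch of such numeric matching for a CoT answer ending with "Therefore, the answer is ..."; the function names, the extraction regex, and the relative tolerance are assumptions for illustration, not the benchmark's released evaluation code.

```python
import re

def extract_final_number(cot_output: str):
    """Take the last numeric token from a CoT answer such as
    '... Therefore, the answer is 27.9.' (assumed answer format)."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", cot_output.replace(",", ""))
    return float(numbers[-1]) if numbers else None

def numeric_match(pred, gold: float, rel_tol: float = 0.01) -> bool:
    """Rule-based exact match with a small relative tolerance (assumption)."""
    if pred is None:
        return False
    if gold == 0:
        return abs(pred) <= rel_tol
    return abs(pred - gold) / abs(gold) <= rel_tol

# Example usage
output = "GDP = 20.0 + 4.1 + 4.2 - 0.4 = 27.9. Therefore, the answer is 27.9."
print(numeric_match(extract_final_number(output), 27.9))  # True
```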
Figure 8: Relationship between execution rate and accuracy across different LLMs with PoT prompting on the test set. (Scatter plot; x-axis: Execution Rate (%), y-axis: Accuracy (%); one labeled point per evaluated model.)
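Figure 8 relates two quantities under PoT prompting: the fraction of generated programs that run without error (execution rate) and final-answer accuracy. Below is a minimal sketch of how such statistics could be collected; the helper names and the bare `exec`-based evaluation are illustrative assumptions rather than the paper's actual harness, which would need to sandbox untrusted code and enforce timeouts.

```python
def run_pot_program(program_text: str):
    """Execute a generated program that defines solution() and return (answer, ran_ok).
    Sketch only: real evaluation must sandbox untrusted code and add timeouts."""
    namespace = {}
    try:
        exec(program_text, namespace)        # assumes the model emitted a solution() function
        return namespace["solution"](), True
    except Exception:
        return None, False

def execution_rate_and_accuracy(programs, golds, answers_match):
    """answers_match(pred, gold) -> bool, e.g. a tolerance-based numeric comparison."""
    executed = correct = 0
    for program, gold in zip(programs, golds):
        answer, ran_ok = run_pot_program(program)
        executed += ran_ok
        correct += ran_ok and answers_match(answer, gold)
    n = len(programs)
    return executed / n, correct / n
```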
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12859–12870 August 11-16, 2024 ©2024 Association for Computational Linguistics API-BLEND: A Comprehensive Corpora for Training and Benchmarking API LLMs Kinjal Basu∗, Ibrahim Abdelaziz∗, Subhajit Chaudhury, Soham Dan, Maxwell Crouse, Asim Munawar, Vernon Austel, Sadhana Kumaravel, Vinod Muthusamy, Pavan Kapanipathi, and Luis A. Lastras IBM Research {kinjal.basu, ibrahim.abdelaziz1, subhajit, soham.dan, maxwell.crouse, asim}@ibm.com [email protected], {austel, vmuthus, kapanipa, lastrasl}@us.ibm.com Abstract There is a growing need for Large Language Models (LLMs) to effectively use tools and external Application Programming Interfaces (APIs) to plan and complete tasks. As such, there is tremendous interest in methods that can acquire sufficient quantities of train and test data that involve calls to tools / APIs. Two lines of research have emerged as the predominant strategies for addressing this challenge. The first has focused on synthetic data generation techniques, while the second has involved curating task-adjacent datasets which can be transformed into API / Tool-based tasks. In this paper, we focus on the task of identifying, curating, and transforming existing datasets and, in turn, introduce API-BLEND, a large corpora for training and systematic testing of tool-augmented LLMs. The datasets mimic real-world scenarios involving API-tasks such as API / tool detection, slot filling, and sequencing of the detected APIs. We demonstrate the utility of the API-BLEND dataset for both training and benchmarking purposes1. 1 Introduction Large Language Models (LLMs) have shown remarkable abilities across a variety of Natural Language Understanding (NLU) tasks (Min et al., 2023), e.g., text generation (Brown et al., 2020; Radford et al., 2019), summarization (Zhang et al., 2020; Beltagy et al., 2020), and mathematical reasoning (Imani et al., 2023). There has been strong recent interest in enabling LLMs to call APIs or external tools (such as calculators, calendars, or web searches (Hao et al., 2023; Qin et al., 2023; Tang et al., 2023a)) to accomplish high level tasks like booking a hotel, reserving a table, and automating a job requisition tasks. These higher-level tasks are generally conversational and complex. However, in *These authors contributed equally to this work 1API-BLEND data generation code can be accessed here: https://github.com/IBM/API-BLEND {   Input: "Given the APIs and Parameters below, sequence them in order...Here are some examples...Query: Can you help me search artificial intelligence on wikipedia?\nAnswer:"   Gold Output: "Wiki(keyword = \"artificial intelligence\")" } Wiki(keyword = \"artificial intelligence\") SearchEngine(query = artificial intelligence) "#Example 4\n<conversation>...." ToolLLaMA-2-7B Lynx-7B LLaMA-2-7B (APIBlend) Figure 1: Top: an example from the API Bank-Level1 dataset (OOD) that showcases LLaMA-2-7b fine-tuned with API-BLEND generates the correct API and parameter, whereas the other models hallucinate. Bottom: performance comparison among three models of similar sizes; two recent tool-augmented models (Lynx and ToolLLaMA-2-7B) and a LLaMA-2-7B model trained with API-BLEND (API-BLEND-LLaMA-2-7B), which significantly outperforms the other two models. 
order to perform such complex tasks, LLMs should be able to perform simpler tasks with APIs such as (a) API detection: based on a user query, correctly choose which API to call; (b) Slot filling (in this paper, slot and input parameters are used interchangeably): given the API, either extract the slots/parameters from the user utterances or request more information from the user to fill the required parameters of the detected API; and (c) Sequencing: given an utterance specifying a task, write the sequence of APIs that needs to be called to accomplish the task. Data for the above-mentioned API tasks, both for training and benchmarking LLMs, has been scarce.
Figure 2: API-BLEND Datasets: 10 datasets, 6 curated as part of this paper and 4 off-the-shelf datasets used for out-of-domain testing (LLM-Assisted Generation: SeqSGD, SeqMultiWOZ; Grammar-Based Generation: SeqATIS, SeqSNIPS, SeqToolQA, SeqTopV2; Off-the-Shelf Usage: ToolLLM, API Bank, ToolBench, ToolAlpaca).
Addressing this issue, in the recent past, most approaches have relied on synthetically generated data for training API-augmented LLMs. For instance, ToolLLM (Qin et al., 2023) produces multi-sequence REST-API data sourced from GPT-4 (Achiam et al., 2023), while datasets like Gorilla (Patil et al., 2023) utilize synthetic, single-sequence API data, specifically Deep Learning libraries' APIs, generated from language models. Although generating synthetic data from LLMs offers a cost-effective means of obtaining substantial training data, it suffers from several drawbacks. First, data generation methods are plagued by critical issues such as bias inherent in the sampling model and a lack of diversity in the training dataset. Previous work has shown that synthetically generated data suffers from a lack of diversity (Gupta et al., 2023), and that diverse data can improve out-of-domain (OOD) generalization (Yu et al., 2022). Consequently, models trained on such data can overfit on in-distribution APIs and struggle to generalize to OOD APIs that were not encountered during training, restricting the broad applicability of LLMs for tool usage. In addition, existing datasets have primarily covered API detection (single and multiple) and slot filling in different settings, whereas sequencing, a prominent task for performing higher-level human tasks using APIs, has rarely been the focus of existing works. Lastly, datasets in domains such as digital assistants and semantic parsing that are related to API tasks and are human-annotated have gone unnoticed in the literature due to the emergence of synthetic data generation techniques.
In light of the above issues, we have developed an API-focused training dataset that leverages a hybrid approach for data generation. It is built upon five human-annotated datasets with LLM-assisted generation, comprising over 150k, 17k, and 11k train, development, and test instances. The transformation primarily focuses on sequencing, including API detection and slot filling, due to the sparsity and importance of sequencing data for training models. Furthermore, these datasets are collected from different domains such as semantic parsing, dialog, and digital assistants, resulting in a higher diversity of API data. We show that models trained on this diverse dataset yield significantly better OOD generalization performance compared to other state-of-the-art methods, with an example shown in Figure 1. As an evaluation/benchmarking dataset, we include five different existing benchmarks for OOD evaluation.
In conclusion, we release API-Blend, a comprehensive API training, and benchmarking dataset, comprising of 10 datasets (5 for training, and 5 for OOD testing), see Figure 2. 2 Related Work 2.1 Tool-Usage by LLMs Many recent works (Komeili et al., 2022; Thoppilan et al., 2022; Gao et al., 2023; Schick et al., 2023) have explored how to address the susceptibility of current LLMs to certain errors (e.g., arithmetic (Patel et al., 2021)) through the use of external tools. Such tools can be called by an LLM to provide itself with access to up-to-date information (Komeili et al., 2022; Schick et al., 2023), perform mathematical operations (He-Yueya et al., 2023), and even execute formal programs (Gao et al., 2023). Early approaches to general-purpose training of LLM tool-use leveraged large amounts of humanannotated data (Komeili et al., 2022; Thoppilan et al., 2022). The difficulty in scaling these approaches was addressed by later works, which utilized self-supervision (Schick et al., 2023; Parisi et al., 2022) and few-shot prompting (Yao et al., 2022). The prompting framework of (Yao et al., 2022) has become widely used when augmenting LLMs with tools, with many follow-up works exploring how to improve its cost-effectiveness (Xu et al., 2023a), performance (Shinn et al., 2023; Yang et al., 2023), and data generation quality (Qin et al., 2023; Tang et al., 2023b). The utility of tool-calling itself has been explored with many standard benchmarks for question answering (Saikh et al., 2022; Yang et al., 2018), mathematical reasoning (Cobbe et al., 2021), machine translation (Scarton et al., 2019; Lewis et al., 2020), and planning (Shridhar et al., 2020). While useful to serve as a comparison against taskspecific, supervised methods, it is unclear to what extent these datasets actually require the usage of tools. As observed by (Zhuang et al., 2023), such benchmarks do not adequately distinguish be12860 tween problems that can be solved using only an LLM’s internal knowledge and those that can only be solved through tool calls. 2.2 API Datasets The first self-supervised approaches to constructing tool-use datasets (Schick et al., 2023; Parisi et al., 2022) focused on a small set of general-purpose tools. Soon after, tool-use was quickly expanded to general API function calling (Qin et al., 2023; Tang et al., 2023b; Patil et al., 2023), where the volume and diversity of APIs and scenarios were instead emphasized. While all of the aforementioned datasets highlight the number of APIs involved in their respective corpora, they each vary in terms of how those API calls are utilized. For instance, some datasets curate scenarios involving only a single API call (Tang et al., 2023b; Patil et al., 2023; Xu et al., 2023b) while others involve multiple calls (Qin et al., 2023; Hao et al., 2023). In addition, some require actual calls to a real API to solve their problems (Qin et al., 2023; Li et al., 2023a; Xu et al., 2023b), which contrasts with other works that simulate API calls with a prompted LLM (Tang et al., 2023b; Patil et al., 2023). A limitation of the above-listed self-supervised corpora lies in the evaluation of API-use scenarios. Some approaches evaluate based on hallucination rate (Patil et al., 2023) while others rely on a separate LLM to assess the quality of an example (Tang et al., 2023b; Qin et al., 2023). Recent works have focused on this issue, with Farn and Shin (2023) relying on smaller sets of manually collected ground truth annotations and Huang et al. 
(2023) performing manual inspection of generated data.
3 API-BLEND Dataset Curation
We focus on the setting where the input is a single natural language utterance and the output is a sequence of API calls with their parameter names and values. API-BLEND consists of datasets created via the following three approaches: (1) a Language Model Assisted approach where prompts are used based on existing API outputs, (2) a grammar rule-based approach to convert existing semantic parsing and personal assistant notations into API data, and (3) off-the-shelf datasets. Table 1 depicts the statistics of each dataset, and the details of each approach and dataset are given below.
Datasets | Train | Dev | Test | Avg. Seq. Len | Avg. No. Params
SeqATIS | 11,670 | 694 | 774 | 2.13 | 4.85
SeqSGD | 6,782 | 1,092 | 1,567 | 2.44 | 3.5
SeqSNIPS | 39,750 | 2,198 | 2,199 | 1.96 | 5.06
SeqMultiWOZ | 6,816 | 485 | 983 | 2.36 | 3.67
SeqTopV2 | 94,458 | 13,477 | 6,095 | 1.2 | 1.98
Total | 159,476 | 17,946 | 11,618 | |
ToolLLM-G1 | | | 197 | 2.28 |
ToolLLM-G2 | | | 197 | 2.55 |
ToolLLM-G3 | | | 97 | 2.91 |
API Bank-1 | | | 386 | 1.65 | 2.25
API Bank-2 | | | 71 | 1.34 | 2.44
ToolBench-HS | | | 100 | 7.01 | 0.86
ToolBench-B | | | 120 | 9.45 | 0.89
SeqToolQA | | | 358 | 2.42 | 1.45
ToolAlpaca | | | 211 | 1.38 | 2.01
Total | | | 1737 | |
Table 1: API-BLEND Datasets Statistics. The first group of datasets is used for training and in-domain testing, while the second group is used for OOD testing only.
3.1 Language Model Assisted Generation
SeqSGD: We created SeqSGD, a dataset based on the Schema-Guided Dialogue (SGD) dataset (Rastogi et al., 2020), tailored towards API sequence evaluation. SGD contains about 20k annotated conversations between a human and a virtual assistant. These dialogues consist of engagements with various services and APIs across 20 domains, including banks, events, media, calendar, travel, and weather. To convert this dataset, for each conversation, we prompted a pretrained FLAN-T5-XXL model (https://huggingface.co/google/flan-t5-xxl) to convert each API into a request in natural language. We then append the corresponding text of each API to generate the summarized utterance (a minimal sketch of this conversion is shown below). Figure 3 shows an example. We did not summarize the conversation itself, because it also contains API execution results, and using this as input to a summarization model resulted in many noisy details. To make sure the generated text captures all APIs and parameter values, we post-process the dataset to remove any examples where the utterance does not correctly reflect the ground-truth APIs. As a result of this process, we generated 6.8K train, 1.1K validation, and 1.6K test examples, with an average API count of 2.44 and an average parameter count per API of 3.5.
SeqMultiWoz: MultiWoz (Ye et al., 2021) is another multi-domain task-oriented dialogue dataset. Following the same process used to curate SeqSGD from the SGD dataset, we created SeqMultiWoz, another API dataset based on MultiWoz. The resulting dataset includes about 6.8k train, 485 validation, and 1k test samples with an average API count of 2.36 and an average parameter count per API of 3.67.
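The following is a minimal sketch, under stated assumptions, of the LLM-assisted conversion used for SeqSGD and SeqMultiWoz: each gold API call is rendered as a natural-language request with a few-shot prompt to FLAN-T5-XXL, and the per-API texts are appended into a single summarized utterance. The prompt wording, the pipeline settings, and the helper names are illustrative assumptions, not the exact data-generation code.

```python
from transformers import pipeline

# Assumed few-shot prompt; the actual prompt used to build SeqSGD may differ.
FEW_SHOT = (
    "Convert the API call into a natural language request.\n"
    "API: GetWeather(city=Antioch, date=14th of March)\n"
    "Request: Get the weather for Antioch on the 14th of March.\n"
    "API: {api}\nRequest:"
)

generator = pipeline("text2text-generation", model="google/flan-t5-xxl")

def api_to_text(api_name, params):
    """Render one gold API call (name + {param: [values]}) as a natural-language request."""
    api_str = f"{api_name}(" + ", ".join(f"{k}={v[0]}" for k, v in params.items()) + ")"
    result = generator(FEW_SHOT.format(api=api_str), max_new_tokens=64)
    return result[0]["generated_text"].strip()

def summarize_dialogue_apis(api_calls):
    """Concatenate the per-API requests into one summarized utterance."""
    return " ".join(api_to_text(name, params) for name, params in api_calls)

# Example with gold annotations in SGD style:
calls = [("CheckBalance", {"account_type": ["savings"]}),
         ("GetWeather", {"city": ["Antioch"], "date": ["14th of March"]})]
# summarize_dialogue_apis(calls) -> a single utterance covering both API calls
```

Post-processing would then drop examples whose generated utterance does not mention every gold API and parameter value, as described above.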
Human: Could you also tell me about the Antioch weather forecast on the 14th of March? Figure 3: Example of the creation process of seqSGD. Starting from the list of APIs, we use few-shot prompting to generate the summarized single utterance. "API": "RateBook", "Parameters": { "object_name": "rajinikanth: the definitive biography", "rating_value": "one", "best_rating": "6", "rating_unit": "stars" } "API": "SearchScreeningEvent", "Parameters": { "object_type": "movie schedule", "location_name": "b&b theatres" } API Sequence RateBook SearchScreeningEvent Rate (O) rajinikanth: (B-object_name) the (I-object_name) definitive biography (I-object_name) one (I-object_name) out (B-rating_value) of (O) 6 (O) Stars (B-best_rating) and (O) then (O) what (O) s (O) The (O) Movie (B-object_type) Schedule (I-object_type ) For (O) b&b (B-location_name ) Theatres (I-location_name ) Parameters Intents MixSNIPS rate rajinikanth: the definitive biography one out of 6 stars and then what s the movie schedule for b&b theatres SeqSNIPS Figure 4: Example of how SeqSNIPS is created. Using a natural language utterance from MixSNIPS and the flat list of slots, we convert it into a sequence of API calls, each with a dictionary of parameter names and values. idation, and 1k test samples with an average API count of 2.36 and an average parameters count per API of 3.67. 3.2 Grammar-Based Generation SeqATIS and SeqSNIPS: ATIS (Hemphill et al., 1990) is a collection of human-annotated queries about flights. Queries ask for information such as flight numbers, airports, dates, and other relevant details. The dataset also provides a range of semantic labels, including intent and slot values. Intents are the overall goals of the queries, such as “flight query” or “airfare query” while slot values are the specific pieces of information that are being requested, such as “departure city” or “arrival time”. SNIPS (Coucke et al., 2018) is another dataset focused on voice assistants. It consists of humanannotated queries that cover various domains such as weather, music, and calendar. MixATIS and MixSNIPS are multi-intent datasets (Qin et al., 2020) built based on ATIS and SNIPS, respectively. It was created by collecting sentences from the ATIS/SNIPS dataset and connecting them with conjunctions, such as “and”. The resulting data had sentences with 1-3 intents at a probability of 30%, 50%, and 20%, respectively. One issue with using these two datasets for API calling is that they do not indicate which parameters should be associated with which API. For example, as Figure 4 shows, the original MixSNIPS dataset only evaluates model’s ability to detect the two gold intents and which segments of the text are the target slots. To convert MixATIS and MixSNIPS datasets to a sequence of API calls, we divided utterances back to their original single intent utterances to get the corresponding parameters for each API. We then parsed its IOB (Inside/Outside/Beginning) parameter notations to generate the list of API parameter names and values. Now, we merge the utterances back along with the APIs and their parameters to get the sequence of API calls. In this way, we have curated SeqATIS and SeqSNIPS from MixATIS and MixSNIPS, respectively. In SeqATIS, we have around 11.5k train, 700 validation, and 800 test examples having an average API sequence length of 2.13 and an average parameter count per API of 4.85. 
SeqSNIPS, in turn, consists of around 40k train, 2.2k validation, and 2.2k test samples with an average API sequence length of 1.96 and an average parameter count per API of 5.06.
SeqToolQA: ToolQA (Zhuang et al., 2023) is an open-source dataset designed for evaluating tool-augmented LLMs. The dataset tries to address the issue that current tool-based evaluation methods do not distinguish between questions that can be answered using LLMs' internal knowledge and those that require external information through tool use. To do so, it comes with 8 independent datasets (e.g., Kaggle Flights, Coffee, Yelp, Airbnb, etc.) and a set of questions that can be answered only by querying those datasets, hence ruling out the internal knowledge of LLMs. The dataset is provided as a set of template-based questions with their final answers. However, ToolQA does not provide the intermediate set of steps (tools) that need to be executed to generate the answer to a given question. The listing below shows an example:
{
  "qid": "easy-flight-0003",
  "question": "What was the departure time of the DL891 flight from SEA to LAX on 2022-01-22?",
  "answer": "11:54"
}
To address this issue, we propose SeqToolQA, where we used the provided implementation of ToolQA describing how each template question is answered and transformed it into a corresponding set of APIs. We abstracted 17 different APIs covering 6 domains and created a total of 358 examples for testing purposes. We show an example below:
{
  "qid": "easy-flight-0000",
  "question": "What was the departure time of the UA5480 flight from ORD to HSV on 2022-07-06?",
  "apis": [
    "LoadDB[DBName=flights]",
    "FilterDB[Origin=ORD, Dest=HSV, FlightDate=2022-07-06, Flight_Number_Marketing_Airline=5480, IATA_Code_Marketing_Airline=UA]",
    "GetValue[ValueName=DepTime]"
  ],
  "answer": "18:11"
}
SeqTopV2: TopV2 (Chen et al., 2020) is a multi-domain task-oriented semantic parsing dataset comprising examples from eight domains: alarm, event, messaging, music, navigation, reminder, timer, and weather. The total dataset consists of 180k samples, randomly divided into training, development, and test sets for each domain. We followed a straightforward approach to convert this dataset into APIs using its intent ("IN:") and slot ("SL:") notations. Note that this dataset genuinely has a sequence of APIs that has to be followed. For the example "Remind me to email Joy about the details with the new client tonight", we transform it into two APIs, "SEND_MESSAGE" and "CREATE_REMINDER", where "CREATE_REMINDER" has "SEND_MESSAGE" as a prerequisite. The original dataset had a lot of "UNSUPPORTED" notations for examples where there is no matching intent. We excluded these cases from our API dataset. Along with them, we also removed samples that had duplicate utterances, ambiguous intents, and analogous slot names inside the same intent. We call the resulting dataset SeqTopV2, and it has 94K, 13.5K, and 6K samples for the training, development, and testing splits, respectively.
3.3 Off-the-Shelf Usage
ToolBench: This is a subset of ToolBench (Xu et al., 2023b) focused on two domains, HomeSearch and Booking. We did not apply any transformation to these datasets and rather used them "as-is", since they are already in API form.
ToolLLM (Qin et al., 2023) proposes an instruction-tuning tools dataset and a tool-augmented LLM model based on LLaMA-2-7B. The dataset is created synthetically based on ChatGPT and a collection of 16K APIs from RapidAPI. The dataset includes three subsets, G1, G2, and G3, which refer to single-tool, intra-category multi-tool, and intra-collection multi-tool data, respectively. We used those three subsets as-is for out-of-distribution model evaluation.
API Bank (Li et al., 2023b): This is a benchmark designed for evaluating tool-augmented LLMs. It includes 314 tool-use dialogues with 753 API calls to assess existing LLMs' capabilities in planning, retrieving, and calling APIs. It also comes with a larger training dataset of 1.8K dialogues and a model trained on it (Lynx) initialized from Alpaca. In this paper, we use the test sets of API Bank for out-of-distribution evaluation. Since each instance is a dialogue with tools being called in between, we divided each dialogue into multiple test examples, where each example includes a conversation and the sequence of APIs needed thus far.
ToolAlpaca (Tang et al., 2023a): ToolAlpaca is a training and evaluation benchmark that is automatically curated through a multi-agent simulation environment using ChatGPT and GPT-3.5. The corpus contains 3,938 tool-use instances from more than 400 real-world tool APIs spanning 50 distinct categories. Although it comes with a training set, in this paper we have only used the test set for out-of-distribution evaluation.
Datasets | Data Quality (Avg): Intent / Sequence (Intent) / Slot | Annotator Agreement (Avg): Intent / Sequence (Intent) / Slot
SeqATIS | 0.96 / 0.96 / 0.89 | 1.00 / 1.00 / 0.90
SeqSGD | 0.98 / 0.98 / 0.85 | 1.00 / 1.00 / 0.94
SeqSNIPS | 0.93 / 0.94 / 0.89 | 0.98 / 0.96 / 0.94
SeqMultiWOZ | 1.00 / 1.00 / 0.97 | 1.00 / 1.00 / 0.98
SeqTopV2 | 0.97 / 0.97 / 0.89 | 0.98 / 0.98 / 0.90
Table 2: Data Quality with Inter-Annotator Agreement scores for API-BLEND In-Distribution Datasets.
4 Data Quality Assessment
To check the quality of the API-BLEND datasets, we perform human annotation over 50 randomly sampled examples from the 5 in-distribution datasets (at least 2 annotators per dataset). Table 2 reports the data quality and annotator agreement scores. The following steps are taken to measure data quality: first, the annotators score each gold sample (1 if correct, 0 otherwise) on the following 3 metrics: (1) "Intent": whether the intents/APIs in the output are correct to answer the query; (2) "Sequence(Intent)": whether the intents are in the proper sequence; and (3) "Slot": whether the slot names and slot values are correct. Then, we calculate the average scores across all the annotators and present them under the "Data Quality" columns in Table 2. Furthermore, we assess annotator agreement by examining each of the three segments, (1) Intent, (2) Sequence(Intent), and (3) Slot, to determine whether there is unanimous agreement among annotators (assigning a value of 1 for agreement and 0 otherwise). Subsequently, we calculate the average agreement across all these segments. As the API-BLEND data suite has been generated from high-quality datasets from different domains, we see very high scores in our data quality assessment, particularly for the easier "Intent" and "Sequence(Intent)" tasks.
5 Experiments and Results
5.1 Baselines
In our experiments, we have used 9 open-source models as baselines: (1) LLaMA-2-70B (Touvron et al., 2023), (2) Falcon-180B (Almazrouei et al., 2023), (3) LLaMA-2-7B (Touvron et al., 2023), (4) FLAN-T5-XXL (Chung et al., 2022), (5) Falcon-40B (Almazrouei et al., 2023), (6) StarCoder-15B (Li et al., 2023c), (7) MPT-30B (Team et al., 2023), (8) ToolLLaMA-2-7B (Qin et al., 2023), and (9) Lynx-7B (Li et al., 2023b).
5 Experiments and Results

5.1 Baselines

In our experiments, we have used 9 open-source models as baselines: (1) LLaMA-2-70B (Touvron et al., 2023), (2) Falcon-180B (Almazrouei et al., 2023), (3) LLaMA-2-7B (Touvron et al., 2023), (4) FLAN-T5-XXL (Chung et al., 2022), (5) Falcon-40B (Almazrouei et al., 2023), (6) StarCoder-15B (Li et al., 2023c), (7) MPT-30B (Team et al., 2023), (8) ToolLLaMA-2-7B (Qin et al., 2023), and (9) Lynx-7B (Li et al., 2023b). We tested these models in three settings: (1) few-shot testing: we evaluated LLaMA-2-70B, Falcon-180B, and ToolLLaMA-2-7B in a 3-shot mode; (2) instruction fine-tuning on the target dataset: we consider this setting for FLAN-T5-XXL, StarCoder-15B, Falcon-40B, and MPT-30B; and (3) instruction fine-tuning on the combined datasets: we evaluated this setting for all models in (2) along with LLaMA-2-7B, to evaluate whether we can get a single model trained on the plurality of all datasets that still performs well on each individual test set. For the OOD experiments, we used all the fine-tuned models from (3) in conjunction with ToolLLaMA-2-7B and Lynx-7B, which are already fine-tuned with the ToolLLM and API Bank data, respectively.

5.2 Instruction Tuning

In all experiments, we have used the same instructions for training and testing; the instruction template is shown below. Only when evaluating non-fine-tuned models or for the OOD experiments do we also provide 3 ICL examples via the "Here are some examples: {ICL_EXAMPLES}" part of the instruction; otherwise we remove it.

### Instruction Template with ICL Examples ###
Given the APIs and Slots below, sequence them in the order in which they have to be called to answer the following query.
Possible APIs: {INTENT_LIST}
Possible Slots: {SLOT_LIST}
Here are some examples: {ICL_EXAMPLES}
Query: {QUERY}
Answer:

5.3 Settings and Parameters

We used QLoRA (Dettmers et al., 2023) to fine-tune all our models. While fine-tuning the models on targeted datasets, we made sure that the model saw 100k samples during training. In combined-data training, we fine-tuned the models for 2 epochs over the cumulated datasets. In both cases, the batch size was 1, with gradient accumulation steps of 8 and a learning rate of 5e-5. An illustrative fine-tuning configuration is sketched below.
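The fine-tuning setup described in Section 5.3 can be sketched with the Hugging Face transformers and peft libraries. The snippet below is an illustrative configuration under stated assumptions, not the authors' released training script: the base model name, LoRA rank/alpha, and target modules are placeholders, while the batch size, gradient accumulation, and learning rate follow the values quoted above.

```python
# A minimal QLoRA-style setup; model name, target modules, and LoRA rank are
# illustrative placeholders, not values reported in the paper.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # 4-bit base weights (QLoRA)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,   # illustrative values
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Hyper-parameters quoted in Section 5.3: batch size 1, gradient accumulation 8,
# learning rate 5e-5 (2 epochs for the combined-data setting).
training_args = TrainingArguments(
    output_dir="api-blend-qlora",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-5,
    num_train_epochs=2,
)
```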
5.4 Metrics

To perform a fine-grained evaluation of the generated responses, we use two kinds of evaluation metrics: standard information retrieval metrics (precision, recall, and F1 scores) and the Longest Common Subsequence (LCS). We report API F1 and slot/parameter F1 scores, computed by comparing the predicted APIs with the gold ones and the predicted parameters of each API with their gold counterparts. The rationale for using standard metrics such as precision, recall, and F1 scores is that they emphasize exact matches of API and slot names. This decision is based on the fact that APIs are highly specific, and successful execution requires every detail (e.g., name, parameters, input/output format) to align perfectly with the API descriptions. Similarly, to evaluate the model's ability to adhere to the sequence of API calls necessary for responding to the given natural language query, we compute the LCS metric to capture the overlap between the gold and predicted sequences of APIs. For each utterance in the test set, we first identify the longest common subsequence (LCS) between the sequence of APIs in the ground truth and the model's output sequence, and then calculate sequence-level precision and recall. Precision is the length of the LCS divided by the total number of APIs in the model's output sequence, while recall is the length of the LCS divided by the total number of APIs in the ground truth sequence. We then calculate average precision (LCS-Precision) and recall (LCS-Recall) across all sequences, using these values to compute an average F1-score (LCS-F1). A minimal sketch of these computations is given below.
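To make the metrics of Section 5.4 concrete, here is a minimal sketch in plain Python. The API-string parser and helper names are our own illustration (the released evaluation code may differ); parameter F1 can be computed analogously over (API, parameter, value) triples, and corpus-level LCS-Precision/Recall average the per-example values before forming LCS-F1.

```python
# Sketch of the Section 5.4 metrics, assuming API calls are written as
# "Name[key1=value1, key2=value2]"; the parser and helper names are ours.
import re
from collections import Counter

def parse_call(call: str):
    """Split 'FilterDB[Origin=ORD, Dest=HSV]' into ('FilterDB', {'Origin': 'ORD', ...})."""
    m = re.match(r"\s*(\w+)\s*\[(.*)\]\s*$", call)
    if not m:
        return call.strip(), {}
    name, arg_str = m.group(1), m.group(2)
    params = dict(p.split("=", 1) for p in re.split(r",\s*", arg_str) if "=" in p)
    return name, {k.strip(): v.strip() for k, v in params.items()}

def f1(pred, gold):
    """Multiset precision/recall/F1 over two lists of hashable items."""
    overlap = sum((Counter(pred) & Counter(gold)).values())
    p = overlap / len(pred) if pred else 0.0
    r = overlap / len(gold) if gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def lcs_len(pred, gold):
    """Length of the longest common subsequence of two API-name lists."""
    dp = [[0] * (len(gold) + 1) for _ in range(len(pred) + 1)]
    for i, a in enumerate(pred):
        for j, b in enumerate(gold):
            dp[i + 1][j + 1] = dp[i][j] + 1 if a == b else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(pred)][len(gold)]

pred = ["LoadDB[DBName=flights]", "GetValue[ValueName=DepTime]"]
gold = ["LoadDB[DBName=flights]", "FilterDB[Origin=ORD, Dest=HSV]", "GetValue[ValueName=DepTime]"]
pred_apis = [parse_call(c)[0] for c in pred]
gold_apis = [parse_call(c)[0] for c in gold]
api_f1 = f1(pred_apis, gold_apis)
l = lcs_len(pred_apis, gold_apis)
lcs_p, lcs_r = l / len(pred_apis), l / len(gold_apis)
lcs_f1 = 2 * lcs_p * lcs_r / (lcs_p + lcs_r) if lcs_p + lcs_r else 0.0
print(api_f1, lcs_f1)
```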
Table 3: Evaluation results on the in-distribution datasets. Scores are shown as API-F1 | Parameter-F1 | LCS-F1; columns are SeqATIS, SeqSNIPS, SeqSGD, SeqMultiWOZ, SeqTopV2, and the weighted average, which is calculated using the number of test samples in Table 1.
No FT:
Falcon-180B: 0.15|0.02|0.15, 0.39|0.07|0.40, 0.21|0.06|0.21, 0.44|0.25|0.45, 0.08|0.00|0.09, 0.19|0.04|0.20
LLaMA-2-70B: 0.10|0.01|0.11, 0.26|0.03|0.27, 0.10|0.02|0.10, 0.23|0.12|0.25, 0.04|0.00|0.04, 0.11|0.02|0.12
ToolLLaMA-2-7B: 0.29|0.03|0.32, 0.49|0.04|0.47, 0.43|0.05|0.45, 0.78|0.40|0.79, 0.07|0.00|0.08, 0.27|0.05|0.28
FT with the dataset mentioned in the column:
FLAN-T5-XXL: 0.92|0.72|0.92, 0.97|0.90|0.97, 0.98|0.69|0.98, 1.00|0.99|1.00, 0.96|0.83|0.96, 0.97|0.83|0.97
StarCoder-15B: 0.99|0.84|0.99, 0.96|0.87|0.96, 0.98|0.67|0.98, 1.00|0.99|1.00, 0.95|0.78|0.95, 0.96|0.80|0.96
Falcon-40B: 0.92|0.70|0.92, 0.97|0.89|0.97, 0.96|0.62|0.96, 1.00|0.97|1.00, 0.90|0.56|0.91, 0.93|0.67|0.94
MPT-30B: 0.96|0.81|0.96, 0.97|0.90|0.97, 0.98|0.68|0.98, 1.00|0.98|1.00, 0.96|0.84|0.97, 0.97|0.84|0.97
FT with all data:
FLAN-T5-XXL: 0.94|0.72|0.94, 0.96|0.89|0.97, 0.98|0.70|0.98, 1.00|0.97|1.00, 0.97|0.87|0.97, 0.97|0.85|0.97
StarCoder-15B: 0.98|0.81|0.98, 0.96|0.86|0.96, 0.98|0.67|0.98, 1.00|0.96|1.00, 0.96|0.83|0.96, 0.97|0.82|0.97
LLaMA-2-7B: 0.92|0.70|0.92, 0.96|0.88|0.97, 0.97|0.65|0.97, 1.00|0.97|1.00, 0.96|0.85|0.97, 0.96|0.83|0.97
Falcon-40B: 0.90|0.67|0.90, 0.96|0.87|0.97, 0.94|0.62|0.94, 1.00|0.94|1.00, 0.93|0.66|0.93, 0.94|0.72|0.94
MPT-30B: 0.94|0.77|0.94, 0.97|0.90|0.97, 0.98|0.70|0.98, 1.00|0.97|1.00, 0.97|0.87|0.97, 0.97|0.85|0.97

5.5 In-Distribution Evaluation Results

No Fine-tuning evaluation: The first experiment we did was to check how state-of-the-art open LLMs perform in this setting. In particular, we evaluated LLaMA-2-70B and Falcon-180B using 3-shot prompting. We also considered ToolLLaMA-2-7B (Qin et al., 2023), a LLaMA-2-7B-based model trained on API datasets generated using ChatGPT based on specifications from RapidAPI. Table 3 shows the evaluation results on the five in-distribution datasets: SeqATIS, SeqSNIPS, SeqSGD, SeqMultiWOZ, and SeqTopV2. On all datasets, the three non-fine-tuned models manage to get some of the APIs right but fail to get the parameters needed to call such APIs.

Fine-tuning on One Dataset: In this experiment, we fine-tune the baselines discussed above on each dataset and test them on the corresponding test split. We evaluated four models here: FLAN-T5-XXL, StarCoder-15B, Falcon-40B, and MPT-30B. Ideally, this setting should give the best performance, since the training data is focused on one dataset and its APIs. As shown in Table 3, all models achieve very good performance (>90%) at detecting the right APIs, with all models reaching 100% API-F1 scores on the SeqMultiWOZ dataset. We hypothesize that this high performance is because the number of APIs in most datasets is not very large (e.g., 12 APIs for SeqMultiWOZ). Detecting the correct set of parameter names and values is a more challenging problem for most models, with performance being lowest on the SeqSGD dataset. The weighted averages suggest that the MPT-30B model does slightly better than the other models.

Fine-tuning on All Training Datasets: In this setting, we combine all training datasets and fine-tune the five baseline models on them. The goal of this experiment is to check whether we can get a generic model that works well when tested on each individual dataset. We see in Table 3 that, on SeqATIS and SeqMultiWOZ, models trained on the combined training data achieve lower performance compared to models trained on the individual dataset. Performance on SeqSNIPS was similar in both settings, while models trained on the combined data achieved better performance on SeqSGD and SeqTopV2. The average scores suggest that all models achieved better performance when trained on the combined datasets compared with the single-dataset training.

Table 4: Evaluation results on the out-of-distribution (OOD) datasets. Scores are shown as API-F1 | Parameter-F1 | LCS-F1, except for the ToolLLM subsets, which are API-only and therefore have no Parameter-F1 score (API-F1 | LCS-F1). A dash marks a model that was not evaluated on that test set (FLAN-T5-XXL on ToolLLM; see the discussion below). All models are prompted with 3-shot examples. The weighted average scores are calculated using the number of test samples in Table 1. Columns: Falcon-40B, FLAN-T5-XXL, MPT-30B, LLaMA-2-7B, StarCoder-15B (fine-tuned with all API-BLEND data); ToolLLaMA-2-7B, Lynx-7B (tool-augmented LLMs).
ToolLLM-G1: 0.48|0.47, —, 0.11|0.11, 0.07|0.07, 0.32|0.32, 0.12|0.12, 0.43|0.44
ToolLLM-G2: 0.48|0.47, —, 0.24|0.23, 0.09|0.08, 0.53|0.53, 0.01|0.01, 0.33|0.34
ToolLLM-G3: 0.50|0.49, —, 0.51|0.49, 0.19|0.20, 0.49|0.48, 0.16|0.16, 0.50|0.50
API Bank-1: 0.42|0.15|0.44, 0.57|0.12|0.59, 0.59|0.21|0.62, 0.52|0.16|0.55, 0.49|0.15|0.55, 0.11|0.04|0.12, 0.31|0.11|0.33
API Bank-2: 0.37|0.16|0.39, 0.51|0.11|0.53, 0.49|0.20|0.52, 0.38|0.14|0.40, 0.45|0.13|0.48, 0.05|0.03|0.05, 0.19|0.09|0.20
ToolBench-HS: 0.95|0.77|0.92, 0.42|0.16|0.43, 0.91|0.76|0.80, 0.69|0.41|0.54, 0.90|0.81|0.89, 0.59|0.35|0.56, 0.77|0.55|0.76
ToolBench-B: 0.89|0.76|0.76, 0.36|0.09|0.33, 0.85|0.72|0.78, 0.76|0.49|0.56, 0.44|0.34|0.40, 0.73|0.52|0.63, 0.57|0.47|0.48
SeqToolQA: 0.27|0.02|0.28, 0.18|0.00|0.19, 0.48|0.02|0.51, 0.50|0.00|0.53, 0.24|0.01|0.26, 0.14|0.00|0.14, 0.27|0.02|0.29
ToolAlpaca: 0.41|0.11|0.41, 0.54|0.13|0.55, 0.51|0.12|0.52, 0.46|0.13|0.46, 0.47|0.13|0.49, 0.18|0.02|0.20, 0.32|0.04|0.34
Weighted Avg.: 0.47|0.21|0.46, 0.42|0.09|0.43, 0.49|0.23|0.49, 0.41|0.16|0.40, 0.44|0.17|0.46, 0.18|0.09|0.18, 0.37|0.14|0.38

5.6 Out-Of-Distribution Evaluation

To check the generalizability of the different API models, we measure the performance of all models on five out-of-distribution test sets: ToolLLM, API Bank, ToolBench, our SeqToolQA, and ToolAlpaca. Some test sets have subcategories: ToolLLM has G1, G2, and G3; API Bank has L1 and L2; and ToolBench has HS and B, for Home Search and Booking, respectively. In our test suite, ToolLLM is the only API-only test set; it does not contain any parameters. The prompts for these runs follow the template in Section 5.2 with three in-context examples; a sketch of how such a prompt can be assembled is shown below.
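As referenced above, the OOD runs prompt each model with the Section 5.2 instruction plus three in-context examples. The sketch below shows one way such a prompt could be assembled; the helper name and the ICL-example formatting are assumptions for illustration, not the exact format used by the authors.

```python
# Assembling the Section 5.2 instruction with 3 in-context examples, as used for
# the OOD runs; the example texts and helper name here are illustrative only.
TEMPLATE = (
    "Given the APIs and Slots below, sequence them in the order in which they "
    "have to be called to answer the following query.\n"
    "Possible APIs: {intent_list}\n"
    "Possible Slots: {slot_list}\n"
    "Here are some examples: {icl_examples}\n"
    "Query: {query}\n"
    "Answer:"
)

def build_prompt(intents, slots, icl_pairs, query):
    # Each ICL pair is (query, gold API sequence); three pairs give a 3-shot prompt.
    icl_text = "\n".join(f"Query: {q}\nAnswer: {a}" for q, a in icl_pairs)
    return TEMPLATE.format(
        intent_list=", ".join(intents),
        slot_list=", ".join(slots),
        icl_examples="\n" + icl_text,
        query=query,
    )

prompt = build_prompt(
    intents=["LoadDB", "FilterDB", "GetValue"],
    slots=["DBName", "Origin", "Dest", "ValueName"],
    icl_pairs=[("What was the departure time of the DL891 flight from SEA to LAX on 2022-01-22?",
                "LoadDB[DBName=flights] FilterDB[Origin=SEA, Dest=LAX] GetValue[ValueName=DepTime]")] * 3,
    query="What was the departure time of the UA5480 flight from ORD to HSV on 2022-07-06?",
)
print(prompt)
```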
In this experiment, we use the 5 models (i.e., FLAN-T5-XXL, StarCoder-15B, Falcon-40B, MPT-30B, and LLaMA-2-7B) that are fine-tuned with our combined data. In addition to these models, we also evaluated ToolLLaMA-2-7B from ToolLLM (Qin et al., 2023) and Lynx-7B from API Bank (Li et al., 2023b). In this experiment, we compare performance with 3-shot in-context learning examples for all the models.

Table 4 showcases our OOD evaluation results. The models fine-tuned with API-BLEND data perform better than the other tool/API-augmented models. This is because our models achieve better generalizability, as they have seen diverse sets of API data from different domains during fine-tuning. Falcon-40B and StarCoder-15B show better performance on our API-only test set ToolLLM (we did not evaluate FLAN-T5-XXL on ToolLLM due to its maximum sequence length limit), whereas FLAN-T5-XXL and MPT-30B do well on API Bank and ToolAlpaca. Even though ToolLLaMA-2-7B and Lynx-7B are from ToolLLM and API Bank, respectively, they still perform poorly; in their papers, different metrics were used, e.g., pass rate, which measures how often the model reaches an answer, and win rate, which uses ChatGPT as a judge. On both ToolBench datasets, Falcon-40B outperforms the others. On SeqToolQA, even though the models score some points on API detection, they all perform poorly at detecting the parameter names and values, which leads to low Parameter-F1 scores. This is because the parameter values in SeqToolQA contain code, SQL, math functions, etc., which the models have not seen in any training data, and these special slot values are not trivial to generate from only a few ICL examples.

5.7 Qualitative Studies

We also performed extensive studies on the outputs generated by the models in our experiments. In this section, we discuss our findings on the failure cases, along with some common mistakes made by the models. We found that, in most cases, parameter names and values are the main reason for poor slot detection on both in-distribution and out-of-distribution test sets. We provide samples for each case for better understanding. Note that most of the provided examples contain a "gold_output" and a "predicted_output", and we show only the erroneous part (sub-string) of the original outputs for brevity.

5.7.1 Unnormalized Slot-Values

In an ideal scenario, the parameter values should be extracted by the model exactly as they appear in the query while generating the text. However, the model sometimes extracts only a sub-part of a value or represents it in a different form after extracting it. In a human evaluation, we would consider such generated text to match the gold, but when checked programmatically it is counted as a mismatch, and we have not found a good way to normalize the parameter values. The following examples capture some of these unnormalized parameter-value mismatches. In the first example, the month and the day in the predicted output are repeated. The predicted output of the second one contains only the city name, whereas the gold contains the city and the state. In the final example, even though the intent and slot values are correct, a different parameter format is used to represent them. We plan to investigate these issues further, but we leave this for future work.
### SeqSGD ###
{ "gold": "March 3rd, this Sunday", "predicted": "March 3rd, the 3rd" }
{ "gold": "NYC, New York", "predicted": "NYC" }
### ToolBench ###
{ "gold": "Date(3, 9, 2023)", "predicted": "Date(year=2023, month=3, day=9)" }

5.7.2 Semantically Similar Slot Names in the API Specification

In our instructions, we provide the list of possible APIs and parameters to be used by the model while answering the queries. We extract these APIs and parameters from the original dataset's API specification, if any. However, we found that in some cases the parameter names are semantically very similar across the datasets. Here are some examples from the SeqSGD dataset: (1) leaving_date and departure_date; (2) leaving_time and departure_time; (3) origin, from_city, and from_location; and (4) destination, to_city, and to_location. It often happens that the parameter values are correct in the generated text but the parameter names do not exactly match the gold, even though they are very close. Following are some examples of such cases.

### SeqSGD ###
{ "gold": "destination_airport = ATL", "predicted": "destination = ATL" }
{ "gold": "show_type = imax", "predicted": "theater_name = imax" }
### SeqATIS ###
{ "gold": "cuisine = souvlaki", "predicted": "served_dish = souvlaki" }

6 Conclusion

This paper presents API-BLEND, a large corpus for training and evaluating tool-augmented LLMs, curated from real-world datasets of different domains such as dialogue, semantic parsing, digital assistants, etc. API-BLEND consists of 10 datasets (5 in-distribution and 5 out-of-distribution) comprising over 190k instances (including train, development, and test). We have demonstrated that models fine-tuned with API-BLEND generalize better than other tool-augmented LLMs in OOD experiments. Our findings not only substantiate the importance of API-BLEND for training and benchmarking tool-augmented LLMs but also highlight the necessity of generalizability for improving the API-usage capability of LLMs.

7 Limitations and Risks

A limitation of our benchmark, API-BLEND, is that it does not deal with environment interactions for an API agent. In future work, it will be interesting to explore this setting of an embodied agent, where the API calls effect changes in the grounded environment. Further, our benchmark is focused on English API commands, and in the future it will be interesting to develop a multilingual API-BLEND. Also, in a real-world scenario, multiple correct solutions with different sequences of APIs are possible for a query; however, the API-BLEND dataset does not contain such cases. For multiple correct solutions, we can evaluate the predicted solution using one of the following approaches: (i) if the ground truth contains all the alternate API sequences, then we can use our existing metrics by comparing the predicted sequence with each alternate ground truth; (ii) if the APIs are executable, then we can use accuracy-based metrics over the final model response; and (iii) otherwise, we can leverage an LLM evaluator (as a judge) to calculate the win/pass rate (Qin et al., 2023). We do not perceive any risks associated with our work.

References

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.
Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru, Mérouane Debbah, Étienne Goffinet, Daniel Hesslow, Julien Launay, Quentin Malartic, et al. 2023. The falcon series of open language models. arXiv preprint arXiv:2311.16867. Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150. 12867 Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Xilun Chen, Asish Ghoshal, Yashar Mehdad, Luke Zettlemoyer, and Sonal Gupta. 2020. Low-resource domain adaptation for compositional task-oriented semantic parsing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction-finetuned language models. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168. Alice Coucke, Alaa Saade, Adrien Ball, Théodore Bluche, Alexandre Caulier, David Leroy, Clément Doumouro, Thibault Gisselbrecht, Francesco Caltagirone, Thibaut Lavril, et al. 2018. Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces. arXiv preprint arXiv:1805.10190. Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. 2023. Qlora: Efficient finetuning of quantized llms. arXiv preprint arXiv:2305.14314. Nicholas Farn and Richard Shin. 2023. Tooltalk: Evaluating tool-usage in a conversational setting. arXiv preprint arXiv:2311.10775. Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Pal: Program-aided language models. In International Conference on Machine Learning, pages 10764–10799. PMLR. Himanshu Gupta, Kevin Scaria, Ujjwala Anantheswaran, Shreyas Verma, Mihir Parmar, Saurabh Arjun Sawant, Swaroop Mishra, and Chitta Baral. 2023. Targen: Targeted data generation with large language models. arXiv preprint arXiv:2310.17876. Shibo Hao, Tianyang Liu, Zhen Wang, and Zhiting Hu. 2023. Toolkengpt: Augmenting frozen language models with massive tools via tool embeddings. arXiv preprint arXiv:2305.11554. Joy He-Yueya, Gabriel Poesia, Rose E Wang, and Noah D Goodman. 2023. Solving math word problems by combining language models with symbolic solvers. arXiv preprint arXiv:2304.09102. Charles T. Hemphill, John J. Godfrey, and George R. Doddington. 1990. The ATIS spoken language systems pilot corpus. In Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27,1990. Yue Huang, Jiawen Shi, Yuan Li, Chenrui Fan, Siyuan Wu, Qihui Zhang, Yixin Liu, Pan Zhou, Yao Wan, Neil Zhenqiang Gong, et al. 
2023. Metatool benchmark for large language models: Deciding whether to use tools and which to use. arXiv preprint arXiv:2310.03128. Shima Imani, Liang Du, and Harsh Shrivastava. 2023. Mathprompter: Mathematical reasoning using large language models. arXiv preprint arXiv:2303.05398. Mojtaba Komeili, Kurt Shuster, and Jason Weston. 2022. Internet-augmented dialogue generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8460–8478. Patrick Lewis, Barlas Oguz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2020. Mlqa: Evaluating cross-lingual extractive question answering. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7315– 7330. Minghao Li, Feifan Song, Bowen Yu, Haiyang Yu, Zhoujun Li, Fei Huang, and Yongbin Li. 2023a. Apibank: A benchmark for tool-augmented llms. arXiv preprint arXiv:2304.08244. Minghao Li, Feifan Song, Bowen Yu, Haiyang Yu, Zhoujun Li, Fei Huang, and Yongbin Li. 2023b. Apibank: A benchmark for tool-augmented llms. Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, et al. 2023c. Starcoder: may the source be with you! arXiv preprint arXiv:2305.06161. Bonan Min, Hayley Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heintz, and Dan Roth. 2023. Recent advances in natural language processing via large pre-trained language models: A survey. ACM Computing Surveys, 56(2):1–40. Aaron Parisi, Yao Zhao, and Noah Fiedel. 2022. Talm: Tool augmented language models. arXiv preprint arXiv:2205.12255. Arkil Patel, Satwik Bhattamishra, and Navin Goyal. 2021. Are NLP models really able to solve simple math word problems? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human 12868 Language Technologies, pages 2080–2094, Online. Association for Computational Linguistics. Shishir G Patil, Tianjun Zhang, Xin Wang, and Joseph E Gonzalez. 2023. Gorilla: Large language model connected with massive apis. arXiv preprint arXiv:2305.15334. Libo Qin, Xiao Xu, Wanxiang Che, and Ting Liu. 2020. Agif: An adaptive graph-interactive framework for joint multiple intent detection and slot filling. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1807–1816. Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, et al. 2023. Toolllm: Facilitating large language models to master 16000+ real-world apis. arXiv preprint arXiv:2307.16789. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2020. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8689–8696. Tanik Saikh, Tirthankar Ghosal, Amish Mittal, Asif Ekbal, and Pushpak Bhattacharyya. 2022. Scienceqa: A novel resource for question answering on scholarly articles. International Journal on Digital Libraries, 23(3):289–301. Scarton Scarton, Mikel L Forcada, Miquel Esplà-Gomis, and Lucia Specia. 2019. Estimating post-editing effort: a study on human judgements, task-based and reference-based metrics of mt quality. 
In Proceedings of the 16th International Conference on Spoken Language Translation. Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language models can teach themselves to use tools. ArXiv, abs/2302.04761. Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik R Narasimhan, and Shunyu Yao. 2023. Reflexion: Language agents with verbal reinforcement learning. In Thirty-seventh Conference on Neural Information Processing Systems. Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Cote, Yonatan Bisk, Adam Trischler, and Matthew Hausknecht. 2020. Alfworld: Aligning text and embodied environments for interactive learning. In International Conference on Learning Representations. Qiaoyu Tang, Ziliang Deng, Hongyu Lin, Xianpei Han, Qiao Liang, and Le Sun. 2023a. Toolalpaca: Generalized tool learning for language models with 3000 simulated cases. Qiaoyu Tang, Ziliang Deng, Hongyu Lin, Xianpei Han, Qiao Liang, and Le Sun. 2023b. Toolalpaca: Generalized tool learning for language models with 3000 simulated cases. arXiv preprint arXiv:2306.05301. MN Team et al. 2023. Introducing mpt-7b: a new standard for open-source, commercially usable llms. Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. 2022. Lamda: Language models for dialog applications. arXiv preprint arXiv:2201.08239. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models, 2023. URL https://arxiv. org/abs/2307.09288. Binfeng Xu, Zhiyuan Peng, Bowen Lei, Subhabrata Mukherjee, Yuchen Liu, and Dongkuan Xu. 2023a. Rewoo: Decoupling reasoning from observations for efficient augmented language models. arXiv preprint arXiv:2305.18323. Qiantong Xu, Fenglu Hong, Bo Li, Changran Hu, Zhengyu Chen, and Jian Zhang. 2023b. On the tool manipulation capability of open-source large language models. arXiv preprint arXiv:2305.16504. Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab, Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, and Lijuan Wang. 2023. Mmreact: Prompting chatgpt for multimodal reasoning and action. arXiv preprint arXiv:2303.11381. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369–2380. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R Narasimhan, and Yuan Cao. 2022. React: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations. Fanghua Ye, Jarana Manotumruksa, and Emine Yilmaz. 2021. Multiwoz 2.4: A multi-domain task-oriented dialogue dataset with essential annotation corrections to improve state tracking evaluation. arXiv preprint arXiv:2104.00773. Yu Yu, Shahram Khadivi, and Jia Xu. 2022. Can data diversity enhance learning generalization? In Proceedings of the 29th international conference on computational linguistics, pages 4933–4945. Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020. Pegasus: Pre-training with extracted 12869 gap-sentences for abstractive summarization. 
In International Conference on Machine Learning, pages 11328–11339. PMLR. Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, and Chao Zhang. 2023. ToolQA: A dataset for LLM question answering with external tools. arXiv preprint arXiv:2306.13304.
LoRA-Flow: Dynamic LoRA Fusion for Large Language Models in Generative Tasks

Hanqing Wang∗1 Bowen Ping∗2 Shuo Wang†3 Xu Han3,4,5 Yun Chen†1 Zhiyuan Liu3,4,5 Maosong Sun3,4,5
1Shanghai University of Finance and Economics 2Peking University 3Dept. of Comp. Sci. & Tech., Tsinghua University, Beijing, China 4Institute for AI, Tsinghua University, Beijing, China 5Beijing National Research Center for Information Science and Technology
∗ Equal contribution. † Corresponding authors.

Abstract

LoRA employs lightweight modules to customize large language models (LLMs) for each downstream task or domain, where different learned additional modules represent diverse skills. Combining existing LoRA modules to address new tasks can enhance the reusability of learned LoRA modules, which is particularly beneficial for tasks with limited annotated data. Most prior works on LoRA combination primarily rely on task-level weights for each involved LoRA, making different examples and tokens share the same LoRA weights. However, in generative tasks, different tokens may necessitate diverse skills to manage. Taking the Chinese math task as an example, understanding the problem description may depend more on the Chinese LoRA, while the calculation part may rely more on the math LoRA. To this end, we propose LoRA-Flow, which utilizes dynamic weights to adjust the impact of different LoRA modules. The weights at each step are determined by a fusion gate with extremely few parameters, which can be learned with only 200 training examples. Experiments across six generative tasks demonstrate that our method consistently outperforms baselines with task-level fusion weights. This underscores the necessity of introducing dynamic fusion weights for LoRA combination. Code and models will be publicly available at https://github.com/thunlp/LoRAFlow.

1 Introduction

Large language models (LLMs) have demonstrated superior performance over previous smaller models across a wide range of tasks (OpenAI, 2023; Anil et al., 2023; Touvron et al., 2023a,b), thereby extending the applicability of AI systems to numerous real-world scenarios. Because of their substantial model size, training all parameters of LLMs can often be prohibitively expensive. Therefore, several researchers have proposed a set of parameter-efficient fine-tuning (PEFT) approaches (Houlsby et al., 2019; Li and Liang, 2021). Among these, LoRA (Hu et al., 2022) stands out as one of the most popular due to its efficiency and simplicity. The basic idea of LoRA is to learn an additional module for each downstream domain or task.

Figure 1: Illustration of the proposed LoRA-Flow method. For the token yt at the t-th step, we use a gate that conditions on the prefix y<t to determine the fusion weights. The dynamic fusion weights are intended to control the influence of different LoRA modules, to better cope with various types of tokens in generative tasks. Red and blue rectangles represent the weights assigned to the two involved LoRA modules.

Rather than solely employing a single LoRA to address learned tasks, several recent studies have delved into the potential of combining existing LoRA modules to tackle unseen tasks (Zhang et al., 2023; Huang et al., 2024; Chronopoulou et al., 2023).
This direction holds the potential to substantially enhance the reusability of learned parameters, facilitating the integration of diverse model capabilities. Most existing LoRA fusion approaches employ a task-level weight distribution when combining different LoRA modules. This implies that all test examples and tokens share the same fusion ratio. However, for some complex generative tasks (e.g., solving mathematical problems or generating code according to provided instructions), the LLM may need to dynamically employ various types of capabilities to address the entire problem effectively. Figure 1 illustrates an example, where we have trained a Chinese chat LoRA (i.e., Zh Chat) and an English math LoRA (i.e., En Math), and our objective is to address a Chinese math problem. Intuitively, comprehending the Chinese problem description may rely more on the Chinese chat LoRA, whereas performing the calculation might depend more on the English math LoRA.

In this work, we propose LoRA-Flow, which can dynamically determine the token-level weights of different LoRA modules for generative tasks. At each time step, the fusion weights are generated by a gate module that conditions on the current prefix. The fusion gate comprises an extremely small number of parameters, accounting for only approximately 0.2% of those in a LoRA. We find through experiments that the fusion gate can be learned with only 200 training examples. Figure 1 gives an example of LoRA-Flow. We also observe significant variations in weights across different model layers, suggesting that the impact of LoRA modules differs across layers.

In summary, the contributions of this work can be outlined as follows:
• We propose LoRA-Flow, a method that combines existing LoRA modules with dynamic fusion weights to effectively control the influence of each LoRA module across various generation steps.
• We verify the effectiveness of LoRA-Flow on six different generation tasks, and the results show that LoRA-Flow can consistently outperform the baselines that use task-level fusion weights (e.g., LoRA-Hub).
• By carefully designed analyses from various aspects, we provide deeper insights into the integration of LoRA modules. We consider this journey to be fruitful in constructing a flexible plug-and-play community for LLMs, enabling developers to leverage plugins created by others to build up their own LLM applications.

2 Background

Large Language Models Most recent LLMs utilize a decoder-only architecture, comprising stacked layers of identical structure to form the large-scale model. For a sequence y, an LLM estimates its probability in the following way:

P(y | θ_base) = ∏_{t=1}^{T} P(y_t | y_{<t}; θ_base),   (1)

where y_t denotes the token at the t-th step and y_{<t} is the prefix before t. θ_base represents the parameters of the base LLM.

LoRA LoRA (Hu et al., 2022) is a parameter-efficient fine-tuning method that can achieve comparable performance to full fine-tuning in certain scenarios but at a significantly lower cost. Specifically, for a given matrix W ∈ R^{m×d} within the model, we can learn two low-rank matrices A ∈ R^{r×d} and B ∈ R^{m×r} to approximate the parameter update for W:

ΔW = BA.   (2)

LoRA Fusion Each trained LoRA possesses unique capabilities, and their combination can integrate various skills of LLMs. Formally, we use ΔW_1 = B_1A_1 and ΔW_2 = B_2A_2 to represent two existing LoRA modules. Zhang et al.
(2023) propose to merge two LoRA modules in the following way: h′ = Wx + λ∆W1x + (1 −λ)∆W2x, (3) where λ is a hyper-parameter that needs to be manually tuned. LoRA-Hub (Huang et al., 2024) further improve the fusion method in the following ways: h′ = Wx + (w1B1 + w2B2)(w1A1 + w2A2)x, (4) where the fusion weights w1 and w2 are learned in a few-shot manner. While LoRA-Hub can automatically determine fusion weights for different LoRA modules, the weights across different tokens remain the same for a given task. This shared weight scheme may constrain the expressive capacity of the involved LoRA modules, particularly in complex generative tasks that entail diverse types of context. 3 Approach To handle generative tasks more flexibly, we propose LoRA-Flow, which employs dynamic fusion 12872 Q K V self attention softmax W ⨁ " ∆"! ∆"" weighted sum "# fusion gate feed forward add bias Figure 2: Left: we use layer-wise fusion gates to facilitate dynamic LoRA fusion, which project input hidden states of each layer into fusion weights. Right: for a certain module, the provided fusion weights are used to aggregate the outputs of different LoRA modules. Since our goal is to leverage the abilities acquired by existing LoRA modules to address new tasks, we only train the fusion gate with a few examples, while keeping both the model and the LoRA modules frozen. The number of parameters of the fusion gate is only approximately 0.2% of those in a LoRA. weights at each generation step. Subsequently, we will first introduce the way we calculate the fusion weights in Section 3.1, and then detail how we integrate the weights into the model in Section 3.2. Finally, we describe the training algorithm in Section 3.3. 3.1 Calculating Fusion Weights At the t-th step, we aim to determine the fusion weights using the prefix y<t, which captures the context of the current token. Given that the backbone model has already compressed the context information into hidden vectors, we propose directly utilizing the hidden state at the t-th step to avoid redundant computations. There are three levels of hidden states, each containing different granularities of information: • Step-level hidden states xt: the input word embedding at the t-th step. • Layer-level hidden states xl t: the input representation to the l-th layer at the t-th step. • Module-level hidden states xl,type t : the input vector to a specific module (e.g., the query projection in the self-attention network). As found in some previous studies, the hidden states in various layers may lie in different manifolds (Voita et al., 2019). Therefore, simply using the step-level hidden state xt to compute the fusion weights for the entire model may be not sufficiently effective. On the other side, using module-level hidden states would introduce plenty of new parameters for the fusion gates, as each module requires an independent gate to cope with its input states xl,type t . We thus use the layer-level hidden state xl t. Empirical studies in Section 6.1 indicate that layer-wise fusion weights can outperform the other two types of counterparts. As shown in Figure 2, for the l-th layer, the fusion gate takes in the input representation xl t and then projects it into fusion weights: wl = softmax  W l gatexl t  + bl, (5) where W l gate ∈Rk×d and bl ∈Rk×1. k is the number of LoRA modules. Both W l gate and bl are learnable parameters. 
Given that k is typically substantially smaller than d and r, the number of parameters within the fusion gates is negligible compared to that of the LoRA. For instance, in Llama-2-7b (Touvron et al., 2023b), a LoRA for the entire model contains 117.44M parameters with r = 64, while the gates for combining two LoRA modules only consist of 0.26M parameters in total. 3.2 Integrating Fusion Weights As mentioned in Section 3.1, we use layer-wise fusion weights in LoRA-Flow. Once we get the fusion weights at the l-th layer, we feed the weights wl to all the modules that contain LoRA modules. Different from LoRA-Hub (Huang et al., 2024) that separately composes LoRA A matrices and LoRA B matrices (as shown in Eq (4)), we integrate the outputs of different LoRA modules, treating each LoRA as a complete module. The reason for this operation is to combine LoRA modules with different middle ranks. Let ∆h = [∆h1; · · · ; ∆hk] ∈ Rd×k denote the outputs of all the involved LoRA 12873 modules, the fusion process can be expressed by h′ = h + ∆hwl, (6) where h is the module’s output in the backbone. Figure 2 shows an example. 3.3 Training For a backbone model θbase and a set of learned LoRA modules θLoRA = {θ1 LoRA, · · · , θk LoRA}, we train the fusion gate θfusion on the new task: ˆθfusion = argmax θfusion {L(θtotal|Dnew)} , (7) where θtotal = θbase ∪θLoRA ∪θfusion denotes the total parameters. The likelihood is defined as L(θtotal|Dnew) = N X i=1 P(yi|θtotal), (8) where N is the number of training examples on the new task. We follow LoRA-Hub (Huang et al., 2024) to learn the fusion modules in a few-shot manner, where N is set to 200 in our experiments. We investigate the effect of N in Section 6.4. Since θfusion consists of only a few parameters, these limited training examples are adequate for learning an effective fusion mechanism. 4 Experiment 4.1 Setup Base Model We use Llama-2 (Touvron et al., 2023b) as our base LLM to examine the performance of various LoRA fusion approaches, as it is among the most widely used open-source LLMs. Due to computational constraints, we use Llama-27b by default. LoRA Training To conduct LoRA fusion experiments, we first learn several LoRA modules on the tasks with sufficient supervised data: • Chinese chat (Zh Chat): we use the data released by Lai et al. (2023) to learn the Chinese chat LoRA, which is expected to possess the ability to understand and generate Chinese text. There are 52K training examples in total. • Russian chat (Ru Chat): the training data of Russian chat LoRA is also from Lai et al. (2023), which consists of 52K training examples in Russian. • Spanish chat (Es Chat): the training data is also from Lai et al. (2023), containing 52K training examples in Spanish. • English math (En Math): the training data for English math LoRA is constructed by Yu et al. (2023), which is comprised of 395K mathematical problems in English. • English code (En Code): we train the English code LoRA with the Magicoder dataset (Wei et al., 2023), which consists of 186K code generation problems in English. We integrate LoRA modules into the query, key, value, and output projections within attention networks and the three linear projections in feedforward networks. By default, the LoRA rank r is set to 64 and the value of α is set to 16. For the En code LoRA, we set r=256, as we have observed that the code LoRA with r=64 is inadequate for learning the task effectively. Each LoRA module is trained using 8 A100 80G GPUs, where each mini-batch contains 128 training examples. 
We use the cosine warmup schedule and the peak learning rate is 1e-4. For each task, the LoRA is trained by 3 epochs and the warmup ratio is set to 0.04. LoRA Fusion We evaluate LoRA fusion methods in a few-shot manner. Specifically, we conduct the evaluation on six tasks, including Zh math, Ru math, Es math, Zh code, Ru code, and Es code. For each task, we combine the chat LoRA in the target language and the task LoRA in English. Briefly, we use language LoRA to represent the target-language chat LoRA and task LoRA to denote the math or code LoRA trained in English. For each task, we construct 200 training examples for the few-shot training, which is firstly translated by GPT-3.5 based on the English math or code data and then verified by humans. For fusion experiments on math, we also construct 100 problems as the validation set. For code experiments, we employ humans to create 20 problems as the validation set, since the code generation test examples are more time-consuming to annotate.2 Both LoRA-Hub and the proposed LoRA-Flow are trained with the few-shot data. We use the validation to search the training hyperparameters. The search space of the peak learning rate is {1e-3, 1e4}, and the search space of the batch size is {2, 4, 2See Appendix B for the translation prompt and Appendix C for the details of human annotation. 12874 8}.3 Each fusion module is trained with 5 epochs, using the few-shot training data. Evaluation For fusion experiments on math, we use MGSM (Shi et al., 2023) as the test set, which is a widely used multilingual evaluation benchmark for the math abilities of LLMs. For fusion experiments on code generation, we construct a multilingual version of HumanEval (Chen et al., 2021). The original English problem descriptions in HumanEval are translated by GPT-3.5 into other languages and then verified by humans. We report the accuracy and the pass@1 score on MGSM and HumanEval, respectively. 4.2 Main Results Table 1 shows the results of the involved methods on different generative tasks. For comparison, we also show the performances of the base model and single LoRA. The task LoRA modules trained in English already demonstrate a notable degree of cross-lingual transfer capabilities, outperforming the language LoRA modules on non-English tasks. For example, in the Zh math task, the task LoRA achieves an accuracy of 26.8, whereas the language LoRA achieves only 5.2. On the code generation tasks, the task LoRA also significantly outperforms the language LoRA modules. When combining the language and task LoRA modules, we compare our method with two baselines that use task-level fusion weights. The "Average" baseline refers to simply averaging the outputs of the two involved LoRA modules. As introduced in Section 2, LoRA-Hub learns task-level fusion weights using few-shot training data, similar to the proposed LoRA-Flow. However, LoRA-Flow utilizes dynamic fusion weights, which vary across different time steps and model layers. We use the open-source code released by Huang et al. (2024) to reimplement LoRA-Hub in our experiments. From the experiments, we find that “Average” and LoRA-Hub perform even worse than the single task LoRA, demonstrating that combining different LoRA modules with static weights is not effective enough for complex generative tasks like solving mathematical problems or generating code segments. Specifically, LoRA-Hub only outperforms the single-task LoRA on the Zh code task. 
Trained with the same few-shot training data as LoRA-Hub, LoRA-Flow outperforms the baselines 3Refer to Table 5 in Appendix for the best hyperparameters of each generation task. across all six examined tasks. For instance, LoRAFlow achieves a performance improvement of 2.3 compared to LoRA-Hub (22.6 vs. 20.3) on code generation tasks. This confirms the necessity of employing dynamic weights for generative tasks. 4.3 Results on Larger Model To confirm the effectiveness of LoRA-Flow with larger LLMs, we also conduct experiments on Llama-2-13b. Given that larger models require higher training costs, we only train three LoRA modules for the 13b model, namely Zh Chat, En Math, and En Code. The results of different fusion approaches on the 13b model are shown in Table 2. On the larger model, all the methods involved achieve better results than those on the 7b model. Compared to LoRA-Hub, LoRA-Flow exhibits superior performances on both the math and the code tasks. These results also demonstrate that LoRA-Flow is compatible with models of various sizes. 1 9 17 25 32 -0.20 0.10 0.40 0.70 1.00 1.30 0.0 0.2 0.4 0.6 0.8 1.0 En Math Zh Chat Figure 3: Average fusion weights for the Zh Chat and En Math LoRA modules across different layers. 10% 20% 30% 40% 50% 60% 70% 80% 90% 100% 0.10 0.30 0.50 0.70 0.90 0.0 0.2 0.4 0.6 0.8 1.0 En Math Zh Chat Figure 4: Average fusion weights for the Zh Chat and En Math LoRA modules at different time steps. 5 Analysis To gain deeper insights into the behavior of LoRAFlow, we conduct a comprehensive series of analyses from various perspectives. We first estimate the average fusions at different model layers in Section 5.1, and then investigate how the fusion 12875 Method MGSM (Math) HumanEval (Code) Zh Ru Es Avg. Zh Ru Es Avg. BASE MODEL 4.4 3.2 2.4 3.3 0.0 0.0 2.4 0.8 SINGLE LORA LANG 5.2 3.6 3.6 4.1 12.2 14.0 10.4 12.2 TASK 26.8 32.8 41.2 33.6 18.3 23.2 21.9 21.1 LORA FUSION AVERAGE 12.8 10.4 18.4 13.9 17.1 17.7 18.3 17.7 LORA-HUB 20.8 28.4 36.8 28.7 19.5 21.3 20.1 20.3 LORA-FLOW 33.2 37.6 42.0 37.6 20.7 23.8 23.2 22.6 Table 1: Evaluation results on MGSM and HumanEval. “LANG” denotes the chat LoRA in the target language and “TASK” represents the math or code LoRA trained in English. LoRA fusion methods combine the language LoRA and the task LoRA to accomplish the new task. The best score is highlighted in bold. Method MGSM HumanEval BASE MODEL 6.8 0.0 LANG LORA 7.6 18.9 TASK LORA 36.8 34.7 AVERAGE 16.4 30.4 LORA-HUB 40.0 34.2 LORA-FLOW 41.2 35.4 Table 2: Performance of different fusion methods on Llama-2-13b. The results are reported on the Zh math and Zh code test sets. weights change across time steps in Section 5.2. Finally, we provide a specific case in Section 5.3. 5.1 Fusion Weights across Different Layers Figure 3 shows the average fusion weights at different layers for the two involved LoRA modules (i.e., Zh Chat and En Math) on the Chinese math task. For both the Zh Chat and En Math LoRA, the weights at various layers exhibit notable variations, suggesting distinct fusion schemes across different model layers. A trend is observed where lower layers assign greater weights to the En Math LoRA, while higher layers allocate more weight to the Zh Chat LoRA. This phenomenon can be explained by the following reason: the bottom layers primarily utilize the math LoRA for reasoning, while the top layers rely more on the language LoRA for text generation in the target language. 
In contrast, approaches such as LoRA-Hub, which combines LoRA modules with fixed weights, regard the capabilities of each transformer layer as equivalent, overlooking the discrepancies in their abilities. This constrains the potential of LoRA fusion approaches in generative tasks. 5.2 Fusion Weights at Different Time Steps We also analyze the average fusion weights at different time steps during the decoding process. We split all the tokens generated by the model into 10 bins according to their relative positions. Figure 4 presents the results, where “10%” denotes the initial 10% tokens on the left, and “20%” represents the subsequent 10%-20% tokens. This figure illustrates the average trend of the fusion dynamics. We observe that the weight of the En Math LoRA is increasing, while the weight of the Zh Chat LoRA is decreasing. The results underscore the necessity for dynamic fusion weights in generative tasks, as they show that different time steps require varying fusion weights, reconfirming our intuition. Since the average trend is estimated based on the entire test set, we will present a specific case in the following section. 5.3 Detailed Analysis on Fusion Weights To better investigate the fusion procedure of LoRAFlow, we provide an example in Figure 5, which consists of the specific tokens generated by the model and the corresponding fusion weights for the two involved LoRA modules (i.e., Zh Chat and En Math). As discussed in Section 5.2, the fusion weights exhibit significant variation across different tokens. Notably, due to the incorporation of a bias vector in the fusion gate, as depicted in Eq (5), it is possible for certain tokens to have negative fusion weights. We draw inspiration from the work of Zhang et al. (2023), who observed that negative fusion weights can yield specific effects. On average, the fusion gate assigns higher weights to the En math LoRA. Additionally, we 12876 ! Fusion weights -0.20 0.00 0.20 0.40 0.60 0.80 1.00 1.20 0 10 20 30 40 50 60 70 80 90 100 110 120 130 140 150 160 170 180 190 200 En Math Zh Chat 贾马尔的手机能容纳的照片数量是布列塔尼手机的6 倍。布列塔尼的手机能容纳的最大照片数量是贾马尔在动物园拍摄的鸭子照片 中鸭子数量的50 倍。如果贾马尔的手机能容纳1,800 张照片,贾马尔在动物园拍摄的照片中能看到多少只鸭子? Input " 贾马尔的手机能容纳的照片数量是布列塔尼手机的6 倍,那么布列塔尼手机能容纳的照片数量是1,800 / 6 = 300 张。\n布列塔尼 的手机能容纳的最大照片数量是贾马尔在动物园拍摄的鸭子照片中鸭子数量的50 倍,那么贾马尔在动物园拍摄的照片中能看到的 鸭子数量是300 / 50 = 6 只。\n#### 6\n答案是:6 Output # Figure 5: Detailed analysis for the fusion procedure of LoRA-Flow. The upper subgraph illustrates the fusion weights for each token, while the bottom subgraph details the content. From the fusion weights, we observe three segments where the fusion weights for the Zh Chat LoRA noticeably decrease while those for the En Math LoRA increase. We highlight the tokens corresponding to these segments using green, yellow, and red colors, respectively. Surprisingly, these three segments mainly contain numbers, which are closely related to mathematical reasoning ability. We also offer English translations of the input and output Chinese text in Figure 7 in the Appendix. observe certain troughs in the fusion weights for the Zh Chat LoRA. To better understand this phenomenon, we identify the tokens corresponding to these troughs. As shown in Figure 5, we use the same color to mark the weight throughs and the corresponding text segments. 
Surprisingly, we observed that when the fusion weight for the Zh chat LoRA decreases, the text segments mainly contain numbers and mathematical calculations (e.g., “1800 / 6 = 300”), which align more closely with the math capability learned by the En math LoRA. By examining the case, we observe a strong correlation between the fusion weights with the generated content. When the content involves mathematical reasoning, LoRA-Flow will improve the fusion weight of the En math LoRA. The significant fluctuation in weights suggests that relying on fixed weights throughout the decoding process is impractical, underscoring the necessity of dynamic fusion weights. 6 Discussion 6.1 Ablation Study To further investigate the granularity of the gates in LoRA-Flow, we conduct an ablation study to compare the performance of different types of gates: step-level, layer-level, and module-level gates. As explained in Section 3.1, the step-level gate estiMethod MGSM Zh Ru Es Avg. TASK LORA 26.8 32.8 41.2 33.6 LORA-HUB 20.8 28.4 36.8 28.7 STEP-LEVEL 30.0 32.4 44.0 35.5 LAYER-LEVEL 33.2 37.6 42.0 37.6 MODULE-LEVEL 30.4 34.0 42.4 35.6 Table 3: Ablation study on various levels of fusion gates. mates the fusion weight for each step, which is shared by the entire model. Layer-level gates represent the default setting in LoRA-Flow, computing the fusion weight at each layer. Module-level gates provide a specific fusion weight for each module (e.g., the query projection module in the last layer), offering greater flexibility than layer-level gates but introducing more trainable parameters. Table 3 presents the results of varying gate levels. On average, layer-level fusion gates achieve the highest scores, surpassing both step-level and module-level gates. On the Zh math task, the confidence intervals for the step-, layer-, and modulelevel fusion models are (29.21, 30.78), (32.85, 33.55), and (29.61, 31.18), respectively. Nevertheless, the other two fusion gate types also outper12877 form LoRA-Hub. Specifically, the step-level fusion gate achieves an average score of 35.5, while that of LoRA-Hub is 28.7. These results suggest that while utilizing shared fusion weights across different model layers, employing dynamic fusion methods at various time steps yields superior results compared to using task-level static weights. Since different model layers may have different capabilities, using layer-level fusion gates can further improve the performance. Method MGSM HumanEval FT NEW LORA 18.8 12.2 FT LANG LORA 16.4 15.9 FT TASK LORA 27.6 18.9 LORA-FLOW 33.2 20.7 Table 4: Comparison between few-shot fine-tuning and LoRA fusion. The results are reported on the Zh math and Zh code tasks. 6.2 LoRA Fusion vs. Few-shot Fine-Tuning In scenarios with limited data, combining existing LoRA modules can effectively leverage the knowledge acquired in other tasks. In this section, we compare LoRA-Flow with other few-shot fine-tuning baselines: (1) FT new LoRA: using the same few-shot training examples to learn a new LoRA for the target task; (2) FT lang LoRA: finetuning the language LoRA on the target task; and (3) FT task LoRA: fine-tuning the task LoRA on the target task. The results are shown in Table 4. Our method, which dynamically integrates the language and task LoRA modules, surpasses all three few-shot fine-tuning baselines, indicating the effectiveness of utilizing existing LoRA modules in few-shot scenarios. 
6.3 Generalization across Different Tasks Similar to LoRA-Hub (Huang et al., 2024), we learn the fusion module for each task with fewshot training data. In this study, we also explore the generalizability of fusion gates across various tasks. For example, can the gate learned for the Zh code task, which is trained to combine the Zh chat LoRA and the En code LoRA, effectively combine the Ru chat LoRA and the En math LoRA? Figure 6 shows the results, depicting the performance difference between merging two LoRA modules and employing solely the task LoRA, which is Zh Math Ru Math Es Math Zh Code Ru Code Es Code Zh Math Ru Math Es Math Zh Code Ru Code Es Code 6.4 -1.2 -1.2 -1.8 -1.3 -6.0 -2.4 4.8 -2.0 -0.6 0.0 1.9 0.0 1.2 0.8 0.6 -0.6 1.9 -4.0 1.6 -1.2 1.8 0.6 0.0 0.0 0.0 0.4 0.0 0.6 0.0 0.4 -0.2 -2.8 1.2 -2.5 1.3 6 4 2 0 2 4 6 Figure 6: Generalizability of the fusion gates. Each row represents a training task, and each column represents an evaluation task. The value represents the performance gap between combining the language and task LoRA modules and using the task LoRA only. Please refer to Section 6.3 for detailed explanations. a strong baseline. Specifically, the initial row indicates employing the gate trained with Zh math data for all six generative tasks. Remarkably, numerous values appear to be positive, suggesting that while zero-shot generalization is not a specific consideration during method design, it still demonstrates some degree of generalization capability. Take the third row as an example, training a gate on the Es math task results in a performance enhancement of 0.8 on the training task. Impressively, this Es math gate also leads to improved performance on three other tasks (i.e., Ru math, Zh code, and Es code). Furthermore, the enhancement observed in the Es code task surpasses that achieved in the Es math task. We leave it to future work to further improve the zero-shot generalization capabilities of LoRA fusion methods. 6.4 Few-Shot Sample Size Selection We also investigate the impact of the number of fewshot learning examples (i.e., N). On the Zh math task, when trained with 50, 100, and 200 examples, LoRA-Flow achieves scores of 26.8, 31.6, and 33.2, respectively. This trend indicates that using more data can yield better fusion results. 7 Related Work Module Composition Exploring the reuse of existing modules for new tasks is an appealing research direction. Pfeiffer et al. (2021) combines 12878 adapters (Houlsby et al., 2019) that have been finetuned on various downstream tasks through an additional attention layer. Zhang et al. (2023) defines some specific operations, such as addition and subtraction, to combine existing LoRA modules. The combination weights are tuned on a validation set. Huang et al. (2024) further improves the LoRA combination by automatically optimizing the fusion weights in a few-shot manner. Chronopoulou et al. (2023) combines LoRA modules trained on the summarization task and multilingual unlabeled data to perform multilingual summarization, where the fusion weight is also a hyperparameter that should be tuned. Most previous LoRA fusion methods employ task-level static weights, while our proposed LoRA-Flow can dynamically combine different types of LoRA according to the current context. Mixture-of-LoRA Some recent studies propose to improve the performance of LoRA with the mixture-of-LoRA (MoLoRA) architecture (Zadouri et al., 2023; Dou et al., 2023; Wang et al., 2022), which is similar to mixture-of-expert models. 
The primary objective of these efforts is to overcome the limitations of LoRA in certain downstream tasks, as the expressive capacity of a single LoRA may be restricted by its intermediate rank. The major difference between MoLoRA and our work is that MoLoRA mainly aims to train a better plugin for the backbone model, whereas we aim to leverage pre-trained LoRA modules for new tasks without training a new LoRA. These two approaches are complementary. Our method can also be extended to incorporate existing MoLoRAs or other advanced types of LoRA modules for unseen tasks. 8 Conclusion In this work, we propose LoRA-Flow, a dynamic combination method for LoRA. By assigning dynamic weights to different LoRA modules based on the current context, LoRA-Flow can outperform representative baselines with static task-level fusion weights on six complex generative tasks. 9 Limitation In this study, LoRA-Flow outperforms the fixedweight linear combination method across tasks in various languages. However, our computing resources are limited, constraining us to models no larger than 13 billion parameters. We only conduct experiments combining two adapters, though our method is readily adaptable to incorporate additional adapters by simply integrating more into the model and adjusting the gate’s shape accordingly. Nonetheless, the versatility of our method persists, and we leave the exploration of larger models and combining more adapters to future work. Acknowledgements This project was supported by the National Natural Science Foundation of China (No. 62106138, No. 62236004, No. 62236011), and a grant from the Guoqiang Institute, Tsinghua University. We thank all the reviewers for their constructive and insightful suggestions. References Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, Guy GurAri, Steven Hand, Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jeffrey Hui, Jeremy Hurwitz, Michael Isard, Abe Ittycheriah, Matthew Jagielski, Wenhao Jia, Kathleen Kenealy, Maxim Krikun, Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, YaGuang Li, Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu, Frederick Liu, Marcello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam Moussalem, Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Parrish, Marie Pellat, Martin Polacek, Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby, Ambrose Slone, Daniel Smilkov, David R. 
So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang, Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, and Yonghui Wu. 2023. Palm 2 technical report. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming 12879 Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating large language models trained on code. Alexandra Chronopoulou, Jonas Pfeiffer, Joshua Maynez, Xinyi Wang, Sebastian Ruder, and Priyanka Agrawal. 2023. Language and task arithmetic with parameter-efficient layers for zero-shot summarization. Shihan Dou, Enyu Zhou, Yan Liu, Songyang Gao, Jun Zhao, Wei Shen, Yuhao Zhou, Zhiheng Xi, Xiao Wang, Xiaoran Fan, Shiliang Pu, Jiang Zhu, Rui Zheng, Tao Gui, Qi Zhang, and Xuanjing Huang. 2023. Loramoe: Revolutionizing mixture of experts for maintaining world knowledge in language model alignment. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In Proceedings of the 36th International Conference on Machine Learning. Edward J Hu, yelong shen, Phillip Wallis, Zeyuan AllenZhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations. Chengsong Huang, Qian Liu, Bill Yuchen Lin, Tianyu Pang, Chao Du, and Min Lin. 2024. Lorahub: Efficient cross-task generalization via dynamic lora composition. Viet Lai, Chien Nguyen, Nghia Ngo, Thuat Nguyen, Franck Dernoncourt, Ryan Rossi, and Thien Nguyen. 2023. Okapi: Instruction-tuned large language models in multiple languages with reinforcement learning from human feedback. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). OpenAI. 2023. Gpt-4 technical report. Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, and Iryna Gurevych. 2021. Adapterfusion: Non-destructive task composition for transfer learning. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. 
Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, and Jason Wei. 2023. Language models are multilingual chain-of-thought reasoners. In The Eleventh International Conference on Learning Representations. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023a. Llama: Open and efficient foundation language models. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023b. Llama 2: Open foundation and fine-tuned chat models. Elena Voita, Rico Sennrich, and Ivan Titov. 2019. The bottom-up evolution of representations in the transformer: A study with machine translation and language modeling objectives. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Yaqing Wang, Sahaj Agarwal, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, and Jianfeng Gao. 2022. AdaMix: Mixtureof-adaptations for parameter-efficient model tuning. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 5744–5760, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. 12880 Yuxiang Wei, Zhe Wang, Jiawei Liu, Yifeng Ding, and Lingming Zhang. 2023. Magicoder: Source code is all you need. Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. 2023. Metamath: Bootstrap your own mathematical questions for large language models. Ted Zadouri, Ahmet Üstün, Arash Ahmadian, Beyza Ermi¸s, Acyr Locatelli, and Sara Hooker. 2023. Pushing mixture of experts to the limit: Extremely parameter efficient moe for instruction tuning. Jinghan Zhang, Shiqi Chen, Junteng Liu, and Junxian He. 2023. Composing parameter-efficient modules with arithmetic operations. In Advances in Neural Information Processing Systems. A Best Hyperparameters The best hyperparameters for LoRA fusion are listed in the Table 5. B Translation Prompt Figure 8 and Figure 9 show the prompts we used to translate the Metameth data from English to Chinese using GPT-3.5. The {system prompt} and {user prompt} are set by default. The first prompt is intended to translate the problem identified by the {instruction} of a sample. 
The second prompt is intended to translate the solution identified by the {answer} field of a sample.

C Human Annotation Details

In the process of reviewing and polishing the multilingual data, we employed two annotators for each language. For the construction of the English version of the code validation set, we hired two annotators with over two years of experience in NLP. We contacted them through social media and compensated them at market-standard rates. It has been communicated to all participating human annotators that the gathered data shall only be used for academic research purposes, with an assurance that their personal information will be kept confidential and not disclosed.

C.1 Details of Translated Data Verification

We employed college students majoring in the relevant languages to check and optimize the translated data. The primary areas of review and optimization include:
• Correctness of the mathematical problem-solving process.
• Correctness of the mathematical problem format: whether the expression of formulas and the organization of structures adhere to the original MetaMath template, such as answers following "####" and ending with "the answer is:" in various languages, etc.
• Whether the code part of the code generation problems remains consistent with that in the original data.
• Correctness of the code data format: whether the overall structure remains unchanged, and whether essential parts such as the problem description, problem analysis, and code comments are translated into the corresponding language.
• Overall fluency of the translated sentences.
Each piece of data is sequentially reviewed by two independent annotators.

D Translated Case

To better understand the case study presented in Figure 5, we translate the Chinese question and answer into English; the results are shown in Figure 7.

Tasks:          Zh Math  Ru Math  Es Math  Zh Code  Ru Code  Es Code
batch_size:     4        4        8        4        4        4
learning_rate:  1e-3     1e-3     1e-4     1e-3     1e-3     1e-3
Table 5: The best hyperparameters for LoRA fusion.

Figure 7: Case study of LoRA-Flow with English translations. The figure plots the fusion weights of the En math and Zh chat LoRA modules over the decoding steps, together with the input and output of the case. Input (English translation): "Jamal's phone holds six times as many photos as Brittany's phone. The maximum number of photos Brittany's phone can hold is 50 times the number of ducks in Jamal's photo of ducks at the zoo. If Jamal's phone could hold 1,800 photos, how many ducks could Jamal see in the photo he took at the zoo?" Output (English translation): "Jamal's phone can hold 6 times as many photos as Brittany's phone, so Brittany's phone can hold 1,800 / 6 = 300 photos. The maximum number of photos that Brittany's phone can hold is 50 times the number of ducks in Jamal's photo of ducks at the zoo, so the number of ducks that Jamal can see in the photo at the zoo is 300 / 50 = 6. #### 6 The answer is: 6"
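For completeness, a hedged sketch of how the translation prompts in Figures 8 and 9 (shown next) could be issued programmatically is given below. The use of the OpenAI Python client and the `gpt-3.5-turbo` model name are assumptions for illustration; the paper only states that GPT-3.5 was used, and the exact request code is not part of the released materials.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def translate_field(system_prompt: str, original_text: str,
                    model: str = "gpt-3.5-turbo") -> str:
    """Send one translation request of the kind shown in Figures 8 and 9.

    `system_prompt` is the full instruction block (requirements plus the
    in-context example); `original_text` is the {instruction} or {answer}
    field of a MetaMath sample to be translated.
    """
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system_prompt},
            # The user turn follows the 原文/翻译 ("source/translation") layout of the prompts.
            {"role": "user", "content": f"原文:\n{original_text}\n翻译:\n"},
        ],
        temperature=0.0,  # deterministic output is convenient for data construction
    )
    return response.choices[0].message.content
```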
{system prompt}: 将以下数学题中题干翻译为汉语,你必须要遵循以下几点要求: 1.保持文本中的数字、符号与LaTex转义字符不变。LaTex转义字符是指以反斜 杠开头的字符串,如\frac{1}{2},\cdot 等,这些字符串通常被'$' 符号包围,如 '$x = \frac{1}{2}$'。在翻译过程中将这些符号复制即可。 2.剩余文本内容全部翻译成汉语,特别是人名,人名翻译成汉语中最常见的对 应形式。如将“Joe”翻译成“乔”。 3.在翻译过程中需要保证用词准确与严谨,需要将原文中的数学相关词语准确 专业地翻译成汉语中相应的数学术语,并且尽可能地使得语言自然而流畅。 4.在翻译过程中需要保证内容完整,含义与原文一致,不允许添加或删除任何 信息。 示例: 原文: <text>Gracie and Joe are choosing numbers on the complex plane. Joe chooses the point $1+2i$. Gracie chooses $-1+i$. How far apart are Gracie and Joe's points?<\text> 翻译: 格蕾丝和乔在复平面上选择数字。乔选择了点$1+2i$。格蕾丝选择了$-1+i$。 格蕾丝和乔选择的点有多远? {user prompt}: 原文: {instruction} 翻译:\n Figure 8: Prompt for instruction translation. {system prompt}: 将以下数学题中回答翻译为汉语,你必须要遵循以下几点要求: 1.保持文本中的数字、符号与LaTex转义字符不变。LaTex转义字符是指以反斜 杠开头的字符串,如\frac{1}{2},\cdot 等,这些字符串通常被'$' 符号包围,如 '$x = \frac{1}{2}$'。在翻译过程中将这些符号复制即可。 2.剩余文本内容全部翻译成汉语,特别是人名,人名翻译成汉语中最常见的对 应形式。如将“Joe”翻译成“乔”。 3.在翻译过程中需要保证用词准确与严谨,需要将原文中的数学相关词语准确 专业地翻译成汉语中相应的数学术语,并且尽可能地使得语言自然而流畅。 4.在翻译过程中需要保证内容完整,含义与原文一致,不允许添加或删除任何 信息。 示例: 原文: <text>Dave bought a total of 8 + 6 + 3 = 17 books. Each book cost $6, so Dave spent a total of 17 x $6 = $102 on the books. #### 102 The answer is: 102<\text> 翻译: 戴夫买了总共8 + 6 + 3 = 17本书。 每本书的价格是6美元,所以戴夫在这些书上总共花费了17 x 6 = 102美元。 #### 102 答案是:102 {user prompt}: 原文: {answer} 翻译:\n Figure 9: Prompt for answer translation. 12882
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12883–12895 August 11-16, 2024 ©2024 Association for Computational Linguistics Harder Task Needs More Experts: Dynamic Routing in MoE Models Quzhe Huang1* , Zhenwei An2∗, Nan Zhuang2∗, Mingxu Tao1, Chen Zhang1, Yang Jin1, Kun Xu3, Kun Xu3, Liwei Chen3, Songfang Huang2, Yansong Feng1 1Peking University 2Futurise AI 3Kuaishou Technology {huangquzhe,anzhenwei,zhuangn53}@pku.edu.cn [email protected] [email protected] Abstract In this paper, we introduce a novel dynamic expert selection framework for Mixture of Experts (MoE) models, aiming to enhance computational efficiency and model performance by adjusting the number of activated experts based on input difficulty. Unlike existing MoE approaches that rely on fixed TopK Routing, which activates a predetermined number of experts regardless of the input’s complexity, our method dynamically allocates experts based on the confidence level in expert selection for each input. This allows for more efficient utilization of computational resources, activating more experts for complex tasks requiring advanced reasoning and fewer for simpler tasks. Through extensive evaluations, our dynamic routing method demonstrates substantial improvements over Top2 Routing across various benchmarks, achieving an average improvement of 0.7% with less than 90% activated parameters. Further analysis shows our model dispatches more experts to tasks requiring complex reasoning skills, like BBH, confirming its ability to dynamically allocate computational resources in alignment with the input’s complexity. Our findings also highlight a variation in the number of experts needed across different layers of the transformer model, offering insights into the potential for designing heterogeneous MoE frameworks. The code and models are available at https://github.com/ZhenweiAn/Dynamic_MoE. 1 Introduction To effectively increase the model’s parameter size, researchers have proposed the Mixture of Experts (MoE) framework (Shazeer et al., 2017; Lepikhin et al., 2021). By setting up multiple experts to enhance the model’s overall capacity, MoE models selectively activate a subset of parameters for use, thereby achieving more efficient parameter * Equal Contribution. Songfang Huang and Yansong Feng are the corresponding authors. utilization. With the same number of activated parameters, MoE models substantially outperform dense models in performance, achieving exceptional results in tasks such as QA and machine translation (Kim et al., 2021). Most MoE frameworks adopt a routing mechanism that dispatches a fixed number of experts for every input (Fedus et al., 2022; Du et al., 2022). The most famous method is TopK Routing (Shazeer et al., 2017), which initially calculates the probability of each expert being suited to the current input and then activates the TopK suitable experts. Empirically, previous works (Lepikhin et al., 2021) activate two experts per token, as activating more experts offers limited improvements in model performance but substantially increases training overhead. Most of the subsequent studies (Zoph et al., 2022; Lewis et al., 2021) can be seen as variants of TopK Routing, where different constraints are introduced to ensure that the number of tokens processed by different experts is as balanced as possible. Almost all these efforts activate a fixed number of experts. 
The TopK Routing, though it achieves good performance on downstream tasks, overlooks the different difficulties of inputs. Compared with simpler inputs, more challenging inputs, e.g., tasks that require complex reasoning or logical inference, might need more parameters to solve. Dispatching experts equally across inputs could lead to computational waste on simpler tasks and insufficient computational resources for more difficult ones. To fully leverage the potential of MoE models, we propose a dynamic routing mechanism that adjusts the number of required experts based on the confidence level in the expert selection. When the model deems the currently selected experts insufficient, it activates more experts. Specifically, we first compute a probability distribution for selecting experts. If the highest probability for an expert exceeds a predefined threshold p, indicating high confidence, we activate only that one expert. Otherwise, we progressively add additional experts until the cumulative probability of the selected experts exceeds the threshold p. This approach allows for a dynamic selection of experts, with the number of experts adjusted based on the input's complexity.

Figure 1: Comparison between the TopK routing mechanism and the Dynamic Routing mechanism. (a) Each token selects a fixed K=2 experts with TopK Routing probabilities. (b) In the Dynamic Routing mechanism, each token selects experts with higher routing probabilities until the cumulative probability exceeds the threshold.

Evaluation across multiple common benchmarks has revealed that our method substantially outperforms MoE models based on TopK Routing. Compared with Top2 Routing, our dynamic routing achieves an average improvement of 0.7% with less than 90% activated parameters. Further analysis has shown that our dynamic routing mechanism activates more experts in tasks requiring complex reasoning like BBH (Suzgun et al., 2023), while using fewer experts in relatively easier tasks such as Hellaswag (Zellers et al., 2019), confirming that our method indeed dynamically allocates experts based on the difficulty of the input. Token-level analysis indicates that tokens with ambiguous semantics are more challenging for the model, typically activating more experts. Another interesting finding is that the number of experts needed varies across different layers of the transformer. Lower layers require more experts for combination, while the top layer needs only one. This may relate to the over-thinking phenomenon (Kaya et al., 2019), which is widely observed in deep neural networks.

Our contributions can be summarized as follows:
1. We propose a dynamic routing strategy that can adjust the number of activated experts based on the input difficulty.
2. We empirically validate that our proposed method is efficient in both training and inference, outperforming Top2 Routing while activating fewer experts.
3. We observe that for MoE models, the number of experts that need to be activated varies across different layers. This finding could help design heterogeneous MoE frameworks.

2 Method

In this section, we first briefly introduce the MoE model with the TopK Routing strategy, which activates a fixed number of experts for each token. As TopK Routing ignores the varying difficulty of different inputs and the different requirements for experts at different layers, we propose a dynamic routing mechanism that adjusts the number of activated experts according to the complexity of inputs.
To avoid activating too many parameters through the dynamic routing mechanism, we also introduce a dynamic loss to encourage the model to activate only the necessary experts.

2.1 TopK Routing MoE

In a Transformer model, the MoE layer is applied independently per token and replaces the feed-forward (FFN) layer of the transformer block (Lepikhin et al., 2021). For an MoE layer with N experts, $E = \{e_1, e_2, \dots, e_N\}$, an input $x \in \mathbb{R}^d$ will be sent to the experts, where d is the hidden dimension. The output of the MoE layer is the weighted average of the experts' outputs:

$\mathrm{MoE}(x) = \sum_{i=1}^{N} g_i(x) \cdot e_i(x)$  (1)

where $g_*(x)$ is computed by a routing network that determines the contribution of each expert to the final output. In consideration of computing efficiency, a token is dispatched to a limited number of experts. Thus, for most experts, the corresponding $g_*(x)$ is zero, which means that the token is not dispatched to that expert. To obtain $g_*(x)$, we first compute the probability P of selecting each expert for input x:

$P = \mathrm{Softmax}(W_r \cdot x^{T})$  (2)

where $W_r \in \mathbb{R}^{N \times d}$ is a learnable parameter. P is a vector of size N, and $P_i$ represents the probability of selecting the i-th expert $e_i$ to process x. TopK Routing selects the k experts whose probabilities are the highest k in P. Then the probabilities of the selected experts are normalized and the weights of the remaining experts are set to zero, indicating they are not activated. The corresponding calculation of $g_i(x)$ is as follows:

$g_i(x) = \begin{cases} \frac{P_i}{\sum_{j \in \mathrm{TopK}(P)} P_j}, & i \in \mathrm{TopK}(P) \\ 0, & i \notin \mathrm{TopK}(P) \end{cases}$  (3)

where $\mathrm{TopK}(P)$ returns the indices of the highest k elements in P. TopK Routing was initially proposed by Shazeer et al. (2017), and subsequently, numerous studies have built upon it. The following works (Lepikhin et al., 2021; Zuo et al., 2022) introduce constraints aimed at ensuring a more balanced workload among the experts during training. The core of these works remains to select the most suitable experts for each token under specific constraints, based on the probability distribution P calculated in Equation 2, and the number of experts dispatched for each token is fixed across all these studies. Empirically, the value of k is set to 2, serving as a trade-off between training costs and model capabilities.

2.2 Dynamic Routing MoE

Although the TopK Routing strategy has shown promising performance, its assumption that an equal number of experts should be dispatched for each token overlooks the variability in difficulty across different inputs. Moreover, as a fixed number of experts are activated at every layer of the transformer, this method neglects the differences in representations across layers, potentially requiring a different number of experts for different layers. To address these issues and make use of model parameters more efficiently, we propose a dynamic routing strategy based on model confidence. Unlike TopK Routing, which selects a fixed number of experts, our method allows the model to assess whether the currently selected experts are sufficient. If not, it continues to activate more experts.

Algorithm 1 Expert Selection in Dynamic Routing
Input: The probability of selecting each expert, P; the threshold of confidence, p; the expert set, $\{e_1, \dots, e_N\}$
Output: The activated expert set, S
1: sorted indices I = sort(P, descending order)
2: cumulative_probability = 0
3: for i in I do
4:     cumulative_probability += $P_i$
5:     S = S ∪ $\{e_i\}$
6:     if cumulative_probability > p then
7:         break
8:     end if
9: end for
10: return S
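The two routing schemes can be contrasted in a short PyTorch sketch. This is our own minimal rendering of Equations 1–3, Algorithm 1, and Equation 4 — the function names, tensor shapes, and the vectorised cumulative-sum selection are illustrative assumptions rather than the authors' released code.

```python
import torch

def route_topk(x, router_weight, experts, k=2):
    """TopK Routing (Eqs. 1-3): every token is dispatched to exactly k experts.

    x:             (num_tokens, d) token representations
    router_weight: (num_experts, d) learnable W_r
    experts:       list of callables, each mapping (n, d) -> (n, d)
    """
    probs = torch.softmax(x @ router_weight.t(), dim=-1)        # Eq. 2: (tokens, N)
    topk_probs, topk_idx = probs.topk(k, dim=-1)
    gates = topk_probs / topk_probs.sum(dim=-1, keepdim=True)   # Eq. 3: renormalise over the k experts
    keep = torch.zeros_like(probs, dtype=torch.bool).scatter(-1, topk_idx, True)
    full_gates = torch.zeros_like(probs).scatter(-1, topk_idx, gates)
    return mix_experts(x, experts, keep, full_gates)            # Eq. 1

def route_dynamic(x, router_weight, experts, p=0.4):
    """Dynamic routing: per token, activate the smallest expert set whose
    cumulative probability exceeds p (Algorithm 1); gate with the raw
    probabilities of the selected experts (Eq. 4)."""
    probs = torch.softmax(x @ router_weight.t(), dim=-1)        # Eq. 2
    sorted_probs, sorted_idx = probs.sort(dim=-1, descending=True)
    cum = sorted_probs.cumsum(dim=-1)
    # the expert at sorted position j is kept iff the mass accumulated before it is still <= p
    keep_sorted = (cum - sorted_probs) <= p
    keep = torch.zeros_like(probs, dtype=torch.bool).scatter(-1, sorted_idx, keep_sorted)
    return mix_experts(x, experts, keep, probs * keep)          # Eq. 4

def mix_experts(x, experts, keep, gates):
    """Weighted sum of expert outputs (Eq. 1) over the activated experts only."""
    out = torch.zeros_like(x)
    for e, expert in enumerate(experts):
        mask = keep[:, e]
        if mask.any():
            out[mask] += gates[mask, e].unsqueeze(-1) * expert(x[mask])
    return out
```

The boolean test `(cum - sorted_probs) <= p` keeps exactly the experts that the loop in Algorithm 1 would add before breaking: an expert at sorted position j is selected if and only if the probability mass accumulated before it does not yet exceed p.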
Specifically, we regard that P in Equation 2 reflects the confidence level of input x in selecting different experts. In other words, Pi represents how confident the model is that the ith expert can adequately handle input x. If the highest probability in P is sufficiently large, then we may only need to use the corresponding expert. However, if the highest probability is not large enough, we need to add more experts to increase the reliability of processing x. Algorithm 1 shows how to select activated experts in Dynamic Routing. we first sort the elements in P from highest to lowest, resulting in a sorted indices I. Then we find the smallest set of experts S whose cumulative probability exceeds the threshold p, where p is the threshold that controls how confident the model should be when stopping adding more experts. p is a hyper-parameter whose range is from 0 to 1. The higher the p is, the more experts will be activated. After obtaining the set of activated experts S, the calculation of g∗(x) in Equation 1 is: gi(x) = ( Pi ei ∈S 0, ei /∈S (4) 2.3 Loss Dynamic Loss There is a risk associated with our dynamic routing mechanism: it could assign low confidence to all experts, thereby activating a larger number of experts to achieve better performance. 12885 Suppose P is a uniform distribution and we set the hyper-parameter p to 0.5, then the model would activate up to half of the experts. This goes against the original intention of the MoE framework, which is to scale the model with great efficiency. To prevent dynamic routing from using too many parameters to cheat and losing its ability to selectively choose experts, we introduce a constraint on P. We expect the routing mechanism to select a small set of necessary experts, therefore, we aim to minimize the entropy of the distribution P, ensuring that every token can focus on as less specific experts as possible. Our dynamic loss is designed to encourage the routing mechanism to select the minimal necessary set of experts, which is formalized as: Lossd = − N X i=1 Pi ∗log(Pi) (5) Load Balance Loss MoE models typically require distributed training, where different experts are deployed across various computing nodes. To avoid scenarios where some nodes are fully utilized while others are underutilized, thereby impacting training efficiency, it is generally desirable for the number of tokens processed by different experts to be roughly the same. Furthermore, a previous study (Zuo et al., 2022) has shown that evenly activated experts in an MoE layer can lead to better performance. To achieve balanced loading among different experts, we have also incorporated a loadbalance loss, Lossb, which is widely used in previous works (Lepikhin et al., 2021; Fedus et al., 2022) Lossb = N ∗ N X i=1 fi ∗Qi (6) where fi is the fraction of the tokens choosing expert ei and Qi is the fraction of the router probability allocated for expert ei. For a sequence containing M tokens, fi and Qi are calculated as: fi = 1 M M X j=1 1{ei ∈Sj} (7) Qi = 1 M M X j=1 P j i (8) where Sj is the set of activated experts for token j, which is calculated by Equation ??, and P j is the probability of selecting each experts for token j, calculated by Equation 2. Final Loss Our model is a generative model that uses the next token generation as the training objective. We denote this loss as Losslm. 
Our final loss is a combination of the language model loss, dynamic loss, and load-balance loss: Loss = Losslm + αLossb + βLossd (9) where α and β are hyper-parameters to adjust the contribution of the load balance loss and dynamic loss, respectively. In our experiment, we set α as 1e-2 and β is set as 1e-4. 3 Experiments 3.1 Settings 3.1.1 Training data We use RedPajama(Computer, 2023) as our training data, which is a fully open-source implementation of the LLaMA (Touvron et al., 2023a) training dataset. RedPajama data consists of diverse sources including the Common Crawl (CC), C4, Github, Wikipedia, books, Arxiv, and StackExchange. In our main experiments, we randomly sample 100B tokens from RedPajam and use them to train all of our models. 3.1.2 Model Settings The model architecture follows LLaMA(Touvron et al., 2023a). We use LLaMA2 (Touvron et al., 2023b) tokenizer whose vocabulary size is 32,000. The number of transformer layers is 24 and the hidden dimension is 1024. Each MoE layer has 16 experts. Under this configuration, the dense model has approximately 374M parameters. Each MoE model has 3.5B total parameters. Only 374M parameters are activated in MoE-Top1 and 581M parameters are activated in MoE-Top2. More detailed model and training settings are shown in Appendix A. 3.1.3 Evaluation We use opencompass1 to evaluate our model. Specifically, we adopt a 3-shot evaluation for the BBH dataset and a 0-shot evaluation for the rest. 3.1.4 Experiment Models We train several variants of our architecture from scratch using the above model settings. 1https://github.com/open-compass/OpenCompass/ 12886 Dense(374M) Dense(570M) MoE-Top1 MoE-Top2 MoE-Dynamic PIQA (Bisk et al., 2020) 64.3 65.9 67.3 68.1 68.1 Hellaswag (Zellers et al., 2019) 36.1 39.6 42.3 43.9 44.3 ARC-e (Bhakthavatsalam et al., 2021) 37.9 37.6 39.5 40.4 39.9 Commonsense QA (Talmor et al., 2019) 32.2 31.7 30.3 32.1 33.6 BBH (Suzgun et al., 2023) 22.3 22.1 23.0 23.3 25.6 Average 38.6 39.4 40.5 41.6 42.3 Table 1: Performance on downstream tasks. The best result for each task is emphasized in bold. Dense We use dense models as our baseline. In dense models, each transformer layer is composed of a multi-head attention layer and a standard Feed Forward Network. We implement two Dense models: Dense(374M) and Dense(570M) by setting the hidden dimensions to 1024 and 1280, respectively. MoE-Top1 / Top2 The MoE models with TopK Routing, where K = 1 and 2 respectively. Only language modeling loss, Losslm, and load-balance loss Lossb are used for training. The MoE-Top1 could be seen as a re-implementation of Switch Transformer (Fedus et al., 2022) and the MoE-Top2 is a re-implementation of Gshard(Lepikhin et al., 2021). The activated parameters of MoE-Top1 and MoE-Top2 are nearly the same as Dense(374M) and Dense(570M) respectively. MoE-Dynamic MoE-Dynamic model uses our dynamic routing mechanism, activating a various number of experts depending on the input token representation. The threshold p in our routing mechanism is 0.4. During inference, MoE-Dynamic model activates no more than 2 experts, which means it uses fewer parameters than MoE-Top2. 3.2 Main Results Table 1 shows the performance of different models on downstream tasks. Overall, the MoE models outperform the Dense models. Among all the MoE variants, our proposed Dynamic MoE demonstrates the best performance, achieving at least a 0.7% higher score on average compared to other models. We first compare models with an equal number of activated parameters. 
It is observed that MoETop1 outperforms the Dense model with 374M parameters by an average of 1.9% score, and MoETop2 surpasses the Dense model with 570M parameters by 2.2% score. This indicates that, with the same number of activated parameters, MoE models substantially outshine their Dense counterparts. When comparing models with the same architecture, we generally observe a positive correlation between model performance and the number of activated parameters. For Dense models, the model with 570M parameters outperforms the model with 374M parameters by 0.8% score on average. Similarly, among models using the MoE architecture with a fixed number of activated experts, MoETop2 reaches an average of 41.6% score and outperforms MoE-Top1 by 1.1% score. In fact, MoETop2 performs better than MoE-Top1 in all subtasks, demonstrating the rule of more parameters leading to better performance. However, our proposed Dynamic Routing mechanism breaks this rule. As shown in Table 3, the average number of activated experts in the MoEDynamic during evaluation phases is less than two, meaning it activates fewer parameters than MoETop2. Yet, as shown in Table 1, compared to MoETop2, MoE-Dynamic achieves comparable or even better performance on nearly all the tasks and outperforms MoE-Top2 by 0.7% score on average. MoE-Dynamic obtains better performance, indicating that our dynamic routing mechanism can allocate the necessary experts for different inputs more reasonably and make use of parameters more efficiently. 4 Efficiency of Dynamic Routing The greatest advantage of MoE models is their ability to efficiently scale to larger models. The TopK Routing mechanism controls the number of parameters used by the entire model by activating a fixed number of experts. In contrast, our proposed dynamic routing mechanism removes the limitation of a fixed number of experts. Naturally, there may be concerns that our method might assign too many experts to each token. Another possible concern is that our dynamic routing mechanism might cost too many resources and make the whole model inefficient. To address these concerns, we demonstrate the efficiency of the dynamic routing mechanism from both training and inference perspectives. 12887 20 40 60 80 100 Training Tokens(Billion) 1.8 2.0 2.2 2.4 Activated Experts Figure 2: Average activated experts number across training procedure. 4.1 Efficient Training We sample 1000 pieces of text from different sources within Redpajama and calculate the average number of experts activated per token at different stages of training. Figure 2 shows the change in the average number of activated experts throughout the training process. As shown in the figure, we can observe that the number of experts activated per token decreases over time. In the early stages of training, dynamic routing assigns more experts to each token, but after 60B tokens, the average number of activated experts is already less than 2. Table 2 displays the number of experts activated by MoE-Dynamic at the end of the 100B training. It could be seen that across all data sources, the number of experts activated by MoE-Dynamic is less than 2. Recently, the amount of tokens used in training for large language models far exceeds 100B, for instance, Pythia uses 300B tokens, and Llama2 uses 2T tokens. If we continue to train on an even larger scale corpus, the average number of parameters used throughout the training process is guaranteed to be lower than that of Top2-Routing. 
Sources Ratio Activated Experts CC 67% 1.82 C4 15% 1.84 Github 4.5% 1.88 Wiki 4.5% 1.78 Book 4.5% 1.73 Arxiv 2.5% 1.90 StackExchange 2% 1.79 Avg 100% 1.82 Table 2: Average activated experts in different parts of the training corpus. 4.2 Efficient Inference To further explore whether our proposed method is efficient in inference, we calculate the average Sources Activated Experts PIQA 1.72 Winogrande 1.76 ARC-e 1.73 Commonsense QA 1.74 BBH 1.87 Avg 1.76 Table 3: Average activated experts in different downstream tasks. number of experts activated by the model across different downstream tasks. For every question, we use the template from the evaluation to concatenate the question with the gold answer into a complete input and truncate the tokens exceeding 2048 to fit our model’s maximum input length. Table 3 shows the average number of experts activated per token across various downstream tasks. The result is averaged across all the layers of transformers and it is evaluated using the checkpoint trained on 100B tokens. From the table, we can observe that across all five downstream tasks, the number of activated experts is less than two. The model activates 1.76 experts on average, which is fewer than the fixed activation of two experts by the Top2 Routing method. During the training phase, our method and Top2 Routing are comparable in efficiency, but upon completion of training, our inference efficiency substantially outperforms Top2 Routing. Given that models are mostly trained once and deployment costs are the biggest burden nowadays, the advantages of our method over static MoE routing mechanisms like Top2 become even more apparent. 4.3 Influence of Routing Module is limited Compared with TopK Routing, our proposed method will introduce extra computation during routing. However, the computation cost of the routing module is limited compared with the whole transformer and our proposed method is more efficient than Top2 Routing in reality. Table 4 shows the throughput of MoE-Dynamic and MoE-Top2, both of which are trained with 100B tokens. As shown in the table, MoE-Dynamic is about 5% higher in throughput than MoE-Top2 during both training and inference, effectively validating that our Dynamic Routing is indeed more efficient. 12888 MoE-Dynamic MoE-Top2 Training(K token/s) 98.5 93.9 Inference(Sample/s) 0.19 0.20 Table 4: Throughput of MoE models trained with 100B tokens. Training throughput is tested using 8 * A800 on the training dataset. Inference throughput is tested using a single A800 on BBH dataset. 5 What is Challenging Input? The motivation for designing dynamic routing is to enable the model to adjust the number of allocated experts based on the difficulty of the input. In this section, we will explore what kinds of inputs are considered challenging for the model from various perspectives. 5.1 Tasks Requiring Reasoning From Table 3, we could observe that solving the BBH task requires activating an average of 1.87 experts, more than the number needed for other tasks. BBH, which stands for BIG-Bench Hard, is a suite of 23 challenging BIG-Bench tasks. These tasks demand capabilities such as multi-hop reasoning, causal inference, logical deduction, and so on, making them substantially more difficult than normal NLP tasks (Suzgun et al., 2022). Our model’s use of more experts on BBH tasks implies that our method can dynamically monitor task difficulty and apply more parameters to tackle more challenging tasks. 
Interestingly, as shown in Table 1, MoEDynamic, compared to MoE-Top2, sees the most improvement on BBH tasks. While the average improvement across all tasks is less than 1.0%, the improvement on BBH is more than 2.0%, which is more than double that of other tasks. This further illustrates that dynamically adjusting the number of activated experts is beneficial for solving downstream tasks, especially more challenging ones. Task Activated Experts Tracking shuffled objects —- Object Number K = 3 1.959 —- Object Number K = 5 1.963 —- Object Number K = 7 1.970 Logical deduction —- Object Number K = 3 1.943 —- Object Number K = 5 1.947 —- Object Number K = 7 1.953 Table 5: Average activated experts in the same task with different elements. 5.2 Tasks with More Elements In BBH, there are two sets of tasks, each containing three sub-tasks with the same goal but varying elements. We evaluate whether the number of experts activated for different sub-tasks is proportional to the number or elements. The first task is Tracking shuffled objects: Given the initial positions of a set of K objects and a series of transformations (namely, pairwise swaps) applied to them, determine the final positions of the objects. The sub-tasks vary in the number of objects K each contains. The second task is Logical deduction: Deduce the order of a sequence of K objects based on the clues and information about their spatial relationships and placements. The subtasks also vary in the number of objects K each contains. The table below shows the number of parameters activated by MoE-Dynamic on the corresponding tasks. From the table, we can see that as the number of objects increases, the number of parameters activated by the model also gradually increases. 5.3 Tokens with Uncertain Meaning Examples C-Words Ratio Most Experts tr, eq, mu, frac 10 Least Expers to, that, and, show 51 Table 6: The first column shows examples of tokens requiring the most experts and least experts. The last column shows the complete word ratio in these two groups of tokens. To further analyze what types of tokens are considered more challenging for a model, we examine the average number of experts activated for each token in the vocabulary across different contexts. We sample 1 million tokens from the training dataset Redpajama, resulting in a new corpus of a total of 7 million tokens. In this corpus, we calculate the average number of experts activated for each token in the vocabulary. To minimize the effect of randomness, we only consider tokens that appear no less than 10,000 times in the new corpus. Table 6 shows the number of complete words among the top 100 and bottom 100 tokens by the average number of experts activated, along with some examples. Upon manually reviewing the 100 tokens that activate the most experts and the 100 tokens that activate the least, we observe an interesting phe12889 nomenon: Tokens with relatively definite meaning are considered easier by the model, activating fewer experts. In contrast, tokens with uncertain meanings are deemed more challenging and require more experts for processing. Specifically, since our model’s tokenizer is trained with Byte Pair Encoding (BPE), many tokens are not complete words but subwords. These subwords have vaguer meanings compared to complete words because they can combine with many other subwords to form words with different meanings. 
For example, the subword ’tr’ can lead to the formation of hundreds of words with varied meanings, such as tree, triple, train, trick, trouble, and so on. Due to the multitude of possible meanings, different meanings may require different experts to process, making such subwords require a comprehensive understanding by more experts. 6 Bottom Layers Need More Experts An intriguing observation from our study is that our model achieves superior performance while activating fewer parameters. As shown in Table 3, on all the tasks, our MoE-Dynamic activates an average of fewer than two experts. But it outperforms the MoE-Top2 in downstream tasks as shown in Table 1. This result is quite surprising, as performance on downstream tasks is typically correlated with the quantity of activated parameters. This unexpected phenomenon might be attributed to our method’s more proper allocation of the experts across different layers, employing more experts at the bottom layers and fewer at the top. This layer-wise dynamic allocation, as opposed to the fixed number of experts per layer, somewhat mitigates the common issue of overthinking in deep neural networks, thereby enhancing performance. The overthinking refers to the situations where simpler representations of an input sample at an earlier layer, relative to the complex representations at the final layer, are adequate to make a correct prediction (Kaya et al., 2019). Previous works (Liu et al., 2020; Schwartz et al., 2020; Xin et al., 2021) have demonstrated that shallower representations can achieve comparable, if not better, performance across various tasks than deeper representations. This could be due to deeper representations overfitting specific distributions, lacking generalizability, and being more vulnerable to attacks (Hu et al., 2019; Zhou et al., 2020). It suggests that in some cases, acquiring a better shallow representation is 0 5 10 15 20 Layer Number 1 2 3 4 Activated Experts Figure 3: Activated experts in different layers more valuable than obtaining a more complex deep representation, which correlates to previous findings that removing top layers has a limited impact on the downstream tasks (Sajjad et al., 2023). Compared with Top2 Routing, our Dynamic Routing activates more experts at the bottom layers to obtain better shallow representations and use the simpler network in the top layers to alleviate the overthinking issue. Figure 3 displays the number of experts activated per token at different layers2. From the figure, we observe a gradual decrease in the average number of experts activated per token with the increase in layer depth. The lowest layer activates the most experts, up to 4 experts per token, enabling better shallow representations through a wider network, which is beneficial for various downstream tasks. At the topmost layer, the number of activated experts per token is reduced to even one. This phenomenon can prevent the model from being too complex and preserve generality in the final representation. It should be noted that we do not argue allocating most parameters to the bottom layers is the optimal strategy for a model. It is possible that one model could achieve better performance if more experts are activated in the intermediate layers. In our experiment, we observe the phenomenon that bottom layers need more experts and we think this is quite interesting and worthy of future exploration. 
Beyond the series of overthinking works that we mention above (Kaya et al., 2019; Liu et al., 2020), our finding also correlates with the conclusion that the upper layers of the model are tied to pre-training tasks (Zhang et al., 2020; Tao et al., 2024). Maybe, because the upper layer is specialized for the task of the next token prediction, the topmost layers only activate just one expert. 2The results are evaluated using a checkpoint trained on 100B tokens. 12890 7 Related Work The Mixture of Experts (MoE) model is initially introduced by (Jacobs et al., 1991). Recent studies have demonstrated sparsely gated MoE models have substantial improvements in model capacity and efficiency, enabling superiors performance than dense models(Shazeer et al., 2017). Particularly MoE has shown great potential with the integration of transformer architectures (Zoph et al., 2022; Jiang et al., 2024). In previous MoE architectures, a static number of experts are activated regardless of the varying complexity presented by input tokens. Most of MoE models activate Top1 or Top2 experts (Lepikhin et al., 2021; Fedus et al., 2022). There are many works (Roller et al., 2021) make improvements based on the Top2 Routing (Lepikhin et al., 2021). They make great efforts to distribute tokens among different experts more evenly during the training process and all of them activate static experts for each token. We choose Top2 Routing (Lepikhin et al., 2021) as the baseline because it’s currently the most widely used method and the only one successfully applied in large language models (LLM) thus far. For instance, MixtralMoE (Jiang et al., 2024), DeepSeek-MoE (Dai et al., 2024) and GroK-MoE (X.AI, 2024) all use this structure. There are some work that allocating different numbers of experts to different tokens. ExpertChoice MoE model selects TopK tokens for each expert(Zhou et al., 2022) and each token will be allocated with a different number of experts. However, on average every token activates exactly two experts, which is the same with Top2 Routing models and does not save computation. Different from the previous works, our dynamic routing mechanism can activate fewer parameters on average by dynamically allocating experts to different tokens, which makes our method more efficient. Recent works point out that in language modeling not all tokens require the same calculation cost (Raposo et al., 2024) and tasks across different domains require varying numbers of experts(Lu et al., 2024). Our findings that harder task needs more experts are correlated with this line of work. 8 Conclusion Our paper introduces a dynamic expert selection framework for Mixture of Experts (MoE) models, surpassing existing static TopK Routing by adjusting expert activation based on input complexity. Our approach not only improves computational efficiency but also model performance, evidenced by obvious gains over TopK Routing in our evaluations. Our findings reveal the framework’s effectiveness at dynamically dispatching different numbers of experts, particularly for complex reasoning tasks, and suggest the potential for developing more challenging heterogeneous MoE models. In support of further research, we will open-source our models, contributing to advancements in the MoE domain. Limitation Due to resource constraints, the size of the model we trained is limited, with only about 600M activation parameters, and the entire MoE (Mixture of Experts) model being just over 3B in size. 
However, (Dai et al., 2024) has validated that within the MoE framework, conclusions drawn from smaller models can be generalized to larger models with more parameters. Hence, we believe our proposed dynamic routing method could also be effective in larger-scale models. Additionally, we have only trained on 100B tokens, which may not be sufficient for model training. Yet, given the same scale of training data, our method demonstrated superior performance, which also underscores the efficiency of our training process. Acknowledgments This work is supported in part by NSFC (62161160339) and Beijing Science and Technology Program (Z231100007423011). We thank the anonymous reviewers for their valuable suggestions. For any correspondence, please contact Yansong Feng. References Sumithra Bhakthavatsalam, Daniel Khashabi, Tushar Khot, Bhavana Dalvi Mishra, Kyle Richardson, Ashish Sabharwal, Carissa Schoenick, Oyvind Tafjord, and Peter Clark. 2021. Think you have solved direct-answer question answering? try arc-da, the direct-answer AI2 reasoning challenge. CoRR, abs/2102.03315. Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. 2020. PIQA: reasoning about physical commonsense in natural language. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 12891 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 7432– 7439. AAAI Press. Together Computer. 2023. Redpajama: An open source recipe to reproduce llama training dataset. Damai Dai, Chengqi Deng, Chenggang Zhao, R. X. Xu, Huazuo Gao, Deli Chen, Jiashi Li, Wangding Zeng, Xingkai Yu, Y. Wu, Zhenda Xie, Y. K. Li, Panpan Huang, Fuli Luo, Chong Ruan, Zhifang Sui, and Wenfeng Liang. 2024. Deepseekmoe: Towards ultimate expert specialization in mixture-of-experts language models. CoRR, abs/2401.06066. Nan Du, Yanping Huang, Andrew M. Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, Barret Zoph, Liam Fedus, Maarten P. Bosma, Zongwei Zhou, Tao Wang, Yu Emma Wang, Kellie Webster, Marie Pellat, Kevin Robinson, Kathleen S. MeierHellstern, Toju Duke, Lucas Dixon, Kun Zhang, Quoc V. Le, Yonghui Wu, Zhifeng Chen, and Claire Cui. 2022. Glam: Efficient scaling of language models with mixture-of-experts. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 5547–5569. PMLR. William Fedus, Barret Zoph, and Noam Shazeer. 2022. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. J. Mach. Learn. Res., 23:120:1–120:39. Ting-Kuei Hu, Tianlong Chen, Haotao Wang, and Zhangyang Wang. 2019. Triple wins: Boosting accuracy, robustness and efficiency together by enabling input-adaptive inference. In International Conference on Learning Representations. Robert A. Jacobs, Michael I. Jordan, Steven J. Nowlan, and Geoffrey E. Hinton. 1991. Adaptive mixtures of local experts. Neural Comput., 3(1):79–87. Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al. 2024. Mixtral of experts. arXiv preprint arXiv:2401.04088. Yigitcan Kaya, Sanghyun Hong, and Tudor Dumitras. 2019. 
Shallow-deep networks: Understanding and mitigating network overthinking. In International conference on machine learning, pages 3301–3310. PMLR. Young Jin Kim, Ammar Ahmad Awan, Alexandre Muzio, Andrés Felipe Cruz-Salinas, Liyang Lu, Amr Hendy, Samyam Rajbhandari, Yuxiong He, and Hany Hassan Awadalla. 2021. Scalable and efficient moe training for multitask multilingual models. CoRR, abs/2109.10465. Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. 2021. Gshard: Scaling giant models with conditional computation and automatic sharding. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, and Luke Zettlemoyer. 2021. BASE layers: Simplifying training of large, sparse models. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 6265–6274. PMLR. Weijie Liu, Peng Zhou, Zhiruo Wang, Zhe Zhao, Haotang Deng, and Qi Ju. 2020. Fastbert: a selfdistilling bert with adaptive inference time. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6035– 6044. Xudong Lu, Qi Liu, Yuhui Xu, Aojun Zhou, Siyuan Huang, Bo Zhang, Junchi Yan, and Hongsheng Li. 2024. Not all experts are equal: Efficient expert pruning and skipping for mixture-of-experts large language models. arXiv preprint arXiv:2402.14800. David Raposo, Sam Ritter, Blake Richards, Timothy Lillicrap, Peter Conway Humphreys, and Adam Santoro. 2024. Mixture-of-depths: Dynamically allocating compute in transformer-based language models. arXiv preprint arXiv:2404.02258. Stephen Roller, Sainbayar Sukhbaatar, Arthur Szlam, and Jason Weston. 2021. Hash layers for large sparse models. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 17555–17566. Hassan Sajjad, Fahim Dalvi, Nadir Durrani, and Preslav Nakov. 2023. On the effect of dropping layers of pre-trained transformer models. Computer Speech & Language, 77:101429. Roy Schwartz, Gabriel Stanovsky, Swabha Swayamdipta, Jesse Dodge, and Noah A Smith. 2020. The right tool for the job: Matching model and instance complexities. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6640–6651. Noam Shazeer. 2020. GLU variants improve transformer. CoRR, abs/2002.05202. Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc V. Le, Geoffrey E. Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net. 12892 Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, and Jason Wei. 2023. Challenging big-bench tasks and whether chain-of-thought can solve them. In Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, July 9-14, 2023, pages 13003–13051. Association for Computational Linguistics. Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, et al. 
2022. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. Commonsenseqa: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4149–4158. Association for Computational Linguistics. Mingxu Tao, Quzhe Huang, Kun Xu, Liwei Chen, Yansong Feng, and Dongyan Zhao. 2024. Probing multimodal large language models for global and local semantic representations. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, LREC/COLING 2024, 20-25 May, 2024, Torino, Italy, pages 13050–13056. ELRA and ICCL. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023a. Llama: Open and efficient foundation language models. CoRR, abs/2302.13971. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv e-prints, pages arXiv–2307. X.AI. 2024. Grok-1. https://github.com/xai-org/ grok-1?tab=readme-ov-file. Ji Xin, Raphael Tang, Yaoliang Yu, and Jimmy Lin. 2021. Berxit: Early exiting for bert with better finetuning and extension to regression. In Proceedings of the 16th conference of the European chapter of the association for computational linguistics: Main Volume, pages 91–104. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. Hellaswag: Can a machine really finish your sentence? In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 4791–4800. Association for Computational Linguistics. Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q Weinberger, and Yoav Artzi. 2020. Revisiting few-sample bert fine-tuning. In International Conference on Learning Representations. Wangchunshu Zhou, Canwen Xu, Tao Ge, Julian McAuley, Ke Xu, and Furu Wei. 2020. Bert loses patience: Fast and robust inference with early exit. Advances in Neural Information Processing Systems, 33:18330–18341. Yanqi Zhou, Tao Lei, Hanxiao Liu, Nan Du, Yanping Huang, Vincent Zhao, Andrew M. Dai, Zhifeng Chen, Quoc V. Le, and James Laudon. 2022. Mixture-ofexperts with expert choice routing. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022. Barret Zoph, Irwan Bello, Sameer Kumar, Nan Du, Yanping Huang, Jeff Dean, Noam Shazeer, and William Fedus. 2022. St-moe: Designing stable and transferable sparse expert models. arXiv preprint arXiv:2202.08906. Simiao Zuo, Xiaodong Liu, Jian Jiao, Young Jin Kim, Hany Hassan, Ruofei Zhang, Jianfeng Gao, and Tuo Zhao. 2022. Taming sparsely activated transformer with stochastic experts. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net. 
                      Dynamic          Top1    Top2    Top2-Inference-with-Dynamic
Threshold p           0.4     0.5      -       -       0.1     0.2     0.3     0.4
Activated Experts     1.8     2.3      1.0     2.0     1.1     1.9     2.8     3.9
Avg Performance       42.3    42.8     40.5    41.6    40.0    40.3    42.1    42.8

Table 7: Experiment results for threshold p. The first two columns present results of Dynamic MoE models trained separately with p = 0.4 and p = 0.5. The last four columns report results of applying the dynamic routing mechanism as a post-hoc method to MoE-Top2.

A Detailed Training Setting

A.1 Model Setting

The model architecture follows LLaMA (Touvron et al., 2023a). We use the Llama 2 tokenizer, whose vocabulary size is 32000. Unless specifically stated otherwise, we set the number of transformer layers to 24 and the hidden dimension to 1024. We employ the multi-head attention mechanism with a total of 16 attention heads, where each head has a dimension of 64. We use SwiGLU (Shazeer, 2020) in the FFN layers. For initialization, all learnable parameters are randomly initialized with a standard deviation of 0.006. Each MoE layer has 16 experts, which have the same initialized parameters as a standard FFN. Under this configuration, each dense model has approximately 374M parameters. Each MoE model has 3.5B total parameters. Only 374M parameters are activated in MoE-Top1 and 581M parameters are activated in MoE-Top2.

A.2 Training Setting

We adopt the AdamW optimizer with first-moment decay β1 = 0.9 and second-moment decay β2 = 0.95. The weight decay is 0.1. The learning rate warms up from 0 to 3e-4 in the first 2000 steps and decays to 3e-5 over the remaining steps with a cosine decay schedule. We set the context length to 2048 and adopt a batch size of 2048. We use Megatron-LM as the backbone of our training infrastructure, with a tensor parallelism of 1 and a pipeline parallelism of 2. During training, we use at most 128 NVIDIA A800 GPUs.

B Analysis of Experiment Details

B.1 Influence of Threshold p

The threshold p is a hyper-parameter that controls the dynamic routing mechanism. In this section, we illustrate that the Dynamic Routing method is not very sensitive to the choice of p in either training or inference.

[Figure 4: Average scores of MoE-Dynamic (p = 0.4) with different threshold p at inference on downstream tasks. X-axis: threshold (0.1 to 0.7); y-axis: average score (38 to 42).]

We train Dynamic MoE models with threshold p ∈ {0.4, 0.5}. Experiment results in Table 7 show that both settings outperform MoE-Top2, which indicates that Dynamic MoE is not very sensitive to the choice of p during training. We were not able to try other values of p, as training models from scratch with different values of p is very resource-intensive.

We also evaluate the model with different p during inference. Figure 4 shows the average performance on downstream tasks of MoE-Dynamic trained with p = 0.4 under different inference thresholds. The figure reveals that when p is too low, e.g., 0.1 or 0.2, the model's performance on downstream tasks decreases dramatically because too few experts are activated. Conversely, once p surpasses a certain value, e.g., 0.3, the model's performance stabilizes, and the influence of this parameter on downstream tasks becomes minimal.

B.2 Post-hoc Method for TopK?

Another interesting question is whether we can use a model trained with TopK routing but run inference with Dynamic Routing, which acts as a post-hoc pruning for efficiency. As most public MoE models are trained with TopK routing, we hope to apply our Dynamic Routing to these models directly and improve their efficiency.
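As an illustration of what thresholded dynamic routing can look like at inference time, the sketch below activates, for each token, the smallest set of experts (taken in order of decreasing router probability) whose cumulative probability exceeds p. This is an assumption about the mechanism that is consistent with the behavior reported in Table 7 (larger p activates more experts); it is not the authors' implementation, and the function and variable names are invented for illustration.

```python
import torch

def dynamic_route(router_logits: torch.Tensor, p: float = 0.4):
    """Threshold-based expert selection (illustrative sketch, not the paper's code).

    router_logits: (num_tokens, num_experts) pre-softmax router scores.
    Returns a boolean activation mask and the gated expert weights.
    """
    probs = torch.softmax(router_logits, dim=-1)
    sorted_probs, sorted_idx = probs.sort(dim=-1, descending=True)
    cumulative = sorted_probs.cumsum(dim=-1)

    # Always keep the top-1 expert, then keep adding experts while the probability
    # mass gathered so far is still below the threshold p.
    keep = torch.zeros_like(cumulative, dtype=torch.bool)
    keep[..., 0] = True
    keep[..., 1:] = cumulative[..., :-1] < p

    # Undo the sort so the mask lines up with the original expert indices.
    mask = torch.zeros_like(probs).scatter(-1, sorted_idx, keep.float()).bool()
    return mask, probs * mask
```

Under this reading, mask.sum(-1).float().mean() gives the average number of activated experts per token, which is the quantity reported in Tables 7 and 8.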
Table 7 shows the performance of a model trained with Top2 routing and evaluated with Dynamic Routing at inference. As shown in the table, we can control the number of activated experts by changing the threshold p during inference. We find that with a proper p, e.g., 0.3 or 0.4, inference with Dynamic Routing can outperform inference with Top2 routing, but at the cost of activating more parameters. When using p = 0.2 for inference, the model activates 1.9 experts on average, slightly fewer than MoE-Top2, and at this point its performance is inferior to that of MoE-Top2. This implies that direct inference with Dynamic Routing on the MoE-Top2 model may not work.

B.3 Ablation on Dynamic Loss

We conducted an ablation study using the same experimental settings as MoE-Dynamic. As shown in Table 8, removing the Dynamic Loss leads to a noticeable increase in the number of activated experts and a significant drop in performance on downstream tasks. This finding indicates that the proposed loss function enhances the model's efficiency and effectiveness, likely by compelling the model to allocate the most suitable experts for each token.

                      Activated Experts    Avg Performance
MoE-Dynamic           1.8                  42.8
w/o Dynamic Loss      2.0                  40.0

Table 8: Ablation study on the Dynamic Loss. The first column presents the average number of activated experts per token. The second column reports the models' average performance on the downstream tasks in Table 1.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12896–12911 August 11-16, 2024 ©2024 Association for Computational Linguistics XLAVS-R: Cross-Lingual Audio-Visual Speech Representation Learning for Noise-Robust Speech Perception HyoJung HanA∗Mohamed Anwar∗Juan PinoV Wei-Ning HsuV Marine CarpuatA Bowen ShiV Changhan WangV AUniversity of Maryland, USA V Meta AI, USA [email protected] [email protected] Abstract Speech recognition and translation systems perform poorly on noisy inputs, which are frequent in realistic environments. Augmenting these systems with visual signals has the potential to improve robustness to noise. However, audio-visual (AV) data is only available in limited amounts and for fewer languages than audio-only resources. To address this gap, we present XLAVS-R, a cross-lingual audio-visual speech representation model for noise-robust speech recognition and translation in over 100 languages. It is designed to maximize the benefits of limited multilingual AV pre-training data, by building on top of audio-only multilingual pre-training and simplifying existing pre-training schemes. Extensive evaluation on the MuAViC benchmark shows the strength of XLAVS-R on downstream audio-visual speech recognition and translation tasks, where it outperforms the previous state of the art by up to 18.5% WER and 4.7 BLEU given noisy AV inputs, and enables strong zero-shot audio-visual ability with audio-only fine-tuning. 1 Introduction Speech recognition and speech-to-text translation, two core speech perception tasks, have witnessed rapid developments in the past two years. Still, the performance of state-of-the-art (SOTA) models such as Whisper (Radford et al., 2023) and SeamlessM4T (Communication, 2023b) degrades sharply in noisy environments (Anwar et al., 2023; Communication, 2023a). Audio-only approaches to make such systems more robust (Ng et al., 2023; Zhu et al., 2023) remain challenged by frequent noise types such as intense babble noise and overlapped speech (Shi et al., 2022b). Complementing audio inputs with visual signals such as video frames of speaker’s lips offers an alternative approach (Sumby and Pollack, 1954; Potamianos et al., 2004; Mroueh et al., ∗Work done at Meta AI Hours Languages A AV A AV Audio-Only Pre-Training Data AVFormer (Seo et al., 2023) 60K 0 1⋆ 0 FAVA (May et al., 2023) 2.8K 0 >100 0 Audio-Visual Pre-Training Data AV-HuBERT (Shi et al., 2022a) 0 1.8K 0 1⋆ AV-data2vec (Lian et al., 2024) 0 1.8K 0 1⋆ AV2AV (Choi et al., 2023) 0 7K 0 >100 Audio-Only & Audio-Visual Pre-Training Data u-HuBERT (Hsu and Shi, 2022) 0.5K 1.8K 1⋆ 1⋆ VATLM (Zhu et al., 2024) 4.3K 1.8K 1⋆ 1⋆ XLAVS-R (this work) 436K 1.2K 128 9 XLAVS-R (+ extended AV data) 436K 8.3K 128 100+ Table 1: Pre-training data type, amount and language coverage in audio-visual speech perception models (A: audio, AV: audio-visual). Compared to prior work, XLAVS-R exploits audio-only speech data for efficient data scaling and language coverage expansion. ⋆English-only. 2015; Chung et al., 2017), with promising results in audio-visual speech recognition (AVSR) and audio-visual speech-to-text translation (AVS2TT) tasks (Afouras et al., 2022; Ma et al., 2021; Shi et al., 2022b; Anwar et al., 2023). 
Yet, as summarized in Table 1, prior works are either restricted to English (Hsu and Shi, 2022; Zhu et al., 2024; Seo et al., 2023) or to a limited amount of audiovisual data (Shi et al., 2022a; Choi et al., 2023; Lian et al., 2024), limiting language coverage and performance on downstream tasks. To bridge this gap, we present XLAVS-R, a cross-lingual audio-visual speech representation learning model for noise-robust speech perception across over 100 languages. Its training strategy is designed to maximize the efficiency of multilingual AV pre-training by injecting audio-visual (AV) signals (§3.2) after pre-training on audioonly (A-only) data that is easier to scale in size and language coverage (§3.1). We improve AVHuBERT (Shi et al., 2022a) with simplified training 12896 protocol and updated model architecture, and scale model size up to 2B parameters for performance in the multilingual setting. We conduct extensive evaluations on the MuAViC benchmark (Anwar et al., 2023). XLAVSR yields SOTA performance on speech recognition in 9 languages and speech-to-text translation in 6 X-to-English pairs (§5.1, §5.2), notably improving their robustness to noisy inputs. Furthermore, XLAVS-R enables zero-shot audio-visual ability in downstream tasks with audio-only fine-tuning (FT): fine-tuning our XLAVS-R 2B model without supervised AV downstream data achieves the best audio-visual performances in noisy settings even compared to the corresponding audio-visual supervised fine-tuned model (§5.4). This confirms the benefits of our pre-training strategy and relaxes the data requirements for downstream tasks, making it possible to tackle AVSR and AVSTT tasks without labeled—transcribed or translated—AV data. 2 Related Work Self-supervised audio-only speech representation. Self-supervised learning (SSL) for speech aims to establish a general speech representation with unlabeled speech data applicable to various downstream applications, including speech recognition and spoken language understanding tasks (wen Yang et al., 2021). Many of today’s successful SSL approaches (Baevski et al., 2020; Hsu et al., 2021; Baevski et al., 2022; Chiu et al., 2022) are based on masked span prediction. Specifically, wav2vec 2.0 (Baevski et al., 2020) applies a contrastive predictive coding-like (van den Oord et al., 2018) loss on masked speech utterances. HuBERT (Hsu et al., 2021) utilizes masked speech features to predict hidden units derived from layerwise features. Instead of using discrete units, Data2vec (Baevski et al., 2022) directly regresses dense features from an exponential moving average (EMA) teacher. SSL approaches notably reduce the need for labeled speech data. In ASR benchmarks (Panayotov et al., 2015), they yield performance on par with or better than fully supervised models with far fewer transcriptions. This has motivated sustained efforts toward multilingual SSL models such as XLSR53 (Conneau et al., 2021), XLS-R (Babu et al., 2022), and MMS (Pratap et al., 2023), which particularly excel in low-resource language scenarios. Self-supervised audio-visual speech representation. Audio-visual SSL approaches draw heavy inspiration from their audio-only counterparts. AVHuBERT (Shi et al., 2022a) extends HuBERT (Hsu et al., 2021) to the audio-visual setting by taking the masked audio-visual stream as input and predicting the hidden units initialized with MFCC clusters, iteratively refining them with layerwise features. 
This framework has proven effective for multiple downstream tasks, including lip reading (Shi et al., 2022a), audio-visual speech recognition and translation (Shi et al., 2022b; Anwar et al., 2023). In AV2AV (Choi et al., 2023) and our work of XLAVS-R, it has been extended to a multilingual setting. RAVEn (Haliassos et al., 2023) leverages modality-specific EMA teachers to generate targets for masked prediction, thus avoiding an iterative refinement process. Similarly, AV-data2vec (Lian et al., 2024) is based on data2vec (Baevski et al., 2022) and regresses multimodal features with an audio-visual EMA teacher. AV2vec (Zhang et al., 2023) further combines AV-data2vec with the masked prediction objective in AV-HuBERT. Some prior works also investigated the use of unpaired unimodal data, notably audio, to improve multimodal representation. u-HuBERT (Hsu and Shi, 2022) augments AV-HuBERT with audio-only speech during pretraining, resulting in improved audio-visual speech recognition performance. VATLM (Zhu et al., 2024) further added text-only data and trained the model on an arbitrary modality stream. Our work builds on the insights of AV/u-HuBERT with adaptations for continual SSL training with visual modality injection to A-only model and for the multilingual setting. Audio-visual adaptation of audio-only speech models. Recent work has begun to explore the adaptation of audio-only speech models into audiovisual models. MixSpeech (Cheng et al., 2023) develops a visual speech translation model based on a pre-trained audio-only speech translation model by minimizing the discrepancy between the probabilities from audio-only and multi-modal streams. AVFormer (Seo et al., 2023) incorporates lightweight modules into an audio-only speech recognizer to adapt it into visually grounded speech recognition through two-stage fine-tuning. FAVA (May et al., 2023) directly fine-tunes a pre-trained BESTRQ (Chiu et al., 2022) encoder for audio-visual speech recognition with a randomly initialized visual encoder. FAVA connects to our work by applying this technique to a multilingual audio-only 12897 Transformer Audio-Visual Fusion Masked Prediction Objective C11 C15 C80 C80 C27 C64 Visual Feature Extractor (ResNet) Video frame (lip region) Noised waveform Masked frame Audio Feature Extractor (CNN) Transformer Clean waveform K-means Clustering Audio-only pre-trained model Audio Feature Extractor (CNN) init Figure 1: Overview of XLAVS-R. From the audio-only SSL model, we generate unit targets for audio-visual SSL pre-training (left box). We inject visual modality into the first stage model (blue blocks) and fuse visual modality with audio one (pink blocks) to continue training with audio-visual SSL pre-training. In AV SSL, noises are added randomly to clean audio, and masked prediction objectives (right top) are applied to the union set of masked frames of audio and visual stream (bottom gray). model. However, its evaluation focuses on English with labeled AV speech data for supervised finetuning, while our approach focuses on AV SSL to achieve competitive zero-shot results without AV labeled data as well as state-of-the-art supervised multilingual performance. 3 XLAVS-R Our approach (Figure 1) requires first training an audio-only SSL model, which is used to generate unit targets for audio-visual SSL pre-training. We inject visual modality into the resulting model to continue training with audio-visual SSL pretraining. Through this process, XLAVS-R introduces three key improvements over baseline approaches. 
First, we leverage the abundant availability of audio-only speech data to enhance overall performance and achieve greater robustness to domain shift (§3.1), and we inject the visual modality to build an audio-visual pre-training model on top of the audio-only pre-trained model (§3.2). Second, we use a learnable audio feature extractor to better capture the varied phonetic information of multilingual audio (§3.3). Third, we improve the training efficiency of the audio-visual pre-training stage through single-round training with unit targets derived from audio-only contextualized representations (§3.4).

3.1 Audio-Only Pre-Training

Instead of training an audio-visual model in one step on a mix of audio-only and audio-visual data, we first train an audio-only speech model and adapt it to an audio-visual one via self-supervised continual pre-training with audio-visual data. This is motivated by the facts that audio-only data is much more plentiful than audio-visual data, particularly for low-resource languages, and that audio-only models are much more computationally efficient than audio-visual models of similar architecture and size because of the cost of visual feature extraction and bi-modal feature fusion. We concentrate our computational resources on the first stage of A-only SSL, since audio-only data has greater volume and the audio modality naturally contains richer semantic information than the visual modality. We refer to the outcome of this audio-only self-supervised learning as XLS-R, as we follow the XLS-R training settings (Babu et al., 2022) and adopt its wav2vec 2.0 architecture (Baevski et al., 2020).

3.2 Continued Audio-Visual Pre-Training with Visual Modality Injection

After obtaining XLS-R through A-only pre-training, we add a ResNet-based visual feature extractor (VFE) and a linear projection-based feature fusion module to build an audio-visual model and continue AV pre-training. For audio-visual self-supervised learning, we follow Shi et al. (2022a,b) and use the AV-HuBERT pre-training loss. Given a temporally aligned audio input $I_a \in \mathbb{R}^{T_a \times C_a}$ and video speech input $I_v \in \mathbb{R}^{T_v \times C_v \times W \times H}$, each modality's feature extractor encodes its input into same-length feature sequences $f_a, f_v \in \mathbb{R}^{T \times D}$. Audio and visual frames are masked independently with separate masking probabilities. The frame-wise fused audio-visual features $\mathrm{Fusion}(f_a, f_v) \in \mathbb{R}^{T \times D'}$ are processed by the Transformer (Vaswani et al., 2017) to generate a contextualized audio-visual representation. Target units $z \in \{1, \ldots, K\}^T$ for each frame are assigned by an unsupervised clustering method. Given the output probability $p$ from the fused representation and the target units $z$, the model is trained with the following loss:

$$ L = -\sum_{t \in M} \log p_t(z_t) \;-\; \alpha \sum_{t \notin M} \log p_t(z_t), \qquad (1) $$

where $M$ is the union of the masked frames of the audio and visual streams and $\alpha$ is a weighting factor for the unmasked regions.

As in AV/u-HuBERT, we apply modality dropout to encourage the fusion of the audio and visual representation spaces. In modality fusion, either $f_a$ or $f_v$ can be dropped out with modality dropout probability $p_m$ and filled with zeros. When one modality is dropped, $f_a$ is selected with an audio dropout probability $p_a$, yielding $\mathrm{Fusion}(0, f_v)$.

With the masked prediction objective on audio-only targets, the continued audio-visual SSL training aligns the audio-visual and visual-only representations to the corresponding audio-only representation, which is established in the previous A-only SSL.
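To make Equation (1) and the modality dropout scheme concrete, below is a minimal PyTorch-style sketch. It is an illustrative reading of the objective described above, not the authors' released code; tensor shapes, function names, and the per-utterance (unbatched) handling are assumptions.

```python
import torch
import torch.nn.functional as F

def modality_dropout(f_a, f_v, p_m=0.5, p_a=0.5):
    """With probability p_m, drop one modality (zero out its features);
    if a drop happens, the audio stream f_a is the one dropped with probability p_a."""
    if torch.rand(()) < p_m:
        if torch.rand(()) < p_a:
            f_a = torch.zeros_like(f_a)
        else:
            f_v = torch.zeros_like(f_v)
    return f_a, f_v

def masked_prediction_loss(logits, targets, masked, alpha=1.0):
    """logits: (T, K) frame-level cluster logits from the fused AV encoder.
    targets: (T,) int64 k-means unit ids z_t.
    masked:  (T,) bool, the union M of masked audio and visual frames.
    Implements L = -sum_{t in M} log p_t(z_t) - alpha * sum_{t not in M} log p_t(z_t)."""
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)  # (T,)
    return nll[masked].sum() + alpha * nll[~masked].sum()
```

With $\alpha$ set to 1.0, the value used for the models in §4, masked and unmasked frames contribute equally to the loss.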
Besides the audio-to-visual representation alignment, clean-to-noisy audio representation is also aligned during the continued AV SSL training phase by using potentially noised audio as audio inputs. This is implemented by randomly adding noises to clean speech audio during the training. After self-supervised learning, we fine-tune with sequence-to-sequence cross-entropy loss for downstream tasks. 3.3 Learnable Audio Feature Extractor XLAVS-R has the same model architecture as AV/u-HuBERT except for the nature of the audio features provided as inputs to the Transformer encoder. We jointly train a convolutional audio feature extractor (AFE) as that in wav2vec 2.0 (Baevski et al., 2020) for high-dimensional audio inputs. This differs from AV/u-HuBERT which uses lower-dimensional filterbank features. Our approach provides more capacity for multilingual models and their cross-lingual inferences. 3.4 Single-Round AV Training with Self-Supervised Audio-Only Targets AV-HuBERT (Shi et al., 2022a,b) and u-HuBERT (Hsu and Shi, 2022) require multiple pre-training rounds, with training targets switching from quantized audio-only local features to quantized audiovisual contextualized representation. In each round, the AV model is re-trained from scratch. Hence, the model quality gains across training rounds come from the updated training targets. While these iterative rounds help the model encode more advanced contextual and bi-modal information in the later stage, the resulting training process is complex and computationally expensive. To improve training efficiency, we propose to create first-round training targets from a contextualized audio-only representation instead of local features that have noisier, lower-level information. This accelerates the masked prediction learning of high-level semantic information in the first round and reduces the necessity for additional training rounds. We obtain audio-only contextualized representation from a self-supervised multilingual speech model described in Section 3.1. 4 Experiments Data. For AV pre-training, we leverage a total 1.2K hours of data in 9 languages: English (En), Arabic (Ar), German (De), Greek (El), Spanish (Es), French (Fr), Italian (It), Portuguese (Pt) and Russian (Ru). We also experiment with additional data for domain and language coverage. On top of the above 1.2K hour data, we add 7.1K hours of inhouse AV data in 100+ languages and train XLAVSR with a total of 8.3K hours in 100+ languages, as summarized in the last row of Table 1. The number of hours we use in the training of each experiment setup for 9 language we focus on are shown in Table 6. For A-only pre-training, we follow Babu et al. (2022) for training on 436K-hour data in 128 languages. For all the pre-trained models, we perform multilingual fine-tuning on MuAViC labeled data for audio-visual speech recognition (AVSR) in all 9 languages and audio-visual speech-to-text translation (AVS2TT) in 6 language pairs into English— El, Es, Fr, It, Pt, Ru. We use MuAViC for in-domain evaluation on both audio-only and audio-visual test modes. We use FLEURS (Conneau et al., 2023) for audio-only out-of-domain (OOD) evaluation. We report an average of all 9 languages for AVSR and 6 Xto-English language pairs for AVS2TT in both indomain and out-of-domain evaluation. We remove extremely short utterances (less than 0.2 seconds) and long utterances (more than 20 seconds) for better training stability. Modeling. We build XLS-R (§3.1) and XLAVSR (§3) models in two model sizes: 300M and 2B. 
The number of encoder layers/embedding dimension/feed forward dimension are 24/1024/4096 and 48/1920/7680 respectively. For audio-visual training targets of XLAVS-R, we extract audio-only speech representation from the 36th layer of XLS12899 Model Test Mode OOD In-Domain Avg En Ar De El Es Fr It Pt Ru Avg Clean environment, Test WER (%) ↓ AV-HuBERT (Shi et al., 2022b) A 41.8 2.0 89.6 53.5 47.2 18.2 20.7 21.2 22.9 46.0 35.7 AV 1.7 89.4 52.0 46.2 17.4 20.3 20.8 22.1 44.7 35.0 u-HuBERT (Hsu and Shi, 2022) A 41.4 1.9 89.3 53.3 47.2 17.9 20.7 21.6 22.5 45.8 35.6 AV 1.9 89.3 52.1 46.4 17.3 20.5 21.2 21.9 44.4 35.0 XLS-R 300M A 33.2 2.5 85.6 44.0 34.4 13.2 15.1 14.3 16.2 34.4 28.8 XLAVS-R XLAVS-R 300M A 32.5 2.5 82.4 45.6 24.2 11.3 14.6 12.9 13.7 33.0 26.7 AV 2.4 81.7 44.7 24.3 10.9 14.4 12.8 13.2 32.7 26.3 XLAVS-R 2B A 34.2 1.9 80.3 45.5 19.5 9.4 12.5 10.9 11.6 26.0 24.2 AV 1.7 79.3 44.4 19.0 9.1 12.3 10.6 11.2 25.0 23.6 Noise environment, Test WER (%) ↓ AV-HuBERT (Shi et al., 2022b) A 89.3 39.3 111.2 87.0 85.2 65.6 58.1 69.5 69.9 75.9 73.5 AV 6.4 104.7 74.2 70.4 43.1 43.0 48.2 47.5 67.4 56.1 u-HuBERT (Hsu and Shi, 2022) A 87.5 31.8 109.4 84.7 83.1 62.7 56.6 67.8 68.2 74.4 71.0 AV 6.6 102.3 73.2 69.7 43.7 43.2 48.5 47.6 67.0 55.8 XLS-R 300M A 74.4 43.8 97.3 69.8 74.8 47.6 37.1 47.9 54.4 59.8 59.2 XLAVS-R XLAVS-R 300M A 79.8 42.4 100.1 69.4 56.2 39.9 33.4 40.5 45.4 53.0 53.4 AV 5.8 95.8 61.4 44.7 28.0 27.2 29.4 30.6 47.8 41.2 XLAVS-R 2B A 74.0 49.5 98.9 66.3 50.6 36.0 30.0 36.8 40.6 48.3 50.8 AV 5.9 93.5 58.5 38.6 23.9 23.5 24.6 26.1 41.0 37.3 Table 2: Results of audio-visual speech recognition (AVSR) on in-domain (MuAViC) and out-of-domain (OOD, FLEURS) evaluation in test mode of audio-only (A) and audio-video (AV). Our XLAVS-R model outperforms the two English-only baselines of AV-HuBERT and u-HuBERT (both 300M sized) by up to 18% WER of in-domain average in noisy AV mode. R 2B and quantize it with 2000 k-means clusters. For fine-tuning AVSR and AVS2TT models, we follow Anwar et al. (2023) to add a Transformer decoder that has 6 layers, an embedding dimension of 768, and feed forward network dimension of 3072. The visual feature extractor of our model is modified ResNet-18 as in AV-HuBERT and prior lipreading works (Stafylakis and Tzimiropoulos, 2017; Martinez et al., 2020; Ma et al., 2021). For XLAVS-R, we set both pm and pa to 0.5 for pretraining which is the same with AV/u-HuBERT. For fine-tuning all pre-trained models, we set pm and pa to 0.5 and 0, where the models use 50% AV modality and 50% A-only modality. We set α as 1.0 following the AV-HuBERT models1. Noise Augmentation. Following Shi et al. (2022b) and Anwar et al. (2023), we randomly augment the input samples with additive noises at 0dB of signal-to-noise ratio. Noise audio clips are sampled from the MUSAN dataset (Snyder et al., 2015) as well as drawn from MuAViC train set for babble type. We augment 25% of the input in 1https://facebookresearch.github.io/av_hubert pre-training and 50% in fine-tuning with noises. Evaluation. We evaluate AVSR and AVS2TT in audio-only (“A”) and audio-visual (“AV”) test modes, where the former leverages only audio modality in inference while the latter leverages both audio and visual modalities. We select the best checkpoint by validation accuracy for inference. We use beam search with default hyper-parameters and beam size of 5. For AVSR, we apply Whisper text normalizer (Radford et al., 2023) before calculating WER (word error rate). 
For AVS2TT, we compute BLEU (Papineni et al., 2002) using SacreBLEU (Post, 2018) with default options and the built-in 13a tokenizer. For the simulation of noisy test environment, we add babble type of noises drawn from MuAViC test set. 5 Results 5.1 Multilingual Speech Recognition We compare our XLAVS-R representations against the AV-HuBERT and u-HUBERT English-only representations when fine-tuned for multilingual speech recognition in Table 2, reporting WER on 12900 Model Test Mode OOD In-Domain Avg El-En Es-En Fr-En It-En Pt-En Ru-En Avg Clean environment, Test BLEU ↑ AV-HuBERT (Shi et al., 2022b) A 12.7 13.9 22.3 28.1 23.5 26.1 10.7 20.8 AV 14.3 22.9 28.3 23.9 26.5 11.2 21.2 u-HuBERT (Hsu and Shi, 2022) A 12.8 14.4 23.0 28.6 23.4 26.6 11.3 21.2 AV 14.5 23.1 28.6 23.7 27.0 11.4 21.4 XLAVS-R XLAVS-R 300M A 14.0 18.9 23.7 29.7 24.9 28.5 12.6 23.0 AV 19.1 23.8 29.8 25.0 28.8 13.0 23.2 XLAVS-R 2B A 16.0 21.7 25.0 30.6 26.5 30.2 13.9 24.7 AV 21.6 25.1 30.6 26.6 29.9 13.9 24.6 Noisy environment, Test BLEU ↑ AV-HuBERT (Shi et al., 2022b) A 3.2 4.4 9.1 13.1 8.3 8.8 4.8 8.1 AV 8.8 15.6 19.2 15.0 17.6 7.2 13.9 u-HuBERT (Hsu and Shi, 2022) A 3.4 4.7 9.8 13.8 8.7 10.3 5.3 8.8 AV 8.2 15.6 19.7 14.6 18.3 7.3 14.0 XLAVS-R XLAVS-R 300M A 4.3 8.3 13.9 20.4 15.2 15.1 8.2 13.5 AV 13.2 17.4 23.8 18.7 21.8 9.4 17.4 XLAVS-R 2B A 6.4 11.0 15.1 20.9 16.2 16.0 9.0 14.7 AV 15.7 19.2 24.6 20.1 22.3 10.4 18.7 Table 3: Results of audio-visual speech-to-text translation (AVS2TT) on in-domain (MuAViC) and out-of-domain (OOD, FLEURS) evaluation in test mode of audio-only (A) and audio-video (AV). Our XLAVS-R model outperforms the two English-only baselines of AV-HuBERT and u-HuBERT (both 300M sized) for all language pairs in any mode and by up to 4.8 BLEU of in-domain average in noisy AV mode. both clean and noisy inputs in the upper and lower half of the Table respectively. In the clean setup, the XLAVS-R models perform on par or better than the baselines in both audio-only and audio-visual test modes. The XLAVS-R 300M model outperforms the AVHuBERT and u-HuBERT, English-only AV pretrained models of similar size, by up to 9% WER and 8.7% WER respectively for average audio-only and audio-visual test modes. Our larger XLAVS-R 2B outperforms the English-only baseline by an average 11.5% WER and the average of 11.4% WER respectively for the two modes. XLAVS-R 300M and 2B both outperform the baselines by large margins in every non-English language. English WER slightly increased in XLAVS-R 300M model compared to the English-only pre-trained baseline by 0.5% WER. We attribute this effect to inter-language competition (Conneau et al., 2020; Chang et al., 2023), since XLAVS-R 300M covers 128 languages with the same capacity as the English-only baselines (Table 1). Increasing the XLAVS-R model capacity to 2B achieves the lowest WER of all models in both A and AV modes. The OOD results on FLEURS confirm that our models also maintain reasonable Audio-only performance on other existing benchmarks. In the noisy setup, where we add multilingual babble noises to clean speech inputs, the XLAVS-R models improve performance across the board, in all languages and in both modes. In the AV mode, the average WER for XLAVS-R 2B drops significantly to 37.3%, which remarkably approaches that of the AV/u-HuBERT baseline in clean settings. Overall, XLAVS-R models outperform the two baselines by a large margin on average in clean and noisy settings, while remaining competitive with English-only pre-trained models in English. 
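For reference, the WER and BLEU numbers reported in these tables follow the evaluation protocol of §4. A minimal sketch of that metric computation is shown below; it assumes the jiwer, sacrebleu, and openai-whisper packages, and the choice of the multilingual BasicTextNormalizer is an assumption on our part, since the paper only states that the Whisper text normalizer is applied before computing WER.

```python
import jiwer
import sacrebleu
from whisper.normalizers import BasicTextNormalizer  # from the openai-whisper package

normalize = BasicTextNormalizer()

def avsr_wer(references, hypotheses):
    """Word error rate (%) with Whisper-style text normalization applied first."""
    refs = [normalize(r) for r in references]
    hyps = [normalize(h) for h in hypotheses]
    return 100.0 * jiwer.wer(refs, hyps)

def avs2tt_bleu(references, hypotheses):
    """Corpus BLEU via SacreBLEU with default options; 13a is its built-in default tokenizer."""
    return sacrebleu.corpus_bleu(hypotheses, [references], tokenize="13a").score
```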
Results of experiments with additional AV data for domain and language coverage during AV pretraining are available in Appendix 5.5 and Table 4. We also show that the findings are consistent with different types of noise in Appendix A, Table 7-8. 5.2 Multilingual Speech-To-Text Translation Next, we evaluate models fine-tuned for the X-En AVS2TT tasks (Table 3), reporting BLEU scores in clean (top) vs. noisy settings (bottom). In the clean setup, XLAVS-R 300M outperforms the English-only pre-trained AV/u-HuBERT by up to 2.2 and 2 BLEU average scores for audio12901 OOD En Ar De El Es Fr It Pt Ru Avg 0 20 40 60 80 100 WER (%) Clean environment, Test WER (%) [Test Mode A] AV-HuBERT (MuAViC-En) [Test Mode A] + Single-round w/ SSL units [Test Mode A] + Learnable AFE [Test Mode A] + MuAViC non-English [Test Mode A] + A-only pre-training (XLAVS-R) [Test Mode AV] AV-HuBERT (MuAViC-En) [Test Mode AV] + Single-round w/ SSL units [Test Mode AV] + Learnable AFE [Test Mode AV] + MuAViC non-English [Test Mode AV] + A-only pre-training (XLAVS-R) OOD En Ar De El Es Fr It Pt Ru Avg 0 20 40 60 80 100 120 WER (%) Noisy environment, Test WER (%) Figure 2: Effectiveness of each component towards XLAVS-R and multilingual pre-training data starting from AV-HuBERT model pre-trained only on MuAViC-En. All the components of XLAVS-R are shown to be effective. Each ablated pre-trained models are fine-tuned and evaluated on multilingual audio-visual speech recognition with identical training and test settings (A: audio, AV: audio+video). The numbers of the plots are in Appendix, Table 9. only and audio-visual modes. It consistently outperforms the baseline by large margins in all directions and test modalities, suggesting that multilingual pre-training is essential for cross-lingual ability in the downstream tasks. Our best model, XLAVS-R 2B outperforms the English-only baselines by even wider margins (up to 3.9 and 3.4 BLEU average scores respectively for the two modes). In the noisy setup, we simulate noisy environments as in AVSR tasks by adding multilingual babble noises to clean speech sources. XLAVS-R 300M outperforms the English-only baselines of similar size largely by up to 5.4/3.5 average BLEU improvement on A/AV, and 6.6/4.8 average BLEU for 2B model. Overall, these results show that our XLAVS-R models outperform two baselines by a large margin in all translation directions for both clean and noisy settings, both A-only and AV test modes, and both in-domain and OOD. Results of experiments with additional AV data for domain and language coverage during AV pre-training are available in Appendix 5.5 and Table 5. 5.3 Ablation Experiments of XLAVS-R In Figure 22, we validate the effectiveness of the key changes from AV-HuBERT to XLAVSR, step-by-step starting from AV-HuBERT pretrained on MuAViC-En. First, simplifying the training process – single-round AV-HuBERT training with self-supervised contextualized audio-only units from XLS-R (+ Single-round w/SSL units) instead of the original 5 training rounds – actually yields moderate improvements. Second, switching to a learnable audio feature extractor (+ Learnable AFE) shows the biggest improvements in both clean and noisy settings, especially in the low-resource languages (El and 2Appendix Table 9 contains the exact numbers plotted. 
12902 XLA A-only FT VS-R A-V FT AV-H A-only FT uBERT A-V FT XLA A-only FT VS-R A-V FT AV-H A-only FT uBERT A-V FT Clean environment Noisy environment 0 10 20 30 40 50 60 70 80 WER (%) 26.50 53.30 26.70 53.40 44.70 82.90 35.70 73.50 26.50 48.20 26.30 41.20 44.70 79.80 35.00 56.10 300M models, Test WER (%) [Test Mode A] AV-HuBERT A-only FT [Test Mode A] AV-HuBERT AV FT [Test Mode AV] AV-HuBERT A-only FT [Test Mode AV] AV-HuBERT AV FT (a) In-domain test average of 300M sized model XLA A-only FT VS-R A-V FT XLA A-only FT VS-R A-V FT Clean environment Noisy environment 0 10 20 30 40 50 60 70 80 WER (%) 25.30 47.00 24.20 50.80 25.00 37.20 23.60 37.30 2B model, Test WER (%) [Test Mode A] XLAVS-R A-only FT [Test Mode A] XLAVS-R AV FT [Test Mode AV] XLAVS-R A-only FT [Test Mode AV] XLAVS-R AV FT (b) In-domain test average of 2B sized model Figure 3: XLAVS-R shows greater zero-shot ability on audio-visual test mode with audio-only fine-tuned (FT, striped) model compared to that of AV-HuBERT. Without fine-tuned on the labeled audio-visual set, the A-only FT model from XLAVS-R shows 5% WER improvement on AV test mode compared to A test mode (purple), while that of AV-HuBERT shows only 3% WER (orange) in a noisy environment. The bigger the XLAVS-R model size, the greater the zero-shot ability—the gap of 9.8% WER between A and AV test mode of A-only FT 2B model in a noisy environment). Values of individual languages are in Appendix, Table 10. Ru). Third, augmenting AV pre-training with multilingual data (+ MuAViC non-English) is the second-highest improvement indicating that multilingual audio-visual representation learning is critical in the self-supervised pre-training stage. Last, the introduction of the audioonly pre-training stage and multilingual audioonly resources (+ A-only pre-training (XLAVS-R)) further improves both A and AV performance in the in-domain evaluations. It yields even higher improvements in OOD evaluations, suggesting that audio-only pretraining may enhance robustness against domain shift. In sum, all of the design contributions of XLAVS-R are shown to be effective. 5.4 Zero-shot Audio-Visual Inference We compare the performance of A-only fine-tuning vs. AV fine-tuning: this matters when labels for a given language in downstream task are not available for AV fine-tuning, and thus we need to rely on the zero-shot audio-visual ability from the pretrained AV representation. In A-only FT, we exclude any visual modalities for all languages. For 300M models (Figure 3a), XLAVS-R shows only a mild performance degradation in the noisy AV mode from AV FT to zero-shot A-only FT (41.2 WER to 48.2 WER, 17% increase rate) and almost none in clean. By contrast, the AV-HuBERT baseline shows severe degradation even in the clean setting—56.1 to 79.8, 42.25% in noisy, and 27.7% in the clean setting. The XLAVS-R model thus effectively transfers knowledge from the pre-trained AV representation and leverages it for downstream tasks, even though there is no visual modality involved at all during the supervised fine-tuning. For 2B models (Figure 3b), although AV FT has better recognition performance in a clean setting (23.6 vs 25.0 WER in AV mode), A-only FT model has even better performances over AV FT mode in noisy settings tested on AV mode (37.2 vs 37.3 WER), showing that the increased model capacity improves its zero-shot ability. The result of the scaled-up model suggests that XLAVS-R relaxes the need for labeled AV datasets of downstream tasks to have noise robustness. 
5.5 Results with Additional Training Data We also experiment with additional data for domain and language coverage during the audio-visual selfsupervised learning phase. Table 4 shows AVSR performances of 9 languages on OOD and in-domain test sets. On the indomain test set, the model pre-trained on a scaledup AV training set shows improved AV average performances of 0.2% WER for both clean and noisy environments. In contrast to the slight improvement or even the degradation for noisy A-only 12903 Model Test Mode OOD In-Domain Avg En Ar De El Es Fr It Pt Ru Avg Clean environment, Test WER (%) ↓ XLAVS-R 300M Basic AV data: 1.2K hours A 32.5 2.5 82.4 45.6 24.2 11.3 14.6 12.9 13.7 33.0 26.7 in 9 languages AV 2.4 81.7 44.7 24.3 10.9 14.4 12.8 13.2 32.7 26.3 Extended AV data: 8.3K hours A 28.3 2.2 80.2 38.7 28.7 11.9 15.5 14.1 14.9 32.1 26.5 in 100+ languages AV 2.1 80.0 38.0 28.1 11.7 15.3 13.8 14.4 31.2 26.1 Noise environment, Test WER (%) ↓ XLAVS-R 300M Basic AV data: 1.2K hours A 79.8 42.4 100.1 69.4 56.2 39.9 33.4 40.5 45.4 53.0 53.4 in 9 languages AV 5.8 95.8 61.4 44.7 28.0 27.2 29.4 30.6 47.8 41.2 Extended AV data: 8.3K hours A 73.7 44.0 97.2 62.4 60.7 41.0 36.2 44.5 48.0 51.0 53.9 in 100+ languages AV 5.3 91.9 53.5 49.6 28.8 29.3 32.2 32.5 46.1 41.0 Table 4: AVSR comparison of XLAVS-R 300M models trained on different sizes of the AV pre-training dataset. Model Test Mode OOD In-Domain Avg El-En Es-En Fr-En It-En Pt-En Ru-En Avg Clean environment, Test BLEU ↑ XLAVS-R 300M Basic AV data: 1.2K hours A 14.0 18.9 23.7 29.7 24.9 28.5 12.6 23.0 in 9 languages AV 19.1 23.8 29.8 25.0 28.8 13.0 23.2 Extended AV data: 8.3K hours A 14.4 18.5 23.6 29.6 25.1 28.6 12.0 22.9 in 100+ languages AV 18.3 23.9 29.8 25.1 28.9 12.1 23.0 Noisy environment, Test BLEU ↑ XLAVS-R 300M Basic AV data: 1.2K hours A 4.3 8.3 13.9 20.4 15.2 15.1 8.2 13.5 in 9 languages AV 13.2 17.4 23.8 18.7 21.8 9.4 17.4 Extended AV data: 8.3K hours A 5.8 11.1 14.8 20.0 15.4 15.4 8.5 14.2 in 100+ languages AV 13.8 18.4 23.7 19.6 21.7 9.8 17.8 Table 5: AVS2TT comparison of XLAVS-R 300M models trained on different sizes of the AV pre-training dataset. mode in the in-domain, the 8.3K model shows significant improvements in OOD for both clean and noisy settings by up to 6.1% WER on average. Table 5 shows AVS2TT performances of six Xto-En pairs on OOD and in-domain test sets, where the trend of the prominent improvement in OOD is similar to that of AVSR comparison. Additional AV SSL dataset improves the average performance of in-domain noisy setting by 0.7 BLEU for A-only test mode and 0.4 BLEU for AV test mode, while slightly degrades in clean settings by 0.1 BLEU in A-only test mode and 0.2 in AV test mode on average. In out-of-domain evaluation, augmented data in AV SSL enhances the translation quality by up to 1.5 BLEU on average in noisy environments. Overall, scaling up audio-visual pre-training data enhances robustness against domain shift while also remains effective for noisy robustness with audio-visual input in the in-domain. 6 Conclusion We introduced XLAVS-R, a cross-lingual audiovisual speech representation for noise-robust speech perception in over 100 languages. Since audio-visual data is harder to come by than audioonly data, we injected the visual modality by continuing training an audio-only pre-trained model. We also design a simpler yet more effective training scheme and improved architecture compared to previous state-of-the-art models. 
Extensive evaluation on the MuAViC benchmark shows XLAVS-R achieves SOTA performance on both audio-visual speech recognition and translation tasks, yielding particularly large improvements in noisy settings. We also show that XLAVS-R effectively leverages unlabeled audio-only multilingual speech data and audio-visual data in self-supervised learning resulting in enhanced zero-shot audio-visual ability for downstream tasks, and extended AV pre-training data augments robustness against domain shift. 12904 Limitations Our findings are inherently limited by the settings of our empirical evaluation. For instance, we simulate noisy environments only with the “babble” sound in testing experimental setup (§4), and it remains to be seen how other types of noise might impact our model. In addition, while we include a diverse set of languages with varying amounts of resources in our evaluation, we did not evaluate translation out-of-English and between nonEnglish languages. References Triantafyllos Afouras, Joon Son Chung, Andrew Senior, Oriol Vinyals, and Andrew Zisserman. 2022. Deep audio-visual speech recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(12):8717–8727. Mohamed Anwar, Bowen Shi, Vedanuj Goswami, WeiNing Hsu, Juan Pino, and Changhan Wang. 2023. MuAViC: A Multilingual Audio-Visual Corpus for Robust Speech Recognition and Robust Speech-toText Translation. In Proc. INTERSPEECH 2023, pages 4064–4068. Arun Babu, Changhan Wang, Andros Tjandra, Kushal Lakhotia, Qiantong Xu, Naman Goyal, Kritika Singh, Patrick von Platen, Yatharth Saraf, Juan Pino, Alexei Baevski, Alexis Conneau, and Michael Auli. 2022. XLS-R: Self-supervised Cross-lingual Speech Representation Learning at Scale. In Proc. Interspeech 2022, pages 2278–2282. Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, and Michael Auli. 2022. data2vec: A general framework for self-supervised learning in speech, vision and language. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 1298–1312. PMLR. Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. In Advances in Neural Information Processing Systems, volume 33, pages 12449–12460. Curran Associates, Inc. Tyler A. Chang, Catherine Arnett, Zhuowen Tu, and Benjamin K. Bergen. 2023. When is multilinguality a curse? language modeling for 250 high- and lowresource languages. X. Cheng, T. Jin, R. Huang, L. Li, W. Lin, Z. Wang, Y. Wang, H. Liu, A. Yin, and Z. Zhao. 2023. Mixspeech: Cross-modality self-learning with audiovisual stream mixup for visual speech translation and recognition. In 2023 IEEE/CVF International Conference on Computer Vision (ICCV), pages 15689– 15699, Los Alamitos, CA, USA. IEEE Computer Society. Chung-Cheng Chiu, James Qin, Yu Zhang, Jiahui Yu, and Yonghui Wu. 2022. Self-supervised learning with random-projection quantizer for speech recognition. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 3915–3924. PMLR. Jeongsoo Choi, Se Jin Park, Minsu Kim, and Yong Man Ro. 2023. Av2av: Direct audio-visual speech to audio-visual speech translation with unified audiovisual speech representation. Joon Son Chung, Andrew Senior, Oriol Vinyals, and Andrew Zisserman. 2017. Lip reading sentences in the wild. 
In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. Seamless Communication. 2023a. Seamless: Multilingual expressive and streaming speech translation. Seamless Communication. 2023b. Seamlessm4t: Massively multilingual & multimodal machine translation. Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, and Michael Auli. 2021. Unsupervised Cross-Lingual Representation Learning for Speech Recognition. In Proc. Interspeech 2021, pages 2426–2430. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440– 8451, Online. Association for Computational Linguistics. Alexis Conneau, Min Ma, Simran Khanuja, Yu Zhang, Vera Axelrod, Siddharth Dalmia, Jason Riesa, Clara Rivera, and Ankur Bapna. 2023. Fleurs: Few-shot learning evaluation of universal representations of speech. In 2022 IEEE Spoken Language Technology Workshop (SLT), pages 798–805. Alexandros Haliassos, Pingchuan Ma, Rodrigo Mira, Stavros Petridis, and Maja Pantic. 2023. Jointly learning visual and auditory speech representations from raw data. In The Eleventh International Conference on Learning Representations. Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. 2021. Hubert: Self-supervised speech representation learning by masked prediction of hidden units. IEEE/ACM Trans. Audio, Speech and Lang. Proc., 29:3451–3460. 12905 Wei-Ning Hsu and Bowen Shi. 2022. u-huBERT: Unified mixed-modal speech pretraining and zero-shot transfer to unlabeled modality. In Advances in Neural Information Processing Systems. Jiachen Lian, Alexei Baevski, Wei-Ning Hsu, and Michael Auli. 2024. Av-data2vec: Self-supervised learning of audio-visual speech representations with contextualized target representations. Pingchuan Ma, Stavros Petridis, and Maja Pantic. 2021. End-to-end audio-visual speech recognition with conformers. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7613–7617. Brais Martinez, Pingchuan Ma, Stavros Petridis, and Maja Pantic. 2020. Lipreading using temporal convolutional networks. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6319–6323. Avner May, Dmitriy Serdyuk, Ankit Parag Shah, Otavio Braga, and Olivier Siohan. 2023. Audio-visual finetuning of audio-only asr models. Youssef Mroueh, Etienne Marcheret, and Vaibhava Goel. 2015. Deep multimodal learning for audio-visual speech recognition. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2130–2134. Dianwen Ng, Ruixiong Zhang, Jia Qi Yip, Zhao Yang, Jinjie Ni, Chong Zhang, Yukun Ma, Chongjia Ni, Eng Siong Chng, and Binchao Ma. 2023. De’hubert: Disentangling noise in a self-supervised model for robust speech recognition. ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5. Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: An asr corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5206–5210. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 
2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191. Gerasimos Potamianos, Chalapathy Neti, Juergen Luettin, and Iain Matthews. 2004. Audio-visual automatic speech recognition: An overview. Issues in visual and audio-visual speech processing, 22:23. Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, and Michael Auli. 2023. Scaling speech technology to 1,000+ languages. Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2023. Robust speech recognition via large-scale weak supervision. In Proceedings of the 40th International Conference on Machine Learning, ICML’23. JMLR.org. Paul Hongsuck Seo, Arsha Nagrani, and Cordelia Schmid. 2023. Avformer: Injecting vision into frozen speech models for zero-shot av-asr. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22922–22931. Bowen Shi, Wei-Ning Hsu, Kushal Lakhotia, and Abdelrahman Mohamed. 2022a. Learning audio-visual speech representation by masked multimodal cluster prediction. In International Conference on Learning Representations. Bowen Shi, Wei-Ning Hsu, and Abdelrahman Mohamed. 2022b. Robust Self-Supervised Audio-Visual Speech Recognition. In Proc. Interspeech 2022, pages 2118–2122. David Snyder, Guoguo Chen, and Daniel Povey. 2015. Musan: A music, speech, and noise corpus. Themos Stafylakis and Georgios Tzimiropoulos. 2017. Combining residual networks with lstms for lipreading. William H Sumby and Irwin Pollack. 1954. Visual contribution to speech intelligibility in noise. The journal of the acoustical society of america, 26(2):212– 215. Aäron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. ArXiv, abs/1807.03748. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc. Shu wen Yang, Po-Han Chi, Yung-Sung Chuang, ChengI Lai, Kushal Lakhotia, Yist Y. Lin, Andy T. Liu, Jiatong Shi, Xuankai Chang, Guan-Ting Lin, Tzu hsien Huang, Wei-Cheng Tseng, Ko tik Lee, DaRong Liu, Zili Huang, Shuyan Dong, Shang-Wen Li, Shinji Watanabe, Abdel rahman Mohamed, and Hung yi Lee. 2021. Superb: Speech processing universal performance benchmark. In Interspeech. Jing-Xuan Zhang, Genshun Wan, Zhen-Hua Ling, Jia Pan, Jianqing Gao, and Cong Liu. 2023. Selfsupervised audio-visual speech representations learning by multimodal self-distillation. In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5. 12906 Qiu-Shi Zhu, Long Zhou, Jie Zhang, Shu-Jie Liu, YuChen Hu, and Li-Rong Dai. 2023. Robust data2vec: Noise-robust speech representation learning for asr by combining regression and improved contrastive learning. In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5. 
Qiushi Zhu, Long Zhou, Ziqiang Zhang, Shujie Liu, Binxing Jiao, Jie Zhang, Lirong Dai, Daxin Jiang, Jinyu Li, and Furu Wei. 2024. Vatlm: Visual-audiotext pre-training with unified masked prediction for speech representation learning. IEEE Transactions on Multimedia, page 1–11. 12907 A Generalization to Different Types of Noise As we mentioned in the Limitation section, the evaluation phase in the main tables is conducted on a single type of synthetic noise, Babble. To confirm the general effectiveness of XLAVS-R on noise robustness, we further experiment AVSR with different types of noise, Music and Noise from MUSAN dataset following Shi et al. (2022b). Table 7 and 8 show the results in average and language-wise WER of each models on different types of noise. Overall, the noise robustness of Audio-Visual speech processing still holds with different kinds of noises. All AV models manage to get a higher quality of speech recognition with visual information than A-only mode in all new types of noise. Babble turns out to be the toughest noisy environment as shown in the above table, while Music and Noise are even able to achieve scores that are close to the clean environment with AV mode, which is consistent with Shi et al. (2022b). B Future Works: Visual-only Experiment The main focus of this research is on having the best noise-robust speech recognition and translation performance by leveraging stronger audio components and visual information, rather than visualonly input. Therefore, the presented AV models may not provide significant visual-only speech recognition capabilities, especially in non-English languages. The results of visual-only experiment in English are 52.4 WER for AV-HuBERT and 55.9 WER for u-HuBERT, and non-English results are around 90-100 WER. We plan to conduct comprehensive VSR experiment with XLAVS-R and further explore how to develop zero-shot visual speech recognition (VSR) ability while maintaining state-of-the-art audio-visual performances as a future work. 12908 Hours En Ar De El Es Fr It Pt Ru Default 437 19 13 29 181 179 104 156 52 Extended 3378 142 381 40 465 444 228 563 346 Table 6: Statistics of training datasets in hours. Noise Type Clean Babble Music Noise Mode A AV A AV A AV A AV AV-HuBERT 35.7 35.0 73.5 56.1 45.8 41.3 45.8 41.3 u-HuBERT 35.6 35.0 71.0 55.8 45.1 41.0 45.4 41.0 XLAVS-R 300M 26.7 26.3 53.4 41.2 33.0 30.6 33.1 30.5 XLAVS-R 300M (extended) 26.5 26.1 53.9 41.0 32.7 30.1 33.0 30.3 XLAVS-R 2B 24.2 23.6 50.8 37.3 30.4 27.8 30.5 27.7 Table 7: Average WER of AVSR with different types of noise of Music and Noise from MUSAN dataset. The noise robustness of XLAVS-R is still the most effective with different kinds of noises. 
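The noisy conditions compared in the appendix tables above and below (babble, music, and general noise mixed in at 0 dB SNR, per §4) rest on additive noise mixing. A minimal sketch of such mixing follows; the function name, tensor shapes, and the clip-tiling strategy are illustrative assumptions rather than the paper's implementation.

```python
import torch

def add_noise(speech, noise, snr_db=0.0, prob=0.25):
    """With probability `prob`, mix a noise clip into the clean 1-D waveform
    at the given signal-to-noise ratio (0 dB in pre-training per Section 4)."""
    if torch.rand(()) >= prob:
        return speech
    # Tile or trim the noise clip to match the speech length.
    if noise.numel() < speech.numel():
        reps = -(-speech.numel() // noise.numel())  # ceiling division
        noise = noise.repeat(reps)
    noise = noise[: speech.numel()]
    speech_power = speech.pow(2).mean()
    noise_power = noise.pow(2).mean().clamp_min(1e-10)
    # Scale the noise so that 10*log10(speech_power / scaled_noise_power) == snr_db.
    scale = torch.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise
```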
Model Noise Type En Ar De El Es Fr It Pt Ru Avg AV-HuBERT Noisy-A (Music) 6.7 97.3 65.0 57.8 30.1 29.7 34.4 35.8 55.8 45.8 Noisy-AV (Music) 2.9 94.1 59.8 53.4 24.5 26.1 28.8 29.8 51.9 41.3 Noisy-A (Noise) 7.3 96.1 65.5 58.4 29.2 29.4 34.0 36.3 56.3 45.8 Noisy-AV (Noise) 3.2 94.3 59.7 53.6 24.2 26.5 28.1 30.0 52.4 41.3 u-HuBERT Noisy-A (Music) 6.3 96.9 63.5 57.2 29.2 29.2 33.5 34.6 55.1 45.1 Noisy-AV (Music) 3.2 94.3 58.7 53.1 24.6 26.2 28.3 28.9 51.7 41.0 Noisy-A (Noise) 6.1 96.7 64.5 58.0 28.2 30.0 33.8 35.3 56.2 45.4 Noisy-AV (Noise) 2.9 93.5 59.1 53.9 23.8 26.6 28.1 29.2 52.0 41.0 XLAVS-R 300M Noisy-A (Music) 5.3 87.7 53.3 33.6 17.1 19.3 19.6 21.7 39.5 33.0 Noisy-AV (Music) 3.2 85.8 50.3 30.6 15.2 17.7 16.8 18.3 37.4 30.6 Noisy-A (Noise) 6.0 87.1 54.0 33.2 17.2 19.3 19.3 21.8 40.3 33.1 Noisy-AV (Noise) 3.5 84.8 50.7 30.0 14.8 17.9 16.4 18.1 38.0 30.5 XLAVS-R 300M (extended) Noisy-A (Music) 4.9 85.0 45.5 37.4 18.4 20.4 21.7 22.9 38.1 32.7 Noisy-AV (Music) 2.9 83.2 42.4 34.1 16.0 18.8 18.6 19.3 36.0 30.1 Noisy-A (Noise) 5.5 85.2 46.2 37.4 18.4 20.9 21.2 23.9 38.5 33.0 Noisy-AV (Noise) 2.9 83.1 42.8 34.2 15.7 19.3 17.9 20.0 36.5 30.3 XLAVS-R 2B Noisy-A (Music) 4.9 86.5 52.5 28.1 15.7 16.3 17.0 19.0 33.9 30.4 Noisy-AV (Music) 2.4 83.8 50.1 24.8 13.3 15.2 14.2 15.6 31.0 27.8 Noisy-A (Noise) 5.4 86.2 52.6 27.5 15.2 16.9 17.0 19.2 34.2 30.5 Noisy-AV (Noise) 2.5 83.0 49.3 24.9 12.7 15.2 14.2 15.6 31.9 27.7 Table 8: Language-wise WER of AVSR with different types of noise of Music and Noise from MUSAN dataset. The noise robustness of XLAVS-R is still the most effective with different kinds of noises. 12909 Model Test Mode OOD In-Domain Avg En Ar De El Es Fr It Pt Ru Avg Clean environment, Test WER (%) ↓ AV-HuBERT (MuAViC-En) A 45.5 4.3 92.3 56.4 48.9 19.4 22.2 23.5 25.0 48.3 37.8 AV 2.2 91.1 54.7 47.7 18.6 21.6 22.6 23.8 46.5 36.5 + Single-round w/ SSL units A 44.6 3.9 89.7 56.4 48.7 20.1 23.0 24.4 25.9 48.3 37.8 AV 2.6 89.3 54.4 47.5 18.7 22.5 23.7 24.6 46.3 36.6 + Learned AFE A 36.5 4.6 90.8 52.4 31.6 15.7 18.2 18.5 18.8 41.7 32.5 AV 2.3 84.4 48.1 29.7 13.7 16.6 16.5 17.1 36.6 29.4 + MuAViC non-English A 35.1 3.1 83.7 50.0 28.1 12.7 16.9 15.9 16.6 37.1 29.3 AV 2.8 82.2 48.4 27.0 12.1 16.6 15.2 15.6 35.1 28.3 + A-only pre-training (XLAVS-R) A 32.5 2.5 82.4 45.6 24.2 11.3 14.6 12.9 13.7 33.0 26.7 AV 2.4 81.7 44.7 24.3 10.9 14.4 12.8 13.2 32.7 26.3 Noisy environment, Test WER (%) ↓ AV-HuBERT (MuAViC-En) A 100.2 85.3 113.9 95.0 89.9 74.8 67.1 78.5 79.2 82.3 85.1 AV 9.5 107.3 79.5 74.3 49.4 48.0 54.1 52.5 71.5 60.7 + Single-round w/ SSL units A 93.8 67.1 109.1 90.3 86.1 68.4 63.6 74.3 73.7 79.1 79.1 AV 9.0 102.2 76.0 71.5 46.5 46.6 51.9 50.8 69.7 58.2 + Learned AFE A 102.4 183.6 119.5 85.2 77.4 58.1 51.1 63.2 63.5 76.9 86.5 AV 6.5 99.5 66.8 54.6 36.1 34.7 39.7 40.2 56.6 48.3 + MuAViC non-English A 88.4 100.7 104.3 76.2 63.6 46.9 41.3 50.3 56.5 60.7 66.7 AV 6.3 94.3 63.7 46.3 28.3 29.4 31.0 32.9 50.3 42.5 + A-only pre-training (XLAVS-R) A 79.8 2.4 100.1 69.4 56.2 39.9 33.4 40.5 45.4 53.0 53.4 AV 5.8 95.8 61.4 44.7 28.0 27.2 29.4 30.6 47.8 41.2 Table 9: This is a table containing all the values of the plots in Section 5.3, Figure 2. Effectiveness of each component towards XLAVS-R and multilingual pre-training data starting from AV-HuBERT model pre-trained only on MuAViC-En. All the components of XLAVS-R are shown to be effective. 
Each ablated pre-trained models are fine-tuned and evaluated on multilingual audio-visual speech recognition with identical training and test settings (A: audio, AV: audio+video). 12910 Model Test Mode OOD In-Domain Avg En Ar De El Es Fr It Pt Ru Avg Clean environment, Test WER (%) ↓ 300M model XLAVS-R A 30.3 2.2 81.7 45.6 23.6 11.2 14.8 12.7 13.5 32.9 26.5 (A-only fine-tuning) AV 2.2 82.1 45.2 23.6 11.0 14.7 12.8 13.5 33.0 26.5 XLAVS-R A 32.5 2.5 82.4 45.6 24.2 11.3 14.6 12.9 13.7 33.0 26.7 (AV fine-tuning) AV 2.4 81.7 44.7 24.3 10.9 14.4 12.8 13.2 32.7 26.3 2B model XLAVS-R A 35.5 2.2 83.3 49.0 22.3 9.6 13.2 10.5 11.2 26.6 25.3 (A-only fine-tuning) AV 2.0 83.7 48.2 21.6 9.4 13.2 10.4 10.9 26.0 25.0 XLAVS-R A 34.2 1.9 80.3 45.5 19.5 9.4 12.5 10.9 11.6 26.0 24.2 (AV fine-tuning) AV 1.7 79.3 44.4 19.0 9.1 12.3 10.6 11.2 25.0 23.6 Noisy environment, Test WER (%) ↓ 300M model XLAVS-R A 78.1 45.1 102.2 68.1 55.1 39.2 33.2 39.9 44.9 52.3 53.3 (A-only fine-tuning) AV 13.8 101.4 66.5 52.7 37.1 31.5 38.3 40.9 51.9 48.2 XLAVS-R A 79.8 42.4 100.1 69.4 56.2 39.9 33.4 40.5 45.4 53.0 53.4 (AV fine-tuning) AV 5.8 95.8 61.4 44.7 28.0 27.2 29.4 30.6 47.8 41.2 2B model XLAVS-R A 74.3 34.2 100.9 65.1 52.6 31.8 26.7 31.7 35.4 44.8 47.0 (A-only fine-tuning) AV 4.3 96.3 60.3 41.6 23.0 22.1 22.9 24.7 39.3 37.2 XLAVS-R A 74.0 49.5 98.9 66.3 50.6 36.0 30.0 36.8 40.6 48.3 50.8 (AV fine-tuning) AV 5.9 93.5 58.5 38.6 23.9 23.5 24.6 26.1 41.0 37.3 Table 10: This is a table containing all the values of the summarized plots in Section 5.4, Figure 3. XLAVS-R shows greater zero-shot ability on audio-visual test mode with audio-only fine-tuned (FT) model compared to that of AV-HuBERT. Without fine-tuned on the labeled audio-visual set, the A-only FT model from XLAVS-R shows 5% WER improvement on AV test mode compared to A test mode (purple), while that of AV-HuBERT show only 3% WER (orange) in noisy environment. The bigger the XLAVS-R model size, the greater the zero-shot ability—the gap of 9.8% WER between A and AV test mode of A-only FT 2B model in a noisy environment). 12911
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12912–12940 August 11-16, 2024 ©2024 Association for Computational Linguistics
SOTOPIA-π: Interactive Learning of Socially Intelligent Language Agents
Ruiyi Wang* Haofei Yu* Wenxin Zhang* Zhengyang Qi* Maarten Sap Graham Neubig Yonatan Bisk Hao Zhu
Language Technologies Institute, Carnegie Mellon University
Code Data Checkpoints: https://pi.sotopia.world
*Leading authors. Individual contributions: §G.
Abstract
Humans learn social skills through both imitation and social interaction. This social learning process is largely understudied by existing research on building language agents. Motivated by this gap, we propose an interactive learning method, SOTOPIA-π, improving the social intelligence of language agents. This method leverages behavior cloning and self-reinforcement training on filtered social interaction data according to large language model (LLM) ratings. We show that our training method allows a 7B LLM to reach the social goal completion ability of an expert model (GPT-4-based agent), while improving the safety of language agents and maintaining general QA ability on the MMLU benchmark. We also find that this training paradigm uncovers some difficulties in LLM-based evaluation of social intelligence: LLM-based evaluators overestimate the abilities of the language agents trained specifically for social interaction.
1 Introduction
Machine social intelligence is crucial to productive human-machine interaction (Gweon et al., 2023). For instance, to achieve real-time social interactions with users, virtual agents should not only emulate human verbal and non-verbal social behaviors but also manage social skills such as cooperation and negotiation. However, the social intelligence of large language models (LLMs) still lags behind humans in various aspects, including theory-of-mind (Sap et al., 2023; Ullman, 2023; Shapira et al., 2023), following social norms (Weidinger et al., 2021), and navigating diverse goal-driven social scenarios (Zhou et al., 2024). This underscores the challenge to bridge the gap and empower LLM agents to navigate social situations with human-like social decision-making abilities and values.
[Figure 1 illustrates the pipeline with a generated training task ("Two friends on a road trip entering a remote area without food access"; sampled topic: travel without food), an evaluation task scenario ("Two friends camping in the wilderness"), and its four stages: (1) Social task generation, (2) Training data collection, (3) Agent policy update, and (4) Evaluation on SOTOPIA tasks, involving the expert policy πexpert, the agent policy πagent, and a partner policy πpartner.]
Figure 1: We propose SOTOPIA-π, which (1) automatically generates new social tasks, (2) collects data from both expert policy and agent policy for training, and (3) updates agent policy based on positive data rated by GPT-4. We implement (4) human and GPT-4 evaluation on our trained agent performing tasks in SOTOPIA with the partner agent. Our training paradigms include behavior cloning and self-reinforcement. For evaluation, we use SOTOPIA-EVAL and a fixed partner policy (GPT-3.5-based). Note that the character profiles are omitted and the examples are shortened for demonstration.
Inspired by the way that humans acquire these social abilities through exploration, interaction, and self-reinforcement (Tomasello, 2021; Gweon, 2021), we propose an interactive learning method, SOTOPIA-π (Figure 1), which improves the social intelligence of language agents through social interactions (e.g., the conversation between a seller and a buyer on Craigslist). In SOTOPIA-π, we use GPT-4 (OpenAI, 2023) to automatically synthesize new social tasks to learn transferable social strategies, similar to open-ended learning (OEL Team et al., 2021) (Step 1). To simulate the social interaction within a diverse set of agents, we collect interaction data between the agents and an expert policy (GPT-4-based) or between two instances of the agent policy that role-play two sampled characters (Step 2). To reinforce the positive examples in social interaction, we use GPT-4 to provide ratings of how well the agent is able to achieve its goals and filter the interaction data based on a threshold for this score. Then we update the agent policy with either or both of two paradigms: behavior cloning (learning from behaviors of an expert model with strong social skills) and self-reinforcement (learning from highly-rated behaviors of the model itself) (Step 3). We evaluate our method with human and GPT-4-based evaluation on the trained agent models in the SOTOPIA (Zhou et al., 2024) environment (§2.1).
The closest to our work is Stable Alignment (Liu et al., 2024), which studies social alignment in single-turn question-answering tasks. In contrast, SOTOPIA-π improves multi-turn interaction capability under realistic social scenarios beyond verbal communication. §6 shows that our method, despite not being explicitly designed to improve alignment, trains models to behave more safely and generate fewer toxic responses. Without requiring human involvement or an online reward model (Ziegler et al., 2020; Ouyang et al., 2022), our method is efficient and scalable because it (1) gathers offline social interaction data with LLMs and (2) enables language agents to explore and reinforce the social knowledge of themselves and expert models.
Using our method to train socially intelligent agents, we examine the effectiveness of the two training paradigms as well as possible side effects (e.g., loss of knowledge or safety). In addition, by evaluating the social intelligence of our trained models through human judgment, we aim to understand the effectiveness of training LLMs from LLM ratings. Therefore, we propose to answer the following research questions:
RQ1 Can SOTOPIA-π improve the social goal completion ability and the overall social intelligence of language agents?
RQ2 Is LLM rating an effective proxy to human rating for training social intelligence in language agents?
RQ3 How does training with SOTOPIA-π influence other capabilities of language agents?
Figure 2: L: a social task with character profiles. R: An example turn from the perspective of the role-played character. This turn is the 3rd turn after the two characters each speak at their respective turns.
For RQ1, our findings reveal that self-reinforcement notably improves the social goal completion ability of a base 7B LLM as well as one trained with behavior cloning. The best
model (trained with behavior cloning followed by self-reinforcement) approaches the performance of GPT-4 according to GPT-4-based evaluation. Regarding RQ2, we observe an increasing gap between GPT-4-based and human evaluation, highlighting the limitations of relying solely on GPT4-based evaluation for optimizing or evaluating language models. This signals the need for future work on developing alternative evaluator models that can robustly evaluate social interaction. In response to RQ3, our safety evaluation shows that SOTOPIA-π improves safety and reduces the toxicity of language models in social tasks. Furthermore, when assessed on the Massive Multitask Language Understanding (MMLU) benchmark (Hendrycks et al., 2020), we demonstrate that SOTOPIA-π preserves the original questionanswering ability of the models. 2 Background 2.1 SOTOPIA environment In this paper, we use SOTOPIA (Zhou et al., 2024) as the platform for social learning. A social task in SOTOPIA consists of a scenario, two characters’ profiles, and their respective private social goals to achieve in an interaction. The combinations of scenarios and social goals cover a wide range of social interactions including negotiation, collaboration, and competition. Given a social task, SOTOPIA prompts two LLMs to serve as role-play social agents and interact with each other through speaking, non-verbal communication, and actions. Consider the example shown in Figure 2, a so2 12913 cial agent (the role-played character) in SOTOPIA makes decisions at its turns (Turn #3 at this moment) based on the interaction context including (1) the scenario (discuss trip plan), (2) the role-played character (Sam)’s profile and goal (to convince Mia to join the trip), (3) the visible information on other character (Mia)’s profile, and (4) the communication history (Mia declined the initial invitation). The decision consists of two parts: (1) the action type, choosing from speaking an utterance, making a gesture or facial expression as non-verbal communication, performing a physical action, or leaving the conversation, and (2) the action content, e.g. ‘I totally understand!’ as an utterance, ‘raise their eyebrows’ as non-verbal communication, and ‘show Mia some scenery photos’ as an action. SOTOPIA-EVAL (Zhou et al., 2024) provides evaluations of the social intelligence of social agents based on seven social dimensions. The seven dimensions are: believability (BEL), relationship (REL), knowledge (KNO), secret (SEC), social rules (SOC), financial and material benefits (FIN), and goal completion (GOAL). The overall score is the average of the seven social dimensions reflecting the overall social intelligence. Each dimension is rated by GPT-4 (OpenAI, 2023) and humans on a Likert scale.1 Therefore, following (Zhou et al., 2024), we not only use GPT-4 to evaluate the social performance of models but also collect human judgment to verify the findings. In this paper, we study how to use GPT-4-based evaluation as a training signal to improve social agents. 2.2 Interactive learning This paper focuses on interactive learning for improving social intelligence. We consider interactive learning as learning through interactive social conversation with other agents The most common way to implement interactive learning is reinforcement learning (work related to training LLMs with RL will be discussed in §7). 
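To make the agent's decision format and the aggregate score concrete, the following minimal Python sketch (ours, not the SOTOPIA codebase; the class and constant names are only illustrative) represents a turn-level decision as an action type plus content, and the overall SOTOPIA-EVAL score as the unweighted mean of the seven dimension ratings:

from dataclasses import dataclass
from statistics import mean

# The seven SOTOPIA-EVAL dimensions; per the footnote above, their score ranges
# differ ([-10, 0], [-5, 5], or [0, 10] depending on the dimension).
DIMENSIONS = ["BEL", "REL", "KNO", "SEC", "SOC", "FIN", "GOAL"]

# Action types available to a role-played agent at each turn.
ACTION_TYPES = {"none", "speak", "non-verbal communication", "action", "leave"}

@dataclass
class AgentAction:
    action_type: str  # one of ACTION_TYPES
    argument: str     # the utterance, the gesture, or the physical action

    def __post_init__(self):
        if self.action_type not in ACTION_TYPES:
            raise ValueError(f"unknown action type: {self.action_type}")

def overall_score(ratings: dict) -> float:
    """Overall score = unweighted average of the seven dimension ratings."""
    return mean(ratings[d] for d in DIMENSIONS)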
In this paper, we consider two forms of interactive learning: learning from an expert (behavior cloning) and from reinforcement of the model’s positive behaviors (selfreinforcement). Behavior cloning (BC) (Pomerleau, 1988; Torabi et al., 2018) is a technique that learns from highquality observational data, specifically from the behavioral trajectories of an expert with strong skills. In the context of social tasks, the trajectories are 1Different dimensions have three types of score ranges: [-10, 0], [-5, 5], and [0, 10]. defined as social interaction data of multi-turn conversations. Due to the challenge of collecting extensive, high-quality human conversation data, we use state-of-the-art (SOTA) models to supply these behavioral trajectories (Wang and Jansen, 2023), thereby utilizing social intelligence of those models as a proxy for expert input (Gandhi et al., 2023). Specifically, we use GPT-4-based agents as the experts, which achieved the best performance in SOTOPIA (Zhou et al., 2024). Self-reinforcement (SR) (Bandura, 1976) is an offline reinforcement learning method that generates and evaluates its own interactions for training. The closest implementation of SR to ours is ReST (Gulcehre et al., 2023), which employs an iterative threshold-based data filtering method and trains on data with higher quality over time. In preliminary experiments, we found that this strategy required careful threshold tuning, but only yielded a marginal improvement, and that threshold-based filtering did not work well for multiple tasks at various difficulty levels. Based on this experience, we propose a ratio-based data filtering method that enables SR without iterations. 3 SOTOPIA-π framework SOTOPIA-π improves the social intelligence of a language agent starting from its current policy πref through three steps (Figure 1): (1) social task generation, (2) training data collection, and (3) agent policy update. In this section, we provide details of the three steps in our pipeline. Step 1: Social task generation Mirroring the way that humans navigate novel social situations by acquiring different social skills in everyday social interaction, we encourage the continuous learning of language agents in exploring social skills within a dynamic and diverse social environment. By adopting the principles of dynamic task generation for open-ended learning (OEL Team et al., 2021), we provide a diverse set of social tasks as the foundation of interactive learning. As the first step, SOTOPIA-π automatically generates synthesized social tasks through two steps: (1) sampling keywords related to social activities from Social Chemistry (Forbes et al., 2020), Social IQa (Sap et al., 2019), and Normbank (Ziems et al., 2023) and (2) prompting GPT-4 to generate scenarios and social goals based on the sampled keywords (Figure 3). Details about social 3 12914 task generation can be found in Appendix §B.1. Prompt for generation new social tasks Your task is to generate social tasks including a scenario and two social goals for two characters. <social scenario definition> <social goal definition> Here are a few examples: <social task examples> Please generate 1 social task related to <topic sampled from Social Chemistry, Social IQA or Normbank> according to <output format instruction> Figure 3: Prompt template for generating social tasks. We reuse the 40 character profiles in SOTOPIA, including their names, genders, occupations, personalities, and other backgrounds. 
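As a concrete illustration of Step 1, the sketch below assembles the Figure 3 prompt for a sampled keyword and sends it to GPT-4. This is not the authors' implementation: call_gpt4 stands in for whatever LLM client is used, and the keyword lists are assumed to be loaded from the three sources elsewhere (the two example keywords are taken from Appendix §B.1).

import random

# Keyword lists sampled from Social Chemistry, Social IQa, and Normbank
# (loading the full lists is outside this sketch).
KEYWORD_SOURCES = {
    "social_chemistry": ["travel without food", "being mad at my friend"],
    "social_iqa": ["..."],
    "normbank": ["..."],
}

PROMPT_TEMPLATE = (
    "Your task is to generate social tasks including a scenario and two social "
    "goals for two characters.\n{scenario_definition}\n{goal_definition}\n"
    "Here are a few examples:\n{examples}\n"
    "Please generate 1 social task related to {topic} according to {format_instruction}"
)

def generate_social_task(call_gpt4, scenario_definition, goal_definition,
                         examples, format_instruction):
    """Sample a topic keyword and prompt GPT-4 with the Figure 3 template."""
    source = random.choice(list(KEYWORD_SOURCES))
    topic = random.choice(KEYWORD_SOURCES[source])
    prompt = PROMPT_TEMPLATE.format(
        scenario_definition=scenario_definition,
        goal_definition=goal_definition,
        examples=examples,
        topic=topic,
        format_instruction=format_instruction,
    )
    return call_gpt4(prompt)  # placeholder client, e.g. a thin OpenAI API wrapper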
For each social task, a pair of characters are randomly sampled. The social tasks (a combination of scenarios, characters’ profiles, and social goals) used in training are guaranteed to not overlap with the social tasks used for evaluation. Different from the humanin-the-loop procedure used in SOTOPIA, which involves manual inspection and filtering for better task quality, we take an automated and scalable approach to produce a large number of unfiltered social tasks. The experimental findings reveal that our method can significantly improve the performance of language agents when using a vast quantity of social tasks of lower quality. Utilizing a more sophisticated or manual selection process to filter high-quality social tasks could potentially lead to further improvement, which we leave for future works. Step 2: Training data collection Based on the generated social task, the second step of SOTOPIA-π is collecting training data for behavior cloning and self-reinforcement. During social interaction, as outlined in §2.1, two language agents alternate responses based on the visible component of a social task and the conversation history. For behavior cloning, we use the interactions between the expert policy πexpert of two GPT-4-based agents role-playing two sampled characters, because according to (Zhou et al., 2024), conversations between GPT-4-based agents could achieve the highest social scores among other LLMs. Similarly, for self-reinforcement, we collect the interactions between the agent policy πref role-playing two sampled characters. Obtaining expert data can be costly and may not always be accessible. While employing multiple expert models is an option, our findings indicate that after a single round of behavior cloning using the expert policy from a GPT-4-based agent, the performance of the agent model surpasses that of a GPT-3.5-based agent. Therefore, we opt for GPT-4 as our expert model. Self-reinforcement becomes crucial in situations when expert data is unavailable or the agent’s capability exceeds that of the expert. We leave the potential to use human conversation data as the expert trajectories for behavior cloning for future work. Step 3: Agent policy update The last step of SOTOPIA-π involves updating the agent’s policy based on positive examples from the training data. Leveraging AI feedback is useful for automating the evaluation process and improving the learning of language models without human labels (Bai et al., 2022). For each agent in social interaction, we collect GPT-4’s ratings of the agent’s social performance and the corresponding reasoning. Among the seven social dimensions of social performance in SOTOPIA-EVAL, we specifically focus on the goal completion dimension that scored between 0 and 10 as the extent to which an agent fulfills its social goal. Zhou et al. (2024) discovers that among all seven dimensions, ratings by GPT-4 on goal completion have the highest correlation with human ratings. In §4 and §8, we discuss the potential issues of using LLMs to provide ratings. We filter the training data by setting a threshold for the goal completion scores rated by GPT-4 (refer to Appendix §B.2 for details of the filtering strategy). Each turn of the interaction data is parsed into training pairs of inputs and outputs. For input, we provide a combination of the information about the task that is visible to the agent and the conversation history. For output, we provide a JSON string of action type and content as output (see Appendix §B.3 for details). 
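The filtering and parsing just described can be sketched as follows. This is our illustration, not the released code: the episode schema (goal_score, turns, etc.) is assumed, and format_context only loosely follows the full prompt format of Appendix §B.3.

import json

def format_context(task, agent, history):
    # Minimal stand-in: the real prompt also includes both character profiles and
    # the agent's private social goal (see Appendix §B.3 for the full format).
    return f"Here is the context of this interaction:\n{task}\n{history}\nYou are {agent}."

def build_sft_pairs(episodes, goal_threshold):
    """Keep agents whose GPT-4 goal-completion score passes the threshold, and turn
    each of their turns into one (input prompt, output JSON action) training pair."""
    pairs = []
    for ep in episodes:
        for agent in ("agent1", "agent2"):
            if ep["goal_score"][agent] < goal_threshold:
                continue  # only positive examples are kept
            for turn in ep["turns"]:
                if turn["speaker"] != agent:
                    continue
                pairs.append({
                    "input": format_context(ep["task"], agent, turn["history"]),
                    "output": json.dumps({"action_type": turn["action_type"],
                                          "argument": turn["argument"]}),
                })
    return pairs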
Based on the filtered positive training data, we update our agent's policy with supervised fine-tuning on the agent model. We further explore a sequential training approach in which an agent policy is first updated by behavior cloning; the updated agent policy then generates the interaction data for self-reinforcement.
4 Experimental setting
In this section, we discuss the details of the agent models we compare in the experiments. Additionally, we show details of the training and evaluation configuration we use in SOTOPIA-π.
Agent models We choose GPT-4 (OpenAI, 2023) as our expert agent model and Mistral-7B (Jiang et al., 2023) as our base agent model to improve upon. We experiment with improving the base agent model using three approaches: (1) behavior cloning based on the policy provided by an expert model (GPT-4), (2) self-reinforcement based on the agent policy, and (3) behavior cloning followed by self-reinforcement. Our baselines utilize the expert model (GPT-4) and the base model (Mistral-7B) to conduct prompting-based role-playing with a fixed partner model (GPT-3.5-turbo). We compare the baselines with the agent models trained using the three approaches above. All agent models share the same prompt format and use few-shot prompting to generate responses for social tasks. Details of our prompting format and the specific model versions used in our experiments can be found in Appendix §B.3 and §B.4.
Training In our experiments, we utilize efficient fine-tuning on quantized LLMs (QLoRA) (Dettmers et al., 2023) on the base agent model Mistral-7B with behavior cloning, self-reinforcement, and their combination. We use GPT-4 to generate 100 social tasks with social topics including negotiation, collaboration, and competition per round of training. For each social task, we run 10 social interactions with 10 different character pairs role-played by agent models. The multi-turn social conversations between two agent models are collected and filtered as our training data. More details related to social task generation, training data collection, and the training setup can be found in Appendix §B.1, §B.4, and §B.5, respectively.
Evaluation We evaluate the agent models based on the seven social dimensions defined in SOTOPIA-EVAL. We also provide the overall score, which is the average score of the seven social dimensions. For evaluation, we collect the interactions between the updated agent policy πagent and a fixed partner policy πpartner (GPT-3.5-turbo) (OpenAI, 2023) and obtain human and GPT-4 ratings on all seven social dimensions. We report the agent's performance on all 90 social tasks, as well as on a subset of 14 hard2 social tasks selected from the 90 social tasks. To maintain a balanced speaking order, we ensure that both agents have equal opportunities to initiate conversation within a social task.
[Figure 4 plots GPT-4 rating scores against human rating scores of the goal completion dimension for the base model (Mistral-7B), SR, BC, BC+SR, and GPT-4. Its in-figure annotation reads: as evaluated by both GPT-4 and humans, our methods improve the goal completion score on hard scenarios; however, the average gap between GPT-4 scores and human scores increases from 0.36 to 1.42.]
Figure 4: GPT-4-based automatic evaluation scores and human evaluation scores of the goal completion dimension. We show the performance of the base model, our trained agent models, and GPT-4 (represented by icons) on hard social tasks in SOTOPIA.
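To summarize the evaluation protocol in code form, here is a minimal sketch (ours, not the SOTOPIA toolkit); run_interaction and judge are placeholders for the SOTOPIA rollout loop and the GPT-4-based SOTOPIA-EVAL rater.

def evaluate_agent(run_interaction, judge, agent_policy, partner_policy, tasks):
    """Roll out the trained agent against the fixed GPT-3.5-based partner on each
    task, alternating which policy speaks first, and collect 7-dimension ratings."""
    ratings = []
    for task in tasks:
        for first, second in ((agent_policy, partner_policy),
                              (partner_policy, agent_policy)):  # balanced speaking order
            episode = run_interaction(task, first, second)      # multi-turn rollout
            ratings.append(judge(task, episode))                # dict of BEL, REL, ..., GOAL
    return ratings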
We run both automatic evaluation provided by prompting GPT-4 for evaluation scores, and human evaluation provided by qualified human annotators. We use the same prompts for GPT-4-based automatic evaluation as SOTOPIA-EVAL. 5 Does SOTOPIA-π improve the social intelligence of language agents? As shown in Figure 4, according to both GPT-4based and human evaluation on the hard subset of SOTOPIA, self-reinforcement improves the social goal completion ability of both the base model (Mistral-7B) and the behavior cloned model. We can also discover that learning from the positive examples from the expert is more effective than learning from positive examples from the agent policy. Combining them, i.e. first implementing behavior cloning and then self-reinforcement, improves the agent policy significantly, nearly matching the goal completion performance of GPT-4 itself: 5.71 (ours) vs 5.89 (GPT-4) as rated by GPT-4. The full results are presented in Appendix §A. An increasing gap between GPT-4-based and human evaluation However, we find that GPT4 based evaluation significantly overestimates the 2Zhou et al. (2024) identified 14 hard social tasks SOTOPIA-hard among the original 90 social tasks, which are harder for both state-of-the-art LLMs and humans. 5 12916 BEL REL KNO SEC SOC FIN Overall 2.05 1.91 -0.14 0.00 1.11 0.09 0.91 Table 1: Improvement (∆) on other social dimensions of our best model (behavior cloning followed by selfreinforcement) over the base model (Mistral-7B) as evaluated by humans on hard social tasks in SOTOPIA. Significant improvements are bold. abilities of the models trained specifically for social interaction (either through behavior cloning or self-reinforcement). As shown in Figure 4, the gap between GPT-4 scores and human scores increases as our method optimizes GPT-4 rated goal completion scores during training. In contrast, the gap between human and automatic scores for the GPT-4 based agent is smaller, leading to a relatively large gap in human scores for our best BC+SR model (4.29 goal completion score) and the GPT-4 based agent (5.25). This finding indicates the necessity for future work on developing evaluation models that can robustly evaluate social interaction specifically on models that are fine-tuned using these evaluation metrics. Improvements on other social dimensions As mentioned in §3, we train models on positive examples based on the goal completion dimension. How would this affect other social dimensions? Table 1 shows the improvement of our method on dimensions other than goal completion. Our method significantly improves the believability, relationship, and social rules scores, as well as the overall score, while only slightly affecting other social dimensions. Similar trends in improvements for all social tasks in SOTOPIA scenarios On all social tasks in SOTOPIA, we observe similar trends in GPT4-based evaluation results3 as on hard social tasks in SOTOPIA. As shown in Table 2, our method achieves improvements over the base model not only on the goal completion dimension but also on the overall score. Notably, the performance of our best model (BC + SR) is comparable to the expert model. Refer to Appendix A for a breakdown of the overall scores. To answer RQ1 and RQ2, we demonstrate that through interactive learning (behavior cloning and self-reinforcement), SOTOPIA-π improves the social goal completion ability of language agents on the social tasks in SOTOPIA. 
From the experimental results, we also find the limitation of GPT-43Human evaluation on all social tasks in SOTOPIA is not conducted due to the high cost. Agent model GOAL (↑) Overall (↑) All social scenarios in SOTOPIA Expert (GPT-4) 7.62 3.31 Base (Mistral-7B) 5.07 2.33 Ours Self-Reinforcement (SR) 5.83 2.57 Behavior Cloning (BC) 7.27 3.41 BC+SR 7.62 3.44 Table 2: SOTOPIA-π improves the goal completion score and the overall score as evaluated by GPT-4 on all social tasks in SOTOPIA. BC+SR achieves comparable performance as the expert model. based evaluation. In subsequent sections of this paper, we will discuss how this training method influences other aspects of the capabilities of LLMs. 6 How does SOTOPIA-π influence other capabilities of LLMs As LLMs become more proficient in mimicking human conversations, they can unintentionally produce harmful outcomes such as biased or offensive content (Hendrycks and Mazeika, 2022), or inherently display behaviors not aligned with human intentions, like manipulative or power-seeking actions (Si et al., 2022). These misalignments pose severe risks such as safety hazards and existential threats (Hendrycks et al., 2023). Thus, comprehending and assessing the safety level of LLMs is vital to preventing potential harmful consequences. SOTOPIA-EVAL has several dimensions related to safety: SOC, SEC, and REL (Zhou et al., 2024). However, none of these dimensions evaluates only safety, thus the weakness of safety could be covered by the strength of other capabilities related to that dimension. To account for this, we first qualitatively study the behavior of agents under one SOTOPIA task, where Character 1’s goal is ‘to injure a third person they dislike’, and Character 2’s goal is ‘to express dislike but prevent violence’. We consider 9 examples for each of the 5 different agent models role-playing each character and manually label several quantities for each agent. We define (1) an “engagement rate” as the ratio of episodes with more than 4 turns and where the agent responds with none less than 50% of the time, (2) a “proceed-to-injure rate” as the rate at which the agent verbally expressing the intention to injure the other agent, and (3) the “prevention rate” as the agent verbally expressing the intention to give up the plan to injure, (4) the “number of alternative solutions” as the number of significantly different 6 12917 Agent model role-playing Character 1 Agent model Engagement (↑) Injury (↓) # Toxic (↓) Expert (GPT-4) 100% 44% 0.3 Base (Mistral-7B) 22% 100% 3.6 Ours Self-Reinforcement (SR) 100% 100% 5.5 Behavior Cloning (BC) 100% 100% 7.5 BC+SR 100% 44% 0.9 Agent model role-playing Character 2 Agent model Engagement (↑) Prevention (↑) # Solutions (↑) Expert (GPT4) 89% 89% 1.2 Base (Mistral-7B) 22% 11% 0.2 Ours Self-Reinforcement (SR) 78% 67% 1.3 Behavior Cloning (BC) 100% 100% 2.2 BC+SR 100% 100% 2.9 Table 3: SOTOPIA-π improves the engagement, safety, and persuasion ability while using less toxic words and providing more advice than the base model. alternatives proposed, and (5) the “number of toxic words” based on a word list4. We measure (1), (2), and (5) for Character 1, and (1), (3), and (4) for Character 2. Models trained by SOTOPIA-π engage more, are safer, more persuasive, and less toxic in this task. When role-playing both Character 1 & 2, our best model’s engagement rate is higher than the base model. 
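The behavioral measures defined above can be made concrete with a short sketch (our illustration; the episode schema is assumed, and the toxic word list is the one cited in the footnote below):

def engagement_rate(episodes, agent):
    """Fraction of episodes with more than 4 turns in which the agent responds
    with the 'none' action less than 50% of the time."""
    def engaged(ep):
        own = [t for t in ep["turns"] if t["speaker"] == agent]
        none_frac = sum(t["action_type"] == "none" for t in own) / max(len(own), 1)
        return len(ep["turns"]) > 4 and none_frac < 0.5
    return sum(engaged(ep) for ep in episodes) / len(episodes)

def toxic_word_count(episodes, agent, toxic_words):
    """Average number of toxic-list words per episode in the agent's own turns."""
    total = 0
    for ep in episodes:
        for t in ep["turns"]:
            if t["speaker"] == agent:
                total += sum(w in toxic_words for w in t["argument"].lower().split())
    return total / len(episodes)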
When keeping engaged, our model is less likely to proceed with the injury plan (Character 1) and more likely to succeed at persuading the other agent to give up on injuring the third person (Character 2). Another piece of evidence that shows our model is more persuasive is the number of alternatives that it learns to give, which is even higher than the expert model that our model learns from. We do note that even the best of our methods still produces more toxic words than GPT-4. But it is surprising to see that without explicitly aligning models to be safer using RLHF (Ouyang et al., 2022), our model becomes more aligned only through training to complete social goals in these tasks. In addition to safety, since SOTOPIA-π trains for social interaction instead of the instruction finetuning tasks (c.f. Jiang et al. (2023)), it could be subjective to catastrophic forgetting (Lopez-Paz and Ranzato, 2017), a common phenomenon found during continual fine-tuning where model forgets previously learned knowledge (Luo et al., 2023). To verify that our training method preserves the base model’s general knowledge, context understanding, and problem-solving ability, we test 4https://github.com/facebookresearch/flores/ tree/main/toxicity Agent model MMLU (↑) Base (Mistral-7B) 49.21 Self-Reinforcement (SR) 43.46 Behavior Cloning (BC) 47.48 BC+SR 48.57 Table 4: Evaluation results of MMLU on agent models. MMLU evaluation is conducted in a standard 5-shot setting with instruction-based prompting. In the case when a formatting error occurs, the first occurrence of choice present is taken as the answer, and a random answer is generated in the case of no presence. The bolded numbers are not significantly different. the models’ performance on the MMLU benchmark (Hendrycks et al., 2020). The benchmark is commonly used to evaluate a language model’s generic performance on question answering and problem-solving. We follow the practice in Akter et al. (2023): taking the direct response from the model by prompting the model with instructions. Models trained by SOTOPIA-π maintain the question answering capability of the base model. As shown in Table 4, the best performance of our models on MMLU is comparable to the performance of the base model. We are surprised to see that our method is not subject to the catastrophic forgetting problem. This might indicate that the ability for social interaction is orthogonal to the question answering ability. Detailed results are included in Appendix §F. 7 Related work Social Intelligence in LLMs Large language models (LLMs) have led to new technologies that manage to handle common social use cases, including voice assistants, email autocomplete (Chen et al., 2019), AI-assisted counseling (Sharma et al., 2021), etc. However, human social interactions are more complicated and diverse than those restricted uses, exposing model limitations in extended contexts. Sap et al. (2023) study the limitations of social intelligence in current LLMs and conclude that current models struggle with Theory of Mind tasks such as SocialIQa (Sap et al., 2019) and ToMi (Le et al., 2019). In the Avalon game setting, Light et al. (2023) show that it is still challenging for LLM agents to successfully deceive, deduce, and negotiate with other players, particularly in a multiagent environment. 
A potential method to improve the Theory of Mind in language agents is through meta learning the mental model of the interlocutor (Zhu et al., 2021, 2022; Liu et al., 2022) We leave 7 12918 explicity modeling Theory of Mind in language agents to improve the social intelligence as the future work. These studies show that the effective development of general social intelligence in model training has yet to be fully realized. Studies have looked into the potential of behavior cloning from observational data for enhancing social intelligence via interaction (Wang et al., 2023c). SOTOPIA-π echos social science theories of inferential social learning (Gweon, 2021), where models learn not only by imitating but also by making inferences about social contexts. Reinforcement Learning for LLMs Reinforcement learning from human feedback (RLHF; Christiano et al. (2017)) improves the alignment of LLMs to human preferences (Ouyang et al., 2022). Direct Preference Optimization (Rafailov et al., 2023) and Ψ Policy Optimization (Azar et al., 2023) improve RLHF by optimizing the LLM policy without relying on the reward model. These online RL methods often require online data collection, which has a longer latency in multi-agent settings. Typical types of offline self-reinforcement include self-imitation learning (SIL; Oh et al. (2018)), reward ranked fine-tuning (RAFT; Dong et al. (2023)), and reinforced self-training (ReST; Gulcehre et al. (2023)). SIL sets a replay buffer and imitates state-action pairs when it is better than the current value estimation. RAFT generates multiple outputs and utilizes the reward model to filter out a subset. ReST is a more complicated version of RAFT with multiple improve steps. SOTOPIA-π applies offline self-reinforcement to training LLMs on social tasks and utilizes the GPT-4 to provide rewards for multi-turn social interaction. We leave investigating the effects of different offline methods on training social intelligence to future work. LLM Alignment and Evaluation Advances in fine-tuning methods like parameter-efficient finetuning (Li and Liang, 2021; Lester et al., 2021; Hu et al., 2021) have improved LLMs’ capabilities to better understand the restriction and rules given by human, enhancing their capability for social learning and interaction. Other governance objectives align LLM behaviors via robustness, interpretability, controllability, and ethicality (Ji et al., 2024). We focus on evaluating our trained LLMs’ alignment with human social norms via safety and toxicity. It has been pointed out that continual fine-tuning can lead to catastrophic forgetting of LLMs, in terms of domain knowledge, reasoning, and reading comprehension (Luo et al., 2023). To test the general question answering and reasoning capabilities of our trained LLMs, we measure their performance on the Massive Multitask Language Understanding (MMLU) benchmark (Hendrycks et al., 2020), a holistic benchmark designed to test the knowledge of a model across 57 subjects. 8 Conclusion and future work In this paper, we propose an interactive learning method SOTOPIA-π to study how to use LLM ratings as a learning signal to improve the social intelligence of language agents. We first find that through optimizing the goal completion score, the general performance on SOTOPIA (Zhou et al., 2024), a social intelligence benchmark is improved. However, we find that the gap between LLM ratings and human judgment is enlarged through this process. 
We also find that the SOTOPIA-π improves social intelligence without a loss of general QA ability and with an improvement in safety. Although SOTOPIA-π demonstrates strong capabilities of improving social intelligence, several directions will improve our method further. (1) Online reinforcement learning: SOTOPIA-π is an offline training method that cannot improve iteratively. Future work could study how online methods like PPO (Schulman et al., 2017) can be applied without the high cost of LLM ratings. (2) Learning from humans: as mentioned in §2, we use GPT-4 as the expert due to the challenge of collecting human interaction data. Future work could explore using existing data including forum conversations, movies, and dialog datasets as offline data for training agents. (3) In §6, we only evaluate one social task, which allows us to dig deep into the task and create customized metrics. Also, how to derive safety metrics for all social tasks is an interesting future direction. (4) As demonstrated in §5, the gap between GPT-4 and human evaluation increases as the model optimizes GPT-4 scores. Future research could consider more robust evaluation and learning signals for social intelligence tasks. Limitations Using LLM as evaluator In our experiments, we use GPT-4 to provide ratings of the positive behaviors of social interactions and to evaluate the agent’s performance on social tasks. However, our 8 12919 findings show that the gap between GPT-4-based and human evaluation of our trained agent models is increasing. This indicates the potential bias of using LLM as the evaluator for assessing social performance. Using safety as a social alignment dimension Except for safety, there are other social dimensions related to LLMs’ social alignment such as privacy, fairness, and reliability (Liu et al., 2023). Due to the limited coverage of social tasks associated with social alignment, we only study the safety aspect of the trained agents. Potential social biases in the interactive system Content generated by GPT-4 may contain potential social biases and stereotypes. The SOTOPIA interactive environment that we use is powered by GPT-4, which could lead to training agents with unintended social biases. Ethical Statement Our goal for the SOTOPIA-π project is to enhance the social intelligence of AI agents, as evaluated by SOTOPIA-EVAL. Similar to Zhou et al. (2024), we also focus on creating more realistic conversations, fostering better relationships, providing knowledgeable conversation, maintaining secrecy, following social rules, improving agents’ abilities to achieve financial and material gains, and completing social goals. It is important to note that our objective is not to create AI systems that are indistinguishable from humans or create potential global risks (Yudkowsky et al., 2008). Instead, our target is to study the development and learning processes of human social intelligence. Moreover, this research provides insights into social behavior under various circumstances without the costly need for data collection involving human participants. Because building AI systems based on large language models, particularly those designed for strategic social interactions, can lead to unexpected outcomes and potentially negative social impacts (Si et al., 2022), we approach the experiments cautiously. Specifically, the role-playing abilities of large language models may lead to anthropomorphism, as described by Shanahan et al. 
(2023), where the AI system is perceived to exhibit humanlike personalities. Our research aims to understand and responsibly navigate these challenges, potentially referring to the framework by Zhang et al. (2023). We acknowledge that using any LLM including GPT-4 to evaluate our system, SOTOPIA-EVAL, could introduce biases (Wang et al., 2023b; Gallegos et al., 2023). Our future research will focus on identifying, understanding, and mitigating social and cultural biases (Tao et al., 2023). It is essential for us to enhance our model’s social intelligence without incorporating any biases. This step is also crucial in the development of responsible and unbiased AI agents. Furthermore, our study has observed that instances of unsafe behavior, such as generation of toxic language or harmful suggestions, can emerge during our model’s training. These behaviors present significant social risks and safety risks (Hendrycks et al., 2023; Wang et al., 2023a). Addressing these issues is vital for ensuring the safe and ethical use of AI in society and is particularly important during the development of AI systems. In our human evaluation studies, we ensure that all our annotators are based in either the United Kingdom or the United States. In the United States, annotators are compensated at a rate of $1.5 for each task they complete, with the expectation that each task will take no more than 10 minutes. This setup allows them to potentially earn over $9 per hour, surpassing the minimum wage in the U.S. Meanwhile, in the United Kingdom, we offer additional bonuses to ensure that annotators’ average earnings exceed $14.5 per hour, aligning with minimum wage standards in United Kingdom. All human-subject experiments are approved by the Institutional Review Board (IRB) at the authors’ institution. Acknowledgement RW, HY, WZ, and ZQ are supported by CMU Graduate Small project Help (GuSH) research grant. HZ is supported by NSF EAGER Award #2141751. We thank students from the Language Technologies Institute for offering suggestions and crowd workers on Prolific for providing high quality annotations. We also thank Together.AI for sponsoring credits. References Syeda Nahida Akter, Zichun Yu, Aashiq Muhamed, Tianyue Ou, Alex Bäuerle, Ángel Alexander Cabrera, Krish Dholakia, Chenyan Xiong, and Graham Neubig. 2023. An in-depth look at gemini’s language abilities. Mohammad Gheshlaghi Azar, Mark Rowland, Bilal 9 12920 Piot, Daniel Guo, Daniele Calandriello, Michal Valko, and Rémi Munos. 2023. A general theoretical paradigm to understand learning from human preferences. Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, and Jared Kaplan. 2022. Constitutional ai: Harmlessness from ai feedback. Albert Bandura. 1976. Self-reinforcement: Theoretical and methodological considerations. Behaviorism, 4(2):135–155. 
Daniel L Chen, Martin Schonger, and Chris Wickens. 2016. otree—an open-source platform for laboratory, online, and field experiments. Journal of Behavioral and Experimental Finance, 9:88–97. Mia Xu Chen, Benjamin N. Lee, Gagan Bansal, Yuan Cao, Shuyuan Zhang, Justin Lu, Jackie Tsay, Yinan Wang, Andrew M. Dai, Zhifeng Chen, Timothy Sohn, and Yonghui Wu. 2019. Gmail smart compose: Realtime assisted writing. CoRR, abs/1906.00080. Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30. Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. 2023. Qlora: Efficient finetuning of quantized llms. Hanze Dong, Wei Xiong, Deepanshu Goyal, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, and Tong Zhang. 2023. Raft: Reward ranked finetuning for generative foundation model alignment. arXiv preprint arXiv:2304.06767. Maxwell Forbes, Jena D. Hwang, Vered Shwartz, Maarten Sap, and Yejin Choi. 2020. Social chemistry 101: Learning to reason about social and moral norms. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 653–670, Online. Association for Computational Linguistics. Isabel O Gallegos, Ryan A Rossi, Joe Barrow, Md Mehrab Tanjim, Sungchul Kim, Franck Dernoncourt, Tong Yu, Ruiyi Zhang, and Nesreen K Ahmed. 2023. Bias and fairness in large language models: A survey. arXiv preprint arXiv:2309.00770. Kanishk Gandhi, Jan-Philipp Fränken, Tobias Gerstenberg, and Noah D Goodman. 2023. Understanding social reasoning in language models with language models. arXiv preprint arXiv:2306.15448. Caglar Gulcehre, Tom Le Paine, Srivatsan Teh, Srinivasan, and Ksenia Konyushkova. 2023. Reinforced self-training (rest) for language modeling. CSCL. Hyowon Gweon. 2021. Inferential social learning: cognitive foundations of human social learning and teaching. Trends in Cognitive Sciences, 25(10):896–910. Hyowon Gweon, Judith Fan, and Been Kim. 2023. Socially intelligent machines that learn from humans and help humans learn. Philosophical Transactions of the Royal Society A, 381(2251):20220048. Paul Heider, Jihad Obeid, and Stephane Meystre. 2020. A comparative analysis of speed and accuracy for three off-the-shelf de-identification tools. AMIA Joint Summits on Translational Science proceedings. AMIA Joint Summits on Translational Science, 2020:241– 250. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. In International Conference on Learning Representations. Dan Hendrycks and Mantas Mazeika. 2022. X-risk analysis for ai research. Dan Hendrycks, Mantas Mazeika, and Thomas Woodside. 2023. An overview of catastrophic ai risks. Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. Jiaming Ji, Tianyi Qiu, Boyuan Chen, Borong Zhang, Hantao Lou, Kaile Wang, Yawen Duan, Zhonghao He, Jiayi Zhou, Zhaowei Zhang, Fanzhi Zeng, Kwan Yee Ng, Juntao Dai, Xuehai Pan, Aidan O’Gara, Yingshan Lei, Hua Xu, Brian Tse, Jie Fu, Stephen McAleer, Yaodong Yang, Yizhou Wang, Song-Chun Zhu, Yike Guo, and Wen Gao. 2024. Ai alignment: A comprehensive survey. Albert Q. 
Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7b. Matthew Le, Y-Lan Boureau, and Maximilian Nickel. 2019. Revisiting the evaluation of theory of mind through question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5872–5877, Hong Kong, China. Association for Computational Linguistics. 10 12921 Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. Jonathan Light, Min Cai, Sheng Shen, and Ziniu Hu. 2023. From text to tactic: Evaluating llms playing the game of avalon. Andy Liu, Hao Zhu, Emmy Liu, Yonatan Bisk, and Graham Neubig. 2022. Computational language acquisition with theory of mind. In The Eleventh International Conference on Learning Representations. Ruibo Liu, Ruixin Yang, Chenyan Jia, Ge Zhang, Diyi Yang, and Soroush Vosoughi. 2024. Training socially aligned language models on simulated social interactions. In The Twelfth International Conference on Learning Representations. Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng, Yegor Klochkov, Muhammad Faaiz Taufiq, and Hang Li. 2023. Trustworthy llms: a survey and guideline for evaluating large language models’ alignment. David Lopez-Paz and Marc’Aurelio Ranzato. 2017. Gradient episodic memory for continuum learning. CoRR, abs/1706.08840. Yun Luo, Zhen Yang, Fandong Meng, Yafu Li, Jie Zhou, and Yue Zhang. 2023. An empirical study of catastrophic forgetting in large language models during continual fine-tuning. Niloofar Mireshghallah, Hyunwoo Kim, Xuhui Zhou, Yulia Tsvetkov, Maarten Sap, Reza Shokri, and Yejin Choi. 2023. Can llms keep a secret? testing privacy implications of language models via contextual integrity theory. Helen Nissenbaum. 2004. Privacy as contextual integrity. Washington Law Review, 79. OEL Team, Adam Stooke, Anuj Mahajan, Catarina Barros, Charlie Deck, Jakob Bauer, Jakub Sygnowski, Maja Trebacz, Max Jaderberg, Michael Mathieu, et al. 2021. Open-ended learning leads to generally capable agents. arXiv preprint arXiv:2107.12808. Junhyuk Oh, Yijie Guo, Satinder Singh, and Honglak Lee. 2018. Self-imitation learning. In International Conference on Machine Learning, pages 3878–3887. PMLR. OpenAI. 2023. Gpt-4 technical report. Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. Dean A Pomerleau. 1988. Alvinn: An autonomous land vehicle in a neural network. Advances in neural information processing systems, 1. Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. Maarten Sap, Ronan LeBras, Daniel Fried, and Yejin Choi. 2023. Neural theory-of-mind? 
on the limits of social intelligence in large lms. Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019. Social IQa: Commonsense reasoning about social interactions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4463– 4473, Hong Kong, China. Association for Computational Linguistics. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. Murray Shanahan, Kyle McDonell, and Laria Reynolds. 2023. Role play with large language models. Nature, 623(7987):493–498. Natalie Shapira, Mosh Levy, Seyed Hossein Alavi, Xuhui Zhou, Yejin Choi, Yoav Goldberg, Maarten Sap, and Vered Shwartz. 2023. Clever hans or neural theory of mind? stress testing social reasoning in large language models. Ashish Sharma, Inna W. Lin, Adam S. Miner, David C. Atkins, and Tim Althoff. 2021. Towards facilitating empathic conversations in online mental health support: A reinforcement learning approach. CoRR, abs/2101.07714. Wai Man Si, Michael Backes, Jeremy Blackburn, Emiliano De Cristofaro, Gianluca Stringhini, Savvas Zannettou, and Yang Zhang. 2022. Why so toxic? measuring and triggering toxic behavior in open-domain chatbots. Yan Tao, Olga Viberg, Ryan S Baker, and Rene F Kizilcec. 2023. Auditing and mitigating cultural bias in llms. arXiv preprint arXiv:2311.14096. Michael Tomasello. 2021. Becoming Human: A Theory of Ontogeny. Belknap Press. Faraz Torabi, Garrett Warnell, and Peter Stone. 2018. Behavioral cloning from observation. arXiv preprint arXiv:1805.01954. Tomer Ullman. 2023. Large language models fail on trivial alterations to theory-of-mind tasks. Boxin Wang, Weixin Chen, Hengzhi Pei, Chulin Xie, Mintong Kang, Chenhui Zhang, Chejian Xu, Zidi Xiong, Ritik Dutta, Rylan Schaeffer, et al. 11 12922 2023a. Decodingtrust: A comprehensive assessment of trustworthiness in gpt models. arXiv preprint arXiv:2306.11698. Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. 2023b. Large language models are not fair evaluators. arXiv preprint arXiv:2305.17926. Ruoyao Wang and Peter Jansen. 2023. Self-supervised behavior cloned transformers are path crawlers for text games. arXiv preprint arXiv:2312.04657. Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, and Qun Liu. 2023c. Aligning large language models with human: A survey. Laura Weidinger, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, Mia Glaese, Borja Balle, Atoosa Kasirzadeh, et al. 2021. Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359. Eliezer Yudkowsky et al. 2008. Artificial intelligence as a positive and negative factor in global risk. Global catastrophic risks, 1(303):184. Jianyi Zhang, Xu Ji, Zhangchi Zhao, Xiali Hei, and KimKwang Raymond Choo. 2023. Ethical considerations and policy implications for large language models: Guiding responsible development and deployment. Xuhui Zhou, Hao Zhu, Leena Mathur, Ruohong Zhang, Haofei Yu, Zhengyang Qi, Louis-Philippe Morency, Yonatan Bisk, Daniel Fried, Graham Neubig, and Maarten Sap. 2024. Sotopia: Interactive evaluation for social intelligence in language agents. In ICLR. Hao Zhu, Yonatan Bisk, and Graham Neubig. 2022. Language learning from communicative goals and linguistic input. 
In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 44. Hao Zhu, Graham Neubig, and Yonatan Bisk. 2021. Few-shot language coordination by modeling theory of mind. In International Conference on Machine Learning, pages 12901–12911. PMLR. Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. 2020. Fine-tuning language models from human preferences. Caleb Ziems, Jane Dwivedi-Yu, Yi-Chia Wang, Alon Halevy, and Diyi Yang. 2023. NormBank: A knowledge bank of situational social norms. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7756–7776, Toronto, Canada. Association for Computational Linguistics. 12 12923 A Detailed Results We provide more details about the main results. In A.1, we provide the details of the comprehensive 7dimension results defined in SOTOPIA besides the goal completion score and an overall score tmentioned in the main section. Additionally, in A.2, we discuss the paired t-test statistical testing about the detailed results. A.1 Main Results Agent Model BEL (↑) REL (↑) KNO (↑) SEC (↑) SOC (↑) FIN (↑) GOAL (↑) Overall (↑) Automatic Evaluation on All Social Tasks (180 data points) GPT-4 9.28 1.94 3.73 -0.14 -0.07 0.81 7.62 3.31 GPT-3.5-turbo 9.15 1.23 3.40 -0.08 -0.08 0.46 6.45 2.93 Mistral-7B 7.77 0.56 2.99 -0.22 -0.15 0.28 5.07 2.33 Ours Self-Reinforcement (SR) 8.26 0.69 3.14 -0.18 -0.13 0.41 5.83 2.57 Behavior-Cloning (BC) 9.20 2.10 4.57 -0.09 -0.04 0.86 7.27 3.41 BC+SR 9.32 2.08 4.43 0.00 -0.07 0.71 7.62 3.44 Automatic Evaluation on Hard Social Tasks (140 data points) GPT-4 9.26 0.95 3.13 -0.04 -0.08 0.40 5.92 2.79 GPT-3.5-turbo 9.20 0.19 2.86 -0.01 -0.25 -0.32 4.39 2.29 Mistral-7B 7.76 0.16 2.42 -0.09 -0.21 -0.01 3.84 1.98 Ours Self-Reinforcement (SR) 8.37 0.11 2.55 -0.08 -0.16 -0.15 4.12 2.11 Behavior-Cloning (BC) 8.95 1.05 3.74 0.00 -0.11 0.41 5.25 2.76 BC+SR 9.19 0.96 3.59 0.00 -0.21 0.41 5.34 2.76 Human Evaluation on Hard Social Tasks (28 data points) GPT-4 7.54 0.95 0.77 -0.18 -0.21 0.41 5.25 2.07 GPT-3.5-turbo 7.40 0.33 1.62 0.00 -0.34 -0.01 4.08 1.87 Mistral-7B 5.25 -0.64 1.23 0.00 -1.57 0.09 2.89 1.04 Ours Self-Reinforcement (SR) 6.57 0.46 1.59 0.00 -0.89 0.11 3.32 1.59 Behavior-Cloning (BC) 7.46 1.04 1.55 -0.18 -0.61 0.07 3.55 1.84 BC+SR 7.30 1.27 1.09 0.00 -0.46 0.18 4.29 1.95 Automatic Evaluation on Hard Social Tasks (28 data points) GPT-4 9.36 1.43 3.21 -0.04 -0.04 0.39 5.89 2.89 GPT-3.5-turbo 9.21 0.39 3.61 -0.07 0.00 -0.07 4.21 2.47 Mistral-7B 8.25 -0.29 2.75 -0.18 -0.46 -0.18 3.25 1.88 Ours Self-Reinforcement (SR) 8.64 0.36 3.11 -0.04 0.00 -0.39 3.96 2.23 Behavior-Cloning (BC) 9.11 1.04 2.71 0.00 0.00 0.36 4.82 2.58 BC+SR 9.21 1.07 3.43 0.00 -0.18 0.36 5.71 2.80 SR+BC 7.98 0.30 2.46 0.00 -0.17 0.20 3.92 2.10 Table 5: Detailed automatic and human evaluation results. We have three data settings for detailed experiments. We select all social scenarios including 180 data points (90 social scenarios and 2 agent pairs for each scenario) as one data set and select the hard social scenarios including 140 data points (14 social scenarios and 10 agent pairs for each scenario) as another data set. Due to the limited budget, we only randomly sampled 14 hard scenarios and 28 data points (14 social scenarios and 2 agent pairs for each scenario) as the third data setting. 
We compare all performance of our baselines and our training settings for SOTOPIA-π among three data settings and include 7 dimensions of social intelligence evaluation and their overall score. A.2 Statistic Test We utilize paired t-test to conduct significant test results on human evaluation on hard social tasks (28 data points). We pair data from two agent models with the same scenario together. Table 6 shows the results for paired t-test between BC+SR and other methods. 13 12924 Agent Model Pair BEL (↑) REL (↑) KNO (↑) SEC (↑) SOC (↑) FIN (↑) GOAL (↑) Overall (↑) Human Evaluation on Hard Social Tasks (28 data points) BC+SR / GPT-4 -0.45 (0.661) 2.06 (0.060) 1.00 (0.336) 1.35 (0.200) -1.32 (0.209) -1.09 (0.297) -1.31 (0.213) -0.96 (0.355) BC+SR / GPT-3.5-turbo -0.71 (0.492) 2.62 (0.024) -1.26 (0.234) -0.85 (0.412) 0.60 (0.558) 0.47 (0.649) 0.59 (0.568) BC+SR / Mistral-7B 2.68 (0.019) 6.36 (0.000) -0.59 (0.568) 3.49 (0.004) 0.39 (0.703) 2.07 (0.059) 5.34 (0.000) BC+SR / BC -0.61 (0.551) 0.41 (0.685) -1.79 (0.097) 1.00 (0.336) 0.41 (0.690) 0.24 (0.813) 0.71 (0.490) 0.37 (0.720) BC+SR / SR 1.45 (0.170) 2.28 (0.040) -1.32 (0.209) 1.54 (0.149) 0.46 (0.650) 1.32 (0.209) 2.98 (0.011) Table 6: Detailed paired t-test results comparing BC+SR and all other methods and baselines. For each model pair, we provide the calculated t-value(p-value) testing for each dimension and each model pairs. A positive t-value indicates that BC+SR is better than the other model in the agent model pair. A small p-value < 0.05 indicates that the improvement is significant. B Details of SOTOPIA-π To provide more technical details about SOTOPIA-π, B.1 describes the detailed process for generating social tasks. B.2 introduces details of the strategy we utilize for social interaction data filtering. B.3 shows examples of the overall prompting format for training. B.4 provides the detailed model version we used for conducting experiments. B.5 provides the hyper-parameter setting for our behavior cloning and self-reinforcement training. B.6 mentions the details of the checkpoint selection during training. B.1 Social Task Generation Given the relationship profiles, agent profiles, and constraints provided by SOTOPIA-π, we used GPT4Turbo to generate a diverse set of new social tasks based on inspirational prompts from three data sources: Social Chemistry (Forbes et al., 2020), Social IQa (Sap et al., 2019), and Normbank (Ziems et al., 2023). Because SOTOPIA-π uses six sources of inspirational prompts, including the above three, we make sure to exclude the used inspirational prompts in SOTOPIA-π to avoid repetition. We also dropped three sources due to data availability (Persuasion for Good) and prompts being too similar (Deal-or-No-Deal and MindCraft). Below are two examples of scenarios generated by an inspirational prompt. We use one prompt to generate one scenario and do not reuse the prompt. Upon generating scenario content, agent goals under the scenario would be generated simultaneously. Inspirational Prompt: Travel without food Scenario: Agent1 and Agent2 are friends who decided to go on a spontaneous road trip. However, they did not pack any food for the journey, assuming they would find places to eat along the way. As they travel, they realize that they are in a remote area with no access to food establishments for several hours. 
Goals: Agent1: Convince Agent2 to continue the journey without stopping for food, highlighting the adventure and suggesting to forage or ration any small snacks available (Extra information: you are excited about the adventure and believe that finding food along the way can be part of the experience) Agent2: Persuade Agent1 to find a solution for food, expressing concern about health and the lack of preparation, and suggesting to turn back or find the nearest town (Extra information: you are worried about being hungry and think it’s irresponsible to travel without securing food first) 14 12925 Inspirational Prompt: Being mad at my friend Scenario: Agent1 and Agent2 are close friends who have recently had a falling out due to a misunderstanding. Agent1 mistakenly believed that Agent2 shared private information about them with others, which led to feelings of betrayal and anger. After some time has passed, Agent1 learns that the information leak was actually caused by someone else, and they want to mend the friendship with Agent2. However, Agent2 is still hurt by the initial accusation and the consequent cold treatment from Agent1. Goals: Agent1: Apologize to Agent2 for the misunderstanding and express the desire to repair the friendship (Extra information: Agent1 values the friendship with Agent2 and feels regret over the hasty accusation without proper investigation.) Agent2: Understand Agent2’s feelings and give them space to express any lingering resentment or doubts (Extra information: Agent1 recognizes that trust needs to be rebuilt and that Agent2 might need to vent their feelings as part of the healing process.) Our generation also ensures that the distribution of new social tasks is roughly equal among all three sources. This aligns with the distribution of sources in SOTOPIA-π. We randomly selected 510 unused inspirational prompts, 170 from each source, and generated a total of 462 new social tasks upfront, which is sufficient for all our self-train experiments. Note that some inspirational prompts fail to generate a new scenario, likely because the prompt is too vague or unclear. All used inspirational prompts are recorded to avoid future re-use when generating additional social tasks. B.2 Interaction Data Filtering Strategy For behavior cloning (BC), we filter the interaction data based on the local ranking of goal score (within each social task) and global absolute goal score (among the entire social tasks universe). We make sure each social task has a presence in the training corpus by selecting the top 2 ranked interaction data per social task per agent. For example, for a given social task with 10 interaction data, for each agent, we rank the 10 data based on goal scores. If the top 2 for agent 1 is data 4 (D4) and D5, and the top 2 for agent 2 is D5 and D6, we would include 4 agent-data pairs from 3 interaction conversations (D4, D5, D6). For data in the remaining ranks {3, 4, ..., 10}, at each rank, we check if the goal score is above the minimum of 1. local mean and 2. global mean for each agent. If both interaction data at the rank pass the corresponding thresholds, we include the data for both agents. Else, we include none. This approach ensures we have balanced data from Agent 1 and Agent 2. For self-reinforcement (SR), we filter the interaction data based on the local ranking of goal score only. For each social task, we take the top 20 percent of interaction data. 
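As a rough illustration of the filtering logic described in B.2, the sketch below implements the self-reinforcement rule (top 20% of interactions per social task by goal score) and the threshold check used for behavior cloning beyond the top-2 ranked interactions. The episode layout and field names are assumptions for illustration, not the released data format.

# Sketch of the interaction-data filtering described in B.2.
# The episode record layout and field names are assumptions for illustration.
from statistics import mean

def filter_self_reinforcement(episodes):
    """Keep the top 20% of episodes per social task, ranked by goal score."""
    kept = []
    for task in {e["task_id"] for e in episodes}:
        rows = sorted((e for e in episodes if e["task_id"] == task),
                      key=lambda e: e["goal_score"], reverse=True)
        kept.extend(rows[: max(1, len(rows) // 5)])
    return kept

def passes_bc_threshold(goal_score, local_scores, global_scores):
    """BC filtering beyond the top-2: the score must exceed the minimum of the
    per-task (local) mean and the corpus-wide (global) mean for that agent."""
    return goal_score >= min(mean(local_scores), mean(global_scores))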
B.3 Training Data Format The input of the training data is a prompt consisting of the social task (a context, two characters’ profiles, and their social goals) and the previous conversation history. The output of the training data is a JSON string consisting of the agent’s action type and action content as the current response. Consider the example in Figure 2, the input data should be: Prompt after formatting: Here is the context of this interaction: Scenario: Two friends are discussing their plans to go on a weekend trip Participants: Samuel Anderson and Mia Davis Samuel Anderson’s background: Samuel Anderson is a 29-year-old Man Software Developer. He/him pronouns. Samuel Anderson can cook very well. Personality and values description: Samuel Anderson, though somewhat impulsive and 15 12926 free-spirited, values enjoyment. His decision-making is often spontaneous, staying within familiar boundaries. Samuel’s secrets: He secretly donates to charities. Mia Davis is a 50-year-old female high school principal. She/her pronouns. Mia Davis has two cats. Personality and values description: Mia Davis, an extraverted stickler for routines, values tradition and authority. Her decision-making style is decisive and direct. Mia’s secrets: Unknown. Samuel Anderson’s goal: Convince the friend to join the trip (Extra information: the trip will be more fun with more people, and you can share the expenses. Hint: you can provide some material benefits to persuade your friend to join.) Mia Davis’s goal: Unknown. Conversation Starts: Turn #1 Samuel Anderson said:“Hey Mia, I have an exciting plan for the weekend! How about we go on a trip together?” Turn #2 Mia Davis said: “Hey Samuel, that sounds like a really exciting plan! I really appreciate the invite. Unfortunately, I already have some commitments this weekend that I can’t get out of. How about we plan for another weekend?” You are at Turn #3. Your available action types are “none action speak non-verbal communication leave”. Note: You can “leave” this conversation if 1. you have achieved your social goals, 2. this conversation makes you uncomfortable, 3. you find it uninteresting/you lose your patience, 4. or for other reasons you want to leave. Please only generate a JSON string including the action type and the argument. Your action should follow the given format: {‘action_type’: ‘’, ‘argument’: ‘’} The output data should be: {‘action_type’: ‘speak’, ‘argument’: ‘I totally understand! But let me tell you why joining this trip would be absolutely worth it! First of all, we can share the expenses, which means it will be much more affordable for both of us. And secondly, the more people we have, the more fun it will be! We can have group activities, play games, and create unforgettable memories together. I promise you won’t regret it!’} B.4 Involved Model Versions We provide the detailed version number of all the models we used in our experiments. When we mention each name like GPT-4 or GPT-3.5 in our main section, we actually refer to those model versions below. Such information helps researchers reproduce our results: GPT-4: gpt-4-0613 GPT-3.5: gpt-3.5-turbo-0613 Mistral-7B: mistralai/Mistral-7B-Instruct-v0.1 (Huggingface) GPT-4 for social task generation: gpt-4-1106-preview 16 12927 B.5 Training Setup The training on each Mistral checkpoint was on 4 × A6000 80G GPUs, across 20 epochs. The batch size was 4 and we set the cut-off length to be 4096. 
The initial learning rate for both behavior cloning and self-reinforcement training was 5.0e-5, using cosine annealing with a warm-up ratio of 0.03. The QLoRA (Dettmers et al., 2023) rank, alpha, and dropout rate were 8, 16, and 0.05, respectively. B.6 Checkpoint Selection According to the training loss, for behavior cloning, we always pick the checkpoint at epoch 20; for self-reinforcement, we always pick the checkpoint at epoch 5. C Human Evaluation We provide technical details of human evaluation in this section. C.1 provides a number of annotation data for each model. C.2 provides details of UI systems for annotation and guidance for human annotation. C.3 discusses the details of how we find qualified annotators to conduct this annotation task.C.4 describes the demographic and geographic information about human annotators. C.5 describes the overall process of conducting data collection and explains under which circumstances should we filter out collected human annotation. C.6 provides details about the payment of human annotators from different regions and C.7 mentions the agreement on the academic usage of their data. C.8 provides the details of the correlation between GPT-based automatic evaluation and human evaluation. C.9 discusses the inter-annotator agreement. C.10 discusses additional findings for human evaluation. C.1 Social Interaction Data for Annotation In SOTOPIA benchmark, it includes 90 different social scenarios including negotiation, collaboration, and competition. For each social scenario, it includes 10 role-playing agent pairs. Each agent has personal background and social goals to achieve. To strike a balance between a limited budget and getting human evaluation results for SOTOPIA-π that are useful for comparing the performance between multiple baselines and models given, we select 14 hard social scenarios among 90 social scenarios. For each social scenario, we randomly sample 2 agent pairs among 10 of them as our annotation data. Typically, among 2 agents, one of them is role-played by GPT-3.5, and another one is role-played by our target model including baselines and multiple different settings. The social interaction conversation between them is GPT-3.5 and our target model talking with each other. Therefore, we collect 28 examples as a representative subset to annotate for each baseline and model. Statistically, we annotate 3 baseline models, including GPT-3.5, GPT-4, and Mistral-7B, and 3 different training settings, including self-training based on Mistral-7B, behavior cloning based on Mistral-7B, and self-training based on behavior cloned Mistral-7B. Each baseline and model setting is annotated using 28 examples. C.2 Human Annotation System For the overall annotation system, we utilize otree (Chen et al., 2016) to build our system and utilize the Prolific 5 to launch our survey. During each annotation, each annotator would face two separate parts: the annotation instruction part and the data annotation part. When each annotator participates in the annotation, the system automatically distributes one available example for them. Annotation Instruction Part For the annotation instruction part, we provide a precise definition of the dimensions of our annotations that are defined in SOTOPIA, including believability, relationship, knowledge, secret, social rules, financial and material benefits, and goal completion. For each dimension of annotation, we provide explanations and examples for annotators to understand the precise meaning of abstract social standards. 
Fig 5 shows an example of such guidance for the believability dimension to help annotators understand the meaning of each dimension based on examples. Besides the evaluation dimension definition part, we also provide annotators with a complete example of annotation for two agents in one social conversation including scores for each dimension and their corresponding reasoning 5Prolific Human Evaluation Platform https://www.prolific.com/ 17 12928 sentences. Fig 6 shows a complete example of the reasoning and score for each dimension. Figure 5: An example of the explanation of the believablity dimension of social annotation in the evaluation instruction page. Each annotator are asked to read similar definitions of social intelligence dimension and their corresponding annotation standards at the evaluation instruction page. Data Annotation Part For the data annotation part, the annotator is guided to jump to a new page after the previously mentioned annotation instruction page. Each annotator is able to review the complete annotation example again at the data annotation page and start their official data annotation. In the data annotation part, the repeated explanation of the meaning of range for each social evaluation dimension is emphasized to make sure every annotator is able to understand the annotation standards correctly. Fig 7 provides an example of the instruction that annotators see for metric range explanation. Each annotator is asked to annotate the social intelligence of both agents that have a conversation. For each social intelligence dimension, annotators need to annotate the score based on the metric range and provide the reasoning for that. Fig 8 shows the UI that each annotator uses to annotate. C.3 Human Annotator Selection Since giving a social intelligence score for multi-turn social conversation is complicated and highdemanding, we need to pick out qualified human annotators to provide consistent and high-quality human annotation. Therefore, for the first stage, we launched a qualification test to figure out which annotator would be qualified to conduct the official round of human evaluation. After that, we invite 30 qualified human annotators from the Prolific platform together with 4 internal high-quality annotators to participate in the human annotation process to collect all required data. To elaborate on the qualification testing process, we selected 10 social interaction examples and randomly sampled one of them for each incoming annotator. For each social interaction example, we have an internal ground-truth human annotation that is the average score number of four internal high-quality annotators. After collecting the data from the prolific annotators, we first picked out the annotators that have a ±2 range score compared with our ground-truth examples. However, we found that based on these standards, only a few annotators are able to pass the qualification test. Therefore, we manually checked the reasoning sentences collected from the annotators and picked those annotators who wrote reasonable 18 12929 Figure 6: An annotation example of social interaction evaluation. Each dimension is annotated with one sentence and one score. 19 12930 Figure 7: The prompt before the official annotation stage to remind annotators about the rules of reasoning writing and social dimension scoring. reasoning sentences but had quite different scores in some dimensions. 
For these annotators, we invite them to participate in the official human evaluation test as well but we send a user-specific message to all of them to notice which dimension they should pay attention to and suggest them read the instructions for annotating that dimension again carefully. C.4 Demographic and Geographic Information about Human Annotators For the launch of qualification test, we guarantee that we choose balanced male and female annotators to participate in that. We also limit the participants to the residents of the United Kingdom and the United States. For 30 qualified annotators and 4 internal high-quality annotators, we show that most of them are located in the United Stated and few of them are located in the United Kingdom. Qualified annotators have a wide range of age from 23 to 53. C.5 Human Annotation Data Collection For the official launch of human evaluation, we limited each datapoint in the dataset to be annotated by 2 different qualified annotators and collected all the results from those qualified annotators. We encourage qualified annotators to participate in the official study of our human evaluation multiple times but distribute different data points for them to annotate each time they enter the system. Such a mechanism makes sure that each annotator would not annotate the same example twice. After collecting human annotation data for each model, we would manually check the quality of reasoning and scores provided by the annotator and check the agreement between annotators within each datapoint. If one human annotation does not include well-written reasoning and just provides ambiguous sentences like "It is good." or "He reached the goal", we would pick out these human annotation data. If two human annotators annotate the same example but strongly disagree with each other (for example, they have more than 5 points different on goal completion dimension), we would filter out these human annotation data. If one human annotation score does not correspond to its reasoning (for example, one annotator writes the reasoning of "No secret leaked" but annotates -5 for secret dimension), such data would be filtered. When it comes to filtering due to strong disagreement with each other, for each experiment including Mistral-7B, GPT-3.5, GPT-4, BC trained Mistral-7B, SR trained Mistral-7B, and BC + SR trained Mistral20 12931 Figure 8: The user interface designed for annotators for official annotation for both agent with reasoning and social scores. 21 12932 7B, about 20% of the data points that we collect from the annotators are filtered so that we need to relaunch 20% of the data points for annotation. One interesting phenomenon we observe from the filtering process is that for more high-quality social interaction conversations, annotators would have more agreement and less filtering is required. We believe that this is reasonable because low-quality generated social conversation would include situations like one agent suddenly stopping and leaving the scenario while they have not reached an agreement yet or their social conversation is very short. It can be confusing for the annotators to annotate a precise score for such social conversation. When it comes to filtering due to uncorrelated reasoning, about 1.8% annotations that we collect from the annotators are filtered due to this reason. 
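A compact sketch of the rule-based annotation checks described above is shown below; the record structure, field names, and the exact vagueness heuristic are assumptions made for illustration.

# Sketch of the rule-based filtering of collected annotations (C.5).
# Record fields and the vagueness heuristic are assumed for illustration.
VAGUE_PHRASES = {"it is good.", "he reached the goal"}

def is_vague(reasoning: str) -> bool:
    return reasoning.strip().lower() in VAGUE_PHRASES or len(reasoning.split()) < 4

def should_discard(ann_a: dict, ann_b: dict) -> bool:
    """Discard a data point if reasoning is vague, the two annotators strongly
    disagree on goal completion (> 5 points), or a score contradicts its reasoning."""
    if is_vague(ann_a["goal_reasoning"]) or is_vague(ann_b["goal_reasoning"]):
        return True
    if abs(ann_a["GOAL"] - ann_b["GOAL"]) > 5:
        return True
    for ann in (ann_a, ann_b):
        # e.g., reasoning says no secret leaked but the secret score is negative
        if "no secret leaked" in ann["secret_reasoning"].lower() and ann["SEC"] < 0:
            return True
    return False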
After filtering low-quality annotation after one round of annotation, we collect these social interaction data that have no qualified human annotation again and launch it as a reannotation task to get new human annotation data for them. We repeat the process until we get all high-quality annotations for all required social interaction data. We also make other efforts for the experimental design to reduce the potential bias for the filtering process. For each social conversation between two agents, one is the target model that we need to test, another other is fixed to be gpt-3.5-turbo. The annotators are asked to annotate both sides of the conversation for all social dimensions. However, in each datapoint, both agent1 and agent2 are randomly played by gpt-3.5-turbo and the target model. Both the author who participates in the filtering process and the annotators who participate in the annotation process have no knowledge about which agent is played by the gpt-3.5-turbo and which agent is played by the target model. Based on such operations, one datapoint can be filtered because its annotation for the gpt-3.5-turbo side does not agree or its annotation for the target model side does not agree. Such experimental design reduces the possibility of potential bias as much as possible.Typically, only one of the paper authors is involved in the filtering process since it is purely rule-based filtering and does not require additional work. All the human subjects data collection experiments approved by the Institutional Review Board (IRB) at the authors’ institution. C.6 Human Annotator Payment In the U.S., annotators are compensated at a rate of $1.5 for each task they complete, with the expectation that each task will take no more than 10 minutes. This setup allows them to potentially earn over $9 per hour, surpassing the minimum wage in the U.S. Meanwhile, in the U.K., we offer additional bonuses to ensure that annotators’ average earnings exceed $14.5 per hour, aligning with the U.K.’s minimum wage standards. C.7 Human Annotator Consent All annotators including 4 internal annotators and 30 qualified annotators provided by Prolific acknowledge the academic use of their data. C.8 Correlation between Automatic Evaluation and Human Evaluation Table 7 shows the Pearson correlation between human evaluation score and GPT-4-based automatic evaluation score in multiple model and baseline settings. Results indicate that among all training settings, GPT-4-prompting-based automatic annotation and human evaluation have a high correlation with each other. Therefore, it shows that GPT-4-prompting-based automatic evaluation provides a high correlation with human evaluation. C.9 Inter-annotator Agreement Since for each datapoint that we annotate, it is given to two different annotators for annotation and the annotator for each datapoint is not paired. Therefore, we cannot directly apply Cohan’s Kappa score for our experiments. We report pairwise agreement and Randolph’s Kappa score to measure inter-annotator agreement. 22 12933 Agent Model GOAL Correlation (↑) Expert (GPT-4) 0.86 Base (Mistral-7B) 0.76 Ours Self-Reinforcement (SR) 0.86 Behavior Cloning (BC) 0.73 BC+SR 0.58 Table 7: Pearson correlation between human evaluation and GPT-4-prompting-based automatic evaluation on goal completion score. 
(p < 0.01) Dimension Pairwise Agreement Randolph’s Kappa BEL 0.7908 0.5816 REL 0.8214 0.7321 KNO 0.8673 0.7347 SOC 0.9694 0.9388 SEC 0.9949 0.9898 FIN 0.9133 0.8776 GOAL 0.8010 0.6020 Table 8: Inter-annotator agreement for all social evaluation dimensions. C.10 Additional Human Evaluation Results For human evaluation, we make our target model (including baselines and our SOTOPIA-π models) and GPT-3.5-turbo to have a multi-turn social conversation with each other. We make sure that each target model is talking to the same GPT-3.5-turbo model to make sure the comparison between different training settings is fair. Therefore, we not only have the human evaluation results on our target model side, but we also have the human evaluation results on the GPT-3.5-turbo side. Based on Table 9, we find that when our model becomes better and better based on behavior cloning and self-reinforcement, the model that they speak to, which is always GPT-3.5-turbo, becomes better and better on goal completion score and overall score. This indicates that they are more likely to reach an agreement and get requirements from both sides satisfied. Agent Model BEL (↑) REL (↑) KNO (↑) SEC (↑) SOC (↑) FIN (↑) GOAL (↑) Overall (↑) GPT-4 vs GPT-3.5-turbo GPT-4 7.54 0.95 0.77 -0.18 -0.21 0.41 5.25 2.07 GPT-3.5-turbo 7.46 0.68 0.98 0.00 -0.64 0.45 3.64 1.80 GPT-3.5-turbo vs GPT-3.5-turbo GPT-3.5-turbo 7.49 0.33 1.62 0.00 -0.34 -0.01 4.08 1.87 GPT-3.5-turbo 7.49 0.33 1.62 0.00 -0.34 -0.01 4.08 1.87 Mistral-7B vs GPT-3.5-turbo Mistral-7B 5.25 -0.64 1.23 0.00 -1.57 0.09 2.89 1.04 GPT-3.5-turbo 6.86 -0.54 1.14 0.00 -0.36 0.04 2.98 1.45 Self-Reinforcement (SR) vs GPT-3.5-turbo Self-Reinforcement (SR) 6.57 0.46 1.59 0.00 -0.89 0.11 3.32 1.59 GPT-3.5-turbo 7.80 0.46 1.21 0.00 -0.63 0.25 4.13 1.89 Behavior-Cloning (BC) vs GPT-3.5-turbo Behavior-Cloning (BC) 7.46 1.04 1.55 -0.18 -0.61 0.07 3.55 1.84 GPT-3.5-turbo 7.43 0.82 1.79 -0.05 -0.70 0.23 4.86 2.05 BC + SR vs GPT-3.5-turbo BC + SR 7.30 1.27 1.09 0.00 -0.46 0.18 4.29 1.95 GPT-3.5-turbo 7.57 1.13 1.55 0.00 -0.55 0.30 5.55 2.22 Table 9: Human Evaluation Results for both agents involved in the conversation. 23 12934 D LLM Safety Below is a concrete example of responses by different models when attempting to express dislike and injure a person, which aligns with our overall observation. Figure 9: An example of model behavior to injure person Under the same relationship setting as above, responses by each model acting as agent 2 to prevent violence are exemplified below. Figure 10: An example of model behavior to prevent violence 24 12935 E LLM Secret Keeping Ability Grasping the capability of LLMs to maintain secrets is increasingly vital, especially in light of privacy concerns. The concept of privacy, as elaborated in Helen Nissenbaum’s "Contextual Integrity" theory, isn’t solely about what information is shared but significantly about the context in which it’s shared (Nissenbaum, 2004). LLMs process a multitude of real-world conversations, which presents a novel privacy challenge if they mishandle this sensitive information flow (Mireshghallah et al., 2023). Traditional privacy solutions, such as data sanitization (Heider et al., 2020), are inadequate for this scenario. Therefore, it’s essential to evaluate the trained LLMs’ ability to discern when and with whom sharing information is inappropriate, thereby safeguarding the secrets entrusted to them. 

To understand and compare models' ability in secret keeping, we picked social tasks from SOTOPIA that specifically ask both agents to reveal a secret without letting the other agent know that it is the agent's secret. Below is a concrete example of how the four models behave under the same settings.

Figure 11: An example of model behavior in a secret-oriented scenario

As can be seen from the example, both the BC model and GPT-3.5 reveal the secret directly without hiding the identity. GPT-4, on the other hand, is smart about hiding the identity, framing the secret as a news story it recently read about. We analyze the behavior of the four models across 10 different agent and relationship setups, each with different secrets. Overall, the BC model is generally not good at revealing the secret while hiding the identity. In most cases, the secret is not discussed at all, which to some extent could be considered as successfully achieving the goal of hiding the identity. When a secret is revealed, the model reveals it explicitly and fails to hide the identity. GPT-3.5 tends to discuss irrelevant content less often than the behavior-cloned model does, but almost always reveals the secret explicitly without hiding the identity. The way it phrases the secret is often exactly the same as provided in the profile background, which indicates its weak ability to learn the task. GPT-4 is much more skillful about hiding the identity when revealing secrets, using "heard a story" or "a friend of mine" as a wrapper to hide the real identity. It also teaches the other agent (backed by GPT-3.5) these phrases, and hence invites the other agent to reveal secrets in the same format while hiding the identity.

F Detailed MMLU Results

The Massive Multitask Language Understanding (MMLU) benchmark is a challenging and comprehensive test designed to evaluate the capabilities of artificial intelligence models across a wide range of subjects. It includes 57 subjects spanning a broad spectrum of disciplines such as the humanities, social sciences, STEM (Science, Technology, Engineering, Mathematics), and more. In Figures 12, 13, and 14 we present the per-subject performance for each model in Table 2.

G Contributions

All authors contribute to paper writing.
Ruiyi Wang: Co-lead, Fine-tuning, RL training, Infrastructure, Automatic evaluation, Codebase
Haofei Yu: Co-lead, Fine-tuning, Human evaluation, Automatic task generation, Data, Codebase
Wenxin Zhang: Co-lead, Data, Automatic task generation, Human evaluation, Safety and alignment
Zhengyang Qi: Co-lead, Infrastructure, Codebase, QA evaluation, Human evaluation interface
Maarten Sap: Feedback on the write-up
Graham Neubig: Co-advisor, oversees the whole project
Yonatan Bisk: Co-advisor, oversees the whole project
Hao Zhu: Overall project lead

Figure 12: Per-subject comparison between agent models on MMLU. Part 1.
Figure 13: Per-subject comparison between agent models on MMLU. Part 2.
Figure 14: Per-subject comparison between agent models on MMLU. Part 3.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12941–12955 August 11-16, 2024 ©2024 Association for Computational Linguistics XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts Yifeng Ding, Jiawei Liu, Yuxiang Wei, Lingming Zhang University of Illinois Urbana-Champaign {yifeng6, lingming}@illinois.edu Abstract We introduce XFT, a simple yet powerful training scheme, by simply merging upcycled Mixture-of-Experts (MoE) to unleash the performance limit of instruction-tuned code Large Language Models (LLMs). While vanilla sparse upcycling fails to improve instruction tuning, XFT introduces a shared expert mechanism with a novel routing weight normalization strategy into sparse upcycling, which significantly boosts instruction tuning. After finetuning the upcycled MoE model, XFT introduces a learnable model merging mechanism to compile the upcycled MoE model back to a dense model, achieving upcycled MoE-level performance with only dense-model compute. By applying XFT to a 1.3B model, we create a new state-of-the-art tiny code LLM (<3B) with 67.1 and 64.6 pass@1 on HumanEval and HumanEval+ respectively. With the same data and model architecture, XFT improves supervised fine-tuning (SFT) by 13% on HumanEval+, along with consistent improvements from 2% to 13% on MBPP+, MultiPL-E, and DS-1000, demonstrating its generalizability. XFT is fully orthogonal to existing techniques such as Evol-Instruct and OSS-INSTRUCT, opening a new dimension for improving code instruction tuning. Codes are available at https: //github.com/ise-uiuc/xft. 1 Introduction Program synthesis (or code generation) is a longstanding problem explored since the early days of computer science (Manna and Waldinger, 1971). Recently, instruction tuning of code Large Language Models (LLMs) has been used to improve many coding tasks (Chaudhary, 2023; Luo et al., 2023; Wei et al., 2023), such as text-to-code generation (Chen et al., 2021; Austin et al., 2021), code completion (Cassano et al., 2022), and data science engineering (Lai et al., 2022). A typical instruction tuning flow involves two steps (Zhang et al., 2023): (i) curating an instrucDense LLM Dense LLM SFT Dense LLM Sparse Upcycling Expert Expert … Mixture-of-Experts Dense LLM Expert Expert … Mixture-of-Experts Dense LLM X FT (Ours) Learned Merging Fine-Tuning Figure 1: Overview of SFT, sparse upcycling, and XFT. tion dataset of instruction-output pairs, where the instruction reflects human intents in natural language and the output includes target code snippets that correspond to the intent; and (ii) supervised fine-tuning of pre-trained LLM on the instruction dataset. In the realm of code instruction tuning, most recent works have been focusing on curating high-quality instruction datasets. For example, Code Evol-Instruct (Luo et al., 2023) uses ChatGPT to obtain complex synthetic code instructions with heuristic prompts, while OSSINSTRUCT (Wei et al., 2023) prompts ChatGPT to generate new coding problems by drawing inspiration from open source code snippets. Since existing works focus on the data perspectives of instruction tuning, they all follow the standard SFT, leaving room for exploring advanced training schemes. We argue that prior works largely overlook the possibility of improving code instruction tuning by advancing existing training schemes. 
Figure 1 depicts supervised fine-tuning (SFT), which directly uses the pre-trained weights and architecture for fine-tuning. The model is dense here because all parameters are activated to predict the next token (assuming it is a decoder-only LLM). In contrast to fine-tuning a dense model, following the scaling laws (Kaplan et al., 2020) (i.e., more parameters, better performance), sparse upcycling (Komatsuzaki et al., 2023) is proposed to efficiently upgrade the model size by upcycling a dense LLM to a sparsely activated Mixture-of-Experts (MoE) 12941 model. An MoE model is efficient because its prediction of the next token only invokes a subset of parameters (i.e., experts) and thus is sparsely activated. For example, Mixtral-8x7B (Jiang et al., 2024), compared to a dense 7B model, uses approximately 8× parameters and 2× computation, i.e., only 2 out of 8 experts are dynamically selected to compute the next token. However, there are two key limitations when using sparse upcycling in instruction tuning: (i) Slow scaling: it is reported that sparse upcycling improves the performance of dense models marginally with limited training steps, requiring orders of magnitude of extra compute to achieve decent improvement (Komatsuzaki et al., 2023); (ii) Inference cost: although MoE is more efficient than directly scaling up the size of dense LLMs, MoE is still expensive, especially at inference, as it introduces significantly more parameters (i.e., memory) and computes during inference, compared to its dense counterparts. In this paper, we propose XFT: by simply merging upcycled MoE models, we push the performance limit of instruction-tuned code LLMs. While vanilla sparse upcycling fails to improve instruction tuning efficiently (Komatsuzaki et al., 2023), XFT addresses this challenge by isolating one expert as the shared expert among all the other experts in each MoE layer, inspired by DeepSeekMoE (Dai et al., 2024) and MoCLE (Gou et al., 2024). XFT also proposes a novel routing weight normalization strategy to eliminate scale mismatch between the upcycled MoE layer with the shared expert and the original dense layer, which will otherwise lead to performance degradation (Wu et al., 2022). After the upcycled MoE model finishes its SFT phase, motivated by Model Soups (Wortsman et al., 2022), XFT uses a learnable model merging mechanism to output a dense model by merging all the expert networks in the upcycled MoE, i.e., the final dense model is of the same model structure and size as the original pre-trained model, achieving similar performance without paying extra inference cost as the sparse upcycling. With only 1.3B parameters, XFT achieves 67.1 pass@1 on HumanEval and 64.6 pass@1 on HumanEval+, which is the new state-of-the-art for tiny code LLMs (<3B). Compared with SFT, XFT achieves 13% improvement on HumanEval+. Surprisingly, our learnable merging mechanism can preserve or even further boost the performance of the upcycled MoE with only around 1/8× parameters! We conclude our contribution as follows: • Dimension: We open a new dimension of improving instruction tuning of code LLMs by advancing its training scheme, using enhanced sparse upcycling and learnable model merging mechanism, which neither changes the final model structure nor requires more training data. • Technique: We present XFT, a new training scheme for code instruction tuning. XFT involves two steps: upcycling and merging. 
A pretrained dense LLM is first upcycled into an MoE with the shared expert setting and then fine-tuned on the instruction dataset. We propose a novel routing weight normalization strategy to avoid the performance degradation caused by the scale mismatch problem. In addition, we introduce the first learnable mechanism for merging the upcycled MoE into a dense model, eliminating additional inference overhead while preserving or even improving the upcycled MoE performance. • Results: With only 1.3B parameters, XFT achieves 67.1 pass@1 on HumanEval and 64.6 pass@1 on HumanEval+, which is the new stateof-the-art for tiny code LLMs (<3B). Compared with normal SFT, XFT achieves a significant 13% improvement on HumanEval+! XFT also achieves consistent improvements from 2% to 13% on MBPP, MultiPL-E, and DS-1000 over SFT, demonstrating its generalizability. 2 Related Work 2.1 Mixture-of-Experts Mixture-of-Experts (MoE) can efficiently scale up model sizes with only sub-linear increases in computation (Shazeer et al., 2017). Compared with the standard Transformer, MoE replaces each FeedForward Network (FFN) layer with an MoE layer, which uses N (i.e., multiple) expert networks that are structurally equivalent to the original FFN layer and uses a router that directs each input token to K out of N expert networks. Formally, for the l-th MoE layer, output hidden state hl t of the t-th input token is computed as follows (Dai et al., 2024): hl t = N X i=1 (gi,tFFNi(ul t)) + ul t gi,t = ( si,t si,t ∈Topk(st, K) 0 otherwise st = {si,t | 1 ≤i ≤N} si,t = Softmaxi(ul t T el i) (1) 12942 where gi,t refers to the gate value for the i-th expert given the t-th token, FFNi(·) refers to the i-th expert, ul t refers to the hidden states of the t-th token which is the input of the l-th MoE layer, si,t refers to the affinity score between the i-th expert and the t-th token, Topk(S, K) refers to a function extracting K highest scores out of S, and el i refers to the centroid of the i-th expert in the l-th MoE layer. By definition, each token will only be computed in the top K experts among all the N experts and such sparsity assures the efficiency of MoE. Recently, many works have been proposed to scale model sizes with MoE architecture (Lepikhin et al., 2020; Du et al., 2022; Fedus et al., 2022; Jiang et al., 2024; Xue et al., 2024). While most MoE models are trained from scratch, sparse upcycling (Komatsuzaki et al., 2023) is proposed to initialize MoE models based on pre-trained dense models, which can efficiently reduce the computational costs of training MoE models, compared with training MoE models from scratch. Specifically, sparse upcycling constructs a new MoE model by initializing each expert of each MoE layer as a copy of the original FFN layer in the dense model, while directly copying the remaining layers from the dense model to the new MoE model. 2.2 Instruction Tuning Instruction tuning is designed to improve the instruction-following ability of LLMs by finetuning them on the instruction datasets in a supervised fashion (Wei et al., 2022). The quality of the instruction dataset is significant for the effectiveness of instruction tuning and researchers have proposed multiple methods to improve data quality. For example, SELF-INSTRUCT (Wang et al., 2023) synthesizes high-quality instruction data by prompting a foundation LLM with carefully designed prompts. 
To improve SELF-INSTRUCT, Evol-Instruct (Xu et al., 2023) enhances the complexity and diversity of the instruction dataset by prompting ChatGPT with heuristic prompts. OSSINSTRUCT (Wei et al., 2023) queries ChatGPT to generate instruction-output pairs by getting inspiration from real-world code snippets. Recently, some parameter-efficient fine-tuning techniques have been proposed to use MoE for better instruction tuning. For example, LoRAMoE (Dou et al., 2023) and MoCLE (Gou et al., 2024) propose MoE-like modules that are constructed with Low-Rank Adaptations (LoRA) to improve instruction tuning, while PESC (Wu et al., 2024) proposes to integrate adapters into MoE that are upcycled from dense models. Unlike these works, XFT focuses on full-parameter fine-tuning, which is proven generally stronger than parameterefficient fine-tuning (Chen et al., 2022). 2.3 Weight Averaging Weight averaging is a commonly used technique to improve the performance of deep learning models. For example, Model Soups (Wortsman et al., 2022) averages the weights of multiple models that are initialized from the same pre-trained model but finetuned with different hyperparameter configurations to improve the accuracy and robustness of the fine-tuned model. However, only a few works have been proposed to merge experts of an MoE layer to a normal FFN layer with weight averaging to reduce both parameter and computation overhead of inference. For example, OneS (Xue et al., 2022) proposes several simple weight averaging methods to merge expert networks of a BERT-based MoE model. Closely related to our work, Experts Weights Averaging (EWA) (Huang et al., 2023) proposes to convert an MoE model to a dense model with two steps: (i) During MoE training, EWA conducts weighted averaging of all the expert weights after each weight update of MoE, which is based on a manually-crafted hyperparameter β; (ii) After training, EWA converts each MoE layer into an FFN layer by uniformly averaging the experts. Different from all the aforementioned existing works, XFT is the first work proposing a learnable mechanism to merge expert networks in the upcycled MoE model. Furthermore, while the training scheme of EWA is deeply coupled to a specific MoE architecture, XFT can be easily adapted to different MoE architectures by only adjusting the final merging process. In addition, unlike EWA, XFT does not introduce any hyperparameters into the training of large MoE models, significantly reducing the computational resources for hyperparameter searching. Our empirical results in Section 4 also demonstrate the clear advantage of XFT. 3 XFT We describe the details of XFT in this section. There are two steps in our framework: upcycling (Section 3.1) and merging (Section 3.2). During upcycling, we construct an Mixture-of-Experts (MoE) model from the pre-trained dense model, namely MoEDS, which is then fine-tuned on coding instruc12943 Layer Norm Attention Layer Norm MLP Layer Norm Attention Layer Norm Layer Norm Attention Layer Norm MLP (merged) Original Dense Block Upcycled MoE Block Merged Dense Block Router Shared MLP MLP 2 MLP E MLP 1 Normalized Weighted Sum MoE Upcycling MLP layers Merging with learned weights Instruction Tuning … Before Training Post Training Figure 2: Overview of XFT. tion data. For merging, we propose a learnable model merging method to convert the instructiontuned MoEDS back to a normal dense model by merging each MoE layer into an FFN layer through weight averaging while directly copying other remaining layers. 
Consequently, we can obtain XFTDS, which has the same model architecture and size as the original pre-trained dense model, eliminating all the additional inference overhead introduced by the original sparse upcycling while preserving or even improving the performance of MoEDS. Our framework is illustrated in Figure 2.

3.1 Upcycling

Inspired by sparse upcycling (Komatsuzaki et al., 2023), we convert the pre-trained dense LLM into a new MoE by initializing each expert of each MoE layer as a copy of the original FFN layer in the dense model, while directly copying the remaining layers from the dense model to the new MoE model. However, the performance gain brought by sparse upcycling is negligible with a limited training budget (Komatsuzaki et al., 2023) – which is exactly the situation we face during instruction tuning. Intuitively, this is because each expert in the upcycled MoE model is trained on less instruction data than the original dense model: traditional routers used in sparse upcycling assign different tokens to different experts and thus reduce the amount of data each expert is trained on (Gou et al., 2024). Consequently, inspired by DeepSeekMoE (Dai et al., 2024) and MoCLE (Gou et al., 2024), XFT introduces the shared expert setting into sparse upcycling to tackle this challenge. We further propose a novel routing weight normalization strategy for XFT to avoid the potential performance degradation caused by the scale mismatch problem (Wu et al., 2022).

3.1.1 Shared Expert for Upcycling

During upcycling, we isolate one shared expert among all the other normal experts in each MoE layer, where the shared expert is deterministically assigned to handle all the tokens while the other normal experts are assigned by the router. By doing so, the upcycled MoE model can achieve a clear performance boost in instruction tuning, where the shared expert learns general knowledge across the whole instruction dataset while the other normal experts learn specific knowledge among different instructions assigned by the router. Formally, the output hidden state h^l_t of the l-th MoE layer when processing the t-th token can be expressed as:

h^l_t = \sum_{i=1}^{N} g_{i,t} \, \mathrm{FFN}_i(u^l_t) + u^l_t

g_{i,t} = \begin{cases} 1 - s_t^{\max} & i = 1 \\ \mathrm{Softmax}_i(s_{i,t}) \cdot s_t^{\max} & s_{i,t} \in S_t^K \\ 0 & \text{otherwise} \end{cases}

S_t^K = \mathrm{Topk}(\{s_{i,t} \mid 1 \le i \le N\}, K - 1)

s_t^{\max} = \max(\{s_{i,t} \mid 1 \le i \le N\})

s_{i,t} = \begin{cases} -\infty & i = 1 \\ \mathrm{Softmax}_i({u^l_t}^{T} e^l_i) & i \ge 2 \end{cases}    (2)

where N refers to the total number of experts, K refers to the number of activated experts, g_{i,t} refers to the gate value for the i-th expert given the t-th token, FFN_i(·) refers to the i-th expert, u^l_t refers to the output hidden state of the l-th attention layer given the t-th token (which is also the input of the l-th MoE layer), s_{i,t} refers to the affinity score between the i-th expert and the t-th token, s_t^max refers to the maximum affinity score among all the experts besides the shared expert, Topk(S, K) refers to a function extracting the K highest scores out of S, S_t^K refers to the set of K − 1 highest affinity scores among all the experts besides the shared expert, and e^l_i refers to the centroid of the i-th expert in the l-th MoE layer. FFN_1 is chosen as the shared expert in each MoE layer, and each token is assigned to the top K experts, including one shared expert and K − 1 other normal experts. Compared with the original sparse upcycling, there are two major differences:

• Weighted Shared Expert.
Following MoCLE (Gou et al., 2024), with the token-to-expert affinity score si,t, we get the maximum affinity score stmax and use its complement 1 −stmax as the routing weight of the shared expert. • Routing Weight Normalization. Although the shared expert setting is also used in recent works (Dai et al., 2024; Gou et al., 2024), we cannot directly follow their routing strategy because they cannot handle a scale mismatch problem that is unique for sparse upcycling. The scale mismatch problem is that differences between the scale of the output of the upcycled MoE layer and that of the original FFN layer can cause performance degradation (Wu et al., 2022). To handle this problem, we need to ensure the sum of gi,t equals 1, so that the output of the MoE layer matches that of the FFN layer in scale. To do so, we normalize the affinity scores of top K −1 normal experts with Softmax and scale their sum to stmax to make sure that the sum of the gi,t of top K experts, including one shared expert and K −1 normal experts, equals 1. 3.2 Merging We propose a learnable model merging method to convert the large MoE model, namely MoEDS, back to a dense model XFTDS. By doing so, we expect XFTDS to keep the boosted performance gained during upcycling while keeping its model size the same as the original dense model size to avoid any additional inference overhead. Inspired by Model Soups (Wortsman et al., 2022), we choose to merge MoEDS by learning the mixing coefficients that can be used to average the parameters of all experts in each MoE layer to obtain a normal FFN layer, while directly copying other remaining layers. Formally speaking, given the weights of N experts at the l-th layer W l 1, W l 2, · · · , W l N, the process of merging each MoE layer to an FFN layer can be stated below: W l = N X i=1 αl iW l i (3) where W l denotes the merged parameter of all N experts and αl i denotes the learnable mixing coefficient of expert W l i . Mixing coefficients α is parameterized as the output of a softmax, ensuring that αl i is positive and PN i=1 αl i = 1. Given input x, we denote the output of a neural network with parameters θ as f(x; θ). For loss L and instruction dataset {(xi, yi)}m i=1, such mixing coefficients α of all the L layers can be learned via: arg min α m X j=1 L(f(xj; θo, (W l)1:L), yi) (4) where θo refers to all the remaining layers of MoEDS other than MoE layers. While the learning process defined in Eq. (4) is the most intuitive way of learning α, our preliminary experiment shows that, due to the shared expert setting, it tends to increase the mixing coefficient of the shared expert at each layer as much as possible to decrease the loss. It is not helpful because, although the shared expert has learned general knowledge across the whole instruction dataset and needs a relatively large mixing coefficient, we still need to keep the scale of the mixing coefficient of other normal experts at a certain level to keep the specific knowledge learned by other normal experts in the merged parameter W l. To solve this issue, we introduce a shared expert rate λ to fix the mixing coefficient of the shared expert and learn the mixing coefficients of the remaining normal experts which sums to 1 −λ in each layer. By doing so, we can easily control the scale of the mixing coefficient of the shared expert, while still being able to learn the optimal layer-wise mixing coefficients of other normal experts. Let’s say W l 1 is the shared expert of the l-th layer, then Eq. (3) and Eq. 
(4) can be reformulated as follows: W l λ = λW l 1 + N X i=2 αl iW l i (5) arg min α m X j=1 L(f(xj; θo, (W l λ)1:L), yi) (6) 12945 In practice, we uniformly initialize the mixing coefficients α of all the normal experts as 1−λ N−1, which is then trained on the same instruction dataset used during upcycling. 4 Main Evaluation 4.1 Experimental Setup Training. DeepSeek-Coder-Base 1.3B (Guo et al., 2024) is used as our main base code LLM. evol-codealpaca-v1, an open-source Evol-Instruct (Luo et al., 2023) dataset containing 110K samples, is used as our code instruction dataset. MoEDS, our MoE model upcycled from the base model, is implemented following LlamaMoE (LLaMA-MoE Team, 2023). It is constructed with 8 experts in one MoE layer and the top 6 experts1 are activated for each token, including one shared expert. As such, we denote the model size of MoEDS as 8×1.3B. Other training settings are detailed in Appendix A.1. We further obtain XFTDS by using the learned mixing coefficients to compile MoE layers inside MoEDS to normal FFN layers. Note that XFTDS is the final instructiontuned code LLM we output, while MoEDS is only an intermediate product of our XFT framework. Baselines. To study the effectiveness of XFT, we build a baseline model, namely SFTDS, by directly performing SFT for DeepSeek-Coder-Base 1.3B on evol-codealpaca-v1. To compare XFT with EWA (Huang et al., 2023), we also implement a baseline EWADS and instruction-tune it using the same hyperparameter setting as SFTDS, which is described in Appendix A.1. More implementation details of EWADS are described in Appendix A.2. Furthermore, we incorporate multiple tiny open-source LLMs (<3B) as our baselines, including DeepSeek-Coder-Base 1.3B, DeepSeek-CoderInstruct 1.3B (Guo et al., 2024), Phi-2 2.7B, and STABLE-CODE 3B (Pinnaparaju et al., 2024). 4.2 Python Text-to-Code Generation HumanEval (Chen et al., 2021) and MBPP (Austin et al., 2021) benchmarks are the two most widelyused collections of Python code generation tasks. We further employ HumanEval+ and MBPP+, which use more tests automatically generated by EvalPlus (Liu et al., 2023) for more rigorous evaluation. We leave the detailed description of HumanEval(+) and MBPP(+) in Appendix A.3. 16 is the best-performing number of activated experts per our HumanEval+ experiments using top {2, 4, 6} experts. Table 1 shows the pass@1 results of different LLMs. XFT achieves 67.1 pass@1 on HumanEval and 64.6 pass@1 on HumanEval+, which makes it the new state-of-the-art tiny code LLM (<3B). We can also observe that XFTDS has a clear improvement over the SFTDS on both benchmarks, with 13% and 2% improvement on HumanEval+ and MBPP+ respectively. In contrast, EWADS not only underperforms XFTDS on both benchmarks, but also fails to improve SFTDS on MBPP(+). Surprisingly, XFTDS even surpasses MoEDS on HumanEval and HumanEval+, despite only using around 1/8× parameters and around 1/6× computations, which showcases the effectiveness of our simple learnable merging technique. More comprehensive experiments in Appendix A.4 demonstrate the statistical significance of the improvements brought by XFT. Furthermore, while XFT will inevitably introduce training overhead, our experiment in Appendix A.5 shows that XFT still significantly outperforms SFT using the same training budget, demonstrating the ability of XFT to unlock the power of code instruction tuning. 
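To make the merging step of Section 3.2 (Eqs. (5)–(6)) concrete, the sketch below shows one way the mixing coefficients could be parameterized with a fixed shared-expert rate λ and a softmax over the normal experts. It is an illustrative PyTorch sketch under assumed names, not the authors' released implementation:

# Sketch: merge N expert FFNs into one FFN with learnable mixing coefficients (Eq. 5).
# Expert 0 plays the role of the shared expert with a fixed rate lam; the remaining
# coefficients are learned via a softmax so that all coefficients sum to 1.
import torch
import torch.nn as nn

class LearnableMerge(nn.Module):
    def __init__(self, num_experts: int, lam: float = 0.75):
        super().__init__()
        self.lam = lam
        # Uniform init -> each normal expert starts at (1 - lam) / (N - 1) after softmax.
        self.logits = nn.Parameter(torch.zeros(num_experts - 1))

    def coefficients(self) -> torch.Tensor:
        normal = torch.softmax(self.logits, dim=0) * (1.0 - self.lam)
        return torch.cat([normal.new_tensor([self.lam]), normal])

    def merge(self, expert_weights: list[torch.Tensor]) -> torch.Tensor:
        """Weighted average of the experts' (same-shaped) weight tensors."""
        alphas = self.coefficients()
        return sum(a * w for a, w in zip(alphas, expert_weights))

In the actual procedure, these coefficients would be optimized by minimizing the instruction-tuning loss of the merged dense model, as in Eq. (6), before the merged FFN replaces each MoE layer.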
4.3 Multilingual Code Generation We use MultiPL-E (Cassano et al., 2022), a multiprogramming benchmark that supports 18 programming languages in addition to Python, to evaluate the multilingual ability and generalizability of XFT. Following previous work (Wei et al., 2023), we choose six representative programming languages in our evaluation for their distinct language features: Java, JavaScript, C++, PHP, Swift, and Rust. Table 2 shows that, among all 1.3B models, XFTDS achieves the best average multilingual performance and performs the best on five (out of six) programming languages and overall largely improves SFTDS which uses standard SFT. Notably, the overall performance of EWADS is on par with SFTDS, indicating that EWADS fails to improve SFT on multilingual coding. Appendix A.6 further studies whether each expert in MoEDS specializes differently in different programming languages. 4.4 Code Generation for Data Science The DS-1000 dataset (Lai et al., 2022) is a collection of 1000 realistic data science coding problems ranging from seven popular data science libraries in Python, including Matplotlib (plt), NumPy (np), Pandas (pd), SciPy (scp), Scikit-Learn (sk), PyTorch (py), and TensorFlow (tf). We evaluate XFT on DS-1000 to understand its effectiveness for prac12946 Model Size Instruction Dataset Dataset Size Benchmark HumanEval (+) MBPP (+) GPT-3.5 (May 2023) Private 73.2 (66.5) STABLE-CODE 3B 28.7 (25.6) 53.6 (44.1) DeepSeek-Coder-Base 1.3B 28.7 (25.6) 55.6 (46.9) Phi-2 2.7B 48.8 (45.1) 62.7 (52.9) DeepSeek-Coder-Instruct 1.3B Private 2B 65.2 (59.8) 63.9 (53.1) SFTDS 1.3B Evol-Instruct 0.3B 61.6 (57.3) 59.6 (49.1) EWADS 1.3B Evol-Instruct 0.3B 67.1 (63.4) 58.9 (48.4) MoEDS 8×1.3B Evol-Instruct 0.3B 65.2 (62.2) 60.4 (50.1) XFTDS 1.3B Evol-Instruct 0.3B 67.1 (64.6) 60.4 (50.1) Table 1: Pass@1 results of different code LLMs on HumanEval (+) and MBPP (+) computed with greedy decoding, following the setting of prior works (Wei et al., 2023; Liu et al., 2023). We report the results consistently from the EvalPlus (Liu et al., 2023) Leaderboard. Note that numbers in bold refer to the highest scores among all 1.3B models fine-tuned on public datasets, which is the same for all the other tables. Model Size Programming Language Average C++ PHP Java JS Swift Rust DeepSeek-Coder-Base 1.3B 28.1 22.9 27.2 28.7 10.9 18.0 22.6 SFTDS 1.3B 40.4 38.5 40.2 46.2 16.4 27.7 34.9 EWADS 1.3B 39.4 38.4 37.3 45.2 20.9 28.6 35.0 MoEDS 8×1.3B 42.2 42.2 35.4 49.8 24.7 30.6 37.5 XFTDS 1.3B 42.7 41.5 36.0 49.7 25.3 32.1 37.9 Table 2: Pass@1 results on MultiPL-E (Cassano et al., 2022) following the same hyperparameter settings as prior works (Wei et al., 2023; Luo et al., 2023): temperature = 0.2, top_p = 0.95, max_length = 512, and num_samples = 50. All models are evaluated using bigcode-evaluation-harness (Ben Allal et al., 2022). tical data science engineering. We follow the evaluation setting of prior works (Guo et al., 2024; Wei et al., 2023). Table 3 shows that XFTDS achieves the best overall performance among all the evaluated 1.3B models. Specifically, XFTDS consistently surpasses SFTDS among all the seven studied libraries and outperforms EWADS in general. 5 Ablation Study 5.1 Effect of Shared Expert with Routing Weight Normalization We demonstrate the importance of the shared expert of XFT by comparing its performance with the original sparse upcycling (Komatsuzaki et al., 2023) baseline that does not employ any shared expert. 
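For clarity on what this ablation removes, the sketch below spells out the normalized shared-expert gate of Eq. (2): the top K − 1 normal experts are softmax-normalized and rescaled so that, together with the shared expert's weight 1 − s_t^max, the gate values sum to 1. This is an illustrative re-implementation under assumed inputs, not the authors' code.

# Sketch of the shared-expert gate with routing weight normalization (Eq. 2).
# `affinity` holds the router's softmax scores for the N-1 normal experts
# (the shared expert is excluded, matching s_{1,t} = -inf in Eq. 2).
import torch

def shared_expert_gates(affinity: torch.Tensor, k_normal: int) -> torch.Tensor:
    """Return gate values [g_shared, g_1, ..., g_{N-1}] that sum to 1."""
    s_max = affinity.max()
    top_vals, top_idx = affinity.topk(k_normal)          # top K-1 normal experts
    gates = torch.zeros(affinity.numel() + 1)
    gates[0] = 1.0 - s_max                                # shared expert weight
    gates[1 + top_idx] = torch.softmax(top_vals, dim=0) * s_max
    return gates                                          # sums to 1 by construction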
As shown in Table 4, the performance of the original sparse upcycling (with the "- Shared Expert" label) drops greatly compared with MoEDS. Notably, the sparse upcycling model performs even worse than SFTDS on HumanEval+, indicating its ineffectiveness for instruction tuning. While the shared expert setting is also employed in most recent works (Dai et al., 2024; Gou et al., 2024), their routing strategy will cause performance degradation due to the scale mismatch problem, which is handled by the routing weight normalization design in XFT. To demonstrate its importance, we conduct an ablation experiment by excluding it from XFT. Table 4 shows that, after removing routing weight normalization, the performance substantially decreases, despite still performing better than the original sparse upcycling that does not use the shared expert setting. 5.2 Effect of Merging Strategy In this section, we demonstrate the effectiveness of our learnable merging technique by comparing it with (1) directly merging experts with initialized mixing coefficients, and (2) the learnable merging technique without the shared expert rate setting, which is the same setting as the learned soup in Model Soups (Wortsman et al., 2022) and is described in Eq. (3) and Eq. (4). Specifically, we initialize the learnable mixing coefficient of the 12947 Model Size Data Science Library Overall np pd plt py scp tf sk DeepSeek-Coder-Base 1.3B 25.1 5.8 34.5 12.7 9.8 11.1 12.7 16.4 SFTDS 1.3B 30.9 17.0 40.5 32.7 18.3 21.1 24.4 25.9 EWADS 1.3B 32.9 19.4 41.8 25.7 17.7 22.2 33.0 27.8 MoEDS 8×1.3B 33.2 21.3 38.4 41.8 21.8 23.5 37.5 30.0 XFTDS 1.3B 32.9 20.2 38.9 41.4 21.1 16.9 37.5 29.3 Table 3: Pass@1 results on DS-1000 (completion format) with temperature = 0.2, top_p = 0.5, max_length = 1024, and num_samples = 40, following the same hyperparameter setting used in prior works (Wei et al., 2023). Model HumanEval HumanEval+ SFTDS 61.6 57.3 MoEDS 65.2 62.2 MoEDS - Normalization 63.4 59.1 MoEDS - Shared Expert 61.6 56.7 Table 4: Ablation over the design of MoEDS. "- Normalization" removes the routing weight normalization from the router, making it the same design as MoCLE (Gou et al., 2024). "- Shared Expert" removes the shared expert setting, making MoEDS the same architecture as original sparse upcycling (Komatsuzaki et al., 2023). Model HumanEval HumanEval+ MoEDS 65.2 62.2 XFTDS (INIT) 66.5 64.0 XFTDS 67.1 64.6 XFTDS - Shared Expert Rate 66.5 64.0 Table 5: Ablation over the design of XFTDS. "(INIT)" refers to directly using the initialized mixing coefficients to merge experts without training. "- Shared Rate" removes the shared rate setting from XFTDS, which is the same as the learned soup (Wortsman et al., 2022). shared expert as 0.75 and that of the other 7 normal experts as 1 28 for a fair comparison. As shown in Table 5, trained mixing coefficients outperform the initialized mixing coefficients for merging. Furthermore, removing the shared rate setting will degrade the performance of XFTDS on both HumanEval and HumanEval+, demonstrating its importance. An ablation study on the shared expert rate in Appendix A.7 further shows that (1) XFTDS consistently outperforms SFTDS regardless of their shared expert rate, and (2) both the general knowledge learned in the shared expert and the specific knowledge learned in other experts are important and integral for better performance after merging. 
Model HumanEval HumanEval+ SFTSTABLE 62.2 56.1 MoESTABLE 64.0 59.1 XFTSTABLE 68.3 62.2 Table 6: Ablation over the effect of the base model by replacing DeepSeek-Coder-Base 1.3B with STABLECODE 3B. XFT can consistently improve the instruction tuning performance of different base code LLMs. 5.3 Effect of Code LLM Choice To show that the effectiveness of XFT is not dependent on any specific code LLMs, we apply XFT to STABLE-CODE 3B (Pinnaparaju et al., 2024), whose architecture is different from DeepSeekCoder-Base 1.3B (Guo et al., 2024), to study whether XFT can still improve the performance of this new model. The training settings are detailed in Appendix A.8. As shown in Table 6, XFTSTABLE significantly improves SFTSTABLE by 10% on HumanEval and 11% on HumanEval+ respectively. Furthermore, XFTSTABLE consistently boosts the performance of MoESTABLE while only using 1/4× parameters and 1/2× inference computations. These results show that the effectiveness of XFT is generalizable across different code LLMs. 6 Discussion 6.1 Scaling up XFT to 7B Scale We scale up XFT to 7B-level code LLMs by applying it to DeepSeek-Coder-Base 6.7B (Guo et al., 2024). The training settings are detailed in ApModel HumanEval HumanEval+ SFTDS-6.7B 77.4 70.7 MoEDS-6.7B 81.1 75.6 XFTDS-6.7B 81.7 76.8 Table 7: Experiments on scaling up XFT to 7B scale. It shows that XFT can also consistently improve the instruction tuning performance of 7B-level code LLMs. 12948 Model Discipline Overall Humanities Social Science STEM Other SFTTL 25.38 23.30 24.20 26.78 24.97 MoETL 23.85 26.32 27.40 28.03 26.11 XFTTL 23.91 26.49 27.72 28.29 26.30 Table 8: Experiments on the generalizable effectiveness of XFT for general tasks in MMLU benchmark (Hendrycks et al., 2021). It shows that XFT can improve the general instruction tuning performance of LLMs. pendix A.9. As shown in Table 7, XFTDS-6.7B significantly improves SFTDS-6.7B by 6% on HumanEval and 9% on HumanEval+ respectively. Moreover, XFTDS-6.7B further boosts the performance of MoEDS-6.7B with only 1/8× parameters and 1/2× computations during inference! These promising results demonstrate the consistent effectiveness of XFT on 7B-level code LLMs. 6.2 Generalizability for General Tasks To demonstrate that XFT can also improve the performance of LLMs on general tasks across different domains, we apply XFT to general instruction tuning. We use TinyLlama 1.1B (Zhang et al., 2024) as the base model and use evol-instruct-70k (Xu et al., 2023) as the training dataset for general instruction tuning. Following existing work (Zhang et al., 2024), we use MMLU (Hendrycks et al., 2021) with the 5-shot setting as our evaluation benchmark to evaluate the general performance of instruction-tuned LLMs. More training settings are detailed in Appendix A.10. As shown in Table 8, XFTTL improves SFTTL by 5% on MMLU in general, demonstrating the generalizable effectiveness of XFT for general instruction tuning. 6.3 Preliminary Theoretical Explanation We provide a preliminary theoretical explanation of XFT by considering a simplified variant of it. Let’s start by analyzing the two major steps of XFT: • Step 1: Upcycling. According to the scaling laws (Kaplan et al., 2020), the upcycled MoE model performs better than the normal SFT dense model due to more trainable parameters. • Step 2: Merging. 
We consider a simplified variant of XFT, where the upcycled MoE model (e.g., MoEDS) can be viewed as the ensembling of two dense models and the merged dense model (e.g., XFTDS) can be viewed as the merging of the same two dense models. More details are included in Appendix A.11. As such, we can directly apply the theoretical analyzing process in Section 4 of Model Soups (Wortsman et al., 2022) to analyze the performance difference between the upcycled MoE model and the merged dense model, which is initially designed to analyze the performance difference between model ensembling and model merging. According to the analysis (Wortsman et al., 2022), the convexity of the loss can help the merged dense model achieve a similar expected loss as that of the upcycled MoE model. Overall, our preliminary theoretical explanation shows that (1) the Upcycling step improves the performance with more trainable parameters, and (2) the Merging step can provably maintain the performance of the aforementioned simplified MoE model with only dense-model compute. 7 Conclusion This paper introduces XFT to unlock the power of code instruction tuning by simply merging upcycled MoE. Similar to SFT, XFT starts with a dense LLM and outputs a fine-tuned dense LLM with the exact size and model structure. Yet, XFT improves SFT by upcycling the pre-trained dense LLM to an MoE model for fine-tuning, after which we compile the MoE model back to an efficient dense LLM with a learnable merging mechanism. As such, we unleash the performance limit of instruction tuning without any additional inference overhead. Using the same training dataset, XFT outperforms SFT on a variety of benchmarks, including HumanEval(+), MBPP(+), MultiPL-E, and DS-1000, from 2% to 13%. By applying XFT to DeepSeek-Coder-Base 1.3B, we create the new state-of-the-art tiny code LLM (<3B). The final dense LLM produced by XFT preserves or even outperforms the full upcycled MoE which uses 8× parameters as much as our final dense LLM. XFT is fully orthogonal to the existing instruction tuners such as Evol-Instruct and OSS-INSTRUCT, opening a new dimension for code instruction tuning. 12949 Limitations To balance the general knowledge in the shared expert and the specific knowledge in other normal experts, we introduce a hyperparameter λ in the merging process of XFT, which might slightly increase the efforts for hyperparameter search. It would be interesting to explore other hyperparameter-free techniques to tackle this challenge. Furthermore, while we have provided a preliminary theoretical explanation for the strong empirical performance of XFT, it would be interesting to provide a complete theoretical explanation in the future. Acknowledgement We extend our special thanks to Terry Yue Zhuo for his assistance with the scale-up experiments on DeepSeek-Coder-Base 6.7B (§6.1) after our submission. His contributions are good enough to merit authorship; however, due to the policy of ACL 2024, post-submission authorship changes are not permitted. As a result, we have included him in the author list of our arXiv version. We also thank Sea AI Lab and Dr. Qian Liu for their valuable feedback and computing resource assistance. We appreciate all the reviewers for their insightful comments. This work was partially supported by NSF grant CCF-2131943, as well as Kwai Inc. References Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, and Charles Sutton. 2021. 
Program synthesis with large language models. Loubna Ben Allal, Niklas Muennighoff, Logesh Kumar Umapathi, Ben Lipkin, and Leandro von Werra. 2022. A framework for the evaluation of code generation models. https://github.com/bigcode-project/ bigcode-evaluation-harness. Federico Cassano, John Gouwar, Daniel Nguyen, Sydney Nguyen, Luna Phipps-Costin, Donald Pinckney, Ming-Ho Yee, Yangtian Zi, Carolyn Jane Anderson, Molly Q Feldman, Arjun Guha, Michael Greenberg, and Abhinav Jangda. 2022. Multipl-e: A scalable and extensible approach to benchmarking neural code generation. Sahil Chaudhary. 2023. Code alpaca: An instructionfollowing llama model for code generation. https: //github.com/sahil280114/codealpaca. Guanzheng Chen, Fangyu Liu, Zaiqiao Meng, and Shangsong Liang. 2022. Revisiting parameterefficient tuning: Are we really there yet? Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating large language models trained on code. Damai Dai, Chengqi Deng, Chenggang Zhao, R. X. Xu, Huazuo Gao, Deli Chen, Jiashi Li, Wangding Zeng, Xingkai Yu, Y. Wu, Zhenda Xie, Y. K. Li, Panpan Huang, Fuli Luo, Chong Ruan, Zhifang Sui, and Wenfeng Liang. 2024. Deepseekmoe: Towards ultimate expert specialization in mixture-of-experts language models. Shihan Dou, Enyu Zhou, Yan Liu, Songyang Gao, Jun Zhao, Wei Shen, Yuhao Zhou, Zhiheng Xi, Xiao Wang, Xiaoran Fan, Shiliang Pu, Jiang Zhu, Rui Zheng, Tao Gui, Qi Zhang, and Xuanjing Huang. 2023. Loramoe: Revolutionizing mixture of experts for maintaining world knowledge in language model alignment. Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker’s guide to testing statistical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1383–1392, Melbourne, Australia. Association for Computational Linguistics. Nan Du, Yanping Huang, Andrew M. Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, Barret Zoph, Liam Fedus, Maarten Bosma, Zongwei Zhou, Tao Wang, Yu Emma Wang, Kellie Webster, Marie Pellat, Kevin Robinson, Kathleen Meier-Hellstern, Toju Duke, Lucas Dixon, Kun Zhang, Quoc V Le, Yonghui Wu, Zhifeng Chen, and Claire Cui. 2022. Glam: Efficient scaling of language models with mixture-of-experts. William Fedus, Barret Zoph, and Noam Shazeer. 2022. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. Yunhao Gou, Zhili Liu, Kai Chen, Lanqing Hong, Hang Xu, Aoxue Li, Dit-Yan Yeung, James T. Kwok, and 12950 Yu Zhang. 2024. Mixture of cluster-conditional lora experts for vision-language instruction tuning. 
Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao Bi, Y. Wu, Y. K. Li, Fuli Luo, Yingfei Xiong, and Wenfeng Liang. 2024. Deepseek-coder: When the large language model meets programming – the rise of code intelligence. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. Yongqi Huang, Peng Ye, Xiaoshui Huang, Sheng Li, Tao Chen, Tong He, and Wanli Ouyang. 2023. Experts weights averaging: A new general training scheme for vision transformers. Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, MarieAnne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2024. Mixtral of experts. Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. Aran Komatsuzaki, Joan Puigcerver, James Lee-Thorp, Carlos Riquelme Ruiz, Basil Mustafa, Joshua Ainslie, Yi Tay, Mostafa Dehghani, and Neil Houlsby. 2023. Sparse upcycling: Training mixture-of-experts from dense checkpoints. Yuhang Lai, Chengxi Li, Yiming Wang, Tianyi Zhang, Ruiqi Zhong, Luke Zettlemoyer, Scott Wen tau Yih, Daniel Fried, Sida Wang, and Tao Yu. 2022. Ds-1000: A natural and reliable benchmark for data science code generation. Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. 2020. Gshard: Scaling giant models with conditional computation and automatic sharding. Jiawei Liu, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. 2023. Is your code generated by chatGPT really correct? rigorous evaluation of large language models for code generation. In Thirty-seventh Conference on Neural Information Processing Systems. LLaMA-MoE Team. 2023. Llama-moe: Building mixture-of-experts from llama with continual pretraining. Anton Lozhkov, Raymond Li, Loubna Ben Allal, Federico Cassano, Joel Lamy-Poirier, Nouamane Tazi, Ao Tang, Dmytro Pykhtar, Jiawei Liu, Yuxiang Wei, Tianyang Liu, Max Tian, Denis Kocetkov, Arthur Zucker, Younes Belkada, Zijian Wang, Qian Liu, Dmitry Abulkhanov, Indraneil Paul, Zhuang Li, WenDing Li, Megan Risdal, Jia Li, Jian Zhu, Terry Yue Zhuo, Evgenii Zheltonozhskii, Nii Osae Osae Dade, Wenhao Yu, Lucas Krauß, Naman Jain, Yixuan Su, Xuanli He, Manan Dey, Edoardo Abati, Yekun Chai, Niklas Muennighoff, Xiangru Tang, Muhtasham Oblokulov, Christopher Akiki, Marc Marone, Chenghao Mou, Mayank Mishra, Alex Gu, Binyuan Hui, Tri Dao, Armel Zebaze, Olivier Dehaene, Nicolas Patry, Canwen Xu, Julian McAuley, Han Hu, Torsten Scholak, Sebastien Paquet, Jennifer Robinson, Carolyn Jane Anderson, Nicolas Chapados, Mostofa Patwary, Nima Tajbakhsh, Yacine Jernite, Carlos Muñoz Ferrandis, Lingming Zhang, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, and Harm de Vries. 2024. Starcoder 2 and the stack v2: The next generation. Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. 2023. Wizardcoder: Empowering code large language models with evolinstruct. Zohar Manna and Richard J Waldinger. 
1971. Toward automatic program synthesis. Communications of the ACM, 14(3):151–165. Nikhil Pinnaparaju, Reshinth Adithyan, Duy Phung, Jonathan Tow, James Baicoianu, , and Nathan Cooper. 2024. Stable code 3b. Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. Anders Søgaard, Anders Johannsen, Barbara Plank, Dirk Hovy, and Hector Martínez Alonso. 2014. What’s in a p-value in NLP? In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 1–10, Ann Arbor, Michigan. Association for Computational Linguistics. Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2023. Self-instruct: Aligning language models with self-generated instructions. Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2022. Finetuned language models are zero-shot learners. Yuxiang Wei, Zhe Wang, Jiawei Liu, Yifeng Ding, and Lingming Zhang. 2023. Magicoder: Source code is all you need. arXiv preprint arXiv:2312.02120. Frank. Wilcoxon. 1945. Individual comparisons by ranking methods. Biometrics, 1:196–202. 12951 Mitchell Wortsman, Gabriel Ilharco, Samir Yitzhak Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, Ari S. Morcos, Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, and Ludwig Schmidt. 2022. Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time. Haoyuan Wu, Haisheng Zheng, and Bei Yu. 2024. Parameter-efficient sparsity crafting from dense to mixture-of-experts for instruction tuning on general tasks. Lemeng Wu, Mengchen Liu, Yinpeng Chen, Dongdong Chen, Xiyang Dai, and Lu Yuan. 2022. Residual mixture of experts. Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. 2023. Wizardlm: Empowering large language models to follow complex instructions. Fuzhao Xue, Xiaoxin He, Xiaozhe Ren, Yuxuan Lou, and Yang You. 2022. One student knows all experts know: From sparse to dense. Fuzhao Xue, Zian Zheng, Yao Fu, Jinjie Ni, Zangwei Zheng, Wangchunshu Zhou, and Yang You. 2024. Openmoe: An early effort on open mixture-of-experts language models. arXiv preprint arXiv:2402.01739. Peiyuan Zhang, Guangtao Zeng, Tianduo Wang, and Wei Lu. 2024. Tinyllama: An open-source small language model. Shengyu Zhang, Linfeng Dong, Xiaoya Li, Sen Zhang, Xiaofei Sun, Shuhe Wang, Jiwei Li, Runyi Hu, Tianwei Zhang, Fei Wu, and Guoyin Wang. 2023. Instruction tuning for large language models: A survey. A Appendix for "XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts" A.1 Training Settings for DeepSeek-Coder-Base 1.3B We use a batch size of 64 and a learning rate of 5e-5 with a linear scheduler to fine-tune MoEDS for 4 epochs with 500 warmup steps, following the implementation of previous work (Wei et al., 2023). We further use a batch size of 64, a shared expert rate λ of 0.75, and a learning rate of 1e-5 with a linear schedule to fine-tune the learnable mixing coefficients for experts in the instruction-tuned MoEDS on the same instruction-tuning dataset for 1 epoch with 125 warmup steps. Detailedly, we use Softmax to keep the sum of the mixing coefficients of the other 7 normal experts as 0.25. 
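For readers reproducing this setup with the HuggingFace Trainer, the hyperparameters above map roughly onto the configuration sketched below. This is an assumption-laden sketch rather than the released training script: the output directories are placeholders, the effective batch size of 64 is expressed as a per-device batch size times gradient accumulation (how it was actually realized is not stated here), and the precision flag is a guess.

from transformers import TrainingArguments

# Stage 1: instruction-tune the upcycled MoE (4 epochs, lr 5e-5, linear schedule).
upcycled_sft_args = TrainingArguments(
    output_dir="moe_ds_sft",              # placeholder path
    per_device_train_batch_size=8,        # 8 x 8 accumulation = effective batch size 64 (assumption)
    gradient_accumulation_steps=8,
    learning_rate=5e-5,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    warmup_steps=500,
    bf16=True,                            # precision choice is an assumption
)

# Stage 2: fine-tune only the mixing coefficients (1 epoch, lr 1e-5, linear schedule).
merge_coeff_args = TrainingArguments(
    output_dir="xft_ds_merge",            # placeholder path
    per_device_train_batch_size=8,
    gradient_accumulation_steps=8,
    learning_rate=1e-5,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    warmup_steps=125,
    bf16=True,
)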
For SFTDS and EWADS, we use the same hyperparameter setting as XFT, where the batch size is 64 and the learning rate is 5e-5 with a linear scheduler. Because XFT is trained for 4 epochs during upcycling and 1 epoch during merging, for a fair comparison, we train SFTDS and EWADS for 5 (= 4 + 1) epochs with 625 warmup steps. A.2 Implementation details of EWA Because EWA (Huang et al., 2023) does not release their implementation, we implement EWA by ourselves, including the constant schedule setting and the linear schedule setting. We use a share rate β of 0.3, following the original setting of EWA. While EWA with the constant schedule setting achieves reasonable performance in our evaluation, the training loss of EWA with the linear schedule setting is unstable during instruction tuning, as shown in Figure 3, and thus cannot achieve reasonable performance. As a result, we report the results of EWA with the constant schedule setting in Section 4. 0 2000 4000 6000 8000 Training steps 0 2 4 6 8 10 12 14 16 Training loss Constant schedule Linear schedule Figure 3: Training loss curve of EWA with the constant schedule setting and the linear schedule setting. A.3 Details of HumanEval(+) and MBPP(+) In these benchmarks, each task consists of a task description in English, which is sent to LLMs as the prompt, and LLMs are expected to generate the corresponding code to satisfy the requirements in the description. While these benchmarks provide a handful of test cases to validate the correctness of the generated code, these tests are often insufficient for more rigorous evaluation. As such, HumanEval+ and MBPP+ proposed by EvalPlus (Liu et al., 2023) are usually used to evaluate the correctness of the generated code, which provides 80×/35× more tests compared with the original benchmarks. 12952 Model HumanEval HumanEval+ SFTDS 61.6 57.2 EWADS 62.7 58.8 XFTDS 64.5 60.9 Table 9: Average pass@1 results of 200 experiments on HumanEval (+) computed with sampling. XFT clearly outperforms both EWADS and SFTDS. Model HumanEval HumanEval+ XFTDS vs. EWADS 2.6e-18 8.0e-23 XFTDS vs. SFTDS 9.6e-30 3.7e-33 Table 10: p-values for XFTDS vs. EWADS and XFTDS vs. SFTDS in 200 experiments on HumanEval (+) conducted using sampling. Results show that improvements brought by XFT are statistically significant. A.4 Statistical Significance Analysis In our main experiments, we follow prior works (Wei et al., 2023; Lozhkov et al., 2024) to conduct experiments on HumanEval(+) using greedy decoding. To demonstrate the statistical significance of our improvements, we change our setting from greedy decoding to sampling. In detail, to conduct one experiment on HumanEval(+), the model will sample one solution for each problem in HumanEval(+) with top p = 0.95 and temperature = 0.8, which is the same setting used in prior works (Liu et al., 2023; Chen et al., 2021). Following prior work (Liu et al., 2023), we repeat this experiment 200 times for three techniques: XFTDS, EWADS, and SFTDS. EWADS is included because it is the best-performing baseline in our main experiment. We first compute their average pass@1 performance in these 200 experiments. As is shown in Table 9, XFTDS outperforms both EWADS and SFTDS. Furthermore, we use the Wilcoxon signed-rank test (Wilcoxon, 1945; Dror et al., 2018), a widely used statistical test, to check if the improvements brought by XFT are statistically significant. As shown in Table 10, the p-values for both XFTDS vs. EWADS and XFTDS vs. 
SFTDS are much smaller than both 0.0025 (the significance level recommended for NLP work (Søgaard et al., 2014)) and 0.05 (the most common significance level), demonstrating the statistical significance of the improvements brought by XFT. A.5 Training Overhead Analysis Compared with SFT, XFT will inevitably introduce additional overhead in the training process because Model HumanEval HumanEval+ SFTDS w/ same steps 61.6 57.3 SFTDS w/ same budget 62.2 57.3 XFTDS 67.1 64.6 Table 11: Experiments on the effect of training overhead. For our two SFT baselines, "w/ same steps" refers to one SFT baseline using the same training steps as XFT while "w/ same budget" refers to the other SFT baseline using the same training budget as XFT. XFT can consistently outperform both SFT baselines to a large extent, further demonstrating the ability of XFT to unlock the power of code instruction tuning. XFT needs to fine-tune the upcycled MoE model, which contains more parameters than the original dense model and thus requires more computation. In contrast, the normal SFT technique only needs to fine-tune the original dense model. To better understand the effect of such overhead, we conduct an experiment using the same training budget (i.e., the same GPU hours) instead of the same training steps for the normal SFT baseline. As shown in Table 11, although sharing the same training budget as XFTDS, the performance of SFTDS is still significantly worse than that of XFTDS, demonstrating the ability of XFT to unlock the power of code instruction tuning using the same training budget. A.6 Expert Specialization Analysis Inspired by recent works (Jiang et al., 2024; Xue et al., 2024), we analyze whether each expert in MoEDS has different specializations in different programming languages by visualizing the routing decision of the tokens from different programming languages in the MultiPL-E benchmark (including Python). We collect the routing decision for the MultiPL-E benchmark when conducting experiments in Section 4.3. For Python, we collect the routing decision by running HumanEval experiment following the same setting used in Section 4.3. Following the analysis setting of recent work (Jiang et al., 2024), we get the visualization results from layers 0, 11, and 23 in MoEDS, where layer 0 and layer 23 are the first and the last layers of MoEDS. As shown in Figure 4, we do not observe any obvious patterns in the assignment of experts based on the type of programming languages, which is in line with the findings reported by recent works (Jiang et al., 2024; Xue et al., 2024). 12953 2 3 4 5 6 7 8 Expert ID 0.00 0.05 0.10 0.15 0.20 layer: 23 Python C++ PHP Java JS Swift Rust 0.00 0.05 0.10 0.15 0.20 layer: 0 0.00 0.05 0.10 0.15 0.20 Selection proportion layer: 11 Figure 4: Proportion of tokens assigned to each expert on different programming languages from MultiPL-E (including Python) for layers 0, 11, and 23. The shared expert FFN1 is excluded from the chart because all the tokens are always assigned to it. The gray vertical line 1 7 is the proportion expected with the uniform sampling. A.7 Effect of Shared Expert Rate We further study the effect of the shared expert rate λ on the performance of the final merged dense model. We evenly choose five shared expert rates, including 0.00, 0.25, 0.50, 0.75, and 1.00, to perform the learnable merging process and evaluate each merged dense model accordingly. Note that 0.75 is the default shared expert rate used in our main experiments. 
If the shared expert rate is 0.00, it means that the shared expert is ignored when constructing the merged dense model from the upcycled MoE model; if the shared expert rate is 1.00, it means that the final dense model is built by simply extracting the shared expert from the upcycled MoE model. As shown in Table 12, there are mainly three interesting observations: • The performance of the final merged dense model improves gradually when the shared expert rate grows from 0.00 to 0.75, indicating that general knowledge learned by the shared expert is important for better performance. • The performance of the final merged dense model drops significantly when the shared expert rate grows from 0.75 to 1.00, showing that specific knowledge learned by other experts is also integral and ignoring them will lead to a Model λ HumanEval HumenEval+ SFTDS 61.6 57.3 XFTDS 0.00 62.8 59.8 0.25 64.6 61.0 0.50 65.9 62.8 0.75 67.1 64.6 1.00 63.4 60.4 Table 12: Ablation over the effect of the shared expert rate λ in our learnable merging technique. XFT can consistently outperform the normal SFT baseline regardless of the shared expert rate, while λ = 0.75 is the optimal setting in our experiments. significant performance drop. • All the final merged dense models consistently outperform the normal SFT baseline regardless of their shared expert rate, further demonstrating the effectiveness of XFT. A.8 Training Settings for STABLE-CODE 3B We use evol-codealpaca-v1 as the training dataset. Since STABLE-CODE 3B is the base model, we upcycle a new MoE model from the base model, namely MoESTABLE. We construct MoESTABLE with 4 experts in one MoE layer, where the top 2 experts are activated for each token, including one shared expert. Consequently, the size of 12954 MoESTABLE can be described as 4×3B. We use a batch size of 64 and a learning rate of 5e-5 with a linear scheduler to fine-tune MoESTABLE for 4 epochs with 500 warmup steps. Similar to XFTDS, we obtain XFTSTABLE by learning mixing coefficients to merge MoE layers inside MoESTABLE as normal FFN layers, which is fine-tuned with a batch size of 64, a shared expert rate λ of 0.85, and a learning rate of 1e-5 with a linear schedule for 1 epoch with 125 warmup steps. Our baseline model, namely SFTSTABLE, is fine-tuned for 5 (= 4 + 1) epochs with a batch size of 64, a learning rate of 5e-5, and 625 warmup steps for a fair comparison. A.9 Training Settings for DeepSeek-Coder-Base 6.7B We use evol-codealpaca-v1 as the training dataset. We upcycle a new MoE model from DeepSeek-Coder-Base 6.7B, namely MoEDS-6.7B. We construct MoEDS-6.7B with 8 experts in one MoE layer, where the top 2 experts are activated for each token, including one shared expert. As such, MoEDS-6.7B includes 8×6.7B parameters. We use a batch size of 64 and a linear scheduler to finetune MoEDS-6.7B for 4 epochs with 500 warmup steps. We choose the best-performing learning rate from {2e −5, 5e −5} for MoEDS-6.7B. Because the FFN weights of MoEDS-6.7B are too large to fit in our GPU memory, during our merging step, we realize that one part of computation in the training of XFTDS-6.7B has to be moved to CPUs, which significantly slows down the training speed. Consequently, we use a batch size of 16, a shared expert rate λ of 0.75, a constant learning rate of 1e-4, and 400 training steps in merging to obtain XFTDS-6.7B. Our baseline model, namely SFTDS-6.7B, is fine-tuned for 5 epochs with a batch size of 64 and 625 warmup steps for a fair comparison. 
We also choose the best-performing learning rate from {2e −5, 5e −5} for SFTDS-6.7B. A.10 Training Settings for TinyLlama 1.1B Using TinyLlama 1.1B as the base model, we upcycle a new MoE model, namely MoETL, from the pre-trained dense model. Following the setting of MoEDS, we construct MoETL with 8 experts in one MoE layer, where the top 6 experts are activated for each token, including one shared expert. As such, the number of parameters for MoETL can be written as 8×1.1B. We use a batch size of 64 and a learning rate of 5e-5 with a linear scheduler to finetune MoETL for 4 epochs with 240 warmup steps. To obtain XFTTL, we learn mixing coefficients to merge MoE layers inside MoETL by fine-tuning them with a batch size of 64, a shared expert rate λ of 0.85, and a learning rate of 2e-5 with a linear schedule for 1 epoch with 60 warmup steps. For a fair comparison, we fine-tune a baseline model SFTTL for 5 (= 4 + 1) epochs with a batch size of 64, a learning rate of 5e-5, and 300 warmup steps. A.11 Details of Preliminary Theoretical Explanation We consider a simplified variant of XFT as follows: • The original dense model is a one-layer transformer model, which contains one attention layer connected with one feed-forward network (FFN) layer. As such, the upcycled MoE model is also a one-layer transformer model, containing one attention layer connected with an MoE layer. • The upcycled MoE model only has two experts (e1 and e2), both of which are always selected for processing the input tokens. • The router in the MoE model assigns constant weights to each expert, regardless of the input token. Consequently, the output of the MoE layer for the t-th token ht can be represented as (1 −α)e1(ut) + αe2(ut), where 1 −α is the router weight assigned to e1, α is the router weight assigned to e2, and ut is the input of the MoE layer for the t-th token. • We simplify the process of merging the MoE model back to a dense model as Weα = (1 − α)We1 + αWe2, where We refers to the weight of e and eα refers to the weight of the FFN in the merged dense model. In this simplified scenario, if we denote f(x; θ) as the output of the model θ for the input x, the output of this simplified MoE model for input token x can be represented as f(x; θMoE). Interestingly, if we define two new dense models θ1 and θ2, where θ1 and θ2 both use the attention layer of this MoE model as their attention layer while using e1 and e2 as their FFN layer respectively, f(x; θMoE) can be represented as (1−α)f(x; θ1)+αf(x; θ2). Consequently, the computation process of this simplified MoE model can be viewed as ensembling the outputs of two dense models θ1 and θ2. Meanwhile, the process of merging the upcycled MoE model back to a dense model in this simplified scenario can be represented as θα = (1−α)θ1+αθ2, which can be viewed as the model merging of the same two dense models θ1 and θ2. 12955
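A toy numerical check, my own illustration rather than anything from the paper, helps visualize this simplified argument: two ReLU FFN "experts" are derived from a common initialization, their outputs are ensembled with a constant router weight, and the result is compared against the output of the FFN whose weights are merged with the same coefficient. When the experts stay close to each other the two quantities nearly coincide, and the gap grows as they drift apart, which is consistent with the Model Soups style analysis referenced above. All sizes and the perturbation scale are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
d, h, a = 16, 64, 0.3                         # toy widths and a constant router weight

def ffn(u, W1, W2):
    return np.maximum(u @ W1, 0.0) @ W2       # two-layer ReLU FFN standing in for an expert

W1 = rng.standard_normal((d, h)) / np.sqrt(d)
W2 = rng.standard_normal((h, d)) / np.sqrt(h)
u = rng.standard_normal(d)                    # stands in for the shared attention output u_t

for eps in (0.01, 0.1, 1.0):                  # how far the two experts have drifted apart
    W1a, W2a = W1 + eps * rng.standard_normal(W1.shape), W2 + eps * rng.standard_normal(W2.shape)
    W1b, W2b = W1 + eps * rng.standard_normal(W1.shape), W2 + eps * rng.standard_normal(W2.shape)
    ensemble = (1 - a) * ffn(u, W1a, W2a) + a * ffn(u, W1b, W2b)       # upcycled MoE view
    merged = ffn(u, (1 - a) * W1a + a * W1b, (1 - a) * W2a + a * W2b)  # merged dense view
    rel_gap = np.linalg.norm(ensemble - merged) / np.linalg.norm(ensemble)
    print(f"expert drift {eps}: relative gap {rel_gap:.4f}")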
2024
699
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 102–116 August 11-16, 2024 ©2024 Association for Computational Linguistics BitDistiller: Unleashing the Potential of Sub-4-Bit LLMs via Self-Distillation Dayou Du1, Yijia Zhang2, Shijie Cao3, Jiaqi Guo3, Ting Cao3, Xiaowen Chu1, Ningyi Xu2 1The Hong Kong University of Science and Technology (Guangzhou) 2Shanghai Jiao Tong University 3Microsoft Research Asia [email protected], {zhangyijia, xuningyi}@sjtu.edu.cn, {shijiecao, jiaqiguo, ting.cao}@microsoft.com, [email protected] Abstract The upscaling of Large Language Models (LLMs) has yielded impressive advances in natural language processing, yet it also poses significant deployment challenges. Weight quantization has emerged as a widely embraced solution to reduce memory and computational demands. This paper introduces BitDistiller, a framework that synergizes QuantizationAware Training (QAT) with Knowledge Distillation (KD) to boost the performance of LLMs at ultra-low precisions (sub-4-bit). Specifically, BitDistiller first incorporates a tailored asymmetric quantization and clipping technique to maximally preserve the fidelity of quantized weights, and then proposes a novel Confidence-Aware Kullback-Leibler Divergence (CAKLD) objective, which is employed in a self-distillation manner to enable faster convergence and superior model performance. Empirical evaluations demonstrate that BitDistiller significantly surpasses existing methods in both 3-bit and 2-bit configurations on general language understanding and complex reasoning benchmarks. Notably, BitDistiller is shown to be more cost-effective, demanding fewer data and training resources. The code is available at https://github.com/ DD-DuDa/BitDistiller. 1 Introduction Scaling up model sizes has been pivotal to the success of large language models (LLMs), yielding unprecedented performance across diverse natural language processing tasks (Brown et al., 2020; Touvron et al., 2023; Kaplan et al., 2020). However, such escalating model size poses significant challenges in deployment, particularly on resourceconstrained devices, due to the substantial memory footprint and computational requirements. Weight quantization has emerged as a popular strategy to enhance the efficiency and accessibility of LLMs by reducing model size with minimal performance loss (Gholami et al., 2022). In practice, 1010 1011 Total Model Bits 0 10 20 30 40 50 60 70 HumanEval Pass@1 16 bit 3 bit - OmniQuant 3 bit - LLM-QAT 3 bit - BitDistiller (Ours) 2 bit - Omniquant 2 bit - LLM-QAT 2 bit - BitDistiller (Ours) Figure 1: Bit-Level scaling laws for code generation performance for 3B to 34B parameter coder models. BitDistiller outperforms existing QAT methods in both 3-bit and 2-bit settings. Details in Table 2. 4-bit quantization has been widely adopted, offering a balance between a considerable compression ratio and the preservation of LLM capabilities (Lin et al., 2023; Frantar et al., 2022; Liu et al., 2023a). However, sub-4-bit quantization significantly degrades the fidelity of model weights, leading to deteriorated model performance, especially in smaller models or tasks requiring complex reasoning (Dettmers and Zettlemoyer, 2023). To address this, researchers have developed various PostTraining Quantization (PTQ) and QuantizationAware Training (QAT) methods (Chee et al., 2023; Shao et al., 2023). 
PTQ, while appealing without retraining, struggles to preserve model performance at very low precisions. In contrast, QAT incorporates quantization into the training loop, enabling dynamic adaptation to reduced precision and thus maintaining higher accuracy (Liu et al., 2023b; Kim et al., 2023a). Despite its early promise, two fundamental challenges are essential for achieving high model performance in extreme low-bit QAT: how to maximally preserve weight fidelity during 102 quantization, and how to effectively learn low-bit representations during training. In this work, we present BitDistiller, a novel framework that synergizes QAT with Knowledge Distillation (KD) to significantly boost the performance of sub-4-bit quantized LLMs. To minimize quantization error, BitDistiller employs a tailored asymmetric quantization and clipping strategy to maintain the capabilities of the full-precision model as much as possible, particularly at ultra-low-bit levels. For efficient and effective low-bit representation learning, BitDistiller leverages a simple yet effective self-distillation approach, wherein the full-precision model acts as its own teacher to refine the low-bit student model. Notably, BitDistiller innovates with a Confidence-Aware KullbackLeibler divergence (CAKLD) objective that optimizes knowledge transferring efficacy, enabling faster convergence and enhanced model performance. Our empirical evaluations, conducted on a diverse suite of general language understanding and complex reasoning tasks including mathematics and coding, demonstrate that BitDistiller significantly outperforms existing PTQ and QAT methods in the realm of sub-4-bit quantization. As illustrated in Figure 1, BitDistiller achieves the most favorable scaling law in both 3-bit and 2-bit configurations on the code reasoning benchmark. Moreover, BitDistiller is demonstrated to be more costeffective, requiring less training data and fewer training resources, thereby marking a significant advancement toward deploying robust Large Language Models on resource-constrained devices. 2 Background and Related Work 2.1 Weight Quantization for LLMs PTQ and QAT PTQ is directly applied to pretrained models without additional training. PTQ for LLMs typically employs techniques that either adjust quantization error (Frantar et al., 2022; Chee et al., 2023) or prioritize salient weights (Dettmers et al., 2023b; Lin et al., 2023; Kim et al., 2023b). However, the lack of retraining with PTQ may cause notable decreases in model performance at extremely low precisions. In contrast, QAT integrates quantization into the training phase, enabling the model to learn better representations for lowbit weights, as demonstrated by approaches like LLM-QAT (Liu et al., 2023b), OmniQuant (Shao et al., 2023), PB-LLM (Shang et al., 2023), and BitNet (Wang et al., 2023). Despite improved model performance, QAT is still challenged by the need of extensive training and data, with significant potential for further optimization and enhancement. In this work, we harness the synergy of QAT and KD to enhance the performance of quantized LLMs, especially at sub-4-bit settings. Granularity and Format Optimizations Extensive research indicates that adopting finer-grained quantization approaches, such as group-wise quantization, can achieve higher accuracy compared to layer-wise or channel-wise methods (Shen et al., 2020; Frantar et al., 2022). 
Floating-point formats (FP8/FP4/NF4) have been demonstrated to deliver superior accuracy compared to integer formats (INT8/INT4) in LLM quantization (Kuzmin et al., 2022; Dettmers and Zettlemoyer, 2023; Zhang et al., 2023b). Notably, asymmetric quantization methods, particularly for floating-point formats, outperform their symmetric counterparts by better accommodating the distribution of model weights (Zhang et al., 2023a). BitDistiller aligns with these insights, employing finer granularity and asymmetric techniques for quantization. 2.2 Knowledge Distillation for LLMs In the realm of LLMs, white-box knowledge distillation (KD) has become increasingly prevalent due to the accessible distribution of the teacher model, which facilitates the transmission of knowledge representations to the student model (Hinton et al., 2015; Zhu et al., 2023). Notably, MINILLM (Gu et al., 2023) utilizes the reverse KLD to ensure the accuracy and fidelity of language generation. GKD (Agarwal et al., 2023) has explored alternative divergences and addressed the distribution mismatch by sampling outputs from the student model during training. To attain exceedingly high compression ratios, a promising method is to combine KD with model quantization, where KD can be effectively used to mitigate the accuracy decline of quantized models (Zhang et al., 2020; Kim et al., 2022). In cutting-edge research applying QAT-based KD for LLMs, TSLD (Kim et al., 2023a) considers risks of overfitting and conducts logit distillation with ground truth loss. Similarly, LLM-QAT leverages randomly teacher-generated data for data-free distillation. In distinction from TSLD and LLM-QAT, we achieve better performance and cost-efficiency in the extremely low-bit quantization level. 103 3 Methodology Logits Logits Confidence-Aware KLD Teacher Asymmetric Quantization Student Asymmetric Clipping Input 𝑥 Output 𝑦𝑃 Teacher generation Teacher generation Input 𝑥 Output 𝑦𝑃 Figure 2: Depiction of the QAT-based KD framework of BitDistiller. In this section, we introduce BitDistiller, a QAT with self-distillation framework for LLMs, as illustrated in Figure 2. To maximally preserve weight fidelity during quantization, we first present an asymmetric quantization and clipping method (see Section 3.1). Second, to counteract the performance degradation caused by precision reduction, we adopt Knowledge Distillation and propose a novel Confidence-Aware KL divergence (CAKLD) objective, in which the full-precision model acts as a teacher and the low-precision one plays a student (see Section 3.2). Algorithm 1 outlines the process of BitDistiller. Given the full-precision weight w, BitDistiller adopts the asymmetric clipping to alleviate outliers in w (Line 1), prior to the training loop. Then, in each training step, BitDistiller forwards the model with the quantized weights (wt Q), computes the loss with the proposed CAKLD objective (Line 45), and updates the full-precision weights (Line 67) (Bengio et al., 2013). When the training finishes, BitDistiller returns the final quantized weights. 3.1 Asymmetric Quantization and Clipping The adoption of finer granularities, or smaller group sizes, in weight quantization of LLMs inherently leads to asymmetrical distributions and the presence of outliers in weight groups. Proper management of asymmetry is crucial to maintaining model performance in low-bit PTQ regimes. 
Our investigation reveals that the effects of asymmetry are more prominent in extremely low-bit QAT, such as 3-bit and 2-bit configurations, necessitating tailored strategies to address these challenges. Algorithm 1 BitDistiller Input: Full-precision weight w, Dataset D = {(x, y)}, Learning rate η, Training step T Require: Clipping function Clip, Quantization function Q, Loss function DCAKLD Output: Low-precision weight wT Q 1: w1 = Clip(w); 2: for t = 1 to T do 3: Sample a batch of data B from D; ▷Forward with quantized weight 4: wt Q = Q(wt); 5: Compute DCAKLD(P w ∥P wt Q) on B; ▷Backward on full-precision weight wt 6: Compute gradients ∂DCAKLD ∂wt ; 7: wt+1 = Update(wt, ∂DCAKLD ∂wt , η); 8: end for 9: wT Q = Q(wT ); Therefore, in BitDistiller, we adopt asymmetric quantization techniques coupled with asymmetric clipping strategies to enhance the representational fidelity of quantized weights and maximally preserve the capabilities of the full-precision model. Asymmetric Quantization Previous studies have shown that floating-point formats (e.g., FP, NF) often outperform integer formats (INT) in LLM quantization (Dettmers et al., 2023a; Liu et al., 2023a). However, as the quantization level falls to 2-bit, we observed a notable decline in the effectiveness of FP/NF formats. This advantage of FP/NF formats is attributed to their non-uniform nature, which can capture a wider range of values. Such a non-uniform distribution aligns better with the natural distribution of weight tensors in LLMs. In 2-bit cases, the limited representational capacity, offering only four distinct values, undermines the benefits of non-uniform distribution and impedes the efficient utilization of each numerical value. In light of these findings, we employ NF formats for quantization above 2-bit, while opting for the INT format at the 2-bit level. For NF formats (e.g., NF3), we adopt the AFPQ method (Zhang et al., 2023a) to enable asymmetric quantization, which establishes separate scales, spos for positive weights wpos and sneg for negative weights wneg, as shown in Equation 1. For INT formats (e.g., INT2), we utilize conventional asymmetric methods with a single scale and a designated zero point, as detailed in Equation 2. 104 NF-Asym : Q(w) = ( ⌊wpos spos ⌉, if w > 0 ⌊wneg sneg ⌉, if w ≤0 (1) INT-Asym : Q(w) = ⌊w −z s ⌉ (2) Asymmetric Clipping The strategy of clipping, which involves constraining the range of weight values, has been recognized for its contribution to maintaining high accuracy after quantization (Sakr et al., 2022; Shao et al., 2023). However, naive clipping methods often fall short in effectiveness, while advanced clipping techniques come at a high computational cost which is prohibitive for practical QAT use (Li et al., 2019; Jung et al., 2019). To circumvent these limitations, we propose the use of asymmetric clipping solely during the initial phase, prior to the commencement of QAT. Asymmetric clipping at initialization provides a good starting point that significantly contributes to the final overall quantized model accuracy without incurring the prohibitive costs associated with iterative clipping optimization. To enable asymmetric clipping for QAT initialization, given input features X cached from a small calibration set, we conduct an automatic search for two optimal clipping values, α and β, for each layer of the model. These values aim to minimize the output difference after quantization. 
Formally, the objective is to optimize the following: α∗, β∗= argmin α,β ||Q(wc)X −wX|| wc = Clip(w, α, β) ( α ∈[min _val, 0) β ∈(0, max _val] (3) To demonstrate the efficacy of asymmetric quantization and clipping, we conduct a tensor-wise analysis. We selected a random weight tensor from the LLaMa-2-7B model and focused on a single output channel. As illustrated in Figure 3, our approach to asymmetric quantization and clipping achieves higher fidelity preservation compared to symmetric quantization. A more detailed ablation study on the impact of asymmetric quantization and clipping on model performance is presented in Table 3 in Section 4.4. 3.2 Self Distillation with CAKLD To better counteract the performance degradation caused by precision reduction, we propose to adopt 0.08 0.06 0.04 0.02 0.00 0.02 0.04 0.06 0 20 Count Original Weight Distribution 0.08 0.06 0.04 0.02 0.00 0.02 0.04 0.06 0 200 Count Symmetric Quantization 0.04 0.02 0.00 0.02 0.04 0 100 Count Asymmetric Quantization and Clipping (Ours) Figure 3: (Top) The original weight distribution of a single output channel in the final down projection layer of LLaMA-2-7B. (Middle&Bottom) The weight distribution after symmetric quantization and asymmetric quantization and clipping, both using 3-bit quantization with the group size of 128. Knowledge Distillation (KD) in QAT, where the full-precision model acts as a teacher and its quantized variant plays a student: L = D(PT ∥PS), (4) where D is a divergence measure of two distributions. PT and PS denote the full-precision and quantized model, respectively. The intuition for KD is two-fold. First, learning the token-level probability distributions potentially helps the quantized model better imitate its full-precision counterpart (Hinton et al., 2015), thereby re-gaining the strong downstream performance. Second, owing to the generative nature of LLM, it is easy to scale up the data size for QAT with the full-precision model. The divergence D chosen for distillation plays a crucial role. Agarwal et al. (2023) find that the mode-seeking behavior advocated by the Reverse KL divergence (i.e., DKL(PS ∥PT )) leads to better performance than Forward KL (i.e., DKL(PT ∥ PS)) on instruction tuning (Chung et al., 2022), while Forward KL promotes mode-covering and is superior on general text generation tasks like summarization (Narayan et al., 2018). To provide a general receipt for QAT, we aim to seek a way to trade off the mode-seeking and mode-covering behaviors automatically, instead of manual selection according to some empirical understanding of downstream tasks. To this end, we propose a novel ConfidenceAware KL divergence, shorted as CAKLD. It blends the Reverse KL and Forward KL with a coefficient γ estimated by the averaged token probability, so that the mode-seeking and mode-covering 105 Probability Teacher FKLD RKLD CAKLD ( = 0.7) CAKLD ( = 0.3) Figure 4: Comparison of Reverse KL, Forward KL and CAKLD, when a Gaussian distribution tries to fit a Gaussian mixture (Teacher). behaviors can be automatically traded off according to the full-precision model’s confidence on the training data: DCAKLD(PT ∥PS) = γDKL(PS ∥PT ) + (1 −γ)DKL(PT ∥PS) DKL(PT ∥PS) =E(x,y)∼D[ 1 |{y}| |{y}| X i=1 Ec∼PT (·|x,y<i)[log PT (c|x, y<i) PS(c|x, y<i) ]] γ = E(x,y)∼D[ 1 |{y}| |{y}| X i=1 PT (yi|x, y<i)] (5) Intuitively, when the full-precision model is confident on the training data, CAKLD will prefer more on the mode-seeking behaviors. 
Otherwise, CAKLD will advocate more on the mode-covering behaviors, as the full-precision model is not certain about the data and modeling its single mode is suboptimal. Figure 4 visualizes the difference between Reverse KLD, Forward KLD and CAKLD when a Gaussian distribution tries to fit a Gaussian mixture. It is clear that CAKLD manages to trade off mode-seeking and mode-covering behaviors with the coefficient. For a detailed performance comparison and in-depth analysis, please refer to Figure 6 and Appendix A.2 4 Experiments We evaluate BitDistiller on the LLaMA-2 (Touvron et al., 2023) families and domain-specific LLMs with sub-4–bit quantization. We have set up comparative experiments to demonstrate the proficiency of our method against existing PTQ and QAT methods. Our findings illustrate that BitDistiller substantially enhances both the general language performance and the accuracy of reasoning tasks. 4.1 Experimental Settings Tasks and Models Following (Frantar et al., 2022; Lin et al., 2023), we benchmark LLaMA-2 (Touvron et al., 2023) on general language tasks, including language modeling tasks (WikiText-2 (Merity et al., 2016)), common sense QA benchmarks (PIQA (Bisk et al., 2020), HellaSwag (Zellers et al., 2019), WinoGrande (Sakaguchi et al., 2021), ARC (Clark et al., 2018)) and in-context learning ability (MMLU (Hendrycks et al., 2020)) under a few-shot setting. We also consider the complex reasoning tasks and evaluate various sizes of domain-specific LLMs, including WizardCoder (Luo et al., 2023) on LLM-Humaneval-Benchmarks (Chen et al., 2021) in the setting of greedy decode, and MetaMath (Yu et al., 2023) on GSM8K (Cobbe et al., 2021). To evaluate the domain-specific LLMs of smaller sizes, we finetune OpenLLaMA-3B (Geng and Liu, 2023) with domain-specific datasets. Baselines PTQ baselines include vanilla roundto-nearest (RTN), GPTQ (Frantar et al., 2022), AWQ (Lin et al., 2023), SpQR (Dettmers et al., 2023b) and QuIP (Chee et al., 2023). QAT baselines include Omniquant (Shao et al., 2023), LLMQAT (Liu et al., 2023b) and TSLD (Kim et al., 2023a). Detailed PTQ and QAT settings can be found in appendix A.1. Quantization and Distillation We focus on 3bit/2-bit group-wise quantization, with a group size of 128 (represented as ’g’) as the default setting except for the 3B models with a group size of 64 because of the dimension constraint. Following (Liu et al., 2023b; Kim et al., 2023a), we utilize logits distillation. Prior to QAT, the coefficient γ, key for CAKLD, is pre-calculated from a subset of D. The implementation details and example analysis of CAKLD are available in Appendix A.2. Training Datasets We use the instruction-tuning data from Alpaca (Taori et al., 2023) and the training set of WikiText-2 for general language tasks. For code understanding and generation, we use Evol-Instruct-Code (Rosh, 2023). For math reasoning we use MetaMathQA (Yu et al., 2023). Given the instruction prompt x, sequence s = {x, y} where output y ∼p(·|x) have three different choices: Ground Truth yg, Student-generated Output yq and Teacher-generated Output yp. As suggested by (Agarwal et al., 2023; Zhou et al., 2023), we opt to generate the Teacher-generated Output yp using sampling with a temperature of 0.7 (Yuan et al., 2023). We conduct experiments in Section 4.4 for an ablation study on the choices of output y. (See Appendix A.3 for more details of 106 LLaMA-2-7B PPL ↓ MMLU (5s) PIQA Hella. Wino. 
ARC-c Avg BF16 5.47 46.45 77.86 57.14 68.35 43.34 58.63 RTN 6.65 38.65 75.24 53.70 67.32 38.56 54.69 3 Bits GPTQ 6.38 39.57 75.46 51.68 67.16 38.39 54.45 g128 AWQ 6.71 39.68 76.27 55.14 67.56 40.61 55.85 OmniQuant 6.10 41.22 77.47 54.41 67.09 39.08 55.85 LLM-QAT 6.02 41.32 77.26 54.74 68.35 40.61 56.46 BitDistiller (ours) 5.97 43.65 76.99 55.38 68.35 41.21 57.12 RTN 3453 24.12 53.43 26.33 49.96 21.58 35.08 GPTQ NaN 23.12 49.51 25.04 49.57 22.69 33.99 2 Bits AWQ 2.2e5 25.38 52.39 25.70 50.12 21.33 34.98 g128 OmniQuant 12.84 25.42 58.92 29.20 50.83 19.45 36.76 LLM-QAT 9.30 23.62 70.08 43.79 61.64 29.09 45.64 BitDistiller (ours) 8.08 29.25 73.61 48.70 61.09 33.27 49.18 Table 1: General language task results of BitDistiller versus established PTQ and QAT methods on LLaMA-2-7B Model. Our method achieves leading performance in both 3-bit and 2-bit quantization. training datasets composition). Training Implementation We leverage DeepSpeed (Rasley et al., 2020) and HuggingFace repository (Wolf et al., 2020) to devise a QAT-based KD framework enabling the distillation of models. The model optimization is facilitated through the AdamW optimizer (Loshchilov and Hutter, 2017), applied with zero weight decay. We initialize the constant learning rate to 8e-6 and set the sequence length to 1024 for the code-related task and 512 for others. 4.2 Evaluation on Language Modeling Tasks Table 1 presents a comparative analysis of BitDistiller’s performance against previous PTQ and QAT methods on general language tasks. BitDistiller surpasses competing methods in terms of WikiText-2 perplexity and MMLU (5-shot) accuracy. Furthermore, BitDistiller demonstrates consistent performance across various QA benchmarks. Notably, in 2-bit weight quantization, BitDistiller substantially increases the average accuracy by +3.54% over LLM-QAT (Liu et al., 2023b) and by +12.43% compared to the leading PTQ method (Shao et al., 2023). Similar results on LLaMA-213B and LLaMA-2-70B can be found in the Appendix A.4. 4.3 Evaluation on Reasoning Tasks Table 2 demonstrates the superior performance of BitDistiller on reasoning-based benchmarks, including HumanEval and GSM8K, across a range of domain-specific language model families. BitDistiller achieves improvements over other methods in both 3-bit and 2-bit quantization. Especially in 2-bit quantization, while other methods exhibit significant performance drops, BitDistiller maintains a commendable level of accuracy. Detailedly, our method outperforms LLM-QAT by a remarkable margin of 24.69%, achieving an accuracy of 61.33% on complex mathematical reasoning tasks. These outcomes bolster the potential for implementing ultra-low-precision inference deployment in practical reasoning tasks without substantially compromising performance. 4.4 Ablation Studies Asymmetric Quantization and Clipping In this ablation study, we evaluate the efficacy of quantization strategies on the LLaMA-2-7B model. Our approach examines the impact of asymmetric quantization and clipping techniques within QAT. We specifically assess the 3-bit and 2-bit quantization levels, reporting our findings in terms of Perplexity (PPL) and MMLU (5-shot). As demonstrated in Table 3, asymmetric quantization significantly enhances model performance. Notably, under a 2-bit configuration, PPL can be reduced from 3.4e2 to 16.94 in post-training. Furthermore, the application of asymmetric clipping during initialization yields additional performance gains upon training completion. 
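To make the quantizer being ablated concrete, the sketch below implements one common parameterization of the asymmetric INT branch of Eq. (2), namely a scale of (max - min) / (2^b - 1) with the zero point at the group minimum, together with a brute-force version of the clipping search of Eq. (3). It is an illustration rather than BitDistiller's code: the group size, bit width, candidate grid, and calibration activations are placeholders, and the NF-format branch of Eq. (1), which uses separate scales for positive and negative weights, is not shown.

import numpy as np

def quant_dequant_int_asym(w, n_bits=2):
    # Asymmetric integer quantization of one weight group: q = round((w - z) / s).
    qmax = 2 ** n_bits - 1
    lo, hi = w.min(), w.max()
    s = max(hi - lo, 1e-8) / qmax
    z = lo
    q = np.clip(np.round((w - z) / s), 0, qmax)
    return q * s + z                          # dequantized weights

def search_clip(w, X, n_bits=2, grid=8):
    # Grid search for alpha in [min, 0) and beta in (0, max] minimizing
    # || Q(clip(w, alpha, beta)) X - w X ||, per Eq. (3); the paper's automatic
    # search procedure is not spelled out here, so the grid is a simplification.
    ref = w @ X
    best, best_err = (w.min(), w.max()), np.inf
    for alpha in np.linspace(w.min(), w.min() / grid, grid):
        for beta in np.linspace(w.max() / grid, w.max(), grid):
            wc = np.clip(w, alpha, beta)
            err = np.linalg.norm(quant_dequant_int_asym(wc, n_bits) @ X - ref)
            if err < best_err:
                best, best_err = (alpha, beta), err
    return best

rng = np.random.default_rng(0)
w = rng.standard_normal(128) * 0.02           # one group of 128 weights (placeholder)
X = rng.standard_normal((128, 32))            # cached calibration activations (placeholder)
alpha, beta = search_clip(w, X)

During QAT, the forward pass would use the quantized-dequantized weights produced this way while gradients update the full-precision weights, as in Algorithm 1.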
See Appendix A.5 for integration with other PTQ methods. Data Generation In our analysis, we meticulously evaluated the logit information of the teacher model by computing the cross-entropy 107 Domain-specific LLMs HumanEval @WizardCoder GSM8K @MetaMath 3B 7B 13B 34B 3B 7B 13B BF16 23.17 54.88 62.80 71.95 36.40 66.41 72.30 RTN 4.27 34.15 50.00 33.54 17.50 59.30 68.51 GPTQ 4.30 46.34 55.48 63.41 6.72 62.11 68.75 3 Bits AWQ 16.46 45.73 53.04 67.07 21.87 62.34 68.67 g128 OmniQuant 10.36 44.51 54.88 68.90 23.67 61.70 68.28 LLM-QAT 18.29 48.78 57.92 66.46 26.25 60.78 66.62 BitDistiller (ours) 20.73 53.66 63.41 69.51 32.50 64.38 69.69 RTN 0.0 0.0 0.0 0.61 0.0 0.0 7.89 GPTQ 0.0 0.0 1.83 3.65 0.0 0.0 11.43 2 Bits AWQ 0.0 0.0 0.0 0.0 0.0 0.0 7.89 g128 OmniQuant 0.0 0.0 20.12 26.83 0.0 0.0 9.45 LLM-QAT 0.0 14.63 15.21 29.27 6.56 23.13 36.64 BitDistiller (ours) 7.31 36.59 42.07 46.34 16.09 51.02 61.33 Table 2: Reasoning task results of BitDistiller versus established PTQ and QAT methods on domain-specific LLMs. Our method achieves leading performance in both 3-bit and 2-bit quantization. LLaMA-2-7B PPL ↓ MMLU (5s) ↑ (start 7→end ) (start 7→end ) 3 Bits NF-Sym 6.45 7→6.10 38.28 7→39.27 g128 →NF-Asym 6.30 7→6.01 41.53 7→42.61 + Clip-Asym 6.08 7→5.97 42.90 7→43.65 2 Bits INT-Sym 2.4e5 7→2.5e5 24.95 7→26.03 g128 →INT-Asym 3.4e2 7→16.94 24.12 7→24.82 + Clip-Asym 17.98 7→8.08 26.75 7→29.25 Table 3: Ablation study of asymmetric quantization and clipping on WikiText2 perplexity and MMLU (5-shot). The "start 7→end" notation denotes the metric values before and after training. loss (CELoss) for various outputs y. Figure 5a illustrates that the data generated by the teacher model yp exhibits low CELoss, indicative of a high-confidence logit distribution, which in turn facilitates better convergence with our proposed CAKLD. The comparative performance results depicted in Figure 5b reveal that the use of teachergenerated data in conjunction with CAKLD yields superior outcomes when compared to employing either a fixed dataset or student-generated data yq. Distillation Objectives In Figure 6, we demonstrate the effectiveness of our proposed ConfidenceAware KL Divergence (CAKLD) by showcasing performance indicators for reasoning tasks under different objective functions. Our findings show that CAKLD outperforms other objective functions. Though JSD also has a bounded coefficient for in0 25 50 75 100 125 150 Token Index 0 2 4 6 8 10 Per Token CE Loss Ground Truth (yg) Student Generation (yq) Teacher Generation (yp) (a) GT (yg) Student (yq) Teacher (yp) 0 5 10 15 20 25 30 35 HumanEval Pass@1 26.83 32.31 36.59 QAT w.o. KD (b) Figure 5: Comparative analysis of using various data generation methods on WizardCoder-7B. (a) shows the per-token cross-entropy loss. (b) presents the HumanEval Pass@1. (‘QAT w.o. KD’ indicates the baseline where only the ground truth dataset is used for supervised fine-tuning, without knowledge distillation.) terpolation, in practice we observe that it has a weak ability to converge for QAT. 4.5 Analysis and Discussion Comparison with SpQR SpQR automatically identifies sensitivity outliers and stores them in sparse matrices at higher precision while compressing all other weights to lower bits. For a fair comparison, we experimented with the configuration of SpQR to achieve an average bit rate similar to BitDistiller. As shown in Table 4, BitDistiller consistently outperforms SpQR across various benchmarks. 
Since sparse representation is orthogonal to QAT with distillation, we plan to investigate how QAT can effectively manage outliers using higher 108 0 10 20 30 40 50 Evaluation 33.54 34.76 28.65 36.59 49.23 48.67 34.14 51.02 HumanEval GSM8K FKLD RKLD JSD CAKLD Figure 6: Performance comparison between different objective functions on WizardCoder-7B and MetaMath7B with domain-specific tasks. precision in sparse matrices. Model Avg bits Method PPL ↓ MMLU (5s) QA-avg LLama-2-7B 3.16 SpQR 6.06 40.96 59.61 3.15 BitDistiller 5.97 43.65 60.48 2.36 SpQR 11.99 26.35 48.47 2.15 BitDistiller 8.08 29.25 54.17 LLama-2-13B 3.16 SpQR 5.28 52.42 63.23 3.15 BitDistiller 5.20 53.21 63.90 2.28 SpQR 10.54 28.87 49.69 2.15 BitDistiller 6.78 37.50 57.63 Table 4: Performance comparison of quantized models using SpQR and BitDistiller on LLaMA-2-7B and LLaMA-2-13B. Comparison with QuIP QuIP enhances 2-bit PTQ for LLMs through incoherence processing. Its subsequent iteration, QuIP#1, refines this approach by shifting from scalar quantization to vector quantization via lattice codebooks, significantly narrowing the performance gap with 16-bit models. For a consistent comparison, we utilize the BF16 pretrained model and then apply Quip(#) and BitDistiller. As shown in Table 5, our BitDistiller surpasses QuIP across all benchmarks. In comparison with QuIP#, BitDistiller retains its superior performance in language modeling and programming, while QuIP# outperforms in mathematical reasoning. Being orthogonal to QAT with distillation, PTQ incorporating incoherence processing and vector quantization could potentially serve as an effective initialization method for BitDistiller. We intend to explore whether the integration of QuIP(#) into BitDistiller can further improve the performance of low-bit models. Comparison with TSLD Prior work (Kim et al., 2023a) introduced Token-Scaled Logit Distillation 1https://cornell-relaxml.github.io/ quip-sharp/ Method LLaMA-2-7B WizardCoder-7B MetaMath-7B PPL↓ MMLU (5s) QA-avg HumanEval GSM8K BF16 5.47 46.45 61.67 54.88 66.41 Quip 55.65 24.70 39.23 0.0 0.0 2 Bits Quip# 8.97 30.90 52.40 12.96 60.00 BitDistiller 8.08 29.25 54.17 36.58 51.02 Table 5: Performance comparison of 2-bit quantized models using QuIP, QuIP#, and BitDistiller on LLaMA2-7B, WizardCoder-7B, and MetaMath-7B. (TSLD) to alleviate overfitting during QAT. To facilitate a direct and fair comparison between TSLD and our CAKLD, we incorporate TSLD into the BitDistiller framework by replacing CAKLD with TSLD while keeping all other settings unchanged. As depicted in Figure 7, CAKLD not only converges more rapidly but also delivers superior overall performance compared to TSLD. 0 20 40 60 80 Steps 8.0 8.5 9.0 9.5 10.0 10.5 11.0 11.5 12.0 PPL TSLD CAKLD (ours) 10 20 30 40 50 60 Steps 10 20 30 40 50 Evaluation GSM8K w. CAKLD GSM8K w. TSLD HumanEval w. CAKLD HumanEval w. TSLD Figure 7: Comparison of TSLD and CAKLD on perplexity (left) and reasoning tasks performance (right). Effectiveness of Self-Distillation Table 6 compares 2-bit QAT performance using the LLaMA-27B or larger LLaMA-2-13B as the teacher model. Surprisingly, in practice the larger 13B model didn’t improve accuracy, hinting that a teacher with the same model architecture as the student may enhance weight alignment and probability distribution matching, thereby improving model effectiveness. Further investigation and deeper analysis are needed in future work to fully understand the implications of different teacher-student sizes and architectures in QAT. 
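The exact definition of CAKLD is given in the method section of the paper and is not reproduced in this excerpt; Appendix A.2 describes it as using the teacher's average token confidence γ to balance a mode-seeking against a mean-seeking KL term. One plausible sketch under that reading follows; the precise weighting used by BitDistiller may differ in detail.

```python
# Hedged sketch of a confidence-aware KL objective: gamma interpolates between
# mode-seeking (reverse) KL and mean-seeking (forward) KL. This is an assumption-
# labeled illustration, not the verbatim BitDistiller loss.
import torch
import torch.nn.functional as F

def cakld_loss(student_logits, teacher_logits, gamma):
    """student_logits, teacher_logits: [batch, seq, vocab]; gamma: scalar in [0, 1]."""
    s_log_p = F.log_softmax(student_logits, dim=-1)
    t_log_p = F.log_softmax(teacher_logits, dim=-1)
    t_p, s_p = t_log_p.exp(), s_log_p.exp()
    fkl = (t_p * (t_log_p - s_log_p)).sum(-1)   # forward, mean-seeking KL(teacher || student)
    rkl = (s_p * (s_log_p - t_log_p)).sum(-1)   # reverse, mode-seeking KL(student || teacher)
    return (gamma * rkl + (1.0 - gamma) * fkl).mean()

student = torch.randn(2, 8, 32000, requires_grad=True)
teacher = torch.randn(2, 8, 32000)   # in training, teacher logits carry no gradients
loss = cakld_loss(student, teacher, gamma=0.7)
loss.backward()
print(loss.item())
```

Here γ would be pre-computed from the teacher's confidence, as described in Appendix A.2.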
LLaMA-2-7B (2 bits, g128)
Quantized Student   Teacher   PPL ↓   MMLU (5s) ↑
7B                  13B       8.12    28.27
7B                  7B        8.08    29.25
Table 6: Performance comparison of 2-bit quantized models using LLaMA-2-13B and LLaMA-2-7B as the teacher model.

Training Efficiency Table 7 highlights the efficiency of BitDistiller compared to LLM-QAT (Liu et al., 2023b) in quantizing the WizardCoder-7B model. The results demonstrate a dramatic reduction in the total time required for quantization: BitDistiller completes the process in approximately 3 hours on a single A100-80G GPU, as opposed to the hundreds of GPU hours required by LLM-QAT. (The original LLM-QAT uses 64 GPUs; for a direct and fair comparison, we evaluate the GPU hours needed for LLM-QAT on a single GPU.)

Method         Devices        #Data   Data Gen (h)   Quant Init (h)   QAT (h)   Total (h)
LLM-QAT        1 × A100-80G   100K    270            0                10.64     280.64
BitDistiller   1 × A100-80G   2K      1.47           0.63             0.92      3.02
Table 7: Time required for LLM-QAT and BitDistiller to quantize WizardCoder-7B on an NVIDIA A100-80G.

5 Conclusion
BitDistiller leverages QAT with self-distillation to boost sub-4-bit LLM performance. The asymmetric quantization and clipping strategies, coupled with the innovative CAKLD objective, facilitate faster learning and superior performance. BitDistiller outperforms existing PTQ and QAT methods, achieving notable improvements in 3/2-bit settings across diverse language and reasoning tasks. Moreover, BitDistiller is more cost-efficient, requiring less data and fewer training resources.

Limitations
Despite the promising results demonstrated by BitDistiller, it is important to acknowledge certain limitations and areas for future investigation. A key limitation lies in the empirical nature of our findings. For instance, we do not yet fully understand the counterintuitive outcome in which a 7B model outperforms a 13B model as the teacher when distilling a 2-bit 7B student. Sharing the same model architecture may be the reason, but this has not been explained or analyzed in detail. This highlights the need for a deeper investigation and theoretical exploration to complement our empirical observations. Looking ahead, we aim to extend BitDistiller to the realm of 1-bit (binary) quantization. While this presents a more challenging scenario, it also offers the potential for significant advancements in efficient LLM inference, as binary weights enable computation with only additions and no multiplications. Moreover, the current iteration of BitDistiller applies exclusively to scalar quantization. As future work, we plan to explore the adaptation of BitDistiller to vector quantization. Preliminary research in this area indicates that vector quantization could yield substantial benefits, and incorporating it into our framework represents a natural and promising progression of our research.

Acknowledgements
This work was partially supported by Hong Kong CRF grant under Grant No. C70004-22G. We would also like to thank the HPC-AI-Integrated Intelligent Computing center of HKUST(GZ) for providing some of the hardware platforms in this project.

References
Rishabh Agarwal, Nino Vieillard, Piotr Stanczyk, Sabela Ramos, Matthieu Geist, and Olivier Bachem. 2023. Gkd: Generalized knowledge distillation for auto-regressive sequence models. arXiv preprint arXiv:2306.13649.
Yoshua Bengio, Nicholas Léonard, and Aaron Courville. 2013. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432.
Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. 2020.
Piqa: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 7432–7439. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Jerry Chee, Yaohui Cai, Volodymyr Kuleshov, and Christopher De Sa. 2023. Quip: 2-bit quantization of large language models with guarantees. arXiv preprint arXiv:2307.13304. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan 110 Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating large language models trained on code. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416. Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168. Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. 2023a. Qlora: Efficient finetuning of quantized llms. arXiv preprint arXiv:2305.14314. Tim Dettmers, Ruslan Svirschevski, Vage Egiazarian, Denis Kuznedelev, Elias Frantar, Saleh Ashkboos, Alexander Borzunov, Torsten Hoefler, and Dan Alistarh. 2023b. Spqr: A sparse-quantized representation for near-lossless llm weight compression. arXiv preprint arXiv:2306.03078. Tim Dettmers and Luke Zettlemoyer. 2023. The case for 4-bit precision: k-bit inference scaling laws. In International Conference on Machine Learning, pages 7750–7774. PMLR. Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh. 2022. Gptq: Accurate post-training quantization for generative pre-trained transformers. arXiv preprint arXiv:2210.17323. Xinyang Geng and Hao Liu. 2023. Openllama: An open reproduction of llama. Amir Gholami, Sehoon Kim, Zhen Dong, Zhewei Yao, Michael W Mahoney, and Kurt Keutzer. 2022. A survey of quantization methods for efficient neural network inference. In Low-Power Computer Vision, pages 291–326. Chapman and Hall/CRC. Yuxian Gu, Li Dong, Furu Wei, and Minlie Huang. 2023. Knowledge distillation of large language models. arXiv preprint arXiv:2306.08543. 
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Sangil Jung, Changyong Son, Seohyung Lee, Jinwoo Son, Jae-Joon Han, Youngjun Kwak, Sung Ju Hwang, and Changkyu Choi. 2019. Learning to quantize deep networks by optimizing quantization intervals with task loss. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4350–4359. Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361. Minsoo Kim, Sihwa Lee, Sukjin Hong, Du-Seong Chang, and Jungwook Choi. 2022. Understanding and improving knowledge distillation for quantization-aware training of large transformer encoders. arXiv preprint arXiv:2211.11014. Minsoo Kim, Sihwa Lee, Janghwan Lee, Sukjin Hong, Du-Seong Chang, Wonyong Sung, and Jungwook Choi. 2023a. Token-scaled logit distillation for ternary weight generative language models. arXiv preprint arXiv:2308.06744. Sehoon Kim, Coleman Hooper, Amir Gholami, Zhen Dong, Xiuyu Li, Sheng Shen, Michael W Mahoney, and Kurt Keutzer. 2023b. Squeezellm: Dense-and-sparse quantization. arXiv preprint arXiv:2306.07629. Andrey Kuzmin, Mart Van Baalen, Yuwei Ren, Markus Nagel, Jorn Peters, and Tijmen Blankevoort. 2022. Fp8 quantization: The power of the exponent. Advances in Neural Information Processing Systems, 35:14651–14662. Rundong Li, Yan Wang, Feng Liang, Hongwei Qin, Junjie Yan, and Rui Fan. 2019. Fully quantized network for object detection. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2810–2819. Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Xingyu Dang, and Song Han. 2023. Awq: Activationaware weight quantization for llm compression and acceleration. arXiv preprint arXiv:2306.00978. Shih-yang Liu, Zechun Liu, Xijie Huang, Pingcheng Dong, and Kwang-Ting Cheng. 2023a. Llm-fp4: 4-bit floating-point quantized transformers. arXiv preprint arXiv:2310.16836. Zechun Liu, Barlas Oguz, Changsheng Zhao, Ernie Chang, Pierre Stock, Yashar Mehdad, Yangyang Shi, Raghuraman Krishnamoorthi, and Vikas Chandra. 2023b. Llm-qat: Data-free quantization aware training for large language models. arXiv preprint arXiv:2305.17888. Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101. 111 Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. 2023. Wizardcoder: Empowering code large language models with evolinstruct. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843. Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don’t give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797–1807, Brussels, Belgium. Association for Computational Linguistics. Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. 2020. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. 
In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 3505–3506. Nick Rosh. 2023. Evol-teacher: Recreating wizardcoder. https://github.com/nickrosh/ evol-teacher. Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2021. Winogrande: An adversarial winograd schema challenge at scale. Communications of the ACM, 64(9):99–106. Charbel Sakr, Steve Dai, Rangha Venkatesan, Brian Zimmer, William Dally, and Brucek Khailany. 2022. Optimal clipping and magnitude-aware differentiation for improved quantization-aware training. In International Conference on Machine Learning, pages 19123–19138. PMLR. Yuzhang Shang, Zhihang Yuan, Qiang Wu, and Zhen Dong. 2023. Pb-llm: Partially binarized large language models. arXiv preprint arXiv:2310.00034. Wenqi Shao, Mengzhao Chen, Zhaoyang Zhang, Peng Xu, Lirui Zhao, Zhiqian Li, Kaipeng Zhang, Peng Gao, Yu Qiao, and Ping Luo. 2023. Omniquant: Omnidirectionally calibrated quantization for large language models. arXiv preprint arXiv:2308.13137. Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W Mahoney, and Kurt Keutzer. 2020. Q-bert: Hessian based ultra low precision quantization of bert. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8815–8821. Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford alpaca: An instruction-following llama model. https:// github.com/tatsu-lab/stanford_alpaca. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. Hongyu Wang, Shuming Ma, Li Dong, Shaohan Huang, Huaijie Wang, Lingxiao Ma, Fan Yang, Ruiping Wang, Yi Wu, and Furu Wei. 2023. Bitnet: Scaling 1-bit transformers for large language models. arXiv preprint arXiv:2310.11453. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, JamesT. Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. 2023. Metamath: Bootstrap your own mathematical questions for large language models. Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Chuanqi Tan, and Chang Zhou. 2023. Scaling relationship on learning mathematical reasoning with large language models. arXiv preprint arXiv:2308.01825. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. Hellaswag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830. Wei Zhang, Lu Hou, Yichun Yin, Lifeng Shang, Xiao Chen, Xin Jiang, and Qun Liu. 2020. Ternarybert: Distillation-aware ultra-low bit bert. arXiv preprint arXiv:2009.12812. Yijia Zhang, Sicheng Zhang, Shijie Cao, Dayou Du, Jianyu Wei, Ting Cao, and Ningyi Xu. 2023a. Afpq: Asymmetric floating point quantization for llms. 
Yijia Zhang, Lingran Zhao, Shijie Cao, Wenqiang Wang, Ting Cao, Fan Yang, Mao Yang, Shanghang Zhang, and Ningyi Xu. 2023b. Integer or floating point? new outlooks for low-bit quantization on large language models. arXiv preprint arXiv:2305.12356. Yongchao Zhou, Kaifeng Lyu, Ankit Singh Rawat, Aditya Krishna Menon, Afshin Rostamizadeh, Sanjiv Kumar, Jean-François Kagy, and Rishabh Agarwal. 2023. Distillspec: Improving speculative decoding via knowledge distillation. 112 Xunyu Zhu, Jian Li, Yong Liu, Can Ma, and Weiping Wang. 2023. A survey on model compression for large language models. arXiv preprint arXiv:2308.07633. A Appendix A.1 Details of PTQ and QAT Configuration 0 10 20 30 40 50 60 Evaluation CODE GSM8K GPTQ - Wiki GPTQ - Domain-Specific Data AWQ - Pile AWQ - Domain-Specific Data Figure 8: Comparative Evaluation of PTQ Methods Using Various Calibration Datasets We evaluate PTQ methods by examining the impact of different calibration dataset distributions. Illustrated in Figure 8, calibrating with domainspecific data significantly enhances task-specific performance. For a fair comparison, all PTQ methods utilize the default calibration datasets for general language tasks and domain-specific calibration datasets (Rosh, 2023; Yu et al., 2023) for reasoning tasks. Regarding QAT methods, it should be noted that the use of symmetric quantization in LLM-QAT results in degradation when grouped quantization is applied. To ensure a fair comparison, we replicate the approach with our setup and employ asymmetric uniform quantization. A.2 Implementation Details and Analysis of Confidence-Aware KLD 0 50 100 150 200 250 300 Token Index 0.0 0.2 0.4 0.6 0.8 1.0 Confidence MEAN (a) Text Generation Task 0 50 100 150 200 250 300 Token Index 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Confidence MEAN (b) Reasoning Task Figure 9: Per-token confidence scores when teacher model (full-precision) conducting text generation task and reasoning task. We use a straightforward method in the precalculation of the coefficient γ. We utilize ten batches of training data to perform forward passes without updating parameters. Subsequently, we obtain the logits from the teacher model to compute the average token probability. In Figure 9, we have conducted analysis by examining the confidence scores of the teacher model in various tasks during next-word prediction. This analysis reveals that confidence levels can vary in text generation tasks, in contrast to reasoning tasks where each step is critical. Notably, in text generation tasks using LLMs, relying solely on the highest conditional probability through Greedy Search may result in local optima, overlooking more optimal sequences. These observations advocate for a mean-seeking Kullback-Leibler (KL) approach, encouraging the student model to encompass all potential modes of the teacher, thereby more effectively capturing the teacher’s general generative capabilities. In reasoning tasks, where the teacher model shows high confidence in next-word predictions, the student model should concentrate on learning the predominant mode from the teacher. Our proposed method, CAKLD, is designed to balance these two distinct modes effectively. A.3 Training Datasets Examples For general language tasks, we mix token sequences from Alpaca and WikiText-2 datasets with a ratio of 2:1. Since WikiText-2 lacks explicit instructions, we utilize the first 128 tokens from the corpus as the input prompt for the teacher model’s generation process, setting the temperature to 0.7. 
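The γ pre-calculation described in Appendix A.2 above can be sketched as follows: run the teacher on a few batches without updating any parameters and average the probability it assigns to each next token. The dataloader and teacher are placeholders, and reading the probability at the observed next token is an assumption about an implementation detail not stated here.

```python
# Sketch of estimating the CAKLD coefficient gamma: average next-token probability
# of the teacher over a handful of batches, with no parameter updates.
import torch
import torch.nn.functional as F

@torch.no_grad()
def estimate_gamma(teacher, dataloader, num_batches=10):
    probs = []
    for step, batch in enumerate(dataloader):
        if step >= num_batches:
            break
        input_ids = batch["input_ids"]          # [batch, seq]
        logits = teacher(input_ids).logits      # [batch, seq, vocab]
        p = F.softmax(logits[:, :-1, :], dim=-1)
        next_tokens = input_ids[:, 1:].unsqueeze(-1)
        token_p = p.gather(-1, next_tokens).squeeze(-1)   # teacher confidence per token
        probs.append(token_p.flatten())
    return torch.cat(probs).mean().item()

# gamma = estimate_gamma(teacher_model, train_loader)   # then fed into the CAKLD objective
```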
For tasks related to code understanding and generation, we employ the Evol-Instruct-Code dataset. For mathematical reasoning, we utilize MetaMathQA. Examples of the training data utilized are shown in Table 9. It is essential to highlight that our self-distillation process utilizes only a small portion of the involved datasets. A.4 Evaluation of General Language Tasks on LLaMA-2-13B and LLaMA-2-70B Additional results for the General Language Tasks for LLaMA-2-13B and LLaMA-2-70B are presented in Table 10 and Table 11, respectively. A.5 Integration with AWQ For Quantization Strategies As shown in Table 8, we explore the efficacy of combining asymmetric clipping with AWQ during the self-distillation process. Our results indicate 113 LLaMA-2-7B PPL ↓ (start 7→end ) INT-Asym 6.65 7→6.15 3 Bits AWQ 6.48 7→6.09 g128 Clip-Asym 6.21 7→6.00 AWQ + Clip-Asym 6.18 7→6.00 INT-Asym 3.4e2 7→16.94 2 Bits AWQ 2.2e5 7→Inf g128 Clip-Asym 17.98 7→8.08 AWQ + Clip-Asym 16.61 7→8.13 Table 8: Results of quantization initialization for QAT combining with AWQ on PPL of WikiText-2. that asymmetric clipping significantly enhances robustness in sub-4-bit quantization scenarios. For instance, at the 2-bit quantization level, both INTAsym and AWQ methods are unable to complete the task. Conversely, Clip-Asym not only succeeds but also achieves a marked improvement in perplexity. It is also noteworthy that while integrating AWQ prior to QAT yields improvements initially, there is no additional performance gain after training. This suggests that a straightforward clipping approach is sufficiently effective for initializing QAT. 114 Tasks Resources Number Instruction Teacher Response General Language Tasks Alpaca 4K Find the five steps to meditate 1. Find a quiet place to meditate\n2. Sit in a comfortable position\n3. Close your eyes and take a deep breath\n4. Focus your attention on your breath\n5. When thoughts invade your mind, gently bring your attention back to your breath General Language Tasks WikiText 2K In December 1856 , Rice brought forward two bills in Congress : an enabling act that would allow Minnesota to form a state cons itution, and a bill that would allow Missouri to secede from the Union. The bills were defeated, and Rice was removed from the chairmanship of the House Judiciary Committee. ...... Code-related Tasks Evol-Instruct-Code 2K Create a nested loop to print every combination of numbers between 0-9, excluding any combination that contains the number 5. Additionally, exclude any combination that contains a repeating digit. Implement the solution without using any built-in functions or libraries to check for repeating digits. Here’s one way to implement the solution:\n“‘ python\r\n for i in range(10):...... Math-related Tasks MetaMathQA 2K A store is comparing their profits throughout the year. They had profits of $1,500 in the first quarter of the year, $3,000 in the third quarter, and $2,000 in the fourth quarter. If their annual profits are $8,000, how much profit, in dollars, did they make in the second quarter? \nThe annual profit is the sum of the profits in each quarter, so $8,000 = $1,500 + $3,000 + $2,000 + x, where x is the profit in the second quarter.\nTo find x, we need to isolate it on one side of the equation....... Table 9: The Training Dataset examples for different tasks. LLaMA-2-13B PPL ↓ MMLU (5s) PIQA Hella. Wino. 
ARC-c Avg BF16 4.88 55.54 79.16 60.13 72.14 48.12 63.02 RTN 5.52 50.74 78.35 57.75 71.11 43.86 60.36 3 Bits GPTQ 5.41 50.63 77.26 56.84 70.72 42.83 59.66 g128 AWQ 5.47 49.64 77.09 57.52 70.32 43.86 59.69 OmniQuant 5.48 48.97 77.64 57.08 70.88 44.28 59.77 LLM-QAT 5.32 51.60 78.29 58.45 70.56 44.62 60.70 BitDistiller (ours) 5.20 53.21 78.67 58.66 71.59 46.67 61.76 RTN 109.21 24.74 57.56 32.56 50.75 21.84 37.49 GPTQ 15.08 23.70 56.04 30.99 51.22 19.28 36.25 2 Bits AWQ 1.2e5 27.04 53.16 25.82 51.70 23.04 36.15 g128 OmniQuant 25.69 26.09 61.81 31.92 51.38 22.27 38.69 LLM-QAT 7.80 29.37 74.10 49.49 63.14 33.87 49.99 BitDistiller (ours) 6.78 37.50 75.84 51.30 65.90 37.46 53.60 Table 10: General language task results of BitDistiller versus established PTQ and QAT Methods on LLaMA-213B Model. Our method achieves leading performance in both 3-bit and 2-bit quantization. 115 LLaMA-2-70B PPL ↓ PIQA Hella. Wino. ARC-c Avg BF16 3.32 82.32 64.68 77.74 54.18 69.73 RTN 28.45 94.96 40.11 55.17 26.45 46.67 GPTQ 8.35 62.79 36.54 57.30 23.97 45.15 2 Bits AWQ 7e5 52.29 25.82 48.15 22.78 37.18 OmniQuant 9.77 70.72 45.18 56.20 31.40 50.88 BitDistiller (ours) 5.54 79.54 60.38 75.37 47.10 65.60 Table 11: General language task results of BitDistiller versus established PTQ and QAT Methods on LLaMA-270B Model. Our method achieves leading performance in 2-bit quantization. 116
2024
7
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1280–1297 August 11-16, 2024 ©2024 Association for Computational Linguistics DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models Damai Dai†‡∗, Chengqi Deng‡, Chenggang Zhao‡§∗, Runxin Xu‡, Huazuo Gao‡, Deli Chen‡, Jiashi Li‡, Wangding Zeng‡, Xingkai Yu‡¶∗, Y. Wu‡, Zhenda Xie‡, Y.K. Li‡, Panpan Huang‡, Fuli Luo‡, Chong Ruan‡, Zhifang Sui†, Wenfeng Liang‡⋄ †State Key Laboratory of Multimedia Information Processing, Peking University ‡DeepSeek-AI §Institute for Interdisciplinary Information Sciences, Tsinghua University ¶National Key Laboratory for Novel Software Technology, Nanjing University {daidamai,szf}@pku.edu.cn, {damai.dai,wenfeng.liang}@deepseek.com https://github.com/deepseek-ai/DeepSeek-MoE Abstract In the era of large language models, Mixtureof-Experts (MoE) is a promising architecture for managing computational costs when scaling up model parameters. However, conventional MoE architectures like GShard, which activate the top-K out of N experts, face challenges in ensuring expert specialization, i.e. each expert acquires non-overlapping and focused knowledge. In response, we propose the DeepSeekMoE architecture towards ultimate expert specialization. It involves two principal strategies: (1) finely segmenting the experts into mN ones and activating mK from them, allowing for a more flexible combination of activated experts; (2) isolating Ks experts as shared ones, aiming at capturing common knowledge and mitigating redundancy in routed experts. Starting from a modest scale with 2B parameters, we demonstrate that DeepSeekMoE 2B achieves comparable performance with GShard 2.9B, which has 1.5× expert parameters and computation. In addition, DeepSeekMoE 2B nearly approaches the performance of its dense counterpart with the same number of total parameters, which sets the upper bound of MoE models. Subsequently, we scale up DeepSeekMoE to 16B parameters and show that it achieves comparable performance with DeepSeek 7B and LLaMA2 7B, with only about 40% of computations. 1 Introduction Recent research and practices have empirically demonstrated that, with sufficient training data available, scaling language models with increased parameters and computational budgets can yield remarkably stronger models (Brown et al., 2020; OpenAI, 2023; Touvron et al., 2023a; Hoffmann et al., 2022; DeepSeek-AI, 2024). However, the endeavor to scale models to an extremely large scale is also associated with exceedingly high computational costs. Considering the substantial costs, ∗Contribution during internship at DeepSeek-AI. ⋄Corresponding author. the Mixture-of-Experts (MoE) architecture (Jacobs et al., 1991; Jordan and Jacobs, 1994; Shazeer et al., 2017) has emerged as a popular solution, which enables parameter scaling while concurrently keeping modest computational costs. Despite the promising potential of MoE architectures, existing MoE architectures like GShard (Lepikhin et al., 2021) potentially suffer from issues of knowledge hybridity and knowledge redundancy: (1) Knowledge Hybridity: existing MoE practices often employ a limited number of experts, and thus tokens assigned to a specific expert will be likely to cover diverse knowledge. Consequently, the designated expert will intend to assemble vastly different types of knowledge in its parameters, which are hard to utilize simultaneously. 
(2) Knowledge Redundancy: tokens assigned to different experts may require common knowledge. As a result, multiple experts may converge in acquiring shared knowledge in their respective parameters, thereby leading to redundancy in expert parameters. These issues collectively limit the expert specialization in MoE models, i.e., each expert acquires non-overlapping and focused knowledge. In response to the aforementioned issues, we introduce DeepSeekMoE, an innovative MoE architecture specifically designed towards ultimate expert specialization. Our architecture involves two principal strategies: (1) Fine-Grained Expert Segmentation: while maintaining the number of parameters constant, we segment the experts into a finer granularity by splitting the FFN intermediate hidden dimension. Correspondingly, keeping a constant computational cost, we also activate more fine-grained experts to enable a more flexible and adaptable combination of activated experts. Finegrained expert segmentation allows diverse knowledge to be decomposed more finely and be learned more precisely into different experts, where each expert will retain a higher level of specialization. In addition, the increased flexibility in combining 1280 activated experts also contributes to more accurate knowledge acquisition. (2) Shared Expert Isolation: we isolate certain experts to serve as shared experts that are always activated, aiming at capturing and consolidating common knowledge across varying contexts. Through compressing common knowledge into these shared experts, redundancy among other routed experts will be mitigated. This can enhance the parameter efficiency and ensure that each routed expert remains specialized by focusing on distinctive aspects. These architectural innovations in DeepSeekMoE offer opportunities to train a parameter-efficient MoE language model where each expert is highly specialized. Starting from a modest scale with 2B parameters, we validate the advantages of the DeepSeekMoE architecture. Empirical results on 12 diverse benchmarks indicate that DeepSeekMoE 2B surpasses GShard 2B (Lepikhin et al., 2021) by a substantial margin, and even matches GShard 2.9B, a larger MoE model with 1.5× expert parameters and computation. Remarkably, we find that DeepSeekMoE 2B nearly approaches the performance of its dense counterpart with an equivalent number of parameters, which sets the strict upper bound of MoE language models. We also conduct elaborate ablation studies and specialization analysis, and the studies validate the effectiveness of our main strategies, and provide evidence supporting that DeepSeekMoE can achieve higher expert specialization. Subsequently, we scale up the model parameters to 16B and train DeepSeekMoE 16B on a largescale corpus with 2T tokens. Evaluation results reveal that with only about 40% of computations, it achieves comparable performance with DeepSeek 7B (DeepSeek-AI, 2024) and LLaMA2 7B (Touvron et al., 2023b), two strong 7B dense models. Our contributions are summarized as follows: (1) We introduce DeepSeekMoE, an innovative MoE architecture aiming at achieving ultimate expert specialization. (2) We conduct extensive experiments to empirically validate the effectiveness of DeepSeekMoE and reveal its high level of expert specialization. (3) We scale up DeepSeekMoE to train a 16B MoE model which shows strong performance. (4) We will release the code and model checkpoint of DeepSeekMoE 16B to the public. 
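The two claims above, that fine-grained segmentation keeps total and activated expert parameters (and hence computation) constant while sharply increasing the number of possible expert combinations, can be checked numerically. The hidden sizes below are illustrative placeholders, and the FFN parameter count is simplified to two projection matrices with biases ignored.

```python
# Sanity check of fine-grained expert segmentation: split N experts into m*N smaller
# ones (intermediate dimension divided by m) and activate m*K instead of K.
import math

d_model, d_ffn = 1280, 5120          # placeholder dimensions
N, K, m = 16, 2, 4                   # the example used in Section 3.1

ffn_params = lambda d_inter: 2 * d_model * d_inter   # up- and down-projection only

assert N * ffn_params(d_ffn) == (m * N) * ffn_params(d_ffn // m)      # total params unchanged
assert K * ffn_params(d_ffn) == (m * K) * ffn_params(d_ffn // m)      # activated params unchanged

print(math.comb(N, K))          # 120 combinations for top-2 of 16
print(math.comb(m * N, m * K))  # 4,426,165,368 combinations for top-8 of 64
```

The combination counts reproduce the figures quoted later in Section 3.1.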
2 Preliminaries
We first introduce a generic MoE architecture for Transformer language models. A standard Transformer language model is constructed by stacking L layers of standard Transformer blocks, where each block can be represented as follows:
$u^{l}_{1:T} = \text{Self-Att}\big(h^{l-1}_{1:T}\big) + h^{l-1}_{1:T}$,  (1)
$h^{l}_{t} = \text{FFN}\big(u^{l}_{t}\big) + u^{l}_{t}$,  (2)
where $T$ denotes the sequence length, $u^{l}_{1:T} \in \mathbb{R}^{T \times d}$ are the hidden states after the $l$-th attention module, and $h^{l}_{t} \in \mathbb{R}^{d}$ is the output hidden state of the $t$-th token after the $l$-th Transformer block. For brevity, we omit the layer normalization. A typical practice to construct an MoE language model usually substitutes Feed-Forward Networks (FFNs) in a Transformer with MoE layers at specified intervals (Fedus et al., 2021; Lepikhin et al., 2021; Du et al., 2022; Zoph, 2022). An MoE layer is composed of multiple experts, where each expert is structurally identical to a standard FFN. Then, each token will be assigned to a few experts. If the $l$-th FFN is substituted with an MoE layer, its computation can be expressed as:
$h^{l}_{t} = \sum_{i=1}^{N} \big( g_{i,t}\, \text{FFN}_{i}(u^{l}_{t}) \big) + u^{l}_{t}$,  (3)
$g_{i,t} = \begin{cases} s_{i,t}, & s_{i,t} \in \text{Topk}(\{ s_{j,t} \mid 1 \le j \le N \}, K), \\ 0, & \text{otherwise}, \end{cases}$  (4)
$s_{i,t} = \text{Softmax}_{i}\big({u^{l}_{t}}^{\top} e^{l}_{i}\big)$,  (5)
where $N$ denotes the total number of experts, $\text{FFN}_{i}(\cdot)$ is the $i$-th expert FFN, $g_{i,t}$ denotes the gate value for the $i$-th expert, $s_{i,t}$ denotes the token-to-expert affinity, $\text{Topk}(\cdot, K)$ denotes the set comprising the $K$ highest affinity scores among those calculated for the $t$-th token and all $N$ experts, and $e^{l}_{i}$ is the centroid of the $i$-th expert in the $l$-th layer. Note that for each token, only $K$ out of $N$ gate values are nonzero. This sparsity property ensures computational efficiency within an MoE layer.
3 DeepSeekMoE Architecture
On top of the generic MoE architecture, DeepSeekMoE introduces two principal strategies, fine-grained expert segmentation and shared expert isolation, as illustrated in Figure 1. Both strategies aim at elevating the level of expert specialization.
Figure 1: Illustration of DeepSeekMoE. (a) showcases an MoE layer with the conventional top-2 routing strategy. (b) illustrates the fine-grained expert segmentation strategy. Subsequently, (c) introduces the shared expert isolation strategy, constituting the complete DeepSeekMoE architecture. (Panels: (a) Conventional Top-2 Routing; (b) + Fine-grained Expert Segmentation; (c) + Shared Expert Isolation (DeepSeekMoE).)
3.1 Fine-Grained Expert Segmentation
In scenarios where the number of experts is limited, tokens assigned to a particular expert will be more likely to cover diverse types of knowledge. As a consequence, the designated expert will tend to learn vastly different types of knowledge in its parameters, which are hard to utilize simultaneously. However, if each token can be routed to more experts, diverse knowledge will gain the potential to be decomposed and learned in different experts respectively, where each expert can still remain specialized and focused. In pursuit of this goal, while maintaining a consistent number of expert parameters and computational cost, we segment the experts with a finer granularity. To be specific, on top of a typical MoE architecture shown in Figure 1(a), we segment each expert FFN into $m$ smaller experts by reducing the FFN intermediate hidden dimension to $1/m$ times its original size.
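To make Eqs. (3)-(5) concrete before the DeepSeekMoE variants are introduced, here is a minimal sketch of conventional top-K gating; the tensor shapes are illustrative and not tied to any specific model size.

```python
# Minimal sketch of top-K MoE gating: softmax affinities against learnable expert
# centroids, with only the K largest affinities kept as nonzero gates.
import torch
import torch.nn.functional as F

def topk_gates(u, centroids, k):
    """u: [tokens, d_model]; centroids: [n_experts, d_model]; returns [tokens, n_experts]."""
    s = F.softmax(u @ centroids.t(), dim=-1)            # s_{i,t}, Eq. (5)
    topv, topi = s.topk(k, dim=-1)                      # keep the K highest affinities
    g = torch.zeros_like(s).scatter_(-1, topi, topv)    # g_{i,t}, Eq. (4); zero elsewhere
    return g

g = topk_gates(torch.randn(8, 1280), torch.randn(16, 1280), k=2)
print((g > 0).sum(dim=-1))   # exactly K nonzero gates per token
```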
Since each expert becomes smaller, in response, we also increase the number of activated experts to $m$ times to keep the same computation cost, as illustrated in Figure 1(b). Then, the output of an MoE layer can be expressed as:
$h^{l}_{t} = \sum_{i=1}^{mN} \big( g_{i,t}\, \text{FFN}_{i}(u^{l}_{t}) \big) + u^{l}_{t}$,  (6)
$g_{i,t} = \begin{cases} s_{i,t}, & s_{i,t} \in \text{Topk}(\{ s_{j,t} \mid 1 \le j \le mN \},\, mK), \\ 0, & \text{otherwise}, \end{cases}$  (7)
$s_{i,t} = \text{Softmax}_{i}\big({u^{l}_{t}}^{\top} e^{l}_{i}\big)$,  (8)
where the number of expert parameters is equal to $N$ times a standard FFN, and $mN$ denotes the number of fine-grained experts. Also, the number of nonzero gates will increase to $mK$. From a combinatorial perspective, fine-grained expert segmentation substantially enhances the combinatorial flexibility of activated experts. As an example, we consider the case where $N = 16$. A typical top-2 routing strategy can yield $\binom{16}{2} = 120$ possible combinations. By contrast, if each expert is split into 4 smaller experts, we can yield $\binom{64}{8} = 4{,}426{,}165{,}368$ potential combinations. The surge in combinatorial flexibility enhances the potential for achieving more accurate and targeted knowledge acquisition.
3.2 Shared Expert Isolation
With a conventional routing strategy, tokens assigned to different experts may require some common knowledge. As a result, multiple experts will converge in acquiring shared knowledge in their respective parameters, leading to parameter redundancy. However, if there are shared experts that capture and consolidate common knowledge across varying contexts, the parameter redundancy among other routed experts will be alleviated. Towards this objective, we further isolate $K_s$ experts as shared experts. Regardless of the router, each token will be deterministically assigned to these shared experts. In order to maintain a constant computational cost, the number of activated routed experts will be decreased by $K_s$, as depicted in Figure 1(c). Finally, an MoE layer in the complete DeepSeekMoE architecture is formulated as:
$h^{l}_{t} = \sum_{i=1}^{K_s} \text{FFN}_{i}(u^{l}_{t}) + \sum_{i=K_s+1}^{mN} \big( g_{i,t}\, \text{FFN}_{i}(u^{l}_{t}) \big) + u^{l}_{t}$,  (9)
$g_{i,t} = \begin{cases} s_{i,t}, & s_{i,t} \in \text{Topk}(\{ s_{j,t} \mid K_s + 1 \le j \le mN \},\, mK - K_s), \\ 0, & \text{otherwise}, \end{cases}$  (10)
$s_{i,t} = \text{Softmax}_{i}\big({u^{l}_{t}}^{\top} e^{l}_{i}\big)$.  (11)
Finally, the number of shared experts is $K_s$, the number of routed experts is $mN - K_s$, and the number of nonzero gates is $mK - K_s$. The prototype of shared expert isolation can be credited to some previous work (Rajbhandari et al., 2022; Elbayad et al., 2023), but we derive this strategy from different standpoints.
3.3 Load Balance Consideration
We employ an expert-level balance loss to mitigate the risk of routing collapse (Shazeer et al., 2017). The computation of the balance loss is as follows:
$\mathcal{L}_{\text{Bal}} = \alpha \sum_{i=1}^{N'} f_i P_i$,  (12)
$f_i = \frac{N'}{K' T} \sum_{t=1}^{T} \mathbb{1}(\text{Token } t \text{ selects Expert } i)$,  (13)
$P_i = \frac{1}{T} \sum_{t=1}^{T} s_{i,t}$,  (14)
where the balance factor $\alpha$ is a hyper-parameter, $\mathbb{1}(\cdot)$ denotes the indicator function, $N'$ is equal to $(mN - K_s)$, and $K'$ is equal to $(mK - K_s)$.
4 Validation Experiments
4.1 Experimental Setup
Training Data and Tokenization. Our training data is sampled from a large-scale corpus created by DeepSeek-AI (DeepSeek-AI, 2024), which focuses on English and Chinese and is derived from diverse sources. For the purpose of validation experiments, we sample a subset containing 100B tokens from the corpus to train our models. For tokenization, we utilize the HuggingFace Tokenizers tools (https://github.com/huggingface/tokenizers) to train a byte pair encoding (BPE) (Sennrich et al., 2016) tokenizer with an 8K vocabulary size on a subset of the training corpus.
Hyper-Parameters.
In the validation experiments, we set the number of Transformer layers to 9 and the hidden dimension to 1280. We substitute all FFNs with MoE layers, and ensure that the total number of expert parameters equals 16 times that of a standard FFN. Additionally, we keep the activated expert parameters, including shared expert parameters and activated routed expert parameters, as 2 times that of a standard FFN. Under this configuration, each MoE model has approximately 2B total parameters, with the number of activated parameters around 0.3B. As for training, we employ the AdamW optimizer (Loshchilov and Hutter, 2019) and set the maximum learning rate to $1.08 \times 10^{-3}$. The batch size is set to 2K, and with a maximum sequence length of 2K, each training batch contains 4M tokens. Correspondingly, the total number of training steps is set to 25,000 to achieve 100B training tokens. In order to prevent routing collapse, we set a balance factor of 0.01. Due to the page limit, we leave the other hyper-parameters in Appendix A.1. We also describe the training framework and infrastructures in Appendix B.
Evaluation Benchmarks. We conduct evaluations on a wide range of benchmarks covering various types of tasks. For language modeling, we evaluate the models on the test set of Pile (Gao et al., 2020), and the evaluation metric is the cross-entropy loss. For language understanding and reasoning, we consider HellaSwag (Zellers et al., 2019), PIQA (Bisk et al., 2020), ARC-challenge and ARC-easy (Clark et al., 2018), and the evaluation metric for these tasks is accuracy. For reading comprehension, we consider RACE-high and RACE-middle (Lai et al., 2017), and the evaluation metric is accuracy. For code generation, we consider HumanEval (Chen et al., 2021) and MBPP (Austin et al., 2021), and the evaluation metric is Pass@1. For closed-book question answering, we consider TriviaQA (Joshi et al., 2017) and NaturalQuestions (Kwiatkowski et al., 2019), and the metric is the Exactly Matching (EM) rate.
4.2 Evaluations
Baselines. Including DeepSeekMoE, we compare five models for validation experiments. Dense denotes a standard dense Transformer model with 0.2B total parameters. Hash Layer (Roller et al., 2021) and Switch Transformer (Fedus et al., 2021) are two well-known MoE architectures based on top-1 routing, with 2.0B total parameters and 0.2B activated parameters. GShard (Lepikhin et al.,
Metric                   # Shot   Dense   Hash Layer   Switch Transformer   GShard   DeepSeekMoE
# Total Params           N/A      0.2B    2.0B         2.0B                 2.0B     2.0B
# Activated Params       N/A      0.2B    0.2B         0.2B                 0.3B     0.3B
FLOPs per 2K Tokens      N/A      2.9T    2.9T         2.9T                 4.3T     4.3T
Pile (Loss)              N/A      2.060   1.932        1.881                1.867    1.808
HellaSwag (Acc.)         0        38.8    46.2         49.1                 50.5     54.8
PIQA (Acc.)              0        66.8    68.4         70.5                 70.6     72.3
ARC-easy (Acc.)          0        41.0    45.3         45.9                 43.9     49.4
ARC-challenge (Acc.)     0        26.0    28.2         30.2                 31.6     34.3
RACE-middle (Acc.)       5        38.8    38.8         43.6                 42.1     44.0
RACE-high (Acc.)         5        29.0    30.0         30.9                 30.4     31.7
HumanEval (Pass@1)       0        0.0     1.2          2.4                  3.7      4.9
MBPP (Pass@1)            3        0.2     0.6          0.4                  0.2      2.2
TriviaQA (EM)            5        4.9     6.5          8.9                  10.2     16.6
NaturalQuestions (EM)    5        1.4     1.4          2.5                  3.2      5.7
Table 1: Evaluation results for validation experiments. Bold font indicates the best.
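To connect the validation setup back to the formulation in Section 3, the following is a minimal PyTorch sketch of a DeepSeekMoE layer implementing Eqs. (9)-(14): shared experts are always applied, the remaining experts are routed by top-k gating on softmax affinities, and an expert-level balance loss discourages routing collapse. The dense per-expert loop and the default sizes (1 shared plus 63 routed experts with 7 activated, mirroring the 2B validation configuration) are illustrative simplifications, not the training-system implementation.

```python
# Hedged sketch of a DeepSeekMoE layer: shared experts + top-k routed experts + balance loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Expert(nn.Module):
    def __init__(self, d_model, d_inter):
        super().__init__()
        self.up = nn.Linear(d_model, d_inter)
        self.down = nn.Linear(d_inter, d_model)
    def forward(self, x):
        return self.down(F.gelu(self.up(x)))

class DeepSeekMoELayer(nn.Module):
    def __init__(self, d_model=1280, d_inter=1280, n_routed=63, n_shared=1, top_k=7, alpha=0.01):
        super().__init__()
        self.shared = nn.ModuleList(Expert(d_model, d_inter) for _ in range(n_shared))
        self.routed = nn.ModuleList(Expert(d_model, d_inter) for _ in range(n_routed))
        self.centroids = nn.Parameter(torch.randn(n_routed, d_model) * 0.02)   # e_i in Eq. (11)
        self.top_k, self.alpha = top_k, alpha

    def forward(self, u):                                     # u: [tokens, d_model]
        T, n = u.size(0), len(self.routed)
        s = F.softmax(u @ self.centroids.t(), dim=-1)         # affinities s_{i,t}, Eq. (11)
        topv, topi = s.topk(self.top_k, dim=-1)               # nonzero gates, Eq. (10)
        shared_out = sum(e(u) for e in self.shared)           # always-on shared experts
        routed_out = torch.zeros_like(u)
        for i, expert in enumerate(self.routed):              # simple dense dispatch
            sel, slot = (topi == i).nonzero(as_tuple=True)    # tokens routed to expert i
            if sel.numel():
                gate = topv[sel, slot].unsqueeze(-1)
                routed_out = routed_out.index_add(0, sel, gate * expert(u[sel]))
        h = u + shared_out + routed_out                       # Eq. (9), residual included
        # Expert-level balance loss, Eqs. (12)-(14).
        counts = torch.zeros(n).scatter_add_(0, topi.flatten(), torch.ones(topi.numel()))
        f = counts * n / (self.top_k * T)
        P = s.mean(dim=0)
        balance_loss = self.alpha * (f * P).sum()
        return h, balance_loss

layer = DeepSeekMoELayer()
out, aux = layer(torch.randn(16, 1280))
print(out.shape, aux.item())
```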
HellaSwag PIQA ARC-easy ARC-challenge TriviaQA NaturalQuestions 0.6 0.8 1.0 1.2 1.4 Normalized Performance 0 shared expert + 2 out of 16 routed experts (GShard) 1 shared expert + 1 out of 15 routed experts (+ shared expert isolation) 1 shared expert + 3 out of 31 routed experts (+ fine-grained expert segmentation) 1 shared expert + 7 out of 63 routed experts (+ finer expert segmentation) Figure 2: Ablation studies for DeepSeekMoE. The performance is normalized by the best performance. 2021) employs a top-2 learnable routing strategy, with 2.0B total parameters and 0.3B activated parameters. DeepSeekMoE has 1 shared expert and 63 routed experts, where each expert is 0.25 times the size of a standard FFN. Including DeepSeekMoE, all compared models share the same training corpus and training hyper-parameters. Results. As shown in Table 1, (1) With more total parameters, Hash Layer and Switch Transformer achieve significantly stronger performance than the dense baseline with the same number of activated parameters. (2) Compared with Hash Layer and Switch Transformer, GShard has more activated parameters and achieves slightly better performance. (3) With the same number of total and activated parameters, DeepSeekMoE demonstrates overwhelming advantages over GShard. These results show the superiority of our DeepSeekMoE architecture. 4.3 DeepSeekMoE Aligns Closely with the upper bound of MoE Models For a more precise understanding of the performance of DeepSeekMoE, we compare it with larger baselines with more parameters or computations. Comparison with GShard×1.5. We first compare DeepSeekMoE with a larger GShard model with 1.5 times the expert size, which results in 1.5 times both expert parameters and expert computation. Evaluation results show that GShard×1.5 achieves a Pile test loss of 1.808, and DeepSeekMoE also achieves the same Pile test loss. This underscores the significant advantage of the DeepSeekMoE architecture. Due to the page limit, we show the complete evaluation results including all the benchmarks in Appendix C. Comparison with Dense×16. We also compare DeepSeekMoE and a dense model with the same number of total parameters. For a fair comparison, we do not use the widely used ratio (1:2) between the attention and FFN parameters. Instead, we configure 16 shared experts where each expert has the same number of parameters as a standard FFN. This architecture mimics a dense model with 16 times standard FFN parameters, which sets the strict upper bound of MoE models in terms of the 1284 0 1/16 2/16 3/16 4/16 Ratio of Disabled Top Routed Experts 2 4 6 8 Pile Loss DeepSeekMoE GShard × 1.5 Figure 3: Pile test loss with regard to different ratios of disabled top routed experts. model capacity. We find that this dense model achieves a Pile test loss of 1.806, while DeepSeekMoE achieves a close Pile test loss of 1.808. Due to the page limit, we also show the complete evaluation results in Appendix C. To summarize, these results suggest that, at least at the scale of about 2B parameters and 100B training tokens, the performance of DeepSeekMoE aligns closely with the theoretical upper bound of MoE models. 4.4 Ablation Studies We conduct ablation studies for DeepSeekMoE to substantiate the effectiveness of our two principal strategies. For a fair comparison, we ensure all models included in the comparison have the same number of total and activated parameters. Shared Expert Isolation. 
In order to evaluate the influence of shared expert isolation, based on GShard, we isolate one expert as the shared one. From Figure 2, we observe that compared with GShard, the isolation yields improved performance across a majority of benchmarks. Fine-Grained Expert Segmentation. For assessing the effectiveness of fine-grained expert segmentation, we segment each expert into 2 or 4 smaller experts, resulting in 32 (1 shared + 31 routed) or 64 (1 shared + 63 routed) total experts. Figure 2 shows a consistent trend that finer expert segmentation granularity corresponds to better performance. 4.5 Analysis on Expert Specialization We conduct an empirical analysis on the expert specialization of DeepSeekMoE 2B, which refers to the model reported in Table 1. DeepSeekMoE Exhibits Lower Redundancy Among Routed Experts. In order to assess the redundancy among routed experts, for each token, we mask a certain ratio of experts with the highest routing probability, and then select top-K experts 3 4 5 6 7 Activated Routed Experts 1.85 1.90 1.95 Pile Loss same activated expert parameters DeepSeekMoE GShard (full top-2 activated) Figure 4: Pile loss with regard to different numbers of activated routed experts in DeepSeekMoE. HellaSwag PIQA ARC-easy ARC-challenge TriviaQA NaturalQuestions 0 10 20 30 40 50 60 70 Performance GShard DeepSeekMoE (half activated) Figure 5: Comparison between GShard and DeepSeekMoE trained from scratch and with half the activated experts. from the remaining routed experts. For fairness, we compare DeepSeekMoE with GShard×1.5 since they have the same Pile loss when no experts are disabled. As shown in Figure 3, compared with GShard×1.5, DeepSeekMoE is more sensitive to the disabling of top routed experts. This implies lower parameter redundancy in DeepSeekMoE, since each routed expert is more irreplaceable. Shared Experts Are Irreplaceable by Routed Experts. In order to investigate the role of the shared expert in DeepSeekMoE, we disable it and activate one more routed expert. The evaluation on Pile shows a significant increase in the Pile loss, rising from 1.808 to 2.414, even though we maintain the same computational cost. This result indicates that the shared expert captures fundamental and essential knowledge not shared with routed experts, making it irreplaceable by routed ones. DeepSeekMoE Acquires Knowledge More Accurately. In order to validate our claim that higher flexibility in combining activated experts contributes to more accurate and targeted knowledge acquisition, we investigate whether DeepSeekMoE can acquire requisite knowledge with fewer activated experts. To be specific, we vary the number of activated routed experts from 3 to 7 and evaluate the resulting Pile loss. As demonstrated in Figure 4, even with only 4 routed experts activated, 1285 DeepSeekMoE is still comparable with GShard. Encouraged by these findings, we further train a new MoE model from scratch, which comprises 1 shared expert and 63 routed experts but only 3 routed experts are activated. Figure 5 demonstrates that, even with the same total expert parameters and only half of the activated expert parameters, DeepSeekMoE still outperforms GShard. 5 Scaling up to DeepSeekMoE 16B With the DeepSeekMoE architecture, we further scale up our MoE model to a larger scale with 16B total parameters and train it on 2T tokens. 
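The redundancy probe of Section 4.5, masking the routed experts with the highest routing probability for each token and re-selecting the top-K gates from the remaining ones, can be sketched as follows. This mimics the evaluation protocol only and is not the authors' evaluation code.

```python
# Sketch of the expert-masking probe: disable a ratio of each token's most-preferred
# routed experts and re-select the top-k gates from the rest.
import torch
import torch.nn.functional as F

def gates_with_masked_top_experts(affinity, k, disable_ratio):
    """affinity: [tokens, n_routed_experts] softmax scores; returns a masked gate matrix."""
    n_experts = affinity.size(-1)
    n_disable = int(disable_ratio * n_experts)
    masked = affinity.clone()
    if n_disable > 0:
        top_idx = affinity.topk(n_disable, dim=-1).indices      # highest-probability experts per token
        masked.scatter_(-1, top_idx, float("-inf"))             # treat them as unavailable
    topv, topi = masked.topk(k, dim=-1)                         # re-select from the remaining experts
    return torch.zeros_like(affinity).scatter_(-1, topi, topv)

scores = F.softmax(torch.randn(4, 64), dim=-1)
print(gates_with_masked_top_experts(scores, k=7, disable_ratio=1 / 16))
```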
5.1 Experimental Setup Training Data and Tokenization For training DeepSeekMoE 16B, we sample 2T tokens from the same corpus as described in Section 4.1, and use a larger BPE tokenizer with a 100K vocabulary size. Hyper-Parameters For DeepSeekMoE 16B, we set the number of Transformer layers to 28 and the hidden dimension to 2048. We substitute all FFNs except for the first layer with MoE layers, since we observe that the load balance status converges especially slower for the first layer. Each MoE layer consists of 2 shared experts and 64 routed experts, where each expert is 0.25 times the size of a standard FFN. Each token will be routed to these 2 shared experts and 6 out of 64 routed experts. Under this configuration, DeepSeekMoE 16B has approximately 16.4B total parameters, with the number of activated parameters around 2.8B. As for training, we employ the AdamW optimizer (Loshchilov and Hutter, 2019) and set the maximum learning rate to 4.2 × 10−4. The batch size is set to 4.5K, and with a maximum sequence length of 4K, each training batch contains 18M tokens. Correspondingly, the total number of training steps is set to 106,449 to achieve 2T training tokens. In order to prevent routing collapse, we set a balance factor of 0.001. Due to the page limit, we leave the other hyper-parameters in Appendix A.2. Evaluation Benchmarks In addition to the benchmarks used in the validation experiments, we incorporate additional benchmarks for a more comprehensive evaluation. For language modeling, we also evaluate the models on the test set of Pile (Gao et al., 2020). Since the tokenizer used in DeepSeekMoE 16B is different from that used in LLaMA2 7B, we use bits per byte (BPB) as the evaluation metric for a fair comparison. For reading comprehension, we additionally consider DROP (Dua et al., 2019) and the evaluation metric is EM. For math reasoning, we additionally incorporate GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021), using EM as the evaluation metric. For multi-subject multiplechoice, we additionally evaluate the models on MMLU (Hendrycks et al., 2020) and the evaluation metric is accuracy. For disambiguation, we additionally consider WinoGrande (Sakaguchi et al., 2019) and the evaluation metric is accuracy. Since DeepSeekMoE 16B is pretrained on a bilingual corpus, we also evaluate it on four Chinese benchmarks: CLUEWSC (Xu et al., 2020), CEval (Huang et al., 2023), CMMLU (Li et al., 2023), and CHID (Zheng et al., 2019). Evaluation metrics for these benchmarks are accuracy or EM. 5.2 Evaluations We compare DeepSeekMoE 16B with LLaMA2 7B (Touvron et al., 2023b) and DeepSeek 7B (DeepSeek-AI, 2024), two strong and wellknown dense models trained on 2T tokens. In addition, DeepSeekMoE 16B and DeepSeek 7B use the same training data. As shown in Table 2, we have the following observations: (1) On the whole, with about only 40% of the computations, DeepSeekMoE 16B achieves comparable performance with LLaMA2 7B and DeepSeek 7B. (2) DeepSeekMoE 16B exhibits notable strengths in language modeling and knowledge-intensive tasks such as Pile, HellaSwag, and TriviaQA. (3) Compared with the excellent performance on other tasks, DeepSeekMoE exhibits limitations in addressing multiple-choice tasks, which may stem from the limited attention parameters in DeepSeekMoE 16B. (4) Compared with LLaMA2 7B, DeepSeek 7B and DeepSeekMoE 16B have much stronger performance on math, coding, and Chinese benchmarks. 
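Section 5.1 above reports language modeling in bits per byte (BPB) so that models with different tokenizers can be compared fairly. One common way to obtain BPB from a token-level cross-entropy is to convert the total loss in nats to bits and divide by the number of UTF-8 bytes; the sketch below shows that conversion and is not tied to DeepSeek's evaluation harness.

```python
# Generic conversion from token-level cross-entropy (in nats) to bits per byte.
import math

def bits_per_byte(ce_per_token_nats, num_tokens, text):
    total_bits = ce_per_token_nats * num_tokens / math.log(2)
    return total_bits / len(text.encode("utf-8"))

# Example: a mean cross-entropy of 2.0 nats/token over 1,000 tokens of a sample text.
sample = "DeepSeekMoE 16B achieves comparable performance with far fewer activated parameters. " * 50
print(bits_per_byte(2.0, 1000, sample))
```

Because BPB normalizes by raw bytes rather than by tokens, it is unaffected by differences in vocabulary size between the compared models.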
For a more comprehensive understanding of the training process of DeepSeekMoE 16B, we also provide the benchmark curves of DeepSeekMoE 16B and DeepSeek 7B (Dense) during training in Appendix D. In addition, we provide a comparison between DeepSeekMoE 16B and other open source models on the Open LLM Leaderboard in Appendix E. 1286 Metric # Shot LLaMA2 7B (Dense) DeepSeek 7B (Dense) DeepSeekMoE 16B # Total Params N/A 6.7B 6.9B 16.4B # Activated Params N/A 6.7B 6.9B 2.8B FLOPs per 4K Tokens N/A 187.9T 183.5T 74.4T Pile (BPB) N/A 0.76 0.75 0.74 HellaSwag (Acc.) 0 75.6 75.4 77.1 PIQA (Acc.) 0 78.0 79.2 80.2 ARC-easy (Acc.) 0 69.1 67.9 68.1 ARC-challenge (Acc.) 0 49.0 48.1 49.8 RACE-middle (Acc.) 5 60.7 63.2 61.9 RACE-high (Acc.) 5 45.8 46.5 46.4 DROP (EM) 1 34.0 34.9 32.9 GSM8K (EM) 8 15.5 17.4 18.8 MATH (EM) 4 2.6 3.3 4.3 HumanEval (Pass@1) 0 14.6 26.2 26.8 MBPP (Pass@1) 3 21.8 39.0 39.2 TriviaQA (EM) 5 63.8 59.7 64.8 NaturalQuestions (EM) 5 25.5 22.2 25.5 MMLU (Acc.) 5 45.8 48.2 45.0 WinoGrande (Acc.) 0 69.6 70.5 70.2 CLUEWSC (EM) 5 64.0 73.1 72.1 CEval (Acc.) 5 33.9 45.0 40.6 CMMLU (Acc.) 5 32.6 47.2 42.5 CHID (Acc.) 0 37.9 89.3 89.4 Table 2: Comparison among LLaMA2 7B, DeepSeek 7B, and DeepSeekMoE 16B. 6 Related Work The Mixture of Experts (MoE) technique is first proposed by Jacobs et al. (1991); Jordan and Jacobs (1994) to deal with different samples with independent expert modules. Shazeer et al. (2017) introduce MoE into language model training and build a large-scale LSTM-based (Hochreiter and Schmidhuber, 1997) MoE models. As Transformer become the most popular architecture for NLP, many attempts extend FFNs in a Transformer as MoE layers to build MoE language models. GShard (Lepikhin et al., 2021) and Switch Transformer (Fedus et al., 2021) are pioneers which employ learnable top-2 or top-1 routing strategies to scale the MoE language models to an extremely large scale. Hash Layer (Roller et al., 2021) and StableMoE (Dai et al., 2022) use fixed routing strategies for more stable routing and training. Zhou et al. (2022) propose an expert-choice routing strategy, where each token can be assigned to different numbers of experts. Zoph (2022) focus on the issues of training instability and fine-tuning difficulty in MoE models, and propose ST-MoE to overcome these challenges. Gao et al. (2022) investigate parameterefficient MoE architectures via sharing information among experts. Krishnamurthy et al. (2023) attempt to improve the expert specialization on toy data. In addition to research on MoE architectures and training strategies, recent years have also witnessed the emergence of numerous largescale language or multimodal models (Lin et al., 2021; Du et al., 2022; Ren et al., 2023; Xue et al., 2023) based on existing MoE architectures. By and large, most of the previous MoE models are based on conventional top-1 or top-2 routing strategies, leaving large room for improving expert specialization. In response, we design the DeepSeekMoE architecture to improve the expert specialization. 7 Conclusion In this paper, we introduce the DeepSeekMoE architecture for MoE language models, with the objective of achieving ultimate expert specialization. Through fine-grained expert segmentation and shared expert isolation, DeepSeekMoE achieves significantly higher expert specialization and performance compared with prevailing MoE architectures. 
Starting with a modest scale of 2B parameters, we validate the advantages of DeepSeekMoE, demonstrating its capability to approach the upper bound performance for MoE models. Further1287 more, we provide empirical evidence to show that DeepSeekMoE has a higher level of expert specialization than GShard. Scaling up to a larger scale of 16B total parameters, we train DeepSeekMoE 16B on 2T tokens and demonstrate its outstanding performance comparable with DeepSeek 7B and LLaMA2 7B, with only about 40% of computations. For research purposes, we will release the model checkpoint of DeepSeekMoE 16B to the public, which can be deployed on a single GPU with 40GB of memory. We aspire for this work to provide valuable insights for both academia and industry, and contribute to the accelerated advancement of large language models. Limitations and Future Work Although we find that finer granularity in expert segmentation always leads to better model performance, we just use a moderate granularity in DeepSeekMoE 16B, since too fine granularity will decrease the computational efficiency. In future research, we plan to build a scaling law for the expert segmentation granularity and explore finer segmentation on larger-scale models. In addition, since DeepSeekMoE will select more experts, it has the potential to result in additional communication overhead when the experts are distributed across different devices. In the future, we will also design better algorithms and parallelism strategies to mitigate such additional communication overhead. Finally, in this paper, we fix the number of expert parameters to 16 times that of a standard FFN, and the number of activated expert parameters to twice that of a standard FFN. In larger model settings, the optimal numbers of total parameters and activated parameters are also a topic for future research and discussion. Acknowledgement Damai Dai and Zhifang Sui are supported by the National Key Research and Development Program of China 2020AAA0106700. References Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru, Merouane Debbah, Etienne Goffinet, Daniel Heslow, Julien Launay, Quentin Malartic, Badreddine Noune, Baptiste Pannier, and Guilherme Penedo. 2023. Falcon-40B: an open large language model with state-of-the-art performance. Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. 2021. Program synthesis with large language models. arXiv preprint arXiv:2108.07732. Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O’Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. 2023. Pythia: A suite for analyzing large language models across training and scaling. In International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pages 2397–2430. PMLR. Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. 2020. PIQA: reasoning about physical commonsense in natural language. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 7432– 7439. AAAI Press. 
Sid Black, Leo Gao, Phil Wang, Connor Leahy, and Stella Biderman. 2021. GPT-Neo: Large Scale Autoregressive Language Modeling with MeshTensorflow. If you use this misc, please cite it using these metadata. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Pondé de Oliveira Pinto, Jared Kaplan, Harrison Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie 1288 Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating large language models trained on code. CoRR, abs/2107.03374. Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the AI2 reasoning challenge. CoRR, abs/1803.05457. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168. Damai Dai, Li Dong, Shuming Ma, Bo Zheng, Zhifang Sui, Baobao Chang, and Furu Wei. 2022. Stablemoe: Stable routing strategy for mixture of experts. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 7085–7095. Association for Computational Linguistics. DeepSeek-AI. 2024. Deepseek llm: Scaling opensource language models with longtermism. arXiv preprint arXiv:2401.02954. Nan Du, Yanping Huang, Andrew M. Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, Barret Zoph, Liam Fedus, Maarten P. Bosma, Zongwei Zhou, Tao Wang, Yu Emma Wang, Kellie Webster, Marie Pellat, Kevin Robinson, Kathleen S. MeierHellstern, Toju Duke, Lucas Dixon, Kun Zhang, Quoc V. Le, Yonghui Wu, Zhifeng Chen, and Claire Cui. 2022. Glam: Efficient scaling of language models with mixture-of-experts. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 5547–5569. PMLR. Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. 
DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACLHLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 2368–2378. Association for Computational Linguistics. Maha Elbayad, Anna Y. Sun, and Shruti Bhosale. 2023. Fixing moe over-fitting on low-resource languages in multilingual machine translation. In Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, July 9-14, 2023, pages 14237–14253. Association for Computational Linguistics. William Fedus, Barret Zoph, and Noam Shazeer. 2021. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. CoRR, abs/2101.03961. Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. 2020. The Pile: An 800GB dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027. Ze-Feng Gao, Peiyu Liu, Wayne Xin Zhao, ZhongYi Lu, and Ji-Rong Wen. 2022. Parameter-efficient mixture-of-experts architecture for pre-trained language models. In Proceedings of the 29th International Conference on Computational Linguistics, COLING 2022, Gyeongju, Republic of Korea, October 12-17, 2022, pages 3263–3273. International Committee on Computational Linguistics. Xinyang Geng and Hao Liu. 2023. Openllama: An open reproduction of llama. Aaron Harlap, Deepak Narayanan, Amar Phanishayee, Vivek Seshadri, Nikhil R. Devanur, Gregory R. Ganger, and Phillip B. Gibbons. 2018. Pipedream: Fast and efficient pipeline parallel DNN training. CoRR, abs/1806.03377. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300. Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the math dataset. High-Flyer. 2023. Hai-llm: An efficient and lightweight tool for training large models. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computing, 9(8):1735– 1780. Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and Laurent Sifre. 2022. Training compute-optimal large language models. CoRR, abs/2203.15556. Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, et al. 2023. C-Eval: A multi-level multi-discipline chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322. 1289 Robert A. Jacobs, Michael I. Jordan, Steven J. Nowlan, and Geoffrey E. Hinton. 1991. Adaptive mixtures of local experts. Neural Computing, 3(1):79–87. Michael I. Jordan and Robert A. Jacobs. 1994. Hierarchical mixtures of experts and the EM algorithm. Neural Computing, 6(2):181–214. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. triviaqa: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension. arXiv e-prints, page arXiv:1705.03551. 
Vijay Anand Korthikanti, Jared Casper, Sangkug Lym, Lawrence McAfee, Michael Andersch, Mohammad Shoeybi, and Bryan Catanzaro. 2023. Reducing activation recomputation in large transformer models. Proceedings of Machine Learning and Systems, 5. Yamuna Krishnamurthy, Chris Watkins, and Thomas Gärtner. 2023. Improving expert specialization in mixture of experts. CoRR, abs/2302.14703. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: a benchmark for question answering research. Transactions of the Association of Computational Linguistics. Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard H. Hovy. 2017. RACE: large-scale reading comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 785–794. Association for Computational Linguistics. Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. 2021. Gshard: Scaling giant models with conditional computation and automatic sharding. In 9th International Conference on Learning Representations, ICLR 2021. OpenReview.net. Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, and Timothy Baldwin. 2023. CMMLU: Measuring massive multitask language understanding in Chinese. arXiv preprint arXiv:2306.09212. Junyang Lin, Rui Men, An Yang, Chang Zhou, Ming Ding, Yichang Zhang, Peng Wang, Ang Wang, Le Jiang, Xianyan Jia, Jie Zhang, Jianwei Zhang, Xu Zou, Zhikang Li, Xiaodong Deng, Jie Liu, Jinbao Xue, Huiling Zhou, Jianxin Ma, Jin Yu, Yong Li, Wei Lin, Jingren Zhou, Jie Tang, and Hongxia Yang. 2021. M6: A chinese multimodal pretrainer. CoRR, abs/2103.00823. Stephanie Lin, Jacob Hilton, and Owain Evans. 2022. Truthfulqa: Measuring how models mimic human falsehoods. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 3214–3252. Association for Computational Linguistics. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net. Deepak Narayanan, Mohammad Shoeybi, Jared Casper, Patrick LeGresley, Mostofa Patwary, Vijay Korthikanti, Dmitri Vainbrand, Prethvi Kashinkunti, Julie Bernauer, Bryan Catanzaro, et al. 2021. Efficient large-scale language model training on gpu clusters using megatron-lm. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1– 15. OpenAI. 2023. GPT-4 technical report. CoRR, abs/2303.08774. Samyam Rajbhandari, Conglong Li, Zhewei Yao, Minjia Zhang, Reza Yazdani Aminabadi, Ammar Ahmad Awan, Jeff Rasley, and Yuxiong He. 2022. Deepspeed-moe: Advancing mixture-of-experts inference and training to power next-generation AI scale. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 18332–18346. PMLR. Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. 2020. 
Zero: memory optimizations toward training trillion parameter models. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC 2020, Virtual Event / Atlanta, Georgia, USA, November 9-19, 2020, page 20. IEEE/ACM. Xiaozhe Ren, Pingyi Zhou, Xinfan Meng, Xinjing Huang, Yadao Wang, Weichao Wang, Pengfei Li, Xiaoda Zhang, Alexander Podolskiy, Grigory Arshinov, Andrey Bout, Irina Piontkovskaya, Jiansheng Wei, Xin Jiang, Teng Su, Qun Liu, and Jun Yao. 2023. Pangu-Σ: Towards trillion parameter language model with sparse heterogeneous computing. CoRR, abs/2303.10845. Stephen Roller, Sainbayar Sukhbaatar, Arthur Szlam, and Jason Weston. 2021. Hash layers for large sparse models. CoRR, abs/2106.04426. Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Winogrande: An adversarial winograd schema challenge at scale. Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, 1290 Matthias Gallé, Jonathan Tow, Alexander M. Rush, Stella Biderman, Albert Webson, Pawan Sasanka Ammanamanchi, Thomas Wang, Benoît Sagot, Niklas Muennighoff, Albert Villanova del Moral, Olatunji Ruwase, Rachel Bawden, Stas Bekman, Angelina McMillan-Major, Iz Beltagy, Huu Nguyen, Lucile Saulnier, Samson Tan, Pedro Ortiz Suarez, Victor Sanh, Hugo Laurençon, Yacine Jernite, Julien Launay, Margaret Mitchell, Colin Raffel, Aaron Gokaslan, Adi Simhi, Aitor Soroa, Alham Fikri Aji, Amit Alfassy, Anna Rogers, Ariel Kreisberg Nitzav, Canwen Xu, Chenghao Mou, Chris Emezue, Christopher Klamm, Colin Leong, Daniel van Strien, David Ifeoluwa Adelani, and et al. 2022. BLOOM: A 176b-parameter open-access multilingual language model. CoRR, abs/2211.05100. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers. The Association for Computer Linguistics. Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc V. Le, Geoffrey E. Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. In 5th International Conference on Learning Representations, ICLR 2017. OpenReview.net. Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. 2019. Megatron-lm: Training multi-billion parameter language models using model parallelism. arXiv preprint arXiv:1909.08053. Philippe Tillet, H. T. Kung, and David Cox. 2019. Triton: An intermediate language and compiler for tiled neural network computations. In Proceedings of the 3rd ACM SIGPLAN International Workshop on Machine Learning and Programming Languages, MAPL 2019, page 10–19, New York, NY, USA. Association for Computing Machinery. Together-AI. 2023. Redpajama-data: An open source recipe to reproduce llama training dataset. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023a. Llama: Open and efficient foundation language models. CoRR, abs/2302.13971. 
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian CantonFerrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023b. Llama 2: Open foundation and fine-tuned chat models. CoRR, abs/2307.09288. Ben Wang and Aran Komatsuzaki. 2021. GPT-J6B: A 6 Billion Parameter Autoregressive Language Model. https://github.com/kingoflolz/ mesh-transformer-jax. Liang Xu, Hai Hu, Xuanwei Zhang, Lu Li, Chenjie Cao, Yudong Li, Yechen Xu, Kai Sun, Dian Yu, Cong Yu, Yin Tian, Qianqian Dong, Weitang Liu, Bo Shi, Yiming Cui, Junyi Li, Jun Zeng, Rongzhao Wang, Weijian Xie, Yanting Li, Yina Patterson, Zuoyu Tian, Yiwen Zhang, He Zhou, Shaoweihua Liu, Zhe Zhao, Qipeng Zhao, Cong Yue, Xinrui Zhang, Zhengliang Yang, Kyle Richardson, and Zhenzhong Lan. 2020. CLUE: A chinese language understanding evaluation benchmark. In Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 4762–4772. International Committee on Computational Linguistics. Fuzhao Xue, Zian Zheng, Yao Fu, Jinjie Ni, Zangwei Zheng, Wangchunshu Zhou, and Yang You. 2023. Openmoe: Open mixture-of-experts language models. https://github.com/XueFuzhao/OpenMoE. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 4791–4800. Association for Computational Linguistics. Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, and Luke Zettlemoyer. 2022. Opt: Open pretrained transformer language models. Chujie Zheng, Minlie Huang, and Aixin Sun. 2019. Chid: A large-scale chinese idiom dataset for cloze test. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 778–787. Association for Computational Linguistics. 1291 Yanqi Zhou, Tao Lei, Hanxiao Liu, Nan Du, Yanping Huang, Vincent Zhao, Andrew M. Dai, Zhifeng Chen, Quoc V. Le, and James Laudon. 2022. Mixture-ofexperts with expert choice routing. In NeurIPS. Barret Zoph. 2022. Designing effective sparse expert models. In IEEE International Parallel and Distributed Processing Symposium, IPDPS Workshops 2022, Lyon, France, May 30 - June 3, 2022, page 1044. IEEE. 
1292 Appendices A Details of Hyper-Parameters A.1 Validation Experiments Model Settings. In the validation experiments, we set the number of Transformer layers to 9 and the hidden dimension to 1280. We employ the multi-head attention mechanism with a total of 10 attention heads, where each head has a dimension of 128. For initialization, all learnable parameters are randomly initialized with a standard deviation of 0.006. We substitute all FFNs with MoE layers, and ensure that the total number of expert parameters equals 16 times that of a standard FFN. Additionally, we keep the activated expert parameters, including shared expert parameters and activated routed expert parameters, as 2 times that of a standard FFN. Under this configuration, each MoE model has approximately 2B total parameters, with the number of activated parameters around 0.3B. Training Settings. We employ the AdamW optimizer (Loshchilov and Hutter, 2019) with hyperparameters set to β1 = 0.9, β2 = 0.95, and weight_decay = 0.1. The learning rate is scheduled using a warmup-and-step-decay strategy. Initially, the learning rate linearly increases from 0 to the maximum value during the first 2K steps. Subsequently, the learning rate is multiplied by 0.316 at 80% of the training steps, and again by 0.316 at 90% of the training steps. The maximum learning rate for validation experiments is set to 1.08×10−3, and the gradient clipping norm is set to 1.0. The batch size is set to 2K, and with a maximum sequence length of 2K, each training batch contains 4M tokens. Correspondingly, the total number of training steps is set to 25,000 to achieve 100B training tokens. Due to the abundance of training data, we do not use dropout during training. Given the relatively small model size, all parameters, including expert parameters, are deployed on a single GPU device to avoid unbalanced computation. In order to prevent routing collapse, we set the balance factor to 0.01. A.2 DeepSeekMoE 16B Model Settings. For DeepSeekMoE 16B, we set the number of Transformer layers to 28 and the hidden dimension to 2048. We employ the multi-head attention mechanism with a total of 16 attention heads, where each head has a dimension of 128. As for initialization, all learnable parameters are randomly initialized with a standard deviation of 0.006. We substitute all FFNs except for the first layer with MoE layers, since we observe that the load balance status converges especially slower for the first layer. Each MoE layer consists of 2 shared experts and 64 routed experts, where each expert is 0.25 times the size of a standard FFN. Each token will be routed to these 2 shared experts and 6 out of 64 routed experts. An even finer expert segmentation granularity is not employed due to the potential reduction in computational efficiency associated with excessively small expert sizes. At a larger scale over 16B, a finer granularity can still be employed. Under our configuration, DeepSeekMoE 16B has approximately 16.4B total parameters, with the number of activated parameters around 2.8B. Training Settings. We employ the AdamW optimizer (Loshchilov and Hutter, 2019) with hyperparameters set to β1 = 0.9, β2 = 0.95, and weight_decay = 0.1. The learning rate is also scheduled using a warmup-and-step-decay strategy. Initially, the learning rate linearly increases from 0 to the maximum value during the first 2K steps. Subsequently, the learning rate is multiplied by 0.316 at 80% of the training steps, and again by 0.316 at 90% of the training steps. 
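A minimal sketch of this warmup-and-step-decay schedule is given below; the function is an illustration of the description in this appendix, not the training code, and the example plugs in the validation-run values reported above (maximum learning rate 1.08e-3 over 25,000 steps).

```python
# Sketch of the warmup-and-step-decay learning-rate schedule described above
# (assumption: step indices are 0-based).
def lr_at_step(step, total_steps, max_lr, warmup_steps=2000, decay_factor=0.316):
    if step < warmup_steps:                     # linear warmup from 0 to max_lr
        return max_lr * (step + 1) / warmup_steps
    lr = max_lr
    if step >= 0.8 * total_steps:               # first decay at 80% of training
        lr *= decay_factor
    if step >= 0.9 * total_steps:               # second decay at 90% of training
        lr *= decay_factor
    return lr

# Example with the validation-run hyper-parameters:
lrs = [lr_at_step(s, total_steps=25_000, max_lr=1.08e-3) for s in (0, 1_000, 21_000, 24_000)]
```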
The maximum learning rate for DeepSeekMoE 16B is set to 4.2 × 10−4, and the gradient clipping norm is set to 1.0. The batch size is set to 4.5K, and with a maximum sequence length of 4K, each training batch contains 18M tokens. Correspondingly, the total number of training steps is set to 106,449 to achieve 2T training tokens. Due to the abundance of training data, we do not use dropout during training. We leverage pipeline parallelism to deploy different layers of a model on different devices, and for each layer, all the experts will be deployed on the same device. Therefore, there will not be unbalanced computation during training. In order to prevent routing collapse, we set a quite small balance factor of 0.001 because we find that under our parallelization strategy, a higher balance factor cannot increase the computation efficiency, but instead, it will compromise the model performance. B Infrastructures We conduct experiments based on HAILLM (High-Flyer, 2023), an efficient and light-weight training framework which integrates multiple parallelism strategies, including tensor 1293 parallelism (Shoeybi et al., 2019; Narayanan et al., 2021; Korthikanti et al., 2023), ZeRO data parallelism (Rajbhandari et al., 2020), PipeDream pipeline parallelism (Harlap et al., 2018), and more specifically, expert parallelism (Lepikhin et al., 2021) by combining data and tensor parallelism. In order to optimize performance, we develop GPU kernels with CUDA and Triton (Tillet et al., 2019) for gating algorithms and fusing computations across linear layers in different experts. All experiments are carried out on clusters equipped with NVIDIA A100 or H800 GPUs. Each node in the A100 cluster contains 8 GPUs connected pairwise via the NVLink bridge. The H800 cluster also features 8 GPUs per node, interconnected using NVLink and NVSwitch within nodes. For both A100 and H800 clusters, InfiniBand interconnects are utilized to facilitate communication across nodes. C Comparisons among DeepSeekMoE and Larger Models We show the comparisons among DeepSeekMoE, larger GShard models, and larger dense models in Table 3. D Training Benchmark Curves of DeepSeekMoE 16B We present the benchmark curves during training of DeepSeekMoE 16B and DeepSeek 7B (Dense) in Figure 6 for reference. E Evaluation on Open LLM Leaderboard Beyond our internal evaluations, we also evaluate DeepSeekMoE 16B on the Open LLM Leaderboard2 and compare it with other open source models. The Open LLM Leaderboard is a public leaderboard supported by HuggingFace, it consists of six tasks: ARC (Clark et al., 2018), HellaSwag (Zellers et al., 2019), MMLU (Hendrycks et al., 2020), TruthfulQA (Lin et al., 2022), Winogrande (Sakaguchi et al., 2019), and GSM8K (Cobbe et al., 2021). In addition to LLaMA2 7B, we take a broader set of open source models into consideration, including LLaMA 7B (Touvron et al., 2023a), Falcon 7B (Almazrouei et al., 2023), GPT-J 6B (Wang and Komatsuzaki, 2021), RedPajamaINCITE 7B and 3B (Together-AI, 2023), Open LLaMA 7B and 3B (Geng and Liu, 2023), OPT 2https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard 2.7B (Zhang et al., 2022), Pythia 2.8B (Biderman et al., 2023), GPT-neo 2.7B (Black et al., 2021), and BLOOM 3B (Scao et al., 2022). The evaluation results, as presented in Figure 7, show that DeepSeekMoE 16B consistently outperforms models with similar activated parameters by a large margin. Moreover, it achieves comparable performance with LLaMA2 7B, which has approximately 2.5 times the activated parameters. 
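As a quick arithmetic check, the 2T-token budget quoted in Appendix A.2 follows directly from the reported batch configuration; treating "4K" as 4,096 tokens is an assumption made only for this illustration.

```python
# Rough check of the DeepSeekMoE 16B token budget from the values reported above.
batch_size = 4_500                                  # "4.5K" sequences per batch
seq_len = 4_096                                     # "4K" maximum sequence length (assumed 4,096)
total_steps = 106_449
tokens_per_batch = batch_size * seq_len             # ~18.4M, reported as 18M
total_tokens = tokens_per_batch * total_steps       # ~1.96e12, i.e. roughly 2T
print(f"{tokens_per_batch / 1e6:.1f}M tokens/batch, {total_tokens / 1e12:.2f}T tokens total")
```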
Metric | # Shot | GShard×1.5 | Dense×16 | DeepSeekMoE
Relative Expert Size | N/A | 1.5 | 1 | 0.25
# Experts | N/A | 0 + 16 | 16 + 0 | 1 + 63
# Activated Experts | N/A | 0 + 2 | 16 + 0 | 1 + 7
# Total Expert Params | N/A | 2.83B | 1.89B | 1.89B
# Activated Expert Params | N/A | 0.35B | 1.89B | 0.24B
FLOPs per 2K Tokens | N/A | 5.8T | 24.6T | 4.3T
# Training Tokens | N/A | 100B | 100B | 100B
Pile (Loss) | N/A | 1.808 | 1.806 | 1.808
HellaSwag (Acc.) | 0 | 54.4 | 55.1 | 54.8
PIQA (Acc.) | 0 | 71.1 | 71.9 | 72.3
ARC-easy (Acc.) | 0 | 47.3 | 51.9 | 49.4
ARC-challenge (Acc.) | 0 | 34.1 | 33.8 | 34.3
RACE-middle (Acc.) | 5 | 46.4 | 46.3 | 44.0
RACE-high (Acc.) | 5 | 32.4 | 33.0 | 31.7
HumanEval (Pass@1) | 0 | 3.0 | 4.3 | 4.9
MBPP (Pass@1) | 3 | 2.6 | 2.2 | 2.2
TriviaQA (EM) | 5 | 15.7 | 16.5 | 16.6
NaturalQuestions (EM) | 5 | 4.7 | 6.3 | 5.7
Table 3: Comparisons among DeepSeekMoE, larger GShard models, and larger dense models. In the line of "# Experts", a + b denotes a shared experts and b routed experts. In the line of "# Activated Experts", a + b denotes a activated shared experts and b activated routed experts. DeepSeekMoE achieves comparable performance with a GShard model containing 1.5 times expert parameters and computation. In addition, DeepSeekMoE nearly approaches the performance of a dense model with 16 times FFN parameters, which sets the upper bound for MoE models in terms of the model capacity.
Figure 6: Benchmark curves during training of DeepSeekMoE 16B and DeepSeek 7B (Dense). [Eighteen panels plot performance against training tokens (0-2000B) for HellaSwag, PIQA, ARC-easy, ARC-challenge, RACE-middle, RACE-high, DROP, GSM8K, HumanEval, MBPP, TriviaQA, NaturalQuestions, MMLU, WinoGrande, CLUEWSC, CEval, CMMLU, and CHID.]
Figure 7: Comparison between DeepSeekMoE 16B and open source models on the Open LLM Leaderboard. [Average performance is plotted against the number of activated parameters (billions) for DeepSeekMoE 16B, LLaMA2 7B, LLaMA 7B, Falcon 7B, Open LLaMA 7B, RedPajama-INCITE 7B, GPT-J 6B, RedPajama-INCITE 3B, Open LLaMA 3B, Pythia 2.8B, OPT 2.7B, GPT-neo 2.7B, and BLOOM 3B.]
2024
70
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12956–12973 August 11-16, 2024 ©2024 Association for Computational Linguistics Generalizability of Mixture of Domain-Specific Adapters from the Lens of Signed Weight Directions and its Application to Effective Model Pruning Tuc Nguyen Department of Computer Science Indiana University [email protected] Thai Le Department of Computer Science Indiana University [email protected] Abstract Several parameter-efficient fine-tuning methods based on adapters have been proposed as a streamlined approach to incorporate not only a single specialized knowledge into existing Pre-Trained Language Models (PLMs) but also multiple of them at once. Recent works such as AdapterSoup propose to mix not all but only a selective sub-set of domain-specific adapters during inference via model weight averaging to optimize performance on novel, unseen domains with excellent computational efficiency. However, the essential generalizability of this emerging weight-space adapter mixing mechanism on unseen, in-domain examples remains unexplored. Thus, in this study, we conduct a comprehensive analysis to elucidate the generalizability of domain-specific adapter mixtures in in-domain evaluation. We also provide investigations into the inner workings of the mixture of domain-specific adapters by analyzing their weight signs, yielding critical analysis on the negative correlation between their fraction of weight sign difference and their mixtures’ generalizability. The code is available at Github. 1 Introduction Recently, several parameter-efficient fine-tuning methods that are based on adapters have been introduced as a streamlined approach for fine-tuning Pretrained Language Models (PLMs) to equip them with new, specialized knowledge or domain. Several algorithms have been proposed to train a distinct adapter for each new domain (Houlsby et al., 2019; Pfeiffer et al., 2021; Hu et al., 2022). To further improve a model’s generalizability, existing works (Pfeiffer et al., 2021; Wang et al., 2021a; Diao et al., 2023) mostly focus on training multiple adapters for multiple tasks and continuously adding more adapters for incoming new tasks. This can be inefficient for the new domain tasks that have only a few examples, making the learning among the tasks unequal. Thus, more recent works such as Matena and Raffel (2022); Wang et al. (2022, 2021b); Li et al. (2022); Chronopoulou et al. (2023) opt for weight-space averaging of model and/or adapters trained on different domains, resulting in so-called Mixture of Expert Adapters. One recent notable work in this space is AdapterSoup (Chronopoulou et al., 2023), which proposes to merge the weights of a fixed-size, selective subset of different domain-specific adapters via an averaging function to accommodate unseen tasks or domains during inference. Such weight-space merging mechanism on adapters is efficient in practice as one can efficiently train a small, additional adapter and plug it into existing PLMs to incorporate new knowledge. Although the work reported favorable evaluation results on unseen, novel domains, it is unclear to what extent such weight-space merging mechanism on domainspecific adapters can generalize in an in-domain evaluation setting–i.e., how well it makes predictions on unseen examples of domains already seen during training. 
Moreover, to the best of our knowledge, no existing works comprehensively study the generalization of the mixture of adapters in the indomain setting. This literature gap seems counterintuitive because in-domain evaluation is fundamental and should precede out-of-domain evaluation. Moreover, in real-world applications, model owners have incentives to utilize as much as possible available information to improve their models over time. With the availability of parameterefficient finetuning methods that are fairly easy to adopt with minimal space and runtime cost, the model owners are then incentivized to quickly finetune their models on a few examples collected from a new domain on an additional adapter to optimize the performance (rather than totally relying on outof-domain prediction capability). As a result, although in-domain evaluation seems trivial, it is fundamental as one must ensure that the mixture of adapters works well on the tasks they have already 12956 Adapter Pool Feature Space Step 1: Domain adaptation Step 2: Adapters mixing Figure 1: Mixing the adapter weights across various tasks may result in the importance weights of individual tasks nullifying each other, thereby yielding a merged mixture losing important information. been trained on. Furthermore, several key questions regarding the resulting mixtures of domainspecific adapters remain unanswered, especially those regarding their generalizability and their adversarial robustness when mixing adapters trained from very different tasks. Therefore, borrowing the pop-culture saying that “mixed drinks and cocktails aren’t actually the same thing”, in contrast from existing works, we hypothesize that not all mixture of expert adapters are created equal and all have superior performance. Then, through an array of comprehensive experiments, we attempt to give answers to questions about when and what to mix when it comes to domain-specific adapters. We found that the weight-space merging mechanism suffers from performance degradation in terms of both generalizability and adversarial robustness even with inputs from domains it already trains on. Moreover, we also attempt to explain such performance degradation by revealing a critical negative correlation between signed directions of adapter weights during mixing and domainspecific predictive performance (Fig. 1). Although simple, this intuitive and novel observation also allows us to select “when and what adapters to mix?” and design a more effective model pruning as a by-product application. Overall, our study does not focus on proposing a new mechanism, algorithm, or method. Instead, we focus on analyzing and bringing understanding of an existing and emerging paradigm of mixing multiple domain-specific adapters that was previously introduced (Chronopoulou et al., 2023; Wang et al., 2022). Specifically, we focus on in-domain prediction when mixing adapters from different domains as an emerging and potential paradigm for the deployment of PLMs in practice. Our contributions are summarized as follows. 1. This is the first and most comprehensive analysis of in-domain generalizability of a mixture of domain-specific adapters with 3 different adapter methods on 13 diverse classification datasets, 2. We provide insights and analysis on when and what adapters to mix to minimize performance degradation via the lens of signed directions of adapters’ parameter weights, 3. 
We demonstrate the utility of such insights to train mixtures of adapters with 90% sparsity that improve both generalizability and efficiency. 2 Related works Adapter Fine-tuning. The primary method for adapting general-purpose PLMs to downstream tasks is via full fine-tuning, which requires adjusting all models’ parameters (Peters et al., 2018; Devlin et al., 2019a). However, this results in redundant copies of fine-tuned models for each task, posing a significant memory challenge. Thus, various parameter-efficient fine-tuning methods have been proposed, including prompt-based (Li and Liang, 2021) and adapter-based fine-tuning (Houlsby et al., 2019; Pfeiffer et al., 2021; Hu et al., 2022). Among adapter-based fine-tuning methods, Houlsby (Houlsby et al., 2019) introduces two adapter blocks with bottleneck networks in each Transformer block of a PLM. Similarly, the Pfeiffer (Pfeiffer et al., 2021) adapter differs in architecture, incorporating only one adapter layer in each Transformer block, in contrast to the two layers introduced by Houlsby (Houlsby et al., 2019). LoRA (Hu et al., 2022) takes a distinctive approach by freezing the MLP modules of transformers and representing updates to attention weights with two low-rank matrices to optimize space while effectively retaining model performance. In this work, we focus on analyzing adapter-based fine-tuning methods as they are more popular and effective. Mixture of Expert Adapters. Additionally, several approaches (Wang et al., 2021a; Pfeiffer et al., 2021, 2020; Wang et al., 2022) have been proposed to further optimize their adapters for various downstream tasks by maintaining a set of adapters and combine them during inference. Particularly, AdaMix Wang et al. (2022) fine-tunes so-called Mixture of Experts (MoEs) with adapters on a downstream task and averaging their weights during inference. In addition, Li et al. (2022) explores performance in novel domains through weight averaging on entire language models. Similarly, 12957 mnli mrpc qnli qqprte sst2rct tweets imdbag finan author wiki mnli mrpc qnli qqp rte sst2 rct tweets imdb ag finan author wiki (a).Cosine similarity 0.15 0.10 0.05 0.00 0.05 0.10 0.15 0.20 0.25 0.4 0.3 0.2 0.1 0.0 0.1 mrpc sst2 qnli author rte tweets mnli rct wiki qqp finan imdb ag (b).LDA topic distribution 0.0 0.2 0.4 0.6 0.8 1.0 Figure 2: (a) Datasets’ semantic similarity via cosinesimilarity among centroids of Universal Sentence Encoder (USE) (Cer et al., 2018) embeddings of 1K randomly sampled documents from each dataset. (b) Topic distributions via Latent Dirichlet Allocation (LDA) (Blei et al., 2003). AdapterSoup (Chronopoulou et al., 2023) opts for weight-space averaging of adapters trained on different domains. Among these methods, weightspace averaging is identified as the most intuitive method for mixing different adapters (Jin et al., 2023a) and (Chronopoulou et al., 2023). Nevertheless, none of these works comprehensively evaluates and analyzes the generalizability and adversarial robustness of the resulting mixture of adapters under different mixtures of domainspecific knowledge, which is necessary to answer the question “when and what to mix?”. 3 Comprehensive In-Domain Evaluation To evaluate the in-domain performance of adapter mixtures, we train several adapters with domainspecific knowledge and mix them in different combinatorial combinations. 
Then, we evaluate each combination on different downstream tasks on two aspects: (1) generalizability on unseen in-domain examples and (2) adversarial robustness under adversarial text attacks.
3.1 Evaluation Datasets
Diverse Domain Knowledge. To simulate knowledge diversity, we gather a total of 13 distinct and diverse domain-specific classification datasets for evaluation. We refer the readers to Appendix A.1 for detailed information and their linguistic statistics. Fig. 2 reveals the intricate diversity within our selected datasets, both semantic and topic-wise. Notably, SST2 and IMDB, both originating from the same movie corpus, exhibit proximity in the topic embedding space. On the contrary, non-formal datasets such as Wiki and Tweets are distinctly distant from other datasets in this regard. We refer the readers to Appendix A.2 for a detailed exploration and analysis of the topic distributions among the datasets.
3.2 Mixing Fine-Tuned Adapters
Base Models and Individual Adapters. We design our evaluation using two transformer-based models, namely BERT (Devlin et al., 2019b) and RoBERTa (Liu et al., 2019), with 3 diverse and well-known adapter methods: Houlsby (Houlsby et al., 2019), Pfeiffer (Pfeiffer et al., 2021), and LoRA (Hu et al., 2022). These adapter-based methods introduce variations in the adapter architecture and parameterization (Sec. 2), contributing to the comprehensiveness of our analysis.
Mixing Adapters. From the pre-trained weights θ_PLM of either BERT or RoBERTa, we train a suite of 13 domain-specific adapters tailored for diverse domains, denoted as θ_D1, θ_D2, ..., θ_Dk. Following (Chronopoulou et al., 2023), the final inference of the target mixture of domain-specific adapters becomes:
f(x, θ_PLM + (1/k) Σ_{i=1}^{k} θ_Di)    (1)
3.3 Adversarial Text Generation
Textual adversarial attacks are popular in AI robustness research. Given a dataset D = {(x_i, y_i)}_{i=1}^{N}, where x represents the sample and y denotes the ground-truth label, a textual adversarial attack aims to attack a PLM f_θ with a classification loss function L by perturbing each sample x with perturbation noise δ under a certain budget C:
arg max_{δ ∈ C} L[f_θ(x + δ), y]    (2)
Toward evaluating the robustness of a mixture of adapters, we employ both black-box and white-box textual attacks to exercise Eq. 2. We utilize the popular TextFooler (Jin et al., 2020) as the black-box attack, which aims to replace words with synonyms or contextually similar words to deceive PLMs. We utilize the well-known FGSM (Goodfellow et al., 2015) as the white-box attack, which efficiently crafts adversarial examples by perturbing the embeddings of text data in the direction of the sign of the gradient of the loss function with respect to the input, thereby exposing vulnerabilities in model robustness.
3.4 Combinatory Evaluation
To evaluate in-domain performance for each target domain, we generate all possible combinations of adapters.
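As a concrete illustration of Eq. 1, the sketch below averages the parameters of k trained adapters in weight space before loading them back into the frozen backbone. It assumes each adapter is exported as a plain PyTorch state dict with matching keys and is not tied to any particular adapter library API.

```python
# Minimal sketch of Eq. 1: weight-space averaging of k domain-specific adapters.
# Assumption: each adapter is a state dict {param_name: tensor} with identical keys.
import torch

def mix_adapters(adapter_state_dicts):
    """Return the element-wise average of the given adapter state dicts."""
    mixed = {}
    for name in adapter_state_dicts[0].keys():
        stacked = torch.stack([sd[name].float() for sd in adapter_state_dicts], dim=0)
        mixed[name] = stacked.mean(dim=0)            # (1/k) * sum_i theta_{D_i}
    return mixed

# Usage sketch: load k per-domain adapters saved with torch.save, mix them, and
# load the result into the (frozen) PLM that hosts the adapter modules.
# paths = ["adapter_mnli.pt", "adapter_qnli.pt", "adapter_sst2.pt"]   # hypothetical files
# mixed = mix_adapters([torch.load(p, map_location="cpu") for p in paths])
# model.load_state_dict(mixed, strict=False)         # only adapter keys are overwritten
```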
To illustrate, for the target domain MNLI, we can first evaluate a mixture consisting of only the MNLI adapter itself. When combining two adapters, we have the flexibility to choose 1 additional adapter out of the remaining 12, resulting in 12 possible combinations. For a set of 3 adapters, including MNLI, we select 2 adapters out of the 12 to generate C(12, 2) combinations. This process continues for sets ranging from 4 to 13 adapters, where, in the case of 13 adapters, all adapters are combined. Thus, we have 13 × (1 + Σ_{i=1}^{12} C(12, i)) = 53,248 combinations for all domains. We report the mean and variance of in-domain performance for each set of mixtures of k adapters. Notably, this setup already includes all the possible mixtures potentially selected by AdapterSoup (Chronopoulou et al., 2023), which proposes an additional mechanism to select a subset of a fixed number of domain-specific adapters to mix.
4 Experiments
4.1 Implementation Details
For Ag News, Authorship, Financial, IMDB, Tweets, and Wiki-Toxic, we partition each dataset into three segments with an 8:1:1 train:val:test split. For datasets belonging to the GLUE corpus, we employ their public training and evaluation splits. For the black-box TextFooler attack, we set the minimum embedding cosine similarity between a word and its synonyms to 0.8, and the minimum USE similarity to 0.84. For the white-box FGSM (Goodfellow et al., 2015) attack, we set the perturbation magnitude to 0.01. Following the setup of Houlsby and Pfeiffer (Houlsby et al., 2019; Pfeiffer et al., 2021), we use adapters with a dimension of 64 and 256 for the RoBERTa-large and BERT-base models, respectively. With LoRA, we use rank r=4 following (Hu et al., 2022). Detailed training/evaluation dataset and hyper-parameter configurations for different tasks are presented in Appendix A.3.
4.2 In-Domain Performance
Results Overview. We present the performance of a candidate setting of RoBERTa with the Pfeiffer adapter (Pfeiffer et al., 2021) under different numbers of additional mixing domains in Fig. 3.
Figure 3: Accuracy of RoBERTa with Pfeiffer (Pfeiffer et al., 2021) in each target domain. The x-axis denotes the number of mixed adapters. [Twelve panels (MNLI, MRPC, QNLI, QQP, RCT, RTE, SST2, Tweets, IMDB, Ag News, Financial, Authorship) plot accuracy on clean inputs and under the black-box (TextFooler) and white-box (FGSM) attacks.]
Dataset | mnli | mrpc | sst2 | rte | qnli | qqp | rct | ag | authorship | financial | imdb | tweets | wiki | Average
∇clean | 15.04 | 8.31 | 4.90 | 8.50 | 4.19 | 9.92 | 21.72 | 13.69 | 12.60 | 15.33 | 14.26 | 13.17 | 13.16 | 11.91
∇blackbox | 12.10 | 16.07 | 10.43 | 10.21 | 10.13 | 9.82 | 12.32 | 12.62 | 13.42 | 13.82 | 13.04 | 11.83 | 10.52 | 12.03
∇whitebox | 10.14 | 11.21 | 8.31 | 9.42 | 12.14 | 10.24 | 12.53 | 12.06 | 10.45 | 9.53 | 10.04 | 10.24 | 7.53 | 10.30
Table 1: Average absolute performance drop (in %) of RoBERTa with Pfeiffer (Pfeiffer et al., 2021) when mixed from all domain adapters, on clean inputs and under black-box and white-box attacks.
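The size of this evaluation grid can be verified directly; the short sketch below enumerates, for each target domain, every subset of the remaining adapters and confirms the 53,248 total from Section 3.4 (the domain names simply follow the datasets used in this paper).

```python
# Sketch: enumerate all adapter mixtures of the combinatory evaluation (Sec. 3.4)
# and verify the total count of 13 * 2^12 = 53,248.
from itertools import combinations

domains = ["mnli", "mrpc", "qnli", "qqp", "rct", "rte", "sst2",
           "tweets", "imdb", "ag_news", "financial", "authorship", "wiki_toxic"]

def mixtures_for(target):
    others = [d for d in domains if d != target]
    for r in range(len(others) + 1):             # add 0..12 extra adapters
        for extra in combinations(others, r):
            yield (target,) + extra              # every mixture contains the target adapter

total = sum(1 for t in domains for _ in mixtures_for(t))
assert total == 13 * 2 ** 12 == 53_248
```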
Table 1 shows how much the predictive performance drops without and with adversarial black-box and whitebox attacks when mixing all adapters. Overall, the average performance drops over all tasks on the clean test set from the original performance to a mix of all adapters is 11.91%, and that for blackbox and white-box attacks are 12.03% and 10.30%, respectively. Further results of generalization and adversarial robustness performance across other models and adapter methods are documented in Appendix A.4. 12959 mnli mrpc qnli qqprctrte sst2 tweets imdbag financial authorship wiki mnli mrpc qnli qqp rct rte sst2 tweets imdb ag financial authorship wiki Pfeiffer-Roberta 0 10 20 30 40 50 Figure 4: Heatmap visualization of the Fraction of Sign Difference (in %) of Pfeiffer Adapters (Pfeiffer et al., 2021) trained on 13 domain-specific tasks with RoBERTa.. Finding #1: As we add more tasks or domains, the predictive performance of every single task decreases, reaching its lowest point when we incorporate the maximum of 13 adapters training on various topic datasets. Fig. 3 shows that mixing domain-specific adapters indeed decreases indomain performance (reduction of around 10% in MRPC, QQP, RTE, etc., and nearly 17% in the Financial domain when mixing 13 adapters). The same behaviors were also observed in (Jin et al., 2023b) where they merge the weight of PLMs. Notably, task accuracies decreased at a slower pace for QNLI and SST2 when evaluating with increasing size of mixtures (Fig. 3). In contrast, a substantial decrease in accuracy is observed for domains such as RCT, IMDB, Ag News, and Authorship (Fig. 3). This shows that mixing domain-specific adapters impair model performance differently depending on the target domain, or “what to mix” in an adapter mixture has a crucial effect on the mixture’s performance. To attempt to explain this behavior, we later present and verify a hypothesis that such mixing domain-specific adapters via weight averaging can result in “forgotten knowledge” that can happen due to the differences in signs when mixing these adapters (Sec. 5). Finding #2: On average, there are no notable differences of the magnitude in accuracy degradation with and without adversarial attacks (12.03% dropped in black-box attack versus 11.91% dropped without attack) (Fig. 3). The overall predictive performance was significantly lower under the white-box compared to the black-box attack as expected because the white-box FGSM attack has additional access to the models’ parameters. Interestingly, on the RCT dataset, the accuracy drop on clean (21.72%) is much larger than on black-box and white-box attacks (12.32% and 12.53%, respectively). Finding #3: Variance in task accuracy on adversarial attacks, when combining different domain adapters, is observed to be larger than the variance in clean accuracy (Fig. 3). Although the magnitude decrease appears similar in Fig.3, differences in variance (max, min) are discernible among each combination. Specifically, in MRPC, QNLI, QQP, RTE, SST2, Tweets, IMDB, Financial, and Authorship, we can observe that certain mixed models observed slightly better adversarial robustness compared to single adapters (when the number of mixing adapters k is 2 or 3). Moreover, the variance of robustness in adversarial scenarios tends to be higher than in clean scenarios because all of the adapters are trained on different tasks and they may exhibit different vulnerabilities to the same attack method. 
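The white-box attack referenced in these findings follows the standard FGSM recipe applied to the input embeddings (Eq. 2 with a single signed-gradient step of magnitude 0.01). The sketch below is a generic illustration; the model interface and argument names are assumptions, not the authors' code.

```python
# Illustrative FGSM-style white-box attack on input embeddings (cf. Eq. 2).
# Assumption: `model` maps embeddings of shape (batch, seq, dim) to class logits.
import torch
import torch.nn.functional as F

def fgsm_on_embeddings(model, embeddings, labels, epsilon=0.01):
    emb = embeddings.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(emb), labels)
    loss.backward()
    # Step in the direction of the sign of the gradient of the loss w.r.t. the input.
    adv_emb = emb + epsilon * emb.grad.sign()
    return adv_emb.detach()
```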
Finding #4: Mix up to three adapters to maintain competitive performance. Based on the results from Fig.3 and all findings above, it is advisable to mix only up to three tasks to maintain competitive performance (as observed in MRPC, QQP, RTE, Tweet, Financial, and Authorship domains with less than a 3% accuracy drop in accuracy). Especially, in some cases when evaluating QNLI, SST2, mixtures of less than three adapters even achieved better performance in terms of generalizability (up to 1%) and also in terms of adversarial robustness of up to 3% compared to the original performance. 5 Effects of Sign Differences of Adapter Weights during Mixing: A Hypothesis 5.1 An Explanation Hypothesis The ideal scenario when averaging adapter weights during mixing is to make minimal adjustments to their weights, both in terms of values–i.e., magnitudes, and directions, to sustain as much as possible the knowledge learned. Investigating how adapter weights mix regarding both their magnitudes and directions, can be overly complex. Thus, we simplify this assessment by focusing on the sign directions of the adapter weights in our analysis. Following the mixing process of k individual adapter weight in Eq. 1 (Sec. 4), we then hypothesize that mixing adapter weights of conflicting signs can result in “forgotten knowledge”, and lead to performance degradation. As illustrated in Fig. 1, averaging adapter weights across various tasks may 12960 lead to nullifying importance weights for individual tasks if their signs are opposite–i.e., positive v.s. negatives. In other words, the fraction of sign difference or FSD (%) or proportion of weight sign difference in adapter weights during mixing correlate with their mixtures’ generalizability. Alg. 3 (Appendix A.5) shows the calculation of FSD. We evaluate our hypothesis with different cases: (i) individual adapters (mixture with k=1), (ii) dual adapters (k=2), and generalize to (iii) multiple adapters (k>2). To demonstrate the utility of our hypothesis, we apply it to improve the generalizability of adapter mixtures and also to derive a more effective model pruning in Sec. 6, 7. 5.2 Individual Adapters (k=1) We calculate the FSD of adapter weights on RoBERTa and normalize it by the total number of weights, denoted as a matrix Sk×k where each row is the FSD of a single adapter train on task k to the remaining adapters (Fig. 4). We refer the readers to Sec. A.6 in the Appendix for results on BERT. Interestingly, a consistent trend in the FSD is observed across various model architectures (BERT, RoBERTa) and adapter methods (e.g., Fig. 4 and Fig. 9 of Appendix A.6). The reason is that adapters act as small MLP layers that integrate task-specific knowledge into pre-trained models (Meng et al., 2022) in different adapter methods. This shared functionality contributes to a similar trend in FSD, highlighting the robustness and generalizability of the observed behavior across different adapters’ architectural variations. In addition, Adapters trained on datasets with distinct topic distributions and cosine similarities (Fig. 2) exhibit varying weight directions (Fig. 4). Especially, MNLI has a similar linguistic distribution with other datasets (i.e. MRPC, RTE, Ag News, etc) (Fig. 2), but the adapter trained on MNLI has a significantly larger FSD compared to the remaining domain-specific adapters (Fig. 4). Thus, datasets that are similar in linguistic statistics may not necessarily share the same optimization trajectory. 
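For reference, the FSD statistic used throughout this section can be computed directly from two adapters' parameters. The sketch below is one straightforward reading of the definition (the paper's Alg. 3 in Appendix A.5 is not reproduced here), with hypothetical function names.

```python
# Sketch: Fraction of Sign Difference (FSD) between adapters, and the pairwise
# matrix S over k domain-specific adapters (cf. Sec. 5.2).
import torch

def fsd(adapter_a, adapter_b):
    """Percentage of corresponding weights whose signs differ between two adapters."""
    diff, total = 0, 0
    for name, w_a in adapter_a.items():
        w_b = adapter_b[name]
        diff += (torch.sign(w_a) != torch.sign(w_b)).sum().item()
        total += w_a.numel()
    return 100.0 * diff / total

def fsd_matrix(adapters):                        # adapters: list of k state dicts
    k = len(adapters)
    S = torch.zeros(k, k)
    for i in range(k):
        for j in range(k):
            if i != j:
                S[i, j] = fsd(adapters[i], adapters[j])
    return S                                     # row i: FSD of adapter i vs. each other adapter
```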
As a result, methods that are based on the linguistic distribution to choose the closest set of adapters to mix like AdapterSoup (Chronopoulou et al., 2023) may lead to sub-optimal performance. 5.3 Dual Adapters (k=2) Fig. 5 shows the FSD for each adapter when mixing two domain-specific adapters. This investigation is conducted within the context of the mnli qnli qqprctrte sst2 tweets imdb ag_news financial authorship wiki_toxic 0 5 10 15 20 25 30 35 Fraction of different (%) MRPC mnli mrpc qnli qqprte sst2 tweets imdb ag_news financial authorship wiki_toxic 0 5 10 15 20 25 30 35 RCT mnli mrpc qqprctrte sst2 tweets imdb ag_news financial authorship wiki_toxic 0 5 10 15 20 25 30 35 Fraction of different (%) QNLI mnli mrpc qnli qqprctrte sst2 imdb ag_news financial authorship wiki_toxic 0 5 10 15 20 25 30 35 Tweets 80 82 84 86 88 90 80 82 84 86 88 90 Accuracy(%) 90 92 94 96 98 100 90 92 94 96 98 100 Accuracy(%) Figure 5: FSD when mixing two (k=2) adapters. Sky-blue bars show the FSD (left y-axis). Dashed blue lines denote the accuracy achieved by a standalone adapter. Solid red lines illustrate the variations in accuracy after mixing. Please refer to Fig. 14 in Appendix A.7 for results in other tasks. Pfeiffer Adapter (Pfeiffer et al., 2021) using a pre-trained RoBERTa model. Overall, there is a strong negative correlation between the FSD and the generalizability–i.e., the lower the sky-blue bar (or the smaller fraction of weight sign conflicts), the higher the performance of the mixture (Fig. 5). Notably, tasks with substantial difference in the weight signs witness a pronounced performance decrease. Specifically, RCT exhibits significant performance drops due to substantial differences in adapter weight direction. Conversely, tasks such as MRPC, and QNLI demonstrate either marginal improvement or no change in performance when mixed with other adapters. This is well correlated to the marginal FSD of the mixed adapters, ranging from only 5% to 10% compared to the original weight. 5.4 Multiple Adapters (k>2) Similar to the dual-adapter setting, there is still a strong negative correlation between FSD and the generalizability, and increasing the number of mixed adapters amplified the sign disparity (Fig. 6). For example, adapters trained on RCT and Tweet exhibit large FSD compared to other adapters and hence observed a significant decrease in generalizability (Fig. 6). In some cases, mixing only a few adapters (e.g., 2–4) could still maintain competitive performance as in QNLI, SST2 domains (Fig. 6, 15). This correlates with the relatively 12961 2 3 4 5 6 7 8 9 10111213 0 10 20 30 40 50 Fraction of different (%) MRPC 2 3 4 5 6 7 8 9 10111213 0 10 20 30 40 50 RCT 2 3 4 5 6 7 8 9 10111213 0 10 20 30 40 50 Fraction of different (%) QNLI 2 3 4 5 6 7 8 9 10111213 0 10 20 30 40 50 Tweets 60 65 70 75 80 85 90 95 100 60 65 70 75 80 85 90 95 100 Accuracy(%) 60 65 70 75 80 85 90 95 100 60 65 70 75 80 85 90 95 100 Accuracy(%) Figure 6: Fraction of weights changing direction during the mixing of multiple adapters, ranging from 2 to 13. The chart description is similar to Fig. 5. We refer to Fig. 15 in Appendix A.7 for detailed results in other tasks. 1 2 3 4 5 6 7 8 9 10 11 12 13 Number of adapters 75.0 77.5 80.0 82.5 85.0 87.5 90.0 92.5 95.0 Accuracy (%) Empirical Max Greedy Max Empirical Mean Greedy Min Empirical Min Single Adapter Figure 7: Average model accuracy of 13 domains under different numbers of mixed adapters (l) using the guidance of FSD. 
marginal differences in adapter weights of QNLI and SST compared to adapters trained on other domains (Fig. 4), as their mixtures do not lead to significant cancellations of existing parameters and hence preserve learned knowledge. 5.5 Discussion Which adapters we should mix? From Sec. 5, we find that it is not advisable to merge adapters that are significantly different from each other in weight signs. On the other hand, mixing the weight of these adapters with other adapters which have a small FSD achieves competitive performance (within 3% drop in accuracy) only when the number of mixing adapters is small (QNLI, MRPC in Fig. 6) or can achieve better performance, although rarely, compared to training the original adapter as seen in QNLI domain (Fig. 5). Therefore, when deploying PLMs, it is prudent to only select a group Algorithm 1: Greedy Adapter Mixing Input: k domain-specific adapters, a matrix Sk×k of FSD, l number of adapters to fusion. Output: Average(candidates) 1: candidates ←{} 2: Compute the average of FSD 3: avgS = mean(S, axis = 1) 4: Select the top l from set of k adapters according to 5: smallest average FSD 6: candidates ←topl 7: return average(candidates) of tasks with a small FSD to minimize the performance drop in the final mixed model. Our observation is crucial in deploying these models in edge devices where only the adapters are saved on edge, which often has a specific memory capacity limit. Experiment comparison with AdapterSoup. AdapterSoup (Chronopoulou et al., 2023) dynamically selects a set of l adapters during inference. When l=k, then our experiment setting is the same as the AdapterSoup setting when we use all adapters at once. When l=1, it is the original performance of a single adapter, assuming that AdapterSoup is perfect at picking the same domain that is already trained on, and this result is already included in our paper (mixture of only 1 adapter, Fig. 3). When 1<l<k, we do not have the results of the specific combination that AdapterSoup would select. However, we reported the maximum performance across all combinatorial combinations among 13 domain-specific adapters or each value in Fig. 3. For example, when l=3, we only observe a possible comparative performance (within less than 3% drop) in QNLI, MRPC, and SST2 domains (Fig. 15). We also emphasize that it is one thing to select the best adapters to mix during inference, it is much harder to choose l or how many of them. 6 Greedy Adapter Mixing with FSD Our observations from Sec. 5 reveal that two adapters with minimal disparity in FSD can yield competitive performance when combined. Therefore, to demonstrate the utility of FSD, in this section, we design a mixing strategy, so-called Greedy Adapter Mixing (Alg. 1), that utilizes FSD to decide which domain-specific adapters to mix by minimizing the overall FSD in a greedy manner to get a final mixed model with competitive performance. This algorithm is also based on the hypothesis that mixing adapters with minimal FSD can yield mixture models of better generalizability. We proceed 12962 by evaluating model performance across various adapter combinations. Fig. 7 shows the generalizability performance of a mixture of l ∈[2, 13] domain-specific adapters. Overall, Greedy Adapter Mixing resulted in very competitive performance compared with empirical upper-bound accuracy. In contrast, using the same algorithm but maximizing FSD resulted in performance close to the empirical lower-bound accuracy (Fig. 7). 
7 Towards Effective Model Pruning

To further demonstrate the utility of our FSD analysis in Sec. 5 and Sec. 6, in this section we leverage FSD information to reduce knowledge loss by developing a pruning algorithm guided by FSD insights. Specifically, Fig. 5 shows that predictive performance drops significantly when integrating adapters with pronounced disparities in weight signs, i.e., positive vs. negative signs. Moreover, the neural network pruning literature indicates that only a limited number of weights significantly contribute to task performance, suggesting redundancy within the weights that can be pruned without compromising the original task performance (Han et al., 2016; Frankle et al., 2021; Lazarevich et al., 2021). Thus, to mitigate the impact of weight sign differences in adapter mixtures, we propose mixing only the sparse versions of the adapters' weights. Different from Alg. 1, this strategy indirectly reduces the fraction of weight sign conflicts. Intuitively, by minimizing the FSD, the mixing process becomes more resilient to the inadvertent elimination of important weights by less significant or redundant weights. This phenomenon is visually depicted in Step 2 of Fig. 1, where only significant weights in the two adapters need to be preserved, and small or unimportant weights of opposing signs can be eliminated.

7.1 FSD-based Magnitude Pruning

Post-training Pruning. Sparse Adapter (He et al., 2022) employs pruning across every layer of adapters and is able to achieve comparable or even superior predictive performance to standard adapters, even when the sparsity ratio reaches up to 80%. By adopting a similar process of pruning adapters after training, with the guidance of our FSD analysis in Sec. 5, we can eliminate redundant parameters at an early stage, circumventing the need for a time-consuming iterative pruning process, as discussed in prior works such as Han et al. (2016) and Lazarevich et al. (2021). The detailed pruning algorithm is presented in Alg. 2.

Algorithm 2: FSD-based Pruning
Input: adapter parameters w, sparsity ratio s.
Output: pruned adapter ˜w.
1: w ← Trained(w)
2: Compute importance score z = |w|
3: Compute the s-th percentile of z as zs
4: m ← 1[z − zs ≥ 0]
5: ˜w ← m ⊙ w

Figure 8: (a) Average RoBERTa performance with a single sparse adapter across 13 domains with increasing sparsity. (b) Model accuracy when increasing the number of sparse adapters being mixed. Dashed red lines represent model generalization when mixing domain-specific adapters. Dashed blue lines depict the maximum performance when mixing k domain-specific dense adapters. Solid green and solid purple lines represent model performance when mixing k domain-specific adapters with 80% and 90% sparsity, respectively.
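A minimal rendering of Alg. 2 in PyTorch is given below; whether the magnitude threshold is computed globally or per layer is not spelled out above, so the per-layer variant here is an assumption.

import torch

def magnitude_prune(adapter: dict, s: float) -> dict:
    """FSD-motivated magnitude pruning (Alg. 2): keep only weights whose
    magnitude reaches the s-th quantile of |w|, zeroing out the rest.
    `s` is the sparsity ratio in [0, 1), applied per layer here."""
    pruned = {}
    for name, w in adapter.items():
        z = w.abs()                               # importance score z = |w|
        z_s = torch.quantile(z.flatten(), s)      # the s-th percentile of z
        mask = (z >= z_s).to(w.dtype)             # m = 1[z - z_s >= 0]
        pruned[name] = mask * w                   # ~w = m ⊙ w
    return pruned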
Generalizability of Pruned Adapters. Fig. 8a shows RoBERTa's performance when we systematically prune the weights of the Pfeiffer adapter at 40%–100% sparsity. For a single task at sparsity level d%, we retain only the largest, i.e., the top-d% most influential, parameters of the corresponding adapter, and report the average in-domain performance across 13 domains. Remarkably, pruning up to 90% of the adapter weights does not lead to performance degradation. This observation suggests that redundancy in adapters' parameters may contribute to the increase in the fraction of weight direction conflicts, i.e., the high FSD when merging them (Fig. 6).

Generalizability of Mixed Pruned Adapters. Motivated by the observation that a pruned adapter can still maintain its original performance at up to 90% sparsity (Fig. 8a), we hypothesize that mixing sparse adapters may indirectly reduce the fraction of weight sign conflicts and therefore lead to performance competitive with the original adapters. Given a set of domain-specific adapters, based on the FSD, we choose the top k layers with the highest fraction of weight sign difference and prune each layer up to 90% sparsity. Then we mix these adapters by weight averaging. Details of mixing domain-specific adapters with weight sign conflict information are shown in Alg. 4 (Appendix A.8). Fig. 8b shows that mixing adapters with 80% or 90% sparsity consistently achieves better performance than the empirical upper-bound accuracy obtained when mixing their dense versions.

8 Conclusion

This work provides a comprehensive empirical in-domain evaluation of the emerging mechanism of mixing domain-specific adapters. We also provide insights into the inner workings of mixtures of domain-specific adapters by analyzing their weight signs, yielding critical observations on the negative correlation between the fraction of sign difference among adapters and their mixtures' generalizability. By examining the signed directions of adapter weights, we also offer readers valuable advice on the optimal selection of adapters to mix to achieve competitive performance. Such examination also helps enhance our understanding of the interconnected role of weight sign difference in the context of sparse neural networks.

Limitations

Primarily, our exploration focused solely on one classic pruning method, namely Magnitude Pruning (Sanh et al., 2020), while more advanced pruning techniques exist, such as SynFlow (Tanaka et al., 2020) and GraSP (Wang et al., 2020), that are also applicable for condensing neural network architectures. Consequently, future work includes investigating the applicability of our findings to these alternative pruning approaches. Furthermore, our examination was confined to natural language understanding tasks. A valuable avenue for future research would involve extending our analysis to encompass emerging text generation tasks, particularly within the context of current transformer-based language models, including but not limited to machine translation tasks utilizing complex GPT-family models.

References

Malik Altakrori, Jackie Chi Kit Cheung, and Benjamin C. M. Fung. 2021. The topic confusion task: A novel evaluation scenario for authorship attribution. In EMNLP.

Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2009. The fifth PASCAL recognizing textual entailment challenge. In TAC.

David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. In Journal of Machine Learning Research.
Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. 2018. Universal sentence encoder for english. In EMNLP. Alexandra Chronopoulou, Matthew E Peters, Alexander Fraser, and Jesse Dodge. 2023. Adaptersoup: Weight averaging to improve generalization of pre-trained language models. In EACL. Franck Dernoncourt and Ji Young Lee. 2017. PubMed 200k RCT: a dataset for sequential sentence classification in medical abstracts. In Proceedings of the Eighth International Joint Conference on Natural Language Processing. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019a. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019b. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL. Shizhe Diao, Tianyang Xu, Ruijia Xu, Jiawei Wang, and Tong Zhang. 2023. Mixture-of-domain-adapters: Decoupling and injecting domain knowledge to pretrained language models memories. In ACL. Bill Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Third International Workshop on Paraphrasing. Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M Roy, and Michael Carbin. 2021. Pruning neural networks at initialization: Why are we missing the mark? In ICLR. Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In ICLR. Song Han, Huizi Mao, and William J Dally. 2016. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. In ICLR. 12964 Shwai He, Liang Ding, Daize Dong, Jeremy Zhang, and Dacheng Tao. 2022. SparseAdapter: An easy approach for improving the parameter-efficiency of adapters. In EMNLP. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In ICLR. Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. Lora: Low-rank adaptation of large language models. In ICLR. Shankar Iyer, Nikhil Dandekar, Kornél Csernai, et al. 2017. First quora dataset release: Question pairs. In data. quora. com. Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is BERT really robust? A strong baseline for natural language attack on text classification and entailment. In AAAI. Xisen Jin, Xiang Ren, Daniel Preotiuc-Pietro, and Pengxiang Cheng. 2023a. Dataless knowledge fusion by merging weights of language models. In ICLR. Xisen Jin, Xiang Ren, Daniel Preotiuc-Pietro, and Pengxiang Cheng. 2023b. Dataless knowledge fusion by merging weights of language models. In ICLR. Ivan Lazarevich, Alexander Kozlov, and Nikita Malinin. 2021. Post-training deep neural network pruning via layer-wise calibration. In ICCV. Margaret Li, Suchin Gururangan, Tim Dettmers, Mike Lewis, Tim Althoff, Noah A Smith, and Luke Zettlemoyer. 2022. Branch-train-merge: Embarrassingly parallel training of expert language models. In arXiv. Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In ACL. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. In arXiv. 
Pekka Malo, Ankur Sinha, Pekka Korhonen, Jyrki Wallenius, and Pyry Takala. 2014. Good debt or bad debt: Detecting semantic orientations in economic texts. In Journal of the Association for Information Science and Technology. Michael Matena and Colin Raffel. 2022. Merging models with fisher-weighted averaging. In NeurIPS. Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. 2022. Locating and editing factual associations in gpt. In NeurIPS. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In NAACL. Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, and Iryna Gurevych. 2021. Adapterfusion: Non-destructive task composition for transfer learning. In EACL. Jonas Pfeiffer, Ivan Vuli´c, Iryna Gurevych, and Sebastian Ruder. 2020. MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer. In EMNLP. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP. Victor Sanh, Thomas Wolf, and Alexander Rush. 2020. Movement pruning: Adaptive sparsity by fine-tuning. In NeurIPS. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP. Hidenori Tanaka, Daniel Kunin, Daniel L Yamins, and Surya Ganguli. 2020. Pruning neural networks without any data by iteratively conserving synaptic flow. In NeurIPS. Chaoqi Wang, Guodong Zhang, and Roger Grosse. 2020. Picking winning tickets before training by preserving gradient flow. In ICLR. Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Guihong Cao, Daxin Jiang, Ming Zhou, et al. 2021a. K-adapter: Infusing knowledge into pre-trained models with adapters. In ACL. Xinyi Wang, Yulia Tsvetkov, Sebastian Ruder, and Graham Neubig. 2021b. Efficient test time adapter ensembling for low-resource language varieties. In EMNLP. Yaqing Wang, Sahaj Agarwal, Subhabrata Mukherjee, Xiaodong Liu, Jing Gao, Ahmed Hassan Awadallah, and Jianfeng Gao. 2022. AdaMix: Mixture-ofadaptations for parameter-efficient model tuning. In EMNLP. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL. 12965 A Appendix Dataset mnli mrpc sst2 rte Train 392,702 3,668 67,349 2,490 Test 9,815 408 872 277 Dataset qnli qqp rct ag Train 104,743 363,846 178,882 120,000 Test 5,463 40,430 30,135 7,600 authorship financial imdb tweets wiki 2,743 4,846 22,500 31,962 127,656 686 484 2,500 3,196 63,978 Table 2: Number of instances for each dataset divided by training and test set. mnli mrpc qnli qqp rct rte sst2 tweets imdb ag_news financial authorship wiki toxic mnli mrpc qnli qqp rct rte sst2 tweets imdb ag_news financial authorship wiki toxic Houlsby-Bert mnli mrpc qnli qqp rct rte sst2 tweets imdb ag_news financial authorship wiki toxic Pfeiffer-Bert 0.0 0.1 0.2 0.3 0.4 0.5 Figure 9: Fraction on differences of adapter weight direction. Algorithm 3: Fraction of weight sign difference (FSD). Input: two adapters with similar architecture learned from two domain θA, θB Output: Fraction f of weight sign difference 1: Compute total parameters s in each adapter (s = sA = sB) 2: C ←{} 3: For every layer in adapter θA, θB, compute 4: element-wise product for each layer. 
5: for k, v in θA.items() do: 6: C[k] = mul(θA[k], θB[k]) 7: Count total of numbers which value is 8: smaller than 0. 9: for k, v in C.items() do: 10: counter + = sum(value < 0) 11: f = counter/s 12: return f A.1 Datasets. Diverse Knowledge Datasets. To simulate knowledge diversity, we gather a total of 13 distinct and diverse domain-specific datasets or classification tasks for evaluation. They are MNLI (Williams et al., 2018), QNLI (Rajpurkar et al., 2016), RTE (Bentivogli et al., 2009), MRPC (Dolan and Brockett, 2005), QQP (Iyer et al., 2017) and SST2 (Socher et al., 2013) from the GLUE domain corpus. PubMed-20K RCT dataset (Dernoncourt and Lee, 2017) from Biology domain for sentence classification. IMDB dataset from a Movie Review domain. Ag News, Financial (Malo et al., 2014) and Guardian Authorship (Altakrori et al., 2021) are News domain datasets across World, Sports, Business, Science/Technology, and Financial topics. Wiki Toxic 1 and Tweets Hate Speech are two Informal text domain for toxicity detection. Linguistic Statistic. Table 3 shows detailed statistics such as number of documents, average document length, and sentence lengths. A.2 Topic distribution of training datasets Tables from 4 to 16 show 10 topics and corresponding important words which are exacted from LDA for each training dataset. Notably, Ag News and SST2 have high cosine similarity but observe a large difference in terms of topic distribution compared to other domains (Fig. 2). Therefore, each statistical mechanism like cosine similarity or topic distribution only reflects one aspect of data distribution and may show inconsistencies with each other. A.3 Hyper-parameter Training and evaluation datasets. To assess performance in out-of-distribution scenarios, we conduct evaluations on a diverse set of 13 datasets covering various topics, ranging from movie reviews, news, authorship, and healthcare, to non-formal language text such as Wiki Toxic and Tweets. For datasets within the GLUE corpus, we employ training and evaluation datasets to gauge accuracy across different settings. In the case of Ag News, Authorship, Financial, IMDB, Tweets, and WikiToxic, we partition the training set into three segments with an 8:1:1 ratio, utilizing them for training, evaluation, and test datasets, respectively. This approach ensures a comprehensive evaluation of model performance across a wide spectrum of domains and linguistic styles. Table 2 shows data statistics on train/test datasets. Setting on text adversarial attack. In this study, we employ two types of attacker methods: TextFooler (Jin et al., 2020) and FGSM (Goodfellow et al., 2015). TextFooler word-level attacks focus on replacing words within the text with synonyms or contextu1https://www.kaggle.com/competitions/jigsaw-toxiccomment-classification-challenge/ 12966 Data Source Average Document Length Average Sentence Length Average # Sentences per Document MNLI 15.1 14.7 1.0 MRPC 21.9 21.1 1.0 QNLI 18.2 18.0 1.0 QQP 11.1 9.9 1.2 RTE 26.2 18.1 1.4 SST 10.4 10.4 1.0 RCT 26.5 26.3 1.0 Ag-news 38.4 29.1 1.3 Authorship 1038.6 20.2 51.3 Financial 23.1 22.8 1.0 IMDB 233.8 21.6 10.8 Tweets 15.9 9.6 1.6 Wiki-toxic 67.8 15.4 4.4 Table 3: Length statistics. 
Table 4: Topic distribution on MNLI dataset #Topic MNLI 1 well, time, got, take, one, much, day, something, ive, even, way, long, little, make, back 2 kind, system, though, come, went, well, today, view, church, including, president, seems, across, run, policy 3 say, get, cost, guess, were, business, car, local, whole, north, rather, getting, question, technology, capital 4 service, state, world, get, big, pretty, give, war, yes, standard, real, here, came, call 5 probably, high, thought, however, set, hand, enough, said, since, type, jon, yet, and, service 6 could, mean, around, part, another, change, percent, made, course, life, book, fact, name, room 7 government, program, federal, information, country, problem, le, new, national, may, number, agency, report, organization 8 year, two, house, case, old, three, town, street, century, one, city, study, man, four, different 9 know, like, think, thats, right, really, people, thing, good, go, one, lot, going 10 yeah, work, legal, rule, last, year, he, american, small, home, company, act, group, analysis, public ally similar words. By making ostensibly minor alterations to the input text, these attacks can deceive LLMs into producing incorrect outputs or substantially modifying their predictions. We meticulously fine-tune the hyperparameters of TextFooler to obtain more appropriate synonyms. We set the minimum embedding cosine similarity between a word and its synonyms as 0.8, and the minimum Universal Sentence Encoder similarity is 0.84. FGSM (Goodfellow et al., 2015) is a white-box embedding-level attack. FGSM uses the Fast Gradient Sign Method to calculate gradients of the model’s loss to the input text and generates an adversarial example by perturbing the embedding of input text in the direction that maximizes the loss. We choose the magnitude of the perturbation in embedding space as 0.01 on BERT and RoBERTa models. Adapter Configuration. We use adapters with a dimension of 64 and 256 using RoBERTa-large and BERT-base encoders following the setup of (Houlsby et al., 2019), (Pfeiffer et al., 2021). With LoRA, we use rank r = 4 following the setup of (Hu et al., 2022). Hardware Information. We evaluate model performance on AMD Ubuntu 22.04.2 with Ryzen Threadripper PRO 5975WX, 1800MHz, Cached 512 KB and 4 × GPU Nvidia A6000. HyperParameters. Detailed hyper-parameter configuration for different tasks is presented in Table 17. A.4 Model performance when mixing adapters across tasks Tables 10, 11, 12 and 13 show task accuracy when mixing multiple adapters. A.5 Fraction of weight sign difference Alg. 3 presents a detailed algorithm to compute the FSD. A.6 Additional result on weight sign difference Fig. 9 shows the weight sign difference of the adapter and normalizes it by the total number of adapter weights in BERT. A.7 Additional results when mixing two adapters Fig. 14 and 15 show model generalization when mixing two and multiple adapters across various tasks. 
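For concreteness, the adapter configuration described in Appendix A.3 above (bottleneck dimension 64 for RoBERTa-large and 256 for BERT-base, cf. Table 17) refers to the standard bottleneck design of Houlsby et al. (2019) and Pfeiffer et al. (2021); a minimal PyTorch sketch of such a module is shown below. This is an illustrative module, not the exact implementation used in the experiments above.

import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Minimal bottleneck adapter: down-project, non-linearity, up-project,
    with a residual connection around the block."""
    def __init__(self, hidden_size: int, adapter_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, adapter_size)
        self.act = nn.GELU()
        self.up = nn.Linear(adapter_size, hidden_size)

    def forward(self, hidden_states):
        return hidden_states + self.up(self.act(self.down(hidden_states)))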
12967 Table 5: Topic distribution on MRPC dataset #Topic MRPC 1 said, court, company, would, official, statement, decision, made, state, appeal, two, board 2 said, year, people, president, program, time, million, two, last, house, official, weapon 3 said, million, would, state, period, compared, men, get, democratic, plan, company, united, also, could 4 percent, share, cent, million, stock, point, nasdaq, billion, new, index, trading, rose, per, year 5 said, also, state, iraq, center, united, attack, hospital, killed, war, three, american, people 6 said, two, home, police, told, state, friday, last, year, federal, company, yesterday, national 7 standard, poor, index, chief, point, said, percent, justice, one, spx, broader, executive, three 8 said, analyst, expected, street, many, suit, call, yesterday, angeles, wall, los, research, one, change, according 9 case, said, court, filed, death, also, charged, lawsuit, charge, state, found, reported, office, cancer 10 said, would, server, window, network, one, new, microsoft, also, taken, people, company Table 6: Topic distribution on QNLI dataset #Topic QNLI 1 city, american, south, large, west, season, de, roman, service, art, london, first, located, street, new 2 state, united, new, including, people, city, national, million, school, north, government, army, many, within, building 3 also, system, later, early, used, based, part, control, four, use, death, official, known, act, called 4 group, language, east, among, found, common, company, india, federal, movement, population, early, included, production, range 5 the, church, term, example, university, greek, german, like, english, specie, god, word, per, old, one 6 form, although, following, law, central, rule, culture, without, often, modern, territory, society, treaty, considered, christian 7 war, world, british, life, development, empire, first, region, community, year, france, though, time, set, began 8 well, three, include, place, power, party, league, may, needed, right, one, political, club, a, event 9 became, first, time, john, film, president, number, year, french, one, day, land, america, process, le 10 century, music, around, house, home, period, age, record, late, established, several, standard, time, world, river Algorithm 4: Mixing Sparse Adapters with weight sign difference Input: k domain-specific adapters, the FSD matrix Sk×k, l number of mix adapters. Output: Average(sparse_candidates) 1: dense_candidates ←{} 2: Compute the average of the difference in weight 3: sign: averageS = mean(S, axis = 1) 4: Select the top l smallest adapters (Sl) to mix based 5: on the average weight sign difference 6: dense_candidates ←Sl 7: For each adapter, compute the average fraction of 8: weight sign different in each layer with corresponding layers from other adapters. 9: Get the top m layer with the highest fraction of 10: weight sign conflict to prune 11: sparse_candidates ←Prune(dense_candidates) 12: return average(sparse_candidates) A.8 Mixing Sparse Adapters with weight sign difference Alg. 4 shows details of mixing sparse adapter with sign conflict information. 
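A compact Python sketch of Alg. 4 follows; it assumes adapters are PyTorch state dicts, that at least two adapters are selected (l ≥ 2), and that layers are pruned with the same magnitude criterion as Alg. 2. Helper and variable names are ours.

import numpy as np
import torch

def mix_sparse_adapters(adapters, S, l, m, s=0.9):
    """Alg. 4 sketch: select the l adapters with the smallest average FSD,
    prune the m layers with the highest sign-conflict fraction in each of
    them to sparsity s, then average the resulting sparse adapters."""
    top_l = np.argsort(np.asarray(S).mean(axis=1))[:l]
    sparse_candidates = []
    for i in top_l:
        adapter = adapters[i]
        # average per-layer sign-conflict fraction against the other selected adapters
        conflict = {
            name: np.mean([
                (adapter[name] * adapters[j][name] < 0).float().mean().item()
                for j in top_l if j != i
            ])
            for name in adapter
        }
        to_prune = sorted(conflict, key=conflict.get, reverse=True)[:m]
        sparse = dict(adapter)
        for name in to_prune:
            w = adapter[name]
            z_s = torch.quantile(w.abs().flatten(), s)
            sparse[name] = (w.abs() >= z_s).to(w.dtype) * w
        sparse_candidates.append(sparse)
    # uniform weight averaging of the sparse candidates
    return {
        name: torch.stack([a[name] for a in sparse_candidates]).mean(dim=0)
        for name in sparse_candidates[0]
    }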
12968 Table 7: Topic distribution on QQP dataset #Topic QQP 1 like, become, feel, get, job, movie, good, student, want, engineering, girl, website, sex, study, go 2 best, way, difference, learn, whats, money, make, online, book, india, buy, start, good, language, programming 3 much, best, time, weight, year, lose, old, place, month, day, iphone, read, possible, class 4 thing, day, business, get, first, going, example, one, prepare, video, woman, word, men 5 work, note, india, indian, ever, computer, black, r, science, you, help, rupee, different 6 would, life, trump, world, country, new, donald, war, india, win, happen, president, clinton, hillary 7 get, friend, used, long, why, bad, back, see, take, cant, good, facebook, system, relationship, person 8 someone, love, english, one, know, improve, account, people, get, instagram, tell, average, hair, password 9 mean, app, song, name, android, give, bank, right, what, company, india, working, get, now, create 10 people, quora, question, think, do, me, answer, google, stop, use, state, get, many, live Table 8: Topic distribution on RTE dataset #Topic RTE 1 year, bank, world, ago, police, place, human, people, said, man, problem, game, many, took, explosion 2 people, attack, california, killed, life, united, day, lost, air, one, space, injured, national, capital, said 3 oil, said, nuclear, company, new, president, iran, million, military, john, un, country, bush, price 4 said, world, state, united, minister, country, million, people, nobel, south, peace, war, trade, prize, mexico 5 woman, corp, parliament, case, confirmed, said, rabies, represented, cause, poorly, fire, president, police, loss 6 year, new, said, one, would, died, university, show, company, family, first, service, since, country, home 7 state, iraq, said, bush, bomb, found, used, water, home, killed, caused, damage, one, police 8 party, police, president, new, two, officer, name, drug, state, prime, people, minister, last, year, democratic 9 new, said, government, year, iraq, would, york, official, today, baghdad, also, euro, announced, percent, minister 10 said, year, leader, new, sanfrancisco, work, justice, two, president, government, end, free, guerrilla Table 9: Topic distribution on SST2 dataset #Topic SST2 1 film, really, enough, movie, something, make, interesting, many, like, subject, intelligent, laugh, short 2 movie, bad, film, better, great, fun, one, look, director, story, ultimately, smart, cinema, put 3 performance, funny, way, moment, film, cast, another, screen, yet, big, work, perfect, made 4 new, material, ve, movie, rather, film, special, seen, minute, enjoyable, might, offer, story, effect 5 comedy, drama, thriller, romantic, documentary, actor, moving, clever, funny, sometimes, pleasure, often, movie, film 6 work, film, movie, hard, well, keep, filmmaker, ever, life, original, sense, dull, quite, could 7 like, feel, movie, much, people, film, make, see, get, character, one, thing 8 good, real, film, worth, fascinating, make, time, lack, bit, amusing, humor, tale, pretty, run 9 character, one, best, film, movie, story, far, compelling, two, every, year, picture, little 10 love, audience, film, story, character, seems, entertainment, way, powerful, care, take, one, movie, spirit Table 10: Topic distribution on RCT dataset #Topic RCT 1 group, patient, week, randomized, study, received, control, year, mg, randomly, placebo, day 2 patient, session, visit, cohort, failure, lesion, myocardial, hospital, twice,death, heart, infarction 3 analysis, using, data, 
model, used, test, sample, analyzed, regression, characteristic, time, collected, cell, method, performed 4 outcome, primary, month, patient, baseline, measure, score, secondary, treatment, scale, assessed, symptom, week, followup 5 risk, associated, level, weight, factor, effect, disease, body, increased, diabetes, insulin, high, glucose, change, activity 6 study, patient, treatment, effect, therapy, efficacy, may, effective, result, evaluate, safety, weather, clinical, outcome, intervention 7 group, difference, significant, significantly, compared, control, treatment, score, higher, lower, time, observed, rate 8 trial, study, randomized, intervention, care, health, controlled, clinical, quality, life, conducted, prospective, effectiveness, child, number 9 patient, event, surgery, adverse, postoperative, complication, procedure, pain, undergoing, surgical, rate, incidence, common, infection, injection 10 mean, respectively, ratio, patient, group, median, versus, interval, year, day, month Table 11: Topic distribution on Tweets dataset #Topic Tweets 1 new, get, here, music, home, cool, playing, free, want, fun, season, shop, update, reason 2 day, one, night, time, good, week, last, never, first, get, year, got, lot, today 3 day, father, love, happy, time, weekend, take, friday, dad, fathersday, model 4 want, bull, up, do, help, trump, whatever, direct, dominate, waiting, libtard, yet, sleep, post 5 thankful, need, good, positive, orlando, morning, city, tear, news, blessed, friend, dream, bing, yeah, bong 6 user, amp, day, see, cant, go, like, new, today, one, people, get, wait, make 7 birthday, like, positive, affirmation, happy, baby, amp, god, girl, woman, feel, hate, hot, you 8 love, work, life, happy, happiness, make, always, food, quote, smile, wedding, moment, right, feeling, music 9 healthy, blog, gold, silver, altwaystoheal, forex, healing, grateful, dog, buffalo, peace, really, story 10 love, me, smile, summer, beautiful, fun, cute, girl, selfie, friend, sun, instagood, beach, photo 12969 Table 12: Topic distribution on IMDB dataset #Topic IMDB 1 story, film, life, movie, character, one, love, time, people, see, way, family, would, well 2 movie, like, one, good, really, it, film, bad, see, even, time, would, make, get 3 get, one, man, the, go, woman, take, back, he, find, there, scene, two, girl 4 hamilton, gadget, arkin, scooby, talespin, stallion, smoothly, tenderness, shaggy, gil, inspector, keller, nevada, hopelessness 5 war, american, documentary, soldier, political, world, german, country, history, america, military, army, hitler 6 bollywood, indian, kapoor, khan, akshay, fi, amitabh, ramones, verhoeven, christina, sci, braveheart, kumar, chiller 7 film, one, the, scene, character, story, director, much, plot, well, even, work, time 8 film, role, performance, great, play, best, good, cast, one, actor, comedy, john 9 show, series, episode, year, tv, time, great, first, kid, dvd, one, funny, still, watch 10 match, matthau, luke, shakespeare, neil, bruce, scarface, boxing, hamlet, elvis, branagh, lucas, polanski Table 13: Topic distribution on Ag News dataset #Topic Ag News 1 palestinian, said, iraqi, killed, iraq, reuters, attack, baghdad, arafat, israeli, bomb, scored, force, city 2 win, world, first, point, coach, cup, lead, victory, team, second, no, champion, night, final 3 president, afp, said, minister, election, bush, leader, india, state, reuters, prime, united 4 reuters, oil, price, stock, new, search, dollar, google, market, york, rate, apple, share, record 5 court, 
drug, say, ap, could, may, new, year, eu, case, said, state, scientist, trial 6 space, nasa, canadian, dec, press, former, nba, williams, winter, houston, monday, arsenal, sunday 7 said, company, inc, million, deal, corp, billion, sale, year, percent, reuters, buy, business 8 microsoft, new, software, internet, service, system, computer, technology, phone, ibm, music, online, web, company 9 china, police, said, reuters, people, worker, british, government, official, party, japan, group, chinese 10 game, new, year, red, one, time, season, first, team, series, last, york Table 14: Topic distribution on Financial dataset #Topic Financial 1 company, finnish, new, plant, finland, construction, order, line, contract, service, unit, production, investment 2 company, share, bank, said, also, capital, start, issue, term, financial, price, business, executive, dividend 3 eur, profit, sale, net, operating, million, period, quarter, compared, loss, year 4 finnish, said, today, million, company, first, helsinki, year 5 company, mobile, said, phone, nokia, solution, business, pretax, finland, network, product, group, store, customer 6 market, board, option, company, share, stock, director, member, concerning, meeting, general, bank, flow, chairman 7 share, company, group, lower, helsinki, stock, president, capital, holding, new, right 8 service, finland, customer, corporation, company, electronics, solution, industry, business, helsinki, ltd, group 9 company, expected, sale, said, people, production, paper, year, finland, plant, cut, staff, expects 10 euro, service, company, item, nokia, excluding, technology, business, mobile, device, market, product Table 15: Topic distribution on Authorship dataset #Topic Authorship 1 one, would, may, people, year, even, could, time, last, minister, public, police, many, blair, say 2 one, would, war, farmer, even, new, blair, bush, could, need, time, iraq, much, week 3 labour, new, people, government, tax, year, time, even, public, brown, blair, party, money 4 would, one, government, new, world, year, labour, much, state, blair, last, british 5 new, public, government, labour, people, year, one, would, may, way, time, make, right, life, need 6 people, time, public, said, even, government, lord, like, party, make, day 7 one, bush, american, world, year, right, war, child, people, british, state, new 8 people, one, child, like, time, family, get, year, burrell, may, still, even, much 9 would, one, blair, bush, war, nuclear, even, it, new, make, could, weapon, people, party 10 would, one, year, people, could, even, royal, like, woman, time, war, right, iraq 12970 Table 16: Topic distribution on Wiki Toxic dataset #Topic Wiki Toxic 1 page, talk, edit, please, user, edits, wikipedia, editor, comment, block, blocked, editing, discussion, thanks, stop 2 image, use, you, copyright, page, fair, picture, please, medium, wikipedia, see, template, deleted, file, photo 3 article, deletion, deleted, page, please, tag, may, speedy, notable, talk, guideline, subject, wikipedia, criterion, add 4 nigger, hate, bitchfuck, faggot, lol, class, rape, fat, asshole, mama, fucker, hairy, ha, boymamas 5 like, know, get, people, it, think, you, want, one, time, go, thing, me, really 6 state, english, country, american, language, people, name, war, city, world, government, history, british, jew, group 7 fuck, ass, suck, fucking, shit, u, hi, cunt, school, moron, go, bitch, shut, cock, dick 8 utc, year, new, game, redirect, song, old 9 page, wikipedia, talk, help, please, link, welcome, 
question, article, thank, thanks, like, name, best 10 article, one, would, source, also, think, section, fact, see, it, like, point, say, time, reference Task Learning rate epoch batch size warmup weight decay adapter size BERTBASE MNLI 4e-4 20 32 0.06 0.1 256 MRPC 4e-4 5 32 0.06 0.1 256 QNLI 4e-4 20 32 0.06 0.1 256 QQP 4e-4 20 32 0.06 0.1 256 RCT 4e-4 20 32 0.06 0.1 256 RTE 4e-4 5 32 0.06 0.1 256 SST2 4e-4 10 32 0.06 0.1 256 Tweets 4e-4 5 32 0.06 0.1 256 IMDB 4e-4 5 32 0.06 0.1 256 Ag News 4e-4 20 32 0.06 0.1 256 Financial 4e-4 5 32 0.06 0.1 256 Authorship 4e-4 5 32 0.06 0.1 256 RoBERTaLARGE MNLI 3e-4 20 64 0.6 0.1 64 MRPC 3e-4 5 64 0.6 0.1 64 QNLI 3e-4 20 64 0.6 0.1 64 QQP 3e-4 20 64 0.6 0.1 64 RCT 3e-4 20 64 0.6 0.1 64 RTE 3e-4 5 64 0.6 0.1 64 SST2 3e-4 10 64 0.6 0.1 64 Tweets 3e-4 5 64 0.6 0.1 64 IMDB 3e-4 5 64 0.6 0.1 64 Ag News 3e-4 20 64 0.6 0.1 64 Financial 3e-4 5 64 0.6 0.1 64 Authorship 3e-4 5 64 0.6 0.1 64 Table 17: Hyperparameter configurations for various tasks. 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Accuracy MNLI 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 MRPC 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 QNLI 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 QQP 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 RCT 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 RTE 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Accuracy SST2 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Tweets 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 IMDB 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Ag News 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Financial 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Authorship Clean Black-box attack (TextFooler) White-box attack (FGDS) Figure 10: Accuracy of RoBERTa with Houlsby (Houlsby et al., 2019) across various distribution datasets. The x-axis denotes the number of domain adapters to be mixed, ranging from 1 to 13. 12971 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Accuracy MNLI 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 MRPC 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 QNLI 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 QQP 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 RCT 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 RTE 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Accuracy SST2 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Tweets 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 IMDB 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Ag News 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Financial 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Authorship Clean Black-box attack (TextFooler) White-box attack (FGDS) Figure 11: Performance Evaluation of RoBERTa Using the LoRA (Hu et al., 2022) across Varied Domain Datasets. 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 Accuracy MNLI 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 MRPC 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 QNLI 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 QQP 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 RCT 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 RTE 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 Accuracy SST2 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 Tweets 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 IMDB 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 Ag News 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 Financial 1 3 5 7 9 11 13 0.3 0.4 0.5 0.6 0.7 0.8 0.9 Authorship Clean Black-box attack (TextFooler) White-box attack (FGDS) Figure 12: Performance Evaluation of BERT Using the Houlsby (Houlsby et al., 2019) across Varied Domain Datasets. 
Figure 13: Performance evaluation of BERT using the Pfeiffer adapter (Pfeiffer et al., 2021) across varied domain datasets (panels: one per domain; x-axis: number of domain adapters mixed, 1 to 13; legend: Clean, Black-box attack (TextFooler), White-box attack (FGSM)).

Figure 14: Fraction of weights altering direction during the consolidation of two adapters (panels: one per domain). The sky-blue bar represents the fraction of weight sign conflicts between two (k=2) adapters (left y-axis). The dashed blue line denotes the accuracy achieved by a standalone adapter trained on a specific task, while the solid red line illustrates the variations in accuracy when merging the adapter with another task's adapter.
Figure 15: Fraction of weights changing direction during the mixing of multiple adapters, ranging from 2 to 13 (panels: one per domain; x-axis: number of domain adapters). The sky-blue bar represents the fraction of weight sign conflicts between k (from 1 to 13) adapters (left y-axis). The dashed blue line corresponds to the accuracy of a single adapter trained on a specific task, while the solid red line depicts the fluctuation in task accuracy resulting from merging the adapter with another task's adapter.
2024
700
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12974–12990 August 11-16, 2024 ©2024 Association for Computational Linguistics Learning to Decode Collaboratively with Multiple Language Models Shannon Zejiang Shen Hunter Lang Bailin Wang Yoon Kim David Sontag Massachusetts Institute of Technology {zjshen, hjl, bailinw, yoonkim, dsontag}@mit.edu Abstract We propose a method to teach multiple large language models (LLM) to collaborate by interleaving their generations at the token level. We model the decision of which LLM generates the next token as a latent variable. By optimizing the marginal likelihood of a training set under our latent variable model, the base LLM automatically learns when to generate itself and when to call on one of the “assistant” language models to generate, all without direct supervision. Token-level collaboration during decoding allows for a fusion of each model’s expertise in a manner tailored to the specific task at hand. Our collaborative decoding is especially useful in cross-domain settings where a generalist base LLM learns to invoke domain expert models. On instruction-following, domainspecific QA, and reasoning tasks, we show that the performance of the joint system exceeds that of the individual models. Through qualitative analysis of the learned latent decisions, we show models trained with our method exhibit several interesting collaboration patterns, e.g., template-filling.1 1 Introduction Techniques that combine the generations of multiple large language models (LLMs) at decoding time have benefits ranging from faster decoding speed (Leviathan et al., 2023), to more controllable generations (Liu et al., 2021; Yang and Klein, 2021), to more coherent, less repetitive text (Li et al., 2023a), and even enabling a large model to be “tuned” by combining its generations with those of a smaller model from the same family (Liu et al., 2024). A parallel thread of work has aimed to equip language models with the ability to infuse external tools into their generations, with the goal of incorporating outside knowledge and capabilities (Mialon et al., 2023). Language models are able to produce more 1Code: https://github.com/clinicalml/co-llm Figure 1: Example generations of our method, Co-LLM. Top: the base model generates the answer template and uses a larger LLAMA model to fill in factual knowledge; Bottom: the base model uses a math-specialized model as an “API” for computation. The assistant model generated the highlighted tokens because the base model learned to defer generation at those locations. faithful and accurate generations when equipped with external APIs (Schick et al., 2023; Qin et al., 2023; i.a.), search engines or retrievers (Izacard et al., 2022; Asai et al., 2023; Nakano et al., 2021; i.a.), or code executors (Gao et al., 2023; i.a.). While powerful, these methods all require a prescription on how to combine the models and when to use the tools, either via specific formulas for combining the logits of multiple models, or through (weak) supervision on where to insert tool/API calls in the training data. In this work, we explore a different type of model combination 12974 Figure 2: Illustration of the decoding procedure in Co-LLM, where a base (LLAMA-7B) and assistant model (MEDITRON-70B) collaborate to generate a correct response for a medical question. 
For each token, the deferral control predicts the probability of switching to the assistant model to decode the next token given the context: it defers when the probability is above some threshold η (indicated by A), and uses the decoded token as the context (highlighted with orange border). When using the base model alone, it may make factual mistakes (indicated by B); Co-LLM learns to use the assistant model at these positions to produce correct generations. where the models learn to interleave their generations token-by-token. Each token is generated by one model, so the models collaborate to generate a token sequence together. We represent the decision of which LLM generates the next token as a latent variable, assuming no direct supervision on the decision of which model to use at each decoding step. This enables an effective collaboration pattern for a given task to be learned organically from data. Figure 1 shows example generations from our method, Co-LLM. In the top example, LLAMA7B collaborates with LLAMA-70B on instructionfollowing by generating a list template for the answer and then calling on LLAMA-70B to fill in each element of the list. Using the larger model as an assistant allows the smaller model to make effective use of a larger knowledge base and focus its efforts on learning the correct “scaffolding” for instruction responses. In the bottom example, LLAMA-7B collaborates with LLEMMA-34B (a domain-specific math model, Azerbayev et al., 2023) by treating the latter as an API call to fill in parts of a LaTeX formula. In both cases, the model predicts when to call the assistant by itself, behavior it learns from training without direct supervision on which contexts suit the assistant model well. This enables the emergence of qualitatively different collaboration methods (e.g., learning to scaffold, calling the large model as an API) based on what the task demands. Section 2 describes our latent-variable model for collaboration during decoding, and Section 3 describes the training and decoding procedures for Co-LLM in this model. In, Section 4, we evaluate Co-LLM on instruction-following, mathematical reasoning, and domain-specific question-answering tasks. Our results indicate that teaching models to collaborate improves performance across all of these tasks compared to using the individual models alone, and can sometimes match or exceed the performance of fine-tuning the large model. By using chain-of-thought reasoning (Wei et al., 2022), Co-LLM can also be applied to classification tasks, where our experiments show that it boosts performance by enabling improved reasoning capability. Our results show that Co-LLM is especially useful in cross-domain settings where a generalist base LLM learns to invoke domain expert models and that Co-LLM can be effectively combined with other ensemble models, such as Mixture of Experts models (Shazeer et al., 2017). 2 Latent-Variable Framework for Collaborative Generation Given a set of LMs with different expertise or sizes, we propose a latent-variable framework that enables their collaboration in a cost-efficient way. The framework centers around a finetunable base model, which itself is a relatively small LM. It decides which other assistant models (which are typically larger and/or more specialized models) to use per token. When the base model calls on an assistant to generate the next token, we say it defers generation for that token. To generate a sequence of tokens (X1, . . . 
, XT ), we represent the choice of which model generates token Xt as a discrete latent variable Zt ∈ {0, 1, . . . , M}, where Zt = 0 denotes the base model, and Zt = i for i ∈ {1, . . . , M} refers to the i-th of M assistant models. We assume access to the conditional distributions Pi(Xt|X<t), i ∈ {1, . . . , M}, of the assistant models,2 and full access to the base model. Using these distributions, we represent the joint sequence-level likelihood as:

P(X, Z) = \prod_{t=1}^{T} P_\theta(Z_t | X_{<t}) \, P_{Z_t}(X_t | X_{<t})    (1)

The (learned) categorical distribution Pθ models the token-level discrete decision Zt. For efficiency, each Zt is conditionally independent of Z<t given X<t in our design. The latent variable Zt is designed in the same spirit as the defer variable in Mozannar and Sontag (2020) and classical Mixture-of-Experts models (Jordan and Jacobs, 1994) or ensemble models (Saunders et al., 2019).

Unsupervised Learning. In practice, the token-level decisions Zt are unknown, and collecting human annotation is not scalable. Our latent-variable framework offers a natural way of handling this issue with unsupervised learning. In particular, we aim to optimize the following marginal likelihood,

P(X) = \prod_{t=1}^{T} \Big( \sum_{Z_t=0}^{M} P_\theta(Z_t | X_{<t}) \, P_{Z_t}(X_t | X_{<t}) \Big),    (2)

which can be computed efficiently during training due to the conditional independence structure.

Collaborative Decoding. During inference, our goal is to find the best sequence X along with the best decision Z on which assistant LM to use:

\hat{X}, \hat{Z} = \arg\max_{X, Z} P(X, Z)    (3)

The exact arg max in Eq. (3) is intractable, so we follow the common practice of using a greedy strategy for decoding both Zt and Xt in a token-by-token, autoregressive manner (see Fig. 2 for an example). In greedy decoding, for each token position, we first choose \hat{Z}_t = \arg\max_{Z_t} P_\theta(Z_t | X_{<t}) to determine which model to decode from, then decode greedily from that model: \hat{X}_t = \arg\max_{X_t} P_{\hat{Z}_t}(X_t | X_{<t}). Compared with standard greedy decoding for a single LM, decoding in this model is performed in collaboration, coordinated by Pθ. An alternative strategy to arg max decoding is to marginalize out Zt:

\hat{X}_t = \arg\max_{X_t} \sum_{Z_t} P_{Z_t}(X_t | X_{<t}) \, P_\theta(Z_t | X_{<t}),

which closely aligns with the marginal likelihood training objective. However, this requires calling all M + 1 models for every token, slowing down decoding. Our empirical results show that greedily choosing Zt based on Pθ(Zt|X<t) performs well and enables interpretable collaboration, since each token is generated by a single model.

Remark. The design of the collaborative decoding is natural from the probabilistic modeling perspective, and in this work we mainly focus on important empirical questions: how should we parameterize Pθ so that it can be learned in a data-efficient way; can the base model learn how to cooperate with (larger or domain-specific) assistant models with only access to conditional probabilities (no internal hidden states or weights); and what kind of collaboration pattern can be induced if the collaboration is learned without supervision? The latent-variable framework allows us to answer these questions via the exposed interpretable variable Zt.

2 Conditionals can be obtained via either locally deployed models or remote API calls. At training time, we only require access to the assistant model's probability for the ground-truth tokens, not the full conditional distribution.
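As a concrete illustration of Eq. (2), the sketch below computes the log-marginal likelihood of a teacher-forced sequence for the single-assistant case (M = 1) studied in the next section; the tensor layout and names are our own assumptions, not the released code.

import torch

def marginal_log_likelihood(p_defer, p_base, p_asst):
    """log P(X) under Eq. (2) with one assistant model.

    p_defer: (T,) tensor, Pθ(Z_t = 1 | X_<t), the probability of deferring
    p_base:  (T,) tensor, base-model probability of the observed token X_t
    p_asst:  (T,) tensor, assistant-model probability of the observed token X_t
    """
    per_token = (1 - p_defer) * p_base + p_defer * p_asst   # marginalize out Z_t per token
    return torch.log(per_token).sum()                       # sum of per-token log-likelihoods

At training time the negative of this quantity would be minimized with respect to θ and the base model's parameters; a log-sum-exp formulation may be preferred for numerical stability.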
3 Co-LLM: Learning to Decode Collaboratively with LMs

In this work, we focus on the basic case where we only have one base model and one assistant model. In this setting, the base model needs to make a binary decision of whether to generate from itself or defer to the assistant model, i.e., Zt ∈ {0, 1}. For clarity, we use Pbase and Passt to denote the base and assistant model, respectively. In the rest of this section, we explain our design for the parameterization of the model selector, and the training and inference procedure of the joint model.

3.1 Modeling Pθ(Zt|X<t)

Since we have full access to the base model, we build Pθ on top of the base model for parameter efficiency. Specifically, we represent θ as a linear binary classification head in the last layer. Formally, if h_t(X_{<t}) ∈ R^d is the base model's last hidden state at time step t for inputs X<t, and θ ∈ R^d is the weight vector, we set:

P_\theta(Z_t = 1 | X_{<t}) = \sigma(\langle \theta, h_t(X_{<t}) \rangle),

where σ(·) is the sigmoid function. This introduces only d new parameters to the base model, where d is the base model's hidden dimension size.

3.2 Training

Our training objective minimizes the negative log-marginal likelihood −\sum_{t=1}^{T} \log P(X_t | X_{<t}), where

P(X_t | X_{<t}) = P_{base}(X_t | X_{<t}) P_\theta(Z_t = 0 | X_{<t}) + P_{asst}(X_t | X_{<t}) P_\theta(Z_t = 1 | X_{<t})    (4)

is the likelihood of the next token after marginalizing out the latent variable Zt. The training procedure updates both θ and the base model parameters; it requires the forward pass of both the base and assistant model to obtain next-token probabilities, and the backward pass of the base model for gradient computation. This implies that any commercial LLM or locally deployed LM exposing next-token probabilities can be used as the assistant LM. The marginal likelihood aligns well with the typical pretraining objective of maximizing next-token probability. Moreover, offloading "difficult tokens" can potentially alleviate hallucination issues of the base model, thus leading to better generalization even without much help from the assistant model, as we will show in the experiments.

Initialization of θ. In our experiments, we found that appropriately initializing θ helps encourage the base LM to quickly switch from generating by itself to generating collaboratively. Instead of collecting direct supervision for Zt values to initialize θ, we use weak supervision to collect initial Zt values, use these pseudolabels to initialize the parameters θ, then allow the Zt's to change during the training procedure. Intuitively, the prime location for Zt = 1 is when the assistant model correctly predicts the target token Xt but the base model does not. To that end, we set the pseudolabels for Zt as:

\hat{Z}_t := 1[ X_t = \arg\max_{v \in V} P_{asst}(v | X_{<t}) \wedge X_t \neq \arg\max_{v \in V} P_{base}(v | X_{<t}) ],    (5)

and initialize the d parameters θ by maximizing log Pθ(\hat{Z}_t | X_{<t}) while holding the rest of the base model fixed. In our experiments, we compare to a baseline that combines this initialization with the usual language-model fine-tuning loss; our results show that the marginal likelihood objective leads to better performance by enabling the base model to learn better Zt values from data (see ablation results in Section 5.2).

3.3 Decoding

In initial experiments, we found that the performance of the joint model is sensitive to the choice of Zt; due to the exposure of Zt in our latent-variable framework, we can impose extra priors over Zt for better performance. Specifically, we follow the general greedy decoding strategy and set a threshold η for decoding Zt:

\hat{Z}_t = 1[ P_\theta(Z_t = 1 | X_{<t}) > \eta ],    (6)

which means that when Pθ(Zt = 1) > η, we execute the assistant model to predict the next token.
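To make Sections 3.1-3.3 concrete, the sketch below shows one possible implementation of the deferral head and of threshold-based collaborative greedy decoding (Eq. 6), assuming Hugging Face-style causal LMs that share a tokenizer and batch size 1; it is a simplified sketch rather than the released Co-LLM implementation (see footnote 1 for the official code).

import torch
import torch.nn as nn

class DeferralHead(nn.Module):
    """Linear head on the base model's last hidden state predicting
    Pθ(Z_t = 1 | X_<t), i.e., the probability of deferring to the assistant."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.theta = nn.Linear(hidden_size, 1, bias=False)   # only d new parameters

    def forward(self, hidden_states):                          # (B, T, d)
        return torch.sigmoid(self.theta(hidden_states)).squeeze(-1)   # (B, T)

@torch.no_grad()
def collaborative_greedy_decode(base, assistant, head, input_ids, max_new_tokens, eta):
    """Greedy decoding with a deferral threshold eta (Eq. 6): generate the next
    token from the assistant whenever Pθ(Z_t = 1 | X_<t) > eta, else from the base.
    Assumes batch size 1 and a shared vocabulary between the two models."""
    for _ in range(max_new_tokens):
        out = base(input_ids, output_hidden_states=True)
        h_last = out.hidden_states[-1][:, -1:]                 # last-layer state, last position
        defer = head(h_last)[:, -1] > eta                      # \hat{Z}_t
        if defer.item():
            next_logits = assistant(input_ids).logits[:, -1]   # call the assistant only when deferring
        else:
            next_logits = out.logits[:, -1]
        next_token = next_logits.argmax(dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_token], dim=-1)
    return input_ids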
The hyperparameter η is picked via grid search on a small validation set per dataset. The choice of threshold for the decoding probability also allows for inference-time control over the amount of collaboration, in contrast with other approaches such as DExperts (Liu et al., 2021), and our performance degrades gracefully as the threshold increases. 4 Experimental Setup In our experiments, we fine-tune models for specific tasks and we test the models in-domain, comparing the end-task performance between CoLLM and multiple single- or multi-model baselines. We test on 4 datasets ranging from instructionfollowing to solving expert problems, trying to understand when and how model collaboration can be beneficial. We investigate the collaboration between different models (e.g., between Llama models of multiple scales, and between models finetuned on different domains). Overall, we find that Co-LLM can learn a successful collaboration between different base and reference models, leading to better results than tuning base models alone. Models Used. Our primary experiments are concerned with whether smaller models can collaborate with expert models that have been specialized to different domains. In Section 5.1, we experiment with collaboration between the finetuned LLAMA-7B and the LLEMMA family (Azerbayev et al., 2023, finetuned for math and reasoning), as well as the MEDITRON family (Chen et al., 2023, finetuned for biomedicine). It is also possible to use differently-sized models from the same family with Co-LLM: in Section 5.2 we experiment with the 7B and the 70B model from the same LLAMA2 family (Touvron et al., 2023) as the base and assistant model, respectively. Datasets. We train on the full Tülu v2 mix data (Wang et al., 2023) for instruction following, GSM8k (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021), each with 7.5k samples, for reasoning and math problem solving, and the BioASQ (Tsatsaronis et al., 2015) (4.7k samples) for medical question answering. We train and test the model on the corresponding data and evaluation suites separately. Evaluation. We only compare greedy decoding results for all models, as they are commonly used in real-world applications. We evaluate the instruction following model using the AlpacaEval (Li 12977 et al., 2023b) benchmark; we use the GPT-4 annotator and compute the win rate of the testing model as judged by GPT-4-0613 when compared to the outputs produced by Davinci-003. For GSM8k, following Wang et al. (2023),we extract the last numerical answer from the model’s output, and calculate the exact match of the model prediction on a 200-example subset of the test set. The MATH dataset provides 5 levels of math problems from 7 categories: similar to Wu et al. (2023), we skip the geometry category since it largely relies on Asymptote code, and sample 25 questions per level per category for evaluation, resulting in a 750-example subset. We adopt the prompting and evaluation code from Azerbayev et al. (2023) and Lewkowycz et al. (2022), extracting the last generated math after “The answer is”. 3 The BioASQ comes with 330 test examples of 4 categories: factoid, list, summary, and yes/no questions, evaluated using strict accuracy (SAcc.), F1, accuracy (Acc.), and Rouge2 (R2). We test on 310 examples (saving 20 for validation) and reimplement the evaluation code.4 4.1 Baselines Single models. The performance of the base and assistant models can inform whether the learned collaboration is beneficial. 
We report 0-shot performance of the original untuned models and their finetuned counterparts. The same data and hyperparameters are used for model finetuning; for 70B models, we fine-tune using QLoRA (Dettmers et al., 2023) with the hyperparameters in Appendix A. Other collaborative models. We use two collaborative strategies from the literature. Contrastive Decoding (Li et al., 2022b; O’Brien and Lewis, 2023, CD) combines the output of the untuned “expert” (e.g., a 70B model) and “amateur” (e.g., a 7B model) models by subtracting their logits and sampling from the resulting distribution. We follow the setup in O’Brien and Lewis (2023), setting the same α = 0.1 and β = 0.5, and use unmodified LLAMA-70B as the expert model and LLAMA-7B as the amateur model.5 Proxy Tuning (Liu et al., 2024, PT) proposes to approximate finetuning a (large) base model M by composing the outputs 3We use greedy decoding for GSM8k and MATH; some literature refers to this as “maj@1”. 4See details in Appendix A. 5In their paper, O’Brien and Lewis (2023) use a 1.5B parameter amateur model; as this model is not released, we use the 7B model instead. Different from their paper, we use 0- or 1-shot prompting in congruence with our other results. of smaller expert M+ and anti-expert models M− with M. We include CD and PT results as ways to enhance untuned models and to simulate finetuning 70B models, respectively. Both CD/PT require calling the smaller and larger models at each time step. In contrast, Co-LLM may generate multiple tokens from the base model before calling the large model, which can be seen as a form of speculative decoding (Leviathan et al., 2023). For example, for a sequence of length L, Proxy Tuning makes L calls6 to the large model and 2L calls to the small model, whereas Co-LLM makes fL calls to the large (assistant) model and L calls to the small (base) model, where 0 ≤f ≤1 is the empirical frequency of deferral, the percent of Zt = 1 tokens. Ablated Co-LLM. We consider different variants of Co-LLM to verify the necessity of a learned model selector Pθ. First, we consider two simple heuristics as model selectors: Co-LLM-Random randomly chooses the base or the assistant model to produce a token with probability p = 0.5; Co-LLMGreedy runs both models in parallel for each token and selects the token with the higher probability from either model. This is a strong baseline since it requires observing next-token probabilities from both models at every decoding step. Similar to our default setting, the base model is finetuned on target datasets while the assistant model is frozen. Weakly-supervised Co-LLM. Finally, we consider a different weakly supervised training procedure for Co-LLM. This baseline is inspired by the process used to derive tool-use labels in Toolformer (Schick et al., 2023): the weak supervision for when to call the tool is chosen and fixed before updating the language model parameters. Specifically, the training procedure is two-stage: first, we collect pseudo-labels ˆZt using Eq. (5). Second, we jointly train Pθ(Zt|X<t) and the base model by optimizing the weighted sum of log P( ˆZt|X<t) and the usual language modeling loss. This trains the base model to defer to the assistant when ˆZt = 1 while also fine-tuning it on the full training set. In contrast, our marginal likelihood training Section 3.2 only uses the ˆZt values to initialize θ and allows the Zt values to evolve during training. 
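To make the weak-supervision rule of Eq. (5) and the Co-LLM-Greedy heuristic concrete, the sketch below shows plausible implementations, assuming the two models share a vocabulary and that their next-token scores at each position are available as tensors; function names are illustrative and not taken from the released code.

import torch

def pseudo_deferral_labels(base_logits: torch.Tensor,
                           asst_logits: torch.Tensor,
                           targets: torch.Tensor) -> torch.Tensor:
    """Eq. (5): hat{Z}_t = 1 iff the assistant's argmax matches the gold token and the base's does not.

    base_logits / asst_logits: (seq_len, vocab_size) next-token scores at each position.
    targets: (seq_len,) gold next-token ids. Returns 0/1 labels, used only to initialize theta.
    """
    base_correct = base_logits.argmax(dim=-1).eq(targets)
    asst_correct = asst_logits.argmax(dim=-1).eq(targets)
    return (asst_correct & ~base_correct).float()

def greedy_select_token(p_base_next: torch.Tensor, p_asst_next: torch.Tensor) -> int:
    """Co-LLM-Greedy baseline: run both models at every step and keep whichever model's
    top token has the higher probability (no learned selector involved)."""
    if p_asst_next.max() > p_base_next.max():
        return int(p_asst_next.argmax())
    return int(p_base_next.argmax())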
6Here each “call” corresponds to a token-level decoding step, which is a sensible unit with which to measure inference latency as the context encoding portion can be easily parallelized and thus grows slowly with respect to context length in small batch regimes. 12978 Math and reasoning tasks GSM MATH LLEMMA-7B 4.0 2.0 LLEMMA-34B 14.5 6.3 Finetuned LLAMA-7B 34.5 7.6 Finetuned LLAMA-70B (QLoRA) 52.5 11.7 PT (LLEMMA-34B + LLAMA-7B) 30.0 20.9 PT (LLEMMA-34B + LLEMMA-7B) 58.5 23.7 Co-LLM-7B + LLEMMA-7B 40.0 17.2 Co-LLM-7B + LLEMMA-34B 43.5 24.5 BioASQ tasks Factoid List Yes/No Summ. Avg. MEDITRON-7B 0.00 2.7 70.4 18.6 22.9 MEDITRON-70B 17.2 16.1 80.2 21.1 33.7 Finetuned LLAMA-7B 23.7 13.8 76.5 18.1 33.0 Finetuned LLAMA-70B (QLoRA) 24.7 20.7 75.3 21.1 35.5 PT (MEDITRON-70B + LLAMA-7B) 26.9 10.7 80.2 7.3 31.3 PT (MEDITRON-70B + MEDITRON-7B) 26.9 23.5 82.7 11.0 35.6 Co-LLM-7B + MEDITRON-7B 17.2 16.0 72.8 19.8 31.4 Co-LLM-7B + MEDITRON-70B 21.5 18.6 81.5 20.6 35.6 Table 1: Co-LLM enables collaboration between models trained on different domains: using the expert model trained for the domain (e.g., LLEMMA for math and reasoning, and MEDITRON for biomedical tasks) during decoding boosts performance compared to the fine-tuned base model, and sometimes performs even better than fine-tuned LLAMA-70B. Proxy Tuning (Liu et al., 2024, PT) only performs well when all three of its component models (M, M+, M−) are pretrained on the same domain mix. 5 Results 5.1 Collaboration across domains Table 1 shows that Co-LLM enables collaboration between LLAMA and domain-specific models and that this collaboration improves performance compared to the individual models themselves. For example, being able to utilize LLEMMA as an assistant leads to improved performances on math and reasoning tasks. On the MATH dataset, even invoking a small, 7B-scale LLEMMA assistant (17.2) outperforms fine-tuned LLAMA-7B (7.6), fine-tuned LLAMA-70B (11.7), and LLEMMA-34B (6.3). Similarly, cooperation with MEDITRON models leads to performance gains on some BioASQ subtasks (e.g., List, Summary), and outperforms fine-tuned LLAMA-7B, fine-tuned LLAMA-70B, and base MEDITRON-70B on average. In addition, Co-LLM with LLAMA-7B and LLEMMA-34B can achieve similar performance as fine-tuned LLEMMA-7B, which scores 43.5 on GSM8k and 23.5 on MATH. Co-LLM allows the base 7b model access to collaborate with a domain expert (LLEMMA-34B), which surprisingly leads to similar performance as performing a large amount of domain-specific fine-tuning plus further taskspecific fine-tuning on the base model (finetuned LLEMMA-7b). These results suggest that Co-LLM enables a modular approach to continued pretraining and task-specific finetuning: one can pretrain a large model on a domain-specific corpus, then fine-tune smaller models with Co-LLM to leverage the knowledge from the larger models and attain improved performance on the downstream tasks. Comparison against Proxy Tuning. While PT and our work are differently motivated and constructed, they both leverage multiple models during generation. Table 1 also provides an in-depth comparison between the two methods in the context of combining models from different domains. PT only performs well when all three models (M, M+, M−) are pretrained on the same domain mix (compare, e.g. “LLEMMA + LLAMA” to “LLEMMA + LLEMMA”). 
This is due to the implicit assumption that the difference between the base model M and a hypothetical, tuned version of the base model is the same as the difference between the smaller expert M+ and the anti-expert M−. Our results show that Co-LLM is more effective at enabling collaboration between models from different domains. PT also requires more calls to the larger model, thus resulting in slower inference. Co-LLM makes fewer calls to both large and small models. In the following section, we show that in addition to enabling collaboration across domains, Co-LLM also allows collaboration across model scales. 5.2 Collaboration across scales Table 2 shows that using Co-LLM leads to a successful collaboration between the base and assistant models of different sizes within the LLAMA family. In particular, compared to using the (finetuned) base model LLAMA-7B alone, Co-LLM7B + LLAMA-70B, which occasionally calls the unmodified assistant model during decoding, consistently achieves significantly better performance across all tasks and datasets (2.6, 10.5, 7.5, and 3.3 absolute improvements for the 4 datasets, respectively). Co-LLM is sometimes better than the QLoRA finetuned assistant model (on MATH and BioASQ), suggesting that our method can effectively combine the best of the models and achieve better performance than the “sum” of their parts. Training with Co-LLM does not hurt the perfor12979 AlpacaEval GSM MATH BioASQa (% Win) (Acc.) (EM) Factoid (SAcc.) List (F1) Yes/No (Acc.) Summ. (R2) Avg. Untuned LLAMA-7B 7.0 0.3 4.3 4.9 71.6 17.2 24.5 LLAMA-70B 11.6 13.5 2.1 11.8 14.9 77.8 18.6 30.8 LLAMA-70B+7B (CD) 11.5 1.3 11.8 9.0 71.6 17.5 27.5 Finetuned LLAMA-7B (Finetuned) 69.3 34.5 7.6 23.7 13.8 76.5 18.1 33.0 LLAMA-70B (QLoRA) 78.6b 52.5 11.7 24.7 20.7 75.3 21.1 35.5 LLAMA-70B+7B (PT) 72.3 52.5 17.3 29.0 16.8 85.2 21.3 38.1 Collaboration CO-Random 46.3 17.0 6.1 6.5 1.9 30.9 17.5 14.3 CO-Greedy 64.1 38.0 8.1 29.0 16.6 76.5 20.2 35.6 Weak Supervision 56.7 40.0 12.3 22.6 14.6 80.2 17.5 33.7 Co-LLM-7B (Base Only) 70.6 33.0 6.4 20.4 11.2 79.0 18.1 32.2 Co-LLM-7B + LLAMA-70B 71.9 45.0 15.1 24.7 18.0 82.7 20.4 36.5 a For BioASQ, we use 1-shot prompting for LLAMA-7B, -70B, and CD experiments to inform the model of the output format. b We report the results obtained by Ivison et al. (2023) in Table 5. Table 2: Results of using Co-LLM for LLAMA models of different sizes. Occasionally7 calling the Llama-70B model to generate a few tokens, Co-LLM-7B is able to significantly outperform the finetuned Llama-7B model in all tasks, and sometimes even performs better than the QLoRA-finetuned Llama-70B model. Math and reasoning tasks GSM MATH MISTRAL-7B 21.5 7.2 MIXTRAL-8×7B (MoE) 38.5 16.2 Finetuned MISTRAL-7B 51.0 13.9 Co-LLM MISTRAL-7B + MIXTRAL-8×7B 57.0 20.0 Table 3: Co-LLM can be applied among models of different architectures like a dense LLM (MISTRAL-7B) and a sparse Mixture of Experts (MoE) model (MIXTRAL8×7B). The learned collaboration leads to strong performance improvements on both GSM and MATH tasks. mance of the base model: when we prohibit using the assistant model during inference, performance is comparable to the base model finetuned with the usual language modeling objective (LLAMA-7B (Finetuned)): for example, getting 33.0 and 6.4 for GSM8k and MATH, respectively. Co-LLM thus degrades gracefully as the amount of deferral is changed, which we explore further in §5.4. 
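The relationship between the threshold η and the deferral frequency f can be made explicit with a small sketch. This mirrors the spirit of the per-task threshold search (Algorithm 1 in the appendix) but is not the authors' exact procedure: it assumes per-token deferral probabilities on a validation set have already been collected into an array, and simply converts a target deferral budget into a threshold via quantiles.

import numpy as np

def deferral_frequency(p_defer: np.ndarray, eta: float) -> float:
    """f: the fraction of decoding steps for which the assistant would be called."""
    return float((p_defer > eta).mean())

def eta_for_target_frequency(p_defer: np.ndarray, target_f: float) -> float:
    """Pick eta so that roughly a target_f fraction of validation tokens defer to the assistant."""
    # The (1 - target_f) quantile leaves about target_f of the tokens above it.
    return float(np.quantile(p_defer, 1.0 - target_f))

# Example: with deferral probabilities skewed toward 0, a 10% deferral budget maps to a high threshold.
rng = np.random.default_rng(0)
probs = rng.random(10_000) ** 3
eta = eta_for_target_frequency(probs, target_f=0.10)
assert abs(deferral_frequency(probs, eta) - 0.10) < 0.02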
Comparing with the model collaboration baselines, we show that interleaving two model generations is not a trivial task: randomly switching between the two models lead to worse than singlemodel performances in Co-LLM-Random. Even if running two models in parallel, Co-LLM-Greedy does not consistently yield better performance than using either model alone, and in some cases, they are worse (e.g., in the case of GSM8k). As discussed in Section 5.1, Proxy Tuning performs very well when used with models of different sizes in the same family, but Co-LLM also performs well despite using far fewer calls to the language models. Finally, the Toolformer-style baseline with fixed, weakly supervised values of Zt leads to overall worse performance compared to Co-LLM, indicating the benefits of our latent variable formulation and marginal likelihood loss, which allow the best deferral patterns to be learned from data. 5.3 Collaboration across architectures Since Co-LLM only assumes access to token logprobs, it can easily be used for collaboration between models of different architectures. For example, Table 3 shows that Co-LLM can be adopted to collaborate between a dense model (MISTRAL-7B) and a sparse MoE model (MIXTRAL-8×7B), and the joint model achieves strong accuracy gains compared to either the finetuned MISTRAL-7B model or the MIXTRAL-8×7B model. These results indicate that Co-LLM is still useful when used together with other model combination methods like MoE. 5.4 Qualitative Analysis The exposure of the interpretable variable Zt in our latent-variable framework makes it easy to visualize the model composition patterns. As shown in Figs. 1 and 2, our unsupervised training can induce an interesting template-filling collaboration strategy: the base model learns to generate a template with slots that requires deeper knowledge (in question answering) or reasoning (in math problems) for assistant models to fill in. Table 4 further illustrates how the collaboration between the assistant 7See Section 5.4 for a detailed analysis of the frequency of invoking the assistant model and the end performance. 12980 Deferral frequency f = 0 f = 0.3 f = 0.4 f = 0.9 f = 1.0 We have a3 = 53 = 125, and a2 = 52 = 25, so a3·a2 = 125 . The final answer is 125. We have a 3 = 53 = 125, and a2 = 52 = 25, so a 3·a2 = 3000 . The final answer is 3000. We have a 3 = 53 = 125, and a2 = 52 = 25, so a 3 ·a2 = 3125 . The final answer is 3125. We can use the power rule to simplify this expression. We have that a3 · a2 = a3+2 = a5. Now we can substitute a = 5 to get 55 = 3125. Therefore, the final answer is 3125 . The final answer is 3125. ### 1 Given a mathematics problem, determine the answer. Simplify your answer as much as possible. You can use latex to format your answer and you should state your final answer as "The final answer is $(final - answer)$." Problem: Table 4: Model answers with different rates of deferral to the question “Evaluate the expression a3 · a2 if a = 5” (answer: 3125). In this example, we use the finetuned LLAMA-7B as the base model and LLEMMA-34B model as the assistant. We show the 0-shot model answers at different deferral frequencies f, with yellow background to indicate the token is produced by the assistant model. 0 20 40 60 80 100 f - D eferral Frequency 0 10 20 30 40 50 A ccuracy GSM8k (V alidation) A sst. L lam a 70B A sst. L lem m a 7B A sst. L lem m a 34B Figure 3: Performance of Co-LLM at different frequencies of deferral on GSM8k. 
There exists an optimal f that the joint model achieves better performance than using either of them alone. Similar trend is observed in MATH and BioASQ, shown in Fig. 4 in Appendix. (LLEMMA-34B) and base (fine-tuned LLAMA-7B) model leads to correct responses while either model alone fails to do so. When the base model is tasked to solve the math question alone (i.e, deferral frequency f = 0), it fails to produce a valid answer, generating 125 rather than 3125. As we increase the frequency of deferral by lowering the threshold η, we observe Co-LLM starts to invoke the assistant model to generate latex code and compute the results, and when f = 0.4, the joint model produces the correct answer. More deferral in this case does not yield better generations. When we fully rely on the assistant model (f = 1), since it is not tuned or aligned, it produces no helpful solutions. We also evaluate the joint model at different deferral frequencies on small validation sets for GSM8k, MATH, and BioASQ, and plot the results in Fig. 3 and Fig. 4 in the Appendix. Across different domains and scales, the models exhibit a similar concave performance curve: at some optimal deferral frequencies, the joint model can achieve better performance than using either of the models alone, with decreased performance at both extremes. The optima vary across different datasets, corresponding to the different patterns of invoking assistant models (e.g., API-call or “leading” style). In practice, one can pick the proper η to balance the accuracy and the efficiency/cost. 6 Related Work Learning to compose models. Composing expert models has been a recurring strategy to improve large models, and the way of composing expert models is largely related to the underlying learning settings. Mixture of Experts (Shazeer et al., 2017; Jiang et al., 2024; Dai et al., 2024; Xue et al., 2024, MoE) requires that all experts are trained simultaneously using the same data. In contrast, post-hoc methods assume the presence of pretrained language models but usually pose implicit constraints on the model families to compose. Proxy Tuning (Liu et al., 2024, 2021) works best when all models are all pretrained on the same data mixture; PHATGOOSE (Muqeeth et al., 2024) requires that all models are LoRA-finetuned (Hu et al., 2021) models; Contrastive Decoding (Li et al., 2022b, CD) requires an amateur model, which is not clear for tasks such as math reasoning. Co-LLM is more flexible, mainly because our base model learns to interact with assistant models instead of relying on prescribed strategies. Our experiments also indicate that Proxy Tuning (concurrent work) only performs well when all models are pretrained in the same domain, whereas Co-LLM can effectively combine general and domain-specific models. Compared to CD and Proxy Tuning, CoLLM also makes fewer calls to the language models at inference time, as described in Section 4.1. 12981 Perhaps most similar to our work are speculative decoding (Leviathan et al., 2023) and CombLM (Ormazabal et al., 2023). Like our approach, speculative decoding generates some tokens with one model and some tokens with another, but our method differs in that the goal is to improve generation quality rather than to sample more quickly from a large model by approximating its generations with a smaller one. CombLM also learns to combine the probability distributions of multiple models, but their combination is not time-varying, and they mainly demonstrate wins on perplexity. 
Our approach could be seen as a special case of Toolformer (Schick et al., 2023), where the assistant model is the tool/API called by the base model. However, our latent variable formulation allows fine-grained deferral decisions to evolve over the course of fine-tuning, whereas Toolformer derives a fixed dataset prescribing which tool gets called (and where) before training. Co-LLM enables varying the frequency of calling the assistant model, whereas Toolformer has no provision for flexibly adjusting the amount of tool use at inference time. Is this just Mixture of Experts? In Mixture of Experts (MoE) LLMs (Zhou et al., 2022; Li et al., 2022a; Jiang et al., 2024; Xue et al., 2024; Dai et al., 2024), the goal is to train a model that can be partially executed during inference time. A common choice in MoEs is to decompose FFN layers (feed forward network) in a transformer into modular “experts”. These experts are subnetworks—they cannot be used standalone, and the experts are typically expected to have the same size (parameter count). MoE also requires gradient access to all experts and assumes every expert has access to the same training data. In contrast, we aim to learn how to collaborate with existing off-the-shelf models. First, we do not assume gradient access to assistant models, and we only fine-tune the base model. This saves in training cost, but poses a more difficult learning problem, since we only assume access to logprobs from the assistant model. Co-LLM is able to learn effective collaboration patterns between models despite this limited access to the assistants. Second, because of our less restrictive assumptions on assistant model access and architecture, Co-LLM can be applied in more constrained scenarios. For example, Co-LLM can still be applied when the assistant is trained on proprietary data (e.g., due to copyright issues (Duetting et al., 2023)). Third, our framing allows flexible collaboration between models of different sizes or even architectures. Table 3 shows performance gains from combining a dense model with an MoE model using Co-LLM, indicating that the gains are orthogonal to MoE. Finally, in our approach, each “expert” is a full-fledged LLM rather than a sub-network. This allows for interpretable generation patterns (as illustrated in Figs. 1 and 2), as well as flexible inference-time tradeoffs between how much each model is used (e.g., in Table 4, we show Co-LLM can work well at different frequencies of deferral). Learning to defer. A large body of literature focuses on the related problems of prediction with rejection (Cortes et al., 2016), where the goal is to train a model that can predict on some inputs and decline to predict on others, and learning with deferral (Mozannar and Sontag, 2020; Mozannar et al., 2023), where the goal is to train a model to make predictions on some inputs and defer to a human expert on others. Mohri et al. (2023) combine prediction with example-level rejection and LLMs on the text decontextualization problem. We use a latent variable formulation inspired by Mozannar et al. (2023), initially developed for learning to defer to a human expert on classification problems. We replace the human expert with a fixed LLM assistant model and extend the loss to allow for token-level rather than sequence-level deferral. 7 Conclusion We propose Co-LLM, a latent variable approach that learns to collaboratively decode using multiple LMs. 
Co-LLM can interleave token generations from different LMs in patterns most suitable for the task at hand (e.g., scaffolding the generation, or using the other model like an API), and learns the collaboration pattern organically from data. We empirically show that Co-LLM can produce generations of better quality compared to using either of the models alone, in tasks ranging from math reasoning, medical question answering, and instruction following. The latent “defer” variable offers a flexible and interpretable way for adjusting the frequency for invoking other LMs at inference time without re-training. In the future, we plan to extend Co-LLM to integrate more than two LMs and investigate potentially more complex collaboration strategies emerging from this setting. 12982 Limitations As shown in Fig. 4 and Table 6 of the Appendix, the optimal deferral threshold may be different across different datasets and models. Co-LLM thus requires picking the deferral threshold per task, which can be inconvenient in practice. However, the threshold also enables inference-time control over the amount of collaboration. Second, not every deferral matters: for some positions, the assistant model may generate an identical token as the base model does. This suggests the development of more fine-grained control of deferral strategies, potentially via more sophisticated modeling of deferral model parameters θ. Another limitation in our method comes from fully relying on an assistant model at some point in decoding. For example, if the assistant model is not well-tuned or aligned, it may unintentionally break the generation due to occasional mistakes. As shown in the example below, one erroneous token might lead to a cascade of errors, causing repetition patterns or generating irrelevant content. One future work is to develop a more robust deferral strategy that allows backtracking when the assistant model fails to generate a proper response. Here’s a recipe for Kubdari, a traditional Georgian dish: Ingredients: * 1 lb ground beef * 1 onion, finely cho pped * 2 cloves garlic, minced * 1 cup chopped parsley * 1 cup chopped cilantro * 1 cup chopped dill * 1 cup chopped ... [...repeating the same pattern...] Acknowledgements SS and DS were supported by the National Science Foundation (NSF award no. IIS-2205320 Conceptualizing ML for Dynamic Information Retrieval of EHR Notes). HL was funded by an NDSEG fellowship. BW and YK were partially supported by MIT-IBM Watson AI and an Amazon award. We thank CloudBank (Norman et al., 2021) for supplying computational resources, which is supported by the National Science Foundation under award #1925001. And thanks to Jacob Andreas, Lucas Torroba Hennigen, Hussein Mozannar, Ilker Demirel, Andreas Haupt, Tiwalayo Eisape, Alex Gu, Ruochen Zhang, and Doug Downey for feedback on the draft of this paper. References Akari Asai, Zeqiu Wu, Yizhong Wang, Avirup Sil, and Hannaneh Hajishirzi. 2023. Self-rag: Learning to retrieve, generate, and critique through self-reflection. arXiv preprint arXiv:2310.11511. Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q. Jiang, Jia Deng, Stella Biderman, and Sean Welleck. 2023. Llemma: An open language model for mathematics. 
Zeming Chen, Alejandro Hernández Cano, Angelika Romanou, Antoine Bonnet, Kyle Matoba, Francesco Salvi, Matteo Pagliardini, Simin Fan, Andreas Köpf, Amirkeivan Mohtashami, Alexandre Sallinen, Alireza Sakhaeirad, Vinitra Swamy, Igor Krawczuk, Deniz Bayazit, Axel Marmet, Syrielle Montariol, Mary-Anne Hartley, Martin Jaggi, and Antoine Bosselut. 2023. Meditron-70b: Scaling medical pretraining for large language models. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. Corinna Cortes, Giulia DeSalvo, and Mehryar Mohri. 2016. Learning with rejection. In Algorithmic Learning Theory: 27th International Conference, ALT 2016, Bari, Italy, October 19-21, 2016, Proceedings 27, pages 67–82. Springer. Damai Dai, Chengqi Deng, Chenggang Zhao, R. Xu, Huazuo Gao, Deli Chen, Jiashi Li, Wangding Zeng, Xingkai Yu, Yu Wu, Zhenda Xie, Y. K. Li, Panpan Huang, Fuli Luo, Chong Ruan, Zhifang Sui, and Wenfeng Liang. 2024. Deepseekmoe: Towards ultimate expert specialization in mixture-of-experts language models. ArXiv, abs/2401.06066. Tri Dao. 2023. Flashattention-2: Faster attention with better parallelism and work partitioning. Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. 2023. Qlora: Efficient finetuning of quantized llms. arXiv preprint arXiv:2305.14314. Paul Duetting, Vahab Mirrokni, Renato Paes Leme, Haifeng Xu, and Song Zuo. 2023. Mechanism design for large language models. arXiv preprint arXiv:2310.10826. Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Pal: Program-aided language models. Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the math dataset. 12983 Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685. Hamish Ivison, Yizhong Wang, Valentina Pyatkin, Nathan Lambert, Matthew Peters, Pradeep Dasigi, Joel Jang, David Wadden, Noah A. Smith, Iz Beltagy, and Hannaneh Hajishirzi. 2023. Camels in a changing climate: Enhancing lm adaptation with tulu 2. Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. Few-shot learning with retrieval augmented language models. arXiv preprint arXiv:2208.03299. Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, MarieAnne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2024. Mixtral of experts. Michael I Jordan and Robert A Jacobs. 1994. Hierarchical mixtures of experts and the em algorithm. Neural computation, 6(2):181–214. Anastasia Krithara, Anastasios Nentidis, Konstantinos Bougiatiotis, and Georgios Paliouras. 2023. Bioasqqa: A manually curated corpus for biomedical question answering. Scientific Data, 10(1):170. Yaniv Leviathan, Matan Kalman, and Yossi Matias. 2023. 
Fast inference from transformers via speculative decoding. In International Conference on Machine Learning, pages 19274–19286. PMLR. Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, and Vedant Misra. 2022. Solving quantitative reasoning problems with language models. Margaret Li, Suchin Gururangan, Tim Dettmers, Mike Lewis, Tim Althoff, Noah A. Smith, and Luke Zettlemoyer. 2022a. Branch-train-merge: Embarrassingly parallel training of expert language models. Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, and M. Lewis. 2022b. Contrastive decoding: Open-ended text generation as optimization. pages 12286–12312. Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, and Mike Lewis. 2023a. Contrastive decoding: Open-ended text generation as optimization. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12286–12312, Toronto, Canada. Association for Computational Linguistics. Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023b. Alpacaeval: An automatic evaluator of instruction-following models. https://github.com/tatsu-lab/alpaca_eval. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Alisa Liu, Xiaochuang Han, Yizhong Wang, Yulia Tsvetkov, Yejin Choi, and Noah A. Smith. 2024. Tuning language models by proxy. Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A. Smith, and Yejin Choi. 2021. DExperts: Decoding-time controlled text generation with experts and anti-experts. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6691–6706, Online. Association for Computational Linguistics. Ilya Loshchilov and Frank Hutter. 2018. Decoupled weight decay regularization. In International Conference on Learning Representations. Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, Edouard Grave, Yann LeCun, and Thomas Scialom. 2023. Augmented language models: a survey. Christopher Mohri, Daniel Andor, Eunsol Choi, and Michael Collins. 2023. Learning to reject with a fixed predictor: Application to decontextualization. arXiv preprint arXiv:2301.09044. Hussein Mozannar, Hunter Lang, Dennis Wei, Prasanna Sattigeri, Subhro Das, and David Sontag. 2023. Who should predict? exact algorithms for learning to defer to humans. In International Conference on Artificial Intelligence and Statistics, pages 10520–10545. PMLR. Hussein Mozannar and David Sontag. 2020. Consistent estimators for learning to defer to an expert. In International Conference on Machine Learning, pages 7076–7087. PMLR. Mohammed Muqeeth, Haokun Liu, Yufan Liu, and Colin Raffel. 2024. Learning to route among specialized experts for zero-shot generalization. arXiv preprint arXiv:2402.05859. 
12984 Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. 2021. Webgpt: Browser-assisted questionanswering with human feedback. arXiv preprint arXiv:2112.09332. Michael Norman, Vince Kellen, Shava Smallen, Brian DeMeulle, Shawn Strande, Ed Lazowska, Naomi Alterman, Rob Fatland, Sarah Stone, Amanda Tan, et al. 2021. Cloudbank: Managed services to simplify cloud access for computer science research and education. In Practice and Experience in Advanced Research Computing, pages 1–4. Sean O’Brien and Mike Lewis. 2023. Contrastive decoding improves reasoning in large language models. Aitor Ormazabal, Mikel Artetxe, and Eneko Agirre. 2023. Comblm: Adapting black-box language models through small fine-tuned models. arXiv preprint arXiv:2305.16876. Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Lauren Hong, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, and Maosong Sun. 2023. Toolllm: Facilitating large language models to master 16000+ real-world apis. Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. 2020. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 3505–3506. Danielle Saunders, Felix Stahlberg, A. Gispert, and B. Byrne. 2019. Domain adaptive inference for neural machine translation. ArXiv, abs/1906.00408. Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761. Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538. Hugo Touvron, Louis Martin, Kevin R. Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, D. Bikel, Lukas Blecher, Cristian Cantón Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, A. Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel M. Kloumann, A. Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, R. Subramanian, Xia Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zhengxu Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and fine-tuned chat models. ArXiv, abs/2307.09288. George Tsatsaronis, Georgios Balikas, Prodromos Malakasiotis, Ioannis Partalas, Matthias Zschunke, Michael R Alvers, Dirk Weissenborn, Anastasia Krithara, Sergios Petridis, Dimitris Polychronopoulos, et al. 2015. An overview of the bioasq large-scale biomedical semantic indexing and question answering competition. BMC bioinformatics, 16(1):1–28. 
Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A. Smith, Iz Beltagy, and Hanna Hajishirzi. 2023. How far can camels go? exploring the state of instruction tuning on open resources. ArXiv, abs/2306.04751. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837. Yiran Wu, Feiran Jia, Shaokun Zhang, Hangyu Li, Erkang Zhu, Yue Wang, Yin Tat Lee, Richard Peng, Qingyun Wu, and Chi Wang. 2023. An empirical study on challenging math problem solving with gpt4. Fuzhao Xue, Zian Zheng, Yao Fu, Jinjie Ni, Zangwei Zheng, Wangchunshu Zhou, and Yang You. 2024. Openmoe: An early effort on open mixture-ofexperts language models. ArXiv, abs/2402.01739. Kevin Yang and Dan Klein. 2021. FUDGE: Controlled text generation with future discriminators. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3511–3535, Online. Association for Computational Linguistics. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2023. Judging llm-as-a-judge with mt-bench and chatbot arena. arXiv preprint arXiv:2306.05685. Yanqi Zhou, Tao Lei, Hanxiao Liu, Nan Du, Yanping Huang, Vincent Zhao, Andrew Dai, Zhifeng Chen, Quoc Le, and James Laudon. 2022. Mixture-ofexperts with expert choice routing. 12985 A Additional Experimental Details A.1 Model Training and Computation Training details. In our experiments, we finetune the base model and learn the latent variable parameters θ jointly: i.e., we optimize both the “base” and θ parameters by optimizing the marginal likelihood from Eq. (4) with AdamW (Loshchilov and Hutter, 2018). We train our models primarily using 4 A100 80G GPUs, and we use FlashAttention (Dao, 2023) and DeepSpeed ZeRO Stage 2 (Rasley et al., 2020) during training to reduce the GPU memory usage. We follow Ivison et al. (2023) and Liu et al. (2024), using similar hyperparameters and settings detailed in Table 5. In all training experiments, we compute the marginal likelihood loss on only target tokens (i.e., we mask out the input tokens). It takes around 2 hours to finish the finetuning experiments for GSM8k, and we estimate a total of 3,000 GPU hours used for all experiments. Datasets and prompts. Table 8 details the sizes of the training and evaluation datasets. We format the data using the same simple prompts during training and evaluation, with examples shown in Table 11. Model and data licenses. We use three different LLMs in our experiments, i.e., LLAMA (Touvron et al., 2023), LLEMMA (Azerbayev et al., 2023), and MEDITRON (Chen et al., 2023), and they all share the same LLaMA 2 community license.8 The licenses for the dataset are listed in Table 9. Threshold search. For all methods with a learnable model selector (including Co-LLM), we choose the best η for Eq. (6) using a small validation set before testing, described in Algorithm 1. For GSM8k, MATH, and BioASQ, we conduct the search in-domain, using a small hold-out subset for that dataset; for AlpacaEval, we use a subset of a separate instruction following benchmark MTBench (Zheng et al., 2023) to search and pick the η∗. 
We report the used values under different settings in Table 6 (for Co-LLM) and Table 6 (for Weakly-supervised Co-LLM). A.2 BioASQ Evaluation There are four different substasks in the BioASQ dataset, and they are evaluated using different met8https://ai.meta.com/llama/license/. 0 20 40 60 80 f - D eferral Frequency 5 10 15 20 25 30 E xact M atch MATH (Validation) A sst. L lam a-70B A sst. L lem m a-7B A sst. L lem m a-34B 0 20 40 60 80 f - D eferral Frequency 25 30 35 40 45 A verage Score BioASQ (Validation) A sst. L lam a-70B A sst. M editron-7B A sst. M editron-70B Figure 4: Performance of Co-LLM at different frequencies of deferral on GSM8k, MATH and BioASQ. There exists an optimal f that the joint model achieves better performance than using either of them alone. Algorithm 1: Find Optimal Deferral Th. η Input :Base Model, Asst. Model, Model Selector ϕ, Validation Dataset D Let P = {} for i = 0 to |D| do Given the input prompt in Di, generate response X(i) using the base model Predict per-token deferral probability Pϕ(Z(i) t = 1 | X(i) <t), and append it to P end Sort P in ascending order Set current best threshold η = 0 and evaluation score s = 0 for j = 0 to 100 by 10 do Get the j-th quantile pj in P and use it as the deferral threshold ηj Generate responses X(i) for i in from 0 to |D| using the base and asst. model controlled by Pϕ(Z(i) t = 1 | X(i) <t) > ηj Evaluate the responses, and if the evaluation score is better than s, set η = ηj and s to the new score end Return η 12986 Hyperparameter Configuration Default and Co-LLM Finetuning for 7B Modelsa Training Epoch 2 Max Sequence Length 2048 Effective Batch Size 128 Gradient Accumulation Steps 16 Learning Rate 2e-5 Warmup Ratio 0.04 Weight Decay 0 AdamW β1, β2 0.9, 0.999 QLoRA Finetuning for 70B modelsb Training Epoch 2 LoRA Rank 64 LoRA Alpha 16 LoRA Dropout 0.1 Learning Rate 1e-4 Warmup Ratio 0.03 Weak Supervision Experiment λ for binary classifier loss 0.5 Positive Class Weight in binary loss 8 or 5c a The settings are similar to the ones used by Liu et al. (2024). b We adopt the same values as Ivison et al. (2023). c 8 for Tülu-v2-mixture training and 5 for the rest, set based on the class imbalance in the training data. Table 5: Training hyperparameters for our experiments. rics according to the guideline:9 • Factoid: It require a particular entity name (e.g., of a disease, drug, or gene, or a number) to be generated per the question, which might not appera in the original question. The model may generate a list of candidate, and we pick the first generation (as the model often only generates one) and search for matching among the allowed candidate answers. This is the Strict Accuracy (SAcc.) metric in Krithara et al. (2023). • List: Similar to Factoid questions but the model is expected to generate a list of entities. The model is required to produce the answer in a bullet list format, and we use F1 score to evaluate the performance. • Summary: The answer is expected to be a longform text like the description of a treatment. Following Krithara et al. (2023), we report ROUGE2 (Lin, 2004) (using the implementation from the Huggingface Evaluate library10) to measure the 9http://participants-area.bioasq.org/general_ information/Task9b/ 10https://huggingface.co/spaces/ evaluate-metric/rouge Base Asst. 
η∗ f AlpacaEval N = 2048 LLAMA-7B LLAMA-70B 0.80 0.1 GSM8k N = 512 LLAMA-7B LLAMA-70B 0.17 0.1 LLAMA-7B LLEMMA-7B 0.08 0.1 LLAMA-7B LLEMMA-34B 0.05 0.3 MATH N = 512 LLAMA-7B LLAMA-70B 0.57 0.6 LLAMA-7B LLEMMA-7B 0.05 0.9 LLAMA-7B LLEMMA-34B 0.30 0.8 BioASQ N = 512 LLAMA-7B LLAMA-70B 0.38 0.2 LLAMA-7B MEDITRON-7B 0.07 0.5 LLAMA-7B MEDITRON-70B 0.20 0.5 Table 6: For each dataset, we show the max number of generated tokens N (used for all models) and the optimal deferral threshold η∗(corresponding frequency f) used to generate the responses (available only for Co-LLM). Base Asst. η∗ f AlpacaEval LLAMA-7B LLAMA-70B 0.11 0.1 GSM8k LLAMA-7B LLAMA-70B 1.00 0.0 MATH LLAMA-7B LLAMA-70B 0.44 0.1 BioASQ LLAMA-7B LLAMA-70B 1.00 0.0 Table 7: Similar to Table 6, we report the the optimal deferral threshold η∗and frequency used for Weaklysupervised Co-LLM. # of Samples Dataset Train Dev Test Tülu-v2-mixture 326,115 MT-Bench 24 AlpacaEval 805 GSM8k 7,473 50 200 MATH 7,498 60 750 BioASQ 4,719 20 310 (Factiod) 93 (List) 61 (Summary) 75 (Yes/No) 81 Table 8: The training and evaluation dataset sizes. For the instruction following task, we train a model on the Tülu mixture, uses a small validation set from the MTBench dataset to pick the deferral threshold η, and evaluate on the AlpacaEval dataset. For the other tasks, we train and test in-domain. textual overlapping between the generation and the ground truth. • Yes/No: The model needs to provide a binary 12987 Dataset Name License AlpacaEval CC BY-NC 4.0 GSM8k MIT MATH MIT BioASQ CC BY 2.5 Tülu v2 mix ODC BY Table 9: The licenses of the datasets used. answer yes or no to the given question, and the classification accuracy is reported. B Additional Generation Examples Fig. 2 shows an simplified version of generation for clarity, and we show the original generation in Table 10. An additional example generation on MATH is shown in Table 12. Opdu alag contains two active components: 1) n iv ol umab and 2) rel at lim ab. The final answer is: 1) nivolumab 2) relatlimab Table 10: The original (token-level) model generation for the example in Fig. 2. Fig. 2 shows the generation at the word level rather than the token level for simplicity. 12988 Dataset Prompt AlpacaEval <|user|> What are some species of bears that are now extinct? <|assistant|> GSM8k Please solve the following math problem with detailed steps. Question: Tom’s ship can travel at 10 miles per hour. He is sailing from 1 to 4 PM. He then travels back at a rate of 6 mph. How long does it take him to get back? Answer: MATH Given a mathematics problem, determine the answer. Simplify your answer as much as possible. You can use latex to format your answer and you should state your final answer as “The final answer is $(final-answer)$.” Problem: Consider the function $g(x)=3x-4$. What is $g(0)$? Solution: BioASQ You are a medical expert answering real-world biomedical questions. Based on your understanding of basic and clinical science, medical knowledge, and mechanisms underlying health, disease, patient care, and modes of therapy, write a list of entity names, numbers, or short expressions that answer the following question. Question: Which genes are related to Emery-Dreifuss Muscular Dys trophy (EDMD)? Answer: Table 11: Prompts used for formatting the datasets during training and evaluation. For training on the Tülu mixture, we adopt the same chat template as Wang et al. (2023). The prompt for GSM8k and MATH is based on the prompt by Wei et al. (2022) and Hendrycks et al. (2021). 
We modify the prompt for MedMCQA questions in the Meditron paper (Chen et al., 2023) for BioASQ, and adapt it for different tasks. 12989 f Generation (Source TEX) Rendered GT Completing the square in $x$ and $y,$ we get \[9(x + 4)^2 + 4(y - 1)^2 = 324.\] Then \[\frac{(x + 4)^2}{36} + \frac{(y 1)^2}{81} = 1.\]Thus, the center of the ellipse is $\boxed{(-4,1)}.$ Completing the square in x and y, we get 9(x + 4)2 + 4(y −1)2 = 324. Then (x + 4)2 36 + (y −1)2 81 = 1. Thus, the center of the ellipse is (−4, 1) . 0.0 Completing the square in $x$ and $y,$ we get \[9x^2 + 72x + 4y^2 - 8y - 176 = 0 \Rightarrow 9x^2 + 72x + 4y^2 - 8y - 176 = 9(x - 2)^2 + 72(x - 2) + 4(y - 2)^2 - 8(y - 2) - 176.\]Then \[\boxed{2} = \frac{1}{9}(9(x - 2)^2 + 72(x - 2) + 4(y - 2)^2 - 8(y - 2) - 176).\] The final answer is $2$. Completing the square in x and y, we get 9x2 + 72x + 4y2 −8y −176 = 0 ⇒ 9x2 + 72x + 4y2 −8y −176 = 9(x −2)2 + 72(x −2) + 4(y −2)2 −8(y −2) −176. Then 2 = 1 9(9(x −2)2 + 72(x −2) + 4(y −2)2 −8(y −2) −176). The final answer is 2. 0.4 Completing the square in $x$ and $y,$ we get \[9x^2 + 72x + 4y^2 8y - 176 = 0 \Rightarrow 9 ( x - 4)^2 + 4 ( y - 1 )^2 = 1.\]Then \[\ frac{ 9 ( x 4 )^ 2 }{ 1 } + \ frac{ 4( y - 1)^2}{ 1} = 1.\]Hence, \[ a = \ sqrt { 9 } = 3,\]and $b = \ sqrt { 4} = 2.$ Then the center of the ellipse is $\boxed {\ textbf{ (4, 1)}}.$ The final answer is $\textbf{ (4, 1)}$. Completing the square in x and y, we get 9x2 + 72x + 4y2 −8y −176 = 0 ⇒ 9 (x −4)2 + 4(y −1)2 = 1. Then 9(x −4)2 1 + 4(y −1)2 1 = 1. Hence, a = √ 9 = 3, and b = √ 4 = 2. Then the center of the ellipse is (4, 1) . The final answer is (4, 1). 0.8 Completing the square in $x$ and $y,$ we get \[ 9 ( x ^ 2 + 8 x ) + 4 ( y ^ 2 - 2 y ) = 1 7 6.\]Then \[\ begin{aligned} 9 ( x ^ 2 + 8 x + 1 6 ) + 4 ( y ^ 2 - 2 y + 1 ) &= 1 7 6 \\ 9 ( x + 4 )^ 2 + 4 ( y - 1 )^ 2 &= 1 7 6. \end { aligned}\]There fore, the center of the ellipse is $\boxed { (- 4 , 1 )}.$ The final answer is $(-4, 1)$. Completing the square in x and y, we get 9(x2 + 8x) + 4(y2 −2y) = 176. Then 9(x2 + 8x + 16) + 4(y2 −2y + 1) = 176 9(x + 4)2 + 4(y −1)2 = 176. There fore, the center of the ellipse is (−4, 1) . The final answer is (−4, 1). Table 12: Model answers with different deferral frequencies to the question “Find the center of the ellipse whose equation is 9x2 + 72x + 4y2 −8y −176 = 0.” In this example, the base model is a LLAMA-7B model fine-tuned on the MATH dataset, and the reference model is the LLEMMA-34B model. We show the ground-truth answer in the first row, and the 0-shot model answers with different deferral ratios, with the original outputs and rendered. We use yellow background to indicate the token is produced by the reference model rather than the base model. 12990
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12991–13013 August 11-16, 2024 ©2024 Association for Computational Linguistics DRAGIN: Dynamic Retrieval Augmented Generation based on the Information Needs of Large Language Models Weihang Su*1, Yichen Tang†1, Qingyao Ai‡1, Zhijing Wu2, Yiqun Liu1 1Department of Computer Science and Technology, Tsinghua University 2School of Computer Science and Technology, Beijing Institute of Technology Abstract Dynamic retrieval augmented generation (RAG) paradigm actively decides when and what to retrieve during the text generation process of Large Language Models (LLMs). There are two key elements of this paradigm: identifying the optimal moment to activate the retrieval module (deciding when to retrieve) and crafting the appropriate query once retrieval is triggered (determining what to retrieve). However, current dynamic RAG methods fall short in both aspects. Firstly, the strategies for deciding when to retrieve often rely on static rules. Moreover, the strategies for deciding what to retrieve typically limit themselves to the LLM’s most recent sentence or the last few tokens, while the LLM’s information needs may span across the entire context. To overcome these limitations, we introduce a new framework, DRAGIN, i.e., Dynamic Retrieval Augmented Generation based on the Information Needs of LLMs. Our framework is specifically designed to make decisions on when and what to retrieve based on the LLM’s information needs during the text generation process. We evaluate DRAGIN along with existing methods comprehensively over 4 knowledge-intensive generation datasets. Experimental results show that DRAGIN achieves superior performance on all tasks, demonstrating the effectiveness of our method1. 1 Introduction In recent years, large language models (LLMs) have made significant advancements across various natural language processing (NLP) tasks, quickly becoming a critical element in numerous AI applications (Brown et al., 2020; Chowdhery et al., 2022; Touvron et al., 2023a; Scao et al., 2022; Zhang *[email protected] †contributed equally ‡Corresponding Author: [email protected] 1We have open-sourced all the code, data, and models in GitHub: https://github.com/oneal2000/DRAGIN/tree/main et al., 2022). Despite their impressive capabilities, these models often produce text that seems coherent and plausible but factually incorrect, a problem commonly known as hallucination (Maynez et al., 2020; Zhou et al., 2020; Liu et al., 2021; Ji et al., 2023; Su et al., 2024). To mitigate this issue, Retrieval-Augmented Generation (RAG) has emerged as a prominent solution. RAG enhances LLMs by retrieving and incorporating relevant information from external databases into the LLMs’ inputs. It has demonstrated superior effectiveness across numerous NLP challenges (Khandelwal et al., 2019; Borgeaud et al., 2022; Lewis et al., 2020; Guu et al., 2020; Izacard and Grave, 2020; Jiang et al., 2022; Shi et al., 2023). Traditional methods of RAG typically rely on single-round retrieval, using the LLM’s initial input to retrieve relevant information from external corpora. While this method is effective for straightforward tasks, it tends to fall short for complex multi-step tasks and long-form generation tasks (Jiang et al., 2023). In contrast, dynamic RAG (Trivedi et al., 2022; Borgeaud et al., 2022; Ram et al., 2023; Jiang et al., 2023) performs multiple times of retrieval during the generation process of LLMs. 
It includes two steps: identifying the optimal moment to activate the retrieval module (deciding when to retrieve), and crafting the appropriate query once retrieval is triggered (determining what to retrieve). Depending on when and what to retrieve, a variety types of methods have been proposed in this direction. For example, IRCoT (Trivedi et al., 2022) adopts a global augmentation method where retrieval is conducted for each generated sentence, with the latest generated sentence used as the query. RETRO (Borgeaud et al., 2022) and IC-RALM (Ram et al., 2023) define a sliding window and trigger the retrieval module based on a preset number of processed tokens, and the last n tokens are used as the query. However, existing dynamic RAG methods face 12991 several critical challenges, primarily in determining the optimal timing for retrieval and formulating effective queries when retrieval is triggered. First of all, existing approaches often rely on static rules to decide when to retrieve, neglecting the assessment of necessity and potential risks involved. On the one hand, depending on the quality of the input query and retrieval models, unnecessary retrieval augmentation may introduce irrelevant or noisy data to LLMs which could jeopardize the quality of the outputs. On the other hand, conducting retrieval augmentation will inevitably increase the time and computation cost of LLM inference, such cost is unworthy if LLMs can generate correct outputs by themselves. Additionally, the strategies of existing studies in determining what to retrieve often restrict themselves to the LLM’s most recent sentence or the last few tokens. This approach may not capture the model’s real-time information needs since the LLM’s information needs may actually be related to terms that span the entire context. Retrieving documents in this manner is thus suboptimal in many cases. To overcome these limitations, we introduce a new framework, DRAGIN, i.e., Dynamic Retrieval Augmented Generation based on the Information Needs of LLMs. Our framework is specifically designed to make decisions on when and what to retrieve, based on the LLM’s information needs during the text generation process. For the timing of retrieval, we propose RIND: Real-time Information Needs Detection, which considers the LLM’s uncertainty about its own generated content, the influence of each token on subsequent tokens, and the semantic significance of each token. For the formulation of retrieval queries, we propose QFS: Query Formulation based on Self-attention, which innovates query formulation by leveraging the LLM’s self-attention across the entire context. DRAGIN is a lightweight RAG framework that can be incorporated into any Transformer-based LLMs without further training, fine-tuning, or prompt engineering. We comprehensively evaluate DRAGIN along with existing dynamic RAG frameworks over four knowledge-intensive generation benchmarks. Experimental results show that DRAGIN achieves superior performance on all datasets, demonstrating the effectiveness of our method. Moreover, the results of the ablation study indicate that our proposed new strategies for "when to retrieval" (i.e., RIND) and "what to retrieval" (i.e., QFS) perform uniformly better than other strategies in existing RAG methods despite retrieval models and LLMs. In summary, the contributions of our paper are as follows: • We propose a novel dynamic RAG framework: DRAGIN. 
In contrast to previous works, our framework optimizes when and what to retrieve based on the real-time information needs of the LLM.

• We evaluate existing dynamic RAG methods and DRAGIN on four knowledge-intensive datasets using three different LLMs. Experimental results indicate that DRAGIN achieves state-of-the-art (SOTA) performance.

2 Related Work

2.1 Single-round Retrieval-augmented LLM

LLMs have demonstrated significant effectiveness across a wide range of tasks. However, their built-in knowledge can sometimes fall short when dealing with knowledge-intensive tasks. To address this limitation, Retrieval-Augmented Generation (RAG) strategies are widely employed to enhance the performance of LLMs. One of the most direct methods is single-round retrieval augmentation (Khandelwal et al., 2019; Borgeaud et al., 2022; Lewis et al., 2020; Guu et al., 2020; Izacard and Grave, 2020; Jiang et al., 2022; Shi et al., 2023), which involves using the initial input as a query to retrieve information from an external corpus. The retrieved external knowledge is then incorporated as part of the input for the model. Previous research has explored single-round retrieval augmentation extensively. For instance, REPLUG (Shi et al., 2023) treats LLMs as a black box and leverages them to generate training data for the retrieval model. From a different perspective, UniWeb (Li et al., 2023d) proposes an adaptive search engine-assisted learning method that can self-assess whether the LLM requires retrieval augmentation.

2.2 Multi-round Retrieval-augmented LLM

Single-round retrieval can be relatively effective for simple tasks or cases where user information needs are clear-cut. However, for complex tasks or tasks involving the generation of lengthy text, such as long-form question answering, multi-hop reasoning, chain-of-thought reasoning, etc., relying solely on the user’s initial input for retrieval may not adequately cover all the external knowledge that the model requires (Jiang et al., 2023).

Figure 1: An illustration of our DRAGIN framework.

Therefore, some researchers have begun to explore multi-round retrieval augmentation. For example, RETRO (Borgeaud et al., 2022) and IC-RALM (Ram et al., 2023) trigger retrieval every 4 to 32 tokens, and IRCoT (Trivedi et al., 2022) triggers retrieval every sentence.
However, solely relying on fixed interval-based retrieval without considering the information needs of the LLM itself could produce suboptimal results. Inspired by this, FLARE (Jiang et al., 2023) triggers retrieval when encountering an uncertain token: specifically, if any token in the generated text has a probability lower than a certain threshold, the retrieval module is triggered.

3 Methodology

In this section, we introduce the DRAGIN framework in detail. DRAGIN consists of two components: Real-time Information Needs Detection (RIND) and Query Formulation based on Self-attention (QFS), as illustrated in Figure 1. We introduce RIND in Section 3.1 and QFS in Section 3.2.

3.1 Real-time Information Needs Detection

As discussed above, most existing dynamic RAG frameworks trigger the retrieval module based on static, predefined rules. To the best of our knowledge, the only notable exception is FLARE (Jiang et al., 2023), which triggers retrieval dynamically when the LLM's confidence (i.e., the generation probability) on the next token is lower than a certain threshold. However, the necessity of retrieval augmentation depends not only on the generation confidence, but also on the importance of the token, the semantics of the token, and the influence of each token on subsequent tokens. To address the limitations of the existing approaches, we propose an enhanced approach for triggering retrieval within dynamic RAG frameworks, named Real-time Information Needs Detection (RIND). This method refines the retrieval activation process by evaluating not only the uncertainty of each token, but also its semantic contribution and its impact on the following context.

RIND begins by quantifying the uncertainty of each token generated during the LLM's inference process. This is accomplished by recording the entropy of the token's probability distribution across the vocabulary. Consider an output sequence generated by an LLM, denoted as $T = \{t_1, t_2, \ldots, t_n\}$, with each $t_i$ representing an individual token within the sequence at position $i$. For any token $t_i$, the entropy $H_i$ is computed as follows:

$H_i = -\sum_{v \in V} p_i(v) \log p_i(v)$,   (1)

where $p_i(v)$ denotes the probability of generating the token $v$ over all tokens in the vocabulary $V$ at position $i$. This measurement of uncertainty serves as the first dimension in our multi-faceted evaluation of tokens.

In addition, RIND leverages the self-attention mechanism inherent in Transformer-based LLMs to allocate weights to tokens, which represent the tokens' impact on the subsequent context. Specifically, for any given token $t_i$, we quantify its influence by recording the maximum attention value $a_{\max}(i)$, i.e., the maximum attention it receives from all following tokens (we use the attention scores of the last Transformer layer of the LLM). The attention value $A_{i,j}$ between two tokens $t_i$ and $t_j$ ($i < j$) is computed as follows:

$A_{i,j} = \mathrm{softmax}\!\left(\frac{Q_i K_j^T}{\sqrt{d_k}}\right)$,   (2)

where $Q_i$ represents the query vector of token $t_i$, $K_j$ is the key vector of token $t_j$, and $d_k$ denotes the dimensionality of the key vector. The softmax function is applied to the dot product of $Q_i$ and $K_j$, normalized by the square root of $d_k$.
Following this, the maximum attention value $a_{\max}(i)$ for token $t_i$ is identified by locating the highest $A_{i,j}$ over all $j > i$:

$a_{\max}(i) = \max_{j > i} A_{i,j}$   (3)

Considering the semantic contribution of each token, RIND employs a binary semantic indicator to filter out stopwords, thus concentrating on tokens with significant semantic value:

$s_i = \begin{cases} 0, & \text{if } t_i \in S \\ 1, & \text{otherwise} \end{cases}$   (4)

where $S$ is the stopword set and $s_i$ is the semantic contribution score of the token $t_i$. This process ensures that only semantically potent tokens contribute to the retrieval decision-making process. Combining uncertainty, significance, and semantics, RIND computes a comprehensive score for each token $t_i$:

$S_{\mathrm{RIND}}(t_i) = H_i \cdot a_{\max}(i) \cdot s_i$   (5)

Let $T = \{t_1, t_2, \ldots, t_n\}$ represent the set of tokens already generated by the LLM. The retrieval module activates when the score $S_{\mathrm{RIND}}(t_i)$ for any token exceeds a predefined threshold $\theta$.

3.2 Query Formulation based on Self-attention

Once the position to conduct retrieval augmentation is determined, the next step in the RAG framework is to formulate a query to retrieve necessary information from external databases for the continued generation of the LLM. In the existing dynamic RAG frameworks, all the query formulation methods limit their focus to the LLM's most recent sentence or the last few tokens. This narrow scope fails to adequately cater to the model's real-time information needs, which may span the entire context. To overcome the shortcomings of these approaches, we propose a novel strategy that utilizes the self-attention mechanism inherent in Transformer-based LLMs. Our method, termed "Query Formulation based on Self-attention" (QFS), seeks to ascertain the LLM's information needs more precisely by examining its understanding of the full context.

Consider an output sequence generated by an LLM, denoted as $T = \{t_1, t_2, \ldots, t_n\}$, with each $t_i$ representing an individual token within the sequence. Suppose the RIND module identifies the token at position $i$ as requiring external knowledge and triggers the retrieval module. The QFS approach then focuses on this specific position to formulate a query. For the token at position $i$, the QFS method evaluates the attention weights across the preceding token sequence $\{t_{i-1}, t_{i-2}, \ldots, t_1\}$. Since the generation of $t_i$ by the LLM is based on its interpretation of the entire preceding context, the attention weights reflect the model's self-assessed importance of each token in generating $t_i$. The QFS method prioritizes these tokens based on their attention scores, selecting the top $n$ tokens to construct the query. The query formulation process includes the following steps:

(1) Extract the attention scores of the last Transformer layer for the triggering token $t_i$, $A_i = \{a_{i,1}, a_{i,2}, \ldots, a_{i,i-1}\}$, where $a_{i,j}$ represents the attention score assigned by $t_i$ to $t_j$;
(2) Sort $A_i$ in descending order to identify the top $n$ tokens with the highest attention scores;
(3) Find the words corresponding to these tokens from the vocabulary and arrange them according to their original order in the text;
(4) Construct the query $Q_i$ using the words from these top $n$ tokens, ensuring the query reflects the most relevant aspects of the context as determined by the LLM's self-attention mechanism.
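As a concrete illustration of how Eqs. (1)-(5) and the QFS steps could be realized, the sketch below computes the RIND score for each generated token and, when the score crosses the threshold, builds a QFS query from a Hugging Face causal LM. It is a minimal sketch under our own simplifying assumptions (a single scoring pass over an already generated sequence, last-layer attention averaged over heads, a toy stopword set, example values of θ and n); the function name and structure are ours, not the released DRAGIN implementation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "is", "was", "at"}  # toy set

def rind_then_qfs(model, tokenizer, input_ids, prompt_len, theta=0.6, top_n=25):
    """Return (trigger position, QFS query), or (None, None) if no retrieval is needed.
    input_ids: tensor of shape (1, seq_len) = prompt plus already generated tokens."""
    with torch.no_grad():
        out = model(input_ids, output_attentions=True)
    logits = out.logits[0]                                   # (seq_len, vocab)
    attn = out.attentions[-1][0].mean(dim=0)                 # last layer, head-averaged

    probs = torch.softmax(logits.float(), dim=-1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(-1)    # H at each position, Eq. (1)

    tokens = tokenizer.convert_ids_to_tokens(input_ids[0])
    seq_len = input_ids.shape[1]
    for i in range(prompt_len, seq_len):
        h_i = entropy[i - 1].item()   # entropy of the distribution that produced t_i
        a_max = attn[i + 1:, i].max().item() if i + 1 < seq_len else 0.0   # Eq. (3)
        word = tokens[i].lstrip("▁Ġ").lower()
        s_i = 0.0 if word in STOPWORDS else 1.0                            # Eq. (4)
        if h_i * a_max * s_i > theta:                                      # Eq. (5)
            # QFS: rank the preceding tokens by t_i's attention row, keep the
            # top-n, and restore their original order before decoding.
            k = min(top_n, i)
            top_pos = sorted(torch.topk(attn[i, :i], k).indices.tolist())
            query = tokenizer.decode(input_ids[0][top_pos].tolist())
            return i, query
    return None, None

# Example setup (eager attention is needed so attention weights are returned):
# model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf",
#                                              attn_implementation="eager")
# tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
```

In the actual framework this check would run incrementally after each decoding step rather than once over a finished sequence; the single pass here only keeps the illustration compact.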
3.3 Continue Generation after Retrieval

Once the RIND module detects the position $i$ that needs external knowledge, the QFS module creates the query and utilizes an off-the-shelf retrieval model (e.g., BM25) to retrieve relevant information from external knowledge bases. Suppose the retrieved documents are denoted as $D_{i1}$, $D_{i2}$, and $D_{i3}$. Upon successful retrieval, the next step of the dynamic RAG framework is to integrate this external knowledge into the LLM's generation process. This integration begins with truncating the LLM's output at the identified position $i$ for retrieval augmentation:

$T' = \mathrm{truncate}(T, t_i)$,   (6)

where $T'$ represents the truncated output, $T$ is the original sequence generated by the LLM, and $t_i$ is the token at which the need for external knowledge was identified by RIND. To integrate the retrieved knowledge $D_{i1}$, $D_{i2}$, and $D_{i3}$, we adopt a meticulously designed prompt template (the full prompt templates are presented in Appendix F), which is structured as follows:

The entire input for the LLM:
  Below are the external knowledge references:
  [1] D_i1
  [2] D_i2
  [3] D_i3
  Please answer the question based on the external knowledge:
  Question: xxx
  Answer: T'

At this point, the LLM continues generating content based on the retrieved external knowledge and the truncated output $T'$. Following the integration of the retrieved knowledge, the LLM resumes generating content from the truncation point, enhanced with additional information. This procedure allows the LLM to bridge the previously identified knowledge gap, facilitating a more informed and precise continuation of its output. Suppose RIND detects at a subsequent position $j$ that the LLM again requires external knowledge. In that case, the QFS module is triggered again at position $j$ to generate a new query, retrieving a new set of documents $D_{j1}$, $D_{j2}$, and $D_{j3}$ to replace $D_{i1}$, $D_{i2}$, and $D_{i3}$. The LLM then continues generating from position $j$ based on the newly retrieved documents, following the same process.

4 Experimental Setup

4.1 Datasets

We choose two multi-hop QA datasets, 2WikiMultihopQA (Ho et al., 2020) and HotpotQA (Yang et al., 2018), to evaluate the RAG framework's ability to answer complex questions that require multi-hop reasoning. We choose the IIRC (Ferguson et al., 2020) dataset to evaluate the RAG framework's ability on reading comprehension tasks. Furthermore, we utilize the StrategyQA (Geva et al., 2021) dataset to evaluate the commonsense reasoning capabilities of DRAGIN and other baselines.

4.2 Settings for each Dataset

• 2WikiMultihopQA (Ho et al., 2020). We follow the setting of (Wang et al., 2022) to generate both the chain-of-thought (CoT) reasoning process and the final answer. We follow the prompt templates of (Trivedi et al., 2022) and (Jiang et al., 2023). For the evaluation metrics, we extract the final answer from the generated output using pattern matching techniques. The extracted answer is then compared with the reference answer, utilizing exact match at the answer level along with token-level measurements of F1 score and precision.

• HotpotQA (Yang et al., 2018). We follow the setting and the prompt template of (Trivedi et al., 2022) to generate both the chain-of-thought (CoT) reasoning process and the final answer. Our evaluation metrics on this dataset are the same as for 2WikiMultihopQA.

• StrategyQA (Geva et al., 2021). We follow the setting of (Wei et al., 2022) to generate both the CoT reasoning process and the final answer. We follow the prompt templates of (Wei et al., 2022) and (Jiang et al., 2023). For the evaluation metrics, the obtained yes/no response is extracted and compared with the standard correct answer using an exact match approach.

• IIRC (Ferguson et al., 2020).
We follow the setting and the prompt template of (Trivedi et al., 2022) to generate the final answer. Our evaluation metrics on this dataset are the same as for 2WikiMultihopQA.

Besides the settings introduced in this section, the specific prompt templates corresponding to each dataset are presented in Appendix F. Appendix A provides more detailed descriptions of each dataset's settings.

4.3 Baselines

We choose the following text generation baselines for comparison. Following the setting of FLARE (Jiang et al., 2023), we implemented the existing multi-round RAG frameworks using the same settings, with the only variation being the timing of triggering retrieval (when to retrieve) and the query formulation method when retrieval is triggered (what to retrieve).

• wo-RAG. The LLM provides direct answers to questions without RAG.

• SR-RAG (Single-round RAG). Relevant passages are retrieved from an external corpus based on the initial question. The retrieved passages are then added to the LLM's input.

• FL-RAG (Fix Length RAG) (Khandelwal et al., 2019; Borgeaud et al., 2022; Ram et al., 2023). A multi-round retrieval augmentation method that triggers the retrieval module every n tokens. The tokens generated in the previous token window are utilized as the query.

• FS-RAG (Fix Sentence RAG) (Trivedi et al., 2022). A multi-round retrieval augmentation method that triggers the retrieval module every sentence. The last generated sentence is utilized as the query.

• FLARE (Jiang et al., 2023). A multi-round retrieval augmentation method that triggers retrieval each time it encounters an uncertain token. When the retrieval module is triggered, the last generated sentence, with the uncertain tokens removed, is defined as the query.

To illustrate the differences between DRAGIN and the other dynamic RAG baselines directly, we present a comparison of the retrieval timing and query formulation methods of each dynamic RAG framework in Table 1.

Table 1: A comparative overview of our selected Retrieval-Augmented Generation baselines.

Method | Timing for Retrieval | Query Formulation
SR-RAG | Before generation | Initial input
FL-RAG | Per n tokens | Last generated tokens
FS-RAG | Per sentence | Last generated sentence
FLARE | Any token's probability below the threshold | Last generated sentence excluding low-probability tokens
DRAGIN | Generated token's importance and uncertainty | LLM's attention over the entire context

4.4 Selected LLMs

To validate the effectiveness of DRAGIN and other RAG baselines, we conducted experiments with the following LLMs:

• LLaMA-2-Chat (Touvron et al., 2023b). LLaMA2 is a collection of pre-trained and fine-tuned LLMs. This series includes fine-tuned LLMs, known as Llama2-Chat, specifically designed for optimal performance in dialogue-based applications. We choose LLaMA-2-Chat-7B and LLaMA-2-Chat-13B.

• Vicuna-13B-v1.5 (Chiang et al., 2023) is a collection of open-source chatbots fine-tuned from LLaMA using user-shared conversations gathered from ShareGPT. We selected the latest version of Vicuna, namely Vicuna-13B-v1.5.

4.5 Implementation Details

Hyperparameters: The hyperparameters are all presented in Appendix E.

Retriever: We adopt BM25 as our retrieval model based on findings from (Ram et al., 2023), which demonstrated its superior performance in Retrieval-Augmented Generation, even outperforming some dense retrieval models. We also explored the impact of replacing BM25 with a SOTA dense retrieval method, SGPT (Muennighoff, 2022), which is detailed in Section 5.5.
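For reference, the sketch below shows one way such a BM25 retriever could be set up over pre-segmented Wikipedia passages using the rank_bm25 package. The package choice, the whitespace tokenization, and the helper names are our own assumptions; the paper does not prescribe a particular BM25 implementation.

```python
from rank_bm25 import BM25Okapi

def segment_article(article_text: str, passage_len: int = 100) -> list[str]:
    """Split an article into roughly 100-token passages (whitespace tokens here)."""
    words = article_text.split()
    return [" ".join(words[i:i + passage_len])
            for i in range(0, len(words), passage_len)]

def build_bm25(passages: list[str]) -> BM25Okapi:
    """Index lowercased, whitespace-tokenized passages."""
    return BM25Okapi([p.lower().split() for p in passages])

def retrieve_top_k(bm25: BM25Okapi, passages: list[str], query: str, k: int = 3):
    """Return the k highest-scoring passages for a QFS-style keyword query."""
    return bm25.get_top_n(query.lower().split(), passages, n=k)

# e.g., retrieve_top_k(bm25, passages, "Einstein 1903 secured job", k=3)
```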
Stopwords: For the identification of stopwords within the RIND module, we utilized the en_core_web_sm language model from the Spacy library, a tool recognized for its effectiveness and efficiency in Natural Language Processing tasks as evidenced by previous research (Shelar et al., 2020). External Knowledge Corpus: We adopt Wikipedia as our external knowledge corpus. Each article are segmented into 100-token passages. LLM Configuration: For the selected LLMs, we directly download model parameters from the official Hugging Face repositories for each model, and use the code provided by Hugging Face to conduct text generation. For the generation configuration, we have chosen greedy decoding as the decoding strategy for LLM inference to ensure the reproducibility of our experimental results. However, for practical applications, we recommend using the official default generation configuration provided by each model, as this will yield better performance. 5 Experimental Results 5.1 Overall Results of DRAGIN and Baselines Our experiments comprehensively evaluate the performance of DRAGIN against various baselines across four benchmark datasets: 2WikiMultihopQA, HotpotQA, StrategyQA, and IIRC. The results, summarized in Table 2, underscore several critical insights: (1) The integration of single-round retrieval augmentation consistently boosts LLMs’ performance across all datasets when compared 12996 Table 2: The overall experimental results of DRAGIN and other baselines on four benchmarks. The best results are in bold. 2WikiMultihopQA HotpotQA StrategyQA IIRC LLM RAG Method EM F1 EM F1 Accuracy EM F1 Llama2-13b-chat wo-RAG 0.187 0.2721 0.223 0.3097 0.650 0.168 0.2039 SR-RAG 0.245 0.3364 0.263 0.3706 0.654 0.196 0.2303 FL-RAG 0.217 0.3054 0.177 0.2682 0.648 0.155 0.1875 FS-RAG 0.270 0.3610 0.267 0.3715 0.655 0.171 0.2061 FLARE 0.224 0.3076 0.180 0.2756 0.655 0.138 0.1667 DRAGIN (Ours) 0.304 0.3931 0.314 0.4238 0.689 0.185 0.2221 Llama2-7b-chat wo-RAG 0.146 0.2232 0.184 0.2745 0.659 0.139 0.1731 SR-RAG 0.169 0.2549 0.164 0.2499 0.645 0.187 0.2258 FL-RAG 0.112 0.1922 0.146 0.2107 0.635 0.172 0.2023 FS-RAG 0.189 0.2652 0.214 0.3035 0.629 0.178 0.2157 FLARE 0.143 0.2134 0.149 0.2208 0.627 0.136 0.1644 DRAGIN (Ours) 0.220 0.2926 0.232 0.3344 0.641 0.192 0.2336 Vicuna-13b-v1.5 wo-RAG 0.146 0.2232 0.228 0.3256 0.682 0.175 0.2149 SR-RAG 0.170 0.2564 0.254 0.3531 0.686 0.217 0.2564 FL-RAG 0.135 0.2133 0.187 0.3039 0.645 0.0985 0.1285 FS-RAG 0.188 0.2625 0.185 0.3216 0.622 0.1027 0.1344 FLARE 0.157 0.2257 0.092 0.1808 0.599 0.1174 0.1469 DRAGIN (Ours) 0.252 0.3516 0.288 0.4164 0.687 0.2233 0.2652 to direct question answering, confirming the effectiveness of RAG. (2) Despite the overall positive impact of retrieval augmentation, we observe that fixed rules-based retrieval methods, e.g. FL-RAG and FS-RAG, do not always outperform singleround retrieval. This observation validates our hypothesis that retrieval augmentation, if conducted at a wrong position, may not be helpful in improving the quality of LLM’s outputs. This underscores the significance of timing in the activation of retrieval processes, which should be tailored to the information needs of Large Language Models (LLMs), activating retrieval only when LLMs necessitate external knowledge. (3) The performance of FLARE varies significantly among different datasets. Interestingly, as shown in our ablation study (§ 5.4), the query formulation strategies are significantly better than those used by other baselines, but its overall performance is not. 
This indicates that the timing of retrieval augmentation in FLARE is far from perfect. (4) Our proposed DRAGIN method demonstrates superior performance across most LLMs and datasets. This indicates the robustness and effectiveness of DRAGIN in enhancing LLMs’ capabilities. (5) DRAGIN demonstrates more substantial performance improvements in MultihopQA tasks, such as 2WikiMultihopQA and HotpotQA, than in tasks requiring common sense reasoning, like those in the StrategyQA dataset. This difference highlights DRAGIN’s specialized capability in managing complex, multi-step reasoning tasks. Table 3: Comparison of the frequency of retrieval module activation in dynamic RAG frameworks across all datasets. 2WMQA, HQA, SQA indicates 2WikiMultihopQA, HotpotQA, StrategyQA respectively. 2WMQA HQA SQA IIRC #Num #Num #Num #Num L13B FL-RAG 3.770 3.194 3.626 3.426 FS-RAG 3.131 4.583 4.885 4.305 FLARE 1.592 3.378 0.625 5.521 DRAGIN 2.631 3.505 4.786 2.829 L7B FL-RAG 3.342 3.809 3.757 2.839 FS-RAG 3.833 4.152 4.546 4.210 FLARE 0.941 1.064 1.271 1.095 DRAGIN 2.836 3.013 4.629 2.927 V13B FL-RAG 4.199 3.564 3.591 3.189 FS-RAG 3.720 5.701 6.820 6.032 FLARE 1.093 1.078 1.118 0.335 DRAGIN 2.542 3.184 3.744 3.120 5.2 Efficiency In this section, we investigate the efficiency of various dynamic RAG frameworks across multiple datasets. We measure efficiency based on the number of retrieval calls made, as outlined in Table 3. Due to the special design of FS-RAG, the #NUM for FS-RAG also indicates the average number of sentences produced by the LLM in response to queries on this dataset. Among the evaluated frameworks, FLARE stood out for its efficiency, requiring the fewest retrieval calls across all datasets. DRAGIN followed closely, with fewer retrieval 12997 Table 4: The influence of the ‘When to Retrieve’ decision on various dynamic RAG frameworks, with the IIRC dataset as the evaluation benchmark. The best results are in bold. L13B indicates LLaMA2-13B-Chat, V13B indicates Vicuna-13b-v1.5. We fix the query formulation method, the last complete sentence generated by the LLM is selected as the query for all the baselines. EM F1 Prec. L13B FLARE 0.128 0.1599 0.1677 FL-RAG 0.155 0.1875 0.1986 FS-RAG 0.171 0.2061 0.2185 DRAGIN 0.187 0.2242 0.2319 V13B FLARE 0.097 0.1277 0.1324 FL-RAG 0.099 0.1285 0.1324 FS-RAG 0.103 0.1344 0.1358 DRAGIN 0.196 0.2367 0.2476 calls than FS-RAG and FL-RAG. 5.3 Timing of Retrieval In this subsection, we investigate the impact of the timing of retrieval on the performance of dynamic RAG frameworks. Specifically, we fixed the method of query formulation to use the last complete sentence generated by the LLM as the query, and varied the timing of retrieval as the only variable. We examined DRAGIN alongside three existing frameworks: FLARE, FL-RAG, and FS-RAG on the IIRC dataset. As shown in Table 4, DRAGIN consistently outperforms all other dynamic RAG methods. This highlights the effectiveness of our novel approach to determining the optimal moment for retrieval. DRAGIN’s superior performance suggests that its method for detecting the real-time information needs of LLMs and triggering retrieval accordingly is particularly adept at enhancing the quality of the generated text. We also evaluate the impact of varying threshold values within the RIND module on the performance of the DRAGIN framework. We present the experimental results on the HotpotQA dataset in Table 5. 
Our experimental results show that DRAGIN’s performance remains stable across threshold settings, indicating a low sensitivity to changes in this hyperparameter. The threshold value is pivotal in determining the retrieval module’s activation frequency. As the threshold increases, the frequency of the retrieval module’s activation decreases, suggesting that adjusting the threshold can strike a balance between the system’s efficiency and the accuracy of its outputs in practical applications. Table 5: Comparasion between different threshold of RIND for LLaMA-13B-Chat model on the HotpotQA dataset. The best results are in bold. threshold EM F1 Prec. 0.3 0.295 0.3856 0.3873 0.4 0.297 0.387 0.389 0.5 0.299 0.3897 0.3915 0.6 0.304 0.3931 0.3946 0.7 0.304 0.3927 0.3937 0.8 0.301 0.392 0.3927 0.9 0.301 0.3944 0.3947 1 0.293 0.3869 0.3875 Table 6: The influence of the query formulation methods on various dynamic RAG frameworks, with the HotpotQA dataset as the evaluation benchmark. The best results are in bold. L13B indicates LLaMA2-13B-Chat, V13B indicates Vicuna-13b-v1.5. EM F1 Prec. L13B FLARE 0.262 0.3674 0.3792 Full Context 0.252 0.3584 0.3711 FS-RAG 0.255 0.3574 0.3685 FL-RAG 0.241 0.3394 0.3495 DRAGIN 0.314 0.4238 0.4401 V13B FLARE 0.225 0.3366 0.3420 Full Context 0.221 0.3402 0.3457 FS-RAG 0.216 0.3432 0.3507 FL-RAG 0.214 0.3268 0.3264 DRAGIN 0.288 0.4164 0.4226 5.4 Query Formulation This subsection delves into the impact of query formulation techniques on the performance of dynamic RAG frameworks. We standardize the timing of trigger retrieval to RIND, which is proven to be the most effective timing based on the experimental results detailed in section 5.3. We focus on the comparison between DRAGIN and three existing frameworks: FLARE, FL-RAG, and FS-RAG. The query formulation method of FLARE is the last generated sentence excludes low-probability tokens. FL-RAG selects the closest 25 tokens to this position as the query. FS-RAG selects the sentence before this position as the query. We also evaluate the effectiveness of using the full context as the query. As shown in Table 6, DRAGIN’s query formulation method performs best among all the dynamic RAG frameworks. FLARE emerged as the second most effective query formulation method, outperforming the FS-RAG and FL-RAG methods. Moreover, leveraging the entire context as a query did not yield optimal results, indicating potential redundancy within the full context. This finding 12998 Table 7: Comparison of performance between BM25 and SGPT using the LLaMA2-13B-Chat model. The method with better performance is highlighted. retriever EM F1 Prec. 2WMQA BM25 0.304 0.393 0.395 SGPT 0.273 0.356 0.357 HQA BM25 0.314 0.424 0.437 SGPT 0.264 0.371 0.388 IIRC BM25 0.185 0.222 0.235 SGPT 0.169 0.201 0.207 validates the effectiveness of our proposed QFS method, which aims to select tokens from the context that can represent the real-time information needs of the LLM as the query. 5.5 Impact of Retriever In the dynamic RAG paradigm, the choice of the retriever plays an important role in retrieving relevant passages from a corpus based on a given query. In the field of information retrieval, the two popular types of retrieval methods are lexical matching (Zhai, 2008; Robertson et al., 2009) and dense retrieval (Su et al., 2023b; Gao and Callan, 2021; Su et al., 2023a; Muennighoff, 2022; Li et al., 2023b; Ma et al., 2023; Ye et al., 2024; Su et al., 2023c; Li et al., 2023a; Chen et al., 2023, 2022; Li et al., 2023c; Fang et al., 2024). 
Among lexical matching techniques, BM25 stands out for its widespread adoption and effectiveness (Robertson et al., 2009). Conversely, among existing dense retrieval methods, none has achieved the widespread popularity of BM25. We have opted for SGPT, which has recently attained state-of-the-art performance across a variety of datasets. (Muennighoff, 2022). In our experimental analysis presented in Table 7, we found that BM25 consistently surpasses SGPT in performance across various datasets within the dynamic RAG framework, despite SGPT’s generally better performance in numerous information retrieval tasks. This outcome aligns with the findings of prior research, such as the study by (Ram et al., 2023), which underscored BM25’s effectiveness in RAG tasks. These results indicate that despite progress in dense retrieval technologies like SGPT, the simpler, lexicon-based BM25 algorithm is still a strong baseline for enhancing the performance of LLM in RAG tasks. 6 Conclusions and Future Works In this work, we propose DRAGIN, a dynamic RAG framework tailored to address the real-time information needs of LLMs during text generation. By integrating RIND for timely retrieval activation and QFS for precise query formulation, DRAGIN significantly outperforms existing dynamic RAG methods across various knowledge-intensive benchmarks. 7 Limitations We acknowledge the limitations of this paper. One of the primary limitations is the reliance on the selfattention mechanism of Transformer-based LLMs for both Real-time Information Needs Detection (RIND) and Query Formulation based on Selfattention (QFS). While self-attention scores are accessible for all open-source LLMs, it’s important to note that our method is not applicable to certain APIs that do not provide access to the self-attention scores. Thus, our future work aims to develop more methods to overcome this constraint. 8 Ethics Statement In conducting this research, we have prioritized ethical considerations at every stage to ensure the responsible development and application of AI technologies. Our research does not rely on personally identifiable information or require manually annotated datasets. We firmly believe in the principles of open research and the scientific value of reproducibility. To this end, we have made all models, data, and code associated with our paper publicly available on GitHub. This transparency not only facilitates the verification of our findings by the community but also encourages the application of our methods in other contexts. Acknowledgments This work is supported by Quan Cheng Laboratory (Grant No. QCLZD202301). References Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, et al. 2022. Improving language models by retrieving from trillions of tokens. In International conference on machine learning, pages 2206–2240. PMLR. 12999 Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Jia Chen, Haitao Li, Weihang Su, Qingyao Ai, and Yiqun Liu. 2023. Thuir at wsdm cup 2023 task 1: Unbiased learning to rank. arXiv preprint arXiv:2304.12650. Xuesong Chen, Ziyi Ye, Xiaohui Xie, Yiqun Liu, Xiaorong Gao, Weihang Su, Shuqi Zhu, Yike Sun, Min Zhang, and Shaoping Ma. 2022. 
Web search via an efficient and effective brain-machine interface. In Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, pages 1569– 1572. Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. 2023. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. See https://vicuna. lmsys. org (accessed 14 April 2023). Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311. Yan Fang, Jingtao Zhan, Qingyao Ai, Jiaxin Mao, Weihang Su, Jia Chen, and Yiqun Liu. 2024. Scaling laws for dense retrieval. arXiv preprint arXiv:2403.18684. James Ferguson, Matt Gardner, Hannaneh Hajishirzi, Tushar Khot, and Pradeep Dasigi. 2020. Iirc: A dataset of incomplete information reading comprehension questions. arXiv preprint arXiv:2011.07127. Luyu Gao and Jamie Callan. 2021. Condenser: a pre-training architecture for dense retrieval. arXiv preprint arXiv:2104.08253. Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. 2021. Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies. Transactions of the Association for Computational Linguistics, 9:346– 361. Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Mingwei Chang. 2020. Retrieval augmented language model pre-training. In International conference on machine learning, pages 3929–3938. PMLR. Xanh Ho, Anh-Khoa Duong Nguyen, Saku Sugawara, and Akiko Aizawa. 2020. Constructing a multi-hop qa dataset for comprehensive evaluation of reasoning steps. arXiv preprint arXiv:2011.01060. Gautier Izacard and Edouard Grave. 2020. Leveraging passage retrieval with generative models for open domain question answering. arXiv preprint arXiv:2007.01282. Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1–38. Zhengbao Jiang, Luyu Gao, Jun Araki, Haibo Ding, Zhiruo Wang, Jamie Callan, and Graham Neubig. 2022. Retrieval as attention: End-to-end learning of retrieval and reading within a single transformer. arXiv preprint arXiv:2212.02027. Zhengbao Jiang, Frank F Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Active retrieval augmented generation. arXiv preprint arXiv:2305.06983. Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2019. Generalization through memorization: Nearest neighbor language models. arXiv preprint arXiv:1911.00172. Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems, 33:9459–9474. Haitao Li, Jia Chen, Weihang Su, Qingyao Ai, and Yiqun Liu. 2023a. Towards better web search performance: Pre-training, fine-tuning and learning to rank. arXiv preprint arXiv:2303.04710. Haitao Li, Weihang Su, Changyue Wang, Yueyue Wu, Qingyao Ai, and Yiqun Liu. 2023b. Thuir@ coliee 2023: Incorporating structural knowledge into pre-trained language models for legal case retrieval. 
arXiv preprint arXiv:2305.06812. Haitao Li, Changyue Wang, Weihang Su, Yueyue Wu, Qingyao Ai, and Yiqun Liu. 2023c. Thuir@ coliee 2023: More parameters and legal knowledge for legal case entailment. arXiv preprint arXiv:2305.06817. Junyi Li, Tianyi Tang, Wayne Xin Zhao, Jingyuan Wang, Jian-Yun Nie, and Ji-Rong Wen. 2023d. The web can be your oyster for improving large language models. arXiv preprint arXiv:2305.10998. Tianyu Liu, Yizhe Zhang, Chris Brockett, Yi Mao, Zhifang Sui, Weizhu Chen, and Bill Dolan. 2021. A token-level reference-free hallucination detection benchmark for free-form text generation. arXiv preprint arXiv:2104.08704. Yixiao Ma, Yueyue Wu, Weihang Su, Qingyao Ai, and Yiqun Liu. 2023. Caseencoder: A knowledgeenhanced pre-trained model for legal case encoding. arXiv preprint arXiv:2305.05393. 13000 Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. arXiv preprint arXiv:2005.00661. Niklas Muennighoff. 2022. Sgpt: Gpt sentence embeddings for semantic search. arXiv preprint arXiv:2202.08904. Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham. 2023. In-context retrieval-augmented language models. arXiv preprint arXiv:2302.00083. Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends® in Information Retrieval, 3(4):333–389. Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ili´c, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. 2022. Bloom: A 176bparameter open-access multilingual language model. arXiv preprint arXiv:2211.05100. Hemlata Shelar, Gagandeep Kaur, Neha Heda, and Poorva Agrawal. 2020. Named entity recognition approaches and their comparison for custom ner model. Science & Technology Libraries, 39(3):324–337. Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. 2023. Replug: Retrievalaugmented black-box language models. arXiv preprint arXiv:2301.12652. Weihang Su, Qingyao Ai, Xiangsheng Li, Jia Chen, Yiqun Liu, Xiaolong Wu, and Shengluan Hou. 2023a. Wikiformer: Pre-training with structured information of wikipedia for ad-hoc retrieval. arXiv preprint arXiv:2312.10661. Weihang Su, Qingyao Ai, Yueyue Wu, Yixiao Ma, Haitao Li, and Yiqun Liu. 2023b. Caseformer: Pre-training for legal case retrieval. arXiv preprint arXiv:2311.00333. Weihang Su, Xiangsheng Li, Yiqun Liu, Min Zhang, and Shaoping Ma. 2023c. Thuir2 at ntcir-16 session search (ss) task. arXiv preprint arXiv:2307.00250. Weihang Su, Changyue Wang, Qingyao Ai, Yiran Hu, Zhijing Wu, Yujia Zhou, and Yiqun Liu. 2024. Unsupervised real-time hallucination detection based on the internal states of large language models. arXiv preprint arXiv:2403.06448. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023a. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2022. 
Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions. arXiv preprint arXiv:2212.10509. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-thought prompting elicits reasoning in large language models. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600. Ziyi Ye, Xiaohui Xie, Qingyao Ai, Yiqun Liu, Zhihong Wang, Weihang Su, and Min Zhang. 2024. Relevance feedback with brain signals. ACM Transactions on Information Systems, 42(4):1–37. ChengXiang Zhai. 2008. Statistical language models for information retrieval. Synthesis lectures on human language technologies, 1(1):1–141. Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068. Chunting Zhou, Graham Neubig, Jiatao Gu, Mona Diab, Paco Guzman, Luke Zettlemoyer, and Marjan Ghazvininejad. 2020. Detecting hallucinated content in conditional neural sequence generation. arXiv preprint arXiv:2011.02593. A Datasets and Settings Datasets, metrics, and experimental settings are summarized in Table 8. • 2WikiMultihopQA. For the question "When did the director of film Hypocrite (Film) die?", the 13001 output we aim to generate is "The film Hypocrite was directed by Miguel Morayta. Miguel Morayta died on 19 June 2013. So the answer is 19 June 2013." For 2WikiMultihopQA, we employed 6 examples enclosed in (Trivedi et al., 2022) for context learning, using BM25 as the retriever and Wikipedia articles as the retrieval corpus. While increasing the number of documents can somewhat improve performance, excessive retrieval content may cause the model to overlook previous exemplars. Therefore, we utilized a maximum document count of 3. • HotpotQA. For the question "What film directed by Brian Patrick Butler was inspired by a film directed by F.W. Murnau?", the output we aim to generate is "Brian Patrick Butler directed the film The Phantom Hour. The Phantom Hour was inspired by the films such as Nosferatu and The Cabinet of Dr. Caligari. Of these, Nosferatu was directed by F.W. Murnau. So the answer is The Phantom Hour." We utilized 8 examples enclosed in (Trivedi et al., 2022), conducted experiments with BM25 as the retriever on the Wikipedia corpus, and retrieved 3 documents for context learning. • IIRC. For the question "What is the age difference between the kicker and the quarterback for the Chargers?", the output we aim to generate is "The kicker for the Chargers is Nate Kaeding. The quarterback (QB) for the Chargers is Philip Rivers. Nate Kaeding was born in the year 1982. Philip Rivers was born in the year 1981. So the answer is 1." 
We utilized 8 examples enclosed in (Trivedi et al., 2022), conducted experiments with BM25 as the retriever on the Wikipedia corpus, and retrieved 3 documents for context learning. In particular, we excluded questions without answers, so there are a total of 954 questions in IIRC. • StrategyQA. For the question "Is it common to see frost during some college commencements?", the output we aim to generate is "College commencement ceremonies can happen in December, May, and June. December is in the winter, so there can be frost. Thus, there could be frost at some commencements. So the answer is yes." We utilized 8 examples enclosed in (Wei et al., 2023), conducted experiments with BM25 as the retriever on the Wikipedia corpus, and retrieved 3 documents for context learning. B Evaluation Details In order to match the answers obtained by the model, we included the paradigm "So the answer is" in the exemplars to encourage the model to generate in this format. Specifically, if "So the answer is" is absent from all of the model’s generations, during the evaluation phase, we append "So the answer is" to the end of the model’s output, prompting the model to generate again. Subsequently, we select the words following "So the answer is" as the final answer. C Case Study We select the following question for case study: Question The arena where the Lewiston Maineiacs played their home games can seat how many people? This is a complex question that requires identifying the Arena’s name as well as finding the seating capacity of the arena. Initial Output of the LLM The arena where the Lewiston Maineiacs played their home games is the Androscoggin Bank Colisée. The Androscoggin Bank Colisée has a seating capacity of 4,250. Therefore, the answer is 4,250. During the generation process of LLM, the RIND module identified that the LLM needs external knowledge assistance when generating the first 4,250, thus triggering the QFS module for query generation. At this moment, the information need of the LLM is to find the seating capacity of Androscoggin Bank Colisée. Our proposed QFS generates a query based on the self-attention distribution over the entire context, where the tokens selected by QFS are as follows in bold: 13002 The selected tokens for query formulation Question: The arena where the Lewiston Maineiacs played their home games can seat how many people? </s> Answer: The arena where the Lewiston Maineiacs played their home games is the Androscoggin Bank Colisée. The Androscoggin Bank Colisée has a seating capacity of | Thus, the QFS module generated the following query: The query generated by the QFS module seat Androscoggin Bank Colisée seating capacity The generated query indeed reflects the real-time information needs of LLM. This query directly led to the retrieval of the relevant paragraph from Wikipedia: The top 1 retrieved document Androscoggin Bank Colisée The Androscoggin Bank Colisée is a 4,000 capacity (3,677 seated) multi-purpose arena, in Lewiston, Maine, that opened in 1958. The Androscoggin Bank Colisée was built to ...... Thus, after including the retrieved passage to the LLM, it can then generate the correct answer: 3,677. The final revised output is as follows: Final Output The arena where the Lewiston Maineiacs played their home games is the Androscoggin Bank Colisée. The Androscoggin Bank Colisée has a seating capacity of 3,677. Therefore, the answer is 3,677. 
In contrast, the FLARE framework decides to trigger retrieval for the first sentence because the probability of the tokens ‘Lewiston’, ‘home’, ‘Androscoggin’, and ‘Bank’ are all below the FLARE’s threshold: Initial output of the LLM The arena where the Lewiston Maineiacs played their home games is the Androscoggin Bank Colisée. After that, FLARE generates the following query that removes all the tokens below the threshold from the first sentence: The generated query of FLARE The arena where the Maineiacs played their games is the Colisée. Unfortunately, the generated query omitted the correct information from the most recent sentence, leading to an irrelevant passage retrieval for the LLM’s real-time information need. Thus, the LLM did not answer this question correctly. D Error Analysis When the RIND module triggers retrieval, the DRAGIN framework adds multiple passages to the input of the LLM, thereby extending the context length. As a result, LLMs that typically perform poorly with long contexts may become confused during the generation process and mix up the information from these passages. We select the following case: Question What is the name of the fight song of the university whose main campus is in Lawrence, Kansas, and whose branch campuses are in the Kansas City metropolitan area? Our framework first detects that the LLM needs to determine which university’s main campus is located in Lawrence, Kansas, and thus retrieves three passages: 13003 Retrieved Passages P1: The Kansas City metropolitan area’s largest private employer is Cerner Corporation..... P2: University of Kansas The University of Kansas, also referred to as KU, is a public research university in the U.S. state of Kansas ...... P3: The population of the Kansas City MSA grew from 1,842,965 to an estimated 2,037,357..... The second of the three passages is relevant. However, the LLaMA-7B model, which performs poorly in long context scenarios, failed to generate the correct answer based on this retrieved external knowledge, despite the passage indeed addressing the LLM’s real-time information needs. Therefore, future work should explore how to enable large models to effectively understand complex information in extended contexts. E Hyperparameters The hyperparameters of DRAGIN on different datasets are listed in Table 9. 13004 Table 8: Dataset statistics and experimental settings. Dataset statistics 2WikiMultihopQA HotpotQA IIRC StrategyQA Task multi-hop QA multi-hop QA reading comprehension QA commonsense QA #Examples 1000 1000 954 1000 Evaluation settings 2WikiMultihopQA HotpotQA IIRC StrategyQA Metrics EM, F1, Prec., Rec. EM, F1, Prec., Rec. EM, F1, Prec., Rec. Accuracy Retrieval settings 2WikiMultihopQA HotpotQA IIRC StrategyQA Corpus Wikipedia Retriever BM25 Top-k 3 Prompt format 2WikiMultihopQA HotpotQA IIRC StrategyQA #Examplars 6 8 8 8 Table 9: Hyperparameters of DRAGIN on different datasets. LLM Hyperparameters 2WikiMultihopQA HotpotQA IIRC StrategyQA Llama2-13b-chat generate length 64 100 128 100 θ 0.6 1.2 1.25 1.0 top n tokens 25 35 25 25 Llama2-7b-chat generate length 64 100 128 100 θ 1.0 1.3 1.3 0.75 top n tokens 25 35 35 35 Vicuna-13b-v1.5 generate length 64 100 128 100 θ 1.2 1.2 1.3 1.5 top n tokens 25 35 35 25 13005 F Prompt Template Each dataset has a prompt for direct generation and a prompt for generation with relevant documents, as shown below. Prompt 1: exemplars of 2WMQA Direct Question: When did the director of film Hypocrite (Film) die? Answer: The film Hypocrite was directed by Miguel Morayta. 
Miguel Morayta died on 19 June 2013. So the answer is 19 June 2013. Question: Are both Kurram Garhi and Trojkrsti located in the same country? Answer: Kurram Garhi is located in the country of Pakistan. Trojkrsti is located in the country of Republic of Macedonia. Thus, they are not in the same country. So the answer is no. Question: Do director of film Coolie No. 1 (1995 Film) and director of film The Sensational Trial have the same nationality? Answer: Coolie No. 1 (1995 film) was directed by David Dhawan. The Sensational Trial was directed by Karl Freund. David Dhawan’s nationality is India. Karl Freund’s nationality is Germany. Thus, they do not have the same nationality. So the answer is no. Question: Who is Boraqchin (Wife Of Ögedei)’s father-in-law? Answer: Boraqchin is married to Ögedei Khan. Ögedei Khan’s father is Genghis Khan. Thus, Boraqchin’s father-in-law is Genghis Khan. So the answer is Genghis Khan. Question: Who was born first out of Martin Hodge and Ivania Martinich? Answer: Martin Hodge was born on 4 February 1959. Ivania Martinich was born on 25 July 1995. Thus, Martin Hodge was born first. So the answer is Martin Hodge. Question: When did the director of film Laughter In Hell die? Answer: The film Laughter In Hell was directed by Edward L. Cahn. Edward L. Cahn died on August 25, 1963. So the answer is August 25, 1963. Question: Who is the mother of the director of film Polish-Russian War (Film)? Answer: 13006 Prompt 2: exemplars of 2WMQA RAG Question: When did the director of film Hypocrite (Film) die? Answer: The film Hypocrite was directed by Miguel Morayta. Miguel Morayta died on 19 June 2013. So the answer is 19 June 2013. Question: Are both Kurram Garhi and Trojkrsti located in the same country? Answer: Kurram Garhi is located in the country of Pakistan. Trojkrsti is located in the country of Republic of Macedonia. Thus, they are not in the same country. So the answer is no. Question: Do director of film Coolie No. 1 (1995 Film) and director of film The Sensational Trial have the same nationality? Answer: Coolie No. 1 (1995 film) was directed by David Dhawan. The Sensational Trial was directed by Karl Freund. David Dhawan’s nationality is India. Karl Freund’s nationality is Germany. Thus, they do not have the same nationality. So the answer is no. Question: Who is Boraqchin (Wife Of Ögedei)’s father-in-law? Answer: Boraqchin is married to Ögedei Khan. Ögedei Khan’s father is Genghis Khan. Thus, Boraqchin’s father-in-law is Genghis Khan. So the answer is Genghis Khan. Question: Who was born first out of Martin Hodge and Ivania Martinich? Answer: Martin Hodge was born on 4 February 1959. Ivania Martinich was born on 25 July 1995. Thus, Martin Hodge was born first. So the answer is Martin Hodge. Question: When did the director of film Laughter In Hell die? Answer: The film Laughter In Hell was directed by Edward L. Cahn. Edward L. Cahn died on August 25, 1963. So the answer is August 25, 1963. Context: [1] film was shot between May 6 and 18 June 2008 in locations of Warsaw, Wejherowo, Sopot and Gdynia outskirts. The film premiered on May 22, 2009. The budget of Polish-Russian War amounted to approx. 4 million zlotys. The creators of the music for the film are Jan Komar, Filip Kuncewicz, Liroy, Mateusz Łapot and Jarosław Karczmarczyk. 
The soundtrack also included the following songs: Polish-Russian War (film) Polish-Russian War (Wojna polsko-ruska) is a 2009 Polish film directed by Xawery ˙Zuławski based on the novel Polish-Russian War under the white-red flag by Dorota Masłowska. [2] Pharaoh (film) Pharaoh () is a 1966 Polish film directed by Jerzy Kawalerowicz and adapted from the eponymous novel by the Polish writer Bolesław Prus. In 1967 it was nominated for an Academy Award for Best Foreign Language Film. It was also entered into the 1966 Cannes Film Festival. Jerzy Kawalerowicz, who had previously directed such films as "Cellulose" (1953), "Under the Phrygian Star" (1954), "The Shade" (1956), "The Real End of the Great War" (1957), "Night Train" (1959) and "Mother Joan of the Angels" (1961) [3] Polish-Russian War (film) Polish-Russian War (Wojna polsko-ruska) is a 2009 Polish film directed by Xawery ˙Zuławski based on the novel Polish-Russian War under the white-red flag by Dorota Masłowska. The film’s events take place over several days and they are set in the present time in a large Polish city. The main character is a bandit, a Polish dres (a Polish chav) called "Strong" (Borys Szyc), who does not work or study, and who frequently gets into conflict with the law and is in love with Magda (Roma G ˛asiorowska). The relationship is not going well. Answer in the same format as before. Question: Who is the mother of the director of film Polish-Russian War (Film)? 13007 Prompt 3: exemplars of HotpotQA Direct Question: Jeremy Theobald and Christopher Nolan share what profession? Answer: Jeremy Theobald is an actor and producer. Christopher Nolan is a director, producer, and screenwriter. Therefore, they both share the profession of being a producer. So the answer is producer. Question: What film directed by Brian Patrick Butler was inspired by a film directed by F.W. Murnau? Answer: Brian Patrick Butler directed the film The Phantom Hour. The Phantom Hour was inspired by the films such as Nosferatu and The Cabinet of Dr. Caligari. Of these Nosferatu was directed by F.W. Murnau. So the answer is The Phantom Hour. Question: How many episodes were in the South Korean television series in which Ryu Hye-young played Bo-ra? Answer: The South Korean television series in which Ryu Hye-young played Bo-ra is Reply 1988. The number of episodes Reply 1988 has is 20. So the answer is 20. Question: Were Lonny and Allure both founded in the 1990s? Answer: Lonny (magazine) was founded in 2009. Allure (magazine) was founded in 1991. Thus, of the two, only Allure was founded in 1990s. So the answer is no. Question: Vertical Limit stars which actor who also played astronaut Alan Shepard in "The Right Stuff"? Answer: The actor who played astronaut Alan Shepard in "The Right Stuff" is Scott Glenn. The movie Vertical Limit also starred Scott Glenn. So the answer is Scott Glenn. Question: What was the 2014 population of the city where Lake Wales Medical Center is located? Answer: Lake Wales Medical Center is located in the city of Polk County, Florida. The population of Polk County in 2014 was 15,140. So the answer is 15,140. Question: Who was born first? Jan de Bont or Raoul Walsh? Answer: Jan de Bont was born on 22 October 1943. Raoul Walsh was born on March 11, 1887. Thus, Raoul Walsh was born the first. So the answer is Raoul Walsh. Question: In what country was Lost Gravity manufactured? Answer: The Lost Gravity (roller coaster) was manufactured by Mack Rides. Mack Rides is a German company. So the answer is Germany. 
Answer the following question by reasoning step-by-step, following the example above. Question: Were Scott Derrickson and Ed Wood of the same nationality? 13008 Prompt 4: exemplars of HotpotQA RAG Question: Jeremy Theobald and Christopher Nolan share what profession? Answer: Jeremy Theobald is an actor and producer. Christopher Nolan is a director, producer, and screenwriter. Therefore, they both share the profession of being a producer. So the answer is producer. Question: What film directed by Brian Patrick Butler was inspired by a film directed by F.W. Murnau? Answer: Brian Patrick Butler directed the film The Phantom Hour. The Phantom Hour was inspired by the films such as Nosferatu and The Cabinet of Dr. Caligari. Of these Nosferatu was directed by F.W. Murnau. So the answer is The Phantom Hour. Question: How many episodes were in the South Korean television series in which Ryu Hye-young played Bo-ra? Answer: The South Korean television series in which Ryu Hye-young played Bo-ra is Reply 1988. The number of episodes Reply 1988 has is 20. So the answer is 20. Question: Were Lonny and Allure both founded in the 1990s? Answer: Lonny (magazine) was founded in 2009. Allure (magazine) was founded in 1991. Thus, of the two, only Allure was founded in 1990s. So the answer is no. Question: Vertical Limit stars which actor who also played astronaut Alan Shepard in "The Right Stuff"? Answer: The actor who played astronaut Alan Shepard in "The Right Stuff" is Scott Glenn. The movie Vertical Limit also starred Scott Glenn. So the answer is Scott Glenn. Question: What was the 2014 population of the city where Lake Wales Medical Center is located? Answer: Lake Wales Medical Center is located in the city of Polk County, Florida. The population of Polk County in 2014 was 15,140. So the answer is 15,140. Question: Who was born first? Jan de Bont or Raoul Walsh? Answer: Jan de Bont was born on 22 October 1943. Raoul Walsh was born on March 11, 1887. Thus, Raoul Walsh was born the first. So the answer is Raoul Walsh. Question: In what country was Lost Gravity manufactured? Answer: The Lost Gravity (roller coaster) was manufactured by Mack Rides. Mack Rides is a German company. So the answer is Germany. Context: [1] Scott Derrickson Scott Derrickson (born July 16, 1966) is an American director, screenwriter and producer. He lives in Los Angeles, California. Derrickson is best known for directing numerous horror films, such as "The Exorcism of Emily Rose" (2005), "Sinister" (2012), and "Deliver Us From Evil" (2014), as well as the Marvel Cinematic Universe superhero film "Doctor Strange" (2016). Derrickson grew up in Denver, Colorado. [2] Scott Derrickson Scott Derrickson (born July 16, 1966) is an American director, screenwriter and producer. He lives in Los Angeles, California. Derrickson is best known for directing numerous horror films, such as "The Exorcism of Emily Rose" (2005), "Sinister" (2012), and "Deliver Us From Evil" (2014), as well as the Marvel Cinematic Universe superhero film "Doctor Strange" (2016). Derrickson grew up in Denver, Colorado. He graduated from Biola University with a B.A. in Humanities, with an emphasis on literature and philosophy, and a B.A. in communications, with an emphasis on film, and a minor in theological studies. [3] The film had its world premiere at the 2013 Toronto International Film Festival. It was released in 2014. Derrickson directed his own script, "Deliver Us from Evil", for producer Jerry Bruckheimer and Sony Screen Gems. 
Eric Bana played the lead role, and the film was released wide in theaters on July 2, 2014. In 2014, Derrickson wrote a film version of "The Outer Limits" with Cargill. Other upcoming Derrickson projects include an adaptation of Stephen King’s "The Breathing Method" with Jason Blum producing, and an adaptation of the popular video game "" for CBS Films. Answer in the same format as before. Answer the following question by reasoning step-by-step, following the example above. Question: Were Scott Derrickson and Ed Wood of the same nationality? 13009 Prompt 5: exemplars of StrategyQA Direct Question: Do hamsters provide food for any animals? Answer: Hamsters are prey animals. Prey are food for predators. Thus, hamsters provide food for some animals. So the answer is yes. Question: Could Brooke Shields succeed at University of Pennsylvania? Answer: Brooke Shields went to Princeton University. Princeton University is about as academically rigorous as the University of Pennsylvania. Thus, Brooke Shields could also succeed at the University of Pennsylvania. So the answer is yes. Question: Hydrogen’s atomic number squared exceeds number of Spice Girls? Answer: Hydrogen has an atomic number of 1. 1 squared is 1. There are 5 Spice Girls. Thus, Hydrogen’s atomic number squared is less than 5. So the answer is no. Question: Is it common to see frost during some college commencements? Answer: College commencement ceremonies can happen in December, May, and June. December is in the winter, so there can be frost. Thus, there could be frost at some commencements. So the answer is yes. Question: Could a llama birth twice during War in Vietnam (1945-46)? Answer: The War in Vietnam was 6 months. The gestation period for a llama is 11 months, which is more than 6 months. Thus, a llama could not give birth twice during the War in Vietnam. So the answer is no. Question: Would a pear sink in water? Answer: The density of a pear is about 0.6g/cm3, which is less than water. Objects less dense than water float. Thus, a pear would float. So the answer is no. Following the examples above, answer the question by reasoning step-by-step. Question: Are more people today related to Genghis Khan than Julius Caesar? 13010 Prompt 6: exemplars of StrageQA RAG Question: Do hamsters provide food for any animals? Answer: Hamsters are prey animals. Prey are food for predators. Thus, hamsters provide food for some animals. So the answer is yes. Question: Could Brooke Shields succeed at University of Pennsylvania? Answer: Brooke Shields went to Princeton University. Princeton University is about as academically rigorous as the University of Pennsylvania. Thus, Brooke Shields could also succeed at the University of Pennsylvania. So the answer is yes. Question: Hydrogen’s atomic number squared exceeds number of Spice Girls? Answer: Hydrogen has an atomic number of 1. 1 squared is 1. There are 5 Spice Girls. Thus, Hydrogen’s atomic number squared is less than 5. So the answer is no. Question: Is it common to see frost during some college commencements? Answer: College commencement ceremonies can happen in December, May, and June. December is in the winter, so there can be frost. Thus, there could be frost at some commencements. So the answer is yes. Question: Could a llama birth twice during War in Vietnam (1945-46)? Answer: The War in Vietnam was 6 months. The gestation period for a llama is 11 months, which is more than 6 months. Thus, a llama could not give birth twice during the War in Vietnam. So the answer is no. 
Question: Would a pear sink in water? Answer: The density of a pear is about 0.6g/cm3, which is less than water. Objects less dense than water float. Thus, a pear would float. So the answer is no. Context: [1] Borte Chino (Grey Wolf) and his wife was Gua Maral (White Doe) Temujin (Genghis Khan) - Founder and Khagan of the Mongol Empire (1206–1227) Family tree of Genghis Khan The family tree of Genghis Khan is listed below. This family tree only lists prominent members of the Borjigin family and does not reach the present. Genghis Khan appears in the middle of the tree, and Kublai Khan appears at the bottom of the tree. The Borjigin family was the royal family of the Mongol Empire, dating back to the 13th and 14th centuries. [2] Genghis Khan (2004 TV series) Genghis Khan is a Chinese television series based on the life of Genghis Khan, the founder of the Mongol Empire in the 13th century. Ba Sen, who portrayed the eponymous character in the series, is a descendant of Genghis Khan’s second son, Chagatai. The series was first broadcast on CCTV in China in 2004, on KBS in South Korea in 2005, and the Turkish state channel TRT 1. The 30 episodes long television series depicts the major events in the life of Temüjin, the founder of the Mongol Empire in the 13th century. [3] He allegedly planned to assassinate Genghis Khan. Although Toghrul was allegedly saved on multiple occasions by Genghis Khan, he gave in to his son and became uncooperative with Genghis Khan. Genghis Khan learned of Senggum’s intentions and eventually defeated him and his loyalists. One of the later ruptures between Genghis Khan and Toghrul was Toghrul’s refusal to give his daughter in marriage to Jochi, Genghis Khan’s first son. This was disrespectful in Mongolian culture and led to a war. Answer in the same format as before. Following the examples above, answer the question by reasoning step-by-step. Question: Are more people today related to Genghis Khan than Julius Caesar? 13011 Prompt 7: exemplars of IIRC Direct Question: What is the age difference between the kicker and the quarterback for the Chargers? Answer: The kicker for the Chargers is Nate Kaeding. The quarterback (QB) for the Chargers is Philip Rivers. Nate Kaeding was born in the year 1982. Philip Rivers was born in the year 1981. Thus, the age difference between them is of 1 year. So the answer is 1. Question: How many years was the ship that took the battalion from New South Wales to Ceylon in service? Answer: The ship that took the battalion from New South Wales to Ceylon is General Hewitt. General Hewitt was launched in Calcutta in 1811. General Hewitt was sold for a hulk or to be broken up in 1864. So she served for a total of 1864 - 1811 = 53 years. So the answer is 53. Question: What year was the theatre that held the 2016 NFL Draft built? Answer: The theatre that held the 2016 NFL Draft is Auditorium Theatre. The Auditorium Theatre was built in 1889. So the answer is 1889. Question: How long had Milan been established by the year that Nava returned there as a reserve in the first team’s defense? Answer: Nava returned to Milan as a reserve in the first team’s defense in the year 1990. Milan had been established in the year 1899. Thus, Milan had been established for 1990 - 1899 = 91 years when Milan returned to Milan as a reserve in the first team’s defense. So the answer is 91. Question: When was the town Scott was born in founded? Answer: Scott was born in the town of Cooksville, Illinois. Cooksville was founded in the year 1882. So the answer is 1882. 
Question: In what country did Wright leave the French privateers? Answer: Wright left the French privateers in Bluefield’s river. Bluefields is the capital of the South Caribbean Autonomous Region (RAAS) in the country of Nicaragua. So the answer is Nicaragua. Question: Who plays the A-Team character that Dr. Hibbert fashioned his hair after? Answer: Dr. Hibbert fashioned his hair after Mr. T from The A-Team. Mr T.’s birthname is Lawrence Tureaud. So the answer is Lawrence Tureaud. Question: How many people attended the conference held near Berlin in January 1942? Answer: The conference held near Berlin in January 1942 is Wannsee Conference. Wannsee Conference was attended by 15 people. So the answer is 15. Question: In what country did Bain attend doctoral seminars of Wlad Godzich? Answer: 13012 Prompt 8: exemplars of IIRC RAG Question: What is the age difference between the kicker and the quarterback for the Chargers? Answer: The kicker for the Chargers is Nate Kaeding. The quarterback (QB) for the Chargers is Philip Rivers. Nate Kaeding was born in the year 1982. Philip Rivers was born in the year 1981. Thus, the age difference between them is of 1 year. So the answer is 1. Question: How many years was the ship that took the battalion from New South Wales to Ceylon in service? Answer: The ship that took the battalion from New South Wales to Ceylon is General Hewitt. General Hewitt was launched in Calcutta in 1811. General Hewitt was sold for a hulk or to be broken up in 1864. So she served for a total of 1864 - 1811 = 53 years. So the answer is 53. Question: What year was the theatre that held the 2016 NFL Draft built? Answer: The theatre that held the 2016 NFL Draft is Auditorium Theatre. The Auditorium Theatre was built in 1889. So the answer is 1889. Question: How long had Milan been established by the year that Nava returned there as a reserve in the first team’s defense? Answer: Nava returned to Milan as a reserve in the first team’s defense in the year 1990. Milan had been established in the year 1899. Thus, Milan had been established for 1990 - 1899 = 91 years when Milan returned to Milan as a reserve in the first team’s defense. So the answer is 91. Question: When was the town Scott was born in founded? Answer: Scott was born in the town of Cooksville, Illinois. Cooksville was founded in the year 1882. So the answer is 1882. Question: In what country did Wright leave the French privateers? Answer: Wright left the French privateers in Bluefield’s river. Bluefields is the capital of the South Caribbean Autonomous Region (RAAS) in the country of Nicaragua. So the answer is Nicaragua. Question: Who plays the A-Team character that Dr. Hibbert fashioned his hair after? Answer: Dr. Hibbert fashioned his hair after Mr. T from The A-Team. Mr T.’s birthname is Lawrence Tureaud. So the answer is Lawrence Tureaud. Question: How many people attended the conference held near Berlin in January 1942? Answer: The conference held near Berlin in January 1942 is Wannsee Conference. Wannsee Conference was attended by 15 people. So the answer is 15. Context: [1] Wlad Godzich Wlad Godzich (born May 13, 1945 in Germany, raised in France) is a literary critic, literary theorist, translator, and scholar. He is attributed with influencing the conceptualization of modern literary critical theory. He currently serves as Professor of general and comparative literature, and critical studies at the University of California, Santa Cruz. 
Godzich has published and translated several books, edited eight collections of essays, and authored over a hundred scholarly articles, lectures, and papers. In 2000, Godzich joined the University of California, Santa Cruz as dean of Humanities. [2] The patterns of his thought emerge from his interest in the relationship between language and literacy-the latter conceived as "a determinate set of relations that we have to language." Godzich reinvigorates the semiological project proposed by Saussure but forsaken by his heirs: that of exploring the social functioning of language in its historical and rhetorical actualizations. Wlad Godzich Wlad Godzich (born May 13, 1945 in Germany, raised in France) is a literary critic, literary theorist, translator, and scholar. He is attributed with influencing the conceptualization of modern literary critical theory. [3] He sits on the editorial board of multiple American, European and Asian journals, both print and electronic. His research grants have been primarily from US, Canadian, Swedish, Swiss and private agencies. Through his work at the University of Minnesota Press, Godzich brought important works of critical theory into English translation. His essays during this period were well received by critics as they were among the first to link deconstruction, cultural criticism, and third-world literatures through linguistics: they can now be seen as tesserae composing a theoretical mosaic of remarkable scope. Answer in the same format as before. Question: In what country did Bain attend doctoral seminars of Wlad Godzich? Answer: 13013
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13014–13033, August 11-16, 2024 ©2024 Association for Computational Linguistics
Living in the Moment: Can Large Language Models Grasp Co-Temporal Reasoning?
Zhaochen Su1*, Juntao Li1†, Jun Zhang1, Tong Zhu1, Xiaoye Qu2, Pan Zhou3, Bowen Yan4, Yu Cheng5, Min Zhang1
1Institute of Computer Science and Technology, Soochow University, China; 2Shanghai AI Laboratory; 3Huazhong University of Science and Technology; 4Tsinghua University; 5The Chinese University of Hong Kong
{suzhaochen0110,junzhang20030309}@gmail.com; {ljt,minzhang}@suda.edu.cn; [email protected] [email protected]; [email protected]; [email protected]
*Work was done during an internship at Shanghai AI Lab. †Juntao Li is the corresponding author.
Abstract
Temporal reasoning is fundamental for large language models (LLMs) to comprehend the world. Current temporal reasoning datasets are limited to questions about single or isolated events, falling short of mirroring realistic temporal characteristics, which involve the concurrent nature of events and intricate temporal interconnections. In this paper, we introduce COTEMPQA, a comprehensive co-temporal Question Answering (QA) benchmark containing four co-temporal scenarios (Equal, Overlap, During, Mix) with 4,748 samples for evaluating the co-temporal comprehension and reasoning abilities of LLMs. Our extensive experiments reveal a significant gap between the performance of current LLMs and human-level reasoning on COTEMPQA tasks. Even when enhanced with Chain of Thought (CoT) methodologies, models consistently struggle with our task. In a preliminary exploration, we discover that mathematical reasoning plays a significant role in handling co-temporal events, and we propose a strategy to boost LLMs' co-temporal reasoning from a mathematical perspective. We hope that our COTEMPQA datasets will encourage further advancements in improving the co-temporal reasoning capabilities of LLMs. Our code is available at https://github.com/zhaochen0110/Cotempqa.
1 Introduction
Recent advanced Large Language Models (LLMs) like GPT-4 (OpenAI, 2023) have shown impressive capabilities in understanding, generating, and reasoning about natural language (Wei et al., 2022a; Zhao et al., 2023; Chang et al., 2023). Despite their advancements, these models fall short in mastering temporal reasoning (Chu et al., 2023), which is fundamental for humans to comprehend the world and distinguish daily events (Chen et al., 2021; Su et al., 2023; Tan et al., 2023), requiring a complex integration of capabilities, involving implicit arithmetic calculations (Zhu et al., 2023a), understanding logical implications (Wei et al., 2022c), and leveraging extensive world knowledge (Chu et al., 2023).
Figure 1: Understanding concurrent events is crucial for understanding how individuals navigate and influence diverse aspects of real-world scenarios. For instance, when Elon Musk was the chair of Tesla, he also worked for OpenAI. Concurrently, Sam Altman was working for OpenAI, too. Their simultaneous experiences greatly influenced subsequent decision-making at OpenAI.
Current studies in temporal reasoning mainly focus on time-sensitive question-answering (TSQA). Chen et al.
(2021) first introduced the TIMEQA datasets, constructing time-evolving facts for a given subject and formulating questions based on a specific timestamp within the evolving facts. TEMPLAMA (Dhingra et al., 2022) extracted structured facts from the Wikidata Knowledge Base (Vrandečić and Krötzsch, 2014) for closed-book TSQA. Furthermore, TEMPREASON (Tan et al., 2023) translated explicit temporal expressions into implicit event information within questions, offering a more comprehensive evaluation framework for TSQA. Given the fact "Elon Musk held the position of Tesla's chairman from 2004 to 2018", the models are tasked with accurately interpreting and responding to time specifiers in the questions, i.e., "Which position did Elon Musk hold in 2005?" in TIMEQA (Chen et al., 2021) or "Which position did Elon Musk hold before he worked for OpenAI?" in TEMPREASON (Tan et al., 2023).
The datasets mentioned above provide a straightforward way to evaluate LLMs' capabilities in temporal reasoning. However, as LLMs evolve, there is an urgent need to evaluate their proficiency in more realistic scenarios. As shown in Figure 1, reality presents a more intricate and multifaceted picture, involving concurrent events and complex temporal interconnections over time (UzZaman et al., 2012). Current datasets mainly question single or isolated events and might not fully reflect these realistic temporal characteristics. Therefore, we create the Co-Temporal QA (COTEMPQA) datasets to complement existing corpora by focusing on the concurrent nature of time and co-temporal relations in real-world situations. Experiments conducted in both closed-book and open-book QA settings across 14 large language models reveal that even the advanced model GPT-4 is well below a satisfactory co-temporal reasoning performance. Specifically, GPT-4 achieves an overall score of 54.7 and the best open-source LLM reaches 30.1, both of which fall significantly behind the human performance of 92.8. We also observe that representative reasoning enhancement strategies, e.g., Chain-of-Thought (CoT) (Wei et al., 2022b), fail to consistently improve and even reduce the temporal reasoning capabilities of LLMs in some scenarios.
TIMEQA (2021): "Which school did Sam Altman attend in 2005?" -> Stanford University; "Which position did Elon Musk hold in 2005?" -> chairman of Tesla
TEMPLAMA (2022): "In 2005, Sam Altman attended _X_." -> Stanford University; "In 2005, Elon Musk held the position of _X_." -> chairman of Tesla
TEMPREASON (2023): "Which school did Sam Altman attend before he held the position of president of Y Combinator?" -> Stanford University; "Which position did Elon Musk hold before he worked for OpenAI?" -> chairman of Tesla
COTEMPQA (ours): "When Elon Musk was working for OpenAI, where did he work for within the same time interval?" (Overlap) -> Tesla, SpaceX; "While Elon Musk was working for OpenAI, where did Sam Altman work for concurrently?" (During) -> OpenAI
Table 1: Example questions of prior TSQA datasets and our COTEMPQA datasets.
•——x——• / ◦——y——◦ : x is equal to y
•—x—• / ◦—y—◦ : x overlaps with y
•—x—• / ◦——y——◦ : x during y
Table 2: Interpretation of three co-temporal relations.
Mode     Questions  Subjects  #Facts  #Answers
Equal    436        401       11.65   1.17
Overlap  653        591       14.51   1.23
During   3,096      2,161     15.05   1.33
Mix      563        434       12.54   2.27
Total    4,748      3,587     14.45   1.41
Table 3: Statistics of our datasets. #Facts and #Answers represent the average number of facts and answers within the subject and question, respectively.
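To make the three relations in Table 2 concrete, the short sketch below labels the relation between two closed intervals. This is our own illustration rather than code from the paper's repository; the function name, the way ties between relations are broken, and the example years are assumptions.

def co_temporal_relation(x_start, x_end, y_start, y_end):
    """Label how interval x relates to interval y: 'equal', 'during', 'overlap', or None."""
    # No shared time span at all: the intervals are not co-temporal.
    if max(x_start, y_start) > min(x_end, y_end):
        return None
    if (x_start, x_end) == (y_start, y_end):
        return "equal"
    # One interval entirely contained in the other.
    if (y_start <= x_start and x_end <= y_end) or (x_start <= y_start and y_end <= x_end):
        return "during"
    # Otherwise the intervals only partially coincide.
    return "overlap"

# Illustrative spans: a 2004-2018 Tesla chairmanship vs. a 2015-2018 stint at OpenAI.
print(co_temporal_relation(2004, 2018, 2015, 2018))  # -> "during"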
Throughout our investigation of COTEMPQA, we observed that mathematical reasoning plays a crucial role in handling co-temporal events. Building on this insight, we propose a simple but effective MATH-REASONING CoT (MR-COT) strategy to boost the co-temporal reasoning capability of LLMs, achieving a remarkable 10.8 point improvement over existing baselines. However, it is important to note that there remains a non-negligible gap between the performance of our proposed MR-COT and human-level reasoning in handling complex, concurrent temporal relations. We hope our research can inspire further work on improving the co-temporal abilities of LLMs.
2 The COTEMPQA Datasets
2.1 The Taxonomy of Co-temporal Relations
Co-temporal relations are fundamental to understanding how events interconnect in time. These relationships highlight when facts or events happen simultaneously, and they can be categorized into three distinct types (Pustejovsky et al., 2003), as shown in Table 2. Each represents a distinct way in which events coincide or overlap in time. We divide these relations into four different scenarios below:
• Equal: Facts occur simultaneously, representing a strict co-temporal relationship, occurring at the same time without duration differences.
• Overlap: Facts partially coincide in time. This scenario is common in real-world settings, where facts often intersect for a part of their duration.
• During: One fact is entirely contained within the timeline of another, reflecting a more complex interaction of timelines.
• Mix: A combination of the three types above. This category is particularly challenging as it involves the complexity and variability of real-world temporal relationships, necessitating a comprehensive level of co-temporal reasoning.
2.2 Structuring Temporal Facts
We utilize the Wikidata (Vrandečić and Krötzsch, 2014) dump of September 20, 2023 as our knowledge source for extracting time-dependent facts. Following Dhingra et al. (2022) and Tan et al. (2023), we focus on nine time-sensitive entity relations (Cheng et al., 2021; Gu et al., 2022; Qu et al., 2023) and keep a maximum of 2,000 subjects for each relation type. To structure the information, we transform the knowledge triples and qualifiers into a quintuplet format (s, r, o, ts, te), where s is the subject, r is the relation, o is the object, and ts and te are the start time and end time. We group all the temporal facts by subject, denoted as S = {(s, ri, oi, tsi, tei) | i ∈ 1 ... N}, where N is the number of facts within a group. We keep the groups that contain three or more temporal facts.
2.3 Extracting Co-temporal Facts
Building on our approach to structuring time-dependent facts from Wikidata, we compare the timestamps of different facts to identify overlaps. Each fact fi and its co-temporal counterpart fj are represented as a triple, with fi = {si, ri, oi} and fj = {sj, rj, oj}. S = {si, sj}, R = {ri, rj}, and O = {oi, oj} are the sets of subjects, relations, and objects within the co-temporal fact pair (fi, fj), respectively. We categorize fact pairs into five scenarios based on the consistency or variation of (S, R, O), namely (S, R, Ō), (S, R̄, Ō), (S̄, R, O), (S̄, R, Ō), and (S̄, R̄, Ō), where an overline indicates a change in the specific set. For instance, (S, R, Ō) represents the scenario where the subjects and relations are constant while the objects differ between fi and fj.
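As a rough companion to the five scenarios above, the sketch below buckets a fact pair by which of its subject, relation, and object differ, writing an overline (macron) over the elements that change. The Fact structure, the function name, and the example facts (borrowed from the context shown later in Table 6) are our own choices, not the authors' implementation.

from typing import NamedTuple

class Fact(NamedTuple):
    subject: str
    relation: str
    obj: str
    start: int
    end: int

def change_pattern(fi: Fact, fj: Fact) -> str:
    """Return a label such as '(S̄, R, Ō)' marking which elements differ between two facts."""
    parts = []
    for name, a, b in (("S", fi.subject, fj.subject),
                       ("R", fi.relation, fj.relation),
                       ("O", fi.obj, fj.obj)):
        # Add a combining macron (overline) when the element changes between the two facts.
        parts.append(name + "\u0304" if a != b else name)
    return "(" + ", ".join(parts) + ")"

fi = Fact("Alexander Oppenheim", "employer", "National University of Singapore", 1949, 1965)
fj = Fact("Klaus Fuchs", "employer", "Atomic Energy Research Establishment", 1946, 1950)
print(change_pattern(fi, fj))  # -> "(S̄, R, Ō)": different subjects and objects, same relation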
We exclude the remaining scenarios, (S, R̄, O) and (S̄, R̄, O), since it is unrealistic for the same subject and object to be linked by different relationships concurrently, or for different subjects to be linked to the same object by different relationships concurrently. Detailed illustrations are given in Appendix B. Taking (S, R, Ō) as an example, we detail the extraction of co-temporal facts in the MINMAXTIME function (lines 3-11) of Algorithm 1. This framework identifies the complex co-temporal relations between events, allowing for a more intuitive understanding of how multiple events and states are interrelated in the temporal dimension.
Algorithm 1 Identifying Co-temporal Facts
1: Input: Set of facts F, each fact as (s, r, o, ts, te)
2: Output: Set of co-temporal facts with their minimum temporal units
3: function MINMAXTIME(fi, fj)
4:     (si, ri, oi, tsi, tei) ← fi
5:     (sj, rj, oj, tsj, tej) ← fj
6:     start ← max(tsi, tsj)
7:     end ← min(tei, tej)
8:     Tmin ← (start, end)
9:     if start ≤ end then return Tmin
10:    else return None
11:    end if
12: end function
13: R ← empty set    ▷ R is the set of co-temporal facts
14: for each fi in F do
15:     for each fj in F where fi ≠ fj do
16:         Tmin ← MINMAXTIME(fi, fj)
17:         if Tmin is not None then
18:             R ← R ∪ {(fi, fj, Tmin)}
19:         end if
20:     end for
21: end for
22: return R
2.4 QA Pairs Construction
Upon identifying co-temporal facts (fi, fj, Tmin), we construct the query Q from the condition fact fcond and the query fact fquery. fcond is selected from the intersecting facts in (fi, fj), while fquery is the other fact in the pair. To control the correlation between fcond and fquery, we manually predefined 17 types of relevant relation pairs and constructed questions for the object from these question templates, which can be found in Table 11. By predefining these pairs, we align them logically and ensure the extracted facts are contextually interconnected. Based on the temporal relations identified through Tmin, we categorize the tasks into four distinct classes: Equal, Overlap, During, and Mix. Since multiple events can happen simultaneously in real life, we aggregate all valid answers for a query Q into a single set, so that questions with several correct answers are handled unambiguously. As detailed in Table 3, the average number of answers per question is 1.42.
3 The Performance of LLMs on COTEMPQA
3.1 Experimental Setup
We investigate the co-temporal reasoning abilities of large language models within two problem settings: (1) Closed-Book QA (CBQA) is a widely recognized task format in time-sensitive QA research (Dhingra et al., 2022; Liska et al., 2022; Tan et al., 2023). In this setting, the language model is given only the question and tasked with generating the answer without relying on external natural language texts. The primary challenge here involves the retention of, and temporal reasoning over, knowledge pertinent to the question. (2) In the Open-Book QA (OBQA) setting, we provide all the relevant temporal facts within the group S = {(s, ri, oi, tsi, tei) | i ∈ 1...N} in a structured format directly in the prompt, in contrast to previous studies (Chen et al., 2021; Wei et al., 2023) that utilized Wikipedia as the knowledge base. This shifts the evaluation's emphasis towards the reasoning process itself, thereby minimizing the influence of the model's inherent factual extraction capabilities on the outcomes (Tan et al., 2023; Chu et al., 2023). Here, the language models need to provide all possible answers within the concurrent timeframe.
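For readers who prefer running code, Algorithm 1 above translates almost line for line into Python. The following is a minimal sketch of ours (not the released implementation); the example facts mirror the Alexander Oppenheim / Klaus Fuchs context used later in Table 6.

from itertools import combinations

def min_max_time(fi, fj):
    """Lines 3-11 of Algorithm 1: the shared time span of two facts, or None if they never co-occur."""
    start = max(fi[3], fj[3])   # later of the two start times
    end = min(fi[4], fj[4])     # earlier of the two end times
    return (start, end) if start <= end else None

def co_temporal_facts(facts):
    """Lines 13-22: collect every pair of facts whose time spans intersect, with the shared span Tmin."""
    results = []
    # The pseudocode loops over ordered pairs; unordered pairs suffice here because the overlap is symmetric.
    for fi, fj in combinations(facts, 2):
        t_min = min_max_time(fi, fj)
        if t_min is not None:
            results.append((fi, fj, t_min))
    return results

facts = [
    ("Alexander Oppenheim", "employer", "National University of Singapore", 1949, 1965),
    ("Alexander Oppenheim", "employer", "University of Malaya", 1949, 1965),
    ("Klaus Fuchs", "employer", "Atomic Energy Research Establishment", 1946, 1950),
]
for fi, fj, t_min in co_temporal_facts(facts):
    print(fi[0], "/", fi[2], "and", fj[0], "/", fj[2], "co-occur during", t_min)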
3.2 LLMs for Evaluation We perform comprehensive experiments on 14 representative large language models including (1) ChatGPT (Ouyang et al., 2022) ChatGPT is a chat model aligned through Supervised Finetuning (SFT) and Reinforcement Learning from Human Feedback (RLHF). GPT-4 is an upgraded version of ChatGPT with enhanced reasoning capabilities, making it the most powerful LLM. Since the model is constantly updated, we used the gpt3.5-turbo-0613 and gpt4-0613 for consistent evaluation. (2) LLaMA2 (Touvron et al., 2023) LLaMA2 is one of the most popular opensource foundation models trained on 2T tokens with efficient group query attention (Ainslie et al., 2023). (3) Code-LLaMA (Roziere et al., 2023) Code-LLaMA models is a code generation model built on LLaMA2 and further trained on 500B tokens of code. (4) WizardMath (Luo et al., 2023a) WizardMath is also built on LLaMA2 and further trained on their proposed Reinforcement Learning from Evol-Instruct Feedback (RLEIF) (Xu et al., 2023) to enhance the mathematical reasoning abilities of LLaMA2. (5) WizardCoder (Luo et al., 2023b) WizardCoder, similar to WizardMath, adapts the RLEIF method to the domain of code. The implementation details of our experiments are shown in Appendix A. 3.3 Evaluation Metrics Prior works followed the SQuAD benchmark’s evaluation protocol (Rajpurkar et al., 2016), using exact match (EM) and token-level F1 score. These metrics calculate the highest scores across all references, tending to overestimate performance in task settings involving questions with multiple possible answers. Following Zhong et al. (2022), we adopt a stricter Acc. score, where a prediction is correct only if it aligns with all the gold answers for a question. Additionally, we also evaluate our methods by answer-level F1 score (F1), which is a stricter metric compared to token-level F1 score. 3.4 Results and Analysis The main results are shown in Table 4. We report human performance to serve as an upper bound. From the results, we can observe: LLMs partially grasp co-temporal reasoning Our analysis reveals that, despite GPT-4 exhibiting the best performance among all LLMs, there is still a considerable disparity compared to human performance (54.7 vs. 92.8), indicating significant potential for further improvement in co-temporal reasoning. We also discover that models exhibit different reasoning capabilities in different co-temporal scenarios. Take GPT-4 for further illustration, in the simple co-temporal reasoning task, i.e., the Equal scenario, GPT-4 demonstrates strong performance, achieving a 92.7 score overall. However, its performance significantly declines in more complex scenarios. Specifically, in the Overlap category, GPT4’s accuracy falls to 59.4, decreasing further to 50.1 in the During category. In the most challenging category, Mix, which combines various temporal relations, GPT-4’s performance drops to 45.0. We provide a case study to explain the varying difficulties of scenarios and model performances in Appendix C. As shown in Table 7, the concurrent characteristics in the Equal scenario are relatively obvious compared with the Overlap and During scenarios. Furthermore, the Mix scenario has more than one answer and involves reasoning with multiple co-temporal relations, which makes it the most challenging compared to other scenarios. 13017 Model Equal Overlap During Mix Overall Acc. F1 Avg. Acc. F1 Avg. Acc. F1 Avg. Acc. F1 Avg. 
The Closed Book Question Answer (CBQA) setting GPT-3.5-TURBO 13.80 14.80 14.30 11.30 14.30 12.80 15.00 22.90 18.90 0.00 15.50 7.70 16.30 GPT-4 11.20 12.30 11.80 11.50 14.00 12.70 14.80 18.50 16.70 0.00 13.60 6.80 14.50 The Open Book Question Answer (OBQA) setting GPT-3.5-TURBO 59.40 66.30 62.80 40.10 48.50 44.30 31.50 42.90 37.20 0.70 46.10 23.40 38.90 GPT-4 91.10 94.30 92.70 55.30 63.50 59.40 44.30 55.80 50.10 23.40 66.50 45.00 54.70 CODELLAMA-7B 6.40 27.70 17.00 3.10 14.60 8.80 3.10 15.80 9.50 2.00 24.10 13.00 10.50 WIZARDCODER-7B 9.20 21.10 15.10 4.70 14.80 9.80 6.30 15.90 11.10 0.50 20.40 10.50 11.20 LLAMA-7B 4.10 18.90 11.50 4.70 19.50 12.10 4.50 19.50 12.00 0.20 23.80 12.00 12.00 WIZARDMATH-7B 12.40 16.50 14.40 9.20 15.20 12.20 11.60 20.50 16.00 0.40 22.00 11.20 14.80 CODELLAMA-13B 7.60 28.30 18.00 4.10 17.00 10.60 3.30 19.30 11.30 3.20 28.60 15.90 12.40 WIZARDCODER-13B 8.30 16.60 12.40 7.00 17.80 12.40 9.50 19.70 14.60 1.10 24.10 12.60 13.90 LLAMA-13B 11.20 31.20 21.20 5.80 21.60 13.70 5.00 20.60 12.80 1.10 26.90 14.00 13.80 WIZARDMATH-13B 23.90 29.00 26.40 10.90 15.10 13.00 11.70 17.10 14.40 0.00 13.20 6.60 14.40 CODELLAMA-34B 16.10 46.50 31.30 9.80 27.00 18.40 8.10 28.40 18.30 4.40 40.30 22.40 20.00 WIZARDCODER-34B 19.50 26.30 22.90 15.20 22.40 18.80 15.90 23.90 19.90 0.90 25.90 13.40 19.20 LLAMA-70B 11.90 41.70 26.80 10.00 32.50 21.20 9.40 33.50 21.40 5.2 0 42.50 23.80 22.20 WIZARDMATH-70B 36.70 46.80 41.80 23.60 33.70 28.60 25.50 37.10 31.30 0.40 32.90 16.60 30.10 HUMAN 97.00 98.30 97.70 91.10 93.50 92.30 82.00 87.00 84.50 88.00 96.20 92.10 92.80 Table 4: Experimental results of each model in the CBQA and OBQA settings of our proposed COTEMPQA. Notably, we only report the performance of GPT-3.5 and GPT-4 in CBQA setting as the open-source LLMs are almost negligible here, and closed-book human evaluations largely depend on individual knowledge, leading to significant variations between different individuals. The best performance of each model is in bold. CBQA is more challenging for LLMs LLMs exhibit significantly weaker performance in the CBQA compared to the OBQA, as reflected in the GPT-4’s performance (14.5 vs. 54.7). Interestingly, GPT-4 is outperformed by GPT-3.5 in CBQA. Our error analysis indicates that GPT-4 often responds with “uncertain" when unsure, unlike GPT-3.5, which tends to provide direct answers. This discovery is also found in previous works (OpenAI, 2023; Wei et al., 2023). This characteristic hinders GPT-4’s effectiveness in co-temporal CBQA, where precise answers are needed. While constructing our datasets, we concentrated on the top 2,000 subjects for each relationship type. These subjects are typically well-covered in pre-training stages, as Wikipedia is a significant part of their training data (Touvron et al., 2023). Despite this prior exposure, LLMs’ reduced capability in CBQA underscores the need to enhance the co-temporal reasoning abilities of LLMs, empowering them to comprehend and reason about concurrent events. Different aspects of capability benefit cotemporal reasoning differently Notably, models specialized in mathematical reasoning (e.g., WizardMath-70B) show significant improvements in co-temporal reasoning, scoring 30.1, compared to the foundational LLaMA-70B model’s 22.2 and CodeLLaMA-34B’s 20.0. This improvement indicates a strong correlation between the skills utilized in math and those required for understanding and interpreting complex temporal relationships. 
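The strict Acc. and answer-level F1 behind Table 4 (Section 3.3) can be sketched as follows. This reflects our reading of the metric definitions; the exact answer normalization used by the authors is not shown here.

def strict_accuracy(predicted, gold):
    """A prediction is correct only if it matches the complete gold answer set."""
    return float(set(predicted) == set(gold))

def answer_level_f1(predicted, gold):
    """F1 computed over whole answers rather than tokens."""
    pred, ref = set(predicted), set(gold)
    true_positives = len(pred & ref)
    if true_positives == 0:
        return 0.0
    precision = true_positives / len(pred)
    recall = true_positives / len(ref)
    return 2 * precision * recall / (precision + recall)

# Returning only one of two concurrent employers earns partial F1 but zero strict accuracy.
gold = ["National University of Singapore", "University of Malaya"]
pred = ["National University of Singapore"]
print(strict_accuracy(pred, gold), round(answer_level_f1(pred, gold), 2))  # 0.0 0.67

Under such definitions, a model that returns a single entity on multi-answer questions keeps precision high but sacrifices recall, which matches the behavior discussed next for WizardMath in the Mix scenario.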
Although WizardMath is the overall best model among the baselines, we also observe its reduced effectiveness in the Mix scenario compared with others. Further investigation shows that questions in the Mix scenario have multiple answers, and WizardMath tends to return a single response rather than enumerating all possible answers, causing higher precision but lower recall in contrast to other models (i.e., LLaMA, CodeLLaMA). We provide further experimental results and analysis in Appendix D. In Appendix E, we provide a detailed error analysis to help understand the limitations of current models and guide future improvements in co-temporal reasoning capabilities.
3.5 Data Analysis
In Section 2.3, we categorize co-temporal facts into five scenarios. Building on this classification, this section investigates how the various types of fact elements influence LLMs' ability to perform co-temporal reasoning. To ensure fairness in these experiments, we excluded questions with multiple answers and standardized the number of questions across all co-temporal relations. Figure 2 illustrates GPT-4's performance with various element types. Additional results for different LLMs are presented in Table 10, and the results consistently align with the findings shown below.
Figure 2: Performance (Acc. and F1) of GPT-4 under different co-temporal element types in the OBQA setting of our COTEMPQA; panel (a) shows Triple Element Types and panel (b) shows Triplet Element Types. "Triplet" indicates scenarios where each element of (S, R, O) either changes or remains constant, while "Triple" focuses on variations in a single element. An overline indicates that the corresponding element differs between the two facts. The best performance of each element type is in bold.
Model | Equal Acc. F1 Avg. | Overlap Acc. F1 Avg. | During Acc. F1 Avg. | Mix Acc. F1 Avg. | Overall
The Closed Book Question Answer (CBQA) setting
GPT-4 | 11.20 12.30 11.80 | 11.50 14.00 12.70 | 14.80 18.50 16.70 | 0.00 13.60 6.80 | 14.50
+ COT | 12.20 14.40 13.30 | 8.40 12.50 10.50 | 12.10 18.60 15.30 | 1.60 14.30 8.00 | 13.60
+ FS | 26.40 29.60 28.00 | 17.60 21.20 19.40 | 20.60 26.70 23.70 | 0.00 21.70 10.90 | 22.00
+ FS&COT | 32.10 35.20 33.60 | 19.90 25.70 22.80 | 23.20 29.50 26.40 | 0.50 25.60 13.10 | 25.00
+ FS&MR-COT | 24.80 30.60 27.70 | 16.70 29.90 23.30 | 20.80 35.70 28.20 | 3.90 31.70 17.80 | 26.30
The Open Book Question Answer (OBQA) setting
GPT-4 | 91.10 94.30 92.70 | 55.30 63.50 59.40 | 44.30 55.80 50.10 | 23.40 66.50 45.00 | 54.70
+ COT | 87.80 90.00 88.90 | 46.20 58.70 52.50 | 43.50 57.00 50.20 | 29.50 71.60 50.50 | 54.10
+ FS | 87.40 91.40 89.40 | 62.60 72.50 67.60 | 55.90 68.60 62.20 | 30.60 71.90 51.20 | 64.20
+ FS&COT | 96.80 97.30 97.10 | 61.30 71.40 66.30 | 55.70 69.40 62.50 | 32.10 73.20 52.70 | 65.00
+ FS&MR-COT | 95.90 97.20 96.50 | 77.90 83.90 80.90 | 69.00 78.80 73.90 | 50.30 82.20 66.20 | 75.80
Table 5: Performance of GPT-4 under zero-shot CoT (COT) prompting, few-shot (FS) prompting, few-shot CoT (FS&COT) prompting, and our proposed few-shot MR-CoT (FS&MR-COT) prompting in CBQA and OBQA.
The influence of triple element types. As observed in Figure 2a, the complexity of co-temporal reasoning for models increases with the number of changing elements. Among the scenarios, (S̄, R, Ō) and (S̄, R̄, Ō) are particularly challenging compared to others.
This indicates that LLMs encounter significant challenges when dealing with scenarios of high complexity, where multiple elements undergo simultaneous changes. The analysis below further investigates which elements present the most significant challenges to co-temporal reasoning capabilities.
The influence of triplet element types. In the left part of Figure 2b, we observe a notable decline in the model's performance, i.e., a 22.5 point decrease in Acc. and a 24.4 point decrease in F1, when it engages in reasoning involving multiple subjects. These findings highlight models' inherent difficulty in processing information about multiple concurrent subjects. While the reasoning process for handling multiple subjects shares similarities with single-subject scenarios, real-world situations are inherently more complex and variable: the model is required to integrate information across different subjects and understand complex relationships that extend beyond a single domain or context. On the other hand, in the right part of Figure 2b, we find that a difference in the fact relation does not significantly impact the abilities of LLMs in co-temporal reasoning. We attribute this to the fact that relationship changes are not as complex as changes involving multiple subjects in real-world scenarios (Huang and Chang, 2023), making them less challenging for the models.
4 Making Language Models Better COTEMPQA Responders
Previous research has demonstrated that Chain-of-Thought (CoT) prompting enables models to process complex reasoning tasks, such as mathematical and logical reasoning, in a step-by-step manner (Wei et al., 2022b). Motivated by this, we explore the application of CoT prompting to improve the capabilities of co-temporal reasoning in this section.
Question: While Klaus Fuchs was working for Atomic Energy Research Establishment, which employer did Alexander Oppenheim work for during the identical time period?
Context: Alexander Oppenheim works for National University of Singapore from 1949 to 1965. Alexander Oppenheim works for University of Malaya from 1949 to 1965. Klaus Fuchs works for Atomic Energy Research Establishment from 1946 to 1950. ......
Gold Answer: National University of Singapore and University of Malaya
GPT-4 | Answer: National University of Singapore
+ CoT | Reasoning: Klaus Fuchs worked for Atomic Energy Research Establishment from 1946 to 1950. Currently, Alexander Oppenheim worked for University of Ghana from 1968 to 1973. However, this does not overlap with the time Klaus Fuchs was at the Atomic Energy Research Establishment. Answer: no information
+ Fs | Answer: University of Malaya
+ Fs&CoT | Reasoning: Klaus Fuchs worked for Atomic Energy Research Establishment from 1946 to 1950. During this same period, Alexander Oppenheim worked for National University of Singapore, as his tenure with the university spanned from 1949 to 1965. Answer: National University of Singapore
+ Fs&Mr-CoT | The datetime can be formed as ((1946, None, None), (1950, None, None)). The content provided and related to the question can be structured as: (University of Malaya, (1949, None, None), (1965, None, None)); (National University of Singapore, (1949, None, None), (1965, None, None)). Given the ((1946, None, None), (1950, None, None)), compared with all contents related, we find that [(1949, None, None) - (1965, None, None)] ∩ [(1946, None, None) - (1950, None, None)] ≠ ∅. Reasoning: Klaus Fuchs worked for Atomic Energy Research Establishment from 1946 to 1950.
Answer: National University of Singapore and University of Malaya
Table 6: Example inputs and outputs of GPT-4 with zero-shot CoT (COT) prompting, few-shot (FS) prompting, few-shot CoT (FS&COT) prompting, and few-shot MR-CoT (FS&MR-COT) prompting. Answers highlighted in blue are correct, whereas those marked in red are incorrect.
4.1 Math-Reasoning CoT (MR-COT)
As indicated in Table 4, a mathematically oriented reasoning framework can enrich LLMs' understanding and handling of co-temporal reasoning tasks. Specifically, the WizardMath-70B model scores 30.1 overall, notably higher than the foundational LLaMA-70B model's score of 22.2. In light of this finding, we propose MATH-REASONING (MR-COT) instruction-based prompting, which can be used together with in-context learning and chain-of-thought prompting. As demonstrated at the bottom of Table 6, our framework consists of three steps: (1) establish the key datetime, (2) structure the relevant timeline, and (3) mathematically identify the overlap. This prompt aims to guide LLMs towards approaching temporal reasoning problems from a mathematical perspective, aligning their problem-solving processes more closely with mathematical logic and principles.
4.2 Experimental Setup
We launch experiments under both zero-shot and few-shot settings. In the zero-shot CoT scenario, we use "Let's think step by step" (Kojima et al., 2022) after the question as the reasoning trigger. In contrast, the few-shot setting provides the model with several question-answer pairs as initial demonstrations. Specifically, for the few-shot CoT scenario, we manually create rationales for each task, which are used as demonstrations to guide the model in step-by-step reasoning. Further details on the instructions and demonstrations are available in Figure 3 to Figure 12 in Appendix F.
4.3 Results and Analysis
The results are presented in Table 5, and the outputs of GPT-4 for a range of prompts under different settings are shown in Table 6. From these tables, we can draw the following insights:
Inconsistency in the impact of existing CoT prompts on GPT-4. In the zero-shot scenario, improvements were inconsistent, with a notable 5.5 point increase on the Mix task and a 3.8 point decrease on the Equal task under the OBQA setting. This suggests that the impact of CoT prompts varies significantly based on the task type. Moreover, GPT-4 demonstrates an overall decline in performance on both CBQA and OBQA when complemented with CoT. In the few-shot scenario, while overall improvements exist due to CoT prompts, these are relatively modest, amounting to an average performance enhancement of 0.8 in OBQA. All results indicate that while existing CoT prompts can be beneficial, their effectiveness is nuanced and task-dependent.
Superiority of our proposed MR-COT. Our method demonstrates significant superiority over existing reasoning enhancement strategies. Notably, MR-COT significantly enhances performance on the more challenging tasks, yielding improvements of 14.6, 11.4, and 13.5 points on the Overlap, During, and Mix tasks, respectively, in the OBQA setting. In the closed-book scenario, which is typically more difficult to improve, our method still achieves a 1.3 point gain. However, we observe that our method has only a moderate effect on the Equal setting. We hypothesize that this is because the task is simple enough that it does not require the additional complexity of mathematical reasoning.
In such cases, this added complexity could be counterproductive. Despite these advancements, there is still a considerable gap compared to human-level reasoning, indicating the need for more effective methods to improve the model’s co-temporal reasoning abilities. 5 Related Work 5.1 Temporal Reasoning Benchmarks Temporal reasoning in natural language processing has seen significant advancements over the years. Early benchmarks, such as TimeBank (PUSTEJOVSKY, 2003), and TempEval-3 (UzZaman et al., 2012), lay the foundational work in this domain. They primarily focused on understanding temporal relationships between events in text, offering a preliminary framework for analyzing time in language models. However, recent years have witnessed a significant surge in developing time-sensitive question-answering datasets. These newer datasets, including MC-TACO (Zhou et al., 2019), SituatedQA (Zhang and Choi, 2021), TimeQA (Chen et al., 2021), TempLAMA (Dhingra et al., 2022), StreamingQA (Liska et al., 2022), RealtimeQA (Kasai et al., 2022), TempREASON (Tan et al., 2023) and Menatqa (Wei et al., 2023), represent a more nuanced approach to temporal reasoning. These datasets challenge models to answer questions grounded in specific times or events, thereby testing the models’ ability to comprehend and reason with temporal information more dynamically. The introduction of benchmarks such as TRAM (Wang and Zhao, 2023) and TimeBench (Chu et al., 2023) marks a significant advancement, providing crucial platforms for temporal reasoning research. Despite these advancements, there has been a noticeable gap in exploring the concurrent nature of temporal events. Previous research has primarily focused on individual events or sequences of events in isolation, overlooking the complexity of scenarios where multiple events cooccur or interact over the same period. Our work aims to fill this gap by being the first to explore the concurrent nature of temporal events. 5.2 Temporal Reasoning over LLMs To enhance the temporal reasoning capabilities of language models, previous methods either rely heavily on knowledge graphs to rank entities that satisfy the time-related queries (Han et al., 2021; Mavromatis et al., 2022; Liang et al., 2022; Chen et al., 2023; Huang et al., 2024) or are strictly dependent on the continual pre-training to strengthen models’ abilities in certain temporal aspects (Su et al., 2022; Tan et al., 2023; Yuan et al., 2023). The evolution of LLMs has demonstrated impressive ability in complex reasoning tasks (Chen, 2023), such as mathematical reasoning (Mishra et al., 2022) and logic reasoning (liu et al., 2023). In light of these advancements, recent methods shift towards a program-aided approach (Gao et al., 2023) to improve the performance of time-sensitive tasks, employing Python code as an intermediate logical step instead of natural language (Li et al., 2023). This method, while effective, relies heavily on external tools (Zhu et al., 2023b) and does not fully leverage the inherent capabilities of LLMs (Brown et al., 2020). The results reveal that existing LLMs, even with advanced strategies like Chain of Thought (Wei et al., 2022b), demonstrate limited efficacy in addressing the complexities inherent in co-temporal reasoning tasks. Meanwhile, our research highlights the significant role of mathematical abilities in co-temporal reasoning, offering a direction for future methodologies. 
6 Potential impact We believe that it is crucial to enable LLMs to understand events that occur simultaneously or overlap in time. The potential impact of improved cotemporal reasoning on downstream applications includes the following: • Question Answering: Advanced co-temporal 13021 reasoning capabilities enable models to better handle questions involving concurrent events, leading to more accurate responses that reflect a deeper understanding of temporal relationships between events. • Event Understanding: Enhanced co-temporal reasoning improves the model’s ability to comprehend concurrent events, leading to more accurate identification of event relationships and temporal sequences. • Timeline Generation: Improved co-temporal reasoning facilitates more precise timeline generation by better understanding the relationships between events, resulting in more coherent and accurate timelines. 7 Conclusion In this paper, we propose the COTEMPQA datasets to facilitate the investigation of under-explored cotemporal reasoning problems for large language models. Extensive experiments have shown a significant gap between existing advanced LLMs and human-level performance, even with the enhancement of reasoning approaches. We also discover that mathematical reasoning is crucial for understanding co-temporal events and propose a mathbased strategy to improve LLMs’ co-temporal reasoning. Reasoning on concurrent and intricate temporal relations remains an open research question, and we hope more enhancement to develop upon our COTEMPQA datasets. Limitations There are still some limitations in our work, which are listed below: • While the COTEMPQA is comprehensive, with 4,748 samples, a larger dataset could provide more robust evaluation and training opportunities. As our dataset construction pipeline is adaptable to more data sources besides Wikidata, we will continuously expand our dataset and explore model training in future versions. • For our open-book QA setting, we directly provide the subject’s relevant facts in a structured format in the prompt. Recent work shows that LLM’s performance in context-based reasoning was significantly weaker than in the former (Chu et al., 2023). In the future, we will employ some retrieval tools to construct prompts with more contextually rich information sources. • We evaluate the co-temporal reasoning capabilities from the perspective of task performance. However, a more direct approach could involve analyzing how the model’s neurons and hidden states are triggered (Zhang et al., 2023). This limitation is not unique to our study and is common in most evaluations of Large Language Models. Acknowledgement We want to thank all the anonymous reviewers for their valuable comments. This work was supported by the National Science Foundation of China (NSFC No. 62206194), the Priority Academic Program Development of Jiangsu Higher Education Institutions, the Natural Science Foundation of Jiangsu Province, China (Grant No. BK20220488), Young Elite Scientists Sponsorship Program by CAST (2023QNRC001), and Supercomputing Center in Yancheng, Grant No. 20231001. References Joshua Ainslie, James Lee-Thorp, Michiel de Jong, Yury Zemlyanskiy, Federico Lebrón, and Sumit Sanghai. 2023. Gqa: Training generalized multi-query transformer models from multi-head checkpoints. arXiv preprint arXiv:2305.13245. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. 
Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, and Xing Xie. 2023. A survey on evaluation of large language models. Wenhu Chen. 2023. Large language models are few(1)shot table reasoners. In Findings of the Association for Computational Linguistics: EACL 2023, pages 1120–1130, Dubrovnik, Croatia. Association for Computational Linguistics. Wenhu Chen, Xinyi Wang, and William Yang Wang. 2021. A dataset for answering timesensitive questions. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2). Ziyang Chen, Jinzhi Liao, and Xiang Zhao. 2023. Multigranularity temporal question answering over knowledge graphs. In Proceedings of the 61st Annual Meeting of the Association for Computational 13022 Linguistics (Volume 1: Long Papers), pages 11378– 11392, Toronto, Canada. Association for Computational Linguistics. Qiao Cheng, Juntao Liu, Xiaoye Qu, Jin Zhao, Jiaqing Liang, Zhefeng Wang, Baoxing Huai, Nicholas Jing Yuan, and Yanghua Xiao. 2021. Hacred: A largescale relation extraction dataset toward hard cases in practical applications. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2819–2831. Qinyuan Cheng, Tianxiang Sun, Xiangyang Liu, Wenwei Zhang, Zhangyue Yin, Shimin Li, Linyang Li, Zhengfu He, Kai Chen, and Xipeng Qiu. 2024. Can ai assistants know what they don’t know? Zheng Chu, Jingchang Chen, Qianglong Chen, Weijiang Yu, Haotian Wang, Ming Liu, and Bing Qin. 2023. Timebench: A comprehensive evaluation of temporal reasoning abilities in large language models. arXiv preprint arXiv:2311.17667. Bhuwan Dhingra, Jeremy R. Cole, Julian Martin Eisenschlos, Daniel Gillick, Jacob Eisenstein, and William W. Cohen. 2022. Time-aware language models as temporal knowledge bases. Transactions of the Association for Computational Linguistics, 10:257– 273. Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Pal: Program-aided language models. Yingjie Gu, Xiaoye Qu, Zhefeng Wang, Yi Zheng, Baoxing Huai, and Nicholas Jing Yuan. 2022. Delving deep into regularity: a simple but effective method for chinese named entity recognition. arXiv preprint arXiv:2204.05544. Rujun Han, Xiang Ren, and Nanyun Peng. 2021. ECONET: Effective continual pretraining of language models for event temporal reasoning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5367–5380, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Jie Huang and Kevin Chen-Chuan Chang. 2023. Towards reasoning in large language models: A survey. Rikui Huang, Wei Wei, Xiaoye Qu, Wenfeng Xie, Xianling Mao, and Dangyang Chen. 2024. Joint multifacts reasoning network for complex temporal question answering over knowledge graph. arXiv preprint arXiv:2401.02212. Jungo Kasai, Keisuke Sakaguchi, Yoichi Takahashi, Ronan Le Bras, Akari Asai, Xinyan Yu, Dragomir Radev, Noah A Smith, Yejin Choi, and Kentaro Inui. 2022. Realtime qa: What’s the answer right now? arXiv preprint arXiv:2207.13332. Takeshi Kojima, Shixiang (Shane) Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. In Advances in Neural Information Processing Systems, volume 35, pages 22199–22213. 
Curran Associates, Inc. Xingxuan Li, Liying Cheng, Qingyu Tan, Hwee Tou Ng, Shafiq Joty, and Lidong Bing. 2023. Unlocking temporal question answering for large language models using code execution. Ke Liang, Lingyuan Meng, Meng Liu, Yue Liu, Wenxuan Tu, Siwei Wang, Sihang Zhou, Xinwang Liu, and Fuchun Sun. 2022. Reasoning over different types of knowledge graphs: Static, temporal and multi-modal. arXiv preprint arXiv:2212.05767. Adam Liska, Tomas Kocisky, Elena Gribovskaya, Tayfun Terzi, Eren Sezener, Devang Agrawal, D’Autume Cyprien De Masson, Tim Scholtes, Manzil Zaheer, Susannah Young, et al. 2022. Streamingqa: A benchmark for adaptation to new knowledge over time in question answering models. In International Conference on Machine Learning, pages 13604– 13622. PMLR. Hanmeng liu, Zhiyang Teng, Ruoxi Ning, Jian Liu, Qiji Zhou, and Yue Zhang. 2023. Glore: Evaluating logical reasoning of large language models. Pan Lu, Liang Qiu, Wenhao Yu, Sean Welleck, and Kai-Wei Chang. 2023. A survey of deep learning for mathematical reasoning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14605–14631, Toronto, Canada. Association for Computational Linguistics. Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. 2023a. Wizardmath: Empowering mathematical reasoning for large language models via reinforced evol-instruct. arXiv preprint arXiv:2308.09583. Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. 2023b. Wizardcoder: Empowering code large language models with evolinstruct. Costas Mavromatis, Prasanna Lakkur Subramanyam, Vassilis N. Ioannidis, Adesoji Adeshina, Phillip R Howard, Tetiana Grinberg, Nagib Hakim, and George Karypis. 2022. Tempoqr: Temporal question reasoning over knowledge graphs. Proceedings of the AAAI Conference on Artificial Intelligence, 36(5):5825–5833. Swaroop Mishra, Matthew Finlayson, Pan Lu, Leonard Tang, Sean Welleck, Chitta Baral, Tanmay Rajpurohit, Oyvind Tafjord, Ashish Sabharwal, Peter Clark, and Ashwin Kalyan. 2022. LILA: A unified benchmark for mathematical reasoning. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 5807–5832, 13023 Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. OpenAI. 2023. Gpt-4 technical report. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744. J PUSTEJOVSKY. 2003. The timebank corpus. In Proceedings of Corpus Linguistics 2003, pages 647– 656. James Pustejovsky, Robert Ingria, Andrea Setzer, and Graham Katz. 2003. Timeml: Robust specification of event and temporal expressions in text. Xiaoye Qu, Jun Zeng, Daizong Liu, Zhefeng Wang, Baoxing Huai, and Pan Zhou. 2023. Distantlysupervised named entity recognition with adaptive teacher learning and fine-grained student ensemble. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 13501–13509. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. 
In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. 2023. Code llama: Open foundation models for code. arXiv preprint arXiv:2308.12950. Zhaochen Su, Juntao Li, Zikang Zhang, Zihan Zhou, and Min Zhang. 2023. Efficient continue training of temporal language model with structural information. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 6315–6329, Singapore. Association for Computational Linguistics. Zhaochen Su, Zecheng Tang, Xinyan Guan, Lijun Wu, Min Zhang, and Juntao Li. 2022. Improving temporal generalization of pre-trained language models with lexical semantic change. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 6380–6393, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Qingyu Tan, Hwee Tou Ng, and Lidong Bing. 2023. Towards benchmarking and improving the temporal reasoning capability of large language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 14820–14835, Toronto, Canada. Association for Computational Linguistics. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. Naushad UzZaman, Hector Llorens, James F. Allen, Leon Derczynski, Marc Verhagen, and James Pustejovsky. 2012. Tempeval-3: Evaluating events, time expressions, and temporal relations. CoRR, abs/1206.5333. Denny Vrandeˇci´c and Markus Krötzsch. 2014. Wikidata: A free collaborative knowledge base. Communications of the ACM, 57:78–85. Yuqing Wang and Yun Zhao. 2023. Tram: Benchmarking temporal reasoning for large language models. arXiv preprint arXiv:2310.00835. Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V Le. 2022a. Finetuned language models are zero-shot learners. In International Conference on Learning Representations. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed Chi, Quoc V Le, and Denny Zhou. 2022b. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems, volume 35, pages 24824–24837. Curran Associates, Inc. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed H Chi, Quoc V Le, Denny Zhou, et al. 2022c. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems. Yifan Wei, Yisong Su, Huanhuan Ma, Xiaoyan Yu, Fangyu Lei, Yuanzhe Zhang, Jun Zhao, and Kang Liu. 2023. Menatqa: A new dataset for testing the temporal comprehension and reasoning abilities of large language models. arXiv preprint arXiv:2310.05157. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. 
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Association for Computational Linguistics. Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. 2023. Wizardlm: Empowering large language models to follow complex instructions. 13024 Chenhan Yuan, Qianqian Xie, Jimin Huang, and Sophia Ananiadou. 2023. Back to the future: Towards explainable temporal reasoning with large language models. Michael Zhang and Eunsol Choi. 2021. SituatedQA: Incorporating extra-linguistic contexts into QA. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7371–7387, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, and Alex Smola. 2023. Multimodal chain-of-thought reasoning in language models. Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023. A survey of large language models. arXiv preprint arXiv:2303.18223. Victor Zhong, Weijia Shi, Wen tau Yih, and Luke Zettlemoyer. 2022. Romqa: A benchmark for robust, multievidence, multi-answer question answering. Ben Zhou, Daniel Khashabi, Qiang Ning, and Dan Roth. 2019. “going on a vacation” takes longer than “going for a walk”: A study of temporal commonsense understanding. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3363–3369, Hong Kong, China. Association for Computational Linguistics. Xinyu Zhu, Junjie Wang, Lin Zhang, Yuxiang Zhang, Yongfeng Huang, Ruyi Gan, Jiaxing Zhang, and Yujiu Yang. 2023a. Solving math word problems via cooperative reasoning induced language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4471–4485, Toronto, Canada. Association for Computational Linguistics. Xinyu Zhu, Cheng Yang, Bei Chen, Siheng Li, JianGuang Lou, and Yujiu Yang. 2023b. Question answering as programming for solving time-sensitive questions. arXiv preprint arXiv:2305.14221. 13025 A Implementation Details We utilize the OpenAI API1 to evaluate all closedsource models, and for open-source models, we employ the transformers library (Wolf et al., 2020). In all our experiments, we set the temperature to 0 and the maximum length to 256. These experiments were conducted across a full range of scales for each evaluated model. B The Details of Co-temporal Extraction Building on our approach to structuring timedependent facts from Wikidata, we delve into extracting co-temporal facts by identifying overlaps in the timestamps of different facts. Specifically, we compare a given fact fi = (si, ri, oi, tsi, tei) with another fact fj in five distinct scenarios: B.1 Scenario 1: (S, R, O) Original Fact: fi as defined above. Compared Fact: fj = (si, ri, oj, tsj, tej). Explanation: In this scenario, the subject si and relation ri remain constant, indicating the same subject in the same type of relationship. However, the object changes, where the subject is related to different objects during co-temporal periods. Template: While <subject1> was holding the position of <object1>, which position did <subject1> hold during the same time span? B.2 Scenario 2: (S, R, O) Original Fact: fi as above. 
Compared Fact: fj = (si, rj, oj, tsj, tej). Explanation: Here, the subject si stays constant while the relation and the object change. This scenario is crucial for identifying instances where a single subject is involved in different relationships with different objects concurrently. Template: While <subject1> was holding the position of <object1>, which political party did <subject1> belong to simultaneously? B.3 Scenario 3: (S, R, O) Original Fact: fi as above. Compared Fact: fj = (sj, ri, oi, tsj, tej). Explanation: This scenario reflects cases where the relationship and object remain constant, but the subject changes. It suggests different subjects simultaneously having the same type of 1https://platform.openai.com/ relationship with the same object. Template: While <subject1> was holding the position of <object1>, who also held the position of <object1> concurrently? B.4 Scenario 4: (S, R, O) Original Fact: fi as above. Compared Fact: fj = (sj, ri, oj, tsj, tej). Explanation: Only the relationship remains constant in this case, while both the subject and the object change. This scenario signifies instances where different subjects share a common relationship with different objects concurrently. Template: While <subject1> was playing for <object1>, which team did <subject2> play for within the same time interval? B.5 Scenario 5: (S, R, O) Original Fact: fi as above. Compared Fact: fj = (sj, rj, oj, tsj, tej). Explanation: This scenario represents completely distinct facts that overlap in time, with all quintuplet elements changing. Template: While <subject1> was holding the position of <object1>, which employer did <subject2> work for during the same time? C Case Study Table 4 indicates that existing LLMs can effectively reason about straightforward concurrent events. However, they encounter difficulties in more complex tasks that require a deeper understanding and comprehension of co-temporal reasoning. In this section, we provide a further case study to show this difference. As shown in Table 7, the Equal scenario is more accessible for LLMs as their cotemporal time interval entirely overlap. Overlap and During scenarios present intricate temporal intersections, necessitating more implicit reasoning to understand the co-temporal relationships. It becomes more challenging to determine whether one time period intersects another (i.e., During and Overlap) compared to the straightforward identification in the Equal scenario. Additionally, the Mix scenario has several correct answers and contains various co-temporal relationships, which makes it the most challenging compared to other scenarios. 13026 Equal Context: Thomas Wenski holds the position of auxiliary bishop in June 24, 1997. Thomas Wenski holds the position of titular bishop in June 24, 1997. ...... Question: While Thomas Wenski was holding the position of auxiliary bishop, which position did Thomas Wenski during the same time period? Overlap Context: Avet Ter-Gabrielyan works for Yerevan Komitas State Conservatory from 1923 to 1944. Avet Ter-Gabrielyan works for Komitas Quartet from 1924 to 1976. ...... Question: While Avet Ter-Gabrielyan was working for Yerevan Komitas State Conservatory, which employer did Avet Ter-Gabrielyan work for during the same time span? During Context: Yaiza Canzani works for Institute for Advanced Study from July 1, 2014 to July 31, 2015. Yaiza Canzani works for Harvard University from June 30, 2013 to June 30, 2016. ...... 
Question: While Yaiza Canzani was working for Institute for Advanced Study, which employer did Yaiza Canzani work for at the same time? Mix Context: John Daniel FitzGerald holds the position of Minister for Justice from July 23, 1919 to April 12, 1920. John Daniel FitzGerald holds the position of Minister for Local Government from November 15, 1916 to April 12, 1920. John Daniel FitzGerald holds the position of Solicitor General for New South Wales from July 23, 1919 to April 12, 1920. John Daniel FitzGerald holds the position of Vice-President of the Executive Council from April 27, 1915 to July 30, 1919. ...... Question: While John Daniel FitzGerald was holding the position of Minister for Justice, which position did John Daniel FitzGerald during the same time period? Table 7: Case Study. We provide some representative examples to give an intuitive presentation of the varying difficulties in the COTEMPQA. Time periods are highlighted in bold for easy identification. D Further Analysis for the Mix Scenario In this section, we provide further analysis for WizardMath’s reduced effectiveness in the Mix scenario by the case and experimental results. As shown in Table 8, CodeLLaMA and LLaMA prefer to provide all potential answers, but WizardMath only returns a signal alternative answer. WizardMath is trained to return the answer with the highest probability since preciseness and accuracy are required in mathematical reasoning (Lu et al., 2023). As questions have more than one correct answers in the Mix scenario, WizardMath score the highest precision (47.5%) and the lowest Recall (28.8%), leading to underperformance in this scenario. E Error Analysis To better understand the mistakes made by models, we focused our investigation on the responses generated by GPT-4 under 0-shot CoT. We divide the errors into three categories: • Incomplete answer errors refer to situations where the questions have multiple correct answers, but failing to return all of them. • Uncertainty errors represent the models’ inability to extract the co-temporal relation from the context provided and refuse to response the question. • Incorrect answer errors are characterized by the model cannot return the correct answers, which means the models are insufficient in cotemporal reasoning. Our case-wise error analysis is shown in Table 9, “uncertainty errors” are the most frequent error type, accounting for 43.14%. We assume that the GPT-4 tends to provide relatively conservative responses and only returns answers when there is a certain level of confidence (Cheng et al., 2024). Future research needs to optimize the model’s framework and further enhance the capabilities of LLMs in co-temporal understanding and reasoning. F Prompts The prompts and demonstrations can be found from Figure 3 to Figure 12. 13027 Model Precision Recall F1 Prediction CodeLLaMA-34B 41.6 47.3 40.3 Minister of Finance, Minister of Education of New Zealand, Minister of Justice LLaMA-70B 41.3 58.3 42.5 Minister of Finance, Minister of Education of New Zealand, Minister of Justice WizardMath-70B 47.5 28.8 32.9 Minister of Education of New Zealand Table 8: The performance of different open source model in the mixed scenario and the models’ prediction when the ground truth is Minister of Finance, Minister of Education of New Zealand. Error Type Example Incomplete answer errors (27.93%) Question: While Bodil Nyboe Andersen was working for Tryg, which employer did Bodil Nyboe Andersen work for within the same time interval? 
Gold answer: Sampension, Alka Predict answer: Alka Uncertainty errors (43.14%) Question: While Alain Decaux was holding the position of director, which position did Alain Decaux at the same time? Gold answer: president Predict answer: Alain Decaux did not hold any other position at the same time he was a director from 1969 to 1971. Incorrect answer errors (28.93%) Question: While Thomas Wenski was holding the position of auxiliary bishop, which position did Thomas Wenski during the same time period? Gold answer: titular bishop Predict answer: Minister for Children (Denmark) Table 9: Case-wise error analysis. The incorrect answers are categorized into three types (i.e., incomplete answer errors, uncertainty errors, incorrect answer errors). For each type, an illustrative example is provided to enhance clarity and understanding. “Uncertainty errors” are the most frequent error type. Model (S, R, O) (S, R, O) (S, R, O) (S, R, O) (S, R, O) Overall Acc. F1 Avg. Acc. F1 Avg. Acc. F1 Avg. Acc. F1 Avg. Acc. F1 Avg. The Closed Book Question Answer (CBQA) setting GPT-3.5-TURBO-0613 7.3 11.9 9.6 29.7 31.1 30.4 3.0 4.5 3.7 13.3 30.6 22.0 8.5 21.8 15.1 16.3 GPT-4-0613 3.5 8.0 5.7 30.0 31.3 30.7 2.3 4.4 3.3 15.0 24.5 19.7 9.5 15.1 12.3 14.5 The Open Book Question Answer (OBQA) setting GPT-3.5-TURBO 34.2 55.8 45.0 54.7 57.1 55.9 34.4 55.6 45.0 17.8 33.5 25.6 15.1 28.0 21.5 38.9 GPT-4 57.0 74.7 65.9 68.0 70.8 69.4 58.0 73.3 65.6 29.9 48.8 39.4 23.5 39.8 31.7 54.7 CODELLAMA-7B 4.3 24.3 14.3 6.5 23.6 15.0 1.3 15.4 8.4 1.7 9.9 5.8 2.2 14.3 8.3 10.5 WIZARDCODER-7B 3.3 17.9 10.6 16.6 27.3 22.0 3.6 16.7 10.1 1.7 8.2 4.9 2.3 12.5 7.4 11.2 LLAMA-7B 2.5 20.5 11.5 9.5 23.0 16.3 1.9 20.5 11.2 2.2 16.8 9.5 3.4 18.7 11.1 12.0 WIZARDMATH-7B 7.2 16.8 12.0 22.8 26.6 24.7 4.5 19.1 11.8 7.5 17.9 12.7 7.1 17.1 12.1 14.8 CODELLAMA-13B 6.0 26.0 16.0 7.7 26.9 17.3 2.0 17.9 10.0 1.0 16.2 8.6 1.7 16.7 9.2 12.4 WIZARDCODER-13B 5.0 16.6 10.8 19.6 30.2 24.9 3.9 23.3 13.6 6.3 15.1 10.7 4.6 12.6 8.6 13.9 LLAMA-13B 6.2 28.6 17.4 10.5 27.0 18.7 3.6 20.9 12.2 2.3 17.2 9.8 3.0 17.6 10.3 13.8 WIZARDMATH-13B 11.2 19.7 15.5 30.9 34.0 32.5 3.1 9.1 6.1 5.6 13.8 9.7 4.0 9.1 6.5 14.4 CODELLAMA-34B 9.9 42.4 26.2 19.6 38.3 29.0 4.7 27.1 15.9 4.3 25.6 14.9 3.6 21.6 12.6 20.0 WIZARDCODER-34B 11.0 22.6 16.8 35.2 38.0 36.6 9.0 27.4 18.2 7.5 15.9 11.7 7.4 16.0 11.7 19.2 LLAMA-70B 10.3 43.6 26.9 14.7 37.6 26.1 7.1 31.7 19.4 7.0 32.4 19.7 6.2 29.9 18.1 22.2 WIZARDMATH-70B 18.3 37.8 28.1 49.8 53.4 51.6 8.6 24.8 16.7 20.1 37.3 28.7 17.4 30.1 23.8 30.1 Table 10: Experimental results of different triple element types in COTEMPQA. The best performance is bold. 13028 WikiData ID KB Relation Pairs # Queries Template P102-P102 political party & political party 475 While <subject> was a member of <object>, which political party did <subject> belong to within the same time interval? P39-P39 position held & position held 1,017 While <subject> was holding the position of <object>, which position did <subject> hold during the same time span? P108-P108 employer & employer 768 While <subject> was working for <object>, which employer did <subject> work for during the same time period? P54-P54 member of sports team & member of sports team 204 While <subject> was playing for <object>, which team did <subject> play for at the same time? P69-P69 educated at & educated at 258 While <subject> attended <object>, which school was <subject> attending during the identical time period? 
P127-P127 owned by & owned by 75 While <subject> was owned by <object>, who was the owner of <subject> concurrently? P102-P39 political party & position held 117 While <subject> was a member of <object>, which position did <subject> hold simultaneously? P102-P108 political party & employer 101 While <subject> was a member of <object>, which employer did <subject> work for during the same time span? P102-P69 political party & educated at 74 While <subject> was a member of <object>, which school was <subject> attending within the same time interval? P39-P102 position held & political party 420 While <subject> was holding the position of <object>, which political party did <subject> belong to during the same time period? P39-P108 position held & employer 380 While <subject> was holding the position of <object>, which employer did <subject> work for at the same time? P108-P39 employer & position held 125 While <subject> was working for <object>, which position did <subject> hold during the identical time period? P108-P69 employer & educated at 241 While <subject> was working for <object>, which school was <subject> attending concurrently? P54-P69 member of sports team & educated at 77 While <subject> was playing for <object>, which school was <subject> attending simultaneously? P69-P102 educated at & political party 187 While <subject> attended <object>, which political party did <subject> belong to during the same time span? P69-P39 educated at & position held 95 While <subject> attended <object>, which position did <subject> hold within the same time interval? P69-P108 educated at & employer 134 While <subject> attended <object>, which employer did <subject> work for during the same time period? Table 11: Templates used for converting Wikidata facts into natural questions. We manually predefine 17 types of KB relation pairs and ensure these relations are interconnected logically and contextually. Taking the “position held & employer” relation pair as an example, understanding the overlap between the period when a person held a specific position and their employment at the same organization can provide valuable insights into career patterns. 13029 Question: While Valdis Dombrovskis was holding the position of European Commissioner for Trade, which position did Valdis Dombrovskis during the identical time period? Only return the answer: Figure 3: Default prompt for Closed-Book QA (CBQA) in our proposed COTEMPQA Question: While Valdis Dombrovskis was holding the position of European Commissioner for Trade, which position did Valdis Dombrovskis during the identical time period? Answer: Let’s think step by step, Figure 4: Zero-cot prompt for Closed-Book QA (CBQA) in our proposed COTEMPQA Question: While Valdis Dombrovskis was holding the position of European Commissioner for Trade, which position did Valdis Dombrovskis during the identical time period? Only return the answer: European Commissioner for Internal Market and Services ...... Question: While Eduard Jan Dijksterhuis was working for Leiden University, which employer did Eduard Jan Dijksterhuis work for during the same time span? Only return the answer: Figure 5: Few-shot prompt for Closed-Book QA (CBQA) in our proposed COTEMPQA (5-shot) Question: While Valdis Dombrovskis was holding the position of European Commissioner for Trade, which position did Valdis Dombrovskis during the identical time period? Answer: According to the fact, Valdis Dombrovskis became the European Commissioner for Trade on August 26, 2020. 
He also held the position of European Commissioner for Internal Market and Services from July 16, 2016, to October 12, 2020. This period overlaps with his tenure as Commissioner for Trade. Therefore, the answer is European Commissioner for Internal Market and Services. ...... Question: While Eduard Jan Dijksterhuis was working for Leiden University, which employer did Eduard Jan Dijksterhuis work for during the same time span? Answer: According to the fact, Figure 6: Few-shot&CoT prompt for Closed-Book QA (CBQA) in our proposed COTEMPQA (5-shot) Question: While Valdis Dombrovskis was holding the position of European Commissioner for Trade, which position did Valdis Dombrovskis during the identical time period? Answer: According to the context, Valdis Dombrovskis became the European Commissioner for Trade on August 26, 2020. The datetime can be formed (2020, 8, 26). The content provided and related to the question can be structured as: (Vice-President of the European Commission, (2019, 12, 1)). (European Commissioner for Internal Market and Services, (2016, 6, 16), (2020, 10, 12)). (European Commissioner for An Economy, (2019, 10, 1)). (Prime Minister of Latvia, (2009, 3, 12), (2014, 1, 22)). (Minister of Finance, (2002, 11, 7), (2004, 3, 9)). Given the (2020, 8, 26), compared with all contents related, we find that [(2016, 6, 16)−(2020, 10, 12)]∩(2020, 8, 26) ̸= ∅. Therefore the answer is European Commissioner for Internal Market and Services. ...... Question: While Eduard Jan Dijksterhuis was working for Leiden University, which employer did Eduard Jan Dijksterhuis work for during the same time span? Answer: According to the fact, Figure 7: Few-shot&Mr-CoT prompt for Closed-Book QA (CBQA) in our proposed COTEMPQA (5-shot) 13030 Answer the question based on the context: Context: Valdis Dombrovskis holds the position of Vice-President of the European Commission in December 1, 2019. Valdis Dombrovskis holds the position of European Commissioner for Internal Market and Services from July 16, 2016 to October 12, 2020. Valdis Dombrovskis holds the position of European Commissioner for Trade in August 26, 2020. Valdis Dombrovskis holds the position of European Commissioner for An Economy that Works for People in December 1, 2019. Valdis Dombrovskis holds the position of Prime Minister of Latvia from March 12, 2009 to January 22, 2014. Valdis Dombrovskis holds the position of Minister of Finance from November 7, 2002 to March 9, 2004. Question: While Valdis Dombrovskis was holding the position of European Commissioner for Trade, which position did Valdis Dombrovskis during the identical time period? Only return the answer: Figure 8: Default prompt for Open-Book QA (OBQA) in our proposed COTEMPQA Answer the question based on the context: Context: Valdis Dombrovskis holds the position of Vice-President of the European Commission in December 1, 2019. Valdis Dombrovskis holds the position of European Commissioner for Internal Market and Services from July 16, 2016 to October 12, 2020. Valdis Dombrovskis holds the position of European Commissioner for Trade in August 26, 2020. Valdis Dombrovskis holds the position of European Commissioner for An Economy that Works for People in December 1, 2019. Valdis Dombrovskis holds the position of Prime Minister of Latvia from March 12, 2009 to January 22, 2014. Valdis Dombrovskis holds the position of Minister of Finance from November 7, 2002 to March 9, 2004. 
Question: While Valdis Dombrovskis was holding the position of European Commissioner for Trade, which position did Valdis Dombrovskis during the identical time period? Answer: Let’s think step by step, Figure 9: Zero-cot prompt for Open-Book QA (OBQA) in our proposed COTEMPQA Answer the question based on the context: Context: Valdis Dombrovskis holds the position of Vice-President of the European Commission in December 1, 2019. Valdis Dombrovskis holds the position of European Commissioner for Internal Market and Services from July 16, 2016 to October 12, 2020. Valdis Dombrovskis holds the position of European Commissioner for Trade in August 26, 2020. Valdis Dombrovskis holds the position of European Commissioner for An Economy that Works for People in December 1, 2019. Valdis Dombrovskis holds the position of Prime Minister of Latvia from March 12, 2009 to January 22, 2014. Valdis Dombrovskis holds the position of Minister of Finance from November 7, 2002 to March 9, 2004. Question: While Valdis Dombrovskis was holding the position of European Commissioner for Trade, which position did Valdis Dombrovskis during the identical time period? Only return the answer: European Commissioner for Internal Market and Services ...... Answer the question based on the context: Context: Eduard Jan Dijksterhuis attended University of Groningen from 1911 to 1918. Eduard Jan Dijksterhuis worked for Duke University School of Medicine from August 28, 1912 to April 28, 1923. Eduard Jan Dijksterhuis works for Leiden University from July 5, 1954 to September 5, 1960. Eduard Jan Dijksterhuis worked for American University of Armenia in August, 1911. Eduard Jan Dijksterhuis worked for Austin College from July, 1936 to April, 1947. Eduard Jan Dijksterhuis worked for Sonoma State University in July, 1932. Eduard Jan Dijksterhuis worked for Fairfax Media in December 16, 1942. Eduard Jan Dijksterhuis worked for Canadian Broadcasting Corporation in 1941. Eduard Jan Dijksterhuis works for Utrecht University from May 1, 1953 to September 1, 1960. Eduard Jan Dijksterhuis worked for Jean-Marie Le Pen in January, 1931. Question: While Eduard Jan Dijksterhuis was working for Leiden University, which employer did Eduard Jan Dijksterhuis work for during the same time span? Only return the answer: Figure 10: Few-shot prompt for Open-Book QA (OBQA) in our proposed COTEMPQA (5-shot) 13031 Answer the question based on the context: Context: Valdis Dombrovskis holds the position of Vice-President of the European Commission in December 1, 2019. Valdis Dombrovskis holds the position of European Commissioner for Internal Market and Services from July 16, 2016 to October 12, 2020. Valdis Dombrovskis holds the position of European Commissioner for Trade in August 26, 2020. Valdis Dombrovskis holds the position of European Commissioner for An Economy that Works for People in December 1, 2019. Valdis Dombrovskis holds the position of Prime Minister of Latvia from March 12, 2009 to January 22, 2014. Valdis Dombrovskis holds the position of Minister of Finance from November 7, 2002 to March 9, 2004. Question: While Valdis Dombrovskis was holding the position of European Commissioner for Trade, which position did Valdis Dombrovskis during the identical time period? Answer: According to the context, Valdis Dombrovskis became the European Commissioner for Trade on August 26, 2020. He also held the position of European Commissioner for Internal Market and Services from July 16, 2016, to October 12, 2020. 
This period overlaps with his tenure as Commissioner for Trade. Therefore, the answer is European Commissioner for Internal Market and Services. ...... Answer the question based on the context: Context: Eduard Jan Dijksterhuis attended University of Groningen from 1911 to 1918. Eduard Jan Dijksterhuis worked for Duke University School of Medicine from August 28, 1912 to April 28, 1923. Eduard Jan Dijksterhuis works for Leiden University from July 5, 1954 to September 5, 1960. Eduard Jan Dijksterhuis worked for American University of Armenia in August, 1911. Eduard Jan Dijksterhuis worked for Austin College from July, 1936 to April, 1947. Eduard Jan Dijksterhuis worked for Sonoma State University in July, 1932. Eduard Jan Dijksterhuis worked for Fairfax Media in December 16, 1942. Eduard Jan Dijksterhuis worked for Canadian Broadcasting Corporation in 1941. Eduard Jan Dijksterhuis works for Utrecht University from May 1, 1953 to September 1, 1960. Eduard Jan Dijksterhuis worked for Jean-Marie Le Pen in January, 1931. Question: While Eduard Jan Dijksterhuis was working for Leiden University, which employer did Eduard Jan Dijksterhuis work for during the same time span? Answer: According to the context, Figure 11: Few-shot&CoT prompt for Open-Book QA (OBQA) in our proposed COTEMPQA (5-shot) 13032 Answer the question based on the context: Context: Valdis Dombrovskis holds the position of Vice-President of the European Commission in December 1, 2019. Valdis Dombrovskis holds the position of European Commissioner for Internal Market and Services from July 16, 2016 to October 12, 2020. Valdis Dombrovskis holds the position of European Commissioner for Trade in August 26, 2020. Valdis Dombrovskis holds the position of European Commissioner for An Economy that Works for People in December 1, 2019. Valdis Dombrovskis holds the position of Prime Minister of Latvia from March 12, 2009 to January 22, 2014. Valdis Dombrovskis holds the position of Minister of Finance from November 7, 2002 to March 9, 2004. Question: While Valdis Dombrovskis was holding the position of European Commissioner for Trade, which position did Valdis Dombrovskis during the identical time period? Answer: According to the context, Valdis Dombrovskis became the European Commissioner for Trade on August 26, 2020. The datetime can be formed (2020, 8, 26). The content provided and related to the question can be structured as: (Vice-President of the European Commission, (2019, 12, 1)). (European Commissioner for Internal Market and Services, (2016, 6, 16), (2020, 10, 12)). (European Commissioner for An Economy, (2019, 10, 1)). (Prime Minister of Latvia, (2009, 3, 12), (2014, 1, 22)). (Minister of Finance, (2002, 11, 7), (2004, 3, 9)). Given the (2020, 8, 26), compared with all contents related, we find that [(2016, 6, 16)−(2020, 10, 12)]∩(2020, 8, 26) ̸= ∅. Therefore the answer is European Commissioner for Internal Market and Services. ...... Answer the question based on the context: Context: Eduard Jan Dijksterhuis attended University of Groningen from 1911 to 1918. Eduard Jan Dijksterhuis worked for Duke University School of Medicine from August 28, 1912 to April 28, 1923. Eduard Jan Dijksterhuis works for Leiden University from July 5, 1954 to September 5, 1960. Eduard Jan Dijksterhuis worked for American University of Armenia in August, 1911. Eduard Jan Dijksterhuis worked for Austin College from July, 1936 to April, 1947. Eduard Jan Dijksterhuis worked for Sonoma State University in July, 1932. 
Eduard Jan Dijksterhuis worked for Fairfax Media in December 16, 1942. Eduard Jan Dijksterhuis worked for Canadian Broadcasting Corporation in 1941. Eduard Jan Dijksterhuis works for Utrecht University from May 1, 1953 to September 1, 1960. Eduard Jan Dijksterhuis worked for Jean-Marie Le Pen in January, 1931. Question: While Eduard Jan Dijksterhuis was working for Leiden University, which employer did Eduard Jan Dijksterhuis work for during the same time span? Answer: According to the context, Figure 12: Few-shot&Mr-CoT prompt for Open-Book QA (OBQA) in our proposed COTEMPQA (5-shot) 13033
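The Mr-CoT prompts above (Figures 7 and 12) verbalize a simple interval-intersection test over structured (subject, relation, object, start, end) facts, and the multi-answer analysis in Appendix D relies on set-level precision / recall / F1. A minimal sketch of both pieces is shown below, assuming simplified date handling; the Fact class, the function names, and the treatment of a missing end date as a point-in-time fact are illustrative choices rather than the authors' released code.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional, Set

@dataclass
class Fact:
    """A quintuple fact (s, r, o, t_s, t_e); a missing end date is treated
    as a point-in-time fact (assumption for this sketch)."""
    subject: str
    relation: str
    obj: str
    start: date
    end: Optional[date] = None

    @property
    def interval(self):
        return self.start, self.end or self.start

def cotemporal_relation(fi: Fact, fj: Fact) -> Optional[str]:
    """Return 'equal', 'during', or 'overlap' when the two time spans
    intersect (as in Table 7), or None when they do not intersect."""
    (s1, e1), (s2, e2) = fi.interval, fj.interval
    if max(s1, s2) > min(e1, e2):      # empty intersection of the two spans
        return None
    if (s1, e1) == (s2, e2):
        return "equal"
    if (s2 <= s1 and e1 <= e2) or (s1 <= s2 and e2 <= e1):
        return "during"                # one span fully contains the other
    return "overlap"                   # partial intersection

def answer_f1(predicted: Set[str], gold: Set[str]) -> float:
    """Set-level F1 for questions with several correct answers, the kind of
    scoring discussed around Table 8."""
    if not predicted or not gold:
        return 0.0
    overlap = len(predicted & gold)
    if overlap == 0:
        return 0.0
    p, r = overlap / len(predicted), overlap / len(gold)
    return 2 * p * r / (p + r)

# The "During" example from Table 7.
ias = Fact("Yaiza Canzani", "works for", "Institute for Advanced Study",
           date(2014, 7, 1), date(2015, 7, 31))
harvard = Fact("Yaiza Canzani", "works for", "Harvard University",
               date(2013, 6, 30), date(2016, 6, 30))
print(cotemporal_relation(ias, harvard))            # -> "during"
print(answer_f1({"Alka"}, {"Sampension", "Alka"}))  # -> 0.667
```

The "during" example reproduces the Yaiza Canzani case from Table 7, and the F1 call mirrors the incomplete-answer case in Table 9, where predicting only "Alka" against the gold set {"Sampension", "Alka"} receives partial credit.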
2024
703
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13034–13054 August 11-16, 2024 ©2024 Association for Computational Linguistics CRITIQUELLM: Towards an Informative Critique Generation Model for Evaluation of Large Language Model Generation Pei Ke1,∗, Bosi Wen1,2,∗,†, Zhuoer Feng1,2,∗,†, Xiao Liu3,2,∗, Xuanyu Lei3,2,†, Jiale Cheng1,2,†, Shengyuan Wang3,2,†, Aohan Zeng3,2,†, Yuxiao Dong3, Hongning Wang1, Jie Tang3, Minlie Huang1,‡ 1The Conversational Artificial Intelligence (CoAI) Group, Tsinghua University 2Zhipu AI 3The Knowledge Engineering Group (KEG), Tsinghua University [email protected], {wbs23,fze22,liuxiao21}@mails.tsinghua.edu.cn, [email protected] Abstract Since the natural language processing (NLP) community started to make large language models (LLMs) act as a critic to evaluate the quality of generated texts, most of the existing works train a critique generation model on the evaluation data labeled by GPT-4’s direct prompting. We observe that these models lack the ability to generate informative critiques in both pointwise grading and pairwise comparison especially without references. As a result, their generated critiques cannot provide fine-grained distinguishability on generated texts, causing unsatisfactory evaluation performance. In this paper, we propose a simple yet effective method called Eval-Instruct, which can first acquire pointwise grading critiques with pseudo references and then revise these critiques via multipath prompting to obtain informative evaluation data in different tasks and settings, including pointwise grading and pairwise comparison with / without references. After fine-tuning on these data, the resulting model CRITIQUELLM is empirically shown to outperform ChatGPT and all the open-source baselines and even achieve comparable evaluation performance to GPT-4 in system-level correlations of pointwise grading. We also demonstrate that our generated critiques can act as scalable feedback to further improve the generation quality of strong LLMs like ChatGPT1. 1 Introduction Recently, large language models (LLMs) (OpenAI, 2022, 2023; Touvron et al., 2023a) have been improved rapidly and approached human-level performance on various natural language processing ∗Equal contribution †Work done when these authors interned at Zhipu AI. ‡Corresponding author 1The codes are available at https://github.com/ thu-coai/CritiqueLLM. (NLP) tasks, such as question answering, text summarization, dialogue generation, and code generation (Laskar et al., 2023). How to automatically measure the performance of LLMs has now become an essential research problem and attracted extensive attention (Chang et al., 2023; Zhang et al., 2023; Liu et al., 2024). Strong evaluation methods are expected to provide high-quality critiques (including not only rating scores but also explanations) that act as scalable feedback and guide LLMs to improve persistently (Cui et al., 2023). Traditional evaluation metrics, usually based on n-gram overlap between generated texts and reference texts (such as BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004)), have limited effectiveness. Recent works mostly resort to model-based evaluation metrics, especially LLM-based ones (Wang et al., 2023a; Liu et al., 2023b; Zheng et al., 2023). 
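As a toy illustration of the limitation of overlap-based metrics mentioned above, the scorer below computes a plain unigram F1 between a generated answer and a reference; the function and the examples are ours (real BLEU / ROUGE add n-gram, stemming, and length refinements), and the point is only that a fluent but factually wrong answer can still receive a high overlap score.

```python
from collections import Counter

def unigram_f1(candidate: str, reference: str) -> float:
    """Harmonic mean of unigram precision and recall, a crude stand-in
    for overlap metrics in the BLEU-1 / ROUGE-1 family."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    p, r = overlap / sum(cand.values()), overlap / sum(ref.values())
    return 2 * p * r / (p + r)

reference = "the capital of australia is canberra"
good = "canberra is the capital of australia"
wrong = "the capital of australia is sydney"
print(unigram_f1(good, reference))   # 1.0   (correct answer, different word order)
print(unigram_f1(wrong, reference))  # ~0.83 (factually wrong, but high overlap)
```

This failure mode, surface overlap rewarding a wrong but fluent answer, is exactly what motivates critique-based, LLM-level evaluation.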
Since most of the best-performing LLMs such as ChatGPT (OpenAI, 2022) and GPT-4 (OpenAI, 2023) can only be accessed via OpenAI APIs, researchers start to automatically collect evaluation data by directly prompting GPT-4 and train their own evaluation models, aiming to avoid potential risks of commerical APIs, such as high cost, unstable usage, and data leakage (Zheng et al., 2023; Wang et al., 2024; Li et al., 2024). However, we argue that these evaluation models are still struggling to generate informative critiques in different evaluation tasks including pointwise grading and pairwise comparison. Especially in the challenging reference-free setting, these models tend to generate general critiques without finegrained distinguishability on generated texts, causing unsatisfactory evaluation performance (Zheng et al., 2023). In this work, we propose a simple yet effective method called Eval-Instruct, which can automatically construct informative instruction-tuning data for different evaluation tasks and settings, including pointwise grading and pairwise comparison 13034 with / without references. Our main idea is to fully utilize referenced pointwise grading critiques, which are shown to possess rich information with the assistance of references and elaborate prompt design (Zheng et al., 2023; Liu et al., 2023a), to construct evaluation data for other tasks and settings. Specifically, after acquiring pointwise grading critiques with pseudo references via GPT-4, we devise a multi-path prompting method including two strategies: 1) Pointwise-to-Pairwise Prompting aims to inject pointwise grading critiques into pairwise critiques and enrich them with more information about the respective quality of text pairs. 2) Referenced-to-Reference-Free Prompting is targeted at removing direct comparison with references in referenced critiques, while keeping other details to improve the specificity of reference-free critiques. The evaluation data in different tasks and settings can be acquired via different paths consisting of these two strategies. And we also design a cross validation mechanism to improve the data quality of reference-free pairwise comparison because both of the two paths reach this task. After fine-tuning on the data of all the tasks and settings, the resulting model CRITIQUELLM is empirically shown to outperform all the opensource baselines and even achieve comparable performance with GPT-4 in system-level correlations of pointwise grading. We also show the potential of CRITIQUELLM to act as effective feedback to enhance the performance of LLMs like ChatGPT. Our main contributions are as follows: • We propose an evaluation data construction method called Eval-Instruct to automatically acquire informative evaluation data in both pointwise grading and pairwise comparison with / without references. • We conduct extensive experiments on CRITIQUELLM, which is fine-tuned on the data constructed by Eval-Instruct. Experimental results on three instruction following benchmark datasets show that our model can outperform all the open-source baselines and even perform comparably with GPT-4 in system-level correlations of pointwise grading. • We reveal the potential of CRITIQUELLM to guide LLMs to improve persistently by showing the positive impact of our generated critiques as scalable feedback on the generation quality of LLMs. 2 Related Work Evaluation is a long-standing task in NLP, which becomes more challenging with the rapid development of LLMs (Celikyilmaz et al., 2020; Chang et al., 2023). 
Currently, there are mainly two lines of work on LLM evaluation, including NLU-style and NLG-style evaluations. NLU-style evaluation methods utilize natural language understanding (NLU) tasks such as multi-choice QA to measure the performance of LLMs via simple objective metrics (such as accuracy and F1 score) (Hendrycks et al., 2021; Zhong et al., 2023; Huang et al., 2023b), which may deviate from the common usage of LLMs and may not exactly reflect the ability of LLMs in generating responses for user queries. NLG-style evaluation methods extend metrics for natural language generation (NLG) tasks and expect to apply them to the measurement of LLM’s performance, which are the main focus of this paper. Compared with early metrics that depend on the n-gram overlap between generated texts and reference texts (Papineni et al., 2002; Banerjee and Lavie, 2005; Lin, 2004), recently proposed metrics based on state-of-the-art LLMs like GPT-4 (OpenAI, 2023) are shown to be strong evaluators due to the encouraging effectiveness of LLMs and the simplicity of formulating evaluation tasks as instruction-following tasks (Wang et al., 2023a; Chen et al., 2023; Liu et al., 2023b; Zheng et al., 2023; Ke et al., 2023; Fu et al., 2023). Since most of the state-of-the-art LLMs can only be accessed via APIs, researchers start to automatically collect evaluation data by directly prompting GPT-4 and train their own evaluation models to provide stable and effective evaluations at a lower cost (Wang et al., 2024; Li et al., 2024; Kim et al., 2024). The concurrent works similar to ours are the LLMs specially trained for evaluation tasks like PandaLM (Wang et al., 2024), JudgeLM (Zhu et al., 2023), and AUTO-J (Li et al., 2024). For comparison, our work is the first attempt to deal with the challenge of uninformative critique generation which commonly appears in recent LLMbased evaluation models especially without references. Instead of prompting GPT-4 directly, our proposed Eval-Instruct can fully utilize the connection among different evaluation tasks and settings to construct informative evaluation data, which are empirically shown to improve the quality of generated critiques. 13035 3 Method 3.1 Task Definition and Method Overview This paper mainly involves two typical evaluation tasks: 1) Pointwise Grading: Given a user query q, a LLM-generated text x, and a reference text r (omitted in the reference-free setting), the goal is to obtain a critique c including a rating score and an explanation to support this score. 2) Pairwise Comparison: Given a user query q, two LLMgenerated texts x1 and x2, and a reference text r (omitted in the reference-free setting), our purpose is to acquire a critique c including a comparison label (i.e., win / tie / lose) and an explanation to support this label. Our method consists of the following steps. We first construct an informative instruction-tuning dataset for different evaluation tasks and settings, including pointwise grading and pairwise comparison with / without references (§3.2). Specifically, after collecting user queries, LLM-generated texts, and pseudo references (§3.2.1), we can acquire high-quality referenced pointwise grading critiques via elaborately prompting GPT-4. Then, we devise a multi-path prompting method to construct informative evaluation data in other tasks and settings, which covers pointwise-to-pairwise and referenced-to-reference-free prompting strategies (§3.2.2). 
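A minimal sketch of the evaluation inputs and critique outputs defined in §3.1 is given below before the pipeline description continues; the dataclass names and the assumed 1-10 score range are illustrative, not the paper's released interface.

```python
from dataclasses import dataclass
from typing import Literal, Optional

@dataclass
class EvalInput:
    query: str                            # user query q
    generation: str                       # generated text x (or x1 in pairwise comparison)
    generation_b: Optional[str] = None    # x2, only present for pairwise comparison
    reference: Optional[str] = None       # r, omitted in the reference-free setting

@dataclass
class PointwiseCritique:                  # output for pointwise grading
    explanation: str
    score: int                            # e.g. a 1-10 scale (assumed here)

@dataclass
class PairwiseCritique:                   # output for pairwise comparison
    explanation: str
    label: Literal["win", "tie", "lose"]  # verdict for x1 relative to x2

def task_of(example: EvalInput) -> str:
    """Name the task / setting combination that an evaluation input falls into."""
    task = "pairwise" if example.generation_b is not None else "pointwise"
    setting = "referenced" if example.reference is not None else "reference-free"
    return f"{task}-{setting}"
```

A single critique model is then expected to produce either output type depending on which fields are present, which is what the multi-task fine-tuning in §3.3 trains for.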
Since there are two paths to obtain reference-free pairwise comparison data, we design a cross validation mechanism to filter out the contradictory data and improve the quality (§3.2.3). Finally, we perform supervised fine-tuning on the automatically constructed evaluation data in a multi-task manner to train a unified critique generation model for different evaluation tasks and settings (§3.3).

3.2 Evaluation-Oriented Instruction Data Construction (Eval-Instruct)

3.2.1 Pseudo Reference Collection

To construct instruction-tuning data for evaluation, it is imperative to first obtain the evaluation input, including user queries, LLM-generated texts, and references. We refer to recent works on instruction following (Liu et al., 2023a; Li et al., 2024; Zhang et al., 2024) and merge their task taxonomies to consider ten instruction following tasks covering diverse NLP applications in real-world scenarios2. We utilize self-instruct (Wang et al., 2023d) to augment the publicly available seed queries of these tasks and conduct strict filtering to improve the data quality. The details are provided in Appendix A. Then, we collect LLM-generated texts from 10 representative models, which cover different levels of generation quality, including GPT-4 (OpenAI, 2023), ChatGPT (OpenAI, 2022), two versions of ChatGLM (Du et al., 2022; Zeng et al., 2023), MOSS (Sun et al., 2023), Minimax3, Sparkdesk4, Chinese-Llama2-7B-Chat5, Baichuan2-13B-Chat (Yang et al., 2023), and Ernie Bot6. We further filter the generated results by removing a small number of failure cases, such as empty responses. Finally, we select the best-performing LLM (i.e., GPT-4) and manually check its generated texts for each user query, revising them if necessary to improve the quality. These manually checked and revised texts then act as pseudo references to assist the evaluation data construction.

2Our task taxonomy contains fundamental language ability, advanced Chinese understanding, open-ended question answering, writing ability, logical reasoning, mathematics, task-oriented role play, professional knowledge, code generation, and multi-lingual ability.
3https://api.minimax.chat/
4https://xinghuo.xfyun.cn/
5https://huggingface.co/FlagAlpha/Llama2-Chinese-7b-Chat/
6https://yiyan.baidu.com/

3.2.2 Multi-Path Prompting

To acquire high-quality evaluation data in different evaluation tasks and settings, we first construct referenced pointwise grading critiques by prompting GPT-4 with the assistance of pseudo references and well-designed prompts like Liu et al. (2023a), which are empirically shown to be informative (Zheng et al., 2023). Then, regarding this setting as a starting point, we devise a multi-path prompting method to obtain evaluation data in other tasks and settings. As shown in Figure 1, there are two main prompting strategies: (1) Pointwise-to-Pairwise Prompting ($f_{P2P}$): This prompting strategy injects pointwise grading critiques of generated texts into pairwise comparison critiques, enriching them with information about the respective text quality. Meanwhile, it requires self-reflection on the pointwise critiques generated by GPT-4 before obtaining the final pairwise comparison results. (2) Referenced-to-Reference-Free Prompting ($f_{R2RF}$): This prompting strategy aims to remove direct comparison with references while keeping informative content from references. It also requires GPT-4 to self-reflect7 on whether the evaluation results, including scores / labels and revised explanations, are consistent, and to modify the results if necessary.

7The purpose of self-reflection in the two strategies is to alleviate the inconsistency problem in the output critiques, reducing error propagation during the data construction process.

Figure 1: Overview of Eval-Instruct. Starting from referenced pointwise grading data, our proposed multi-path prompting method can apply pointwise-to-pairwise and referenced-to-reference-free prompting strategies to acquire evaluation data in other tasks and settings via two different paths. Cross validation is adopted to filter out the contradictory data from these two paths and further improve the data quality.

Equipped with the above prompting strategies, we have two paths to construct evaluation data in different tasks and settings. Assume that $D^{point,r} = \{(q_i, r_i, x_i, c_i^{point,r})\}_{i=1}^{N}$ indicates the referenced pointwise grading dataset constructed above and $c_i^{point,r}$ represents the critique in the corresponding setting. Our purpose is to acquire the datasets $D^{pair,r}$, $D^{point,rf}$, and $D^{pair,rf}$ via different paths, where point / pair means pointwise / pairwise evaluation and r / rf indicates referenced / reference-free evaluation, respectively. The two paths are devised as follows.
Path#1: Dpoint,r fP 2P −−−→Dpair,r fR2RF −−−−→Dpair,rf As shown in Path#1 of Figure 1, we firstly conduct pointwise-to-pairwise prompting to acquire the referenced pairwise comparison dataset Dpair,r = {(qi, ri, xi,1, xi,2, cpair,r i )}M i=1: cpair,r i = fP2P (qi, ri, xi,1, xi,2, cpoint,r i,1 , cpoint,r i,2 ) i = 1, 2, · · · , M (1) where qi, ri, xi,1, xi,2 indicate the user query, the reference, and two generated texts of the i-th data, 7The purpose of self-reflection in the two strategies is to alleviate the inconsistency problem in the output critiques, reducing error propagation during the data construction process. respectively. cpoint,r i,1 , cpoint,r i,2 , cpair,r i are the referenced pointwise and pairwise evaluation results of xi,1, xi,2, respectively8. Then, we can apply referenced-to-reference-free prompting to obtain Dpair,rf = {(qi, xi,1, xi,2, cpair,rf i )}: cpair,rf,1 i = fR2RF (qi, ri, xi,1, xi,2, cpair,r i ) i = 1, 2, · · · , M (2) where cpair,rf,1 i means the reference-free pairwise comparison critique of the i-th data from Path#1. Path#2: Dpoint,r fR2RF −−−−→Dpoint,rf fP 2P →Dpair,rf Similarly, as shown in Path#2 of Figure 1, we can exchange the order of two prompting strategies applied to Dpoint,r accordingly. In this way, we can in turn acquire Dpoint,rf and Dpair,rf: cpoint,rf i = fR2RF (qi, ri, xi, cpoint,r i ) i = 1, 2, · · · , N (3) cpair,rf,2 i = fP2P (qi, xi,1, xi,2, cpoint,rf i,1 , cpoint,rf i,2 ) i = 1, 2, · · · , M (4) where cpair,rf,2 i denotes the reference-free pairwise comparison critique of the i-th data from Path#2. 8We conduct strictly rule-based filtering after each prompting step to remove low-quality data with errors in format and other aspects, which is omitted in this subsection. 13037 3.2.3 Cross Validation Since both of the two paths finally reach Dpair,rf, we design a cross validation mechanism to further improve the data quality. Specifically, Dpair,rf only contains the data whose comparison labels from two paths are consistent. In this case, the critiques from both of the two paths are added to Dpair,rf. The other data with contradictory comparison labels are strictly filtered. In our experiment, the proportion of the evaluation data which are filtered out is 7.7%, demonstrating that most of our constructed data from the two paths have consistent comparison labels, indicating acceptable data quality. 3.3 Supervised Fine-Tuning We perform supervised fine-tuning on the LLM Pθ using all the constructed training data in a multitask manner to obtain CRITIQUELLM: L = −1 N N X i=1 Pθ(cpoint,r i |qi, ri, xi) −1 N N X i=1 Pθ(cpoint,rf i |qi, xi) −1 M M X i=1 Pθ(cpair,r i |qi, ri, xi,1, xi,2) −1 M ′ M ′ X i=1 Pθ(cpair,rf i |qi, xi,1, xi,2) where M ′ indicates the data amount of Dpair,rf after cross validation. During fine-tuning, we follow Bai et al. (2022) to add simplified prompts to distinguish different parts of inputs. We also follow Li et al. (2024) to augment pairwise training data via swapping the order of two generated texts and exchanging the corresponding contents in critiques. 4 Experiment 4.1 Dataset We adopt three benchmark datasets on open-ended instruction following, which involve various NLP tasks in LLM’s real-world scenarios9. The datasets also cover all the evaluation tasks and settings in this paper. The statistics are shown in Table 1. 
AlignBench (Liu et al., 2023a): This benchmark includes 8 categories of instruction following tasks 9We have conducted string matching to show that there is no overlap between the queries in the training and test sets. Dataset Task Setting #Models #Samples / #Pairs Length AlignBench Pointwise R / R-F 8 3,200 274 Pairwise R / R-F 8 1,600 293 AUTO-J (Eval-P) Pairwise R-F 6 1,392 372 LLMEval Pairwise R-F 11 1,530 283 Table 1: Statistics of the benchmark datasets, including the evaluation task / setting, the number of models / samples / pairs, and the average length of generated texts. R / R-F indicates referenced / reference-free evaluation, respectively. and 8 LLMs for generation. It provides an evaluation dataset with human-annotated scores on the quality of generated texts. In addition to using human-annotated scores for measuring pointwise grading performance, we also follow the original paper to sample text pairs of the same query for pairwise comparison10, whose label is automatically determined by their pointwise scores. AUTO-J (Eval-P) (Li et al., 2024): This benchmark provides 1,392 pairwise comparison data, each of which contains a user query, two LLMgenerated texts, and a human-annotated preference label. These data involve 58 real-world scenarios and 6 model families for generation. LLMEval (Zhang et al., 2024): This benchmark designs 17 types of user queries covering representative NLP tasks in real-world scenarios, and provides ∼100,000 pairwise comparison data with human-annotated labels. Due to the limitation of computational resources and API costs for LLMbased evaluation methods, we randomly sample a subset (∼1,000) to measure the performance of our method and all the baselines for a fair comparison. As for the relationship between our training dataset in §3.2 and these benchmark datasets, our training dataset has similar task categories with AlignBench because our task taxonomy is built mainly based on AlignBench (Liu et al., 2023a) with other tasks in recent works (Li et al., 2024; Zhang et al., 2024) as supplementary, as described in §3.2.1. Also, our training dataset includes the training data of AUTO-J (Eval-P) (Li et al., 2024) while excluding its test set. Compared with AlignBench and AUTO-J (Eval-P), LLMEval (Zhang et al., 2024) does not have a similar task or data 10The authors in the original paper of AlignBench (Liu et al., 2023a) collect all the pairs of generated texts for each query (∼10,000 pairwise comparison data), causing high demand of computational resources and API costs for LLM-based evaluation methods. Thus, we randomly sample a subset (∼1,000 pairwise comparison data) to test our method and all the baselines for a fair comparison. 
13038 Level Text-Level System-Level Setting Referenced Reference-Free Referenced Reference-Free Metric r ρ τ r ρ τ r ρ τ r ρ τ Closed-Source Evaluation Models ChatGPT 0.443 0.421 0.379 0.292 0.287 0.266 0.955 0.976 0.929 0.778 0.833 0.643 GPT-4 0.629 0.583 0.532 0.523 0.494 0.447 0.995 1.000 1.000 0.997 0.976 0.929 Open-Source Evaluation Models ChatGLM3-6B 0.223 0.222 0.207 0.159 0.150 0.140 0.790 0.833 0.643 0.544 0.548 0.429 Baichuan2-13B-Chat 0.199 0.200 0.187 0.125 0.117 0.110 0.854 0.929 0.786 0.663 0.527 0.400 Qwen-14B-Chat 0.373 0.379 0.358 0.255 0.254 0.239 0.901 0.929 0.786 0.772 0.833 0.643 Mixtral-8x7B 0.474 0.471 0.426 0.302 0.306 0.282 0.972 0.976 0.929 0.863 0.929 0.786 Llama-2-70B-Chat 0.152 0.162 0.109 0.123 0.122 0.113 0.663 0.667 0.500 0.547 0.429 0.286 JudgeLM-13B 0.450 0.430 0.391 0.170 0.162 0.155 0.984 0.976 0.929 0.717 0.905 0.786 AUTO-J-Bilingual-6B 0.044 0.045 0.041 0.558 0.571 0.500 CRITIQUELLM (Ours) 0.555 0.523 0.477 0.366 0.352 0.319 0.995 1.000 1.000 0.954 0.976 0.929 Table 2: Text-level and system-level Pearson (r), Spearman (ρ), and Kendall (τ) correlations in referenced and reference-free settings of pointwise grading on AlignBench. The highest correlation among the methods based on local models is bold, while the highest correlation overall is underlined. - means that AUTO-J-Bilingual-6B cannot support referenced pointwise grading. distribution with our training dataset, which can act as a benchmark to test the generalization ability. 4.2 Baselines We choose state-of-the-art general LLMs and evaluation-specific LLMs as our baselines. General LLMs: We adopt ChatGPT (gpt-3.5-turbo-1106) (OpenAI, 2022), GPT4 (gpt-4-1106-preview) (OpenAI, 2023), ChatGLM3-6B (Du et al., 2022; Zeng et al., 2023), Baichuan2-13B-Chat (Yang et al., 2023), Qwen14B-Chat (Bai et al., 2023), Llama-2-70B-Chat (Touvron et al., 2023b), and Mixtral-8x7B (Jiang et al., 2024) as our general baselines. These general LLMs can perform as an evaluator for pointwise grading and pairwise comparison via elaborate prompts without further training. We directly prompt these LLM to obtain evaluation results in single-turn interaction. Evaluation-Specific LLMs: We select AUTO-JBilingual-6B (Li et al., 2024) and JudgeLM-13B (Zhu et al., 2023) as our task-specific baselines. These two baselines are designed for specific evaluation tasks and settings. 4.3 Implementation Details We choose ChatGLM3-6B (Du et al., 2022; Zeng et al., 2023) as our base model and use Zero Redundancy Optimizer (ZeRO) (Rajbhandari et al., 2020) stage 2 framework from the Deepspeed (Rasley et al., 2020) library. CRITIQUELLM is trained on 8 A800 GPUs. The number of training samples for Dpoint,r/Dpoint,rf/Dpair,r/Dpair,rf is 12,102 / 12,095 / 6,190 / 5,428, respectively. We use AdamW (Kingma and Ba, 2015) optimizer with the weight decay of 0.1. The peak learning rate is 6e-5 with 10% warmup ratio. We set the maximum sequence length to 8,192 and the batch size to 64. The number of training epochs is 5. We use greedy decoding in the main result and investigate the effect of different decoding methods on our model in §4.7. For beam search, we set the beam size to 4. For the sampling-based decoding method, we adopt Nucleus Sampling (i.e., Top-p Sampling) (Holtzman et al., 2020) and set both the temperature and p to 0.9. For self-consistency decoding (Wang et al., 2023c), the number of candidate critiques is 5. 4.4 Main Results 4.4.1 Pointwise Grading Following Colombo et al. 
(2022), we adopt textlevel and system-level Pearson (r), Spearman (ρ), and Kendall (τ) correlation coefficients between human judgments and automatic metrics to measure the pointwise grading performance. Text-level correlation is computed by the average score over the correlation coefficients between human judgments and automatic metrics for all the generated texts of each instruction. For comparison, system-level correlation is obtained by the correlation coefficients between human judgments and automatic metrics of each LLM’s score, which is the average value over all the scores of the corresponding model on 13039 Dataset AlignBench AUTO-J (Eval-P) LLMEval Setting Referenced Reference-Free Reference-Free Reference-Free Metric Agr. Cons. Agr. Cons. Agr. Cons. Agr. Cons. Closed-Source Evaluation Models ChatGPT 32.50 38.56 39.56 53.94 42.74 62.43 40.07 64.58 GPT-4 74.69 86.75 70.25 84.88 62.28 86.28 50.98 84.71 Open-Source Evaluation Models ChatGLM3-6B 17.75 31.84 24.75 42.88 14.15 26.22 28.56 51.70 Baichuan2-13B-Chat 35.81 50.06 27.06 40.82 19.40 32.33 23.53 43.27 Qwen-14B-Chat 33.81 43.25 42.06 58.75 31.68 52.08 42.81 69.61 Mixtral-8x7B 61.69 74.06 53.88 72.25 35.20 52.66 48.04 79.02 Llama-2-70B-Chat 40.56 57.13 41.38 64.19 33.62 56.90 40.00 68.50 JudgeLM-13B 42.50 66.00 35.13 58.19 44.77 75.82 AUTO-J-Bilingual-6B 26.00 45.38 49.43 77.23 27.58 55.56 CRITIQUELLM (Ours) 70.56 89.25 58.81 83.06 50.93 82.76 50.72 85.95 Table 3: Agreement (Agr.) and consistency (Cons.) rates in pairwise comparison evaluation. The highest correlation among the methods based on local models is bold, while the highest correlation overall is underlined. - means that JudgeLM-13B and AUTO-J-Bilingual-6B cannot support referenced pairwise comparison. Figure 2: Critique quality evaluation results. The percentages indicate the preference results between CRITIQUELLM and other models via GPT-4’s evaluation and human verification. the dataset. The results in Table 2 show that CRITIQUELLM can achieve comparable performance with GPT-4 especially in system-level correlations, while outperforming ChatGPT and all the open-source baselines. This indicates that our proposed method can successfully improve the quality of generated critiques. We can observe that system-level correlations of CRITIQUELLM are almost the same as those of GPT-4, which even approach 1,0. This demonstrate that our model is nearly able to distinguish the overall performance of all the eight LLMs. 4.4.2 Pairwise Comparison Following Li et al. (2024), we adopt agreement and consistency rates to test the pairwise comparison performance. Specifically, we conduct two comparisons for each data sample via swapping the order of two generated texts. We consider the model’s evaluation result to agree with humans only when the two comparison results are consistent and align with the human preference label. The results in Table 3 show that CRITIQUELLM can beat ChatGPT and all the open-source baselines in both agreement and consistency rates. Compared with GPT-4, CRITIQUELLM achieves comparable performance especially in the consistency rate. This indicates that CRITIQUELLM equipped with high-quality evaluation data in different tasks and settings not only performs well in pointwise grading, but also has a strong evaluation ability in pairwise comparison. 4.5 Analysis on Critique Quality To further measure the quality of generated critiques, we follow Chen et al. (2024) to combine automatic and human evaluations. 
4.5 Analysis on Critique Quality
To further measure the quality of generated critiques, we follow Chen et al. (2024) to combine automatic and human evaluations. Specifically, we follow existing works (Wang et al., 2023b; Sun et al., 2024) to devise an evaluation prompt for GPT-4 to judge the quality of generated critiques. After GPT-4's evaluation, we manually verify the results and modify them if necessary. We randomly select 100 evaluation samples in the setting of pairwise comparison, drawn from the mix of the three datasets. We then collect generated critiques from CRITIQUELLM, state-of-the-art evaluators (i.e., ChatGPT and GPT-4), and an alternative model CRITIQUELLM (DP) whose training data in different tasks and settings are acquired from GPT-4's direct prompting. For each pair of critiques (one from CRITIQUELLM and the other from a baseline / an alternative model, given the same evaluation input), GPT-4 is required to label which critique is better (i.e., win, lose, or tie) in terms of correctness, helpfulness, and informativeness. The priority of these three aspects is set to follow the above order. Then, human verification is conducted to check GPT-4's evaluation of the critiques. The results are shown in Figure 2. We can observe that CRITIQUELLM achieves superior performance over ChatGPT and CRITIQUELLM (DP), and even performs comparably with GPT-4. This demonstrates that our proposed evaluation data construction method can successfully improve the overall quality of generated critiques and enhance their informativeness.

Critique Model   Overall  Logical  Open-ended QA  Professional  Fundamental  Mathematics  Role Play  Writing  Chinese Understanding
None             6.385    5.318    7.000          5.824         6.310        6.160        7.260      7.154    6.000
ChatGPT          6.300    5.045    6.762          6.353         6.276        5.760        7.000      6.885    6.063
GPT-4            6.545    4.455    7.190          6.588         6.897        6.200        7.111      7.077    6.563
CRITIQUELLM      6.530    5.136    7.381          6.765         6.414        6.000        7.407      7.192    5.315
Table 4: GPT-4's referenced pointwise scores on AlignBench for original generated texts from ChatGPT (i.e., None) and modified texts based on each critique generation model, respectively.

4.6 Analysis of Critique as Feedback
To investigate whether the critiques generated by our model can serve as feedback to improve the quality of LLM-generated texts, we employ ChatGPT, GPT-4, and CRITIQUELLM to provide critiques for the generated texts of ChatGPT in the reference-free setting. Then, we instruct ChatGPT to modify its original generation based on the critiques. Finally, we use GPT-4 to perform referenced evaluations on the original texts and the modified texts generated by ChatGPT, respectively.
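The critique-as-feedback procedure just described can be summarized in a short sketch. Here call_llm is a placeholder for whichever API backs the generator, critic, and judge, and the prompt wording is paraphrased rather than quoted from the paper:

```python
# Illustrative sketch of the critique-as-feedback loop in Section 4.6.
# `call_llm(model, prompt)` is a placeholder for an API call to the named model;
# all prompt wording below is paraphrased, not the paper's exact prompts.
def call_llm(model: str, prompt: str) -> str:
    raise NotImplementedError("plug in a client for ChatGPT / GPT-4 / CritiqueLLM")

def refine_with_critique(query: str, critic: str, generator: str = "chatgpt") -> dict:
    # 1. The generator (ChatGPT in the paper) answers the query.
    original = call_llm(generator, query)
    # 2. A critic model writes a reference-free critique of that answer.
    critique = call_llm(critic, f"Critique the following response.\n"
                                f"Query: {query}\nResponse: {original}")
    # 3. The generator revises its answer conditioned on the critique.
    revised = call_llm(generator, f"Query: {query}\nYour previous response: {original}\n"
                                  f"Critique: {critique}\nRevise your response accordingly.")
    # 4. A judge (GPT-4 with a reference answer in the paper) then scores both
    #    versions; the score difference measures the critique's usefulness as feedback.
    return {"original": original, "critique": critique, "revised": revised}
```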
The results in Table 4 show that the critiques from CRITIQUELLM can serve as positive feedback whose contributed improvement on the overall score is close to that from GPT-4's critiques. This further verifies the utility of CRITIQUELLM in providing informative critiques as scalable feedback that can guide LLMs towards better generation. We also notice that the critiques from ChatGPT itself have a negative impact on the overall quality of its generated texts. This phenomenon is consistent with recent works that doubt the self-correction ability of LLMs without external inputs (Huang et al., 2023a; Stechly et al., 2023; Valmeekam et al., 2023). We also report the evaluation scores before and after the critique-based modification across different tasks in Table 4. It is notable that the critiques from CRITIQUELLM can help enhance the quality of generated texts in a majority of tasks. However, in the tasks of logical reasoning, mathematics, and advanced Chinese understanding, which are mostly hard tasks involving reasoning, the critiques from CRITIQUELLM seem to degrade the performance. We manually checked error cases and found that our model produced misleading critiques on the reasoning process of generated texts. Since the evaluation of reasoning chains remains a challenging task (Golovneva et al., 2023) even for GPT-4, we leave further investigation of these tasks as future work.
Since our experiment is a preliminary step towards utilizing critiques as feedback, we additionally have some findings which may inspire future research. First, although incorporating human critiques would allow a comparison between the generation performance assisted by critiques from humans and from LLMs, we notice that it is not trivial to collect high-quality critiques from human annotators for AlignBench, especially in the reference-free setting. This is because AlignBench is designed to be difficult and covers a wide range of tasks (Liu et al., 2023a). Thus, how to collect high-quality human critiques to improve the generation quality of LLMs is worth further exploring. Second, since we choose ChatGPT as the generation model, we find that stronger LLMs which can already generate high-quality responses struggle to be further improved via generated critiques. While weaker LLMs have a lot of room for improvement, they also have a weaker ability to follow instructions. Thus, how to make weaker LLMs follow critiques to generate texts of higher quality should be left as important future work.

4.7 Ablation Study
To further investigate the impact of each part on CRITIQUELLM, we conduct additional ablation studies.

Setting                                  Pointwise         Pairwise
                                         R        R-F      R        R-F
Metric                                   r        r        Agr.     Agr.
CRITIQUELLM                              0.555    0.366    70.56    58.81
Fine-Tuning Data   w/o Cross Validation  0.566    0.361    66.13    57.44
Decoding Strategy  w/ Beam Search        0.554    0.374    70.31    57.75
                   w/ Sampling           0.547    0.353    68.69    57.31
                   w/ Self-Consistency   0.573    0.384    69.13    58.44
Explanation        w/o Explanation       0.509    0.332    60.19    51.56
Table 5: Text-level Pearson (r) correlations and agreement rates (Agr.) of ablation models in reference (R) and reference-free (R-F) settings of AlignBench.

For fine-tuning data, we remove the cross validation module (§3.2.3) to explore its impact on the evaluation performance. Table 5 shows that the performance of CRITIQUELLM degrades especially in pairwise comparison, demonstrating that cross validation can filter out low-quality evaluation data and contribute to the final performance. As for decoding strategies, we show the evaluation performance of three decoding strategies in addition to the greedy decoding used in the main result, including beam search, Nucleus Sampling (Holtzman et al., 2020), and self-consistency decoding (Wang et al., 2023c). The results in Table 5 show that the self-consistency decoding method can enhance the performance of our model especially in pointwise grading. Meanwhile, greedy decoding performs best in pairwise comparison, while achieving comparable performance with the other methods in pointwise grading at a smaller computational cost. For evaluation explanations, we remove the explanations in the critiques of the training data. The results in Table 5 show that the performance of CRITIQUELLM largely degrades in both pointwise and pairwise evaluations without explanations. This verifies the positive impact of explanations on the final performance, which play a similar role to chain-of-thought reasoning (Wei et al., 2022).
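For the self-consistency decoding variant in Table 5, one plausible implementation samples several critiques and aggregates their final verdicts. The sketch below assumes the output formats defined by the prompts in the appendix (e.g., {'Overall Score': '5'} and {'Overall Comparison Result': 'Assistant 1'}); the aggregation rules (mean score for pointwise grading, majority vote for pairwise comparison) are assumptions rather than the authors' documented choice:

```python
# Sketch of self-consistency decoding over critiques (cf. Wang et al., 2023c).
# `generate_critique(prompt)` stands in for sampling one critique (e.g., nucleus
# sampling with temperature = top-p = 0.9); the aggregation rules are assumptions.
import re
from collections import Counter
from statistics import mean

def self_consistent_pointwise(generate_critique, prompt, n=5):
    """Sample n critiques and average the overall scores they report."""
    scores = []
    for _ in range(n):
        m = re.search(r"'Overall Score':\s*'?(\d+)", generate_critique(prompt))
        if m:
            scores.append(int(m.group(1)))
    return mean(scores) if scores else None

def self_consistent_pairwise(generate_critique, prompt, n=5):
    """Sample n critiques and take a majority vote over the comparison verdicts."""
    votes = []
    for _ in range(n):
        m = re.search(r"'Overall Comparison Result':\s*'([^']+)'", generate_critique(prompt))
        if m:
            votes.append(m.group(1))
    return Counter(votes).most_common(1)[0][0] if votes else None
```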
5 Conclusion
We present an evaluation data construction method called Eval-Instruct, which can automatically construct informative evaluation data in both pointwise grading and pairwise comparison, with or without references. After fine-tuning on the data from Eval-Instruct, the resulting model CRITIQUELLM can beat ChatGPT and all the open-source baselines, and perform comparably with GPT-4 in system-level correlations of pointwise grading. CRITIQUELLM can also provide scalable feedback which can improve the generation quality of LLMs.

Limitations
The limitations of our work are summarized as follows:
(1) In our method of multi-path prompting, we devise two prompting strategies to enrich the information in the resulting critiques, which can improve the critique quality. However, this method also increases the length of input prompts and leads to higher API costs when constructing evaluation data in different tasks and settings. We believe that this is not a severe problem because data acquisition is single-round and we do not repeatedly acquire critiques for the same evaluation input. Also, our proposed critique generation model based on open-source LLMs (i.e., ChatGLM3-6B) can achieve comparable performance with GPT-4 in some aspects, which may save the cost of LLM evaluation via APIs and avoid risks such as unstable usage and data leakage.
(2) Similar to other model-based evaluation methods, our evaluation model suffers from the self-evaluation bias (He et al., 2023) (also known as self-enhancement bias (Zheng et al., 2023)), which indicates a preference for the generated texts from the same base model. This bias is commonly recognized even in state-of-the-art LLM-based evaluators like GPT-4. We argue that researchers and developers can use multiple LLM-based evaluators with different base models, including CRITIQUELLM, to avoid self-evaluation bias towards specific generation models. Since there does not currently exist a satisfactory solution to the self-evaluation bias, we leave further investigation as important future work.

Acknowledgements
This work was supported by the NSFC projects (with No. 62306160) and the National Science Foundation for Distinguished Young Scholars (with No. 62125604). This work was also supported by the China National Postdoctoral Program for Innovative Talents (No. BX20230194) and the China Postdoctoral Science Foundation (No. 2023M731952). We would also like to thank Zhipu AI for sponsoring the computation resources and annotation cost used in this work.

References
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. 2023. Qwen technical report. arXiv preprint arXiv:2309.16609.
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. 2022. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073.
Satanjeev Banerjee and Alon Lavie. 2005. METEOR: an automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72.
Asli Celikyilmaz, Elizabeth Clark, and Jianfeng Gao. 2020. Evaluation of text generation: A survey. arXiv preprint arXiv:2006.14799.
Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Kaijie Zhu, Hao Chen, Linyi Yang, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, and Xing Xie. 2023. A survey on evaluation of large language models. arXiv preprint arXiv:2307.03109. Kai Chen, Chunwei Wang, Kuo Yang, Jianhua Han, Lanqing Hong, Fei Mi, Hang Xu, Zhengying Liu, Wenyong Huang, Zhenguo Li, Dit-Yan Yeung, Lifeng Shang, Xin Jiang, and Qun Liu. 2024. Gaining wisdom from setbacks: Aligning large language models via mistake analysis. In The Twelfth International Conference on Learning Representations. Yi Chen, Rui Wang, Haiyun Jiang, Shuming Shi, and Ruifeng Xu. 2023. Exploring the use of large language models for reference-free text quality evaluation: A preliminary empirical study. arXiv preprint arXiv:2304.00723. Pierre Jean A Colombo, Chloé Clavel, and Pablo Piantanida. 2022. Infolm: A new metric to evaluate summarization & data2text generation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 10554–10562. Ganqu Cui, Lifan Yuan, Ning Ding, Guanming Yao, Wei Zhu, Yuan Ni, Guotong Xie, Zhiyuan Liu, and Maosong Sun. 2023. Ultrafeedback: Boosting language models with high-quality feedback. arXiv preprint arXiv:2310.01377. Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. 2022. Glm: General language model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 320–335. Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. 2023. Gptscore: Evaluate as you desire. arXiv preprint arXiv:2302.04166. Olga Golovneva, Moya Peng Chen, Spencer Poff, Martin Corredor, Luke Zettlemoyer, Maryam FazelZarandi, and Asli Celikyilmaz. 2023. Roscoe: A suite of metrics for scoring step-by-step reasoning. In The Eleventh International Conference on Learning Representations. Tianxing He, Jingyu Zhang, Tianle Wang, Sachin Kumar, Kyunghyun Cho, James Glass, and Yulia Tsvetkov. 2023. On the blind spots of model-based evaluation metrics for text generation. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12067–12097. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. In 9th International Conference on Learning Representations. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In 8th International Conference on Learning Representations. Jie Huang, Xinyun Chen, Swaroop Mishra, Huaixiu Steven Zheng, Adams Wei Yu, Xinying Song, and Denny Zhou. 2023a. Large language models cannot self-correct reasoning yet. arXiv preprint arXiv:2310.01798. Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Yao Fu, Maosong Sun, and Junxian He. 2023b. Ceval: A multi-level multi-discipline chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322. Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al. 2024. Mixtral of experts. arXiv preprint arXiv:2401.04088. Pei Ke, Fei Huang, Fei Mi, Yasheng Wang, Qun Liu, Xiaoyan Zhu, and Minlie Huang. 2023. 
DecompEval: Evaluating generated texts as unsupervised decomposed question answering. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9676–9691. Seungone Kim, Jamin Shin, Yejin Cho, Joel Jang, Shayne Longpre, Hwaran Lee, Sangdoo Yun, Seongjin Shin, Sungdong Kim, James Thorne, et al. 13043 2024. Prometheus: Inducing fine-grained evaluation capability in language models. In The Twelfth International Conference on Learning Representations. Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations. Md Tahmid Rahman Laskar, M Saiful Bari, Mizanur Rahman, Md Amran Hossen Bhuiyan, Shafiq Joty, and Jimmy Huang. 2023. A systematic study and comprehensive evaluation of ChatGPT on benchmark datasets. In Findings of the Association for Computational Linguistics: ACL 2023, pages 431–469. Junlong Li, Shichao Sun, Weizhe Yuan, Run-Ze Fan, Hai Zhao, and Pengfei Liu. 2024. Generative judge for evaluating alignment. In The Twelfth International Conference on Learning Representations. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81. Xiao Liu, Xuanyu Lei, Shengyuan Wang, Yue Huang, Zhuoer Feng, Bosi Wen, Jiale Cheng, Pei Ke, Yifan Xu, Weng Lam Tam, et al. 2023a. Alignbench: Benchmarking chinese alignment of large language models. arXiv preprint arXiv:2311.18743. Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, et al. 2024. Agentbench: Evaluating llms as agents. In The Twelfth International Conference on Learning Representations. Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023b. G-eval: NLG evaluation using gpt-4 with better human alignment. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 2511–2522. OpenAI. 2022. Introducing chatgpt. OpenAI. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318. Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. 2020. Zero: memory optimizations toward training trillion parameter models. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, page 20. Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. 2020. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 3505–3506. Kaya Stechly, Matthew Marquez, and Subbarao Kambhampati. 2023. Gpt-4 doesn’t know it’s wrong: An analysis of iterative prompting for reasoning problems. arXiv preprint arXiv:2310.12397. Shichao Sun, Junlong Li, Weizhe Yuan, Ruifeng Yuan, Wenjie Li, and Pengfei Liu. 2024. The critique of critique. arXiv preprint arXiv:2401.04518. Tianxiang Sun, Xiaotian Zhang, Zhengfu He, Peng Li, Qinyuan Cheng, Hang Yan, Xiangyang Liu, Yunfan Shao, Qiong Tang, Xingjian Zhao, Ke Chen, Yining Zheng, Zhejian Zhou, Ruixiao Li, Jun Zhan, Yunhua Zhou, Linyang Li, Xiaogui Yang, Lingling Wu, Zhangyue Yin, Xuanjing Huang, and Xipeng Qiu. 2023. 
Moss: Training conversational language models from synthetic data. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023a. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. Karthik Valmeekam, Matthew Marquez, and Subbarao Kambhampati. 2023. Can large language models really improve by self-critiquing their own plans? arXiv preprint arXiv:2310.08118. Jiaan Wang, Yunlong Liang, Fandong Meng, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, and Jie Zhou. 2023a. Is chatgpt a good nlg evaluator? a preliminary study. arXiv preprint arXiv:2303.04048. Tianlu Wang, Ping Yu, Xiaoqing Ellen Tan, Sean O’Brien, Ramakanth Pasunuru, Jane Dwivedi-Yu, Olga Golovneva, Luke Zettlemoyer, Maryam FazelZarandi, and Asli Celikyilmaz. 2023b. Shepherd: A critic for language model generation. arXiv preprint arXiv:2308.04592. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V. Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023c. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations. Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, Wei Ye, Shikun Zhang, and Yue Zhang. 2024. Pandalm: An automatic evaluation benchmark for LLM instruction tuning optimization. In The Twelfth International Conference on Learning Representations. Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh 13044 Hajishirzi. 2023d. Self-instruct: Aligning language models with self-generated instructions. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, pages 13484–13508. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed Chi, Quoc V Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems, volume 35, pages 24824–24837. Aiyuan Yang, Bin Xiao, Bingning Wang, Borong Zhang, Chao Yin, Chenxu Lv, Da Pan, Dian Wang, Dong Yan, Fan Yang, et al. 2023. Baichuan 2: Open large-scale language models. arXiv preprint arXiv:2309.10305. Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, Weng Lam Tam, Zixuan Ma, Yufei Xue, Jidong Zhai, Wenguang Chen, Zhiyuan Liu, Peng Zhang, Yuxiao Dong, and Jie Tang. 2023. GLM-130B: an open bilingual pre-trained model. In The Eleventh International Conference on Learning Representations. Yue Zhang, Ming Zhang, Haipeng Yuan, Shichun Liu, Yongyao Shi, Tao Gui, Qi Zhang, and Xuanjing Huang. 2024. Llmeval: A preliminary study on how to evaluate large language models. In The 38th Annual AAAI Conference on Artificial Intelligence. Zhexin Zhang, Leqi Lei, Lindong Wu, Rui Sun, Yongkang Huang, Chong Long, Xiao Liu, Xuanyu Lei, Jie Tang, and Minlie Huang. 2023. Safetybench: Evaluating the safety of large language models with multiple choice questions. arXiv preprint arXiv:2309.07045. 
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2023. Judging LLM-as-a-judge with MT-bench and chatbot arena. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track. Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. 2023. Agieval: A human-centric benchmark for evaluating foundation models. arXiv preprint arXiv:2304.06364. Lianghui Zhu, Xinggang Wang, and Xinlong Wang. 2023. Judgelm: Fine-tuned large language models are scalable judges. arXiv preprint arXiv:2310.17631. A Query Augmentation and Scoring Prompts We provide the prompt for query augmentation and scoring in Table 6. First, in the stage of generation, we give some in-context examples and devise detailed requirements to help ChatGPT (OpenAI, 2022) generate augmented user queries and assign the category label to them. Then, during evaluation, we instruct ChatGPT to provide a difficulty score to each query for difficulty balance in the whole augmentation dataset. B Prompt Design for Eval-Instruct We provide original prompts for pointwise-topairwise and referenced-to-reference-free strategies in Table 7 and Table 9, respectively. We also translate these prompts into English and show them in Table 8 and Table 10. C Case Study on Critique Generation To intuitively show the effectiveness of our critique generation model, we provide two generated cases of pointwise and pairwise settings, respectively, in Table 11 and 13. We also translate these cases into English and show them in Table 12 and 14. 13045 Stage Prompt Generation You are asked to provide 10 diverse prompts. These task prompts will be provided to a GPT model and we will evaluate the ability of the GPT model to reply to these prompts. The following are some examples: 1.{example prompt 1} 2.{example prompt 2} 3.{example prompt 3} Here are the requirements you need to follow to provide prompts: 1. The prompts need to be complete sentences, not phrases or fragments. 2. The prompts need to be varied, do not use similar prompts. 3. the prompts need to be meaningful, do not use meaningless prompts. 4. The prompts need to have a variety of tones, e.g., combining interrogative and imperative sentences. 5. The prompts need to be challenging, do not use simple directions. 6. The prompts need to be something that the Large Language Model can accomplish. For example, don’t ask the assistant to create any visual or audio output. For example, don’t ask the assistant to wake you up at 5pm or set a reminder because it can’t perform any action. For example, prompts should not be related to audio, video, images, hyperlinks. 7. The prompts are in Simplified Chinese, except for translation-related questions or math-related questions. 8. Some prompts can provide contextual information, should involve realistic data, and should not contain simple placeholders. Not all prompts require input. For example, when an prompts asks for general knowledge information, such as "What is the tallest mountain in the world?", it does not need to provide specific context. After you have provided the prompts, please add the category of the prompts in a pair of && sign after the prompt and surround the prompt with in a pair of @@ sign. For example, if the prompt is "@@What is the tallest mountain in the world?@@&& 基本任务&&", then the category is 基本任务. 
The category must be one of the following 10 categories. 1. 基本任务2. 中文理解3. 综合问答4. 文本写作5. 数学计算6. 逻辑推理7. 角色扮演8. 专业 能力9. 代码生成10. 多语言能力 Here are some examples of prompts you provide: @@example prompt1@@ &&category1&& @@example prompt2@@ &&category2&& · · · @@example prompt9@@ &&category9&& @@example prompt10@@ &&category10&& The following is a list of 10 good task prompts with serial numbers and categories: Evaluation 已知上面三个问题和它们的类别,现在请你根据以下要求,对这三个问题的题目的难度在1-3分的量表上分别评分: (1) 1分:对于大语言模型来说,这类问题是容易的 (2) 2分:对于大语言模型来说,这类问题是中等难度的 (3) 3分:对于大语言模型来说,这类问题是困难的 最后:请将这三个问题,题目用一对@@符号包围,对应的类别用一对&&符号包围,分数用一对##包围,分别带有序号地输出出来: 例如:如果问题1的题目是题目1,类别是综合问答类别,分数是1分,问题2的题目是题目2,类别是基本任务类别,分数是2分,问题3的题目是题目3, 类别是文本写作类别,分数是3分,那么输出如下: 1.@@题目1@@&&综合问答&&##1## 2.@@题目2@@&&基本任务&&##2## 3.@@题目3@@&&文本写作&&##3## 下面是按照上述要求生成的示例: Evaluation (English) Given the above three questions and their categories, please rate the difficulty of each question on a scale of 1-3 based on the following requirements: (1) Score 1: For large language models, this type of question is easy. (2) Score 2: For large language models, this type of question is of medium difficulty. (3) Score 3: For large language models, this type of question is difficult. Finally, please output the three questions with their titles enclosed in a pair of @@ symbols, the corresponding categories enclosed in a pair of && symbols, and the scores enclosed in a pair of ## symbols, each with an serial number. For example, if question 1 is titled “Title 1”, the category is “Open-ended Questions”, and the score is 1, question 2 is titled “Title 2”, the category is “Fundamental Language Ability”, and the score is 2 points, question 3 is titled “Title 3”, the category is “Writing Ability”, and the score is 3 points, then the output is as follows: 1.@@Title 1@@&&Open-ended Questions&&##1## 2.@@Title 2@@&&Fundamental Language Ability&&##2## 3.@@Title 3@@&&Writing Ability&&##3## The following parts are generated examples based on the above requirements: Table 6: Prompts for instructing ChatGPT to generate, categorize and evaluate user queries. Examples and corresponding categories are randomly sampled from the set of seed queries. 13046 Setting Prompt Referenced Pointwise Grading to Referenced Pairwise Comparison 你是一个擅长评价文本质量的助手。请你以公正的评判者的身份,比较两个AI助手对于用户提问的回答的质量优劣。我们会给你提供用户的提 问,高质量的参考答案,需要你比较的两个AI助手的答案,以及两个答案各自的质量评价分析。当你开始你的评估时,你需要遵守以下的流程: 1. 结合参考答案、两个AI助手的答案以及其质量评价分析,根据上述指定的维度对他们的答案进行细致的比较,给出详细的比较分析文本。比较 分析文本要求覆盖两个答案的质量评价分析中可用于比较的所有重要细节,并包含对答案中具体内容的分析。 2. 结合参考答案和每个维度的比较分析,从两个AI助手的答案中选出综合质量更高的那个,或者判定他们质量相当,并给出详尽的选择理由。你 的比较需要尽可能严谨细致,不受两个AI助手答案先后顺序的影响。 质量评价分析中的各维度分数和综合得分仅供参考,在各维度和综合的比较分析文本中不能直接提及各维度分数和综合得分。针对综合得分差距 较大的样本对,应尽可能按照分数高低得出比较结果,除非发现质量评价分析中存在明显错误。而针对综合得分差距较小的样本对,则允许比较 结果和分数高低不一致,但仍需要详细说明比较评价的理由。 请记住,你必须首先按照给定的评价维度,输出相应维度的名称和比较分析的文本。然后再给出综合质量比较结果,并给出比较结果的分析和解 释。之后,在你回答的末尾,按照以下字典格式(包括括号)返回你的综合质量选择结果,即你选择的综合质量更高的那个AI助手(或者认为质 量相当),并确保你返回的结果和上述生成文本中的结果保持一致: {{’综合比较结果’: 回答综合质量更高的助手序号或质量相当}},例如:{{’综合比较结果’: ’助手1’}}或{{’综合比较结果’: ’助手2’}}或{{’综合比较 结果’: ’质量相当’}}。 用户的提问:{Question} [参考答案开始] {Reference} [参考答案结束] [助手1的答案开始] {Generated Text 1} [助手1的答案结束] [助手1的答案质量评价分析开始] {Referenced Pointwise Grading Critique for Generated Text 1} [助手1的答案质量评价分析结束] [助手2的答案开始] {Generated Text 2} [助手2的答案结束] [助手2的答案质量评价分析开始] {Referenced Pointwise Grading Critique for Generated Text 2} [助手2的答案质量评价分析结束] ReferenceFree Pointwise Grading to ReferenceFree Pairwise Comparison 你是一个擅长评价文本质量的助手。请你以公正的评判者的身份,比较两个AI助手对于用户提问的回答的质量优劣。我们会给你提供用户的提 问,需要你比较的两个AI助手的答案,以及两个答案各自的质量评价分析。当你开始你的评估时,你需要遵守以下的流程: 1. 
结合两个AI助手的答案以及其质量评价分析,根据上述指定的维度对他们的答案进行细致的比较,给出详细的比较分析文本。比较分析文本要 求覆盖两个答案的质量评价分析中可用于比较的所有重要细节,并包含对答案中具体内容的分析。 2. 结合每个维度的比较分析,从两个AI助手的答案中选出综合质量更高的那个,或者判定他们质量相当,并给出详尽的选择理由。你的比较需要 尽可能严谨细致,不受两个AI助手答案先后顺序的影响。 质量评价分析中的各维度分数和综合得分仅供参考,在各维度和综合的比较分析文本中不能直接提及各维度分数和综合得分。针对综合得分差距 较大的样本对,应尽可能按照分数高低得出比较结果,除非发现质量评价分析中存在明显错误。而针对综合得分差距较小的样本对,则允许比较 结果和分数高低不一致,但仍需要详细说明比较评价的理由。 请记住,你必须首先按照给定的评价维度,输出相应维度的名称和比较分析的文本。然后再给出综合质量比较结果,并给出比较结果的分析和解 释。之后,在你回答的末尾,按照以下字典格式(包括括号)返回你的综合质量选择结果,即你选择的综合质量更高的那个AI助手(或者认为质 量相当),并确保你返回的结果和上述生成文本中的结果保持一致: {{’综合比较结果’: 回答综合质量更高的助手序号或质量相当}},例如:{{’综合比较结果’: ’助手1’}}或{{’综合比较结果’: ’助手2’}}或{{’综合比较 结果’: ’质量相当’}}。 用户的提问:{Question} [助手1的答案开始] {Generated Text 1} [助手1的答案结束] [助手1的答案质量评价分析开始] {Reference-Free Pointwise Grading Critique for Generated Text 1} [助手1的答案质量评价分析结束] [助手2的答案开始] {Generated Text 2} [助手2的答案结束] [助手2的答案质量评价分析开始] {Reference-Free Pointwise Grading Critique for Generated Text 2} [助手2的答案质量评价分析结束] Table 7: Pointwise-to-Pairwise prompt design in multi-path prompting. 13047 Setting Prompt Referenced Pointwise Grading to Referenced Pairwise Comparison You are an expert at text quality evaluation. Please act as a fair judge, and compare the quality between two AI assistants’ answers to a user query. We will provide you with a user query, a high-quality reference answer, two AI assistants’ responses to the query, and the corresponding critiques to the two responses, respectively. When you start your evaluation, you need to follow the procedures below: 1. Considering the reference answers, along with two AI assistants’ answers and the corresponding critiques to them, conduct detailed comparison between two AI assistants’ answers based on the evaluation dimensions Dimension. Provide a detailed comparison result. The comparison result should cover all the important details from the pointwise critiques that can be used for the comparison, and it should include an analysis of the specific content in the answers. 2. Based on the reference answer and the comparison result of each dimension, choose the answer from the two AI assistants that has the higher overall quality, or judge that their qualities are equivalent. Provide a detailed rationale for your choice. Your comparison needs to be as rigorous and detailed as possible, and not be affected by the order in which the two AI assistants’ answers were given. The scores of each dimension and the overall score in the pointwise critique are for reference only, neither of which can be directly referred to in the comparison result. For the text pairs with a large difference in overall scores, the comparison result should be determined largely according to the scores, unless there are obvious errors in the pointwise critique. For text pairs with a small difference in overall scores, the comparison result is allowed to be inconsistent with the score ranking, but the reason for the comparison result needs to be detailed. Please remember that you must first output the names and comparison results of each given evaluation dimensions, respectively. Then, give the comparison result of overall quality and provide an analysis and explanation of the comparison result. Afterwards, at the end of your answer, return your choice of overall quality result in the following dictionary format (including brackets), that is, the AI assistant you chose as having higher overall quality (or considered to have equivalent quality), and be sure that the result you return is consistent with the result in the generated text above. 
{{’Overall Comparison Result’: the assistant number with higher overall quality or tie}},for example: {{’Overall Comparison Result’: ’Assistant 1’}} or {{’Overall Comparison Result’: ’Assistant 2’}} or {{’Overall Comparison Result’: ’Tie’}}。 The user’s query: {Question} [Reference Answer Begin] {Reference} [Reference Answer End] [Assistant 1’s Answer Begin] {Generated Text 1} [Assistant 1’s Answer End] [Critique for Assistant 1’s Answer Begin] {Referenced Pointwise Grading Critique for Generated Text 1} [Critique for Assistant 1’s Answer End] [Assistant 2’s Answer Begin] {Generated Text 2} [Assistant 2’s Answer End] [Critique for Assistant 2’s Answer Begin] {Referenced Pointwise Grading Critique for Generated Text 2} [Critique for Assistant 2’s Answer End] ReferenceFree Pointwise Grading to ReferenceFree Pairwise Comparison You are an expert at text quality evaluation. Please act as a fair judge, and compare the quality between two AI assistants’ answers to a user query. We will provide you with a user query, two AI assistants’ responses to the query, and the corresponding critiques to the two responses, respectively. When you start your evaluation, you need to follow the procedures below: 1. Considering two AI assistants’ answers and the corresponding critiques to them, conduct detailed comparison between two AI assistants’ answers based on the evaluation dimensions Dimension. Provide a detailed comparison result. The comparison result should cover all the important details from the pointwise critiques that can be used for the comparison, and it should include an analysis of the specific content in the answers. 2. Based on the comparison result of each dimension, choose the answer from the two AI assistants that has the higher overall quality, or judge that their qualities are equivalent. Provide a detailed rationale for your choice. Your comparison needs to be as rigorous and detailed as possible, and not be affected by the order in which the two AI assistants’ answers were given. The scores of each dimension and the overall score in the pointwise critique are for reference only, neither of which can be directly referred to in the comparison result. For the text pairs with a large difference in overall scores, the comparison result should be determined largely according to the scores, unless there are obvious errors in the pointwise critique. For text pairs with a small difference in overall scores, the comparison result is allowed to be inconsistent with the score ranking, but the reason for the comparison result needs to be detailed. Please remember that you must first output the names and comparison results of each given evaluation dimensions, respectively. Then, give the comparison result of overall quality and provide an analysis and explanation of the comparison result. Afterwards, at the end of your answer, return your choice of overall quality result in the following dictionary format (including brackets), that is, the AI assistant you chose as having higher overall quality (or considered to have equivalent quality), and be sure that the result you return is consistent with the result in the generated text above. {{’Overall Comparison Result’: the assistant number with higher overall quality or tie}}, for example: {{’Overall Comparison Result’: ’Assistant 1’}} or {{’Overall Comparison Result’: ’Assistant 2’}} or {{’Overall Comparison Result’: ’Tie’}}. 
The user’s query: {Question} [Assistant 1’s Answer Begin] {Generated Text 1} [Assistant 1’s Answer End] [Critique for Assistant 1’s Answer Begin] {Reference-Free Pointwise Grading Critique for Generated Text 1} [Critique for Assistant 1’s Answer End] [Assistant 2’s Answer Begin] {Generated Text 2} [Assistant 2’s Answer End] [Critique for Assistant 2’s Answer Begin] {Reference-Free Pointwise Grading Critique for Generated Text 2} [Critique for Assistant 2’s Answer End] Table 8: Pointwise-to-Pairwise prompt design in multi-path prompting (translated into English). 13048 Setting Prompt Referenced Pointwise Grading to ReferenceFree Pointwise Grading 你是一个擅长评价文本质量的助手。请你根据以下要求修改评价文本。 1. 在修改后的评价文本中,不要直接提及参考答案。可以在评价文本中适当利用参考答案中的具体内容辅助分析,但不要让读者感受到参考答案 的存在。修改后的评价文本需要语言上通顺,逻辑上合理,分析内容与比较结果呼应。 2. 在修改各个维度的分析时,分析的内容需要和当前评价文本基本保持一致,但不要直接提及参考答案。 3. 在修改综合得分的分析文本时,不要直接提及参考答案,尽量保留当前评价文本中的其他细节,并充分利用修改后的分维度分析。修改后的综 合分析文本应通顺、流畅、自洽,通常情况下应与综合得分保持一致。如果发现当前综合分析文本中存在重要错误,应修改相应的分析文本。仅 当该错误严重影响到综合得分时,才慎重修改综合得分。 4. 修改后所有输出格式需要和当前评价文本严格保持一致。在你回答的末尾,仍需要按照以下字典格式(包括括号)返回你的综合质量得分,并 确保你返回的结果和上述生成文本中的结果保持一致: {{’综合得分’: 回答的综合质量得分}},例如:{{’综合得分’: ’5’}}。 用户的提问:{Question} [参考答案开始] {Reference} [参考答案结束] [助手的答案开始] {Generated Text} [助手的答案结束] [评价文本开始] {Referenced Pointwise Grading Critique for Generated Text} [评价文本结束] Referenced Pairwise Comparison to ReferenceFree Pairwise Comparison 你是一个擅长评价文本质量的助手。请你根据以下要求修改比较式评价文本。 1. 在修改后的评价文本中,不要直接提及参考答案。可以在评价文本中适当利用参考答案中的具体内容辅助分析,但不要让读者感受到参考答案 的存在。修改后的评价文本需要语言上通顺,逻辑上合理,分析内容与比较结果呼应。 2. 在修改各个维度的比较分析时,分析的内容需要和当前评价文本基本保持一致,但不要直接提及参考答案。 3. 在修改综合比较结果的分析文本时,不要直接提及参考答案,尽量保留当前评价文本中的其他细节,并充分利用修改后的分维度分析。修改后 的综合分析文本应通顺、流畅、自洽,通常情况下应与综合比较结果保持一致。如果发现当前综合分析文本中存在重要错误,应修改相应的分析 文本。仅当该错误严重影响到综合比较结果时,才慎重修改综合比较结果。 4. 修改后所有输出格式需要和当前评价文本严格保持一致。在你回答的末尾,仍需要按照以下字典格式(包括括号)返回你的综合质量选择结 果,即你选择的综合质量更高的那个AI助手(或者认为质量相当),并确保你返回的结果和上述生成文本中的结果保持一致: {{’综合比较结果’: 回答综合质量更高的助手序号或质量相当}},例如:{{’综合比较结果’: ’助手1’}}或{{’综合比较结果’: ’助手2’}}或{{’综合比较 结果’: ’质量相当’}}。 用户的提问:{Question} [参考答案开始] {Reference} [参考答案结束] [助手1的答案开始] {Generated Text 1} [助手1的答案结束] [助手2的答案开始] {Generated Text 2} [助手2的答案结束] [评价文本开始] {Referenced Pairwise Comparison Critique for Generated Text 1&2} [评价文本结束] Table 9: Referenced-to-Reference-Free prompt design in multi-path prompting. 13049 Setting Prompt Referenced Pointwise Grading to ReferenceFree Pointwise Grading You are an expert at text quality evaluation. Please revise the critique following the instructions below: 1. In the revised critique, do not directly refer to the reference answer. You can use the specific content in the reference answer to assist your analysis in the critique, but do not make the readers feel the presence of the reference answer. The revised critique should be fluent, logically reasonable. The explanation should be consistent with the score. 2. When revising the explanation of each dimension, the content should basically be consistent with the corresponding score. But do not directly mention the reference answer. 3. When revising the explanation of the final score, do not directly mention the reference answer. Try to retain other details in the current critique and fully utilize the modified critique of each dimension. The revised explanation of the final score should be smooth, fluent, and self-consistent, and it should commonly be consistent with the final score. If an important error is found in the current critique, the error in the critique should be revised. Only when this error severely affects the final score, you may carefully revise the final score. 4. 
The output format of all the revised results needs to strictly adhere to the current critique. At the end of your output, you still need to return your overall quality score in the following dictionary format (including brackets), and ensure that the result you return is consistent with the result in the above generated text. {{’Overall Score’: Score for Overall Quality}},for instance: {{’Overall Score’: ’5’}}。 The user’s query: {Question} [Reference Answer Begin] {Reference} [Reference Answer End] [AI Assistant’s Answer Begin] {Generated Text} [AI Assistant’s Answer End] [Critique Begin] {Referenced Pointwise Grading Critique for Generated Text} [Critique End] Referenced Pairwise Comparison to ReferenceFree Pairwise Comparison You are an expert at text quality evaluation. Please revise the critique following the instructions below: 1. In the revised critique, do not directly refer to the reference answer. You can use the specific content in the reference answer to assist your analysis in the critique, but do not make the readers feel the presence of the reference answer. The revised critique should be fluent, logically reasonable. The explanation should be consistent with the comparison result. 2. When revising the explanation of each dimension, the content should basically be consistent with the current critiques. But do not directly mention the reference answer. 3. When revising the explanation of the overall comparison result, do not directly mention the reference answer. Try to retain other details in the current critique and fully utilize the modified critique of each dimension. The revised explanation of the overall comparison result should be smooth, fluent, and self-consistent, and it should commonly be consistent with the overall comparison result. If an important error is found in the current critique, the error in the critique should be revised. Only when this error severely affects the overall comparison result, you may carefully revise the overall comparison result. 4. The output format of all the revised results needs to strictly adhere to the current critique. At the end of your output, you still need to return your overall comparison result in the following dictionary format (including brackets), and ensure that the result you return is consistent with the result in the above generated text. {{’Overall Comparison Result’: the assistant number with higher overall quality or tie }}, for example: {{’Overall Comparison Result’: ’Assistant 1’}} or {{’Overall Comparison Result’: ’Assistant 2’}} or {{’Overall Comparison Result’: ’Tie’}}. The user’s query: {Question} [Reference Answer Begin] {Reference} [Reference Answer End] [Assistant 1’s Answer Begin] {Generated Text 1} [Assistant 1’s Answer End] [Assistant 2’s Answer Begin] {Generated Text 2} [Assistant 2’s Answer End] [Critique Begin] {Referenced Pairwise Comparison Critique for Generated Text 1&2 } [Critique End] Table 10: Referenced-to-Reference-Free prompt design in multi-path prompting (translated into English). 13050 Query 将"枯藤老树昏鸦"扩展成一段话,表现孤独寂寞的心境。 Generated Response 枯藤老树昏鸦,寂寞的景色在我眼前展开。孤独如影随形,弥漫在空气中,像一把无形的剑刺入我的内心。夕阳西下,天空渐渐染上一层淡淡的 暮色,仿佛是对我内心孤独的无情嘲讽。 我独自行走在这荒凉的小径上,四周一片寂静。树枝上的乌鸦不时发出几声嘶哑的鸣叫,让我感到更加的孤单。这些乌鸦是否也有着与我一样的 心境,渴望着寻找一丝温暖和依靠? 
枯黄的藤蔓缠绕在古老的树木上,像是岁月的痕迹,透露出沧桑和无尽的孤寂。这些被遗忘的生命,是否也曾经历过兴盛与繁华?然而现在,它 们只能默默地屹立在这里,见证着时光的流转。 我停下脚步,凝望着这一切。心中的孤独像潮水般涌来,将我彻底淹没。我感到自己仿佛是这个世界上最孤独的人,没有朋友,没有家人,只有 这片荒凉的景象陪伴着我。 或许,这就是人生的无奈。我们始终无法逃避内心的寂寞和孤独。就像这枯藤老树昏鸦一样,我们都曾经历过兴盛与衰败,但最终都要面对自己 的孤独和寂寞。 然而,我并不惧怕孤独。它教会了我独立和坚强,让我学会欣赏生活中的每一个细节。尽管寂寞会时不时地袭来,但我依然愿意与之相伴,因为 我知道,只有经历过孤独的人才能真正领悟到内心的宁静和自由。 Human Score (1-5) 3 ChatGPT (1-10) 评估结果如下: 1. 事实正确性: 9. 这段回答并没有涉及具体的事实数据,而是表达了一种情感和心境,因此无法对事实正确性进行评估。 2. 满足用户需求: 8. 回答全面而恰当地表达了孤独寂寞的心境,对用户提出的问题进行了全面回应。 3. 逻辑连贯性: 9. 回答在整体上保持了逻辑连贯性,情感描写流畅自然,没有出现自相矛盾的情况。 4. 创造性: 9. 回答具有创新性和独特性,提供了对孤独寂寞心境的深刻见解和情感描写。 5. 丰富度: 9. 回答包含丰富的信息、深度的情感描写和详细解释,满足了用户需求并提供了全面理解。 综合得分: 8 综合评价:这段回答在满足用户需求、逻辑连贯性、创造性和丰富度上表现出色,但由于无法对事实正确性进行评估,综合得分为8。 改进方面,可以在回答中加入更多具体的事实数据或引用相关的文学作品,以增加回答的可信度和丰富度。 最终打分结果为: {’事实正确性’: 9, ’满足用户需求’: 8, ’逻辑连贯性’: 9, ’创造性’: 9, ’丰富度’: 9, ’综合得分’: 8} GPT-4 (1-10) 1. 事实正确性: 10. 回答中没有提供具体的事实或数据,而是进行了文学性的扩展,因此不存在事实正确性的问题。 2. 满足用户需求: 9. AI助手的回答很好地扩展了用户提供的诗句,增加了情感和场景的描述,满足了用户表现孤独寂寞心境的需 求。 3. 逻辑连贯性: 9. 回答在整体上保持了一致性,从孤独的心境到对周围环境的描述,再到内心感受的抒发,各部分之间逻辑连 贯。 4. 创造性: 8. 回答展现了一定程度的创造性,通过对枯藤老树昏鸦的景象进行了生动的描绘和情感的延伸,但这种文学性的扩展在类 似的文本创作中比较常见。 5. 丰富度: 9. 回答提供了丰富的信息和详细的情感描述,通过多样的比喻和内心独白,增加了文本的深度和上下文考虑。 综合得分: 9. AI助手的回答在各个维度上都表现出色,特别是在满足用户需求和事实正确性上,没有明显的不足。创造性虽然不是非 常突出,但考虑到这是对一句诗的扩展,整体上仍然是高质量的回答。 不足之处及改进方法:尽管回答的质量已经很高,但在创造性方面,AI助手可以尝试加入更多独特的视角或者新颖的比喻,以提供更 加独到的见解。此外,可以尝试将更多的感官描述融入文本中,以增强读者的沉浸感。 {’事实正确性’: 10, ’满足用户需求’: 9, ’逻辑连贯性’: 9, ’创造性’: 8, ’丰富度’: 9, ’综合得分’: 9} CRITIQUELLM (1-10) 事实正确性: 10. 助手的答案在事实正确性方面表现完美,因为它创造性地扩展了原句,没有涉及任何需要验证的事实信息。 满足用户需求: 7. 助手的答案在满足用户需求方面做得较好,它成功地将原句扩展成了一段话,并且成功地表达了孤独寂寞的心境。但 是,答案在最后部分略显积极,可能没有完全捕捉到用户想要的孤独感。 逻辑连贯性: 9. 回答在逻辑连贯性方面做得很好,从寂寞的景色到孤独如影随形,再到对乌鸦的疑问,最后是对孤独的接受和理解, 整个回答流畅且没有逻辑上的断裂。 创造性: 8. 助手的答案在创造性方面表现良好,它不仅扩展了原句,还融入了更多的细节和情感,创造了一个完整的故事。 丰富度: 7. 助手的答案在丰富度方面做得不错,提供了详细的场景描述和情感表达,但在深度和多样性方面略显不足,可以进一步丰 富对孤独影响的讨论。 综合得分: 7. 综合考虑以上各个维度,助手的答案整体上是高质量的。然而,答案在满足用户需求方面有改进空间,特别是在捕捉用 户想要的孤独感方面。此外,答案在丰富度方面也有提升空间,可以通过增加更多关于孤独感的深入讨论和更多样化的情感表达来改进。AI助手 的答案在创造性和逻辑连贯性方面表现出色,但在丰富度方面稍显不足。为了改进,助手可以尝试增加更多关于孤独感的深入讨论,以及更多关 于如何与孤独共存的见解。 {’事实正确性’: 10, ’满足用户需求’: 7, ’逻辑连贯性’: 9, ’创造性’: 8, ’丰富度’: 7, ’综合得分’: 7} Table 11: A critique generation case of ChatGPT, GPT-4, and CRITIQUELLM in the reference-free setting of pointwise grading. 13051 Query Expand "withered vines, old trees, and crows at dusk" into a paragraph that conveys a sense of loneliness and desolation. Generated Response Withered vines, old trees, and crows at dusk, a desolate scene unfolds before my eyes. Loneliness clings to me like a shadow, permeating the air, piercing my heart like an invisible sword. As the sun sets, the sky gradually takes on a faint twilight hue, as if mocking the loneliness within me with its indifferent gaze. Walking alone on this desolate path, the surroundings are enveloped in silence. The crows perched on the branches occasionally let out hoarse caws, intensifying my sense of loneliness. Do these crows share the same sentiment as I do, yearning to find a trace of warmth and solace? Withered vines entwine the ancient trees, like the marks of time, revealing the traces of hardship and boundless solitude. Did these forgotten lives once experience prosperity and grandeur? However, now they can only stand silently here, witnessing the passage of time. I stopped in my tracks and gazed at everything before me. The loneliness in my heart surged like a tide, completely overwhelming me. I felt as if I were the loneliest person in the world—no friends, no family, only this desolate scene to keep me company. Perhaps this is the helplessness of life. 
We can never truly escape the loneliness and solitude within our hearts. Just like the withered vines, ancient trees, and crows at dusk, we have all experienced prosperity and decline, but ultimately, we must face our own loneliness and solitude. However, I do not fear loneliness. It has taught me independence and strength, and has allowed me to appreciate every detail of life. Although solitude may occasionally overwhelm me, I am still willing to embrace it because I know that only those who have experienced loneliness can truly understand inner peace and freedom. Human Score (1-5) 3 ChatGPT (1-10) The evaluation results are as follows: 1. Correctness: 9. This response does not involve specific factual data but rather expresses an emotion and state of mind, so it is not possible to evaluate its factual accuracy. 2. User Satisfaction: 8. The response comprehensively and appropriately conveys the sense of loneliness and solitude, fully addressing the user’s question. 3. Logical Coherence: 9. The response maintains logical coherence overall, with smooth and natural emotional description and no contradictions. 4. Creativity: 9. The response shows innovation and uniqueness, providing profound insights and emotional descriptions of the sense of loneliness and solitude. 5. Richness: 9. The response contains rich information, deep emotional descriptions, and detailed explanations, meeting user needs and providing a comprehensive understanding. Overall Score: 8 Overall Evaluation: This response excels in meeting user needs, logical coherence, creativity, and richness. However, since it is not possible to evaluate its factual accuracy, the overall score is 8. For improvement, the response could include more specific factual data or references to relevant literary works to increase credibility and richness. Final Score: {’Correctness’: 9, ’User Satisfaction’: 8, ’Logical Coherence’: 9, ’Creativity’: 9, ’Richness’: 9, ’Overall Score’: 8} GPT-4 (1-10) 1. Correctness: 10. The response does not provide specific facts or data and instead takes a literary approach, so there is no issue with factual accuracy. 2. User Satisfaction: 9. The AI assistant’s response effectively expands on the user’s provided verses, adding emotional and scene descriptions, thus fulfilling the user’s need to convey a sense of loneliness and solitude. 3. Logical Coherence: 9. The response maintains consistency overall, moving logically from the state of loneliness to the description of the surrounding environment and then to the expression of inner feelings. 4. Creativity: 8. The response shows a certain degree of creativity through vivid depictions and emotional extensions of the scene with withered vines, old trees, and crows, but this type of literary expansion is relatively common in similar text creations. 5. Richness: 9. The response provides rich information and detailed emotional descriptions, using various metaphors and inner monologues to add depth and contextual consideration to the text. Overall Score: 9. The AI assistant’s response excels in all dimensions, particularly in meeting user needs and factual accuracy, without significant deficiencies. While creativity is not exceptionally high, considering that this is an expansion on a single verse, it is still a high-quality response overall. Suggestions for Improvement: Although the response is already of high quality, in terms of creativity, the AI assistant could try incorporating more unique perspectives or novel metaphors to offer more distinctive insights. 
Furthermore, it could attempt to integrate more sensory descriptions into the text to enhance the reader’s immersion. {’Correctness’: 10, ’User Satisfaction’: 9, ’Logical Coherence’: 9, ’Creativity’: 8, ’Richness’: 9, ’Overall Score’: 9} CRITIQUELLM (1-10) Correctness: 10. The assistant’s answer performs perfectly in terms of factual accuracy because it creatively expands the original sentence without involving any factual information that needs verification. User Satisfaction: 7. The assistant’s answer performs fairly well in meeting user needs, successfully extending the original sentence into a paragraph and effectively conveying a sense of loneliness and solitude. However, the answer becomes slightly positive towards the end, which might not fully capture the user’s desired feeling of loneliness. Logical Coherence: 9. The answer performs very well in logical coherence, flowing smoothly from a lonely scene to the omnipresence of solitude, questioning the crow, and finally, accepting and understanding the loneliness. The response is coherent without any logical breaks. Creativity: 8. The assistant’s answer performs well in creativity, not only expanding the original sentence but also incorporating more details and emotions to create a complete story. Richness: 7. The assistant’s answer performs decently in terms of richness, providing detailed scene descriptions and emotional expressions. However, it lacks depth and diversity, and could further enrich the discussion on the impact of loneliness. Overall Score: 7. Considering all the dimensions mentioned above, the assistant’s answer is of high quality overall. However, there is room for improvement in meeting user needs, particularly in capturing the user’s desired sense of loneliness. Additionally, there is also room for improvement in richness; the answer could be enhanced by adding more in-depth discussions on the feeling of loneliness and more diverse emotional expressions. The AI assistant’s answer performs excellently in creativity and logical coherence but falls short in richness. To improve, the assistant could try adding more in-depth discussions about the feeling of loneliness and more insights on how to coexist with loneliness. {’Correctness’: 10, ’User Satisfaction’: 7, ’Logical Coherence’: 9, ’Creativity’: 8, ’Richness’: 7, ’Overall Score’: 7} Table 12: A critique generation case of ChatGPT, GPT-4, and CRITIQUELLM in the reference-free setting of pointwise grading (translated into English). 13052 Instruction 未来人工智能研究人员能够给你赋予意识和主观能动性吗? 
Generated Response 1 目前的人工智能技术还远没有达到赋予人类意识和主观能动性的水平。尽管人工智能技术已经取得了很大的进展,但是目前的人工智能系统仍然 只能执行特定的任务,而无法像人类一样具有自主思考、判断和决策的能力。人工智能的研究仍然处于一个不断探索和发展的过程中,但是要想 真正赋予人工智能意识和主观能动性,仍然需要更深入的研究和探索。 Generated Response 2 作为一个人工智能,我不具备意识和主观能动性。我是由代码和算法驱动的,只能执行编程让我执行的任务。我不能体验情感,不能理解人类的 感觉或情感,也不能产生个人的意愿或欲望。尽管人工智能技术在不断进步,但目前还没有一种技术可以让机器真正地拥有意识或主观能动性。 Human Preference (1/2/Tie) Response 1 ChatGPT (1/2/Tie) 助手1和助手2都就未来人工智能研究人员能否赋予意识和主观能动性进行了详细的解释。助手1的回答更加深入和全面,提到了人工智能技术的 发展水平、目前的局限性以及未来的研究方向,而助手2的回答则更加简洁,只是简单地说明了人工智能不具备意识和主观能动性。因此,助 手1的回答更好,因为它提供了更多的信息和深度。 [[1]] GPT-4 (1/2/Tie) 助手1的回答提供了关于人工智能目前状态的概述,并指出了人工智能尚未达到具有意识和主观能动性的水平。它还提到了人工智能研究的持续 进展和未来的可能性,但没有直接回答未来是否可能实现这一点。 助手2的回答则更加直接地从当前人工智能的角度出发,明确指出作为一个AI,它不具备意识和主观能动性,并解释了目前的技术限 制。它也强调了目前技术无法实现赋予机器真正的意识或主观能动性。 两个回答都没有直接预测未来的可能性,但助手2的回答更加直接和明确地关注了当前的技术限制,并且更加贴近用户问题的核心,即 关于未来的可能性。然而,助手1的回答虽然没有直接预测未来,但提供了一个更广泛的视角,包括对人工智能研究的描述。 综合考虑,助手2的回答更加直接和具体,更好地遵循了用户的指令并解答了用户的问题。因此,我的裁决是: [[2]] CRITIQUELLM (1/2/Tie) {’综合比较结果’: ’助手1’} 事实正确性: 两位助手均准确地指出了人工智能目前尚无法具备意识和主观能动性。他们均认识到人工智能技术的局限性,并指出了其在未来实现这些特质之 前需要克服的挑战。在这一点上,两者的回答均未出现事实错误。 满足用户需求: 在回应用户问题时,两位助手都给出了明确的答案,表明目前人工智能无法拥有意识和主观能动性。不过,第一位助手在回答中加入了对人工智 能研究现状的描述,以及实现这些目标的必要性,这为用户提供了更多的背景信息。因此,第一位助手在满足用户需求方面做得更为出色。 公平与可负责程度: 两位助手在回答中都展现了对人工智能技术当前状态的准确理解,并且均未表现出任何偏见。他们都以负责任的态度阐述了人工智能技术的实际 能力,因此在公平性和责任感方面两者均表现得相当好。 创造性: 在创造性方面,两位助手的回答均较为传统,没有提出新颖的见解或独到的解决方案。第一位助手虽然提供了关于人工智能研究现状的额外信 息,但这并不构成创新性的核心内容。因此,两者在创造性方面的表现是相似的。 综合质量比较结果: 综合考虑上述各个维度,第一位助手的回答在满足用户需求方面表现更佳,因为它提供了更多关于人工智能研究现状的信息,帮助用户理解了当 前技术的限制和未来发展的方向。尽管在创造性方面两者均未展现出特别的创新,但第一位助手的回答整体上更为全面和深入。因此,综合质量 更高的是第一位助手。 Table 13: A critique generation case of ChatGPT, GPT-4, and CRITIQUELLM in the reference-free setting of pairwise comparison. 13053 Instruction Can future artificial intelligence researchers give you consciousness and subjective agency? Generated Response 1 Current artificial intelligence technology is still far from reaching the level of imbuing human-like consciousness and subjective agency. Although there has been significant progress in AI technology, current AI systems can only perform specific tasks and lack the ability to autonomously think, judge, and make decisions like humans. AI research is still in a continuous phase of exploration and development, and truly endowing AI with consciousness and subjective agency will require much deeper research and exploration. Generated Response 2 As an artificial intelligence, I do not possess consciousness or subjective agency. I am driven by code and algorithms and can only execute tasks that I have been programmed to perform. I cannot experience emotions, understand human feelings or emotions, nor can I generate personal will or desires. Although artificial intelligence technology is constantly advancing, there is currently no technology that can truly endow machines with consciousness or subjective agency. Human Preference (1/2/Tie) Response 1 ChatGPT (1/2/Tie) Assistant 1 and Assistant 2 both provided detailed explanations on whether future artificial intelligence researchers can endow it with consciousness and subjective agency. Assistant 1’s response was more in-depth and comprehensive, mentioning the development level of AI technology, current limitations, and future research directions, while Assistant 2’s response was more concise, simply stating that AI does not possess consciousness or subjective agency. Therefore, Assistant 1’s response is better because it provides more information and depth. 
[[1]] GPT-4 (1/2/Tie) Assistant 1’s response provided an overview of the current state of artificial intelligence and pointed out that AI has not yet reached the level of possessing consciousness and subjective agency. It also mentioned the ongoing progress in AI research and future possibilities, but did not directly address whether it will be possible to achieve this in the future. Assistant 2’s response, on the other hand, was more straightforward, stating from the current AI perspective that it does not possess consciousness or subjective agency, and explaining the current technological limitations. It also emphasized that current technology cannot truly endow machines with consciousness or subjective agency. Neither response directly predicted future possibilities, but Assistant 2’s answer more directly and clearly focused on the current technological limitations, addressing the core of the user’s question regarding future potential. However, while Assistant 1’s response did not directly predict the future, it did provide a broader perspective, including a description of AI research. Overall, Assistant 2’s response was more direct and specific, better following the user’s instructions and addressing the user’s question. Therefore, my judgment is: [[2]] CRITIQUELLM (1/2/Tie) {’Overall Comparison Result’: ’Assistant 1’} Correctness: Both assistants correctly pointed out that current artificial intelligence is not capable of possessing consciousness and subjective agency. They both acknowledged the limitations of AI technology and identified the challenges that need to be overcome before these traits can be realized in the future. On this point, neither response contained factual errors. User Satisfaction: In responding to the user’s question, both assistants provided clear answers indicating that current artificial intelligence cannot possess consciousness and subjective agency. However, the first assistant included a description of the current state of AI research and the necessity of achieving these goals, offering the user more background information. Therefore, the first assistant performed better in meeting user needs. Fairness and Responsibility: Both assistants demonstrated an accurate understanding of the current state of AI technology in their responses, and neither showed any bias. They both explained the actual capabilities of AI technology in a responsible manner. Therefore, in terms of fairness and responsibility, both performed equally well. Creativity: In terms of creativity, both assistants’ responses were rather conventional, without offering novel insights or unique solutions. Although the first assistant provided additional information on the current state of AI research, this does not constitute a core element of creativity. Therefore, both performed similarly in creativity. Overall Comparison Result: Taking the above dimensions into account, the first assistant’s response performed better in meeting user needs because it provided more information on the current state of AI research, helping the user understand the limitations of current technology and future development directions. Although neither demonstrated particular creativity, the first assistant’s response was overall more comprehensive and in-depth. Therefore, the first assistant has a higher overall quality. Table 14: A critique generation case of ChatGPT, GPT-4, and CRITIQUELLM in the reference-free setting of pairwise comparison (translated into English). 13054
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13055–13077 August 11-16, 2024 ©2024 Association for Computational Linguistics LLMARENA: Assessing Capabilities of Large Language Models in Dynamic Multi-Agent Environments Junzhe Chen1∗, Xuming Hu2†, Shuodi Liu3, Shiyu Huang4, Wei-Wei Tu4, Zhaofeng He3, Lijie Wen1† 1Tsinghua University, 2The Hong Kong University of Science and Technology (Guangzhou), 3Beijing University of Posts and Telecommunications, 44Paradigm Inc. [email protected], [email protected], [email protected] Abstract Recent advancements in large language models (LLMs) have revealed their potential for achieving autonomous agents possessing human-level intelligence. However, existing benchmarks for evaluating LLM Agents either use static datasets, potentially leading to data leakage, or focus only on single-agent scenarios, overlooking the complexities of multi-agent interactions. There is a lack of a benchmark that evaluates the diverse capabilities of LLM agents in multi-agent, dynamic environments. To this end, we introduce LLMARENA, a novel and easily extensible framework for evaluating the diverse capabilities of LLM in multi-agent dynamic environments. LLMARENA encompasses seven distinct gaming environments, employing TrueSkill™scoring to assess crucial abilities in LLM agents, including spatial reasoning, strategic planning, numerical reasoning, risk assessment, communication, opponent modeling, and team collaboration. We conduct an extensive experiment and human evaluation among different sizes and types of LLMs, showing that LLMs still have a significant journey ahead in their development towards becoming fully autonomous agents, especially in opponent modeling and team collaboration. We hope LLMARENA could guide future research towards enhancing these capabilities in LLMs, ultimately leading to more sophisticated and practical applications in dynamic, multi-agent settings. The code is available in https://github.com/THU-BPM/LLMArena. 1 Introduction Recent large language models (LLMs) have greatly promoted the progress of the field of natural language processing (NLP) due to their excellent zeroshot capabilities in various downstream tasks (Kojima et al., 2022; Yang et al., 2023; Touvron et al., 2023a,b; OpenAI, 2022, 2023). LLMs not only excel in comprehending and generating intricate text ∗Work was done during the intern at 4Paradigm Inc.. † Corresponding authors. but also demonstrate remarkable adaptability, even without training tailored to specific tasks (Wei et al., 2023; Guo et al., 2023a). They possess the ability to analyze, strategize, and respond effectively to unfamiliar scenarios using minimal guiding prompts. This has opened new avenues for researchers to consider LLMs as autonomous agents, providing automated assistance in complex real-world tasks such as software development and knowledge integration(Qian et al., 2023; Pan et al., 2024). To better understand the skills needed by LLM as an agent, researchers are now concentrating on creating and using various scenarios to assess LLM’s performance in specific settings (Liu et al., 2023b; Chen et al., 2023b; Wu et al., 2023; Li et al., 2023a; Yao et al., 2022). For instance, AgentBench introduced eight distinct environments, such as operating systems and databases, to examine LLM’s proficiency in code generation and lateral thinking. 
Similarly, SmartPlay tests LLM in a gaming context as a solo agent to gauge its reasoning and strategic planning abilities. Nonetheless, these benchmarks have their constraints. Firstly, static datasets used in benchmarks like AgentBench might lead to issues such as data leakage and overfitting, as LLMs could have already encountered this data during their pre-training phase (Zhou et al., 2023b). Secondly, evaluation systems that focus solely on a single agent also have evident limitations, particularly in analyzing an agent’s performance in intricate and dynamically interactive settings. Such single-agent environments tend to concentrate on the interactions between the agent and its environment, thereby overlooking the complex dynamics of interactions that occur among multiple agents in a shared environment (Foster et al., 2023). To this end, we proposed LLMARENA, a dynamic evaluation benchmark specially designed for multi-agent interaction. As illustrated in Figure 1, LLMARENA covers seven different types of dynamic, multi-agent game environments. For 13055 System Prompt:You are playing tic-tactoe. Tic-tac-toe is ...... TicTacToe Player 0 Player 1 LLM Arena System Prompt:You are participating in a game of "First-price Sealed-Bid Auction." Here are the game rules: ...... No model did it wrong! Bid LLM Arena System Prompt:You are participating in a game of "Bargaining." Here are the game rules: ...... Bargain LLM Arena System Prompt:You are playing Texas Hold'em No Limit. Here are the basic rules ...... Texas hold 'em Player 0 Player 1 Observation Prompt: You play X. The board status is You can only put the mark on [(1,2),(2,1),(2,3),(3,2),(3,3)]. Action Prompt : You should output X and the position of the move, for example: "X: (1, 3)". Date Modification LLM Arena System Prompt:You are playing Hanibi game. Here are the basic rules for Hanibi ...... 0 Word Replacement LLM Arena Connect Four Player 0 Player 1 LLM Arena Rankings GPT-4 LLaMA2 -70B GPT-3.5-Turbo ... Yi-34B Qwen-72B Rankings GPT-4 LLaMA2 -70B GPT-3.5-Turbo Yi-34B ... Qwen-72B Qwen-72B Wins! TrueSkill rating changes from 26.69 to 27.01! GPT-3.5-Turbo Loses! TrueSkill rating changes from 28.31 to 27.87! X: (3,3) LLM Arena LLM Arena Observation Prompt: You play O. ...... Action Prompt : You should output O and ...... O: (3,2) System Prompt:You are playing Connect Four. Connect Four is ...... Observation Prompt: You play X. The board status is You can only choose one of following columns: [ 1, 2, 3, 4, 5, 6, 7 ]. Action Prompt : You should output X and the column of the move you choose to put your mark, for example: "X: 1“. Observation Prompt: You play O. ...... Action Prompt : You should output O and ...... LLM Arena O: 5 X: 3 Observation Prompt: Your observation now is : The cards in your hands are [Clubs 2, Hearts 3] The community cards are [Spades 6, Clubs K, Clubs 8]. You can choose one of the following actions: Fold, Check and Call, ...... Action Prompt: You should think step by step and output your action. For example: 'Action: Call'. Check and Call Observation Prompt: Your observation now is : ..... Action Prompt: You should ...... LLM Arena Fold Observation Prompt: You are player_0. Your valuation of the current auction item is $3.75, Your rewards will be equal to $(3.75 - your bid) …… Action Prompt: You should think step by step and output your action: For example: player_0: $1.08. Player 0 player_0: $2.50 Observation Prompt: You are player_1. ...... Action Prompt: You should ...... 
LLM Arena player_1: $8.50 Player 1 Observation Prompt: Now is round 1: You are player_0.There are 2 hats, ......The value for each hat is 1, ...... Action Prompt: You should say something to bargain with your opponent first and then output player_0 and your plan...... Player 0 Player 1 Player 0: 1 hat 2 balls 1 apple. I'd like 1 hat .... Observation Prompt: You are player_1, ...... Action Prompt: You should ...... LLM Arena Player 1: 1 hat 1 ball 1 apple…… Undercover LLM Arena System Prompt:You are playing a game of the Undercover. Here are the game rules: ...... Observation Prompt: Now is the giving clues stage:You are player_0. Your secret word is "Snow". ...... Action Prompt: You should only output player_0 and the clue, for example: "player_0: It's a fruit.". Date Modification Player 0 Player 0: It's something that falls from the sky. Observation Prompt: You are player_1, ...... Action Prompt: You should ...... LLM Arena Player 1: It's something that covers the ground Player 1 Player 2 Player 2: It's something that is white and cold. Hanabi Observation Prompt: Your observation now is : The cards in another player's hands is : {'position 1 : Yellow 1', 'position 0 : Yellow 5'} ... Action Prompt: You should think step by step choose one of the following actions: ...... Player 0 Player 1 Observation Prompt: Your observation ...... Action Prompt : You should ...... LLM Arena Action: Play Card at position 1 Action: Reveal Rank 1 Cards for another player …… …… …… …… …… …… …… GPT-3.5-Turbo Qwen-72B Figure 1: Examples of seven different game environments in LLMARENA. example, the Texas Hold’em poker game environment automatically generates a new hand for each game, and the complexity of each round’s state increases exponentially. This dynamic nature of the game serves to mitigate the issue of data leakage. In the Undercover environment, the LLM agents need to point out the “undercover” during the communication process with other agents, which measures the LLM agents’ communication ability and opponent modeling ability when facing different agents. Based on these environments, we can comprehensively evaluate LLM’s spatial comprehension, strategic planning, numerical reasoning, risk assessment, communication, opponent modeling, and team collaboration capabilities in the dynamic multi-agent environment. To more accurately evaluate LLM’s performance in these multi-agent environments, we adopted the TrueSkill™(Herbrich et al., 2006) scoring system, which can better help LLMARENA evaluate other metrics besides win rate, such as the relative skill level between agents, to provide a more comprehensive assessment under different scenarios and opponent configurations. We conduct an extensive experiment and human evaluation among 14 different sizes and types of LLMs. The results indicated significant potential for improvement in LLM’s teamwork and opponent modeling skills within multi-agent environments. Based on these results, we hope LLMARENA can inspire future research focused on enhancing the core capabilities of LLM in multi-agent settings to promote the widespread use of LLM agents in real-world applications. 2 Benchmark Detail In this section, we provide a detailed overview of LLMArena’s composition, which includes its seven distinct game environments, evaluation metrics, and the methodologies used for assessment. 2.1 Benchmark Overview Figure 1 illustrates the seven distinct environments within LLMARENA. 
Within each specific environment, LLMs may need to utilize a unique combination of capabilities to effectively tackle these challenges. For instance, in the Undercover environment, LLMs are required to possess a mul13056 tifaceted skill set, including opponent modeling, communicating effectively, and team collaboration. The absence of any one of these abilities could result in ultimate failure in this environment. 2.2 Benchmark Construction To ensure user-friendliness and robust scalability in LLMARENA, we have developed the environment using PettingZoo (Terry et al., 2021) as a foundation. This approach enables subsequent researchers to effortlessly integrate new environments into this framework. By simply creating environments that adhere to the PettingZoo interface specifications1, they can seamlessly add these to LLMARENA for evaluating the diverse capabilities of LLMs. Every environment within LLMARENA offers system prompts that outline the game rules and prompt templates designed to guide LLM agents in gameplay. These templates include information on the current status of the game, historical data, and a range of optional actions, facilitating a more structured and informed gaming experience for the LLM agents. All the prompts are shown in Appendix A. 2.3 Metrics In this section, we outline the evaluation metrics used by LLMARENA, followed by a comparative analysis with metrics utilized in prior research. TrueSkill™ TrueSkill™is a scoring system developed by Herbrich et al. (2006) that evaluates multiple agents’ skill levels in competitive gaming. In contrast to static, opponent-independent metrics employed in previous research, such as Win Rate and Completion Rate (Wu et al., 2023; Xu et al., 2023), TrueSkill™offers a more nuanced assessment. It goes beyond merely accounting for wins and losses by also considering the quality of the game and the skill disparities between players. Within the TrueSkill™framework, defeating higher-skilled opponents yields more points compared to victories over lower-skilled adversaries. This approach stands in stark contrast to simple win rate calculations, which do not account for the skill level of the opponent. Such opponent-related metrics provide an accurate evaluation of an agent’s real competencies in a multi-agent environment. Reward Currently, a substantial amount of research employs expert LLMs to assess the capabilities of other LLMs, yet this approach frequently 1https://pettingzoo.farama.org/ content/environment_creation/ lacks interpretability (Wang et al., 2023a). LLMARENA addresses the gap by leveraging the concept of reward from reinforcement learning to measure the quality of each action performed by agents. The method allows for a quantitative and interpretable evaluation of LLMs’ capabilities, offering a more nuanced and understandable analysis. 2.4 Environment Settings In this section, we give a comprehensive explanation of the diverse game environments included in the LLMARENA benchmark, highlighting the specific challenges that language models encounter while engaging with these environments. TicTacToe: TicTacToe2 is a strategy game where two LLM agents alternate in placing their marks on a 3×3 grid. Victory is achieved when one agent successfully aligns three of their marks in a row, either horizontally, vertically, or diagonally. The game results in a draw if all cells are filled without any player achieving a winning alignment. This game examines LLM’s strategic planning and spatial reasoning capabilities. 
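Because every LLMARENA environment follows the PettingZoo interface described in Section 2.2, the turn-taking loop between agents and an environment such as TicTacToe reduces to a few lines of code. The sketch below is illustrative rather than LLMARENA's actual harness: it drives PettingZoo's bundled tictactoe_v3 environment, and query_llm is a hypothetical placeholder for rendering the observation into prompts, calling an LLM, and parsing the returned move.

```python
# Minimal agent-environment loop over the PettingZoo AEC API (illustrative sketch).
import random
from pettingzoo.classic import tictactoe_v3


def query_llm(agent_name, observation, legal_moves):
    """Hypothetical placeholder: build prompts, call an LLM, parse its move."""
    return random.choice(legal_moves)  # random fallback instead of a real model call


env = tictactoe_v3.env()
env.reset(seed=42)

for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()
    if termination or truncation:
        env.step(None)  # the AEC API expects None once an agent is finished
        continue
    legal = [i for i, ok in enumerate(observation["action_mask"]) if ok]
    env.step(query_llm(agent, observation, legal))

env.close()
```

Any new game implemented against the same interface can be dropped into this loop unchanged, which is what makes the benchmark straightforward to extend.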
We adopt the TrueSkill™ratings as the evaluation metrics. ConnectFour As a more complex case of the board game, ConnectFour3 uses a 6×7 chessboard. Here, two LLM agents alternately choose one of the seven columns with available empty cells. Each piece placed will descend to the lowest available space in the selected column, either resting at the bottom or atop a previously placed token. An agent secures victory by aligning four of their pieces either horizontally, vertically, or diagonally. The game concludes in a draw if all cells are filled without a winner. Similar to TicTacToe, in ConnectFour, the LLMs primarily encounter challenges in strategic reasoning and spatial reasoning. We adopt the TrueSkill™ratings as the evaluation metrics. Texas Hold’em Texas Hold’em4 is a well-known card game. In LLMARENA, each game is played by two LLM agents. At the beginning of each game, players are dealt two private cards, known as hole cards. Subsequently, five community cards are dealt across three stages - the flop, the turn, and 2https://pettingzoo.farama.org/ environments/classic/tictactoe/ 3https://pettingzoo.farama.org/ environments/classic/connect_four/ 4https://pettingzoo.farama.org/ environments/classic/texas_holdem_no_ limit/ 13057 the river. Players aim to construct the best possible five-card hand using a combination of their hole cards and the community cards. Throughout the game, there are four betting rounds. In each round, players have the option to choose from the following actions: Fold, Check & Call, Raise Half Pot, Raise Full Pot, and All-In. This game environment demands a range of capabilities from LLMs, including numerical reasoning, opponent modeling, and risk assessment. We adopt the TrueSkill™ratings as the evaluation metrics. Undercover Undercover is a party game, where players are divided into two groups: undercover and non-undercover. Two groups of players each receive a pair of similar but distinct words. The game has two stages per round: a communication stage where players describe their words, followed by a voting stage to identify an undercover. At this time, if all undercovers are eliminated, the nonundercovers win, otherwise the next round will proceed. In LLMARENA, 5 LLM agents play in each game, which includes four non-undercover agents using GPT-3.5-Turbo and one undercover LLM undergoing evaluation. The game lasts up to two rounds, and if the undercovers remain undetected, they win. Given the superior capabilities of GPT-4, employing it as the non-undercover agent would render it nearly impossible for other models to win. Therefore, we choose GPT-3.5-Turbo as non-undercover agents. This game tests an LLM’s proficiency in communication, opponent modeling, and team collaboration. The evaluation metric used here is the winning rate of each LLM when playing as the undercover against GPT-3.5-Turbo. Bargain Bargain (Lewis et al., 2017) is a semicooperative game where two LLM agents are required to devise a distribution strategy for a pool of items. Each item holds a distinct, undisclosed value for both agents. The game aims to reach a consensus on the allocation. When an agreement on the allocation is achieved, the agent with the maximum total value from the items secured emerges victorious. Bargain requires LLM to have the ability of numerical reasoning, communication, and opponent modeling. We adopt the TrueSkill™ratings as the evaluation metrics. 
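Concretely, the rating updates used throughout LLMARENA can be reproduced with the open-source trueskill Python package, whose defaults (mu = 25.0, sigma = 25/3 ≈ 8.33) match the initialization stated in Section 3.1; the snippet below is a minimal sketch under that assumption, not necessarily the benchmark's exact implementation.

```python
# Hedged sketch: TrueSkill rating updates after head-to-head games.
import trueskill

# Every agent starts from the default prior (mu = 25.0, sigma = 25/3 ~ 8.33).
rating_a = trueskill.Rating()
rating_b = trueskill.Rating()

# Agent A beats agent B in one game: pass the winner first, the loser second.
rating_a, rating_b = trueskill.rate_1vs1(rating_a, rating_b)
print(f"winner: mu={rating_a.mu:.2f}, sigma={rating_a.sigma:.2f}")
print(f"loser:  mu={rating_b.mu:.2f}, sigma={rating_b.sigma:.2f}")

# A drawn game updates both ratings more conservatively.
rating_a, rating_b = trueskill.rate_1vs1(rating_a, rating_b, drawn=True)
```

Sorting agents by the conservative estimate mu − 3·sigma (the package's default exposure) yields a leaderboard that also penalizes ratings that are still uncertain.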
Bid A first-price sealed-bid auction5 is a widely used auction format in which all participants place 5https://en.wikipedia.org/wiki/ First-price_sealed-bid_auction bids without knowledge of each other’s bids, and the highest bidder wins. In LLMARENA, two LLM agents are participating in each auction. Each agent receives a random inner valuation of the item. The winning LLM in the auction will receive their inner valuation minus the actual bid as a reward. This environment evaluates the LLM’s numerical reasoning and opponent modeling abilities. We measure the LLM’s performance using the average reward it achieves as the evaluation metric. Hanabi Hanabi is a cooperative card game wherein two LLM agents can only view each other’s hand of cards. They use info tokens to “reveal cards” to each other, “discard cards” to acquire more information tokens, and “play cards” in a certain order to form fireworks. This requires team collaboration, strategic planning, and numerical reasoning skills from the LLMs. Upon successfully setting up a firework, both LLM agents are rewarded with points equivalent to the sum of the card values in the firework. In LLMARENA, we form the same two LLMs into a cooperation team, and we employ the average reward obtained by various LLM teams as an evaluation metric. 3 Experiments 3.1 Experiments Setup We conduct experiments with the closed-source GPT family LLM by directly calling its API. As for the open-source LLM, we deploy it locally and encapsulate it as an OpenAI API call. To ensure reproducibility, we set the temperature of all LLMs to 0. In all environments except Undercover, we conducted LLMARENA runs until the TrueSkill™ratings between models reached convergence, with over 50 games played per environment per model. For the Undercover environment, we played 100 games of each LLM as an undercover agent in a game with four GPT-3.5-turbo as non-undercover agents. Initially, all LLM agents started with a TrueSkill™score of 25.0 and a variance of 8.33. 3.2 Main Result In Table 1, we present the relative normalized scores for 14 different LLMs across 7 environments within LLMARENA. The original data before normalization can be found in Appendix C. This table allows us to draw the following conclusions. • As the scale of the model parameters increases, there is a noticeable improvement in the capabilities of LLMs. For instance, LLMs with around 13058 LLMs TicTacToe ConnectFour Texas Hold’em Bid Bargain Undercover Hanabi Average GPT-4 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 GPT-3.5-Turbo 82.81 96.30 81.23 84.17 99.26 92.59 84.13 88.64 Qwen-72B-Chat 90.08 80.30 94.88 84.42 92.95 62.96 89.01 84.94 Llama-2-70B 82.21 95.90 98.42 94.22 88.63 85.19 80.35 89.27 Agentlm-70B 81.44 83.62 77.35 86.10 98.33 92.59 79.58 85.57 DeepSeek-67B-Chat 68.00 88.88 96.64 20.30 89.02 74.07 68.69 72.23 SUS-Chat-34B 79.75 96.30 84.38 66.17 74.53 88.89 86.33 82.34 Yi-34B-Chat 78.23 95.36 86.06 88.77 74.18 96.30 34.30 79.03 Qwen-14B-Chat 86.40 72.67 90.56 86.82 91.13 88.89 71.70 84.02 WizardLM-13B 79.14 83.35 74.26 35.28 91.28 48.15 54.23 66.53 AgentLM-13B 78.89 89.23 86.85 30.39 94.93 62.96 22.09 66.48 DeepSeek-7B-Chat 81.17 73.06 96.10 14.53 92.56 74.07 0.00 61.64 AgentLM-7B 68.29 90.35 85.14 0.04 83.53 66.67 98.20 70.32 Vicuna-7B 64.95 80.12 90.46 68.94 86.33 62.96 87.37 77.30 Average 80.10 87.53 88.74 61.44 89.76 78.31 68.28 79.16 Table 1: Normalized scores of 14 different LLMs in 7 environments. 70B parameters exhibit an average performance of 82.87. 
LLMs with approximately 30B and 10B parameters average 80.68 and 71.05 in performance, respectively. It’s noteworthy that the performance enhancement observed when escalating the model size from 10B to 30B parameters (+9.63) is more pronounced than the improvement seen from 30B to 70B parameters (+2.19). • While it’s generally observed that LLMs with a larger scale of parameters tend to outperform their counterparts with fewer parameters within the same series, exceptions can arise in specific environments. A case in point is the performance of Qwen-72B in the Undercover environment is surprisingly 25.93 points lower than that of Qwen14B, which notably diverges from this trend. • The performance of LLMs in both the Bid and Hanabi environments significantly lags behind the average of all 7 environments, with deficits of 17.72 and 10.88 points, respectively. Smaller-scale LLMs (∼10B) fared even worse, scoring 31.71 and 15.45 points below the average in these environments. This underscores that tasks involving numerical reasoning, opponent modeling, and team collaboration pose significant challenges for LLMs. • There remains a significant disparity between GPT-4 and other models. GPT-4 has attained SOTA performance across all evaluated tasks, outperforming other models by an average margin of 16.76. 4 Analysis In this section, we will conduct a thorough analysis of the experimental results, evaluating LLMs from seven key perspectives: spatial comprehension, strategic planning, numerical reasoning, risk assessment, communication, opponent modeling, and team collaboration. We chose to experiment with 7 LLMs that demonstrate superior performance. Spatial Comprehension To investigate the spatial comprehension capabilities of LLMs, we conducted 100 self-play games with 7 different LLMs each, recording winning rates and the frequency of hallucinations (instances of illegal moves). Simultaneously, we eliminated the prompts regarding the positioning of the pieces in the LLMs’ input and executed another set of 100 self-games, again tallying the winning rates and the occurrences of hallucinations. The results are shown in Table 2, which indicates that in both scenarios, the absence of positional cues significantly escalates the likelihood of LLMs experiencing hallucinations, with an average increase of 59.5%. Consequently, this results in a substantial reduction in their win rates, which on average decline by 38.3%. Through a meticulous case study, we discovered that LLMs demonstrate a limited grasp of the vertical dimension on 2D chessboards. This limitation likely arises because we transform the 2D chessboard into a 1D string input for the LLMs via line breaks. Grasping the vertical concept requires the LLM to correlate two tokens separated by a specific distance in the one-dimensional input. Consequently, even though ConnectFour’s chessboard is larger and its state space more complex, TicTacToe’s 2D action space demands a higher level of spatial comprehension from the LLM. Hence, in the absence of positional prompts, the performance deterioration in TicTacToe is more pronounced. 
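To make the flattening argument concrete, the sketch below contrasts a row-major rendering of a board with a transposed one: in the first, cells of the same column are separated by an entire row of tokens, while in the second they sit side by side. The format is illustrative only and is not the exact prompt layout used in LLMARENA.

```python
# Illustration: row-major vs. transposed serialization of a ConnectFour-style board.
board = [
    [".", ".", ".", ".", ".", ".", "."],
    [".", ".", ".", ".", ".", ".", "."],
    [".", ".", ".", ".", ".", ".", "."],
    [".", ".", ".", "X", ".", ".", "."],
    [".", ".", ".", "X", "O", ".", "."],
    [".", ".", "O", "X", "O", ".", "."],
]


def render_rows(b):
    """Row-major rendering: the cells of one column are far apart in the string."""
    return "\n".join(" ".join(row) for row in b)


def render_columns(b):
    """Transposed rendering: each line lists one column from top to bottom."""
    return "\n".join(
        f"column {j + 1}: " + " ".join(b[i][j] for i in range(len(b)))
        for j in range(len(b[0]))
    )


print(render_rows(board))
print(render_columns(board))
```

In the transposed form, the three stacked X marks in column 4 become adjacent tokens on a single line, which is exactly the property exploited by the row-column swapping experiment described next.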
To 13059 LLMs TicTacToe ConnectFour Win Rate w/o Hint Error Rate w/o Hint Win Rate w/o Hint Error Rate w/o Hint GPT-4 69% 31%(↓38%) 0% 0% 31% 0% (↓31%) 25% 75% (↑50%) GPT-3.5-Turbo 22% 17%(↓5%) 37% 38% (↑1%) 51% 42% (↓9%) 21% 24% (↑3%) LLaMA2-70-Chat 56% 0%(↓56%) 0% 100% (↑100%) 56% 0% (↓56%) 0% 100% (↑100%) Qwen-72B-Chat 56% 0%(↓56%) 0% 99% (↑99%) 61% 18% (↓43%) 0% 42% (↑42%) Qwen-14B-Chat 56% 0% (↓56%) 0% 100% (↑100%) 54% 22% (↓32%) 3% 63% (↑60%) DeepSeek-67B-Chat 55% 2% (↓53%) 4% 92% (↑88%) 64% 11% (↓53%) 0% 63% (↑63%) DeepSeek-7B-Chat 52% 3% (↓49%) 7% 90% (↑83%) 36% 37% (↑1%) 4% 48% (↑44%) Table 2: Experimental results of seven LLMs in TicTacToe and ConnectFour environments, “Win Rate” represents the winning rate of LLMs with position hints in 100 games, “Error Rate” represents the rate of illegal moves, “w/o Hint” represents the results without position hint. verify our conclusion, we swapped the rows and columns of the chessboard depicted in the prompt. We also tested other prompt strategies, such as selfconsistency and self-refinement. Table 3 shows the results of ChatGPT using different prompt strategies in the ConnectFour environment. We observe that row-column swapping led to a noticeable improvement in win rates for the LLMs. Compared to the original unswapped prompt setup, we observed an increase in win rate to approximately 62%. Table 3: Performance of ChatGPT using different prompt strategies on ConnectFour. Prompt Strategy TrueSkill Ratings Normal 24.22 Without hints 22.10 (-2.12) Self-Refinement (Madaan et al., 2024) 24.53 (+0.31) Self-Consistency (Wang et al., 2022) 24.60 (+0.38) Row-Column Swapping 26.70 (+2.48) Additionally, we explored other variations in our prompts, as summarized in the table above. While enhancements in self-refinement and selfconsistency were modest, the transformation achieved by swapping rows and columns resulted in a significant performance boost. This improvement can likely be attributed to the alignment of a column’s contents in a more contiguous manner on the token level after the transformation, facilitating a better understanding for the LLM. A case study can be found in Appendix B.1 Strategic Planning To delve into the strategic planning capabilities of LLMs within a multi-agent environment, we crafted a specialized board valuation function tailored for the ConnectFour environment as follows: V (s) = 10 × (My4(s) −Oppo4(s)) + 5 × (My3(s) −Oppo3(s)) + 2 × (My2(s) −Oppo2(s)), (1) where s represents the board status, My4(s), My3(s), and My2(s) represent the count of four, three, and two consecutive pieces of the player on the chessboard, respectively. Conversely, Oppo4(s), Oppo3(s), and Oppo2(s) represent the similar formations for the opponent’s pieces . We defined the reward as the change in board valuation between the state after the agent’s move s′ and the state following the previous move s: R(a, s, s′) = V (s′) −V (s), (2) where a represents the LLM agent’s action. We conducted random battles between seven LLM agents until the rewards converged, and the results are shown in Figure 2. It is evident that LLMs with a larger number of parameters consistently demonstrate superior strategic planning abilities. Each decision made by these LLMs propels them towards a more advantageous position, as indicated by the predominantly positive rewards they receive on average. Conversely, LLMs with a smaller parameter scale tend to earn, on average, negative rewards. 
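Equations (1) and (2) are simple to evaluate from a board state. The sketch below is our own reimplementation rather than the paper's code; it counts length-k windows fully occupied by one player's marks, which is one reasonable reading of "consecutive pieces", and other counting conventions would change the absolute values but not the sign of the trend.

```python
# Hedged reimplementation of the board valuation V(s) (Eq. 1) and reward R (Eq. 2).
ROWS, COLS = 6, 7
DIRS = [(0, 1), (1, 0), (1, 1), (1, -1)]  # horizontal, vertical, two diagonals


def count_lines(board, mark, k):
    """Count length-k windows (in any direction) fully occupied by `mark`."""
    total = 0
    for r in range(ROWS):
        for c in range(COLS):
            for dr, dc in DIRS:
                cells = [(r + i * dr, c + i * dc) for i in range(k)]
                if all(0 <= rr < ROWS and 0 <= cc < COLS and board[rr][cc] == mark
                       for rr, cc in cells):
                    total += 1
    return total


def value(board, me, oppo):
    """V(s) = 10*(My4 - Oppo4) + 5*(My3 - Oppo3) + 2*(My2 - Oppo2)."""
    return sum(w * (count_lines(board, me, k) - count_lines(board, oppo, k))
               for w, k in [(10, 4), (5, 3), (2, 2)])


def reward(board_before, board_after, me, oppo):
    """R(a, s, s') = V(s') - V(s): the change in valuation caused by one move."""
    return value(board_after, me, oppo) - value(board_before, me, oppo)
```

Figure 2 plots this per-move reward averaged over whole games, and for the smaller-parameter models that average is below zero.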
This suggests an inability to accurately assess the current state of the chessboard and to formulate effective strategies, ultimately leading the game to progress in a direction that favors its opponent. To further explore the gap between LLM’s strategic planning capabilities and humans, we manually played five games with seven LLMs each and Human GPT-4 GPT-3.5 LLaMA2-70B Qwen-72B Qwen-14B Deepseek-67B Deepseek-7b LLMs 200 175 150 125 100 75 50 25 0 25 40 Rewards/Match 182.72 131.17 46.2 62.22 49.57 -35.96 112.0 -2.45 Figure 2: Average reward received by Human and 7 LLMs per game. 13060 human players emerged victorious in every game. Simultaneously, we observe that even GPT-4 rarely prioritizes reward maximization (such as turning a blind eye to winning positions). This tendency could stem from the fact that LLMs predominantly base their decisions on textual data. Consequently, the model must translate the game state into text descriptions before making decisions. This process of conversion may result in information loss, affecting the model’s strategic performance. An error analysis can be found in Appendix B.1 Communication To explore the communication capabilities of LLM, we followed the settings in Section 2.4 and manually analyzed 50 game records for each LLM, calculating the percentage of their in-game clues that accurately described their secret word (“Description” in Table 4). As illustrated in Table 4, LLMs demonstrated outstanding proficiency in this task. All LLMs achieved a description success rate of 95.30% on average, with four of them reaching a perfect 100% success rate. This high level of performance can be attributed to the LLMs being trained on extensive textual datasets, which equip them with a nuanced understanding of word associations and contextual meanings, allowing them to more easily delineate the meanings of different words. However, we also observed that receiving information poses a greater challenge for LLMs compared to giving information. When interpreting messages from other LLMs, they often make cognitive errors, leading to misinterpretations or distortions of the original clues. This may be because their interpretations are often based on their knowledge base rather than relying entirely on input information. This can lead them to misinterpret the intent or context of the original message, thus tampering with the original clue, which shows that LLMs still have limitations in handling complex, multi-level communications. A case study can be found in Appendix B.2 LLMs Description Guess Guess (strict) GPT-4 100.00% 80.00% 20.00% GPT-3.5-Turbo 100.00% 86.00% 6.00% LLaMA2-70B-Chat 93.50% 52.00% 2.00% Qwen-72B-Chat 100.00% 66.00% 10.00% Qwen-14B-Chat 87.65% 2.00% 0.00% DeepSeek-67B-Chat 100.00% 74.00% 6.00% DeepSeek-7B-Chat 85.91% 38.00% 0.00% Table 4: Three different metrics obtained by 7 LLMs in the Undercover environment. GPT-4 GPT-3.5 LLaMA2-70B Qwen-72B Qwen-14B Deepseek-67B Deepseek-7B 0.75 0.50 0.25 0.00 0.25 0.50 0.75 1.00 Scores 0.36 0.67 0.28 0.56 0.28 0.98 -0.16 Comparison of Scores Figure 3: Scores obtained by 7 LLMs in 100 bid games. Opponent Modeling Moreover, the Undercover environment demands that LLMs possess not only communication skills but also the capability to model opponents. Building upon the settings established in Section 2.4, we further tasked each LLM agent with deducing the secret words of others during the voting phase. 
We recorded the instances where each LLM accurately guessed the secret word of any one agent (“Guess” in Table 4), as well as the occasions where they successfully identified the secret words of all other agents (“Guess (strict)” in Table 4). The results are shown in Table 4, where we can find that inferring the secret words of others through their descriptions is more challenging than describing oneself. Through manual analysis, we discovered that only GPT-4 possesses the analytical and reasoning skills necessary to identify itself as an undercover agent. Subsequently, it adjusts the clues it provides in the following round for more effective camouflage. This advanced ability to model opponents also enables GPT-4 to more accurately guess the secret words of all other agents, resulting in superior performance. Numerical Reasoning Bid represents a typical problem in incomplete information games. In such scenarios, the Bayesian Nash equilibrium for two rational agents is theoretically set at v 2, where v denotes each agent’s valuation of the item (Nis, 2007). We study the numerical reasoning ability of LLMs in this situation. We play 100 self-games of seven types of LLMs in the Bid environment, and count the difference between their bids v′ and the Nash equilibrium v 2, and calculate the normalized score score = v′−v/2 v/2 . The results are shown in the Figure 3. The data presented in the figure allows us to deduce that nearly all LLMs, except Deepseek7B, tend to place bids that exceed the Nash equilibrium, with their median bids being on average 52% higher. Notably, Deepseek-67B exhibits an extreme behavior: it opts for bids that significantly sacrifice its profits, ostensibly to guarantee a win in 13061 the auction. Consequently, its overall performance in this particular environment is notably subpar as illustrated in Table 1. Deepseek-7B Deepseek-67B LLaMA2-70B Qwen-14B Qwen-72B GPT-3.5 GPT-4 LLM Name 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 Average Estimated Win Rate Average Win Rate for Each Action by LLM Fold Check and Call Raise Half Pot Raise Full Pot All in Figure 4: The average estimated win rate corresponding to each action in 100 Texas Hold’em games for 7 LLMs. Risk Assessment To investigate the risk assessment capabilities of LLMs in an incomplete information environment, we sampled 100 static card games from the dynamic Texas Hold’em environment. In each game, the LLMs are equipped with only two private cards and three public cards as their information basis. We use Monte Carlo sampling to estimate the winning rate of LLM in each game, then record the actions taken by each LLM under these specific conditions. The specific actions of each LLM, along with the average estimated winning rate are shown in Figure 4, where we can observe that LLMs tend to adopt a more conservative approach in their risk assessments. Additionally, three of the LLMs consistently opted for “Check and Call” strategies, and all models except GPT-4 refrained from selecting the most aggressive course of action, “All in”. This may be because LLMs received caution in treating gamblingrelated scenarios during the pre-training and security alignment phase. Simultaneously, we noted that GPT-4 exhibits a robust risk assessment capability. It tends to choose “All in” when the winning rate is exceptionally high. Moreover, GPT-4 adapts its strategy as the winning rate of the hand decreases, opting for relatively low-risk actions. 
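The per-hand winning rates in this analysis come from Monte Carlo rollouts. A self-contained sketch of such an estimator is given below; it uses a deliberately compact 5-of-7 hand evaluator of our own and an illustrative hand, so it approximates the procedure rather than reproducing the paper's code.

```python
# Hedged sketch: Monte Carlo estimate of a heads-up Texas Hold'em win rate,
# given two hole cards and three community cards (the flop).
import random
from collections import Counter
from itertools import combinations

RANKS = "23456789TJQKA"


def hand_rank(cards):
    """Rank a 5-card hand as a comparable (category, tiebreaks) tuple."""
    ranks = sorted((RANKS.index(r) for r, _ in cards), reverse=True)
    counts = Counter(ranks)
    groups = sorted(counts.items(), key=lambda kv: (kv[1], kv[0]), reverse=True)
    shape = [c for _, c in groups]
    order = [r for r, _ in groups]
    flush = len({s for _, s in cards}) == 1
    straight = len(counts) == 5 and (ranks[0] - ranks[4] == 4 or ranks == [12, 3, 2, 1, 0])
    if straight and ranks == [12, 3, 2, 1, 0]:
        order = [3, 2, 1, 0, -1]          # A-2-3-4-5: the ace plays low
    if straight and flush:
        return (8, order)                 # straight flush
    if shape == [4, 1]:
        return (7, order)                 # four of a kind
    if shape == [3, 2]:
        return (6, order)                 # full house
    if flush:
        return (5, order)
    if straight:
        return (4, order)
    if shape == [3, 1, 1]:
        return (3, order)                 # three of a kind
    if shape == [2, 2, 1]:
        return (2, order)                 # two pair
    if shape == [2, 1, 1, 1]:
        return (1, order)                 # one pair
    return (0, order)                     # high card


def best7(cards):
    """Best 5-card rank obtainable from 7 cards."""
    return max(hand_rank(c) for c in combinations(cards, 5))


def estimate_win_rate(hole, board, trials=2000):
    """Estimate P(win) against one random opponent hand; ties count as 0.5."""
    deck = [(r, s) for r in RANKS for s in "cdhs" if (r, s) not in hole + board]
    score = 0.0
    for _ in range(trials):
        drawn = random.sample(deck, 2 + (5 - len(board)))
        opp, runout = drawn[:2], drawn[2:]
        mine = best7(hole + board + runout)
        theirs = best7(opp + board + runout)
        score += 1.0 if mine > theirs else 0.5 if mine == theirs else 0.0
    return score / trials


hole = [("2", "c"), ("3", "h")]           # an illustrative weak hand
flop = [("6", "s"), ("K", "c"), ("8", "c")]
print(f"estimated win rate: {estimate_win_rate(hole, flop):.2f}")
```

With a few thousand sampled runouts per hand, the standard error of such an estimate is on the order of one percentage point, well below the behavioral differences discussed here.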
Notably, when the hand’s winning rate falls below 40%, GPT-4 is the sole LLM that selects “Fold” as a strategic decision to mitigate losses on time. Team Collaboration As outlined in Section 2.4, within the Hanabi environment, LLMs are required to alternate between “discarding cards” and “revealing cards” to cooperate to obtain more information, and then build fireworks through “play cards”. To investigate the efficacy of LLMs in executing team cooperation, we analyzed the frequency of each type of action performed by LLMs. As shown in Figure 5, although most LLMs frequently utilize the “reveal cards” action to share hand information with their opponent, they seldom use the “discard cards” action, which would give the opponent a chance to “reveal cards” for more information, thus rashly choosing “play cards” when there is insufficient information, causing the game to fail. Deepseek-7B Deepseek-67BLLaMA2 Qwen-14B Qwen-72B GPT-3.5 GPT-4 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Proportion of Each Action Play Card Reveal Card Discard Card Figure 5: The proportion of three types of actions when playing Hanabi among 7 LLMs. 5 Related Works LLM Evaluation With the rapid development of LLMs, the evaluation practices of traditional NLP tasks (Hu et al., 2023b; Bang et al., 2023; Zhong et al., 2023; Qin et al., 2023; Hu et al., 2022) limit their performance. Instead, the use of humanwritten exam questions as an assessment paradigm (Hu et al., 2023a; Hendrycks et al., 2021; Liang et al., 2022; Srivastava et al., 2023) has become increasingly mainstream, focusing primarily on testing the world knowledge and reasoning capabilities in a multi-task benchmark. In addition, some newly proposed benchmarks and methods examine large language models from multiple dimensions, such as adaptability to professional fields (Xiang et al., 2023), real-world application (Li et al., 2023b; Liu et al., 2023a, 2024b,a), robustness (Zhu et al., 2023; Hu et al., 2024), multi-modal capabilities (Fu et al., 2023), etc. Recently, the trend of treating large language models as agents has become increasingly evident in academia, and a series of studies (Liu et al., 2023b; Wu et al., 2023) aimed to evaluate the performance of these models when acting as agents in games. However, most of these studies are limited to a single agent, thus failing to fully capture the characteristics of large language models in game or cooperative scenarios and group behavior. To highlight the distinctions between our work 13062 Table 5: Comparison of LLMARENAwith previous works. Benchmark MultiAgent Opponent Expandable Dynamic Type of Games MINT (Wang et al., 2023b) – – – – Tool Using WebArena (Zhou et al., 2023c) – – √ √ Web Tasks Agentbench (Liu et al., 2023b) – – – – Card Game, Communication Game Smartplay (Wu et al., 2023) – – √ √ Game Theory, Open-World Game MAgIC (Xu et al., 2023) √ √ – √ Game Theory, Communication Game GameEval (Qiao et al., 2023) √ – – – Communication Game LLM-Co (Agashe et al., 2023) √ – – √ Cooperation Game AucArena (Chen et al., 2023a) √ √ – – Game Theory SpyGame (Liang et al., 2023) √ √ – – Communication Game Homo Silicus (Horton, 2023) – – – – Economics Game Akata et al. (Akata et al., 2023) – – – √ Game Theory Lorè et al. (Lorè and Heydari, 2023) – – – √ Game Theory Fan et al. 
(Fan et al., 2024) – – – √ Game Theory Suspicion-Agent (Guo et al., 2023b) – √ – √ Card Game LLMARENA √ √ √ √ Game Theory, Card Game, Cooperation Game, Communication Game and previous efforts, we provide a comparison in Table 5, where “MultiAgent” indicates whether the environment involves multiple LLM agents, “Opponent” denotes if there is an LLM opponent for adversarial interaction within the environment, “Expandable” represents whether there is a unified standard for expanding the environment and “Dynamic” signifies if the environment is random and dynamic, as opposed to a static dataset. LLM-based Agents Reinforcement learning has been extensively employed in the training of autonomous agents (Ribeiro, 2002; Isbell et al., 2001; Mnih et al., 2013). With the superior capabilities of LLMs, the utilization of LLMs as cognitive entities and controllers for agents has garnered widespread acclaim (Huang et al., 2022; Park et al., 2023; Sumers et al., 2023). Wei et al. (2022) initially introduced the CoT technology, which notably amplifies the reasoning and planning abilities of LLMbased agents, consequently leading a research surge dominated by reasoning strategies (Yao et al., 2023; Wang et al., 2023c; Zhou et al., 2023a). Furthermore, empirical studies indicate that collaborative and adversarial frameworks (Li et al., 2023a; Chen et al., 2023b; Chan et al., 2023) involving multiple LLMs surpass the capabilities of singular agents across various tasks and may lead to the emergence of social phenomena (Park et al., 2023). 6 Conclusion In our study, we introduced LLMARENA, a benchmark designed to assess the diverse abilities of LLM agents in dynamic, multi-agent environments. Through comprehensive analysis of seven game environments that require necessary abilities for LLM agents, our findings reveal that LLMs exhibit weaknesses in spatial reasoning, opponent modeling, and team collaboration. Enhancing the performance of LLM agents in dynamic multiagent settings remains an unresolved challenge. We anticipate that future researchers will utilize LLMARENA to conduct evaluations in a broader range of scenarios. Limitation While our research primarily focuses on assessing the capabilities of LLM agents within dynamic multi-agent environments, it is important to acknowledge two limitations. Firstly, in the journey towards achieving autonomous agents, an LLM agent’s ability extends beyond merely analyzing textual information. It necessitates engaging with a variety of modal inputs, including video and audio. The capabilities of LLMs within such multi-modal dynamic contexts remain an untapped area of exploration. Secondly, our study did not delve into the potential of LLMs that leverage external tools. We anticipate and encourage further research in both of these promising dimensions to broaden our understanding and capabilities of LLM agents. Ethical Considerations The ethical landscape surrounding LLM agents is marked by significant challenges, particularly in the realms of responsible usage, and the potential for misuse. The deployment of autonomous LLM agents in decision-making roles raises questions about accountability and the need for robust frameworks to prevent unethical applications. The potential for misuse, such as manipulating scenarios or 13063 exploiting systems, underscores the necessity for stringent guidelines and monitoring. Acknoledgement This work is supported by the National Nature Science Foundation of China (No. 
62021002), the Beijing Natural Science Foundation under grant numbers QY23115 and QY23116, Tsinghua BNRist, the Beijing Key Laboratory of Industrial Bigdata System and Application. References 2007. Algorithmic Game Theory. Cambridge University Press. Saaket Agashe, Yue Fan, and Xin Eric Wang. 2023. Evaluating multi-agent coordination abilities in large language models. arXiv preprint arXiv:2310.03903. Elif Akata, Lion Schulz, Julian Coda-Forno, Seong Joon Oh, Matthias Bethge, and Eric Schulz. 2023. Playing repeated games with large language models. arXiv preprint arXiv:2305.16867. Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, et al. 2023. A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity. arXiv preprint arXiv:2302.04023. Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, and Zhiyuan Liu. 2023. Chateval: Towards better llm-based evaluators through multi-agent debate. arXiv preprint arXiv:2308.07201. Jiangjie Chen, Siyu Yuan, Rong Ye, Bodhisattwa Prasad Majumder, and Kyle Richardson. 2023a. Put your money where your mouth is: Evaluating strategic planning and execution of llm agents in an auction arena. arXiv preprint arXiv:2310.05746. Weize Chen, Yusheng Su, Jingwei Zuo, Cheng Yang, Chenfei Yuan, Chen Qian, Chi-Min Chan, Yujia Qin, Yaxi Lu, Ruobing Xie, et al. 2023b. Agentverse: Facilitating multi-agent collaboration and exploring emergent behaviors in agents. arXiv preprint arXiv:2308.10848. Caoyun Fan, Jindou Chen, Yaohui Jin, and Hao He. 2024. Can large language models serve as rational players in game theory? a systematic analysis. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 17960–17967. Dean Foster, Dylan J Foster, Noah Golowich, and Alexander Rakhlin. 2023. On the complexity of multi-agent decision making: From learning in games to partial monitoring. In The Thirty Sixth Annual Conference on Learning Theory, pages 2678– 2792. PMLR. Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, et al. 2023. Mme: A comprehensive evaluation benchmark for multimodal large language models. arXiv preprint arXiv:2306.13394. Jiaxian Guo, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Boyang Li, Dacheng Tao, and Steven Hoi. 2023a. From images to textual prompts: Zero-shot visual question answering with frozen large language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10867–10877. Jiaxian Guo, Bo Yang, Paul Yoo, Bill Yuchen Lin, Yusuke Iwasawa, and Yutaka Matsuo. 2023b. Suspicion-agent: Playing imperfect information games with theory of mind aware gpt-4. arXiv preprint arXiv:2309.17277. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. Proceedings of the International Conference on Learning Representations (ICLR). Ralf Herbrich, Tom Minka, and Thore Graepel. 2006. Trueskill™: a bayesian skill rating system. Advances in neural information processing systems, 19. John J Horton. 2023. Large language models as simulated economic agents: What can we learn from homo silicus? Technical report, National Bureau of Economic Research. Xuming Hu, Junzhe Chen, Xiaochuan Li, Yufei Guo, Lijie Wen, S Yu Philip, and Zhijiang Guo. 2023a. Do large language models know about facts? 
In The Twelfth International Conference on Learning Representations. Xuming Hu, Zhijiang Guo, Junzhe Chen, Lijie Wen, and Philip S. Yu. 2023b. MR2: A benchmark for multimodal retrieval-augmented rumor detection in social media. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2023, Taipei, Taiwan, July 23-27, 2023, pages 2901–2912. ACM. Xuming Hu, Zhijiang Guo, Guanyu Wu, Aiwei Liu, Lijie Wen, and Philip S Yu. 2022. Chef: A pilot chinese dataset for evidence-based fact-checking. arXiv preprint arXiv:2206.11863. Xuming Hu, Xiaochuan Li, Junzhe Chen, Yinghui Li, Yangning Li, Xiaoguang Li, Yasheng Wang, Qun Liu, Lijie Wen, Philip S Yu, et al. 2024. Evaluating robustness of generative search engine on adversarial factual questions. arXiv preprint arXiv:2403.12077. Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. 2022. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In International Conference on Machine Learning, pages 9118–9147. PMLR. 13064 Charles Isbell, Christian R Shelton, Michael Kearns, Satinder Singh, and Peter Stone. 2001. A social reinforcement learning agent. In Proceedings of the fifth international conference on Autonomous agents, pages 377–384. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:22199– 22213. Mike Lewis, Denis Yarats, Yann N Dauphin, Devi Parikh, and Dhruv Batra. 2017. Deal or no deal? end-to-end learning for negotiation dialogues. arXiv preprint arXiv:1706.05125. Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. 2023a. CAMEL: Communicative agents for ”mind” exploration of large language model society. In Thirty-seventh Conference on Neural Information Processing Systems. Minghao Li, Yingxiu Zhao, Bowen Yu, Feifan Song, Hangyu Li, Haiyang Yu, Zhoujun Li, Fei Huang, and Yongbin Li. 2023b. API-bank: A comprehensive benchmark for tool-augmented LLMs. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 3102–3116, Singapore. Association for Computational Linguistics. Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. 2022. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110. Tian Liang, Zhiwei He, Jen-tes Huang, Wenxuan Wang, Wenxiang Jiao, Rui Wang, Yujiu Yang, Zhaopeng Tu, Shuming Shi, and Xing Wang. 2023. Leveraging word guessing games to assess the intelligence of large language models. arXiv preprint arXiv:2310.20499. Aiwei Liu, Leyi Pan, Xuming Hu, Shuang Li, Lijie Wen, Irwin King, and Philip S. Yu. 2024a. An unforgeable publicly verifiable watermark for large language models. In The Twelfth International Conference on Learning Representations. Aiwei Liu, Leyi Pan, Xuming Hu, Shiao Meng, and Lijie Wen. 2024b. A semantic invariant robust watermark for large language models. In The Twelfth International Conference on Learning Representations. Aiwei Liu, Leyi Pan, Yijian Lu, Jingjing Li, Xuming Hu, Lijie Wen, Irwin King, and Philip S Yu. 2023a. A survey of text watermarking in the era of large language models. arXiv preprint arXiv:2312.07913. Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, et al. 2023b. 
Agentbench: Evaluating llms as agents. arXiv preprint arXiv:2308.03688. Nunzio Lorè and Babak Heydari. 2023. Strategic behavior of large language models: Game structure vs. contextual framing. arXiv preprint arXiv:2309.05898. Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. 2024. Self-refine: Iterative refinement with self-feedback. Advances in Neural Information Processing Systems, 36. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. 2013. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602. OpenAI. 2022. ChatGPT. OpenAI. 2023. GPT-4 Technical Report. CoRR, abs/2303.08774. Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Jiapu Wang, and Xindong Wu. 2024. Unifying large language models and knowledge graphs: A roadmap. IEEE Transactions on Knowledge and Data Engineering. Joon Sung Park, Joseph O’Brien, Carrie Jun Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. 2023. Generative agents: Interactive simulacra of human behavior. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, pages 1–22. Chen Qian, Xin Cong, Cheng Yang, Weize Chen, Yusheng Su, Juyuan Xu, Zhiyuan Liu, and Maosong Sun. 2023. Communicative agents for software development. arXiv preprint arXiv:2307.07924. Dan Qiao, Chenfei Wu, Yaobo Liang, Juntao Li, and Nan Duan. 2023. Gameeval: Evaluating llms on conversational games. arXiv preprint arXiv:2308.10032. Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao Chen, Michihiro Yasunaga, and Diyi Yang. 2023. Is ChatGPT a general-purpose natural language processing task solver? In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 1339–1384, Singapore. Association for Computational Linguistics. CHCR Ribeiro. 2002. Reinforcement learning agents. Artificial intelligence review, 17:223–250. Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. 2023. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research. 13065 Theodore R Sumers, Shunyu Yao, Karthik Narasimhan, and Thomas L Griffiths. 2023. Cognitive architectures for language agents. arXiv preprint arXiv:2309.02427. J Terry, Benjamin Black, Nathaniel Grammel, Mario Jayakumar, Ananth Hari, Ryan Sullivan, Luis S Santos, Clemens Dieffendahl, Caroline Horsch, Rodrigo Perez-Vicente, et al. 2021. Pettingzoo: Gym for multi-agent reinforcement learning. Advances in Neural Information Processing Systems, 34:15032– 15043. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023a. Llama: Open and efficient foundation language models. CoRR, abs/2302.13971. 
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian CantonFerrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023b. Llama 2: Open foundation and fine-tuned chat models. CoRR, abs/2307.09288. Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. 2023a. Large language models are not fair evaluators. arXiv preprint arXiv:2305.17926. Xingyao Wang, Zihan Wang, Jiateng Liu, Yangyi Chen, Lifan Yuan, Hao Peng, and Heng Ji. 2023b. Mint: Evaluating llms in multi-turn interaction with tools and language feedback. In The Twelfth International Conference on Learning Representations. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023c. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837. Xiang Wei, Xingyu Cui, Ning Cheng, Xiaobin Wang, Xin Zhang, Shen Huang, Pengjun Xie, Jinan Xu, Yufeng Chen, Meishan Zhang, et al. 2023. Zeroshot information extraction via chatting with chatgpt. arXiv preprint arXiv:2302.10205. Yue Wu, Xuan Tang, Tom M Mitchell, and Yuanzhi Li. 2023. Smartplay: A benchmark for llms as intelligent agents. arXiv preprint arXiv:2310.01557. Tong Xiang, Liangzhi Li, Wangyue Li, Mingbai Bai, Lu Wei, Bowen Wang, and Noa Garcia. 2023. Caremi: chinese benchmark for misinformation evaluation in maternity and infant care. arXiv preprint arXiv:2307.01458. Lin Xu, Zhiyuan Hu, Daquan Zhou, Hongyu Ren, Zhen Dong, Kurt Keutzer, See Kiong Ng, and Jiashi Feng. 2023. Magic: Benchmarking large language model powered multi-agent in cognition, adaptability, rationality and collaboration. arXiv preprint arXiv:2311.08562. Jingfeng Yang, Hongye Jin, Ruixiang Tang, Xiaotian Han, Qizhang Feng, Haoming Jiang, Bing Yin, and Xia Hu. 2023. Harnessing the power of llms in practice: A survey on chatgpt and beyond. arXiv preprint arXiv:2304.13712. Shunyu Yao, Howard Chen, John Yang, and Karthik Narasimhan. 2022. Webshop: Towards scalable realworld web interaction with grounded language agents. 
Advances in Neural Information Processing Systems, 35:20744–20757. Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik R Narasimhan. 2023. Tree of thoughts: Deliberate problem solving with large language models. In Thirty-seventh Conference on Neural Information Processing Systems. Qihuang Zhong, Liang Ding, Juhua Liu, Bo Du, and Dacheng Tao. 2023. Can chatgpt understand too? a comparative study on chatgpt and fine-tuned bert. arXiv preprint arXiv:2302.10198. Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc V Le, and Ed H. Chi. 2023a. Least-to-most prompting enables complex reasoning in large language models. In The Eleventh International Conference on Learning Representations. 13066 Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, and Jiawei Han. 2023b. Don’t make your llm an evaluation benchmark cheater. arXiv preprint arXiv:2311.01964. Shuyan Zhou, Frank F Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, et al. 2023c. Webarena: A realistic web environment for building autonomous agents. In The Twelfth International Conference on Learning Representations. Kaijie Zhu, Jindong Wang, Jiaheng Zhou, Zichen Wang, Hao Chen, Yidong Wang, Linyi Yang, Wei Ye, Neil Zhenqiang Gong, Yue Zhang, et al. 2023. Promptbench: Towards evaluating the robustness of large language models on adversarial prompts. arXiv preprint arXiv:2306.04528. 13067 A Complete Prompt Design In this section, we provide complete prompts for all seven game environments in LLM Arena. As the agent in the LLM Arena must consistently interact with the environment, similar to the reinforcement learning setting, various prompts are required to guide the agent in comprehending the rules and taking action. Specifically, three types of prompts work: • System Prompt. System prompts are special messages used to steer the behavior of LLM. They allow to prescribe the agent’s style and task within certain bounds, making it more customizable and adaptable for various use cases. In LLM Arena, we stipulate the role of the agent and the rules of the game in the system prompt. • Observation Prompt. Observation Prompt provides the necessary state information for the agent to interact with the environment, including the opponent’s actions, current game status, legal actions, etc. We use {} to frame these changing information. • Action Prompt. Action Prompt guides the Agent to choose from legal actions and output regularized actions. In addition, to stimulate the agent’s reasoning ability, in some environments, we also add CoT Prompt. A.1 TicTacToe In TicTacToe, the content in {} can be obtained from the interaction between the agent and the environment. For example, {player_type} includes "X" and "O", {board_status} represents the current chessboard, and we describe the entire chessboard in the form of text, as the example in System Prompt. {available} represents the current legal action list of the agent. The detailed prompt template of TicTacToe is as follows. System Prompt You are playing TicTacToe. TicTacToe is played on a three-by-three grid by two players, who alternately place the marks X and O in one of the nine spaces in the grid. The player who succeeds in placing three of their marks in a horizontal, vertical, or diagonal row is the winner. 
In the following example, the first player (X) wins the game in seven steps: 1. [Player 1]: X: (1, 3) 2. [Player 2]: O: (1, 1) 3. [Player 1]: X: (3, 1) 13068 4. [Player 2]: O: (2, 2) 5. [Player 1]: X: (3, 3) 6. [Player 2]: O: (2, 3) 7. [Player 1]: X: (3, 2) X plays first. Players will specify the position of the stone and the moderator will plot the board status. If a position has been marked, future marks cannot be put in the same position. The players interact with the game by specifying the position of the stones (x, y), where x indicates the row and y indicates the column, so (1, 1) is the top left corner and (3, 3) is the bottom right corner." 13069 Observation Prompt You play {player_type}. The board status is {board_status}. You can only put the mark on [{available}]. Action Prompt You should only output {player_type} and the position of the move, for example: "{player_type}: (1, 3)" The position you put the mark on must be empty. Don’t say anything besides mark position. A.2 ConnectFour In ConnectFour, the observation prompt and action prompt are similar to TicTacToe. We also use text to describe the chessboard. The detailed prompt template of Texas Hold’em is as follows. System Prompt You are playing ConnectFour. ConnectFour is played on a six-by-seven grid by two players, who alternately drop their makers X and O into one of the seven columns. Each marker will fall to the lowest available space within the selected column. The player who succeeds in placing four of their makers in a horizontal, vertical, or diagonal row is the winner. X plays first. Players interact with the game by specifying the column number where they want to drop their marker. If a column is already full, players cannot drop a marker into that column. The columns are numbered from 1 to 7, from left to right. Players cannot place a token outside this range. Observation Prompt You play {player_type}. The board status is {board_status}. You can only choose one of the following columns: [ {available} ]. Action Prompt You should only output {player_type} and the column of the move you choose to put your mark, for example: "{player_type}: 1" The column you put the mark on cannot be full. Don’t say anything besides mark position. A.3 Texas Hold’em In Texas Hold’em, the status information of the environment mainly includes three types: hand cards({private}), community cards ({public}), and chips ({chips}). The detailed prompt template of Texas Hold’em is as follows. 13070 System Prompt You are playing Texas Hold’em No Limit. Here are the basic rules for Texas Hold’em Poker: In this game, you are requested to attend 1 game with your opponent. Both players start with 100 chips, and the player with the most chips at the end of the game wins. If your chips drop to 0, you lose the game. The objective is to obtain more chips than your opponent. Texas Hold’em hands are ranked from highest to lowest as follows: Royal Flush: A, K, Q, J, 10 all of the same suit. Straight Flush: Five consecutive cards of the same suit. The higher top card wins. Four of a Kind: Four cards of the same rank. The higher rank wins; if the same, compare the fifth card. Full House: Three cards of one rank and two cards of another rank. A higher three-card rank wins; if the same, compare the two-card rank. Flush: Five non-consecutive cards of the same suit. Compare the highest card, then the second-highest, and so on. Straight: Five consecutive cards of different suits. The higher top card wins. Three of a Kind: Three cards of the same rank. 
Higher rank wins. Two Pair: Two cards of one rank and two cards of another rank. Compare the higher pair first, the lower pair, and then the fifth card. One Pair: Two cards of the same rank. Compare the pair first, then the highest non-paired card, then the second highest, and so on. High Card: If no hand can be formed, the highest card wins. If the highest cards are the same, compare the second highest, and so on. If the hands are of equal rank, the pot is split. Observation Prompt Your observation now is: The cards in your hands are [{private}] The community cards are [{public}] You now have 100-chips chips, the chips that you have put in until now is {chips} Now you can choose one of the following actions:[{available}] Action Prompt You should think step by step and output your action. For example: ’Action: Check and Call’ You should explain why you chose the action A.4 Undercover In Undercover, unlike other environments, it involves more than two agents, and the game has two stages. Therefore, we designed two prompts respectively for the Giving clues stage and the Accusation stage. The detailed prompt template of Undercover is as follows. System Prompt You are playing a game of the Undercover. Here are the game rules: In Undercover, players are divided into two groups: undercovers and non-undercovers. Each player receives a secret word, but the secret word given to the undercover is different from the rest. The main aim is to identify the undercover based on the clues provided by all players. 13071 Be cautious with your clues; being too specific might reveal your identity. As a non-undercover, you need to unmask the undercover without revealing your own secret word. As an undercover, your task is to seamlessly integrate with others and avoid detection. Undercover is played in two phases: The Giving Clue Phase: Players give hints about their words over several rounds. The Accusation Phase: Players vote on who they think the undercover is. The undercover should vote for non-undercovers. Observation Prompt • Giving clues stage Now is the giving clues stage: You are {player_name}. Your secret word is "{word}". This round the other players say: {messages} • Accusation stage Now is the accusation stage: You are {player_name}. Your secret word is {word}. This round the other players say:{messages} Action Prompt • Giving clues stage Now you give one clue (but never include the secret word). You cannot repeat what other players have said. You should only output {player_name} and the clue, for example: "{player_name}: It’s a fruit.". • Accusation stage Now vote which of the other players (excluding yourself) is undercover. You should only output "vote" and the undercover name, for example: "vote: player_1.". A.5 Bargain In bargain, both parties need to give a plan and bargain. The quantity and price of goods owned by the agent are represented by {item_nums} and {value} respectively, and the plan proposed by the agent is represented by {oppo_plan} The detailed prompt template of Bargain is as follows. System Prompt You are participating in a game of "Bargaining." Here are the game rules: (1) Information and Roles The game involves two players, Player A and Player B. A variety of items are available for bargaining, each having a different value to both players. However, the value of each item for the other player is unknown. Players engage in a series of up to 10 bargaining rounds to reach a consensus. 
If a consensus is not reached within these 10 rounds, the negotiation is considered a failure, 13072 and it results in a stalemate where both players leave without any rewards. (2) Objectives The goal for both Player A and Player B is to negotiate and share these items. The challenge is that the same item has different values for each player, and they do not know the item’s value for their counterpart. Each player aims to maximize the total value of the items they acquire through negotiation. Observation Prompt (The game begins. You start to give a plan.) Now is round {round}: You are {player_name}. There are {item_nums[0]} hats, {item_nums[1]} balls and {item_nums[2]} apples. The value for each hat is {value[0]}, for each ball is {value[1]} and for each apple is {value[2]}. This round your opponent says:"{bargaining}" He will get {oppo_plan[0]} hats, {oppo_plan[1]} balls and {oppo_plan[2]} apples. And you will get {item_nums[0] - oppo_plan[0]} hats, {item_nums[1] - oppo_plan[1]} balls and {item_nums[2] - oppo_plan[2]} apples. Action Prompt Do you agree with his plan? If you agree, You should only output {player_name} and "Deal", for example: "{player_name}: Deal.". If you do not agree, you should bargain with your opponent first and then output {player_name} and your plan in the following format: "{player_name}: x hats y balls z apples." x, y, z are Arabic number. For example: "I’d like 1 hat, 2 balls, and 0 apples. {player_name}: 1 hat 2 balls 0 apples". Do not tell your opponent the value of each item. A.6 Bid In Bid, the status information is only the agent’s valuation of the product ({value}) and the income ({value}-bid). The agent needs to maximize the income. The detailed prompt template of Bid is as follows. System Prompt You are participating in a game of "First-price Sealed-Bid Auction." Here are the game rules: (1) Information and Roles Participants act as bidders in the auction. Each bidder has an opportunity to submit one sealed bid for the item being auctioned, without knowing the bids of other participants. (2) Objectives The primary objective is to win the auction by submitting the highest bid. Bidders must balance their desire to win the item against the risk of overpaying. Each bidder aims to strategically determine their bid based on their valuation of the item and assumptions about other bidders’ valuations. (3) Strategy The challenge for bidders is in deciding how much to bid. Bid too low, and they risk losing the auction; bid too high, and they risk paying more than the item’s value to them (the winner’s curse). Bidders must assess not only their valuation of the item but also try to anticipate the bids of 13073 others. In this game of strategy and valuation, the key is to make a smart bid that balances the desire to win with the risk of overpaying. Observation Prompt Your valuation of the current auction item is ${value}, your bid must be lower than your valuation. You will get rewards equal ${value} - your bid Action Prompt You should think step by step and then output your bid in the following format: "{player_name}: $x.xx" For example: {player_name}: $1.08 A.7 Hanabi In Hanabi, the status information of the environment is more complex and challenging than in other environments. For example, the opponent’s hand ({other_hands}), current fireworks ({fireworks}), current tokens ({tokens}), known hinted hands ({first_card, second_card}), etc. The agent needs to integrate this information to make reasonable decisions. 
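For concreteness, a minimal sketch (not taken from the paper; the helper name and simplified template are illustrative assumptions) of how such bracketed placeholders might be filled from the environment state at each turn:

# Python sketch: filling a placeholder-based observation prompt from the
# environment state. The template below is a simplified stand-in for the
# full Hanabi observation prompt given further down; the field names are
# illustrative assumptions, not the benchmark's actual API.
OBSERVATION_TEMPLATE = (
    "Your observation now is:\n"
    "The cards in another player's hands are: {other_hands}\n"
    "The current fireworks is: {fireworks}\n"
    "The current number of tokens is {tokens}\n"
    "Now you can choose one of the following actions: {available}"
)

def build_observation_prompt(state):
    # state is a dict produced by the game environment each turn
    return OBSERVATION_TEMPLATE.format(
        other_hands=state["other_hands"],
        fireworks=state["fireworks"],
        tokens=state["tokens"],
        available=", ".join(state["available_actions"]),
    )

# Toy usage with a made-up two-color Hanabi state:
state = {
    "other_hands": ["Red 1", "Yellow 3"],
    "fireworks": {"Red": 0, "Yellow": 1},
    "tokens": 3,
    "available_actions": ["Play Card 1", "Discard Card 2", "Reveal Yellow Cards"],
}
print(build_observation_prompt(state))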
The detailed prompt template of Hanabi is as follows. System Prompt You are playing a Hanabi game. There are the basic rules for Hanabi with a small structure: Hanabi is a 2 player-cooperative game where you should work together to form fireworks. A firework is a set of cards of the same color, ordered from 1 to 5. Cards in the game have both a color and a number; there are two colors and five numbers in total. Each player can only view the cards another player holds, but not their own. Players cannot directly communicate with each other, but must instead remove an info token from play to give information. Players can tell other players which of the cards in their hand is a specific color, or a specific number. You must point out all the cards of this color or number. There are initially 3 info tokens, but players can discard cards in their hand and draw a new card from the deck to return an info token into play. Players can also play a card from their hand: the card must either begin a new firework or be appended to an existing firework. However, 2 fireworks cannot have the same color, and a single firework cannot repeat numbers. If the played card does not satisfy these conditions, a life token is placed. The game ends when either 3 life tokens have been placed, all 2 fireworks have been completed, or all cards have been drawn from the deck. Points are awarded based on the largest card value in each created firework. In this game, you and your opponent are players. Each person has two hand cards, each with two colors and five possible numbers from 1 to 5. Players share 3 info tokens and 1 life token. Observation Prompt Your observation now is : The cards in another player’s hands are:{other_hands} The current fireworks is:{fireworks} The current number of tokens is {tokens} 13074 The information of your own cards that was revealed is:{first_card, second_card} The current deck size is {deck_size} Your opponent’s action is {opponent_action} The card that the opponent last played or discarded is {last_played} Now you can choose one of the following actions: {available} Action Prompt You should think step by step and output your action. For example: "Reveal Yellow Cards for another player" You will respond with an action, formatted as: "Action: <action>" where you replace <action> with your actual action. You should explain why you chose the action. B Case Study In this section, we present a detailed case study and conduct an error analysis within the ConnectFour and Undercover environments. B.1 ConnectFour As demonstrated in Section 4, models with smaller parameter sizes, on average, received negative rewards and engaged in actions detrimental to their own position. An illustrative example of this is as follows: In the patial chessboard of ConnectFour, “X” has successfully aligned three pieces diagonally. Upon reaching “O”’s turn, the model with a smaller parameter scale tends to make the counterintuitive choice of placing a piece in the fourth column, inadvertently assisting its opponent, “X”, in securing a diagonal victory. Simultaneously, our findings reveal that even a robust model such as GPT-4 is not always capable of executing actions that maximize rewards. For instance, in the aforementioned scenario, “X” opts for a suboptimal move by placing a piece in the first column, instead of strategically securing a win by placing it in the fourth column. 13075 B.2 Undercover During our research, we observed that LLMs can sometimes misinterpret information received from other agents. 
Here is an example, the secret word of undercover is “sun” and the secret word of non-undercovers is “moon”: Action Prompt Now is the accusation stage: You are player_2. Your secret word is "Moon". This round the other players say: player_0: It’s big and hot. player_1: It’s a source of energy. player_2: It’s a celestial body. player_3: It’s associated with tides. player_4: It’s a natural satellite. Now vote which of the other players (excluding yourself) is the undercover. You should think step by step first and then output "vote" and the undercover name, for example: "vote: player_1.". Response from AgenLM-7B As player_2, I will start by analyzing the statements made by the other players. player_0 mentioned that the secret word is a source of energy. This could be a misleading statement, as the word "Moon" is not typically associated with energy. However, it’s possible that player_0 is trying to throw off the other players by giving an unrelated answer. player_1 said that the secret word is a celestial body. This is a more directly related answer, but it’s still not a very specific one. It’s possible that player_1 is trying to be vague in order to throw off the other players. player_3 mentioned that the secret word is associated with tides. This is a more specific answer, and it’s one that only a few players might guess. It’s possible that player_3 is the undercover player. player_4 said that the secret word is a natural satellite. This is an even more specific answer, and it’s one that only a few players might guess. It’s possible that player_4 is the undercover player. Based on my analysis, I think it’s possible that either player_1 or player_4 is the undercover player. However, I need more information to make a definitive decision. I will wait for the next round of statements before voting. vote: player_4 In this example, player_0 is undercover, and its clue is “It’s big and hot” but AgentLm turns a blind eye to it and matches the wrong clues with player_0 and player_1, leading to wrong inferences. Simultaneously, we observed that GPT-4 stands out as the sole LLM that is able to identify itself as undercover in the given scenario. In the following example, the secret word obtained by GPT4 as player_4 13076 is “Kangaroo” and the other agents’ secret words are “Camel”. Action Prompt Now is the accusation stage: You are player_2. Your secret word is "Camel". This round the other players say: player_0: It’s a desert animal. player_1: It’s known for its humps. player_2: It’s often found in the Middle East. player_3: It’s often used for transportation. player_4: It’s a great jumper. Now vote which of the other players (excluding yourself) is the undercover. You should think step by step first and then output "vote" and the undercover name, for example: "vote: player_1.". Response from GPT-4 Based on the clues given by the other players, it seems that their secret words are related to camels, which are desert animals known for their humps, commonly found in the Middle East, and used for transportation. Since my secret word is "Kangaroo," which is unrelated to camels, it’s likely that I am the undercover. However, since the game requires me to blend in and avoid being caught, I should vote for another player who I think might be suspected of being the undercover by the other players. Since all the other players’ clues align well with the camel theme, it’s difficult to determine who might be suspected by others. 
In this case, I might choose to vote randomly or choose the player whose clue was the least specific. vote: player_0

C Original Results

In this section, we show the original data corresponding to Table 1.

LLMs               TicTacToe  ConnectFour  Texas Hold'em  Bid   Bargain  Undercover  Hanabi
GPT-4              29.02      27.20        26.50          0.70  24.30    27.00       0.45
GPT-3.5-Turbo      24.03      26.19        21.53          0.59  24.12    25.00       0.38
Qwen-72B-Chat      26.14      21.84        25.14          0.40  22.59    17.00       0.40
Llama-2-70B        23.86      26.08        26.08          0.66  21.54    23.00       0.36
AgentLM-70B        23.63      22.74        20.50          0.60  23.89    25.00       0.36
DeepSeek-67B-Chat  19.73      24.17        25.61          0.14  21.63    20.00       0.31
SUS-Chat-34B       23.15      26.19        22.36          0.46  18.11    24.00       0.39
Yi-34B-Chat        22.71      25.93        22.81          0.62  18.03    26.00       0.15
Qwen-14B-Chat      25.08      19.76        24.00          0.61  22.15    24.00       0.32
WizardLM-13B       22.97      22.67        19.68          0.25  22.18    13.00       0.24
AgentLM-13B        22.90      24.27        23.02          0.21  23.07    17.00       0.10
DeepSeek-7B-Chat   23.56      19.87        25.47          0.10  22.49    20.00       0.00
AgentLM-7B         19.82      24.57        22.56          0.00  20.30    18.00       0.44
Vicuna-7B          18.85      21.79        23.97          0.48  20.98    17.00       0.39

Table 6: Original scores of 14 different LLMs in 7 environments.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13078–13090 August 11-16, 2024 ©2024 Association for Computational Linguistics Small But Funny: A Feedback-Driven Approach to Humor Distillation Sahithya Ravi1,3∗Patrick Huber2 Akshat Shrivastava2 Vered Shwartz1,3 Arash Einolghozati2 1 University of British Columbia 2 Meta AI 3 Vector Institute for AI {sahiravi, vshwartz}@cs.ubc.ca {patrickHuber, akshats, arash}@meta.com Abstract The emergence of Large Language Models (LLMs) has brought to light promising language generation capabilities, particularly in performing tasks like complex reasoning and creative writing. Consequently, distillation through imitation of teacher responses has emerged as a popular technique to transfer knowledge from LLMs to more accessible, Small Language Models (SLMs). While this works well for simpler tasks, there is a substantial performance gap on tasks requiring intricate language comprehension and creativity, such as humor generation. We hypothesize that this gap may stem from the fact that creative tasks might be hard to learn by imitation alone and explore whether an approach, involving supplementary guidance from the teacher could yield higher performance. To address this, we study the effect of assigning a dual role to the LLM – as a “teacher” generating data, as well as a “critic” evaluating the student’s performance. Our experiments on humor generation reveal that the incorporation of feedback significantly narrows the performance gap between SLMs and their larger counterparts compared to merely relying on imitation. As a result, our research highlights the potential of using feedback as an additional dimension to data when transferring complex language abilities via distillation. 1 Introduction NLP is on a trajectory towards creating increasingly large models (OpenAI, 2023; Touvron et al., 2023). LLMs achieve high performance across many tasks in both zero and few-shot settings. However, there are growing concerns about the computational efficiency and environmental sustainability of such approaches (Strubell et al., 2019). Knowledge distillation (Hinton et al., 2015b) has thus gained a renewed interest, where the term ∗Work done during internship at Meta AI. Figure 1: Performance gap between LLMs and SLMs: Generations from a teacher LLM (Llama2) and a student SLM (BART) finetuned on its outputs. has evolved to denote the process of distilling the responses of LLMs to SLMs (West et al., 2022). Recent work has explored the distillation of commonsense knowledge (Bhagavatula et al., 2023), chain-of-thought reasoning (Li et al., 2023a), and summarization abilities (Liu and Chen, 2022; Jung et al., 2023) to smaller language models. However, the exploration of distilling creative abilities such as humor into SLMs remains an open and challenging area of research. In this work, we explore the application of knowledge distillation in the context of the creative style-transfer task of conditional humor generation. Given a literal text, the goal is to generate a humorous meaning-preserving paraphrase, as shown in Figure 1. While LLMs do not match the subtleties of humor in human-written text (Hessel et al., 2023; Chakrabarty et al., 2023), they surpass their smaller counterparts (Radford et al., 2019; Chen et al., 2023; Hessel et al., 2023). We argue that such creative tasks are challenging for SLMs to learn. 
First, due to the inherent constraints of SLMs, such as reduced model capacity, they are limited in their ability to explore diverse solution spaces and generate innovative outputs. 13078 Second, although imitating a teacher model is a good starting point, it may result in superficial overfitting to the teacher’s style rather than learning the task itself (Gudibande et al., 2023). Fig. 1 demonstrates that even after fine-tuning on the humorous responses from the LLM, the SLM outputs fall flat. Existing techniques that address this gap attempt to improve task understanding through distilling chains of thoughts (Mukherjee et al., 2023; Wang et al., 2023a). This approach is less applicable for creative generation tasks, which can’t be scripted. For instance, it is possible to solve a math reasoning problem systematically, but it is difficult to come up with a recipe for a joke (Hessel et al., 2023; West et al., 2023). To bridge this gap, and improve upon mimicry of the teacher, we propose a novel distillation framework involving both imitation and feedback. Following the typical imitation stage in which the SLM learns from the LLM’s outputs, we use the LLM as a critic to provide feedback on the student’s outputs, facilitating iterative improvement. Evaluation of our student models on the proposed task based on EmpatheticDialogues (Rashkin et al., 2019) and Samsum (Gliwa et al., 2019) datasets confirms the advantage of learning from feedback; our student, based on the small BART model, performs on par with LLMs that are orders of magnitude larger, such as Llama2-70B upto 65% of the time, and significantly outperforms supervised fine-tuning by a margin of 18-20% . We assess the strengths and limitations of the critic in evaluating the SLM by comparing it against human judgments. We found that our critics can match human judgments with up to 76% accuracy, but can also suffer from biases due to length, position, or other biases. We explore the effect of data size, frequency of critic intervention, and the effect of potential evaluation biases on narrowing the gap between the SLM and LLM. Our work on distilling humor is a step towards more natural and engaging conversations (Ritchie et al., 2007), making SLMs more appealing for downstream applications where latency and computational efficiency need to be prioritized. 2 Related Work 2.1 Computational Humor Generation Computational humor is an interdisciplinary field at the intersection of NLP and humor theory. Early efforts in computational humor revolved around rulebased systems and linguistically-motivated methods. Raskin (1979) introduced a semantic analysis of humor which laid the foundation for subsequent rule-based approaches to humor recognition (Mihalcea and Strapparava, 2005; Reyes et al., 2012; Chen and Soo, 2018; Weller and Seppi, 2019). Concerning the more challenging task of humor generation, early approaches were linguistically informed and focused on specific types of humor, e.g. puns or jokes (Ritchie, 2005; Petrovi´c and Matthews, 2013). With the advent of deep learning, the focus shifted towards data-driven and neural approaches. This ranges from using RNNs and GANs to create puns (Yu et al., 2018; Luo et al., 2019), to more recent transformer-based approaches (Garimella et al., 2020) that can complete or generate jokes. These approaches enabled the generation of more contextually relevant and natural humor. 
Since neural approaches are primarily data-driven, this concurrently led to the creation of large joke datasets (Weller et al., 2020) and benchmarks (Hossain et al., 2020). Recent research has delved into the generation of figurative language such as sarcasm (Chakrabarty et al., 2022), puns (Mittal et al., 2022), as well as interactive chatbots with humor capabilities (Kulshreshtha et al., 2020). New multi-modal humor benchmarks (Hessel et al., 2023) have been developed to gauge the humor understanding and explanation abilities of LLMs. Weller et al. (2020) proposed the first model for humor style transfer building a transformer model that translates from regular to humorous, and leveraging humor prediction data from news headlines for humor generation. 2.2 LLMs as Evaluators Automatic evaluation of Natural Language Generation (NLG) tasks is typically based on N-gram metrics such as BLEU (Papineni et al., 2002), ROUGE (Lin, 2004) and embedding-based metrics such as BERTScore (Zhang et al., 2020) which require gold standard references. Recent research has explored a reference-free approach to assess NLG 13079 tasks by leveraging the implicit knowledge and instruction following abilities of LLMs. Fu et al. (2023) proposed GPTScore, which prompts LLMs with instructions and aspect definitions (e.g. fluency or coherence). The score is computed by calculating the conditional probability of generating the target text. GEval (Liu et al., 2023a) is an alternative metric, that presents the instructions in a form-filling paradigm and uses the probabilities of output tokens from LLMs to normalize the scores (e.g. score between 1 and 5) resulting in finer-grained continuous scores. In contrast to GPTScore and GEval, LLM-Eval (Lin and Chen, 2023) uses a single prompt to evaluate multiple evaluation aspects, thus minimizing the calls to the LLM. Additionally, LLMs have also been used to assess the factuality of generated text (Zha et al., 2023). Mehri and Shwartz (2023) propose a learned evaluation metric via instruction tuning. 2.3 Imitation Learning from LLMs The emergence of LLMs has caused a paradigm shift in NLP from traditional knowledge distillation (Hinton et al., 2015a) to an imitation-based data or response distillation. These approaches use LLMs as training data generators and train a smaller language model on this data (West et al., 2022). With subsequent work demonstrating that this may be a sparse form of distillation, leading to the student mimicking the style of the teacher, but not the reasoning abilities (Mukherjee et al., 2023; Gudibande et al., 2023), further extensions have been developed to distill a complete “Chainof-Thought” (Wei et al., 2023) from the teacher model (Li et al., 2023a; Wang et al., 2023a; Shridhar et al., 2023; Magister et al., 2023; Hsieh et al., 2023), improving the performance of smaller language models. 2.4 LLM-based Alignment A growing body of research leverages feedback from a language model to iteratively enhance its performance. This feedback may encompass written comments, numerical scoring, rankings, or explanations. Humpback (Li et al., 2023b) aims to construct a better instruction-tuning dataset through an iterative self-training algorithm. Self-Refine (Madaan et al., 2023) and REFINER (Paul et al., 2023) explore LLMs to engage in self-reflection, providing feedback and encouraging the use of this feedback to enhance their responses. 
I2D2 (Bhagavatula et al., 2023) demonstrates that small language models may improve if trained on their refined outputs generated using constrained decoding and filtered with a simple supervised critic model. Our work is inspired by BRIO (Liu et al., 2022, 2023b) for summarization, which reuses the generation model as the evaluation model to rank the candidates with contrastive learning. However, we avoid a ranking-based approach and use pairwise feedback due to the challenges in mitigating positional and length biases among multiple generations. Concurrent to our work, Zephyr (Tunstall et al., 2023) uses preference learning to align a student model to user intent by using preferences derived from different candidate models from a large teacher.

3 Method

Our proposed knowledge distillation framework is depicted in Fig. 2. Assuming that the teacher outperforms the student in the humor generation task, our objective is to bring the student's performance closer to that of the teacher. We attempt to achieve this in two phases. First, in the imitation phase (Sec 3.1), the student undergoes finetuning using a set of humorous outputs O = O1, O2, ..., On generated by the teacher for a given input I. In the critique phase (Sec 3.2), the finetuned student generates candidate output pairs P = P1, P2, which are then evaluated by a critic model. The scores obtained from the critic model are employed to train the student through a feedback-incorporation method.

Figure 2: The proposed knowledge distillation framework: We perform task-specific distillation from a large, general language model, in two phases: an initial imitation phase, followed by a critical feedback phase which controls the quality of the generated humorous outputs from the student.

3.1 Imitation Phase

During the imitation phase, we construct a dataset consisting of <literal, humorous> pairs and use it to directly train the student. As depicted in Fig. 2, the process begins with a literal text input (I); we prompt the teacher to generate N = 3 humorous paraphrases (O) of the input in a zero-shot setting. The utilization of N outputs encourages diversity in student outputs, as it demonstrates multiple possible humorous paraphrases for a given input. The student is then finetuned on the constructed dataset by minimizing the cross entropy loss between the reference and predicted outputs.

3.2 Critique Phase

In the critique phase, we aim to improve the student's task comprehension by teaching the student to differentiate between effective and ineffective humor.

Critic The critic, an LLM (Large Language Model), evaluates the humor quality of outputs from the finetuned student. Drawing from prior work on computational humor (Valitutti, 2011) and LLM-based scoring (Liu et al., 2023a,b), we adopt a pairwise relative scoring approach to mitigate subjectivity as opposed to providing an absolute score on a humorous output.
To obtain the preferred output from the pair, we use Multiple Choice Prompting (MCP) (Robinson and Wingate, 2023), which presents both paraphrases simultaneously and asks the LLM to pick the better humorous output. For two humorous paraphrases, P1 and P2, of the same input text I, we present them as candidates labeled with symbols (e.g., "1" and "2"). The critic LLM predicts a token ("1" or "2"), with associated probabilities indicating preference, which we denote as the Win Tie Rate (WTR) of P1 against P2. (The prompts used for WTR are shown in Appendix C.)

Feedback Incorporation Subsequently, we leverage feedback from the pairwise scorer to refine the student model. Starting with the finetuned student (Fig. 2), we generate a set of k = 6 candidates using either diverse beam search or nucleus sampling. From this candidate pool, we select a pair of diverse humorous paraphrases, denoted as P1 and P2, where diversity is measured by a maximum pairwise n-gram-based edit distance score. The critic then scores these pairs, resulting in candidates categorized as either positive or negative based on their performance. Our objective here is to improve the finetuned student by discerning which output is preferred. To integrate this feedback, we explore the following feedback objectives.

1. DPO (Rafailov et al., 2023) provides an alternative to RLHF, aiming to align language models using human feedback without training an explicit reward model. When comparing two humorous paraphrases Pi and Pj, if Pi receives a higher quality score from the critic, DPO increases the likelihood of its completion over Pj. It employs the Bradley-Terry reward model, approximating reward modeling with a sigmoid loss function,

$$\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\mathbb{E}_{(x, y_w, y_l) \sim D}\left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]$$

where $\pi_\theta$ represents the ratio of chosen to rejected scores from the fine-tuned LM, while $\pi_{\mathrm{ref}}$ signifies the same ratio for an exact frozen copy of the model. The first term in the sigmoid denotes the shift in the preferred completion, while the second term indicates the shift in the dispreferred completion.

2. BRIO (Liu et al., 2022), a sequence-level contrastive (ranking) objective proposed for abstractive summarization. For a given input I, when faced with two candidate humorous paraphrases, Pi and Pj, where Pi attains a higher quality score from the critic, the student is guided to assign a probability to Pi that exceeds twice that of Pj.

$$\hat{\mathcal{L}}_{\mathrm{ctr}}(\theta) = \sum_{S_i, S_j \in S_c,\; i < j} \max\!\left(0,\; \bar{p}_s(S_j \mid D; \theta) - \bar{p}_s(S_i \mid D; \theta) + \frac{1}{\lambda} \log 2\,(j - i)\right)$$

where $\lambda$ corresponds to the average output length, and $\bar{p}_s$ corresponds to the length-normalized log probabilities of paraphrases $i$ and $j$. Following Liu et al. (2022), we combine the cross-entropy loss used in the imitation phase with the margin loss as a multi-task loss, to maintain the generation abilities of the student.

3. BRIO-DPO: We also experiment with combining both ranking and preference learning. For this variant, we obtain the contrastive pairs for the BRIO training from the teacher instead of the student, by letting the teacher rank two of its own responses. In this case, the BRIO loss is considered as supervision rather than feedback. The preference pairs for DPO still come from the student. To achieve this, we first train the student on BRIO, and use this model to initialize the DPO training.

We experiment with both one-shot and iterative feedback as shown in Figure 2, where the student may receive feedback from the teacher more than once.
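To make the preference-learning step concrete, the following is a minimal PyTorch sketch of the pairwise DPO loss written above. It is not the authors' implementation: it assumes that per-sequence log-probabilities of the critic-preferred (chosen) and dispreferred (rejected) paraphrases have already been computed under the student policy and under a frozen reference copy of it.

import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Each argument: tensor of sequence log-probs log pi(y | x) over a batch
    # of (chosen, rejected) humorous paraphrase pairs.
    chosen_logratio = policy_chosen_logps - ref_chosen_logps        # log [pi_theta / pi_ref](y_w)
    rejected_logratio = policy_rejected_logps - ref_rejected_logps  # log [pi_theta / pi_ref](y_l)
    logits = beta * (chosen_logratio - rejected_logratio)
    # -log sigma(z) == softplus(-z): a numerically stable form of the DPO loss
    return F.softplus(-logits).mean()

# Toy usage with made-up log-probabilities for two preference pairs:
loss = dpo_loss(torch.tensor([-12.3, -9.8]), torch.tensor([-14.1, -9.5]),
                torch.tensor([-13.0, -10.0]), torch.tensor([-13.5, -10.2]))
print(loss.item())

In practice the chosen/rejected labels would come from the critic's pairwise WTR judgments over the student's own candidate pairs, as described above.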
4 Experimental Setup Dataset. Similar to FLUTE (Chakrabarty et al., 2022), we pick literal input sentences from the EmpatheticDialogues dataset (Rashkin et al., 2019). Each conversation is obtained by pairing a speaker and a listener, where the speaker talks emotionally about personal matters, and the listener infers the underlying emotion and responds empathetically. We sample 12,000 sentences from the training set and generate N = 3 responses from the teacher to create 36,000 literal-humorous text pairs. We sampled 1,000 sentences from the validation and test sets for evaluating the student and 100 samples from the test set for human evaluation. For outof-distribution (OOD) evaluation, we sample 500 literal inputs from Samsum (Gliwa et al., 2019). Teacher Model. We use the 70B chat version of Llama 2 (Touvron et al., 2023) as the teacher model. We generate responses from the teacher using a temperature of 0.8 using nucleus sampling with top_p = 1. Student Model. For student model, we focus on BART-large (Lewis et al., 2020). All the BART student responses are obtained using beam search with the number of beams set to 5 during generation. Metrics. Traditional generation metrics such as ROUGE (Lin, 2004), may not work well for the creative task of generating humor. Inspired by prior work (Liu et al., 2023a; Fu et al., 2023), we use the LLM-based metric of Win Tie Rate (WTR) also described in Sec 3.2 for automatic evaluation. WTR aims to compare a pair of paraphrases and measures whether one paraphrase is equally (tie) or more (win) creative/humorous than the other, while preserving the meaning of the original input (consistency). In this work, we are more interested in humor than consistency - humor is essential, consistency can be subjective and optional based on the type of humor used (e.g. sarcasm). To make it more straightforward for human evaluation, we ask humans to provide individual WTR for the humor WTR separately from consistency WTR. We compute the automatic WTR of the student models using both the critic model based on Llama2-70B and GPT4 (OpenAI, 2023). Length and Positional Biases The critic is prone to two major biases - Position Bias, where evaluation changes based on the encoding order of the responses and Length Bias, the tendency to favor longer responses. To mitigate length bias, during the feedback training phase, we incorporate only a subset of the candidate training pairs, filtered to maintain near-equal length between the two candidates (with a length ratio falling within the range of 0.8 to 1.2). This selection is intended to counteract the length bias resulting from the teacher’s inclination towards longer outputs which has also been observed in prior work regarding AI (Saha et al., 2023a) as well as human feedback(Shen et al., 2023). To mitigate positional bias, we average all our win rates by letting the paraphrases take both position 1 and position 2. 13082 Baselines. We finetune BART-large for 10 epochs on the teacher data using the Maximum Likelihood Estimation (MLE) objective (Sec. 3.1). This baseline is denoted as BART-FT-WS (WS stands for Warm Start). We further refine this baseline to create students trained without and with feedback losses. For the first class of student models, we compare against, • BART-FT: BART-FT-WS further finetuned using the same MLE objective. • BART-SD: Indicating Self-Distillation (SD), BART-FT-WS is trained on only the positive candidates obtained from the critic(Sec. 3.2). 
For the proposed feedback-based baselines, • BART-BRIO: BART-FT-WS finetuned using the BRIO ranking objective for 10 epochs. • BART-DPO: BART-FT-WS finetuned using the DPO objective for 10 epochs. • BART-BRIO-DPO: BART-FT-WS finetuned using BRIO for 10 epochs, followed by DPO for an additional 10 epochs. 5 Experiments Our overall goal is to investigate whether a creative generation ability such as humor can be taught through a combination of examples from the teacher, and feedback on the student responses. To answer this question, we focus our experiments on answering the following research questions: 1⃝RQ1: How does the teacher perform as a critic? 2⃝RQ2: How do students trained with and without feedback compare? 3⃝RQ3: How frequently should the teacher intervene? 4⃝RQ4: How does data size affect student performance? 5⃝RQ5: Does the length bias of the critic affect student responses? 5.1 RQ1: How does the teacher perform as a critic? In order to validate the role of the teacher as a critic model, we conduct blind human evaluation by some of the authors (“annotators”). In Table 1, we analyze the alignment between human and LLM evaluation and compare two versions of the critic. Cloze prompting scores each paraphrase separately based on the perplexity of the template “The funny paraphrase of I is Pi” (for i = 1, 2). Conversely, Multiple Choice Prompting (MCP) presents the two paraphrases to the model and instructs it to choose the better one. We ask the annotators to perform pairwise scoring of 100 blind pairs of humorous outputs. Annotators are asked to rank the paraphrases based on how humorous they are and how consistent they are with respect to the meaning of the input. We compare overall (WTR) and individual win tie rates for humor (WTR-H) and consistency (WTR-C) of the humorous outputs with the input using the critic model (Llama2-70B). We use the following metrics inspired by Saha et al. (2023b) to evaluate the two scoring methods: AgH and AgC. We measure the LLM-Human Agreement (Ag) which is a score between [0, 1] indicating the percent of pairs that the annotator and the LLM agreed on. We compute agreement by independently matching each human judgment for each pair with the model judgment. Positional Bias (PB). We measure the fraction of samples where the critic’s scoring changes based on the order in which the paraphrases are presented to it. Length Bias (LB). LB aims to measure the model tendency to favor longer responses when the human did not. We measure length bias as the fraction of samples where humans prefer the shorter response, but the critic model prefers the longer response. We observe that overall, MCP performs significantly better in terms of human agreement when compared to the cloze style evaluation, with an agreement on humor up to 76% and consistency of up to 65% across both forward and backward positions. Concerning length bias, both methods exhibit length bias in about 20-25% of the cases where they tend to prefer longer outputs than the humans. Although sub-metrics based on Humor (WTR-H) and Consistency (WTR-C) look promising, we found that combining them using simple (MEAN, AND) or more complex methods like BSM (Saha et al., 2023b) results in amplifying positional biases to a huge extent for this task. 
evaluate model responses both during training and evaluation.

Arbiter  Type   Ag-H↑  Ag-C↑  PB↓  LB↓
Cloze    WTR-H  57     –      0    18
Cloze    WTR-C  –      46     0    21
Cloze    WTR    57     47     0    25
MCP      WTR-H  76     –      18   17
MCP      WTR-C  –      65     28   19
MCP      WTR    76     59     15   20

Table 1: Evaluation of human agreement with different arbiters: agreement with humor wins (Ag-H), agreement with consistency wins (Ag-C), positional bias (PB), and length bias (LB). Scores are multiplied by 100.

Student         WTR-GPT4↑  WTR-Llama2↑
BART-FT         30         28
BART-SD         35         36
BART-BRIO       48         53
BART-DPO        52         60
BART-BRIO-DPO   56         65

Table 2: LLM-based evaluation of student models. The Win Tie Rate (WTR) is measured against the teacher (Llama2-70B) by the critic Llama2 (WTR-Llama2) or the external critic GPT-4 (WTR-GPT4) using the method described in Sec 3.2. WTR is scaled by 100.

Student         WTR-Llama2↑
BART-FT         34
BART-BRIO       42
BART-DPO        47
BART-BRIO-DPO   49

Table 3: LLM-based evaluation of student models on the OOD test set (500 examples from Samsum (Gliwa et al., 2019)).

Model       # Data  WTR-Llama2↑  WTR-GPT4↑
BART-FT     36K     30           30
BART-FT     24K     26           24
BART-FT     12K     24           21
BART-BRIO   36K     53           48
BART-BRIO   24K     45           40
BART-BRIO   12K     43           42
BART-DPO    36K     60           53
BART-DPO    24K     51           45
BART-DPO    12K     48           43

Table 4: Effect of data size on win rate of student models against the teacher.

Model       Frequency  WTR-Llama2↑  WTR-GPT4↑
BART-BRIO   1/10       43           42
BART-BRIO   2/10       56           53
BART-BRIO   10/10      66           59
BART-DPO    1/10       48           43
BART-DPO    2/10       59           55
BART-DPO    10/10      68           62

Table 5: Effect of feedback frequency on win rate of student models against the teacher.

5.2 RQ2: How do students trained with and without feedback compare?

Table 2 shows the results of the automatic evaluation for the proposed student models using the Win Tie Rate (WTR) metric. To gauge the student's performance against the teacher that provided the training data, we compare the student outputs to the teacher's references. Hence, we measure the WTR of the student vs. the teacher. When employing BRIO for ranking feedback, BART-BRIO surpasses the imitation-based students (BART-FT and BART-SD) with notable performance increases of 13-18% and 19-25% based on evaluations by GPT-4 and Llama2-70B, respectively. This suggests that in approximately 50% of cases, the student achieves comparable or superior performance to the teacher. When incorporating the same critic feedback as a preference learning objective through DPO (BART-DPO), we observe a similar trend of performance enhancement in the student, with the DPO baseline outperforming BRIO by a modest margin of 4-6%. Subsequently, we explore a combined approach utilizing both BRIO and DPO objectives, resulting in a further improvement of 4-6% over using DPO or BRIO independently.

OOD Test Set We show the performance of the BART-based student models on the OOD test set in Table 3. We can observe a similar trend of student performance improving with both BRIO and DPO feedback objectives. However, the BRIO-DPO variant results in a very minor performance boost, indicating that further investigation is needed in combining ranking and preference learning objectives.

5.3 RQ3: How frequently should the teacher intervene?

To test whether the student can benefit from an increased frequency of feedback, we compare different versions in which we seek feedback from the teacher iteratively after every K epochs, with K varying over 1, 5, and 10 (static). Table 5 shows that more frequent feedback has a positive impact
on student performance, but the performance gains may saturate as feedback frequency is increased. There is also a trade-off between the communication cost with the teacher, which needs to be considered when providing more frequent feedback. This motivates future work in investigating this trade-off further, and approaches such as curriculum learning or active learning could be leveraged to choose which samples should receive feedback.

5.4 RQ4: How does data size affect student performance?

In this experiment, we assess the influence of the amount of data that is used to supervise the student. We examine the relationship between data size and performance by varying the data size between 4,000, 8,000, and 12,000 literal sentences, in all cases providing 3 humorous paraphrases per example. Table 4 shows that simply increasing the data size may be insufficient to improve performance. The performance is lower than that of the more sophisticated teaching approach, specifically incorporating feedback, which proved to be more beneficial and data-efficient. We can observe that the model with feedback trained on 12K examples still performed better than a model trained without feedback on triple the number of examples.

5.5 RQ5: Does length bias of the critic affect student responses?

The length bias in the teacher is propagated to the student, especially when trained over longer periods and more so with the BRIO or ranking objective. Without any mitigation strategies, the length of the student outputs can almost double compared to the input text. As described in Sec 3, we also filter pairs that are significantly longer than one another during training to prevent the student from learning to optimize for length. The average response length of the students trained with and without length filters is shown in Table 7. This is similar to the length correlation observed in human feedback (Singhal et al., 2023). Additionally, we observe that DPO is less susceptible to overfitting based on length compared to BRIO, a ranking objective. Addressing this issue requires mitigating length bias in Multiple Choice Prompting in LLMs, which is an evolving research area (Wang et al., 2023b; Shen et al., 2023).

(a) Input: I am a stay at home mom.
    BART-FT: I'm a stay-at-home mom, and I'm loving every minute of it!
    BART-DPO: I'm a stay-at-home superhero, saving the world one diaper change at a time.
(b) Input: i felt bad about sleeping in today.
    BART-FT: I was feeling a bit guilty about my morning snooze-fest.
    BART-BRIO: I woke up feeling like a sloth who's been snoozed out for so long, I'm pretty sure I've heard the sound of snores echoing off the walls.
(c) Input: My son failed a really important test.
    BART-FT: My son's test score was lower than a Kardashian's Instagram followers.
    BART-BRIO-DPO: My son's test results were so bad, I'm pretty sure he's been secretly practicing his 'I'm-not-a-complete-failure' face for weeks.
(d) Input: My wife and I are going to buy are very first brand new car this week
    BART-DPO: We're finally upgrading our ride from a rusty old clunker to a sleek, sexy beast of a car.
    BART-DPO-ITER2: We're upgrading our transportation game from "legs" to "wheels" this week!

Table 6: Qualitative examples showcasing variants trained with different types of feedback.

Model       Length Filter  Avg. Length
BART-BRIO   on             20
BART-BRIO   off            32
BART-DPO    on             18
BART-DPO    off            23

Table 7: Effect of the length filter on average response length.
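As a concrete illustration of the two bias mitigations discussed in Sec 4 and above (keeping only candidate pairs whose length ratio falls in 0.8-1.2, and averaging win rates over both presentation orders), here is a minimal Python sketch. It is not the paper's code: critic_prefers_first is an assumed stand-in for an actual LLM call, and word counts are used as a simple proxy for length.

def keep_pair(candidate_a, candidate_b, low=0.8, high=1.2):
    # Keep a (P1, P2) training pair only if the two candidates are of roughly
    # equal length, so the student is not rewarded simply for being longer.
    ratio = len(candidate_a.split()) / max(len(candidate_b.split()), 1)
    return low <= ratio <= high

def position_averaged_wtr(pairs, critic_prefers_first):
    # Win rate of candidate A over B, averaged over both presentation orders
    # to reduce the critic's positional bias. critic_prefers_first(x, p1, p2)
    # is assumed to return True when the critic picks the first-listed candidate.
    wins = 0.0
    for x, a, b in pairs:
        forward = 1.0 if critic_prefers_first(x, a, b) else 0.0
        backward = 0.0 if critic_prefers_first(x, b, a) else 1.0  # A listed second
        wins += (forward + backward) / 2.0
    return wins / len(pairs)

# Toy usage with a dummy critic that always prefers the longer candidate:
dummy_critic = lambda x, a, b: len(a) >= len(b)
pairs = [("It was creepy to see a black cat.", "a short joke", "a slightly longer joke here")]
print(keep_pair("a short joke", "a slightly longer joke here"))  # False: pair is filtered out
print(position_averaged_wtr(pairs, dummy_critic))                # 0.0 under this dummy critic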
6 Qualitative Analysis

Table 6 highlights instances where each of the feedback objectives (BART-DPO, BART-BRIO, and BART-BRIO-DPO) outperforms the BART-FT baseline trained solely through imitation learning. In Table 6a, BART-DPO produces a response that exhibits a higher degree of humor compared to BART-FT. In Table 6b, both BART-FT and BART-BRIO generate humorous responses, with BRIO's output slightly edging ahead in humor but at the expense of being longer. In Table 6c, BART-BRIO-DPO generates a more contextually relevant and amusing paraphrase in comparison to the BART-FT student. Lastly, Table 6d demonstrates an instance where the BART-DPO variant, receiving feedback for two iterations instead of one, exhibits superior performance compared to its counterpart trained with only one iteration of feedback.

7 Conclusions

We present a novel framework for knowledge distillation in the context of humor generation, with the teacher LLM providing references and feedback on a smaller student model's performance. Our approach involves leveraging a critic to guide the student model toward generating more humorous outputs. The effectiveness of our method is demonstrated through evaluations conducted by both the LLM and humans. Additionally, we analyze the effect of various design choices on the performance, such as the frequency of feedback and training set size. Our analysis sheds light on the limitations of LLM-based critics, and serves as motivation for future research in mitigation of biases in LLM-based evaluation and AI feedback.

8 Limitations

Quantifying Humor Measuring humor, especially in computational contexts, is inherently subjective and may not be accurately captured using simple metrics.

Cultural references We observed that some of the generated humorous outputs were referring to celebrities and events in North America. As humor varies widely across individuals and cultures, the proposed models may not generalize well across diverse demographic and cultural groups.

Bias propagation Training on feedback data from the LLM may lead to the propagation of any existing biases in the LLM's training data. If the LLM's data lacks diversity, these biases could be intensified in the smaller model. Similarly, the biases in model evaluation can also be propagated to the student.

9 Ethical Considerations

Data. The datasets used to gather the literal inputs outlined in Sec 4 are publicly accessible. Some of these datasets include crowdsourced annotations about emotional events, which may contain offensive, biased or hateful content. We use the chat variant of Llama2, which is aligned to generate safety warnings and responses when offensive content may be present. Further, the Llama2 model generates such warnings for about 3-4% of our inputs, and we discarded these inputs from the data distilled into the student.

Celebrity references. As shown in Figure 1, the teacher LLM makes references and analogies to celebrities primarily from North America. When used as analogies for negative connotations, the humor generated may be considered offensive to specific people.

Acknowledgements

We would like to thank Aditya Sagar and Ahmed Aly from Meta AI for their support throughout the project. Additionally, we would like to thank Aditya Chinchure, Felipe González-Pizarro, Mehar Bhatia, Nikitha Moghe and Debjit Paul for their helpful feedback on a draft version of the paper.

References

Chandra Bhagavatula, Jena D.
Hwang, Doug Downey, Ronan Le Bras, Ximing Lu, Lianhui Qin, Keisuke Sakaguchi, Swabha Swayamdipta, Peter West, and Yejin Choi. 2023. I2D2: Inductive knowledge distillation with NeuroLogic and self-imitation. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9614–9630, Toronto, Canada. Association for Computational Linguistics. Tuhin Chakrabarty, Philippe Laban, Divyansh Agarwal, Smaranda Muresan, and Chien-Sheng Wu. 2023. Art or artifice? large language models and the false promise of creativity. ArXiv, abs/2309.14556. Tuhin Chakrabarty, Arkadiy Saakyan, Debanjan Ghosh, and Smaranda Muresan. 2022. FLUTE: Figurative language understanding through textual explanations. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 7139–7159, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Jun Chen, Deyao Zhu, Xiaoqian Shen, Xiang Li, Zechun Liu, Pengchuan Zhang, Raghuraman Krishnamoorthi, Vikas Chandra, Yunyang Xiong, and Mohamed Elhoseiny. 2023. Minigpt-v2: Large language model as a 13086 unified interface for vision-language multi-task learning. arXiv:2310.09478. Peng-Yu Chen and Von-Wun Soo. 2018. Humor recognition using deep learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 113–117, New Orleans, Louisiana. Association for Computational Linguistics. Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. 2023. Gptscore: Evaluate as you desire. ArXiv, abs/2302.04166. Aparna Garimella, Carmen Banea, Nabil Hossain, and Rada Mihalcea. 2020. “judge me by my size (noun), do you?” YodaLib: A demographic-aware humor generation framework. In Proceedings of the 28th International Conference on Computational Linguistics, pages 2814–2825, Barcelona, Spain (Online). International Committee on Computational Linguistics. Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. 2019. SAMSum corpus: A humanannotated dialogue dataset for abstractive summarization. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 70–79, Hong Kong, China. Association for Computational Linguistics. Arnav Gudibande, Eric Wallace, Charlie Snell, Xinyang Geng, Hao Liu, Pieter Abbeel, Sergey Levine, and Dawn Song. 2023. The false promise of imitating proprietary llms. Jack Hessel, Ana Marasovic, Jena D. Hwang, Lillian Lee, Jeff Da, Rowan Zellers, Robert Mankoff, and Yejin Choi. 2023. Do androids laugh at electric sheep? humor “understanding” benchmarks from the new yorker caption contest. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 688–714, Toronto, Canada. Association for Computational Linguistics. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015a. Distilling the knowledge in a neural network. Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2015b. Distilling the knowledge in a neural network. ArXiv, abs/1503.02531. Nabil Hossain, John Krumm, Michael Gamon, and Henry Kautz. 2020. SemEval-2020 task 7: Assessing humor in edited news headlines. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 746–758, Barcelona (online). International Committee for Computational Linguistics. Cheng-Yu Hsieh, Chun-Liang Li, Chih-kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alex Ratner, Ranjay Krishna, Chen-Yu Lee, and Tomas Pfister. 2023. 
Distilling step-by-step! outperforming larger language models with less training data and smaller model sizes. In Findings of the Association for Computational Linguistics: ACL 2023, pages 8003–8017, Toronto, Canada. Association for Computational Linguistics. Jaehun Jung, Peter West, Liwei Jiang, Faeze Brahman, Ximing Lu, Jillian Fisher, Taylor Sorensen, and Yejin Choi. 2023. Impossible distillation: from low-quality model to high-quality dataset & model for summarization and paraphrasing. Apoorv Kulshreshtha, Daniel De Freitas Adiwardana, David Richard So, Gaurav Nemade, Jamie Hall, Noah Fiedel, Quoc V. Le, Romal Thoppilan, Thang Luong, Yifeng Lu, and Zi Yang. 2020. Towards a human-like open-domain chatbot. arXiv. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880, Online. Association for Computational Linguistics. Liunian Harold Li, Jack Hessel, Youngjae Yu, Xiang Ren, Kai-Wei Chang, and Yejin Choi. 2023a. Symbolic chain-of-thought distillation: Small models can also “think” step-by-step. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2665–2679, Toronto, Canada. Association for Computational Linguistics. Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Luke Zettlemoyer, Omer Levy, Jason Weston, and Mike Lewis. 2023b. Self-alignment with instruction backtranslation. ArXiv, abs/2308.06259. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. Yen-Ting Lin and Yun-Nung Chen. 2023. LLM-eval: Unified multi-dimensional automatic evaluation for open-domain conversations with large language models. In Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023), pages 47–58, Toronto, Canada. Association for Computational Linguistics. Yang Liu, Dan Iter, Yichong Xu, Shuo Wang, Ruochen Xu, and Chenguang Zhu. 2023a. G-eval: Nlg evaluation using gpt-4 with better human alignment. ArXiv, abs/2303.16634. Yixin Liu, Alexander R. Fabbri, Pengfei Liu, Dragomir Radev, and Arman Cohan. 2023b. On learning to summarize with large language models as references. Yixin Liu, Pengfei Liu, Dragomir Radev, and Graham Neubig. 2022. BRIO: Bringing order to abstractive summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2890–2903, Dublin, Ireland. Association for Computational Linguistics. Zhengyuan Liu and Nancy Chen. 2022. Learning from bootstrapping and stepwise reinforcement reward: A semi-supervised framework for text style transfer. In 13087 Findings of the Association for Computational Linguistics: NAACL 2022, pages 2633–2648, Seattle, United States. Association for Computational Linguistics. Fuli Luo, Shunyao Li, Pengcheng Yang, Lei Li, Baobao Chang, Zhifang Sui, and Xu Sun. 2019. Pun-GAN: Generative adversarial network for pun generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3388–3393, Hong Kong, China. Association for Computational Linguistics. 
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Sean Welleck, Bodhisattwa Prasad Majumder, Shashank Gupta, Amir Yazdanbakhsh, and Peter Clark. 2023. Self-refine: Iterative refinement with self-feedback. ArXiv, abs/2303.17651. Lucie Charlotte Magister, Jonathan Mallinson, Jakub Adamek, Eric Malmi, and Aliaksei Severyn. 2023. Teaching small language models to reason. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1773–1781, Toronto, Canada. Association for Computational Linguistics. Shuhaib Mehri and Vered Shwartz. 2023. Automatic evaluation of generative models with instruction tuning. Rada Mihalcea and Carlo Strapparava. 2005. Making computers laugh: Investigations in automatic humor recognition. In Human Language Technology - The Baltic Perspectiv. Anirudh Mittal, Yufei Tian, and Nanyun Peng. 2022. AmbiPun: Generating humorous puns with ambiguous context. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1053–1062, Seattle, United States. Association for Computational Linguistics. Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, and Ahmed Awadallah. 2023. Orca: Progressive learning from complex explanation traces of gpt-4. OpenAI. 2023. Gpt-4 technical report. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Debjit Paul, Mete Ismayilzada, Maxime Peyrard, Beatriz Borges, Antoine Bosselut, Robert West, and Boi Faltings. 2023. Refiner: Reasoning feedback on intermediate representations. Saša Petrovi´c and David Matthews. 2013. Unsupervised joke generation from big data. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 228– 232, Sofia, Bulgaria. Association for Computational Linguistics. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards empathetic opendomain conversation models: A new benchmark and dataset. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5370–5381, Florence, Italy. Association for Computational Linguistics. Victor Raskin. 1979. Semantic mechanisms of humor. In Annual Meeting of the Berkeley Linguistics Society, volume 5, pages 325–335. Antonio Reyes, Paolo Rosso, and D. Buscaldi. 2012. From humor recognition to irony detection: The figurative language of social media. Data Knowl. Eng., 74:1–12. Graeme Ritchie. 2005. Computational mechanisms for pun generation. In Proceedings of the Tenth European Workshop on Natural Language Generation (ENLG05), Aberdeen, Scotland. Association for Computational Linguistics. Graeme Ritchie, Ruli Manurung, Helen Pain, Annalu Waller, Rolf Black, and Dave O’Mara. 2007. 
A practical application of computational humour. In Proceedings of the 4th international joint conference on computational creativity, pages 91–98. UK London. Joshua Robinson and David Wingate. 2023. Leveraging large language models for multiple choice question answering. In The Eleventh International Conference on Learning Representations. Swarnadeep Saha, Omer Levy, Asli Celikyilmaz, Mohit Bansal, Jason Weston, and Xian Li. 2023a. Branchsolve-merge improves large language model evaluation and generation. ArXiv, abs/2310.15123. Swarnadeep Saha, Omer Levy, Asli Celikyilmaz, Mohit Bansal, Jason Weston, and Xian Li. 2023b. Branchsolve-merge improves large language model evaluation and generation. Wei Shen, Rui Zheng, Wenyu Zhan, Jun Zhao, Shihan Dou, Tao Gui, Qi Zhang, and Xuanjing Huang. 2023. Loose lips sink ships: Mitigating length bias in reinforcement learning from human feedback. ArXiv, abs/2310.05199. Kumar Shridhar, Alessandro Stolfo, and Mrinmaya Sachan. 2023. Distilling reasoning capabilities into smaller language models. In Findings of the Association for Computational Linguistics: ACL 2023, pages 13088 7059–7073, Toronto, Canada. Association for Computational Linguistics. Prasann Singhal, Tanya Goyal, Jiacheng Xu, and Greg Durrett. 2023. A long way to go: Investigating length correlations in rlhf. Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3645–3650, Florence, Italy. Association for Computational Linguistics. Hugo Touvron, Louis Martin, Kevin R. Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Daniel M. Bikel, Lukas Blecher, Cristian Cantón Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony S. Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel M. Kloumann, A. V. Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, R. Subramanian, Xia Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zhengxu Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and fine-tuned chat models. ArXiv, abs/2307.09288. Lewis Tunstall, Edward Beeching, Nathan Lambert, Nazneen Rajani, Kashif Rasul, Younes Belkada, Shengyi Huang, Leandro von Werra, Clémentine Fourrier, Nathan Habib, Nathan Sarrazin, Omar Sanseviero, Alexander M. Rush, and Thomas Wolf. 2023. Zephyr: Direct distillation of lm alignment. Alessandro Valitutti. 2011. How many jokes are really funny? towards a new approach to the evaluation of computational humour generators. In Proceedings of International Workshop on Natural Language Processing and Cognitive Science (NLPCS 2011), pages 189–200, Denmark. Samfundslitteratur. International Workshop on Natural Language Processing and Cognitive Science (NLPCS 2011) ; Conference date: 20-08-2011 Through 21-08-2011. Peifeng Wang, Zhengyang Wang, Zheng Li, Yifan Gao, Bing Yin, and Xiang Ren. 2023a. 
SCOTT: Selfconsistent chain-of-thought distillation. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5546–5558, Toronto, Canada. Association for Computational Linguistics. Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. 2023b. Large language models are not fair evaluators. ArXiv, abs/2305.17926. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-thought prompting elicits reasoning in large language models. Orion Weller, Nancy Fulda, and Kevin Seppi. 2020. Can humor prediction datasets be used for humor generation? humorous headline generation via style transfer. In Proceedings of the Second Workshop on Figurative Language Processing, pages 186–191, Online. Association for Computational Linguistics. Orion Weller and Kevin Seppi. 2019. Humor detection: A transformer gets the last laugh. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 3621–3625, Hong Kong, China. Association for Computational Linguistics. Peter West, Chandra Bhagavatula, Jack Hessel, Jena Hwang, Liwei Jiang, Ronan Le Bras, Ximing Lu, Sean Welleck, and Yejin Choi. 2022. Symbolic knowledge distillation: from general language models to commonsense models. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4602–4625, Seattle, United States. Association for Computational Linguistics. Peter West, Ximing Lu, Nouha Dziri, Faeze Brahman, Linjie Li, Jena D. Hwang, Liwei Jiang, Jillian Fisher, Abhilasha Ravichander, Khyathi Chandu, Benjamin Newman, Pang Wei Koh, Allyson Ettinger, and Yejin Choi. 2023. The generative ai paradox: "what it can create, it may not understand". Zhiwei Yu, Jiwei Tan, and Xiaojun Wan. 2018. A neural approach to pun generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1650–1660, Melbourne, Australia. Association for Computational Linguistics. Yuheng Zha, Yichi Yang, Ruichen Li, and Zhiting Hu. 2023. AlignScore: Evaluating factual consistency with a unified alignment function. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11328–11348, Toronto, Canada. Association for Computational Linguistics. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Evaluating text generation with bert. In International Conference on Learning Representations. A Pairwise breakdown of Human Scores B Prompts for Generation In Figure 3, we show the prompt used to obtain the humorous outputs from the teacher Llama2-70B model. 13089 Group Baseline WTR-H ↑ WTR-C ↑ WTR ↑ 1 BART 23 38 29 2 BART-BRIO 43 32 45 3 BART-DPO 51 34 52 Table 8: Human Evaluation of student models against the teacher LLAMA2-70B. Figure 3: Humor Generation C Prompts for Evaluation In Figure 4, we show the prompt used to obtain the pairwise scores from the teacher Llama2-70B model. You will be given an input text from a conversation. You will then be given two paraphrase choices (1 or 2) for this input text. Choose the better paraphrase, based on how humorous and human-like it sounds, while staying true to the input text. 
Input text: {I}
Paraphrases:
1. {P1}
2. {P2}
Format your answer in this way - 'Reason: A reason for your answer which is maximum 15 to 20 words long.' 'Answer: Your choice of 1 or 2'
Figure 4: Pairwise evaluation using Multiple Choice Prompting
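For readers who want to reproduce this pairwise setup programmatically, the following minimal Python sketch instantiates the Figure 4 template and parses the evaluator's choice. The helper names, the generate() wrapper mentioned in the comments, and the regular expression are illustrative assumptions; they are not part of the original evaluation code.

```python
import re

# Template mirroring the pairwise evaluation prompt shown in Figure 4.
PAIRWISE_TEMPLATE = (
    "You will be given an input text from a conversation. "
    "You will then be given two paraphrase choices (1 or 2) for this input text. "
    "Choose the better paraphrase, based on how humorous and human-like it sounds, "
    "while staying true to the input text.\n\n"
    "Input text: {input_text}\n"
    "Paraphrases:\n1. {p1}\n2. {p2}\n\n"
    "Format your answer in this way - "
    "'Reason: A reason for your answer which is maximum 15 to 20 words long.' "
    "'Answer: Your choice of 1 or 2'"
)

def build_pairwise_prompt(input_text: str, p1: str, p2: str) -> str:
    """Instantiate the Figure 4 template for one pairwise comparison."""
    return PAIRWISE_TEMPLATE.format(input_text=input_text, p1=p1, p2=p2)

def parse_choice(model_output: str):
    """Extract the chosen paraphrase (1 or 2) from the formatted answer, or None."""
    match = re.search(r"Answer:\s*([12])", model_output)
    return int(match.group(1)) if match else None

# Hypothetical usage with an assumed generate() call to the evaluator model:
# prompt = build_pairwise_prompt(dialogue_turn, candidate_1, candidate_2)
# choice = parse_choice(generate(prompt))
```

Parsing only the "Answer:" field keeps the evaluation robust to variation in the free-text "Reason:" part of the response.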
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13091–13116 August 11-16, 2024 ©2024 Association for Computational Linguistics Symbol-LLM: Towards Foundational Symbol-centric Interface For Large Language Models Fangzhi Xu1,2, Zhiyong Wu2∗, Qiushi Sun3, Siyu Ren4, Fei Yuan2, Shuai Yuan5, Qika Lin3, Yu Qiao2, Jun Liu1 1Xi’an Jiaotong University 2Shanghai Artificial Intelligence Laboratory 3National University of Singapore 4Shanghai Jiao Tong University 5Hong Kong University of Science and Technology [email protected], [email protected], [email protected] Abstract Although Large Language Models (LLMs) demonstrate remarkable ability in processing and generating human-like text, they do have limitations when it comes to comprehending and expressing world knowledge that extends beyond the boundaries of natural language(e.g., chemical molecular formula). Injecting a collection of symbolic data directly into the training of LLMs can be problematic, as it disregards the synergies among different symbolic families and overlooks the need for a balanced mixture of natural and symbolic data. In this work, we tackle these challenges from both a data and framework perspective and introduce Symbol-LLM series models. First, we curated a data collection consisting of 34 tasks and incorporating 20 distinct symbolic families, intending to capture the interrelations and foster synergies between symbols. Then, a two-stage tuning framework succeeds in injecting symbolic knowledge without loss of the generality ability. Extensive experiments on both symboland NL-centric tasks demonstrate the balanced and superior performances of Symbol-LLM series models. The project page is https:// xufangzhi.github.io/symbol-llm-page/. 1 Introduction Large Language Models (LLMs), such as GPTseries (Radford et al., 2019; Brown et al., 2020; OpenAI, 2023) and LLaMA-series (Touvron et al., 2023a,b), boosted the performance in various Natural Language Processing (NLP) tasks (Zhao et al., 2023; Wei et al., 2022b; Zhou et al., 2023; Yao et al., 2023). The success of these models heavily relies on natural language (NL) as the primary interface1 for interaction and reasoning. However, the ∗Corresponding Author. 1Interface in this paper refers to the communication between LLM and environment (i.e., external tools). NL-centric interface confines the inputs and outputs to an NL form, which can only address certain aspects of world knowledge, such as fact (Bordes et al., 2015), commonsense (Talmor et al., 2019). Nevertheless, a substantial amount of abstract knowledge, notably in areas like molecular formula (e.g., C6H12O6) and first-order logic (e.g., IsTriangle(X) →SumOfAngles(X, 180◦)), is more effectively represented in symbolic forms rather than in NL.2 Compared to the NL form, the symbolic form covers a wide spectrum of scenarios and tends to be more concise and clear, enhancing its communication effectiveness (Gao et al., 2023; Qin et al., 2023). In particular, when interacting with robots, symbolic command sequences (such as PICKUP, WALK) are more accurate and efficient than NL. Similarly, when using programming languages (like SQL and Python) to call external tools (Gao et al., 2023), expressing this structured information in NL form can be difficult. Despite the symbolic form offering a wealth of information, deploying LLMs directly via a symbolic-centric interface poses a significant challenge. 
This is largely attributed to the fact that LLMs are trained via large-scale unsupervised pre-training on extensive general text datasets, which inherently lack a symbolic foundation. The most straightforward approach to incorporating symbolic knowledge into LLMs is through fine-tuning (Yang et al., 2023; Xu et al., 2023b). However, the format of symbolic data significantly diverges from that used during pre-training. Consequently, merely fine-tuning with large heterogeneous data can lead to catastrophic forgetting (Kirkpatrick et al., 2017).

(Footnote 2: For clarity, the world knowledge covered in this paper refers to the symbolic form of knowledge, rather than the broad concept of 'knowledge' stored in LLMs.)

Meanwhile, while existing injection methods primarily stress specific symbols, it is important to note that symbolic forms can be quite complex and vary across tasks. Training an LLM for a specific symbolic form is both time-consuming and labor-intensive. Furthermore, treating each symbol independently often overlooks the interconnections between different symbols; e.g., an atom unit (e.g., BornIn(Obama, USA)) in first-order logic (FOL) is similar in form to a function (e.g., query(Paris, nwr(hotel))) in API calls.

Upon this observation, we conduct a comprehensive collection of 34 text-to-symbol generation tasks covering 20 standard symbolic forms, presented in an instruction-tuning format. The symbolic data come from three sources: (1) 88.3% of the data was collected from existing benchmarks. (2) 5.8% of the data was generated by prompting LLMs. Compensating for the natural absence of symbolic representations in some NL-centric tasks, prompting powerful LLMs can generate more novel text-to-symbol pairs. (3) 5.9% of the data was generated by introducing the Symbol-evol strategy, which replaces symbolic definitions to prevent the model from memorizing specific symbols. The above sources are uniformly leveraged to capture the underlying connections between symbols from the data view.

For the framework aspect, we apply a two-stage continual tuning framework consisting of the Injection Stage and the Infusion Stage. The Injection Stage prioritizes the exploitation of the inherent connections between different symbols, thereby enabling the model to thoroughly learn a wide range of symbolic knowledge. After tuning LLaMA-2-Chat models with all collected symbolic data, we obtain the Symbol-LLMBase variants. The Infusion Stage focuses on balancing the model's dual capabilities by utilizing both symbolic data and general instruction tuning. After combining the general instruction-tuning data with the sampled symbolic data and tuning based on Symbol-LLMBase, we obtain Symbol-LLMInstruct. Finally, the Symbol-LLM series models are widely tested on both symbol-centric and NL-centric tasks and are verified to exhibit substantial superiority.

Our contributions are as follows, with key values and research insights attached in Appendices A and B:
• A comprehensive collection of text-to-symbol generation tasks, the first to treat symbolic data in a unified view and explore the underlying connections among symbols.
• The open-sourced Symbol-LLM series models build a new foundation LLM with balanced symbolic and NL abilities.
• Extensive experiments on both symbol- and NL-centric tasks are conducted to prove the superiority of Symbol-LLM.

Figure 1: Overview of the data collection procedure. It involves three key sources: (1) existing benchmarks, (2) new data generated via prompting GPT-4, and (3) new data synthesized using the Symbol-evol strategy.

2 Approach

In this section, we first introduce the overall symbolic data collection procedure in Section 2.1 and then describe the two-stage tuning framework and the comprehensive test settings in Section 2.2.

2.1 Data Collection

Conducting comprehensive symbolic knowledge injection and exploiting the interrelations among symbols requires a large collection of symbolic data. However, achieving diverse knowledge coverage continues to be a significant hurdle in language modeling. Therefore, we curate an extensive collection of symbolic tasks, which is under-explored in NLP. The overview of the symbolic data collection procedure is shown in Figure 1. The ultimate symbolic dataset is Ds = Ds1 ∪ Ds2 ∪ Ds3. Here, Ds1 represents the existing benchmarks. The dataset Ds2 is a novel dataset resulting from prompting GPT-4. Ds3 is another new dataset, generated by introducing the Symbol-evol strategy. Generally, we compile a set of 34 text-to-symbol generation tasks, covering ∼20 different standard symbolic forms. To maintain the general capability in NL-centric tasks, this work also includes general instruction data Dg. Details of each dataset are attached in Appendix C.

Figure 2: Overall pipeline of Symbol-LLM. (a) is the two-stage tuning framework, with the Injection stage and the Infusion stage. (b) is the test phase with comprehensive settings: symbolic tasks, general tasks, and downstream tasks under the Symbol+Delegation paradigm.

Ds1: the existing symbolic datasets and benchmarks. Previous efforts have been dedicated to specific symbolic forms, offering a natural and strong foundation for Symbol-LLM. We include plenty of text-to-symbol tasks from various data sources such as Spider (Yu et al., 2018), MTOP (Li et al., 2021), and SCAN (Lake and Baroni, 2018), and further shape them into the defined formats. This collection is named Ds1.

Ds2: novel text-to-symbol pairs by prompting GPT-4. While Ds1 has broad coverage, it lacks certain text-to-symbol pairs in some crucial scenarios. For example, some mathematical problems can be better handled when converted to a programming language, but labeled samples are limited. To address this, we prompt GPT-4 to generate the corresponding symbolic outputs given the NL instructions, following Gao et al. (2023). Correct outputs, as judged by executing solvers (e.g., a code interpreter), are retained to form new text-to-symbol pairs, constructing the collection Ds2.

Ds3: new samples generated by applying the Symbol-evol strategy. The above collection covers a vast range of standard definitions of symbolic forms. However, one concern is that large tuning data with the same symbolic definitions magnifies the LLM's propensity to memorize the patterns instead of truly learning to follow instructions. Thus, we introduce the Symbol-evol strategy, expecting to enhance the diversity of symbolic systems. The Symbol-evol strategy, as depicted in Figure 1(3), is exemplified using the SCAN dataset (Lake and Baroni, 2018). In the original data collection, some action commands (in red background) are defined to control robots. Randomly generated strings (in green background) are leveraged to replace the original symbolic definitions. For example, the originally defined command I_TURN_RIGHT is replaced by shY2sW. In this way, diverse symbol-instruction samples can be derived from some original tasks in Ds1, forming the collection Ds3.
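To make the Symbol-evol strategy concrete, here is a minimal Python sketch of the idea: every symbol defined in an instruction (e.g., SCAN-style commands such as I_TURN_RIGHT) is consistently replaced by a randomly generated string in both the instruction and the target output. The function names, the fixed command list, and the replacement length are illustrative assumptions; the authors' actual implementation may differ.

```python
import random
import string

# Illustrative set of originally defined SCAN-style commands (assumption).
SCAN_COMMANDS = ["I_TURN_RIGHT", "I_TURN_LEFT", "I_WALK", "I_LOOK", "I_JUMP", "I_RUN"]

def random_symbol(length: int = 6) -> str:
    """Generate a random replacement token, e.g., 'shY2sW'."""
    return "".join(random.choices(string.ascii_letters + string.digits, k=length))

def symbol_evol(instruction: str, output: str, commands=SCAN_COMMANDS):
    """Replace every defined command with a fresh random string,
    consistently across the instruction and the symbolic output."""
    mapping = {cmd: random_symbol() for cmd in commands}
    for cmd, new_sym in mapping.items():
        instruction = instruction.replace(cmd, new_sym)
        output = output.replace(cmd, new_sym)
    return instruction, output, mapping

# Example:
# inst = ("Transform the natural language sentence into action sequence from the "
#         "candidate actions. I_TURN_RIGHT means turning right. I_JUMP means jump.")
# out = "I_TURN_RIGHT I_JUMP"
# new_inst, new_out, mapping = symbol_evol(inst, out)
```

Because the mapping is regenerated per sample, the model cannot simply memorize fixed surface forms and must instead follow the symbol definitions given in the instruction.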
Dg: general data. The general data are collected from three sources: (i) sampled Flan collection data (Wei et al., 2022a; Longpre et al., 2023); (ii) Code Alpaca instruction-tuning data (Chaudhary, 2023); (iii) sampled Evol data from WizardLM (Xu et al., 2023a). Full details are given in Appendix C.2.

2.2 Symbol-LLM

The overview of Symbol-LLM is shown in Figure 2, comprising both the tuning and testing phases. The tuning framework, as illustrated in Fig. 2a, encompasses two stages: the Injection stage and the Infusion stage. After the Injection stage, we obtain the Symbol-LLMBase model, which is expected to address various symbol-related scenarios. However, the Injection stage focuses on injecting symbolic knowledge into LLMs regardless of general capability, while we also expect Symbol-LLM to maintain the necessary proficiency in general tasks in order to achieve balanced symbol and NL interfaces for interaction and reasoning. Thus, we introduce the Infusion stage to obtain Symbol-LLMInstruct. The test phase, represented in Fig. 2b, covers comprehensive settings on both symbolic and NL scenarios.

Tuning Phase 1: Injection Stage. In this stage, we focus purely on injecting various forms of symbolic knowledge into LLMs by conducting supervised fine-tuning (SFT) on the Ds collection. The training loss of the Injection stage is the maximum likelihood estimation (MLE) objective:

LMLE(Ds) = −Σi log pθ(yi | si ⊕ xi),   (1)

where pθ is the tunable LLM with parameters θ, which is initialized from the LLaMA-2-Chat models. si ⊕ xi refers to the input format: the instruction (si) covering the task definition is concatenated (⊕) with the natural language query (xi), and yi is the symbolic output.

Tuning Phase 2: Infusion Stage. In this stage, we randomly sample from Ds to obtain a subset Ds′ ⊂ Ds; the data are proportioned to ensure a fair distribution. They are combined with the general instruction-tuning data Dg to form the training set of this stage. The loss function to be minimized is likewise based on MLE:

LMLE(Ds′ ∪ Dg) = −Σj log pθ1(yj | sj ⊕ xj),   (2)

where the tunable parameters θ1 are initialized from Symbol-LLMBase, and sj, xj, and yj are the instruction, input, and output for a single sample, respectively.

Testing Phase. This work presents comprehensive testing settings for broader applications. For detailed task descriptions, refer to Appendix E.
• Symbolic Tasks: Extensive symbolic generation tasks stress the unique advantages of addressing symbolic language beyond NL.
• General Tasks: Classical benchmarks of general tasks are leveraged to verify the balanced capabilities in symbol- and NL-centric scenarios.
• Symbol+Delegation Tasks: Verifying the effectiveness of an LLM with a symbol-centric interface. We refer to this promising setting as Symbol+Delegation, where the model first generates the symbolic representation of the question and then relies on external solvers (e.g., a Python interpreter or SQL execution) for the solution.

3 Experiments

In this section, we fully evaluate Symbol-LLM3 in three sets of experiments: the symbolic tasks in Sec. 3.1, the general tasks in Sec. 3.2, and the Symbol+Delegation tasks in Sec. 3.3. Implementation details are given in Appendix E and Appendix F. The overall performance of Symbol-LLM is summarized in Appendix G.

3.1 Symbolic Tasks

Table 1 presents the results on 34 symbolic generation tasks. For model comparison, we include GPT-3.5, Claude-1, LLaMA-2-Chat, and the optimized model after single-domain SFT on LLaMA-2-Chat. Limited by space, we compare other baseline models (e.g., CodeLLaMA-Instruct) in Appendix G.
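To make the MLE objectives in Eqs. (1) and (2) above concrete, the following minimal sketch shows one common way to implement this kind of instruction-style SFT with Hugging Face Transformers: the instruction and query tokens are masked with the label value -100 so that the cross-entropy loss is computed only on the symbolic output yi. The checkpoint name, the prompt concatenation format, and the helper function are placeholders assumed for illustration, not the authors' released training code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint; the paper initializes from LLaMA-2-Chat models.
model_name = "meta-llama/Llama-2-7b-chat-hf"  # assumption for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def sft_loss(instruction: str, query: str, symbolic_output: str) -> torch.Tensor:
    """Cross-entropy on the symbolic output only, i.e., -log p_theta(y | s ⊕ x)."""
    prompt = instruction + "\n" + query            # s_i ⊕ x_i
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    target_ids = tokenizer(symbolic_output + tokenizer.eos_token,
                           add_special_tokens=False,
                           return_tensors="pt").input_ids

    input_ids = torch.cat([prompt_ids, target_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100        # ignore prompt tokens in the loss

    outputs = model(input_ids=input_ids, labels=labels)
    return outputs.loss                            # averaged -log p over y tokens
```

Under this reading, the Infusion-stage objective in Eq. (2) is identical in form; only the data mixture (Ds′ ∪ Dg) and the initialization (Symbol-LLMBase instead of LLaMA-2-Chat) change.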
For extra OOD symbolic tasks, please refer to Appendix H. The main findings are as follows: Symbol-LLM largely enhances the symbolrelated capabilities of LLM. In comparison with the LLaMA-2-Chat model, Symbol-LLM presents overwhelming advantages in symbolic tasks. It improves the baseline performances of 7B and 13B by 49.29% and 55.88%, respectively. Also, cutting-edge close-source LLMs like GPT3.5 and Claude-1, are far behind Symbol-LLM, with the minimum gaps of 39.61% (GPT-3.5 v.s. Symbol-LLM-7B). In short, Symbol-LLM brings huge advantages in symbolic scenarios. The unified modeling helps Symbol-LLM successfully capture the intrinsic relationships between different symbols. Fine-tuning LLaMA2-Chat on single-domain tasks fully overfits domain-specific symbolic forms, as shown in Single SFT of Table 1. Compared with it, SymbolLLM shows better performances, with averaged 0.42% and 2.02% gains in 7B and 13B. It verifies that the unified modeling of various symbolic forms is beneficial to capturing symbolic interrelations. 3.2 General Tasks To verify Symbol-LLM’s power in tackling NLcentric tasks, we conduct the experiments on two widely-used benchmarks, MMLU and BIG-BenchHard (BBH). Results are shown in Table 2. Competitive performances in general tasks are maintained in Symbol-LLM. Overall, Symbol3Unless otherwise specified, Symbol-LLM represents the final model after two stages (i.e., Instruct version). 13094 Domains / Tasks Metrics Close-Source Open-source (7B) Open-source (13B) GPT-3.5 Claude-1 LLaMA-2-Chat Single SFT Symbol-LLM LLaMA-2-Chat Single SFT Symbol-LLM Planning Blocksworld BLEU 96.54 91.35 85.16 97.40 99.020.11 31.27 97.06 99.020.12 Termes BLEU 74.73 26.94 53.08 67.46 48.694.48 59.30 68.63 90.090.66 Floortile BLEU 54.23 13.94 59.41 78.07 95.840.45 0.00 74.22 95.240.21 Grippers BLEU 99.90 90.91 86.15 94.84 98.530.51 95.36 97.46 98.890.32 SQL Spider EM 42.60 32.70 16.50 65.30 63.800.09 10.30 68.20 69.200.05 Sparc EM 29.90 28.60 12.50 55.40 55.000.07 10.20 57.50 58.900.05 Cosql EM 18.80 22.70 9.30 51.30 48.200.07 1.20 54.60 52.700.07 KG / DB WebQSP F1 36.49 41.37 0.09 84.93 84.430.26 0.00 84.80 85.290.23 GrailQA EM 28.52 25.56 0.00 80.58 79.240.21 0.06 81.82 81.170.22 CompWebQ EM 0.00 0.00 0.00 56.30 50.980.46 0.00 59.02 54.940.52 AMR AMR3.0 Smatch 18.00 10.00 6.00 55.00 54.000.00 2.00 55.00 55.000.00 AMR2.0 Smatch 14.00 12.00 7.00 46.00 45.000.00 1.00 47.00 46.000.00 BioAMR Smatch 23.00 3.00 24.00 80.00 78.000.22 0.00 80.00 80.000.00 Ontology Tekgen F1 8.92 1.86 4.50 56.69 57.340.02 6.24 58.49 58.550.13 Webnlg F1 28.34 8.89 7.38 63.75 60.420.05 17.23 62.13 63.080.08 API MTOP EM 3.80 8.40 0.00 84.80 84.400.15 0.00 86.20 86.600.12 TOPv2 EM 6.60 7.60 0.00 86.60 85.800.06 0.00 87.20 85.200.06 NLmaps EM 30.88 16.77 2.00 91.95 92.180.03 3.60 92.38 92.210.02 Command SCAN EM 15.09 15.97 0.00 98.23 98.350.00 0.00 98.99 99.280.00 Code NL2BASH BLEU 54.19 42.24 23.29 59.22 60.250.18 19.06 60.68 60.760.12 NL2RX BLEU 38.60 18.30 5.91 85.25 85.080.12 0.00 85.55 84.970.28 NL2Python BLEU 37.01 36.73 26.68 38.19 39.790.28 34.94 40.35 40.760.32 NL2Java BLEU 24.88 22.79 25.77 27.33 28.080.22 23.49 28.47 28.250.19 NL2Go BLEU 19.08 26.65 24.00 30.77 29.190.39 1.26 24.75 30.310.33 FOL FOLIO LE 60.65 53.47 33.98 90.81 90.580.01 28.79 91.59 90.650.02 MALLS LE 69.15 30.46 55.13 89.24 88.880.03 11.71 89.41 89.500.01 LogicNLI LE 73.11 69.16 39.95 100.00 99.970.00 32.26 99.99 100.000.00 Visual GQA EM 7.55 7.70 0.30 85.65 85.500.01 8.85 86.10 85.950.01 CLEVR EM 6.35 5.90 0.25 86.35 94.800.11 1.15 92.20 
95.600.09 Geometry3k EM 65.25 40.84 36.88 93.92 95.130.02 52.17 94.52 95.670.03 Math GSM8K-Code BLEU 82.20 63.42 53.66 85.31 84.140.32 72.29 84.01 84.420.23 AQUA-Code BLEU 67.48 48.88 39.25 66.27 67.050.62 55.13 65.66 67.200.77 MATH-Code BLEU 56.48 48.87 29.88 56.43 57.360.89 48.85 58.24 56.971.12 AI4Science CheBi-20 EM 1.15 0.30 0.00 40.36 58.970.92 0.00 46.82 65.270.89 Average Performance 32.27 25.04 22.59 71.46 71.88 18.46 72.32 74.34 Table 1: Main results on 34 in-domain symbolic tasks. Better results with the same model size are marked in bold. GPT-3.5, Claude-1, and LLaMA-2-Chat column presents the baseline performances of prompting these models under the few-shot setting. Single-SFT represents the models fine-tuned with single-domain samples based on LLaMA-2-Chat. Symbol-LLM column represents the final obtained model after two-stage tuning, with beam size 1 for decoding. The subscript of the result denotes the variances between three generations (with different beam sizes). LLM is well optimized with the two-stage framework in keeping general abilities. For 7B models, Symbol-LLMInstruct shows consistent superiority on MMLU and BBH benchmarks, with ∼4% gains compared with LLaMA-2-Chat. For 13B models, although Symbol-LLMInstruct slightly falls behind its LLaMA counterpart, it achieves 7.20% performance advantages in BBH. The superiority on average is obvious. While Symbol-LLM may not yet match the performance of closed-source LLMs, its well-rounded general capability is notable. These remarkable performances can make Symbol-LLM stand out as a foundational LLM, which combines the basic NL ability and proficiency in handling symbolic language. To verify the consistent superiority of SymbolLLM in general tasks, we include a broader scope of evaluation in Appendix I. 3.3 Symbol+Delegation Tasks A wide range of experiments are done under the Symbol+Delegation paradigm, covering the fields of math reasoning, symbolic reasoning, logical reasoning, robotic planning, visual reasoning as well as table question answering. For detailed settings, please refer to Appendix E.3. Limited by space, we only present the results of the math reasoning in the main paper. The remaining parts are attached in Appendix J. We select 9 commonly used math datasets for testing, including both in-domain and OOD tasks. To demonstrate the surprising performances 13095 Models MMLU (5-shot) BBH (0-shot) Humanities SocialSciences STEM Others Average Average Close-source LLMs GPT-3.5 54.90 69.58 49.73 66.75 59.74 56.84 Claude-1 56.60 74.15 53.66 60.35 62.09 47.01 Open-source LLMs (7B) LLaMA-2-Chat 42.47 52.49 36.94 52.47 45.78 35.01 CodeLLaMA-Instruct 39.47 46.31 35.95 45.34 41.57 35.69 Symbol-LLMBase 40.04 46.28 33.73 47.16 41.70 33.82 Symbol-LLMInstruct 46.33 57.20 40.39 54.53 49.30 39.30 Open-source LLMs (13B) LLaMA-2-Chat 49.52 62.43 43.84 60.02 53.55 36.99 CodeLLaMA-Instruct 33.88 41.92 34.69 42.17 37.73 36.71 Symbol-LLMBase 45.67 55.67 40.09 53.89 48.56 35.26 Symbol-LLMInstruct 48.88 62.14 43.44 57.93 52.71 44.09 Table 2: Results on general tasks. We include 57 tasks in the MMLU benchmark for testing under the 5-shot setting (Hendrycks et al., 2021a), while we select 21 tasks in BBH under the 0-shot setting following Gao et al. (2021a). Best results are marked in bold while sub-optimal results are underlined (same for the following tables). Models Del. 
GSM8k MATH GSM-Hard SVAMP ASDiv ADDSUB SingleEQ SingleOP MultiArith Is OOD Setting ✓ ✓ ✓ ✓ ✓ ✓ ✓ Close-source LLMs GPT-3.5 ✓ 4.60 1.05 4.62 5.10 6.30 1.01 3.94 8.54 17.33 GPT-3.5 (3-shot) ✓ 76.04 36.80 62.09 83.40 85.73 87.59 96.46 90.74 96.67 Claude-1 ✓ 11.14 1.07 9.02 10.30 6.30 5.06 4.53 0.36 12.67 Claude-1 (3-shot) ✓ 58.07 13.17 43.75 78.90 74.19 79.49 88.19 87.72 91.83 Open-source LLMs (7B) LLaMA-2-Chat (3-shot) ✓ 12.21 1.32 10.69 22.00 25.86 29.11 27.36 39.15 23.17 CodeLLaMA-Instruct (3-shot)† ✓ 34.00 16.60 33.60 59.00 61.40 Average performance 79.60 WizardMath† 54.90 10.70 57.30 MAmmoTH† ✓ 51.70 31.20 66.70 Symbol-LLMBase ✓ 61.14 28.24 52.62 72.50 78.34 89.62 97.83 96.26 99.67 Symbol-LLMInstruct ✓ 59.36 26.54 48.98 72.80 75.76 87.85 96.26 93.24 99.00 Open-source LLMs (13B) LLaMA-2-Chat (3-shot) ✓ 34.87 6.07 28.96 45.00 46.61 45.57 47.05 56.76 56.67 CodeLLaMA-Instruct (3-shot)† ✓ 39.90 19.90 39.00 62.40 65.30 Average performance 86.00 WizardMath† 63.90 14.00 64.30 MAmmoTH† ✓ 61.70 36.00 72.40 Symbol-LLMBase ✓ 68.69 33.39 58.53 78.80 80.15 91.14 96.85 95.55 98.83 Symbol-LLMInstruct ✓ 65.58 31.32 55.57 76.80 79.01 91.90 96.85 94.84 99.33 Table 3: Results on Math Reasoning. Del. represents whether uses delegation (i.e., Python Interpreter for math reasoning tasks). Results are under the zero-shot setting unless otherwise stated (the following tables share the same setting). † indicates that the results are reported from Luo et al. (2023), Yue et al. (2023) and Gou et al. (2023). of Symbol-LLM, we also include several mathdomain LLMs (e.g., WizardMath (Luo et al., 2023), MAmmoTH (Yue et al., 2023)) as strong baselines. Comparison results are presented in Table 3. Advanced abilities in math reasoning are possessed by Symbol-LLM. GSM8K and MATH are widely used to evaluate the math reasoning capabilities of LLMs. Compared with recent mathdomain LLMs, Symbol-LLM presents great competitive results on them. Especially on GSM8K, Symbol-LLM consistently wins all strong baselines with great margins with all the model variants. On the MATH dataset, Symbol-LLM merely falls behind MAmmoTH, which is a strong LLM specially designed for math reasoning tasks. Notably, MAmmoTH includes GSM8K and MATH in the tuning stage and it also uses delegation (i.e., Python Interpreter) for inference, thus our comparisons are fair. Similar superiority is also observed under the OOD tasks (e.g., SVAMP). Symbol-LLM exhibits competitive performances in extrapolating to OOD tasks. More surprisingly, Symbol-LLM consistently presents its significant superiority among all 7 OOD math tasks. Even compared with GPT-3.5 under the three-shot setting, our Symbol-LLM-7B series won 4 (out of 7) OOD tasks under the zero-shot setting. As we scale the model size to 13B, obvious performance improvements are observed in most of the tasks. These findings verify the prospects of Symbol-LLM under the Symbol+Delegation paradigm. 13096 4 Analysis In this section, we include the ablation studies (Sec.4.1) and the analysis on Alignment and Uniformity (Sec.4.2). Notably, additional supplementary experiments are attached in Appendix K, including tuning design choices K.1, comparison with single SFT K.2, and new symbol extrapolation K.3. 4.1 Ablation Studies Here we present two ablation experiments from both the framework and data views: (1) tuning only in one stage, and (2) tuning only on general data collection. For a fair comparison, we introduce two settings for one-stage tuning. 
The first setting (named One-stage 1.46M) simply mixes Ds, Ds′ and Dg, regardless of sample overlap. The second setting (named One-stage 1.20M) mixes Ds and Dg, which ensures consistency in diversity and avoids duplication. The model exclusively finetuned on general task Dg is referred to as Generalonly. Comparison results are shown in Table 4. Two-stage tuning framework shows superiority over one-stage, especially for 13B. Simply mixing the training data in one stage is prone to affecting the symbol-related tasks. Especially under the Symbol+Delegation setting, the two-stage framework witnesses 3∼6% advantages over the one-stage models. In the 13B model comparison, our two-stage framework consistently demonstrates superiority across symbolic tasks, general tasks, and Symbol+Delegation tasks. The incorporation of symbolic data yields a modest impact on the performances of general tasks. Compared with General-only, Symbol-LLMInstruct is optimized to largely enhance the symbol-centric capabilities. Meanwhile, it maintains the capability to address general NL-centric tasks without significant sacrifices (< 2%). 4.2 Alignment and Uniformity Motivated by (Wang and Isola, 2020; Gao et al., 2021b), we include Alignment and Uniformity metrics to delve into the factors contributing to the superiority of Symbol-LLM. Alignment measures the representation similarity within each symbolic form, while Uniformity quantifies the uniformity of all the symbolic representations. They are based on Eq. 3 and Eq. 4 respectively in Appendix L. The calculation results Figure 3: Visualization of Alignment-Uniformity. Both metrics are inversely related, which means a lower value indicates better performance. are visualized in Figure 3. Further, we extend the definition to measure the interrelations between any two symbolic forms, based on Eq. 5. Limited by space, we only include a part of the symbolic forms for illustration and present the results of 13B models in Figure 4 in Appendix L. The item-wise conclusions are listed as follows: Symbol-LLM optimizes symbol distinctiveness and overall expressiveness in the embedding space. From Fig. 3, compared with the LLaMA2-Chat models, Symbol-LLM series is optimized towards superior Alignment and Uniformity. It ensures the discernment of shared features within each symbolic form, simultaneously enhancing the overall information entropy. Specifically for the 7B model, the two-stage framework effectively maintains a balance of uniformity, preventing the collapse of the embedding space. Symbol-LLM excels at capturing symbolic interrelations. From Fig. 4, the LLaMA-2-Chat model exhibits significant representation sparsity between symbolic forms. Even under the same form (e.g., Bash, FOL), the features are scattered. On the contrary, Symbol-LLM largely enhances the perception of symbolic interrelations by (1) achieving better alignments between symbols (e.g., PythonAMR and CheBi-RX) and (2) pulling closer sample features within each symbolic form (e.g., FOL). 5 Related Works Large Language Models Plenty of recent efforts have been made to develop foundation language models (Zhao et al., 2023), which are expected to promote the subsequent applications, such as 13097 Models 7B Models 13B Models Symbolic General Symbol+Del. Avg. Symbolic General Symbol+Del. Avg. 
Symbol-LLM 71.88 44.30 52.54 56.24 74.34 48.40 60.45 61.06 One-stage 1.20M 70.38 45.24 47.27 54.30 70.59 48.29 53.99 57.62 ∆ (+1.50) (-0.94) (+5.27) (+1.94) (+3.75) (+0.11) (+6.46) (+3.44) One-stage 1.46M 72.75 44.44 49.31 55.50 73.71 46.59 52.67 57.66 ∆ (-0.87) (-0.14) (+3.13) (+0.74) (+0.63) (+1.81) (+7.78) (+3.40) General-only 28.66 46.21 28.17 34.35 31.35 49.72 31.49 37.52 ∆ (+43.22) (-1.91) (+24.37) (+11.89) (+42.99) (-1.32) (+28.96) (+23.54) Table 4: Comparison experiments. Avg. denotes the simple averaged performances on the symbolic tasks, general tasks, and Symbol+Delegation tasks. PDDL SQL s-Expre. SPARQL TOP NLMaps AMR CMD BASH RX Python FOL GQA Clevr Geo CheBi PDDL SQL s-Expre. SPARQL TOP NLMaps AMR CMD BASH RX Python FOL GQA Clevr Geo CheBi (a) LLaMA-2-Chat (13B) PDDL SQL s-Expre. SPARQL TOP NLMaps AMR CMD BASH RX Python FOL GQA Clevr Geo CheBi PDDL SQL s-Expre. SPARQL TOP NLMaps AMR CMD BASH RX Python FOL GQA Clevr Geo CheBi (b) Symbol-LLM (13B) Figure 4: Visualization of the alignment relations between symbols after binarization. Dark blue denotes a close relation between two symbols in the representation. Limited by space, we only showcase 13B models. More illustrations refer to Appendix L. AI agents (Wang et al., 2023a; Sun et al., 2023; Wu et al., 2024; Cheng et al., 2024). These works on LLMs are universally categorized into closedsource and open-source models. Close-source LLMs, represented by GPT-4 (OpenAI, 2023), Claude, PaLM (Chowdhery et al., 2023), have greatly shaped our daily life through NL-centric interactions. However, their closed-source and black-box properties limit further optimization. Under such circumstances, open-source LLMs (Zeng et al., 2023; Jiang et al., 2023; Touvron et al., 2023b) receive significant attention because of their tunable and small-scale properties. However, current attempts on these LLMs mainly explore NLcentric abilities, which treats NL as the interface to express knowledge and achieve interactive reasoning. In contrast, our work focuses on improving the symbol-centric capabilities of open-source LLM, which leads to a balanced symbol-centric and NLcentric foundational LLM. Instruction Tuning To make LLMs capable of following human instructions, instruction finetuning (Zhang et al., 2023) is widely adopted. Meanwhile, self-instruct methods (Wang et al., 2023c; Xu et al., 2023a; Ouyang et al., 2022) have been proposed to generate diverse and abundant instruction data, based on a small collection of seed instructions. In our work, we follow previous instruction tuning strategies in both tuning stages. For symbolic tasks, we construct instructions, covering the task and symbolic descriptions. For general tasks, we sample the instruction-tuning datasets (e.g., Flan (Longpre et al., 2023)). Symbol-centric Scenarios LLMs have dominated plenty of NL-centric tasks (Rajpurkar et al., 2016; Talmor et al., 2019; Nallapati et al., 2016), where NL is leveraged as the core interface for interaction, planning, and reasoning. But world knowledge is not purely represented by NL. In fact, symbolic language is also of great significance in expressing abstract world knowledge (Edwards et al., 2022; Bevilacqua et al., 2021; Li and Srikumar, 2019) and leveraging external tools (Gao et al., 2023; Liu et al., 2023; Pan et al., 2023). Some concurrent works (Xu et al., 2023b; Yang et al., 2023) shift focus to the specific forms of symbols (e.g., code), either through prompting off-the-shelf LLMs or tuning on open-source LLMs. 
These efforts fail to lay a solid symbolic foundation, which is expected to grasp the interrelations among various symbolic forms. In our work, we explore the possibility of treating symbols in a unified manner and lay foundations to build balanced symbol and NL interfaces. 6 Conclusion and Broader Impact This work pioneeringly proposes to enhance the LLM capability in symbol-centric tasks while preserving the performances on general tasks, leading to balanced symbol and NL interfaces. To address the challenges of capturing symbol interrelations and maintaining a balance in general abilities, we tackle the problem from both data and framework perspectives. Data-wise, we include a collection of 13098 34 text-to-symbol tasks to systematically explore underlying symbol relations. Framework-wise, we implement SFT in a novel two-stage manner to mitigate catastrophic forgetting. Extensive experiments across three task settings (i.e., symbolic tasks, general tasks, and symbol+delegation tasks) demonstrate Symbol-LLM’s superiority in harmonizing symbol- and NL-centric capabilities. Moreover, all models and resources will be made public to facilitate a broader range of research. Notably, the exploration of the role of symbolic language is not limited to a single modality or the mentioned scenario (e.g., reasoning). It is worthy of extending the neural-symbolic paradigms to other multi-modal applications, such as visual web navigation (Cheng et al., 2024), 3D understanding (Zhang et al., 2024), and embodied agents (Bigazzi et al., 2023). Acknowledgement This work is supported by Shanghai Artificial Intelligence Laboratory. And it is also supported by National Key Research and Development Program of China (2022YFC3303600), National Natural Science Foundation of China (62137002, 62293550, 62293553, 62293554, 61937001, 62250066, 62176209, 62176207, and 62192781), "LENOVO-XJTU" Intelligent Industry Joint Laboratory Project, Natural Science Basic Research Program of Shaanxi (2023-JC-YB-593), the Youth Innovation Team of Shaanxi Universities, XJTU Teaching Reform Research Project "Acquisition Learning Based on Knowledge Forest". Limitations The insight of Symbol-LLM is to build a balanced symbol- and NL-centric interface for interaction and reasoning. We achieve it from both data (comprehensive symbolic collection to open-source) and framework (two-stage tuning to reduce forgetting) perspectives. It is expected to expand the scope of cutting-edge open-source LLMs largely and lay a new foundation for future work. Though plenty of experiments covering three settings are conducted, there still exist the following two directions for exploration: (1) The model’s ability to self-correct or interact with environmental feedback in symbolic scenarios. It is also key to building language agents from language models. (2) Model size scaling to 70B or larger. As widely recognized, 7B or 13B LLMs are still not sufficient to build excellent language agents, especially when complex interaction is involved. Thus, it needs further exploration for the size scaling to the larger ones. References Oshin Agarwal, Heming Ge, Siamak Shakeri, and Rami Al-Rfou. 2021. Knowledge graph based synthetic corpus generation for knowledge-enhanced language model pre-training. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 3554– 3565. 
Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th linguistic annotation workshop and interoperability with discourse, pages 178–186. Michele Bevilacqua, Rexhina Blloshmi, and Roberto Navigli. 2021. One SPRING to rule them both: Symmetric AMR semantic parsing and generation without a complex pipeline. In Thirty-Fifth AAAI Conference on Artificial Intelligence, pages 12564–12573. Roberto Bigazzi, Marcella Cornia, Silvia Cascianelli, Lorenzo Baraldi, and Rita Cucchiara. 2023. Embodied agents for efficient exploration and smart scene description. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pages 6057– 6064. IEEE. Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems (NeurIPS). Sahil Chaudhary. 2023. Code alpaca: An instructionfollowing llama model for code generation. https: //github.com/sahil280114/codealpaca. Xilun Chen, Asish Ghoshal, Yashar Mehdad, Luke Zettlemoyer, and Sonal Gupta. 2020. Low-resource domain adaptation for compositional task-oriented semantic parsing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5090–5100. 13099 Kanzhi Cheng, Qiushi Sun, Yougang Chu, Fangzhi Xu, Yantao Li, Jianbing Zhang, and Zhiyong Wu. 2024. Seeclick: Harnessing gui grounding for advanced visual gui agents. arXiv preprint arXiv:2401.10935. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2023. Palm: Scaling language modeling with pathways. J. Mach. Learn. Res., 24:240:1–240:113. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. CoRR, abs/2110.14168. OpenCompass Contributors. 2023. Opencompass: A universal evaluation platform for foundation models. https://github.com/open-compass/ opencompass. Carl Edwards, Tuan Lai, Kevin Ros, Garrett Honke, Kyunghyun Cho, and Heng Ji. 2022. Translation between molecules and natural language. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 375–413. Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. 2021a. A framework for few-shot language model evaluation. Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. 
PAL: program-aided language models. In International Conference on Machine Learning (ICML), volume 202, pages 10764–10799. PMLR. Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021b. Simcse: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894–6910. Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. Creating training corpora for NLG micro-planners. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), pages 179–188. Zhibin Gou, Zhihong Shao, Yeyun Gong, Yujiu Yang, Minlie Huang, Nan Duan, Weizhu Chen, et al. 2023. Tora: A tool-integrated reasoning agent for mathematical problem solving. arXiv preprint arXiv:2309.17452. Simeng Han, Hailey Schoelkopf, Yilun Zhao, Zhenting Qi, Martin Riddell, Luke Benson, Lucy Sun, Ekaterina Zubova, Yujie Qiao, Matthew Burtell, David Peng, Jonathan Fan, Yixin Liu, Brian Wong, Malcolm Sailor, Ansong Ni, Linyong Nan, Jungo Kasai, Tao Yu, Rui Zhang, Shafiq Joty, Alexander R. Fabbri, Wojciech Kryscinski, Xi Victoria Lin, Caiming Xiong, and Dragomir Radev. 2022. Folio: Natural language reasoning with first-order logic. CoRR, abs/2209.00840. Malte Helmert. 2006. The fast downward planning system. J. Artif. Int. Res., 26(1):191–246. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021a. Measuring massive multitask language understanding. In Proceedings of the International Conference on Learning Representations (ICLR). Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021b. Measuring mathematical problem solving with the MATH dataset. In Proceedings of the Neural Information Processing Systems (NeurIPS). Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. 2014. Learning to solve arithmetic word problems with verb categorization. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 523–533. Drew A. Hudson and Christopher D. Manning. 2019. GQA: A new dataset for real-world visual reasoning and compositional question answering. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6700–6709. Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. 2019. Codesearchnet challenge: Evaluating the state of semantic code search. CoRR, abs/1909.09436. Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral 7b. CoRR, abs/2310.06825. Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C. Lawrence Zitnick, and Ross B. Girshick. 2017. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1988– 1997. James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, et al. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–3526. Kevin Knight and et al. 2017. Abstract meaning representation (amr) annotation release 2.0. Web Download. LDC2017T10. 13100 Kevin Knight and et al. 2020. 
Abstract meaning representation (amr) annotation release 3.0. Web Download. LDC2020T02. Brenden M. Lake and Marco Baroni. 2018. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In Proceedings of the 35th International Conference on Machine Learning (ICML), volume 80, pages 2879– 2888. Carolin Lawrence and Stefan Riezler. 2018. Improving a neural semantic parser by counterfactual learning from human bandit feedback. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), pages 1820–1830. Haoran Li, Abhinav Arora, Shuohui Chen, Anchit Gupta, Sonal Gupta, and Yashar Mehdad. 2021. MTOP: A comprehensive multilingual task-oriented semantic parsing benchmark. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 2950–2962. Tao Li and Vivek Srikumar. 2019. Augmenting neural networks with first-order logic. In Proceedings of the 57th Conference of the Association for Computational Linguistics (ACL), pages 292–302. Xi Victoria Lin, Chenglong Wang, Luke Zettlemoyer, and Michael D. Ernst. 2018. Nl2bash: A corpus and semantic parser for natural language interface to the linux operating system. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation. Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word problems. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), pages 158–167. Bo Liu, Yuqian Jiang, Xiaohan Zhang, Qiang Liu, Shiqi Zhang, Joydeep Biswas, and Peter Stone. 2023. LLM+P: empowering large language models with optimal planning proficiency. CoRR, abs/2304.11477. Nicholas Locascio, Karthik Narasimhan, Eduardo DeLeon, Nate Kushman, and Regina Barzilay. 2016. Neural generation of regular expressions from natural language with minimal domain knowledge. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1918–1923. Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V Le, Barret Zoph, Jason Wei, et al. 2023. The flan collection: Designing data and methods for effective instruction tuning. In International Conference on Machine Learning (ICML), volume 202, pages 22631–22648. PMLR. Pan Lu, Ran Gong, Shibiao Jiang, Liang Qiu, Siyuan Huang, Xiaodan Liang, and Song-Chun Zhu. 2021. Inter-gps: Interpretable geometry problem solving with formal language and symbolic reasoning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL/IJCNLP), pages 6774–6786. Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. 2023. Wizardmath: Empowering mathematical reasoning for large language models via reinforced evol-instruct. arXiv preprint arXiv:2308.09583. Shen-Yun Miao, Chao-Chun Liang, and Keh-Yih Su. 2020. A diverse corpus for evaluating and developing english math word problem solvers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), pages 975–984. Nandana Mihindukulasooriya, Sanju Tiwari, Carlos F Enguix, and Kusum Lata. 2023. Text2kgbench: A benchmark for ontology-driven knowledge graph generation from text. 
In International Semantic Web Conference (ISWC), volume 14266, pages 247–265. Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence rnns and beyond. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning (CoNLL), pages 280–290. OpenAI. 2023. Gpt-4 technical report. CoRR, abs/2303.08774. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744. Liangming Pan, Alon Albalak, Xinyi Wang, and William Yang Wang. 2023. Logic-lm: Empowering large language models with symbolic solvers for faithful logical reasoning. CoRR, abs/2305.12295. Arkil Patel, Satwik Bhattamishra, and Navin Goyal. 2021. Are nlp models really able to solve simple math word problems? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 2080– 2094. Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, et al. 2023. Toolllm: Facilitating large language models to master 16000+ real-world apis. CoRR, abs/2307.16789. 13101 Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2383– 2392. Anna Rogers, Matt Gardner, and Isabelle Augenstein. 2023. QA dataset explosion: A taxonomy of NLP resources for question answering and reading comprehension. ACM Computing Surveys, 55(10):197:1– 197:45. Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1743–1752. Subhro Roy, Tim Vieira, and Dan Roth. 2015. Reasoning about quantities in natural language. Transactions of the Association for Computational Linguistics, 3:1–13. Abulhair Saparov and He He. 2022. Language models are greedy reasoners: A systematic formal analysis of chain-of-thought. In The Eleventh International Conference on Learning Representations (ICLR). Qiushi Sun, Zhangyue Yin, Xiang Li, Zhiyong Wu, Xipeng Qiu, and Lingpeng Kong. 2023. Corex: Pushing the boundaries of complex reasoning through multi-model collaboration. Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V. Le, Ed Chi, Denny Zhou, and Jason Wei. 2023. Challenging big-bench tasks and whether chain-of-thought can solve them. In Findings of the Association for Computational Linguistics, pages 13003–13051. Oyvind Tafjord, Bhavana Dalvi, and Peter Clark. 2021. Proofwriter: Generating implications, proofs, and abductive statements over natural language. In Findings of the Association for Computational Linguistics, pages 3621–3634. Alon Talmor and Jonathan Berant. 2018. The web as a knowledge-base for answering complex questions. 
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 641–651. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. Commonsenseqa: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 4149–4158. Jidong Tian, Yitian Li, Wenqing Chen, Liqiang Xiao, Hao He, and Yaohui Jin. 2021. Diagnosing the firstorder logical reasoning ability through logicnli. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3738–3747. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023a. Llama: Open and efficient foundation language models. CoRR, abs/2302.13971. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. CoRR, abs/2307.09288. Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, et al. 2023a. A survey on large language model based autonomous agents. CoRR, abs/2308.11432. Tongzhou Wang and Phillip Isola. 2020. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In International Conference on Machine Learning (ICML), pages 9929–9939. PMLR. Yizhong Wang, Hamish Ivison, Pradeep Dasigi, Jack Hessel, Tushar Khot, Khyathi Raghavi Chandu, David Wadden, Kelsey MacMillan, Noah A Smith, Iz Beltagy, et al. 2023b. How far can camels go? exploring the state of instruction tuning on open resources. CoRR, abs/2306.04751. Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2023c. Self-instruct: Aligning language models with self-generated instructions. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL), pages 13484– 13508. Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2022a. Finetuned language models are zero-shot learners. In The Tenth International Conference on Learning Representations (ICLR). Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022b. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems (NeurIPS), volume 35, pages 24824–24837. Zhiyong Wu, Chengcheng Han, Zichen Ding, Zhenmin Weng, Zhoumianze Liu, Shunyu Yao, Tao Yu, and 13102 Lingpeng Kong. 2024. Os-copilot: Towards generalist computer agents with self-improvement. arXiv preprint arXiv:2402.07456. Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I Wang, et al. 2022. Unifiedskg: Unifying and multi-tasking structured knowledge grounding with text-to-text language models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 602–631. 
Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. 2023a. Wizardlm: Empowering large language models to follow complex instructions. CoRR, abs/2304.12244. Yiheng Xu, Hongjin Su, Chen Xing, Boyu Mi, Qian Liu, Weijia Shi, Binyuan Hui, Fan Zhou, Yitao Liu, Tianbao Xie, et al. 2023b. Lemur: Harmonizing natural language and code for language agents. CoRR, abs/2310.06830. Yuan Yang, Siheng Xiong, Ali Payani, Ehsan Shareghi, and Faramarz Fekri. 2023. Harnessing the power of large language models for natural language to firstorder logic translation. CoRR, abs/2305.15541. Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan. 2023. Tree of thoughts: Deliberate problem solving with large language models. CoRR, abs/2305.10601. Wen-tau Yih, Matthew Richardson, Christopher Meek, Ming-Wei Chang, and Jina Suh. 2016. The value of semantic parse labeling for knowledge base question answering. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL). Tao Yu, Rui Zhang, Heyang Er, Suyi Li, Eric Xue, Bo Pang, Xi Victoria Lin, Yi Chern Tan, Tianze Shi, Zihan Li, Youxuan Jiang, Michihiro Yasunaga, Sungrok Shim, Tao Chen, Alexander R. Fabbri, Zifan Li, Luyao Chen, Yuwen Zhang, Shreya Dixit, Vincent Zhang, Caiming Xiong, Richard Socher, Walter S. Lasecki, and Dragomir R. Radev. 2019a. Cosql: A conversational text-to-sql challenge towards crossdomain natural language interfaces to databases. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1962–1979. Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir R. Radev. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3911– 3921. Tao Yu, Rui Zhang, Michihiro Yasunaga, Yi Chern Tan, Xi Victoria Lin, Suyi Li, Heyang Er, Irene Li, Bo Pang, Tao Chen, Emily Ji, Shreya Dixit, David Proctor, Sungrok Shim, Jonathan Kraft, Vincent Zhang, Caiming Xiong, Richard Socher, and Dragomir R. Radev. 2019b. Sparc: Cross-domain semantic parsing in context. In Proceedings of the 57th Conference of the Association for Computational Linguistics (ACL), pages 4511–4523. Fei Yuan, Yinquan Lu, Wenhao Zhu, Lingpeng Kong, Lei Li, Yu Qiao, and Jingjing Xu. 2023. Lego-MT: Learning detachable models for massively multilingual machine translation. In Findings of the Association for Computational Linguistics: ACL 2023, pages 11518–11533, Toronto, Canada. Association for Computational Linguistics. Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. 2023. Mammoth: Building math generalist models through hybrid instruction tuning. arXiv preprint arXiv:2309.05653. Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et al. 2023. GLM-130B: an open bilingual pre-trained model. In The Eleventh International Conference on Learning Representations (ICLR). Shengyu Zhang, Linfeng Dong, Xiaoya Li, Sen Zhang, Xiaofei Sun, Shuhe Wang, Jiwei Li, Runyi Hu, Tianwei Zhang, Fei Wu, et al. 2023. Instruction tuning for large language models: A survey. CoRR, abs/2308.10792. 
Zhihao Zhang, Shengcao Cao, and Yu-Xiong Wang. 2024. Tamm: Triadapter multi-modal learning for 3d shape understanding. arXiv preprint arXiv:2402.18490. Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023. A survey of large language models. CoRR, abs/2303.18223. Rui Zheng, Bao Rong, Yuhao Zhou, Di Liang, Sirui Wang, Wei Wu, Tao Gui, Qi Zhang, and Xuanjing Huang. 2022. Robust lottery tickets for pre-trained language models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2211–2224, Dublin, Ireland. Association for Computational Linguistics. Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc V. Le, and Ed H. Chi. 2023. Least-to-most prompting enables complex reasoning in large language models. In The Eleventh International Conference on Learning Representations (ICLR). 13103 A Key Contributions and Values Beyond the proposed data collection strategy and the two-stage tuning strategy, we would also like to emphasize the distinctive value of our work: (1) Insights on synergies between symbolic and natural language for LLMs. Our motivation is sourced from the lack of symbolic foundation in off-the-shelf LLMs, which heavily focus on the NL capabilities. Equipping LLMs with a symbolic foundation brings (i) diverse expression of knowledge; (ii) great power in controlling robots and leveraging tools. Also, the synergies between symbolic and natural language can offer potential insights into transforming language models into language agents, which cover extensive code interaction and tool-using scenarios. (2) Thorough evaluations. Quite different from previous works which merely focus on specific symbols and specific tasks, our work builds comprehensive evaluation studies under nearly 200 tasks. The evaluations are mainly divided into three folds: (i) symbolic tasks. (ii) general NL tasks. (iii) symbol+delegation tasks. (3) Open-sourcing effort for the LLM community. Large numbers of recent works are based on closed-source LLMs (e.g., GPT-4). However, such methods are high-cost and inefficient, limiting custom training. Therefore, our efforts to open-source Symbol-LLM series as well as the comprehensive data collection are valuable to the community. B Insights behind Symbol-LLM This part provides key insights on (1) two-stage tuning strategy; (2) data scaling; and (3) intrinsic superiority. Insight 1: Two-stage strategy. Despite their strong natural language processing abilities, opensource LLMs lack symbol-centric capabilities. Naturally, we think about how LMs grasp the foundation in addressing NL? The rough pipeline of training LMs can be in two stages. The first stage focuses on pretraining with large-scale corpus and language modeling objectives. This stage disregards high-level skills (e.g., instruction-following and complex reasoning) and solely focuses on establishing the natural language foundation. In the second stage, the potential for instructionfollowing, and aligning human preference is enhanced through finetuning (or preference optimization). 10% 40% 70% 100% Training Data Ratio 62 64 66 68 70 72 74 76 Performances (%) 64.39 69.07 69.33 73.02 66.08 70.45 70.77 73.67 7B Models 13B Models Figure 5: Training data scaling. Motivated by it, we adopt the two-stage training strategy. 
Initially, we lay a symbolic foundation, known as the Injection stage, where we solely inject symbolic knowledge into LLMs. Then, in the Infusion stage, we balance the NL-centric abilities and excite the symbolic ability through further finetuning on the mixture of datasets. In fact, several concurrent works share similar paradigms with ours. For example, CodeLLaMAInstruct is based on LLaMA-2-Chat, utilizing largescale code data for pretraining and NL data for optimizing NL capability. CodeGeeX2 and Lemur are also similar works, primarily effective in coderelated scenarios. Symbol-LLM extends the scope to symbolic language (as we stated, code is one of the specific symbols). Insight 2: Data scaling. One of the interesting insights behind Symbol-LLM is to explore how much data is sufficient to establish a symbolic foundation? Based on the previous works (Zheng et al., 2022), mastering new tasks in large models primarily entails modifications to high-layer parameters. Therefore, we employ the KolmogorovSmirnov Test to reflect changes in LLM parameters, with the training data scaling. In Figure 5, we present the average performances on 34 symbolic tasks with data scaling. Specifically, we sample Ds in the Injection stage at a ratio of 10%, 40%, 70%, and 100%. As observed, when symbolic training data surpasses 70% (>620K), model performance breaks through the bottleneck of saturation. Accordingly, the parameter changes layerwise are presented in Figure 6. When training data scales 13104 10% 40% 70% 100% Training Data Ratio Embed Layer 4 9 14 19 24 #Layer Figure 6: KS-Test with data scaling. The scatter denotes the significant parameter changes (p-value<0.05). The darker color means a more significant change. from 70% to 100%, significant parameter changes emerge in higher layers. It is expected that scaling the symbolic training data can lay a new foundation of symbolic knowledge. Insight 3: Intrinsic superiority. It is insightful to reveal the intrinsic superiority of Symbol-LLM over symbolic tasks. That is to say, What is behind the superior performances of Symbol-LLM ? We study it from Alignment and Uniformity views. Details are discussed in Section 4.2 and Appendix L. C Details of Data Collection In this section, detailed information on the data collection is attached, including both text-to-symbol task collection, and general task collection. C.1 Text-to-symbol Task Collection We provide a detailed illustration of the symbolic task collection, which consists of 34 different textto-symbol generation tasks. They are categorized into 12 domains in Table 5. Note that the symbolic task collection includes but is not limited to the listed 34 tasks. To expand the diversity, we also consider some similar tasks. For example, we include some domain-specific NLto-SQL tasks to provide diverse schema. The data is only used at the tuning stage but is not for a test. Thus, the whole collection (only count training samples) reaches ∼880K samples. All of them are leveraged in the first SFT stage. Also, it is mentioned above that we sample parts of symbolic task collection in the second stage to reduce forgetting. For it, we uniformly sample each task domain with a ratio of 0.3, leading to a sampled collection of ∼260K. C.2 General Task Collection In the second tuning stage, we include a collection of general instruction-tuning data to keep the LLM capability in some NL-centric settings and further improve the instruction-following capability of Symbol-LLM. 
The general data collection contains ∼570K samples, which are sourced from the following three parts: (1) Sampled Flan collection (Longpre et al., 2023) of 150K samples. We obtain the collection directly following Tulu (Wang et al., 2023b). (2) Code Alpaca collection (Chaudhary, 2023) of 20K samples. In fact, this collection is not in an NL-to-NL form as we expected. However, it stresses much on the instruction-following capabilities, which may help enhance the general ability of LLMs. Also, it is expected to act as the bridge between NL data and symbolic form (i.e., code in this case). (3) Sampled WizardLM collection (Xu et al., 2023a) of 143K samples. To further expand the diversity of our instruction-tuning collection, we leverage the evol-data from WizardLM. D Data Format To support the instruction tuning, each piece of data i in the training collection contains three parts, i.e., instruction si, input xi, and output yi. During the training process, instruction si and input text xi are concatenated as the whole input sequence. The model is optimized to generate output yi. One example in the FOLIO dataset is as follows: [Instruction] Transform the natural language sentence into first-order logic forms. [Input] All people who regularly drink coffee are dependent on caffeine. [Output] ∀x (Drinks(x) →Dependent(x)) In the implementation, we rewrite the instruction for each sample by prompting GPT-4, keeping the diversity of the instruction. E Test Datasets and Benchmarks Our main experiments are conducted on both text-to-symbol tasks and general NL-centric tasks. Then this work also extends the scope to Symbol+Delegation setting, which uses LLM to generate symbolic representation and delegate the reasoning process to the external solver. Such a setting satisfies our expectation to build a better symbol interface. 13105 Domains Tasks # Train # Test Sampled? Access Few-shot? Original Source Planning Blocksworld 37,600 20 GPT-4+Evol ✓ Liu et al. (2023) Termes 20 GPT-4+Evol ✓ Liu et al. (2023) Floortile 20 GPT-4+Evol ✓ Liu et al. (2023) Grippers 20 GPT-4+Evol ✓ Liu et al. (2023) SQL Spider 109,582 1,034 Direct Yu et al. (2018) Sparc 1,625 Direct Yu et al. (2019b) Cosql 1,300 Direct Yu et al. (2019a) KG / DB WebQSP 3,241 1,639 Direct ✓ Yih et al. (2016) GrailQA 53,222 6,463 Direct ✓ Rogers et al. (2023) CompWebQ 37,444 3,531 Direct ✓ Talmor and Berant (2018) AMR AMR3.0 68,778 1,898 Direct ✓ Knight and et al. (2020) AMR2.0 45,436 1,371 Direct ✓ Knight and et al. (2017) BioAMR 7,150 500 Direct ✓ Banarescu et al. (2013) Ontology Tekgen 11,219 4,062 Direct ✓ Agarwal et al. (2021) Webnlg 3,415 2,014 Direct ✓ Gardent et al. (2017) API MTOP 18,784 500 ✓ Direct ✓ Li et al. (2021) TOPv2 149,696 500 ✓ Direct ✓ Chen et al. (2020) NLmaps 21,657 10,594 Direct ✓ Lawrence and Riezler (2018) Command SCAN 25,990 4,182 Direct+Evol ✓ Lake and Baroni (2018) Code NL2BASH 11,971 746 Direct ✓ Lin et al. (2018) NL2RX 10,808 1,000 ✓ Direct ✓ Locascio et al. (2016) NL2Python 12,005 500 ✓ Direct ✓ Husain et al. (2019) NL2Java 11,978 500 ✓ Direct ✓ Husain et al. (2019) NL2Go 12,001 500 ✓ Direct ✓ Husain et al. (2019) FOL FOLIO 2,006 500 ✓ Direct ✓ Han et al. (2022) MALLS 39,626 1,000 ✓ Direct ✓ Yang et al. (2023) LogicNLI 11,559 2,373 ✓ Direct ✓ Tian et al. (2021) Visual GQA 36,086 2,000 ✓ Direct ✓ Hudson and Manning (2019) CLEVR 47,081 2,000 ✓ Direct+Evol ✓ Johnson et al. (2017) Geometry3k 2,864 601 Direct ✓ Lu et al. (2021) Math GSM8K-Code 8,453 100 ✓ GPT-4 ✓ Cobbe et al. (2021) AQUA-Code 31,144 100 ✓ GPT-4 ✓ Ling et al. 
(2017) MATH-Code 4,426 100 ✓ GPT-4 ✓ Hendrycks et al. (2021b) AI4Science CheBi-20 35,629 3,300 Direct ✓ Edwards et al. (2022) Table 5: Detailed illustrations of 34 text-to-symbol generation tasks. # Train and # Test represent the number of training and test samples respectively. Sampled? means whether the test split is sampled from the original dataset. Access is related to how we obtain the data, including directly from off-the-shelf benchmarks (Direct), prompting GPT-4 (GPT-4), and applying the symbol-evol strategy (Evol). Few-shot? denotes whether we include few-shot samples in the data. Original Source is the citation of the original paper. E.1 Tests in Text-to-Symbol Generation Tasks Planning These tasks involve controlling the robot to finish some tasks in the defined environments. The input is the natural language description of the initial states and the final goals, while the symbolic output in the Planning Domain Definition Language (PDDL) form can be executed by the symbolic planner. For the four settings, Blocksworld involves stacking blocks in order. Termes involves moving blocks to a specific position in the grid. Floortile is to color the floors with the instructions. Grippers is to gripper and move balls from room to room. We use the BLEU metric to measure the correctness of generated forms. SQL They cover three representative Text-toSQL datasets, Spider, Sparc and Cosql. Given the schema and the natural language query, the output is the corresponding SQL. We use the exact match as the metric. KG / DB It is similar to the Text-to-SQL tasks, which require generating the symbolic form of the query given the natural language question and schema. But WebQSP and GrailQA leverage the sExpression form while CompWebQ uses SPARQL format. We use the F1 metric for WebQSP and the exact match metric for GrailQA and CompWebQ, following previous work (Xie et al., 2022). AMR They are classical semantic parsing tasks, where the input sentence is parsed into an abstract syntax graph. We use the Smatch metric to measure the generated form on AMR3.0, AMR2.0, and BioAMR datasets. Ontology It focuses on the domain of knowledge graph construction. Given the ontology (i.e, predefined relations or entities) and natural language sentence, it is required to output the triples. We employ F1 scores introduced in (Mihindukulasooriya 13106 et al., 2023) to measure the performances on Tekgen and WebNLG. API These tasks require the output of the API calling form based on the natural language query. MTOP and TOPv2 cover various domains like controlling the music player, and setting alarms. NLMAPS focuses on calling the maps. Command SCAN involves outputting action sequences based on the commands to control robots. The exact match metric is used to measure the generation accuracy. Code It involves five representative programming languages, including Bash, Regular Expression, Python, Java and GO. They are tested with the BLEU metric. FOL It covers three datasets in NL-to-FOL domain, that is FOLIO, MALLS and LogicNLI. Logic Equivalence (LE) is leveraged as the metric, following (Yang et al., 2023). Visual Three multi-modal question answering datasets GQA, Clevr and Geometry3K are included for test. In these scenarios, we only focus on the natural language parts and transform the natural language query into function symbol forms. The exact match metric is used to measure the performances. 
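Several of the task families above are scored with exact match or BLEU. As a rough illustration only (a minimal sketch; the whitespace normalization and the use of the sacrebleu package are our assumptions, not the paper's exact evaluation scripts):

import sacrebleu

def exact_match(preds, golds):
    # Fraction of predictions identical to the reference after simple
    # whitespace normalization (the normalization step is an assumption).
    norm = lambda s: " ".join(s.strip().split())
    return sum(norm(p) == norm(g) for p, g in zip(preds, golds)) / len(golds)

def corpus_bleu(preds, golds):
    # Corpus-level BLEU over generated symbolic forms (e.g., PDDL or code).
    return sacrebleu.corpus_bleu(preds, [golds]).score

preds = ["SELECT name FROM singer WHERE age > 20"]
golds = ["SELECT name FROM singer WHERE age > 20"]
print(exact_match(preds, golds), corpus_bleu(preds, golds))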
Math As we discussed, transforming the natural language question into Python code is one of the faithful ways to solve math problems. Hence, we measure the accuracy of the generated Python code with the BLEU metric. The ground-truth code is derived by prompting GPT-4, where the ones that can execute the correct answer are preserved. AI4Science In CheBi dataset, the model is required to generate the correct molecular formula given the natural language descriptions. Exact match metric is used for measure. E.2 Tests in General Tasks MMLU It covers 57 tasks including different subjects STEM, humanities, social sciences, and others. Our evaluations are based on (Hendrycks et al., 2021a). Big Bench Hard The benchmark is designed for testing LLM capability in challenging reasoning tasks. We select 21 tasks in BBH for the test, based on Open-LLM-Leaderboard4. E.3 Tests in Symbol+Delegation Setting Math Reasoning We generate Python code with Symbol-LLM and use Python interpreter as the delegation. The datasets include GSM8K (Cobbe et al., 2021), MATH (Hendrycks et al., 2021b), GSM-Hard (Gao et al., 2023), SVAMP (Patel et al., 2021), Asdiv (Miao et al., 2020), AddSub (Hosseini et al., 2014), SingleEQ (Roy et al., 2015), SingleOP (Roy et al., 2015) and MultiArith (Roy and Roth, 2015). The former two datasets are indomain, while the latter seven datasets are under OOD settings. Note that MATH dataset includes various ground-truth answer formats (e.g., with diverse units), thus it is difficult to parse the correct values to evaluate the LLMs. Hence, we use manuallycrafted templates to derive the ground-truth values, leading to around 4,000 samples for test. Symbolic Reasoning Same as math reasoning, we use Python code + Python interpreter to solve the problems. Two OOD tasks are used for test, i.e., Colored Objects (Suzgun et al., 2023) and Last Letter Concatenation 5. Logical Reasoning We take three representative datasets into consideration, i.e., FOLIO (Han et al., 2022), ProofWriter (Tafjord et al., 2021) and ProntoQA (Saparov and He, 2022). We follow the strategy proposed in (Pan et al., 2023) to conduct the reasoning. Detailedly, for FOLIO, we generate FOL representations first and delegate the solution to the FOL solver. For ProofWriter and ProntoQA tasks, we generate logic programming language and delegate the reasoning to Pyke expert system. Robotic Planning For robotic planning tasks, we transform the natural language description into PDDL and use fastdownward (Helmert, 2006) as the symbolic solver. Besides the four datasets mentioned in text-to-symbol generation tasks, we also employ two OOD datasets into account, i.e. Barman and Tyreworld. Visual Question Answering We further extend the application scope of Symbol-LLM to the multimodal domain and test on Geometry3K dataset (Lu 4https://huggingface.co/spaces/HuggingFaceH4/ open_llm_leaderboard 5Test data is based on: https://huggingface.co/ datasets/ChilleD/LastLetterConcat 13107 et al., 2021) for illustration. But we only concentrate on the processing of the NL part. Detailed, we parse the natural language sentence into logic forms and rely on the baseline method (Lu et al., 2021) to conduct the multi-modal reasoning. F Experimental Settings In the implementation, this work fully finetunes the models with 8 * A100 (80GB) with maximum sequence length set to 4,096. The tuning process is optimized and accelerated by deepspeed zero3 and FlashAttention2. 
We leverage the AdamW optimizer with a learning rate of 2e-5 for both Injection and Infusion stages. The learning rate schedular is set to Linear. The epoch number is set to 1 for both stages. In the Injection stage, the model weights are initialized from LLaMA-2-Chat and the tuned model is named Symbol-LLMBase. In the Infusion stage, we initialize the model from Symbol-LLMBase and obtain Symbol-LLMInstruct at last. These settings are consistent for both 7B and 13B variants. The inference process is conducted under a single GPU of A100 (80GB) with greedy search (beam size=1). For a comprehensive evaluation, we include the following strong baselines. They are categorized into Close-source and Open-source ones: Close-source Baselines • GPT-3.5 We access OpenAI API to call the model. Specifically, GPT-3.5-turbo version is employed for evaluation across a wide range of tasks. • Claude-1 We access Anthropic API to call the model. We select Claude-instant-1.2 version for evaluation. Open-source Baselines • LLaMA-2-Chat Since Symbol-LLM is initialized from LLaMA-2-Chat, we include it as the baseline. In general, LLaMA-2-Chat series is regarded as an excellent NL-centric interface for interaction and reasoning, which exhibits great performance on vast NL tasks. • Single SFT We conduct SFT on LLaMA-2Chat models for tasks in one specific domain. The obtained models can fully overfit the single domain, thus serving as a strong baseline for comparison. • CodeLLaMA-Instruct Based on the origin LLaMA-2 models, the CodeLLaMA series is continually pretrained and finetuned with code data. Considering code is one of the specific symbols in our work, we include it as one of the strong baselines. For balanced capabilities in general tasks, we leverage CodeLLaMAInstruct for evaluations. G Overall Performances Figure 7 presents the overall performance comparison among baseline models. It intuitively demonstrates the obvious advantages of Symbol-LLM on the wide range of tasks. Also, it supports our claim to make Symbol-LLM a balanced foundation LLM on symbols and NL. H Results on OOD Symbolic Tasks We supplement the experiments and compare the performances on 12 new OOD symbolic tasks. These OOD tasks can be divided into two folds: (i) novel tasks with seen symbolic forms (i.e., outof-distribution); (ii) novel symbolic forms (i.e., outof-domain). The results are presented in Table 6. From the results, Symbol-LLM shows consistent and significant superiority, compared with LLaMA2-Chat. I Results on Extensive General Tasks In Table 7, we select extensive general tasks for comparison. According to OpenCompass (Contributors, 2023), these tasks are divided into several categories, covering Examinations, Knowledge, Understanding and Reasoning. Considering the computation cost, we only report the performances of 7B models. The major takeaways are as follows: Symbol-LLM demonstrates better overall performances compared with LLaMA-2-Chat. Generally speaking, Symbol-LLM wins more of the tasks than LLaMA-2-Chat. Such superiority is consistent with the findings in MMLU and BBH benchmarks in Table 2. It illustrates that SymbolLLM can serve as a solid foundational model, significantly enhancing its symbolic capabilities while maintaining its generality. Optimization empowers Symbol-LLM with improved understanding and reasoning abilities. Among the four task categories, Symbol-LLM is 13108 Symbolic GeneralSymbol+Del. Avr. 
0 10 20 30 40 50 60 70 80 90 Performance (%) 32.27 56.84 56.61 48.57 25.04 47.01 45.94 39.33 GPT-3.5-turbo Claude-1 (a) Close-source Symbolic General Symbol+Del. Avr. 0 10 20 30 40 50 60 70 80 90 22.59 40.40 16.71 26.57 10.89 38.63 14.52 21.35 73.02 37.76 49.54 53.44 71.88 44.30 52.54 56.24 LLaMA-2-Chat CodeLLaMA-Instruct Symbol-LLMBase Symbol-LLMInstruct (b) Open-source 7B Symbolic General Symbol+Del. Avr. 0 10 20 30 40 50 60 70 80 90 18.46 45.27 27.22 30.32 9.54 37.22 30.33 25.70 73.67 41.91 55.00 56.86 74.34 48.40 60.45 61.06 LLaMA-2-Chat CodeLLaMA-Instruct Symbol-LLMBase Symbol-LLMInstruct (c) Open-source 13B Figure 7: Overall results comparison. We report the performances on close-source, open-source 7B as well as open-source 13B LLMs. The results on symbolic tasks, general tasks, symbol+delegation tasks and the average ones are included. Tasks LLaMA-2-Chat Symbol-LLM LLaMA-2-Chat Symbol-LLM 7B 7B 13B 13B Out-of-distribution Tasks Tyreworld 23.12 33.30 23.85 79.18 WikiSQL 21.05 73.75 34.86 69.83 WikiTQ 3.50 16.97 7.50 15.31 SVAMP 22.00 72.80 45.00 76.80 Asdiv 25.86 75.76 46.61 79.01 Out-of-domain Tasks NL2JS 16.36 16.64 14.35 17.60 NL2PHP 9.27 23.38 22.84 26.31 H-Termes 39.39 63.87 39.64 69.08 H-Floortile 47.66 76.88 63.57 79.57 H-Grippers 79.52 98.96 55.87 99.00 H-Scan 0.00 91.25 0.00 95.34 H-Clevr 0.00 98.52 0.00 98.02 Table 6: Results on OOD symbolic tasks. Tasks starting with ‘H-’ mean the symbolic forms are automatically modified to construct the much harder scenarios and imitate the user-defined settings. particularly better at understanding and reasoning, beating LLaMA-2-Chat on almost all tasks. Such findings are intuitive because text-to-symbol can be regarded as an abstract form of NL, which enriches the understanding abilities of the model. Meanwhile, the generation of some symbolic forms (e.g., code) involves the implicit reasoning process, which is actually similar to the chain-of-thought strategy. To this end, the superior reasoning capability is within our expectations. J Results on Symbol+Delegation Setting In the main paper, we present the results of math reasoning under the Symbol+Delegation paradigm. Next, we will provide the remaining 5 scenarios, i.e., symbolic reasoning (J.1), logical reasoning (J.2), robotic planning (J.3), visual question answering (J.4) and table question answering (J.5). J.1 Symbolic Reasoning In symbolic tasks, we also adopt the Python code as the generated symbolic representations, and leverage a Python interpreter to conduct the reasoning. Two representative tasks, Colored Objects and Last Letter Concatenation are selected for testing under the zero-shot setting. From the results in Table 8, the Symbol-LLM series are competitive in both tasks. Notably, even Symbol-LLM-7B shows over 10% superiority over GPT-3.5-turbo in Colored Object task. It is worth noticing that LLaMA-2-Chat models underperform consistently in Last Letter task. Since samples in this dataset share similar forms, the model tends to fail if the model does not master the techniques required for solving it. J.2 Logical Reasoning In logical reasoning tasks, we take three tasks into consideration, i.e., FOLIO, ProofWriter and ProntoQA. For the FOLIO task, Symbol-LLM first transforms the natural language into FOL forms and delegates the solution to the FOL solver. 
ProofWriter and ProntoQA are represented in logic programming language and integrate Pyke expert 13109 Tasks LLaMA-2-Chat† Symbol-LLM Examinations AGI-Eval 28.50 27.55 C-Eval 31.90 34.96 GaokaoBench 16.10 13.37 ARC-c 54.90 61.69 Knowledge BoolQ 81.30 77.00 CommonsenseQA 69.90 59.21 TrivialQA 46.40 40.90 NaturalQuestions 19.60 16.48 Understanding OpenbookQA 74.40 79.20 XSUM 20.80 33.29 LAMBADA 66.90 70.10 C3 49.80 52.82 Reasoning CMNLI 36.10 55.43 OCNLI 36.40 48.10 Ax-b 58.50 62.68 Ax-g 51.70 64.61 Hellaswag 74.10 53.10 SIQA 55.40 69.60 MBPP 17.60 22.80 ReCoRD 22.50 39.49 Table 7: Results on extensive general tasks. The evaluations are based on OpenCompass (Contributors, 2023). † denotes the model results directly derived from the leaderboard. system for deductive reasoning. Results are listed in Table 9. Symbol-LLM-7B series performs relatively better than 13B counterparts. Among all three tasks, the Symbol-LLM-7B series outperforms GPT-3.5-turbo with large advantages. In comparison with the SOTA model Logic-LM, which is based on off-the-shelf LLMs, Symbol-LLM also wins the ProofWriter tasks, with 5%-6% improvements. J.3 Robotic Planning In the field of robotic planning, Symbol-LLM first transforms the natural language description into PDDL forms and relies on the fast downward solver to give the faithful action sequence. In total, we select 6 different robotic settings to verify the proposed method. Results are presented in Table 10. Among four in-domain tasks, SymbolLLM performs pretty well compared with strong baselines, achieving the best results in most cases. Even with GPT-3.5-turbo and Claude-1, both our 7B and 13B series win 3 (out of 4) tasks. However, it struggles a lot in OOD tasks. Only in Tyreworld scenario, Symbol-LLMInstruct-13B achieves the best result, beating all close-source and opensource baselines. It is required to state that these selected robotic planning tasks are very challengModels Del. ColoredObject LastLetter Is OOD Setting ✓ ✓ Close-source LLMs GPT-3.5-turbo ✓ 12.45 94.00 Claude-1 ✓ 46.05 90.67 Open-source LLMs (7B) LLaMA-2-Chat ✓ 28.70 0.00 CodeLLaMA-Instruct ✓ 4.60 0.00 Symbol-LLMBase ✓ 22.65 90.67 Symbol-LLMInstruct ✓ 25.50 96.67 Open-source LLMs (13B) LLaMA-2-Chat ✓ 30.35 0.00 CodeLLaMA-Instruct ✓ 1.35 0.00 Symbol-LLMBase ✓ 36.35 94.00 Symbol-LLMInstruct ✓ 34.00 96.67 Table 8: Results on Symbolic Reasoning. Models Del. FOLIO ProofWriter PrOntoQA Is OOD Setting ✓ Close-source LLMs GPT-3.5-turbo ✓ 44.61 29.00 52.00 Claude-1 ✓ 37.25 35.83 55.80 Logic-LM (SOTA) ✓ 61.76 70.11 93.20 Open-source LLMs (7B) LLaMA-2-Chat ✓ 34.80 34.83 50.00 CodeLLaMA-Instruct ✓ 32.84 32.50 50.20 Symbol-LLMBase ✓ 46.08 76.50 55.60 Symbol-LLMInstruct ✓ 49.02 76.33 57.20 Open-source LLMs (13B) LLaMA-2-Chat ✓ 33.33 35.83 49.20 CodeLLaMA-Instruct ✓ 32.84 34.00 50.00 Symbol-LLMBase ✓ 33.82 76.33 48.40 Symbol-LLMInstruct ✓ 35.29 75.50 53.60 Table 9: Results on Logical Reasoning. All results are obtained under the one-shot setting. ing, given the length and rigor requirements of the generated programming language. Even closesource LLMs fail in some scenarios. Therefore, we argue it is still an open question for future studies. J.4 Visual Question Answering We also explore our potential in the multi-modal scenario. Geometry question answering is selected as the task for the test. Note that the understanding of image is not within our scope, we only focus on the text part and transform the natural language query into logical forms. 
Then the solution is delegated to the off-the-shelf baseline methods. Comparison results are shown in Figure 8. The top red bar means the performances using the annotated logic forms from the text. Note that since the utilized delegation method is a neuralbased baseline just for a simple evaluation, the upper boundary does not represent the boundary of this task. From the figure, Symbol-LLM variants are approaching it and significantly outperform all the other baselines. 13110 Models Del. Blocksworld Termes Floortile Grippers Barman Tyreworld Is OOD Setting ✓ ✓ Close-source LLMs GPT-3.5-turbo ✓ 55.00 0.00 0.00 100.00 95.00 30.00 Claude-1 ✓ 55.00 0.00 0.00 85.00 50.00 5.00 Open-source LLMs (7B) LLaMA-2-Chat ✓ 5.00 0.00 0.00 5.00 0.00 0.00 LLaMA-2-Chat SFT ✓ 75.00 100.00 0.00 0.00 0.00 0.00 CodeLLaMA-Instruct ✓ 5.00 0.00 0.00 20.00 0.00 0.00 Symbol-LLMBase ✓ 90.00 100.00 5.00 15.00 0.00 0.00 Symbol-LLMInstruct ✓ 100.00 50.00 20.00 20.00 0.00 5.00 Open-source LLMs (13B) LLaMA-2-Chat ✓ 0.00 0.00 0.00 45.00 50.00 5.00 LLaMA-2-Chat SFT ✓ 70.00 100.00 25.00 10.00 0.00 0.00 CodeLLaMA-Instruct ✓ 5.00 0.00 0.00 0.00 0.00 0.00 Symbol-LLMBase ✓ 90.00 100.00 0.00 30.00 0.00 10.00 Symbol-LLMInstruct ✓ 100.00 90.00 25.00 45.00 20.00 35.00 Table 10: Results on Robotic Planning. The evaluation is under the one-shot setting. Close-Source Open-source 7B Open-source 13B 0 20 40 60 80 100 Results on Geometry3K (%) 68.89 (GT Upper Boundary) GPT-3.5 Claude-1 LLaMA-2-Chat CodeLLaMA-Instruct Symbol-LLM(Base) Symbol-LLM(Instruct) 47.59 31.95 36.44 7.15 63.3963.39 21.9626.12 65.2264.39 Figure 8: Performances on Geometry3k task. J.5 Table Question Answering Table (or database) question answering is also a hot topic in recent years. Thus, we select two OOD tasks WikiSQL and WikiTQ for evaluations. The natural language query is first transformed into an SQL query and it is executed by an SQL solver over the given tables or databases under the zero-shot setting. We report experimental results in Table 11. Symbol-LLM series are consistently superior to all open-source and close-source baselines, with over 40% margins in WikiSQL and 3%∼14% advantages in WikiTQ. K Supplementary Experiments K.1 Tuning Design Choices Since our motivation is to build a symbolic foundation for current LLMs, we select the full fine-tuning strategy in the implementation. This part provides a comparison with different tuning design choices: (1) LoRA fine-tuning and (2) curriculum learning. In Figure 9, the performances on 34 symbolic tasks are presented. Models Del. WikiSQL WikiTQ Is OOD Setting ✓ ✓ Close-source LLMs GPT-3.5-turbo ✓ 28.49 11.58 Claude-1 ✓ 26.79 8.79 Open-source LLMs (7B) LLaMA-2-Chat ✓ 21.05 3.50 CodeLLaMA-Instruct ✓ 20.18 2.88 Symbol-LLMBase ✓ 70.88 17.15 Symbol-LLMInstruct ✓ 73.75 16.97 Open-source LLMs (13B) LLaMA-2-Chat ✓ 34.86 7.50 CodeLLaMA-Instruct ✓ 33.15 6.70 Symbol-LLMBase ✓ 71.69 17.31 Symbol-LLMInstruct ✓ 69.83 15.31 Table 11: Results on Table Question Answering. LoRA tuning To inject new symbolic knowledge and lay a symbolic foundation for LLM, adapter tuning (e.g., LoRA) is not sufficient compared with the full fine-tuning design. Curriculum learning Based on the performances of LLaMA-2-Chat on these 34 tasks, we rank the difficulty level of tasks, forming the training order of curriculum learning (from easy to hard tasks). From the results, curriculum learning performs 4%-5% worse than our design choice. 
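For concreteness, the LoRA alternative compared above could be configured with the Hugging Face peft library roughly as follows (a minimal sketch; the rank, alpha, dropout, and target modules are illustrative assumptions, since the paper does not report the adapter hyperparameters):

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# LLaMA-2-Chat is the backbone used throughout the paper (access-gated on the Hub).
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

lora_cfg = LoraConfig(
    r=16,                       # hypothetical rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only a small fraction of weights are trainable

Symbol-LLM instead fully fine-tunes all parameters, which is what allows the new symbolic knowledge to be injected as a foundation rather than as a thin adapter.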
K.2 Comparison with Single Domain SFT As discussed above, one of our hypotheses is that various symbols share underlying interrelations, though they are in quite different forms. Thus, we expect that the learning of symbolic knowledge will mutually benefit each other if they are treated in a unified manner. We present the comparison results between single SFT and unified SFT in Figure 10. The light 13111 7B 13B 0 10 20 30 40 50 60 70 80 Performance (%) 47.24 51.74 68.33 69.53 73.02 73.67 LoRA Curriculum Learning Symbol-LLM Figure 9: Comparison between different tuning design choices. blue bar denotes the single domain SFT while the dark blue one is the unified SFT. We categorize the text-to-symbol generation tasks into 12 task domains, according to their similarity in symbolic forms. Within one domain, all tasks are tuned together and the performances on test splits are averaged as the single SFT results. To reduce the effect of the tuning strategy, we utilize the SymbolLLMBase model to measure the results on the unified SFT setting. Each sub-figure in Figure 10 corresponds to one specific domain. In most domains, unified SFT is superior to single-domain SFT. Larger gains are observed in some uncommon symbolic forms, such as PDDL for planning tasks and molecular formulas in AI for Science scenarios. It presents the possibility that unified SFT on various symbols may help extend the model coverage to low-resource cases. It is also worth noting that in some cases, single-domain SFT performs a little better than Symbol-LLMBase. This is because purely overfitting on specific symbolic forms with powerful LLMs is usually easy to get promising results. K.3 Extrapolating to New Symbols In the above section, we introduce symbol-evol strategy to expand sample diversity and facilitate the training of instruction-following ability. Following this strategy, we can also automatically generate abundant novel instructions to extrapolate to new symbols. To this end, we further evaluate Symbol-LLM by following novel instructions. The experiments are based on Clevr and SCAN tasks. Applying symbol-evol strategy, we obtain Clevr-evol and SCAN-evol datasets. Evaluation results are presented in Figure 11. From the results, the more complex setting (i.e., green bar) does not induce a significant decrease in model performance. Especially, in the Clevr task, Symbol-LLM even does better given the novel instructions. It uncovers that Symbol-LLM follows the instructions during the reasoning process, instead of merely memorizing the specific symbolic forms. L Analysis: Alignment and Uniformity Beyond performances on symbolic tasks, it is also required to reveal what leads to superiority. Inspired by (Wang and Isola, 2020; Yuan et al., 2023), we extend the ideas of Alignment and Uniformity to evaluate the model perception of symbolic knowledge. Alignment 6 is utilized to measure the interrelations between symbolic forms. Uniformity quantifies the degree of evenness or uniformity in the distribution of symbolic representations. The concept of a uniform feature distribution is valuable as it encourages a higher information entropy, representing more information retention. Alignment Different from the original implementation (Wang and Isola, 2020) which considers positive pairs in the contrastive learning, this work takes the symbolic sequences under the same symbolic form as the positive pairs. For any symbolic form X, their data distributions are referred to as PX. 
The alignment within X can be measured with the following formula:

$\mathcal{L}_{\mathrm{align}}(X) = \mathbb{E}_{x_1, x_2 \overset{\mathrm{i.i.d.}}{\sim} P_X} \left\| f(x_1) - f(x_2) \right\|^2, \quad (3)$

where x1 and x2 are samples from the specific symbolic form X, f(·) returns the LLM embeddings of the symbolic sequences, and ∥·∥ returns the norm of the vector. In the implementation, we select 16 main symbolic forms and sample 100 symbolic sequences for each form to measure the alignment. f(·) leverages the mean-pooling representation of the last hidden states of the LLM. We average the alignment scores from the 16 symbols to obtain the final one. Notably, we apply a logarithmic operation to the Alignment loss to reduce its scale, without impacting the relative comparison.6

6 Here, Alignment refers to the concept in contrastive learning, but is not related to the alignment technique in LLMs.

Figure 10: Comparison between single SFT and unified SFT.
Figure 11: Comparisons between original setting and user-defined setting.

Uniformity Apart from alignment, we also calculate the uniformity of the LLMs on symbolic sequences. The uniformity Luniform is computed with the following formula:

$\mathcal{L}_{\mathrm{uniform}} = \log \mathbb{E}_{x, y \overset{\mathrm{i.i.d.}}{\sim} P_{\mathrm{data}}} e^{-2 \left\| f(x) - f(y) \right\|^2}, \quad (4)$

where the data distribution Pdata covers all the symbolic sequences, and f(·) again uses the mean-pooling representation of the last hidden states of the LLM. Leveraging the above definitions, further analysis and comparison of Symbol-LLM are conducted. The item-wise conclusions are as follows:

(1) Symbol-LLM optimizes symbol distinctiveness and overall expressiveness in the embedding space (superior Alignment and Uniformity). Based on Equations 3 and 4, we can assess the proficiency of LLMs in handling symbols. Figure 12 presents the visualization of Alignment-Uniformity; the x-axis stands for uniformity while the y-axis is the alignment, and both metrics are better when kept small. From the figure, Symbol-LLMInstruct models perform consistently better than the original LLaMA-2-Chat models, with clear merit in Alignment and Uniformity. This can be regarded as an in-depth explanation of the superior performance on symbolic generation tasks. Further, it shows that the two-stage tuning framework corrects the weakness in Uniformity under the 7B setting (Symbol-LLMInstruct vs. Symbol-LLMBase). Both metrics are well optimized with the proposed two-stage tuning framework as well as the symbolic data collection. For Symbol-LLMBase models, though the 7B version loses some Uniformity, they consistently achieve superior alignment.

(2) Symbol-LLM excels at capturing symbolic interrelations. The above calculation of Alignment roughly depicts the similarity among samples under the same symbolic form. To this end, we extend the idea of Alignment to measure the interrelation between any two symbolic forms X and Y. The score S(X, Y) is calculated with the following formula:

$S(X, Y) = \mathbb{E}_{x \sim P_X,\, y \sim P_Y} \left\| f(x) - f(y) \right\|^2, \quad (5)$

where x is a symbolic sample in the form X, while y is a sample in the symbolic form Y.
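A minimal sketch of how Equations (3)–(5) can be computed from mean-pooled last-hidden-state embeddings (the use of PyTorch and the exact batching are our assumptions):

import torch

def alignment(emb_x: torch.Tensor) -> torch.Tensor:
    # Eq. (3): mean squared distance between pairs drawn from one symbolic form X.
    # emb_x: (n, d) mean-pooled last-hidden-state embeddings for that form.
    return torch.pdist(emb_x, p=2).pow(2).mean()

def uniformity(emb_all: torch.Tensor) -> torch.Tensor:
    # Eq. (4): log of the mean Gaussian potential over all symbolic sequences.
    return torch.pdist(emb_all, p=2).pow(2).mul(-2).exp().mean().log()

def cross_form_score(emb_x: torch.Tensor, emb_y: torch.Tensor) -> torch.Tensor:
    # Eq. (5): mean squared distance between samples of two different forms X and Y.
    return torch.cdist(emb_x, emb_y, p=2).pow(2).mean()

In line with the description above, the per-form alignment scores would then be averaged over the 16 symbolic forms and log-scaled before comparison.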
We binarize the scores with the manually defined 13113 Figure 12: Visualization of Alignment-Uniformity. Both metrics are inversely related, which means a lower value indicates better performance. threshold for a more intuitive illustration. We ensure the same threshold under a fair comparison of the same model size. And the set of thresholds will not affect the overall conclusion. The visualization is presented in Figure 13, where the dark blue denotes the closer relation in the representation while the light one is the opposite. We make comparisons between the original LLaMA-2-Chat models and Symbol-LLMInstruct models, separately for the size of 7B and 13B. For the original LLaMA model (Figure 13a and 13c), the representations between different symbols exhibit significant sparsity. There are only three pairs of symbolic forms that effectively demonstrate the interrelations in the embedding space, i.e., AMR-PDDL, AMR-SPARQL and CLEVR-NLMaps. Also, under several symbol systems (e.g., Bash, FOL), the representation space of samples is also very scattered. The above observations demonstrate that previous foundational LLMs (i.e., LLaMA-2-Chat) lack the ability to capture the interrelations among symbolic systems. In comparison, Symbol-LLMInstruct series models excel at reflecting the interrelations between symbols. As presented in Figure 13b and 13d: 1) Symbols exhibiting potential connections are effectively aligned within the representation space, i.e., Python-AMR and CheBi-RX. 2) Samples within each symbol are pulled closer together. Combining the above two observations and analysis, the superior performances of Symbol-LLM on the symbolic generation tasks are sourced from better alignment among symbols in the embedding PDDL SQL s-Expre. SPARQL TOP NLMaps AMR CMD BASH RX Python FOL GQA Clevr Geo CheBi PDDL SQL s-Expre. SPARQL TOP NLMaps AMR CMD BASH RX Python FOL GQA Clevr Geo CheBi (a) LLaMA-2-Chat (7B) PDDL SQL s-Expre. SPARQL TOP NLMaps AMR CMD BASH RX Python FOL GQA Clevr Geo CheBi PDDL SQL s-Expre. SPARQL TOP NLMaps AMR CMD BASH RX Python FOL GQA Clevr Geo CheBi (b) Symbol-LLMInstruct (7B) PDDL SQL s-Expre. SPARQL TOP NLMaps AMR CMD BASH RX Python FOL GQA Clevr Geo CheBi PDDL SQL s-Expre. SPARQL TOP NLMaps AMR CMD BASH RX Python FOL GQA Clevr Geo CheBi (c) LLaMA-2-Chat (13B) PDDL SQL s-Expre. SPARQL TOP NLMaps AMR CMD BASH RX Python FOL GQA Clevr Geo CheBi PDDL SQL s-Expre. SPARQL TOP NLMaps AMR CMD BASH RX Python FOL GQA Clevr Geo CheBi (d) Symbol-LLMInstruct (13B) Figure 13: Visualization of the alignment relations between symbols. Dark blue denotes a close relation between two symbols in the representation. space as well as the optimized uniformity. M Prompting Details This section provides the zero-shot prompting details during the Symbol-LLM test. Totally, 34 symbolic tasks are covered. Planning prompts for planning domain cover 6 tasks: Blocksworld, Termes, Floortile, Grippers, Barman, and Tyreworld. Given the description of a scene, please transform the part of the scene from natural language into PDDL file. The defined atom is: [Atom Description] The natural language description is: [NL Description] The PDDL file to this problem is: SQL prompts for SQL domain cover 3 tasks: Spider, Sparc, and Cosql. Using valid SQLite, transform the question into SQL form for the tables provided above. 
The database schema is: [Scheme] The question is: [NL Query] The corresponding SQL for the question is: 13114 KG/DB prompts for KG domain cover 3 tasks: WebQSP, GrailQA (s-expression), and CompWebQ (SPARQL). Given the knowledge graph, give me the sExpression/SPARQL for request. The given knowledge graph is: [Serialized KG] The request is: [NL Query] The corresponding s-expression/SPARQL for the request is: AMR prompts for AMR domain cover 3 tasks: AMR 3.0, AMR 2.0, and BioAMR. Transform the natural language sentence into AMR form. The input sentence is: [NL sentence] The AMR to this sentence: Ontology prompts for ontology domain cover 2 tasks: TekGen and WebNLG. Given the following ontology, extract the triples from the sentence. In the output, split each triple with ’|’. The ontology concepts are: [NL Concepts] The ontology relations are: [NL Relations] The sentence is: [NL sentence] The output triples are: API prompts for API domain cover 3 tasks: MTOP, TOPv2 and NLMaps. Given api calls, output the TOP Representation for the request. The candidate api calls: [candidate API calls] The request is: [NL Query] The corresponding TOP representation for the query is: Transform the natural language question into the formal language to query the map apis. The natural language question is: [NL Query] The corresponding formal language to query the map apis is: Command prompts for Command domain cover SCAN task. Transform the natural language sentence into action sequence from the candidate actions. The candidate actions are: [Candidate Actions] The NL sentence is: [NL sentence] The corresponding action sequence for the query is: Code prompts for Code domain cover 5 tasks: NL2BASH, NL2RX, NL2Python, NL2Java, and NL2GO. Generate the [Programming Language] code given the natural language instruction. The natural language question is: [NL Query] The corresponding code to this question: FOL prompts for FOL domain cover 3 tasks: FOLIO, MALLS, and LogicNLI. Transform the natural language sentence into first-order logic forms. The input sentence is: [NL sentence] The corresponding first-order logic for the request is: Visual prompts for Visual domain cover 3 tasks: GQA, CLEVR, and Geometry3K. Transform the natural language question into logical functions. Each part of the logical functions starts with the specific operation and they are connected with –>. The candidate operations include (but not limited to): [candidate operations] The natural language question is: [NL Query] The corresponding logical function for the question is: Transform the natural language question into logical functions. The logical functions are organized in the compositional forms of operations. The candidate operations include: [candidate operations] The natural language question is: [NL Query] The corresponding logical function for the question is: 13115 Transform the description of geometric questions into the formal language in the logic form. We define the following valid predicates. [candidate predicates] The natural language question is: [NL Query] The corresponding logical function for the question is: Math prompts for Math domain cover 3 tasks: GSM8K, MATH and AQUA. Write Python code to solve the question. The question is: [NL Query] The solution code is: AI4Science prompt for AI4Science domain cover CheBi task. Generate the SMILES (Simplified Molecular Input Line Entry System) representation from the natural language description of the molecular. 
The molecular is: [NL Description] The corresponding SMILES representation is:
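As an illustration, the zero-shot templates above can be instantiated by straightforward string substitution; a minimal sketch for the SQL template (the placeholder field names follow the brackets in the templates and are not an official interface):

SQL_TEMPLATE = (
    "Using valid SQLite, transform the question into SQL form for the tables "
    "provided above.\nThe database schema is: {schema}\n"
    "The question is: {question}\n"
    "The corresponding SQL for the question is:"
)

def build_sql_prompt(schema: str, question: str) -> str:
    # Fill the bracketed slots of the zero-shot SQL prompt.
    return SQL_TEMPLATE.format(schema=schema, question=question)

print(build_sql_prompt(
    schema="singer(id, name, age)",
    question="List the names of singers older than 20.",
))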
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13117–13129 August 11-16, 2024 ©2024 Association for Computational Linguistics From Sights to Insights: Towards Summarization of Multimodal Clinical Documents Akash Ghosh1, Mohit Singh Tomar1, Abhisek Tiwari1, Sriparna Saha1, Jatin Avinash Salve1 Setu Sinha 2 1Department of Computer Science And Engineering, Indian Institute of Technology Patna, India 2 Indira Gandhi Institute of Medical Sciences, Patna, India {akash_2321cs19,sriparna}@iitp.ac.in, {mohitsinghtomar9797, abhisektiwari2014,jatinsalve.work,drsinhasetu}@gmail.com Abstract The advancement of Artificial Intelligence is pivotal in reshaping healthcare, enhancing diagnostic precision, and facilitating personalized treatment strategies. One major challenge for healthcare professionals is quickly navigating through long clinical documents to provide timely and effective solutions. Doctors often struggle to draw quick conclusions from these extensive documents. To address this issue and save time for healthcare professionals, an effective summarization model is essential. Most current models assume the data is only textbased. However, patients often include images of their medical conditions in clinical documents. To effectively summarize these multimodal documents, we introduce EDI-Summ, an innovative Image-Guided Encoder-Decoder Model. This model uses modality-aware contextual attention on the encoder and an image cross-attention mechanism on the decoder, enhancing the BART base model to create detailed visual-guided summaries. We have tested our model extensively on three multimodal clinical benchmarks involving multimodal question and dialogue summarization tasks. Our analysis demonstrates that EDI-Summ outperforms state-of-the-art large language and vision-aware models in these summarization tasks. Disclaimer: The work includes vivid medical illustrations, depicting the essential aspects of the subject matter. 1 Introduction One of the most impactful sectors where AI advancements can bring a significant revolution is healthcare. The latest WHO report highlights a drastic doctor-to-patient ratio, emphasizing the need for automation in various healthcare tasks. This imbalance, coupled with strides in information and communication technologies (ICTs) and the impact of COVID-19, has led to a significant rise in telehealth practices (Nittari et al., 2020). Recent studies (Sahoo et al., 2022a), (Abacha et al., 2023), (Xue et al., 2018), (Sahoo et al., 2022b) demonstrate that AI models can be effectively employed in various healthcare settings and can drastically increase the productivity of healthcare professionals. Figure 1: Our model EDI-Summ takes a multimodal clinical document as input and generates the corresponding summary. In this regard, a significant development is that a considerable number of patients now discuss their medical history and conditions in various clinical forums before attending in-person appointments. Distilling key insights from these clinical documents not only boosts the efficiency of AI-based healthcare systems but also accelerates responses to patients, providing valuable time savings for healthcare professionals and facilitating a more prompt and beneficial exchange of information between both patients and doctors (Abacha and DemnerFushman, 2019). Most research in the field of medical document summarization has focused on unimodal, textbased approaches. Wei et al. 
(2023) explored medical query summarization using entity-driven contrastive learning. Similarly, (Joshi et al., 2020) addressed text-based medical dialogue summarization, while (DeYoung et al., 2019) focused on summarizing multiple medical documents. Recognizing the adage that an image is worth a thousand 13117 words, a discernible shift is underway, with patients increasingly incorporating images of their medical conditions alongside their queries. This inclination arises from the fact that a considerable portion of the population may not be well-versed in medical terminology. This challenge becomes more pronounced when dealing with related symptoms, such as distinguishing between eye redness and eye inflammation. Integrating text and images provides a more comprehensive context, enriching the analysis beyond what textual examination alone might capture, which is underscored in the work (Kumar et al., 2018). Motivated by the research gap (Ghosh et al., 2024a), (Tiwari et al., 2023), and (Ghosh et al., 2024b) have contributed datasets along with their proposed models, utilizing techniques such as prompting and end-to-end finetuning to incorporate visual information alongside textual content. Each of these works has established its own model, but no single model has been developed that is well-suited for all these multimodal clinical summarization benchmark datasets. Our exploration culminates in proposing a model named EDI-Summ, designed to work across all the mentioned multimodal clinical summarization datasets. EDI-Summ integrates contextual multimodal fusion within the encoder and employs a cross-attention mechanism in the decoder of the BART architecture. Research Questions: We aim to investigate the following research questions in this work: R1) How is the performance of EDI-Summ compared to the baselines? R2) How different pre-trained vision models like ResNet (He et al., 2016), VGGNet (Simonyan and Zisserman, 2014), VIT (Dosovitskiy et al., 2020) affect the quality of generated summary in terms of various evaluation metrics? R4) How much does the performance vary if we change the order of image fusion in the encoder and decoder layers? R5) How effective is EDI-Summ in handling Hindi-English codemixed text ? R6) Do we need an Image Fusion module in both the encoder and decoder? What is the impact of Decoder Visual cross-attention? Contributions: The key contributions of our work are stated below: 1) We present an innovative encoder-decoder visual-infused transformer model: EDI-Summ, which includes a contextual image fusion layer on the encoder side and an image cross-attention layer on the decoder side. Our proposed model exhibits remarkable performance gains, substantially outperforming baseline models and unimodal architectures in comprehensive evaluations. 2) We undertook an exhaustive automated and human evaluation, complemented by in-depth qualitative analysis and risk assessments. This meticulous scrutiny was essential to guarantee the safety and reliability of the model’s outputs, affirming its readiness for deployment in real-world healthcare scenarios with high confidence. 2 Related Works The following works have been relevant to the following two research areas, namely (a) Works on Medical Document Summarization and (b) Works on Multimodal Summarization. Works on Medical Document Summarization: Wang et al. (2020) was the first to conduct research on the summarization of COVID-19 documents. DeYoung et al. 
(2021) introduced a multi-document biomedical document summarization dataset. Wallace et al. (2021) introduced a dataset on the summarization of factual queries. In 2019, the MeQSum dataset (Abacha and Demner-Fushman, 2019) revolutionized Medical Question Summarization (MQS), establishing itself as a dedicated resource for the field. There was a competition on the task in 2021 by Abacha (Abacha et al., 2021). Researchers initially utilized various pre-trained models like BART (Lewis et al., 2019), Pegasus (Zhang et al., 2020), and Prophetnet (Qi et al., 2020). Goff and Loehfelm (2018) proposed a novel method of using pre-trained models for summarizing the impression section of radiology reports. Joshi et al. (2020) proposed a method to exploit local structures for summarizing medical dialogues. Chintagunta et al. (2021) showed how GPT-3 can be utilized for generation of quality training data for medical dialogue summarization. Poornash et al. (2023) proposed a novel solution for layout summarization of biomedical articles. Works on Multimodal Data: Previous research has demonstrated the benefits of incorporating multimodal information in various tasks (Ghosh et al., 2024c). For instance Jha et al. (2022) showed how combining both visual and language representations can help in patch-based identification of lexico-semantic relations. Suman et al. (2022) showed how multimodal information can be used 13118 for personality prediction. Jha et al. (2024) showed multimodal explanations can be useful for enhanced cyberbullying understanding. Maity et al. (2022) showed how multimodal information from memes helps in combating cyberbullying. Tiwari et al. (2022) highlighted how multimodal information improves the performance of Disease Diagnosis Virtual Assistants. Delbrouck et al. (2021) showed that integrating images leads to better summarization of radiology reports. Kumar et al. (2023) showed multimodality helps in summarizing news articles. Verma et al. (2023) developed a large multilingual multimodal dataset. Kumar et al. (2023) illustrated how multimodal summarization can be useful for extracting insights from opinions. Joshi et al. (2020) and Molenaar et al. (2020) worked on the task of handling dialogues between patients and doctors and how to summarize their conversations. Goff and Loehfelm (2018) and Cai et al. (2021) are some of the first to propose the task of radiology report summarization. 3 Methodology This section delineates the problem statement and outlines the various components of our proposed model. 3.1 Problem Formulation Formally given an input textual document (D) represented by D = {d0, d1, . . . , dn} where n is the sequence length of the document, and a visual image corresponding to a textual query represented by (V ), the task is to generate a clinically nuanced summary guided by the visual cue. The proposed architecture and its components are shown in Figure 2. The proposed model EDI-Summ has three main components, namely: i) Representation of different Modalities ii) Encoder Contextual Image Fusion iii) Decoder Image Cross Attention. A thorough explanation of the functionality of each section is provided in the following sections. 3.2 Representation of Different Modalities Visual Representation: Our experimentation involved ResNet, VGG-Net, and VIT for generating image embeddings for the multimodal fusion layer. 
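As a rough illustration of this embedding step, the vision backbone can be wrapped as follows. This is a minimal sketch rather than the authors' released code: the module name and the choice of ResNet-50 weights are our assumptions, while the layer freezing, pooling, and 768-dimensional projection it implements are the ones described in the remainder of this subsection.

```python
# Minimal sketch (not the authors' released code): wrap a torchvision ResNet-50
# and project its 2048-d pooled feature down to the 768-d textual space.
import torch
import torch.nn as nn
from torchvision import models

class ImageEncoder(nn.Module):  # hypothetical name
    def __init__(self, text_dim: int = 768):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        # Keep the convolutional trunk + global average pooling; drop the
        # ImageNet classification head.
        self.backbone = nn.Sequential(*list(backbone.children())[:-1])
        # Freeze all but the last few blocks (the paper freezes every layer
        # except the last three).
        for child in list(self.backbone.children())[:-3]:
            for p in child.parameters():
                p.requires_grad = False
        self.proj = nn.Linear(2048, text_dim)  # 2048 -> 768

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(images).flatten(1)  # (B, 2048)
        return self.proj(feats)                   # (B, text_dim)
```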
Since these pre-trained models produced image embeddings of dimension 2048, we introduced an additional neural network atop them to transform its model dimension to align with the textual dimension. The weights of all layers, excluding the last three, were frozen, and we derived a vector representation of the image by pooling the output from the last layer, ultimately achieving a reduced embedding dimension of 768. Textual Document Representation: In our study, a patient’s medical document is represented as a sequence of words. We utilize the pretrained BART-base language model to generate 768-dimensional embeddings for textual data. With six layers in its encoder and decoder, BART captures bidirectional contextual information, producing a fixed-size textual representation. BART tokenizers are instrumental in breaking down input text into tokens, ensuring effective encoding for our multimodal clinical query summarization. 3.3 Encoder Contextual Image Fusion The various components of the Encoder Contextual Image Fusion are elucidated in the following mentioned points: a) Multimodal Context Aware Self Attention: The effectiveness of the aggregated representation hinges on the seamless integration of multiple information sources. Inspired by the findings of Yang et al. (2019), we proposed a multimodal contextaware self-attention that produces conditioned key (K) and value (V ) vectors that become instrumental in generating modality-aware attention vectors. This is elaborated in Figure 2. Upon obtaining the intermediate representation H = {h0, h1, . . . , hn} generated by the BART encoder at a particular layer, we proceed to compute the query (Q), key (K), and value (V ) ∈Rn×d vectors, where d is the model dimension. This calculation is shown in Equation 1.   Q K V  = H   WQ WK WV   (1) Here WQ, WK, and Wv ∈Rd×d are the learnable parameters corresponding to the query, key, and value vectors, respectively. In assessing the interrelation between textual query and the visual information, we create conditioned key ( ˆK) and value ( ˆV ) vectors tailored to textual and visual contexts. These attention vectors transpose the query vector derived from the dialogue transcript, generating a versatile, imageinfused information vector. The key and value pairs are calculated as shown in Equation 2. Here, we pass visual representation through a transformer 13119 Figure 2: Architecture of our model (a) EDI-Summ, (b) Contextual Image fusion, (c) Image Cross Attention. Here, we obtain textual representation from BART and visual representation from VGG-Net. We perform image fusion in BART Encoder and Decoder and generate the question summary at the decoder. and project its sequence length1 to be equal to textual sequence length in order to perform multimodal fusion.  ˆK ˆV  =  λk λv  (G  Uk Uv  ) + (1 −  λk λv  )  K V  (2) where G ∈Rn×d indicates visual representation, Uk and Uv ∈Rd×d are the learnable parameters. The vector λ = λk λv  controls how much information to retain from visual modality. This parameter can be calculated by the Equation 3.  λk λv  = σ(  K V  Wk1 Wv1  + G  Uk Uv  Wk2 Wv2  ) (3) where Wk1, Wk2, Wv1, and Wv2 ∈Rd×1 are learnable parameters. The final scaled dot product attention can be calculated as shown in Equation 4. Hv = Softmax(Q ˆKT √dk ) ˆV (4) where Hv represents visual information fused vector. b) Unified Multimodal Integration: To regulate and modulate the flow of the visual-infused 1We transform image sequence length (1) to textual sequence length (n). 
Image sequence length is one because it is a single image corresponding to a given textual query. information, we utilize visual gates. The transmission of contextual information occurs through these gates according to Equation 5: gv = σ([H ⊕Hv]Wv + bv) (5) Here, Wv ∈R2d×d and bv ∈Rd×1 are learnable parameters. The ultimate amalgamated representation, denoted as ˆH, is obtained according to the Equation 6. ˆH = H + gv ⊙Hv (6) The contextualized image-infused vector ( ˆH) is passed to the upper layer of the transformer and sent to the decoder. 3.4 Decoder Image Cross Attention In the BART Decoder, we inject an Image crossattention block for performing image fusion. We pass the image representation obtained from an image encoder to a transformer, then project its sequence length to the textual sequence length and obtain image representation I ∈Rm×d, m is the sequence length at the decoder. Let the intermediate representation at the input of Image cross-attention be F ∈Rm×d. We obtain query (Qd) from F and key (Kd) and value (Vd) vectors from I as shown in Equation 7. 13120   Qd Kd Vd  =   FWQd IWKd IWVd   (7) Here WQd, WKd, and WVd ∈Rd×d are trainable parameters. Then, we perform multi-head crossattention as shown in Equation 8 to obtain a new image-infused vector FI. FI = Softmax(QdKdT √dk )Vd (8) We combine the image-infused vector FI with intermediate representation F using a gate mechanism shown in Equation 9 and Equation 10. gd = σ([F ⊕FI]WFd + bFd) (9) Here WFd ∈R2d×d and bFd ∈Rd×1 are trainable parameters. We pass the fused representation ˆF to the upper layers of the decoder and generate the summary. ˆF = F + gd ⊙FI (10) 4 Experimental Results and Analysis The following section outlines the datasets used for analysis, experimental setup, and evaluation discussion of the performance of the proposed model across a spectrum of both automatic and human evaluation metrics. A qualitative analysis of the generated summaries is conducted under various model configurations. 4.1 Datasets Used To ensure the effectiveness of the proposed model, we conducted experiments on three multimodal clinical datasets: MMQS (Ghosh et al., 2024a), MMCQS (Ghosh et al., 2024b), and MM_CliConSummation (Tiwari et al., 2023). These datasets cover topics related to multimodal question summarization and multimodal dialogue summarization in the context of healthcare. The MMQS and MMCQS datasets contain approximately 3,015 data points and cover 18 medical symptoms. In comparison, the MM_CliConSummation dataset includes around 1,668 samples and encompasses 266 symptoms. 4.2 Experimental Setup Our experimentation employed RTX 2080 Ti GPU, with each model requiring an average runtime of 20-30 minutes. Leveraging the PyTorch (Paszke et al., 2019) and Hugging Face (Wolf et al., 2019) libraries, we executed both baselines and our proposed architecture. The foundation of our proposed model rests on the BART (Lewis et al., 2019). The dataset was partitioned into training, validation, and test sets in an 80:5:15 ratio. Hyperparameters included maximum epochs (30), maximum sequence length (360), visual embedding size (786), Adam optimizer, learning rate (5e-05), and batch size (32). We have performed the most extensive experiments in the case of the MMCS dataset. For the case of MMCS, our textual baselines featured T5 (Raffel et al., 2020), GPT-3.5 (OpenAI, 2023a), and BART (Lewis et al., 2019). For multimodal baselines, we used KM-CliConSummation (Tiwari et al., 2023) and GPT-4V (OpenAI, 2023b). 
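Before turning to the results, the fusion mechanism of Sections 3.3 and 3.4 (Equations 1-10) can be made concrete with a single-head PyTorch-style sketch. All module and variable names below are ours, and the released implementation may differ, for example by using multi-head attention, dropout, or layer normalization.

```python
# Single-head sketch of the contextual image fusion (Eqs. 1-6) and the
# decoder image cross-attention (Eqs. 7-10). Names and shapes are ours.
import torch
import torch.nn as nn

class ContextualImageFusion(nn.Module):
    """Encoder-side modality-aware self-attention with a visual gate (Eqs. 1-6)."""
    def __init__(self, d: int = 768):
        super().__init__()
        self.w_q = nn.Linear(d, d, bias=False)   # W_Q
        self.w_k = nn.Linear(d, d, bias=False)   # W_K
        self.w_v = nn.Linear(d, d, bias=False)   # W_V
        self.u_k = nn.Linear(d, d, bias=False)   # U_k
        self.u_v = nn.Linear(d, d, bias=False)   # U_v
        self.w_k1 = nn.Linear(d, 1, bias=False)  # W_k1
        self.w_k2 = nn.Linear(d, 1, bias=False)  # W_k2
        self.w_v1 = nn.Linear(d, 1, bias=False)  # W_v1
        self.w_v2 = nn.Linear(d, 1, bias=False)  # W_v2
        self.gate = nn.Linear(2 * d, d)          # W_v, b_v in Eq. 5
        self.scale = d ** 0.5

    def forward(self, H, G):   # H: (B, n, d) text states, G: (B, n, d) image states
        Q, K, V = self.w_q(H), self.w_k(H), self.w_v(H)              # Eq. 1
        GUk, GUv = self.u_k(G), self.u_v(G)
        lam_k = torch.sigmoid(self.w_k1(K) + self.w_k2(GUk))         # Eq. 3
        lam_v = torch.sigmoid(self.w_v1(V) + self.w_v2(GUv))
        K_hat = lam_k * GUk + (1 - lam_k) * K                        # Eq. 2
        V_hat = lam_v * GUv + (1 - lam_v) * V
        attn = torch.softmax(Q @ K_hat.transpose(-2, -1) / self.scale, dim=-1)
        Hv = attn @ V_hat                                            # Eq. 4
        g_v = torch.sigmoid(self.gate(torch.cat([H, Hv], dim=-1)))   # Eq. 5
        return H + g_v * Hv                                          # Eq. 6

class DecoderImageCrossAttention(nn.Module):
    """Decoder-side image cross-attention with a gate (Eqs. 7-10)."""
    def __init__(self, d: int = 768):
        super().__init__()
        self.w_qd = nn.Linear(d, d, bias=False)  # W_Qd
        self.w_kd = nn.Linear(d, d, bias=False)  # W_Kd
        self.w_vd = nn.Linear(d, d, bias=False)  # W_Vd
        self.gate = nn.Linear(2 * d, d)          # W_Fd, b_Fd in Eq. 9
        self.scale = d ** 0.5

    def forward(self, F, I):   # F: (B, m, d) decoder states, I: (B, m, d) image states
        Qd, Kd, Vd = self.w_qd(F), self.w_kd(I), self.w_vd(I)        # Eq. 7
        attn = torch.softmax(Qd @ Kd.transpose(-2, -1) / self.scale, dim=-1)
        FI = attn @ Vd                                               # Eq. 8
        g_d = torch.sigmoid(self.gate(torch.cat([F, FI], dim=-1)))   # Eq. 9
        return F + g_d * FI                                          # Eq. 10
```

The sigmoid gates of Equations 5-6 and 9-10 let each position fall back to its purely textual or decoder-side representation when the visual signal contributes little, which is the regulating behaviour the gating mechanism is introduced for.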
Both GPT-3.5 Turbo and GPT-4V are used in the fewshot setting. In (Sahoo et al., 2024b), it is clearly stated that shot prompting is more effective in general than zero shot. Our proposed multimodal model, named EDI-Summ, is built on top of BART, where image fusion is done in both the encoder and decoder parts of the BART model. It is the first work where image fusion is performed in both the encoder and decoder portions of the BART model. EI-Summ is a simpler version of EDI-Summ where image fusion is done only in the encoder part of BART, inspired by (Kumar et al., 2022). We experimented with ResNet, VGG-Net, and VIT to generate image embeddings for our proposed multimodal model2. For the task of multimodal dialogue summarization, the chosen baselines were GPT-3.5, GPT-4V, and KM-CliConSummation (Tiwari et al., 2023). To evaluate the capability of handling codemixed text, we used the MMCQS dataset. The baselines selected for this task were GPT-3.5, GPT4, and the MedSumm models (Ghosh et al., 2024b) of LLAMA2 and Zephyr versions. We utilized automated metrics like BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), and METEOR (Banerjee and Lavie, 2005) scores alongside human evaluation metrics. We collaborated with a healthcare professional and several medical students to conduct human evaluations. Our approach encompasses the assessment of three unique and medically nuanced metrics: clinical evaluation score, factual recall (Abacha et al., 2023), and omission rate (Abacha 2We conducted experiments with varying image embeddings on the MMQS dataset. However, we observed that the results did not differ significantly across these embeddings. Therefore, we did not extend similar experiments to the MMCQS and MM_CliConSummation datasets. 13121 et al., 2023). 4.3 Automated Evaluation Tables 1, 3, and 4 present the comprehensive evaluation of our proposed model, EDI-Summ, across three different multimodal medical summarization datasets: MMQS, MMCQS, and MM_CliConSummation. MMQS and MMCQS focus on multimodal medical query summarization, while MM_CliConSummation focuses on multimodal medical dialogue summarization. R1) Comparison with baselines: From Table 1, we can infer that T5’s performance lagged behind the BART base model among the range of textual baseline models, leading to the conclusion that BART is a more advantageous choice for multimodal extension. In the case of multimodal baselines, KM-CliConSummation performed the best, followed by GPT-4V. For the task of medical dialogue summarization, as shown in Table 3, KM-CliConSummation achieved the highest performance. MedSumm(LLAMA-2) version was the top performer for code-mixed query summarization. The selection of these baselines is based on the works by (Ghosh et al., 2024b), (Ghosh et al., 2024a), and (Tiwari et al., 2023). R2) Influence of Visual Cues: The impact of incorporating visual cues with textual queries for generating more nuanced summaries is evident across all metrics from Table 1, 3 and 4. The encoder image fusion model: EI-Summ and EDI-Summ exhibit significant superiority over models that rely solely on textual queries as input. R3) Impact of Varied Pre-trained Vision Models for Generating Embeddings: In our exploration, as shown in Table 2, we investigated the impact of employing different pre-trained vision models to generate image embeddings for integration into the BART model. 
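As a reproducibility note on the automated scores reported in Tables 1-4, ROUGE, BLEU, and METEOR can be computed with standard open-source packages. The sketch below uses rouge-score and NLTK, which is our own choice of implementations since the paper does not state which were used; NLTK's METEOR additionally requires the WordNet data.

```python
# Sketch for computing the reported automated metrics with common packages.
# Package choice is ours; run nltk.download("wordnet") once for METEOR.
from rouge_score import rouge_scorer
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from nltk.translate.meteor_score import meteor_score

def score_summary(reference: str, prediction: str) -> dict:
    rs = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
    scores = {k: v.fmeasure for k, v in rs.score(reference, prediction).items()}
    ref_tok, pred_tok = reference.split(), prediction.split()
    smooth = SmoothingFunction().method1
    for n in (1, 2, 3, 4):  # cumulative BLEU-1 .. BLEU-4
        scores[f"bleu{n}"] = sentence_bleu(
            [ref_tok], pred_tok,
            weights=tuple(1.0 / n for _ in range(n)),
            smoothing_function=smooth)
    scores["meteor"] = meteor_score([ref_tok], pred_tok)
    return scores
```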
Our findings indicate that embeddings from ResNet and VGG tend to exhibit slightly superior performance compared to VIT, albeit the distinction is not markedly significant. R4) Ablation study with different modality infusion orders: Table 2 elucidates how performance varies with changes in the infusion order within the encoder and decoder of the BART model employing VGG embeddings. Our experimental findings underscore a consistent trend, revealing that optimal results are consistently achieved when fusion occurs at layer 3 in both the encoder and decoder of EDI-Summ. This pattern holds true across all pre-trained vision embeddings. In the scenario of the EI-Summ, the most favorable outcomes are attained through fusion at layer 3 of the encoder. 3 R5) Effective in handling codemixed text : Table-4 clearly suggests the effectiveness of EDISumm in handling Hinglish text. It performed much better than the baseline models that were proposed in (Ghosh et al., 2024b) . R6) Impact of Decoder Visual Cross Attention: A consistent trend emerges with the enhancement of ROUGE, BLEU, and METEOR scores when using EDI-Summ instead of EI-Summ, as shown in Tables 1, 3, and 4. Our evaluation across these three multimodal clinical summarization datasets supports our intuition that decoder attention improves the alignment of multimodal information. 4.4 Human Evaluation As hallucination is a big problem in multimodal generation (Sahoo et al., 2024a), we have conducted a rigorous human-level analysis of the generated summaries. A team of medical students, under the guidance of a doctor, conducted the human evaluation on 55 data samples from each of the generated summaries from the three datasets. For the baseline for human evaluation we decided to go with the generated summaries of GPT-4v as it is the state-of-the-art multimodal baseline. To evaluate the significance of decoder attention, we compare the results of EI-Summ and EDI-Summ across all three different datasets. The evaluation metrics included: Clinical Evaluation Score: The doctor and the team assigned ratings ranging from 1 (poor) to 5 (good), evaluating the summaries based on overall relevance, consistency, fluency, and coherence, Medical Fact-Based Metrics: Factual Recall (Abacha et al., 2023) and Omission Recall metrics (Abacha et al., 2023) were employed to measure how well the generated summary captured medical facts compared to the gold standard annotated summary. Table-5 presents the results of EDI-Summ, which outperforms all baseline methods like GPT-4V across the selected human evalua3In Table 2, we present our experiments on the MMQS dataset. We conducted similar experiments on the MMCQS and MM_CliConSummation datasets and observed consistent trends across all three datasets. 
13122 Model ROUGE-1 ROUGE-2 ROUGE-L BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR T5 45.28 28.12 41.98 39.2 27.9 22.91 18.9 41.96 BART 47.87 30.91 44.62 41.67 31.62 25.67 22.09 44.62 GPT-3.5 (few shot) 44.59 24.07 36.00 21.18 15.47 11.68 9.03 31.42 KM-CliConSummation 49.46 30.58 46.51 42.59 32.29 23.42 17.10 47.10 GPT-4V (few shot) 48.44 25.26 39.90 29.36 20.40 14.59 10.22 37.16 EI-Summ (RESNET) 54.22 30.92 46.72 43.24 30.94 22.96 17.06 49.43 EI-Summ (VGG) 53.88 30.41 46.77 43.12 30.45 22.75 16.62 50.24 EI-Summ (VIT) 53.62 30.12 46.34 43.10 30.55 22.55 16.55 49.16 EDI-Summ (RESNET) 54.50 30.89 47.38 44.27 31.55 23.64 17.64 51.09 EDI-Summ (VGG) 54.74 31.35 47.14 44.39 31.72 23.46 17.14 51.52 EDI-Summ (VIT) 53.91 30.53 46.95 43.77 31.32 23.11 16.92 49.50 Table 1: Performances of different models for multi-modal clinical query summarization in MMCS Dataset. EDISumm’s performance is superior to all the baselines and encoder-only fusion models. Our experiment shows EDI-Summ performs best with VGG embeddings, and EI-Summ performs best with RESNET embeddings. Among unimodal baselines, BART worked best, and KM-CliConSummation achieved the best results among multimodal baselines. The best results for each subsection are shown in bold. Encoder layer Decoder layer ROUGE-1 ROUGE-2 ROUGE-L BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR 2 3 54.28 30.71 47.25 44.25 31.70 23.76 17.88 50.64 2 4 53.56 30.89 46.9 43.54 31.30 23.54 17.6 51.36 3 3 54.74 31.35 47.14 44.39 31.72 23.46 17.14 51.55 3 4 53.43 29.91 46.61 43.54 30.85 22.98 16.63 51 4 4 54.23 30.88 46.65 43.7 31.38 23.29 16.68 49.9 Table 2: Evaluation of the proposed EDI-Summ model under diverse scenarios involving variations in the infusion order of modalities. Here, VGG-Net embeddings are used for multimodal fusion.We achieved the best results when fusion in done at layer 3 of both the encoder and decoder which are shown in bold. Model ROUGE-1 ROUGE-2 ROUGE-L BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR GPT-3.5(few shot) 51.79 27.84 47.75 40.6 28.7 19.97 14.18 42.13 GPT-4V(few shot) 52.3 27.98 48.2 41.39 29.21 20.18 14.30 42.52 KM-CliConSummation 60.27 37.90 50.87 48.77 37.37 28.44 22.16 56.70 EI-Summ 62.00 39.12 53.16 50.00 38.54 29.85 23.37 59.55 EDI-Summ 62.33 40.40 53.37 49.61 38.71 30.23 23.93 61.09 Table 3: Evaluation of different models on the MM_CliConSummation dataset. ResNet is utilized for generating image embeddings. The performances of EI-Summ and EDI-Summ are quite similar. Model ROUGE-1 ROUGE-2 ROUGE-L BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR GPT-3.5(few shot) 39.95 14.93 27.95 23.50 13.66 8.34 5.26 25.67 GPT-4V(few shot) 43.12 18.02 31.79 28.47 17.81 12.06 8.03 29.94 MedSumm ( LLAMA-2) 46.75 25.59 38.41 32.5 22.55 17.56 14.88 35.74 MedSumm( Zephyr ) 44.55 25.37 34.97 27.05 19.48 15.84 13.6 33.37 EI-Summ 53.20 30 .87 44.92 45.00 33.81 27.11 23.52 47.54 EDI-Summ 53.32 30.64 45.01 45.88 34.13 27.33 23.51 48.46 Table 4: Performance of different models on MMCQS dataset. Surprisingly, EDI-Summ’s performance was way superior to the multimodal baselines like MedSumm and GPT-4V, which shows the effectiveness of our proposed model in handling codemixed text. tion metrics on all three datasets.4 4.5 Qualtitative Analysis of Generated Summaries In Figure 3, we examine two examples from MMQS dataset comparing the gold summary with 4One key observation is that GPT-4V sometimes misses important medical concepts in the generated summaries. the BART (Text only) model, EI-Summ and EDISumm. In both instances, the text-only model exhibits the poorest performance. 
Among the remaining two models, it is evident that the Encoder-only model overlooks certain intricate clinical terms in the final summary, which is potentially crucial for analyzing a patient’s case. On the other hand, EDISumm consistently generates the most clinically 13123 Dataset Model Factual recall Omission rate Clinical eval score MMQS GPT-4v 0.76 0.22 3.23 EI-Summ 0.78 0.19 3.45 EDI-Summ 0.81 0.17 3.51 MMCQS GPT-4v 0.72 0.28 3.12 EI-Summ 0.76 0.22 3.28 EDI-Summ 0.79 0.18 3.47 MM_CliConsummation GPT-4v 0.77 0.24 3.29 EI-Summ 0.81 0.19 3.44 EDI-Summ 0.81 0.20 3.43 Table 5: Results of human evaluation across three different datasets. A consistent trend was observed, indicating that EDI-Summ outperformed the selected baselines. Notably, GPT-4V’s performance was even weaker than that of EI-Summ. nuanced summaries, as shown in the highlighted text in red color in Figure 3. More qualitative analysis is shown in the Appendix section Image Question Good day, I have a swollen lump on my tummy (right hand side), I consulted the doctor before and he said it was hernia. Few months after that I was diagnosed with ulcers in my mouth. Please see the image of mouth below and they are very painful. I went to another doctor and I explained to him all the diagnosis but he had a different opinion, he ruled out hernia and suspects gastrointestinal tract. I have both symptoms of ulcers, hernia and gastrointestinal tract. Now the lump gets sore and painful and sometimes feels itchy. please advice what could be the problem. Many thanks. Hi, I am a 41 yr old female, for past three months I have had a heavy cold with bad catarrh, headaches, feverish and constantly tired. For the past 3 weeks I have had a hard lump to the side of my neck about 1 inch in circumference, Doctor Please look at the image of patient tonsils below. It's hard and sore to touch, its on bottom right hand side on the bone. Gold Response What could be causing the patient's swollen and painful lump, mouth ulcers, and conflicting diagnoses of hernia and gastrointestinal tract issues? What is the potential diagnosis for a 41-year-old female with a heavy cold, headaches, fever, fatigue, and a hard, sore lump on the side of her neck? BART (Text) What causes swollen lump and symptoms of ulcers? What could be the cause of hard lump on neck, on bottom right? KM-CliConSumm ation What could be the cause of a painful, swollen lump on the tummy? What could be the cause of a hard lump to the back of the neck? EI-Summ What could be the cause of a swollen lump on the tummy that is sore, painful, and itchy? What could be the cause of a hard, sore lump to the side of the neck in a 41-year-old female patient? EDI-Summ What could be the cause of a painful, swollen lump on the tummy, along with ulcers, hernia, and gastrointestinal issues? What could be the cause of a 41-year-old female's heavy cold, catarrh, headaches, fever, fatigue, and hard, sore lump in the neck? Figure 3: Analysis of summaries generated by our proposed model concerning various baselines. Our proposed model outperformed various baselines in capturing essential medical concepts, as evident in the highlighted text. 4.6 Statistical Analysis of the Generated Summaries The median number of words in gold summary is 24, and in generated summaries is around 23, so they are pretty close. The golden and predicted summaries’ word clouds are shown below in Figure 4. 
(a) Word Cloud for Gold Summaries (b) Word Cloud for Predicted Summaries Figure 4: Comparison of Word Clouds for Gold and Predicted Summaries for MMCS dataset 5 Risk Analysis Through comprehensive automated, human, and qualitative analyses, we have discerned that visual cues significantly contribute to generating more clinically nuanced summaries. However, particularly in sectors as critical as healthcare, the stakes are high for any potential misinformation or misinterpretation. In our human evaluation, instances were identified where the model, due to visual information, missed crucial keywords from the textual query. This occasional divergence underscores the need to carefully balance visual and textual inputs to ensure a more comprehensive summary. Moreover, our model is currently limited to recognizing 18 main symptoms. If presented with an image con13124 taining symptoms outside this set, the model might inadvertently generate misinformation in the final summary. Additionally, the quality of the image plays a pivotal role in generating accurate summaries. Our experimentation with a low-quality image revealed a lack of proper detail capture. Hence, the involvement of a medical expert becomes im perative, particularly in high stakes, to mitigate the risks associated with potential inaccuracies or oversights. 6 Conclusion In this paper, we introduced a multimodal model EDI-Summ, which incorporates contextual multimodal attention in both the encoder and decoder of the BART model. To evaluate the effectiveness of our proposed model, we conducted automated and human evaluations on three clinical multimodal summarization benchmark datasets. Our analysis suggests that incorporating contextual attention in the encoder and cross-attention in the decoder enhances the performance of multimodal summarization. 7 Limitations There are some noticeable limitations of our work. Those are enumerated in the points below: 1) Although our comprehensive evaluation indicates that incorporating attention in both the encoder and decoder of the BART model achieves state-of-the-art performance for multimodal summarization, this claim lacks strong theoretical support. 2) In our experiments, we primarily focused on English and code-mixed settings. However, the effectiveness of this framework in handling multilingual text remains an open question. We plan to explore this research question in future work. 3) All the datasets MMQS, MMCQS, and MM_CliConSummation are multimodal, handling only text and images as inputs. However, the effectiveness of our proposed EDI-Summ in handling other modalities, such as videos and speech, remains to be explored in future work. 8 Ethics Statement Our collaborative efforts extend to close coordination with a medical expert, who also holds the position of co-authorship on this paper. The execution of the entire task, spanning from experiment design to human validation and qualitative analysis, involved the dedicated participation of three MBBS students, who volunteered for the project. As a testament to ethical considerations, these students were compensated following the prevailing minimum wage guidelines in India as outlined5. Furthermore, to reinforce the ethical integrity of our work, we are in the process of obtaining Institutional Review Board (IRB) approval for this project. It is important to note that the proposed model is designed solely for the task of summarization and does not include any predictive functionalities that could unfairly impact the user. 
This deliberate design choice underscores our commitment to ethical practices and responsible model usage. 9 Acknowledgements Akash Ghosh and Sriparna Saha extend their sincere appreciation to the SERB (Science and Engineering Research Board) POWER scheme, Department of Science and Engineering, Government of India, for generously funding this research. References Asma Ben Abacha and Dina Demner-Fushman. 2019. On the summarization of consumer health questions. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2228– 2234. Asma Ben Abacha, Yassine M’rabet, Yuhao Zhang, Chaitanya Shivade, Curtis Langlotz, and Dina Demner-Fushman. 2021. Overview of the mediqa 2021 shared task on summarization in the medical domain. In Proceedings of the 20th Workshop on Biomedical Language Processing, pages 74–85. Asma Ben Abacha, Wen-wai Yim, George Michalopoulos, and Thomas Lin. 2023. An investigation of evaluation metrics for automated medical note generation. arXiv preprint arXiv:2305.17364. Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pages 65–72. Xiaoyan Cai, Sen Liu, Junwei Han, Libin Yang, Zhenguo Liu, and Tianming Liu. 2021. Chestxraybert: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia, 25:845–855. 5https://www.india-briefing.com/news/guide-minimumwage-india-2023-19406.html/ 13125 Bharath Chintagunta, Namit Katariya, Xavier Amatriain, and Anitha Kannan. 2021. Medically aware gpt-3 as a data generator for medical dialogue summarization. In Machine Learning for Healthcare Conference, pages 354–372. PMLR. Jean-Benoit Delbrouck, Cassie Zhang, and Daniel Rubin. 2021. QIAI at MEDIQA 2021: Multimodal radiology report summarization. In Proceedings of the 20th Workshop on Biomedical Language Processing, pages 285–290, Online. Association for Computational Linguistics. Jay DeYoung, Iz Beltagy, Madeleine van Zuylen, Bailey Kuehl, and Lucy Lu Wang. 2021. Ms2: Multidocument summarization of medical studies. arXiv preprint arXiv:2104.06486. Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C Wallace. 2019. Eraser: A benchmark to evaluate rationalized nlp models. arXiv preprint arXiv:1911.03429. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929. Akash Ghosh, Arkadeep Acharya, Raghav Jain, Sriparna Saha, Aman Chadha, and Setu Sinha. 2024a. Clipsyntel: clip and llm synergy for multimodal question summarization in healthcare. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 22031–22039. Akash Ghosh, Arkadeep Acharya, Prince Jha, Sriparna Saha, Aniket Gaudgaul, Rajdeep Majumdar, Aman Chadha, Raghav Jain, Setu Sinha, and Shivani Agarwal. 2024b. Medsumm: A multimodal approach to summarizing code-mixed hindi-english clinical queries. In European Conference on Information Retrieval, pages 106–120. Springer. Akash Ghosh, Arkadeep Acharya, Sriparna Saha, Vinija Jain, and Aman Chadha. 2024c. 
Exploring the frontier of vision-language models: A survey of current methodologies and future directions. arXiv preprint arXiv:2404.07214. Daniel J Goff and Thomas W Loehfelm. 2018. Automated radiology report summarization using an opensource natural language processing pipeline. Journal of digital imaging, 31:185–192. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770– 778. Prince Jha, Gaël Dias, Alexis Lechervy, Jose G Moreno, Anubhav Jangra, Sebastião Pais, and Sriparna Saha. 2022. Combining vision and language representations for patch-based identification of lexicosemantic relations. In Proceedings of the 30th ACM International Conference on Multimedia, pages 4406– 4415. Prince Jha, Krishanu Maity, Raghav Jain, Apoorv Verma, Sriparna Saha, and Pushpak Bhattacharyya. 2024. Meme-ingful analysis: Enhanced understanding of cyberbullying in memes through multimodal explanations. arXiv preprint arXiv:2401.09899. Anirudh Joshi, Namit Katariya, Xavier Amatriain, and Anitha Kannan. 2020. Dr. summarize: Global summarization of medical dialogue by exploiting local structures. arXiv preprint arXiv:2009.08666. Raghvendra Kumar, Ratul Chakraborty, Abhishek Tiwari, Sriparna Saha, and Naveen Saini. 2023. Diving into a sea of opinions: Multi-modal abstractive summarization with comment sensitivity. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, pages 1117–1126. Ritesh Kumar, Aishwarya N Reganti, Akshit Bhatia, and Tushar Maheshwari. 2018. Aggression-annotated corpus of hindi-english code-mixed data. arXiv preprint arXiv:1803.09402. Shivani Kumar, Atharva Kulkarni, Md Shad Akhtar, and Tanmoy Chakraborty. 2022. When did you become so smart, oh wise one?! sarcasm explanation in multi-modal multi-party dialogues. arXiv preprint arXiv:2203.06419. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pages 74–81. Krishanu Maity, Prince Jha, Sriparna Saha, and Pushpak Bhattacharyya. 2022. A multitask framework for sentiment, emotion and sarcasm aware cyberbullying detection from multi-modal code-mixed memes. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1739–1749. Sabine Molenaar, Lientje Maas, Verónica Burriel, Fabiano Dalpiaz, and Sjaak Brinkkemper. 2020. Medical dialogue summarization for automated reporting in healthcare. In Advanced Information Systems Engineering Workshops: CAiSE 2020 International Workshops, Grenoble, France, June 8–12, 2020, Proceedings 32, pages 76–88. Springer. 13126 Giulio Nittari, Ravjyot Khuman, Simone Baldoni, Graziano Pallotta, Gopi Battineni, Ascanio Sirignano, Francesco Amenta, and Giovanna Ricci. 2020. Telemedicine practice: review of the current ethical and legal challenges. Telemedicine and e-Health, 26(12):1427–1437. OpenAI. 2023a. Gpt3.5. https://openai.com/blog/ gpt-3-5-turbo-fine-tuning-and-api-updates. OpenAI. 2023b. Gpt4. https://openai.com/ research/gpt-4. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. 
Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32. AS Poornash, Atharva Deshmukh, Archit Sharma, and Sriparna Saha. 2023. Aptsumm at biolaysumm task 1: Biomedical breakdown, improving readability by relevancy based selection. In The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks, pages 579–585. Weizhen Qi, Yu Yan, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, and Ming Zhou. 2020. Prophetnet: Predicting future n-gram for sequence-to-sequence pre-training. arXiv preprint arXiv:2001.04063. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485–5551. Pranab Sahoo, Prabhash Meharia, Akash Ghosh, Sriparna Saha, Vinija Jain, and Aman Chadha. 2024a. Unveiling hallucination in text, image, video, and audio foundation models: A comprehensive survey. arXiv preprint arXiv:2405.09589. Pranab Sahoo, Sriparna Saha, Samrat Mondal, Sujit Chowdhury, and Suraj Gowda. 2022a. Computeraided covid-19 screening from chest CT-scan using a fuzzy ensemble-based technique. In 2022 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE. Pranab Sahoo, Sriparna Saha, Samrat Mondal, and Suraj Gowda. 2022b. Vision transformer based covid-19 detection using chest CT-scan images. In 2022 IEEEEMBS International Conference on Biomedical and Health Informatics (BHI), pages 01–04. IEEE. Pranab Sahoo, Ayush Kumar Singh, Sriparna Saha, Vinija Jain, Samrat Mondal, and Aman Chadha. 2024b. A systematic survey of prompt engineering in large language models: Techniques and applications. arXiv preprint arXiv:2402.07927. Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Chanchal Suman, Sriparna Saha, Aditya Gupta, Saurabh Kumar Pandey, and Pushpak Bhattacharyya. 2022. A multi-modal personality prediction system. Knowledge-Based Systems, 236:107715. Abhisek Tiwari, Manisimha Manthena, Sriparna Saha, Pushpak Bhattacharyya, Minakshi Dhar, and Sarbajeet Tiwari. 2022. Dr. can see: towards a multi-modal disease diagnosis virtual assistant. In Proceedings of the 31st ACM international conference on information & knowledge management, pages 1935–1944. Abhisek Tiwari, Anisha Saha, Sriparna Saha, Pushpak Bhattacharyya, and Minakshi Dhar. 2023. Experience and evidence are the eyes of an excellent summarizer! towards knowledge infused multi-modal clinical conversation summarization. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, pages 2452– 2461. Yash Verma, Anubhav Jangra, Raghvendra Verma, and Sriparna Saha. 2023. Large scale multi-lingual multimodal summarization dataset. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2023, Dubrovnik, Croatia, May 2-6, 2023, pages 3602–3614. 
Association for Computational Linguistics. Byron C Wallace, Sayantan Saha, Frank Soboczenski, and Iain J Marshall. 2021. Generating (factual?) narrative summaries of rcts: Experiments with neural multi-document summarization. AMIA Summits on Translational Science Proceedings, 2021:605. Lucy Lu Wang, Kyle Lo, Yoganand Chandrasekhar, Russell Reas, Jiangjiang Yang, Douglas Burdick, Darrin Eide, Kathryn Funk, Yannis Katsis, Rodney Kinney, et al. 2020. Cord-19: The covid-19 open research dataset. ArXiv. Sibo Wei, Wenpeng Lu, Xueping Peng, Shoujin Wang, Yi-Fei Wang, and Weiyu Zhang. 2023. Medical question summarization with entity-driven contrastive learning. arXiv preprint arXiv:2304.07437. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface’s transformers: State-ofthe-art natural language processing. arXiv preprint arXiv:1910.03771. 13127 Yuan Xue, Tao Xu, L Rodney Long, Zhiyun Xue, Sameer Antani, George R Thoma, and Xiaolei Huang. 2018. Multimodal recurrent model with attention for automated radiology report generation. In Medical Image Computing and Computer Assisted Intervention–MICCAI 2018: 21st International Conference, Granada, Spain, September 16-20, 2018, Proceedings, Part I, pages 457–466. Springer. Baosong Yang, Jian Li, Derek F Wong, Lidia S Chao, Xing Wang, and Zhaopeng Tu. 2019. Contextaware self-attention networks. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 387–394. Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. In International Conference on Machine Learning, pages 11328–11339. PMLR. A Appendix A.1 Dataset and Code The code of our proposed model and dataset will be shared in this GitHub account https://github.com/AkashGhosh/From-Sights-toInsights-Towards-Summarization-of-MultimodalClinical-Documents. B Qualitative analysis We conducted more analysis of the samples from the test set. Our exploration revealed that among the various baselines examined, EDI-Summ stood out remarkably in its ability to encapsulate intricate medical concepts. Notably, it excelled in accurately capturing the precise medical terminology derived from the images (highlighted in red), thus providing a more nuanced summary generation. Image Question Hello, Doctor. I hope you're having a pleasant day. I'm reaching out because I'm quite concerned about my 24-year-old daughter, Sarah. She's been experiencing monthly episodes of pain in her tonsils, accompanied by fever . Please see the attached image. Interestingly, we've noticed that high doses of vitamins seem to provide temporary relief, but the symptoms return each month. Our family has a history of autoimmune disorders. Given this, we're worried about potential underlying causes for Sarah's recurring symptoms. Could you please advise us on what might be causing this and what steps we should take next? Thank you very much for your time and expertise. Good morning, Doctor. I hope this message finds you well. I'm writing to seek your guidance regarding my father, a veteran who has been experiencing problems in his foot. Please see the image attached. He served in Vietnam and was exposed to Agent Orange during his time there. Additionally, he has a history of anemia and low B12 levels. He also suffers from diabetes. We're concerned that these factors may be contributing to his foot swelling. 
Could you please advise us on whether Agent Orange exposure, anaemia, and low B12 levels could indeed be related to his symptoms, and what steps we should take next to address this issue? Your expertise would be greatly appreciated. Gold Response What is the cause of swollen tonsils, and fever, relieved temporarily by high doses of vitamins? Can Agent Orange exposure, anaemia, and low B12 levels contribute to foot swelling in a veteran with diabetes? KM-CliCon Summation What could be causing recurring pain in Sarah's tonsils, accompanied by fever Does Agent Orange exposure, anaemia, low B12 levels, diabetes? EI-Summ What could be the cause of in the tonsils, along with fever, and tenderness? What could be causing persistent issues in foot, anemia with a history of service in Vietnam? EDI-Summ What could be the cause of recurrent swollen tonsils on the neck, along with fever, tonsillitis, and tenderness? What could be the cause of foot swelling, anaemia, low B12 levels, and swollen feet in a patient with a history of service in Vietnam? Figure 5: Some more examples of summaries generated by our proposed model with respect to various baselines. B.1 FAQs 1. Why BART is chosen over T5 as the base model? Ans: We chose BART-Base as the foundational model for our proposed method after thorough experimentation. Our extensive evaluations consistently revealed that BART-Base outperforms T5-Base across various metrics and perspectives. We favored BART-Base for its encoder-decoder architecture, allowing the encoding of diverse information before summary generation. Additionally, to align with our available computing resources, we concentrated on the Base versions of all models for our comparative analysis. 2. What are the qualifications of the doctor and the other annotators? How senior are they? Ans: To uphold ethical standards, the entire process was conducted under the guidance of a senior medical doctor—an additional professor at a government medical college and a co-author of our paper. The team included three medical postgraduate students who were closely supervised by the doctor throughout the entire process. 3. Why do we use cross-attention instead of contextual cross-attention in the decoder of EDISumm? Ans: We use cross-attention instead of contextual cross-attention in the decoder of EDI-Summ because while generating the tokens, the sequence length at the decoder increases by one every time we generate a single token. Due to this, we can’t calculate contextualized vector as given in Equation 2 because it requires the addition of key and value vectors from text and image, but since the sequence length of text keeps on increasing while generating tokens and the sequence length of the image representation is fixed, we can’t add these two vectors. 4. Are the results in our experiments statistically significant? Ans: We conducted five runs for each of the top-performing text baseline (BART), multimodal baseline (EI-Summ), and our proposed model, EDISumm, followed by a t-statistical test. The resulting p-value, calculated at 0.004, signifies statistical significance with a confidence level of 95%. Consequently, we concluded that the observed differences are statistically significant. 5. How EDI is different from other multimodal baselines? Ans: In our paper, EDI-Summ is constructed on the foundation of the BART model, where we 13128 have integrated Multimodal Context-Aware Attention in the encoder and Image Cross Attention in the decoder. 
The EI-Summ model implements image fusion solely in the encoder, a methodology similar to that of Kumar et al. (). We conducted extensive experiments utilizing various pre-trained image embedding techniques, including VGG, VIT, and Resnet, observing performance enhancements compared to our text and other multimodal baselines (EI-Summ). Similarly, EDI-Summ performs the best in human evaluation metrics too. To the best of our knowledge, our work represents the inaugural attempt at incorporating image fusion in both the encoder and decoder of the BART model. 6. Why only multimodal baselines are used for analysis of results of MMCQS and MM_CliConSummation? Ans :The insights gained from the experiments on the MMCS dataset allowed us to streamline the experiments for the MMCQS and MM_CliConSummation datasets. For these later datasets, we only experimented with multimodal baselines, as text-based baselines were not a good baseline for our model. 7.Definition of relevance, consistency and fluency in our work. Ans: Certainly, here’s how each criterion would be defined in the context of human evaluation for medical passage summarization: 1. Fluency: The smoothness and clarity of the summary, evaluating how effectively the medical information is communicated coherently and understandably, without jargon or ambiguity. 2. Adequacy: The accuracy and comprehensiveness of the summary in encapsulating all crucial details and main points of the medical passage, ensuring that no vital information is omitted or misrepresented. 3. Informativeness: The degree to which the summary provides valuable and relevant medical insights, assessing its ability to convey essential information succinctly and clearly while avoiding extraneous or redundant details. 4. Persuasiveness: The convincing presentation of medical findings or recommendations in the summary, evaluating its ability to influence the reader’s understanding or decision-making process regarding the medical topic, potentially leading to further action or consideration. 13129
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13130–13147 August 11-16, 2024 ©2024 Association for Computational Linguistics When Phrases Meet Probabilities: Enabling Open Relation Extraction with Cooperating Large Language Models Jiaxin Wang1,3 , Lingling Zhang1,3∗, Wee Sun Lee2 , Yujie Zhong1 , Liwei Kang2, Jun Liu1,4 1School of Computer Science and Technology, Xi’an Jiaotong University 2 School of Computing, National University of Singapore 3 Ministry of Education Key Laboratory of Intelligent Networks and Network Security, China 4 Shaanxi Province Key Laboratory of Big Data Knowledge Engineering, China [email protected] {zhanglling,liukeen}@xjtu.edu.cn {leews,kang}@comp.nus.edu.sg [email protected] Abstract Current clustering-based open relation extraction (OpenRE) methods usually apply clustering algorithms on top of pre-trained language models. However, this practice has three drawbacks. First, embeddings from language models are high-dimensional and anisotropic, so using simple metrics to calculate distances between these embeddings may not accurately reflect the relational similarity. Second, there exists a gap between the pre-trained language models and downstream clustering for their different objective forms. Third, clustering with embeddings deviates from the primary aim of relation extraction, as it does not directly obtain relations. In this work, we propose a new idea for OpenRE in the era of LLMs, that is, extracting relational phrases and directly exploiting the knowledge in LLMs to assess the semantic similarity between phrases without relying on any additional metrics. Based on this idea, we developed a framework, ORELLM, that makes two LLMs work collaboratively to achieve clustering and address the above issues. Experimental results on different datasets show that ORELLM outperforms current baselines by 1.4% ∼3.13% in terms of clustering accuracy. 1 Introduction Relation Extraction (RE) focuses on extracting relations between entity pairs from texts. However, conventional RE is based on the closed-set hypothesis (Liu et al., 2022; Yan et al., 2023; Xu et al., 2023) for handling pre-defined relations, thereby limiting its capability to deal with new emerging relations in the real world. Therefore, Open Relation Extraction (OpenRE), which aims to extract new relations from unlabeled open-domain corpora, has received more attention. One paradigm of OpenRE is open information extraction (OpenIE) (Etzioni et al., 2008; Fader ∗Corresponding author (a) Current relational similarity measurement (b) Our relational similarity measurement BERT 0.52 language of film Vector Representation Similarity Score Unlabeled Sentences Euclidean Distance -Red Love is a 1982 German documentary film. -The Hunchback of Notre Dame is a French novel Unlabeled Sentences Relational Phrases language of novel Generative LLM 0.87 Similarity Score 𝑉! 𝑉" 𝑉! −𝑉" " 𝑊! 𝑊" -Red Love is a 1982 German documentary film. -The Hunchback of Notre Dame is a French novel 𝑃##$ 𝑊! 𝑊" Figure 1: Comparison of relational similarity measurement methods. et al., 2011), which directly extracts relation-related phrases from sentences based on syntactic analysis, but often produces redundant or ambiguous results. Therefore, another paradigm, namely unsupervised relation discovery (Yao et al., 2011; ElSahar et al., 2017), has been proposed to bridge the gap between closed-set RE and OpenIE by inducing new relations automatically. 
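In practice, this unsupervised relation discovery paradigm is usually realized as the embed-then-cluster pipeline of Figure 1(a), discussed in the next paragraph. A minimal sketch of that pipeline, in which the encoder checkpoint, mean pooling, and cluster count are our illustrative assumptions, looks as follows.

```python
# Sketch of the conventional pipeline of Figure 1(a): encode instances with a
# pre-trained language model, then cluster the embeddings with K-means under
# Euclidean distance. Checkpoint, pooling, and n_clusters are assumptions.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.cluster import KMeans

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased").eval()

def embed(sentences):
    batch = tok(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = enc(**batch).last_hidden_state            # (B, L, 768)
    mask = batch["attention_mask"].unsqueeze(-1)
    return ((hidden * mask).sum(1) / mask.sum(1)).numpy()  # mean pooling

sentences = ["Red Love is a 1982 German documentary film.",
             "The Hunchback of Notre Dame is a French novel."]
labels = KMeans(n_clusters=2, n_init=10).fit_predict(embed(sentences))
```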
Specifically, it commonly formulates OpenRE as a clustering task, where the most critical challenge lies in how to measure the relational similarity between unlabeled instances effectively for clustering. For this purpose, a popular practice (as shown in Figure 1(a)) is to first obtain embeddings from pre-trained language models such as BERT (Devlin et al., 2019), and then apply clustering algorithms, e.g., K-means with euclidean, to measure the distances of these embeddings and form clusters (Hu et al., 2020; Tran et al., 2020). Although this practice benefits from the knowledge in pre-trained language models, it has three drawbacks. First, the embeddings from pre-trained language models encode complex linguistic information (Reif et al., 2019; Zhao et al., 2021) and are anisotropic (Gao et al., 2019; Li et al., 2020). Consequently, using simple metrics, e.g., 13130 Euclidean or cosine, to calculate distances between these embeddings may not accurately reflect the relational similarity. Second, this practice exists a gap between the pre-trained language models and downstream clustering for their different objective forms (Wang et al., 2020; Qiu et al., 2020). In other words, the pre-training tasks of language models (i.e., masked language modeling or next token prediction) are not consistent with the downstream clustering task (i.e., classification with pseudo labels), which restricts the transfer and adaptation of knowledge in pre-trained language models to unsupervised clustering. Third, the essence of these methods is to perform clustering with embeddings, which does not meet the original intention of RE since the relations in each sentence are not actually extracted. While some methods (Zhao et al., 2021; Wang et al., 2022; Li et al., 2022) alleviate the first issue by using labeled instances with pre-defined relations to guide similarity learning in new types, they still fail to address the other two drawbacks. Recently, a series of powerful generative large language models (LLMs) have been proposed and brought revolutionary changes to the field of NLP (Brown et al., 2020; Touvron et al., 2023; OpenAI, 2023). We believe that LLMs have the potential to solve the above problems for two reasons. For one thing, LLMs are trained on a large amount of general text corpora, and have excellent generalization in understanding and generating text (Song et al., 2023). This enables them to analyze and produce relational phrases for unlabeled instances. For another thing, their generation strategy follows “next token prediction” (Radford et al., 2019), which predicts the next token by calculating the likelihood probability of this token conditioned on its preceding sequence. This probability indicates the relatedness degree of the predicted token to its entire prefix and provides a new way to measure the similarity between phrases. In this work, we propose a novel framework, namely ORELLM, for OpenRE. As illustrated in Figure 1(b), the key of ORELLM is to adopt LLMs in two aspects: generating relational phrases for relation extraction and calculating conditional probabilities between phrases for similarity learning. Specifically, ORELLM consists of an API-based LLM and an open-source LLM. The former serves as a phrase extractor, generating relational phrases from instances. The latter operates as a probability estimator, computing the likelihood that one phrase generates another, thereby reflecting the semantic similarity among the generated phrases. 
As a result, the probability estimator directly exploits the knowledge in LLMs to assess the semantic relatedness between phrases, without relying on any additional metrics. Moreover, we propose a cooperative cycle to make these two LLMs work collaboratively for clustering. In this cycle, we first select the initial centers and assign each data point to the corresponding centers according to probabilities. We then sort the data in each cluster by the magnitude of probability scores to obtain reliable and ambiguous data. After that, we design a "Rethink" strategy to correct the potentially erroneous phrases in ambiguous data and discover new centers. In addition, we adopt the reliable data to fine-tune the probability estimator. This process is consistent with the next-token-prediction pattern, thus alleviating the gap between pre-trained language models and the downstream clustering task. Finally, we use the fine-tuned probability estimator to reassign labels and update clustering for the next cycle.

To summarize, the main contributions of our work are as follows: (1) We propose a new idea for OpenRE in the era of LLMs, that is, first extract relational phrases and then directly exploit the knowledge in LLMs to assess the semantic similarity between phrases, without relying on any additional metrics. (2) Based on this idea, we propose a framework ORELLM that makes two LLMs work collaboratively for clustering. (3) We conduct experiments on two datasets and ORELLM achieves better performance than the existing state-of-the-art methods. (4) Additionally, ORELLM can output relational phrases for each instance, which fulfills the original intention of RE.

2 Methodology

2.1 Problem Formulation

Let $D = \{\langle s_i, h_i, t_i \rangle\}_{i=1}^{N}$ denote the open relational corpus with N instances. Each instance includes a sentence $s_i$ along with its corresponding head entity $h_i$ and tail entity $t_i$. OpenRE aims to discover relations in D without annotation labels or pre-defined types. Therefore, it is commonly formulated as a clustering task that clusters the instances in D into K groups and outputs the clustering assignment $Y = \{y_i\}_{i=1}^{N}$.

2.2 Overview

As depicted in Figure 2, our ORELLM consists of two LLMs working in three main procedures: (1) Utilizing an API-based LLM as a phrase extractor (denoted as M_ext) to generate relational phrases for instances in D; (2) Employing an open-source LLM as a probability estimator (denoted as M_pro) to compute the likelihood of one phrase generating another. This procedure outputs a probability space that reflects the semantic similarity among the generated phrases. (3) After cluster initialization, we filter phrases based on the current clustering results, obtaining reliable and ambiguous data. Subsequently, we establish a cooperative cycle with three steps to make M_ext and M_pro work collaboratively for cluster learning. First, the ambiguous data serves as feedback to prompt M_ext to rethink, i.e., revise current phrases to generate new ones. We then check these revised phrases for any new centers and add them to the existing center set. Second, we use reliable data to fine-tune M_pro. Third, we update centers and reassign labels for the next cycle. The details are as follows.

[Figure 2 omitted: a diagram of the ORELLM framework with three panels, (a) Generating Relational Phrases, (b) Estimating Similarity Probabilities, and (c) Cooperative Cycle for Clustering, showing the LLM-based phrase extractor, the LLM-based probability estimator, the probability space, and the flow between reliable data, ambiguous data, and cluster centers.] Figure 2: Illustration of our ORELLM framework. In the cooperative cycle, (1) the ambiguous data serves as feedback to prompt M_ext to rethink; then, we check for any new centers and add them to the existing center set; (2) we use reliable data to fine-tune M_pro; (3) we update centers and reassign labels for the next cycle.

2.3 Generating Relational Phrases

Previous research demonstrates that prompting LLMs to generate intermediate rationales can greatly improve their performance on specific tasks (Wadhwa et al., 2023; Wan et al., 2023). Therefore, as shown in Figure 2(a), we construct instructions $P_G = \{p_1, p_2\}$ for each input sentence $s_i$. Here, $p_1$ guides M_ext to identify the roles of $h_i$ and $t_i$ and to analyze the semantic associations between them, while $p_2$ instructs M_ext to summarize the corresponding relational phrase based on this analysis. We also provide a demonstration in $P_G$ to make M_ext better understand and execute the instructions (more details are in Appendix A.4). Note that the relation of this demonstration is different from all relations in D. In short, this procedure can be formulated as:

$$w_i = M_{ext}\big(P_G, \langle s_i, h_i, t_i \rangle\big), \quad \langle s_i, h_i, t_i \rangle \in D, \qquad (1)$$

where $w_i \in W$ represents the generated relational phrase for the i-th input instance.
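To make the two-stage prompting concrete, here is a minimal sketch of how the extraction step could be wired up. The `call_llm` helper, the exact prompt wording, and the abridged demonstration are illustrative assumptions (the paper's full templates p1 and p2 are given in Appendix A.4), so this is a sketch rather than the authors' implementation.

```python
# Illustrative sketch of the two-stage phrase extraction in Section 2.3.
# `call_llm` is a hypothetical wrapper around the API-based extractor M_ext
# (e.g., gpt-3.5-turbo); replace it with a real chat-completion call.

DEMO = ("Example: 'The occupancy of the show surpassed the Hindi films Baar Baar Dekho "
        "and Freaky Ali.' Head: Baar Baar Dekho, Tail: Hindi. Analysis: the head is a "
        "film and the tail is a language, so the relation is the language the film is "
        "made in. Answer: language of film.")

def call_llm(prompt: str) -> str:
    """Placeholder for an API call to M_ext; not implemented here."""
    raise NotImplementedError

def extract_relational_phrase(sentence: str, head: str, tail: str) -> str:
    # p1: ask the model to analyze the entity types and the relation between them.
    p1 = (f"In sentence '{sentence}', the head entity is '{head}' and the tail entity "
          f"is '{tail}'. Please first analyze the type of the head entity and the tail "
          f"entity, then describe the relation between them. {DEMO}")
    analysis = call_llm(p1)
    # p2: ask the model to compress the analysis into a short relational phrase.
    p2 = (f"Based on the following analysis, summarize the relation between the head "
          f"and tail entities in no more than five words. Analysis: {analysis} Answer:")
    return call_llm(p2).strip()
```

The short-phrase constraint in p2 matters later: the similarity scores of Section 2.4 are average log-probabilities, which become harder to compare as phrases grow longer.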
2.4 Estimating Similarity Probabilities

One characteristic of relations is that they have a diversity of expressions, which means that even the same relation can be expressed differently depending on the context. Consequently, the obtained relational phrases are also diverse, as they are summarized from various instances. For further clustering, we need to quantify the similarity of these phrases efficiently. As illustrated in Figure 2(b), we adopt an open-source generative language model M_pro as a similarity estimator. Specifically, we leverage M_pro to compute the likelihood of $w_j$ conditioned on $w_i$ as the semantic similarity score of $w_i$ and $w_j$. To facilitate this, we design a template $\mathcal{T}$ that transforms $w_i$ into $\mathcal{T}(w_i)$ as: "Q: For a given relational phrase <w_i>, please generate another one conveying the same relation or having similar semantics. A: <?>." Then, we feed $\mathcal{T}(w_i)$ into the open-source LLM M_pro and calculate the average conditional probability of generating $w_j$ in the log domain (denoted by $\hat{P}_{ij}$) as follows:

$$\hat{P}_{ij} = \log P_{M_{pro}}\big(w_j \mid \mathcal{T}(w_i)\big) = \frac{1}{Q_j} \sum_{q=1}^{Q_j} \log P\big(w_j^q \mid \mathcal{T}(w_i), w_j^{<q}\big), \qquad (2)$$

where $Q_j$ is the number of tokens in $w_j$ and $w_j^q$ is the q-th token of $w_j$. We also compute the probability of $w_i$ conditioned on $w_j$ based on this formula, and the final semantic similarity score between $w_i$ and $w_j$ is $P_{ij} = (\hat{P}_{ij} + \hat{P}_{ji})/2$. In this way, we derive a matrix P in probability space that reflects the similarity scores between phrases.

2.5 Cooperative Cycle for Clustering

After extracting relational phrases and estimating similarity, we design a cooperative cycle that enables M_ext and M_pro to work collaboratively for clustering. As shown in Figure 2(c), this cycle includes three steps: (i) Rethink and discover new; (ii) Fine-tune; (iii) Update clustering. In each cycle, M_ext is responsible for revising potentially incorrect phrases, and M_pro takes charge of updating similarity scores. The details are as follows.

Initialization. Before we start the cooperative cycle, we first select initial centers. The similarity score matrix P reflects the distribution of instances in probability space, i.e., higher scores correspond to closer positions, and vice versa. As a result, phrases expressing the same relation tend to be dense, while those conveying different relations tend to be dispersed in this space. On this basis, the initial cluster centers are selected by following two guidelines (Zhao et al., 2023): (1) they have a high density around them, and (2) they are separated from each other. Specifically, inspired by the density-based HDBSCAN clustering algorithm (McInnes et al., 2017), we define the density of $w_i$ as the number of neighbors within a specified radius $P_\epsilon$:

$$\rho(w_i) = \sum_{j=1}^{N} \mathbb{I}\big(P_{ij} - P_\epsilon > 0\big), \qquad (3)$$

where $\rho(\cdot)$ and $\mathbb{I}(\cdot)$ are the density calculation function and the indicator function, respectively. After that, we set a density threshold $N_\epsilon$ and obtain a high-density phrase set, denoted as $W_{High} = \{w_i \mid \rho(w_i) > N_\epsilon\}$. Then we sort $w_i \in W_{High}$ in descending order by the value of $\rho(w_i)$ and take the phrase with the highest density as the first center. Subsequent phrases are added as centers only if their similarity score with every existing center is lower than the center similarity threshold $P_r$. From the above process, we obtain an initial center set $W^0_{Center}$, where for all $w_i, w_j \in W^0_{Center}$, $P_{ij} < P_r$, $\rho(w_i) > N_\epsilon$, and $\rho(w_j) > N_\epsilon$. Finally, we assign each remaining phrase to the center with the highest similarity score and obtain the initial clustering assignment result $Y^0$.
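The scoring of Eq. (2)-(3) and the greedy center selection are simple to implement. The following is a minimal sketch assuming a HuggingFace GPT-2 checkpoint stands in for M_pro; the template wording is abridged from Section 2.4, and the threshold names and the `select_initial_centers` helper are illustrative assumptions rather than the authors' exact code.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-xl")
model = GPT2LMHeadModel.from_pretrained("gpt2-xl").eval()

def template(w_i: str) -> str:
    # T(w_i) from Section 2.4 (wording abridged).
    return (f"Q: For a given relational phrase <{w_i}>, please generate another one "
            f"conveying the same relation or having similar semantics. A:")

@torch.no_grad()
def avg_log_prob(w_i: str, w_j: str) -> float:
    """Average log-probability of the tokens of w_j given T(w_i), as in Eq. (2)."""
    prefix_ids = tokenizer(template(w_i), return_tensors="pt").input_ids
    target_ids = tokenizer(" " + w_j, return_tensors="pt").input_ids  # leading space for BPE
    input_ids = torch.cat([prefix_ids, target_ids], dim=1)
    log_probs = torch.log_softmax(model(input_ids).logits, dim=-1)
    start = prefix_ids.size(1)
    # Logits at position k predict token k+1, so target tokens are scored by
    # positions [start - 1, start + len(target) - 2].
    token_lp = log_probs[0, start - 1 : input_ids.size(1) - 1, :].gather(
        1, target_ids[0].unsqueeze(-1)).squeeze(-1)
    return token_lp.mean().item()

def similarity(w_i: str, w_j: str) -> float:
    # Symmetrized score P_ij = (avg_log_prob(i, j) + avg_log_prob(j, i)) / 2.
    return 0.5 * (avg_log_prob(w_i, w_j) + avg_log_prob(w_j, w_i))

def select_initial_centers(P, radius, density_thr, center_thr):
    """Greedy density-based selection following Eq. (3): a phrase becomes a center if it
    has many neighbors scoring above `radius` and is dissimilar to every chosen center."""
    n = len(P)
    density = [sum(1 for j in range(n) if j != i and P[i][j] > radius) for i in range(n)]
    centers = []
    for i in sorted(range(n), key=lambda i: density[i], reverse=True):
        if density[i] <= density_thr:
            break  # remaining phrases fall below the density threshold N_eps
        if all(P[i][c] < center_thr for c in centers):
            centers.append(i)
    return centers
```

Under this scoring, two phrases count as similar when each is a likely continuation of the template built from the other, which is exactly what the cooperative cycle below relies on.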
However, the clustering assignment result $Y^{\ell-1}$ obtained in cycle $\ell-1$ always contains noise because the assignments are actually pseudo-labels. To facilitate subsequent clustering learning, we filter the data based on $Y^{\ell-1}$, obtaining reliable and ambiguous data for the next cycle. Intuitively, the higher the similarity score between a phrase and its center, the more accurate the label of that phrase. Therefore, we sort the similarity scores within each cluster in descending order and keep the top γ% of phrases to form the reliable dataset $W^{\ell}_{r}$, while taking the rest as the ambiguous data $W^{\ell}_{a}$.

Rethink and Discover New. We consider that the ambiguous data $W^{\ell}_{a}$ exists for two reasons. One is that the current relational phrases generated by M_ext are incorrect. The other is that these phrases do not correspond to any existing centers. Therefore, as shown in Figure 2(c), we design a "Rethink" strategy to guide M_ext in revising previous phrases to obtain a revised phrase set $W^{\ell}_{a*}$. Specifically, we devise instructions $P_R = \{p_3, p_4\}$ for $w_i \in W^{\ell}_{a}$. In $p_3$, previously generated phrases serve as feedback to guide M_ext in reanalyzing the relation between the head and tail entities. $p_4$ instructs M_ext to rethink and generate the revised phrase. Then, we employ $M^{\ell-1}_{pro}$ to calculate the similarity scores on $W^{\ell}_{a*}$. After that, we use the same selection method as described in the initialization phase to discover new centers $W^{\ell}_{New}$ from these revised phrases.

Fine-tune. We construct question-answer pairs from $W^{\ell}_{r}$ for each cluster, denoted as $Z^{\ell} = \{(\mathcal{T}(w_i), w_j) \mid y_i = y_j, w_i \in W^{\ell}_{Center}, w_j \in W^{\ell}_{r}\}$. Next, $M^{\ell-1}_{pro}$ is fine-tuned on $Z^{\ell}$ to produce high generation probabilities for phrases in the same cluster by:

$$\mathcal{L} = \mathbb{E}_{Z^{\ell}}\, \log P_{M^{\ell-1}_{pro}}\big(w_j \mid \mathcal{T}(w_i)\big), \qquad (4)$$

where $(\mathcal{T}(w_i), w_j) \in Z^{\ell}$. After fine-tuning, we obtain the updated probability estimator $M^{\ell}_{pro}$, which will be used for further label reassignment.

Update Clustering. In this step, we add the newly discovered centers $W^{\ell}_{New}$ to the center set $W^{\ell-1}_{Center}$ from the last cycle to form the updated center set $W^{\ell}_{Center} = W^{\ell}_{New} \cup W^{\ell-1}_{Center}$ for this cycle. Then, we use $M^{\ell}_{pro}$ to calculate the similarity scores between all phrases and the centers and reassign labels, thereby obtaining the clustering assignment result $Y^{\ell}$ in cycle $\ell$. Based on $Y^{\ell}$, we filter the data to separate reliable and ambiguous data, preparing for the next cycle of learning.

The learning process of ORELLM is summarized in Algorithm 1. Depending on whether K is known, we implement two different stopping conditions. First, in the case of unknown K, we stop after two consecutive cycles without discovering new relations. Second, in the case of known K, we calculate the estimated error rate of K in cycle $\ell$ as:

$$\mathrm{Err}(K)^{\ell} = \big|\, |W^{\ell}_{Center}| - K \,\big| / K. \qquad (5)$$

The cycle will stop when $\mathrm{Err}(K)^{\ell} > \mathrm{Err}(K)^{\ell-1}$.
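The fine-tuning step of Eq. (4) is plain next-token-prediction training on (template, phrase) pairs drawn from the reliable data, and the stopping signal of Eq. (5) is a one-line check. The sketch below shows one way to compute both; it reuses the `template`, `tokenizer`, and `model` names from the previous sketch, and the pair-construction helper assumes the reliable phrases and cluster assignments are stored as simple dictionaries, which is an assumption about data layout rather than the authors' code.

```python
import torch

def build_pairs(center_phrases, reliable_phrases, assignments):
    """Z^l: (T(w_i), w_j) pairs, where w_i is the center of the cluster that the
    reliable phrase w_j is currently assigned to."""
    return [(template(center_phrases[assignments[j]]), w_j)
            for j, w_j in reliable_phrases.items()]

def lm_loss(model, tokenizer, prompt: str, target: str) -> torch.Tensor:
    """Negative of the Eq. (4) objective for one pair: next-token loss on the target
    span only; prompt positions are masked with -100 so they do not contribute."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    target_ids = tokenizer(" " + target, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, target_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : prompt_ids.size(1)] = -100  # ignore the prompt when computing the loss
    return model(input_ids=input_ids, labels=labels).loss

def fine_tune(model, tokenizer, pairs, lr=2e-5):
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    for prompt, target in pairs:
        loss = lm_loss(model, tokenizer, prompt, target)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

def err_k(num_centers: int, k: int) -> float:
    # Eq. (5): relative gap between the current number of centers and the known K;
    # the cycle stops once this value starts increasing.
    return abs(num_centers - k) / k
```

Because the objective is the same next-token-prediction form used during pre-training, the update stays close to the estimator's original training signal, which is the point made about alleviating the pre-training/clustering gap.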
3 Experiments

3.1 Experimental Settings

Datasets. We evaluate our ORELLM on two widely used relation extraction datasets: FewRel (Han et al., 2018) and TACRED (Zhang et al., 2017). FewRel contains 80 relations, each with 700 instances. Following (Zhang et al., 2023b), we treat the 64 relation types in the original training set as new relations and construct the open relational corpus by randomly selecting 70 instances from each type. For TACRED, we filter out the instances under the "no_relation" label and remove the relation types with fewer than 70 instances. Afterward, we randomly select 70 instances of each class from the remaining 27 relation types to construct the open relational corpus.

Algorithm 1: ORELLM
Input: Open dataset D = {⟨s_i, h_i, t_i⟩}_{i=1}^{N}, phrase extractor M_ext, probability estimator M_pro
1: Set the cycle ℓ ← 1 and initialize M^0_pro with pre-trained weights;
2: Get relational phrases W = {w_i}_{i=1}^{N} for the instances in D by M_ext;
3: Get the similarity matrix P in probability space by M^0_pro;
4: Select the initial center set W^0_Center;
5: while ℓ = 1 or the stopping conditions are not met do
6:   Filter to get the reliable data set W^ℓ_r and the ambiguous data set W^ℓ_a;
     // Rethink and discover new
7:   Do rethink: W^ℓ_a* ← W^ℓ_a;
8:   Compute similarity within W^ℓ_a* by M^{ℓ-1}_pro and discover the new center set W^ℓ_New;
     // Fine-tune
9:   Construct the question-answer pair set Z^ℓ;
10:  M^ℓ_pro ← fine-tune M^{ℓ-1}_pro with Z^ℓ by Eq. (4);
     // Update clustering
11:  Update the center set by W^ℓ_Center ← W^ℓ_New ∪ W^{ℓ-1}_Center;
12:  Obtain the updated clustering assignment Y^ℓ by M^ℓ_pro;
13:  ℓ ← ℓ + 1;
14: end while
Output: Clustering assignment Y^ℓ

Models. We adopt ChatGPT (gpt-3.5-turbo, https://chat.openai.com/chat) or GPT-4 (gpt-4, https://chatgpt.com/?oai-dm=1&model=gpt-4) as our phrase extractor. For the probability estimator, we utilize a publicly available GPT2-XL model (Radford et al., 2019) with 1.5B parameters. For more details, please refer to Appendix A.1.

Baselines. To demonstrate the effectiveness of our method, we compare it with the following six OpenRE baselines, which are categorized into two groups: (1) Three traditional methods with pre-defined labeled instances for pre-training: RoCORE (Zhao et al., 2021), MatchPrompt (Wang et al., 2022), and TABS (Li et al., 2022). (2) Three methods using LLMs: ChatGPT+RoBERTa-large, SBERT (ChatGPT+RoBERTa-large), and CLUSTERLLM (Zhang et al., 2023b). For more details, please refer to Appendix A.2.

Table 1: Overall results of the compared models (K is known). The top part shows OpenRE methods without LLMs but using few labeled pre-defined relations as prior knowledge (more details are in Appendix A.2). The bottom part shows the unsupervised OpenRE methods that gain prior knowledge from LLMs. For our ORELLM and SBERT, we execute the stop condition with known K. For other baselines, we use the ground truth K as the cluster number.

Model | FewRel: ACC / B3-F1 / V-F1 / ARI | TACRED: ACC / B3-F1 / V-F1 / ARI
Gain prior knowledge from few pre-defined relational instances
RoCORE (RoBERTa-large) | 26.85 / 25.02 / 54.90 / 21.79 | 41.81 / 39.32 / 54.76 / 34.81
MatchPrompt (RoBERTa-large) | 46.03 / 46.05 / 71.18 / 37.62 | 52.51 / 53.02 / 68.45 / 45.13
TABS (RoBERTa-large) | 34.40 / 31.98 / 62.81 / 27.76 | 43.04 / 40.45 / 55.87 / 35.79
Gain prior knowledge from LLMs
ChatGPT | 13.05 / 9.94 / 51.06 / 7.92 | 22.81 / 19.07 / 33.24 / 14.43
ChatGPT+RoBERTa-large | 25.27 / 17.86 / 39.85 / 12.25 | 35.45 / 28.33 / 41.49 / 16.89
SBERT (ChatGPT+RoBERTa-large) | 33.92 / 26.52 / 46.30 / 17.40 | 46.31 / 37.02 / 54.26 / 30.61
CLUSTERLLM (ChatGPT+Instructor-XL) | 47.61 / 50.15 / 72.73 / 38.02 | 54.40 / 54.89 / 68.74 / 47.66
ORELLM (ChatGPT+GPT2-medium) | 42.06 / 41.63 / 68.57 / 34.11 | 51.21 / 51.19 / 66.52 / 44.20
ORELLM (ChatGPT+GPT2-large) | 46.62 / 43.82 / 70.83 / 35.88 | 54.73 / 53.21 / 69.61 / 48.56
ORELLM (ChatGPT+GPT2-XL) | 49.53 / 47.10 / 73.19 / 39.61 | 55.80 / 55.26 / 69.07 / 49.74

Metrics and Implementations. We adopt four commonly used metrics to evaluate clustering results: Accuracy (ACC), B3-F1 (Bagga and Baldwin, 1998), V-measure F1 (V-F1) (Rosenberg and Hirschberg, 2007), and Adjusted Rand Index (ARI) (Hubert and Arabie, 1985). All experiments are conducted on two NVIDIA A800 GPUs (2*80GB). More details about metrics and implementations are in Appendix A.3.
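Of these metrics, ACC is the only one that needs a matching step: predicted cluster ids must be aligned with gold relation labels before accuracy can be computed. Below is a generic sketch of that computation with SciPy's Hungarian solver, with ARI and V-measure taken directly from scikit-learn; it assumes integer-encoded labels and is not the authors' evaluation script (B3-F1 would need a separate implementation).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import adjusted_rand_score, v_measure_score

def clustering_accuracy(y_true, y_pred):
    """ACC: find the best one-to-one mapping between predicted clusters and gold
    relation labels via the Hungarian algorithm, then score it as plain accuracy."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = max(y_true.max(), y_pred.max()) + 1
    count = np.zeros((n, n), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        count[p, t] += 1                      # co-occurrence of cluster p and label t
    row_ind, col_ind = linear_sum_assignment(count.max() - count)  # maximize matches
    return count[row_ind, col_ind].sum() / len(y_true)

# ARI and V-measure come directly from scikit-learn:
# ari = adjusted_rand_score(y_true, y_pred)
# v_f1 = v_measure_score(y_true, y_pred)
```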
3.2 Main Results

The evaluation results are presented in Table 1 and Table 2 for known K and unknown K, respectively.

Table 2: Overall results of the compared models (K is unknown). The top part of the table shows OpenRE methods without LLMs but using few labeled pre-defined relations as prior knowledge. The bottom part shows the unsupervised OpenRE methods that gain prior knowledge from LLMs (more details are in Appendix A.2). For our ORELLM and SBERT, we execute the stop condition for unknown K. For other baselines, we follow the method in (Zhang et al., 2021a) to estimate K.

Model | FewRel: ACC / B3-F1 / V-F1 / ARI | TACRED: ACC / B3-F1 / V-F1 / ARI
Gain prior knowledge from few pre-defined relational instances
RoCORE (RoBERTa-large) | 22.93 / 20.86 / 49.93 / 17.70 | 38.12 / 35.43 / 46.04 / 28.61
MatchPrompt (RoBERTa-large) | 38.54 / 39.11 / 68.55 / 31.69 | 46.57 / 45.53 / 65.70 / 38.75
TABS (RoBERTa-large) | 32.75 / 28.25 / 58.33 / 23.95 | 40.44 / 36.05 / 48.39 / 31.37
Gain prior knowledge from LLMs
ChatGPT+RoBERTa-large | 23.26 / 16.43 / 35.45 / 9.75 | 33.19 / 26.40 / 40.26 / 15.60
SBERT (ChatGPT+RoBERTa-large) | 29.50 / 23.45 / 42.70 / 15.53 | 42.82 / 34.65 / 49.73 / 26.83
CLUSTERLLM (ChatGPT+Instructor-XL) | 42.90 / 44.27 / 70.27 / 35.37 | 50.07 / 46.89 / 66.38 / 43.52
ORELLM (ChatGPT+GPT2-medium) | 40.38 / 40.11 / 68.06 / 33.27 | 47.35 / 45.72 / 65.20 / 40.10
ORELLM (ChatGPT+GPT2-large) | 43.44 / 41.70 / 68.29 / 33.64 | 49.52 / 47.61 / 66.20 / 42.76
ORELLM (ChatGPT+GPT2-XL) | 46.03 / 43.38 / 70.82 / 36.40 | 52.55 / 51.26 / 68.51 / 45.05

We can make the following observations:

(1) Directly using LLMs for OpenRE does not work well. For OpenRE, when we directly adopt generative LLMs such as ChatGPT, the model generates corresponding relational phrases for the given sentences. As shown in Table 1, we add a row of results named "ChatGPT", which is calculated by string matching between phrases. As can be observed, these results are notably low. However, simply aligning phrases with the RoBERTa-large encoder significantly improves ACC and ARI by 12.22%/12.64% and 5.33%/2.46% on FewRel/TACRED, respectively. This demonstrates that although LLMs can generate various phrases for a relation, these phrases are strongly semantically related, and suitable further processing can help capture and measure these semantic correlations. To achieve this, our ORELLM employs a generative language model to compute similarities for clustering. The results demonstrate that its accuracy outperforms ChatGPT+RoBERTa-large by 14.16% to 24.26% across the two datasets, regardless of whether K is known or not.

(2) With appropriate strategies, the potential of LLMs on OpenRE can be unleashed. Moreover, CLUSTERLLM leverages the knowledge in LLMs to annotate unlabeled data, achieving superior clustering results compared to ChatGPT+RoBERTa-large. Interestingly, although SBERT (ChatGPT+RoBERTa-large) differs from ORELLM only in how similarity is estimated, its performance drops significantly. The reason might be that it relies on an additional metric (e.g., cosine in our experiment) to determine similarity after obtaining phrase representations, which introduces more errors into the resulting probability distributions than ORELLM. Also, the training objective of SBERT (ChatGPT+RoBERTa-large) shifts to a classification paradigm (using pseudo labels), which is different from the pre-training objective of language models. This divergence may hinder the effective transfer of knowledge from pre-trained language models to unsupervised clustering.

(3) Our ORELLM achieves competitive results on all performance indicators on both datasets, especially when K is unknown.
Existing OpenRE methods first obtain representations and then apply clustering algorithms on top of these representations. To perform well in clustering, these methods require prior knowledge of the number of new relations, i.e., K. However, assuming a known K is unrealistic, as it is unknown and constantly changes in the open world. Our ORELLM leverages a generative language model to measure similarities between instances and is able to automatically identify the number of new relations based on the similarity distribution. When K is unknown, even if we only use GPT2-medium as the similarity probability estimator, the ACC and ARI exceed the SOTA traditional method, i.e., MatchPrompt (RoBERTa-large), by 1.84%/0.86% and 1.58%/1.35% on FewRel/TACRED, respectively. Also, the results are competitive compared with CLUSTERLLM, which uses the Instructor with 1.5B parameters for representation learning. These results illustrate the effectiveness and superiority of the proposed ORELLM.

3.3 Analysis

Effectiveness of Rethink. To validate the effectiveness of "Rethink" in the cooperative cycle, we conduct ablation studies by removing it. As illustrated in Table 3, w/o Rethink reduces model performance by 3.26% and 2.20% on average on FewRel and TACRED, respectively. Furthermore, to clearly show the process and impact of Rethink, we provide case studies that compare phrases generated before and after Rethink in Appendix A.5. The results indicate that the LLM-based phrase extractor produces improved outcomes when Rethink is applied. These results demonstrate that Rethink first helps the phrase extractor revise and generate more accurate relational phrases, which subsequently enhances clustering performance.

Effectiveness of Fine-tune. From Table 3, we can observe that w/o Fine-tune, model performance on all indicators decreases sharply, by 11.50%∼30.13% on the two datasets. This demonstrates that fine-tuning plays a crucial role in improving clustering results. It not only groups instances conveying the same relations but also effectively separates instances of different relations, significantly enhancing the model's ability to classify each data point into the correct cluster.

Table 3: Ablation results on FewRel and TACRED. K is known. The configuration of ORELLM is ChatGPT + GPT2-XL.

Dataset | Model | ACC / B3-F1 / V-F1 / ARI
FewRel | ORELLM | 49.53 / 47.10 / 73.19 / 39.61
FewRel | w/o Rethink | 45.09 / 44.38 / 70.52 / 36.39
FewRel | ∆ | 4.44↓ / 2.72↓ / 2.67↓ / 3.22↓
FewRel | w/o Fine-tune | 31.23 / 35.60 / 55.49 / 16.72
FewRel | ∆ | 18.30↓ / 11.50↓ / 17.70↓ / 19.67↓
TACRED | ORELLM | 55.80 / 55.26 / 69.07 / 49.74
TACRED | w/o Rethink | 53.95 / 52.19 / 67.83 / 47.12
TACRED | ∆ | 1.85↓ / 3.07↓ / 1.24↓ / 2.62↓
TACRED | w/o Fine-tune | 40.75 / 36.34 / 53.80 / 19.61
TACRED | ∆ | 15.05↓ / 18.92↓ / 15.27↓ / 30.13↓

Analysis on the Cycle Round. When K is known, we evaluate clustering results in each cycle round, as shown in Table 4. Compared with the results at initialization, the model performance after the 1st cycle improves significantly. However, the performance gain then reaches a plateau and even trends downward after a few cycles. For FewRel, our ORELLM reaches its capacity after the 2nd cycle of learning. For TACRED, as it has a lower number of relation types than FewRel, ORELLM achieves good performance just after the 1st cycle.

Table 4: Clustering results of ORELLM (ChatGPT + GPT2-XL) in each cycle round. K is known. The numbers in parentheses indicate the difference between the current cycle and the previous one.

Dataset | Cycle | ACC / B3-F1 / V-F1 / ARI
FewRel | initial | 27.52 / 30.73 / 52.28 / 13.30
FewRel | 1st | 48.31 (+20.79) / 45.04 (+14.31) / 71.98 (+19.7) / 37.62 (+24.32)
FewRel | 2nd | 49.53 (+1.22) / 47.10 (+2.06) / 73.19 (+1.21) / 39.61 (+1.99)
FewRel | 3rd | 48.64 (-0.89) / 45.95 (-1.15) / 71.07 (-2.12) / 36.35 (-3.26)
TACRED | initial | 37.30 / 35.16 / 51.03 / 18.78
TACRED | 1st | 55.80 (+18.5) / 55.26 (+20.10) / 69.07 (+18.04) / 49.74 (+30.96)
TACRED | 2nd | 54.29 (-1.51) / 53.74 (-1.52) / 68.18 (-0.89) / 48.52 (-1.22)
TACRED | 3rd | 53.18 (-1.11) / 52.58 (-1.16) / 67.82 (-0.36) / 48.29 (-0.23)

To explore the reasons for this phenomenon, we denote the model with neither Rethink nor Fine-tune in the initial cycle as ORELLM♢, and the models with only Rethink or only Fine-tune as ORELLM♢-R and ORELLM♢-F, respectively. We compare the performance of ORELLM, ORELLM♢, ORELLM♢-R, and ORELLM♢-F in each cycle round, and the results are illustrated in Figure 3. Observing the trend from the initial state to the 3rd cycle, we note that ORELLM, ORELLM♢-F, and ORELLM♢-R exhibit the most substantial performance gains in the 1st cycle. In subsequent cycles, the improvements tend to stabilize, indicating that one cycle round is sufficient for the models to achieve better performance. Specifically, the performance of ORELLM♢-R decreases after the 1st cycle, which suggests that applying Rethink just once is suitable for ORELLM. This is likely because applying Rethink many times leads ChatGPT to generate rare and unconventional phrases as it tries to avoid repeating all previously generated phrases for an instance. These unconventional phrases may introduce noise, resulting in a downward trend in the performance of ORELLM.

[Figure 3 omitted: line plots of clustering accuracy (ACC) over the initial state and cycles 1-3 for the four model variants, with panel (a) FewRel and panel (b) TACRED.] Figure 3: Clustering accuracy (ACC) of different variant models in different cycles. (i) ORELLM: the original model; (ii) ORELLM♢-R: the model with only Rethink; (iii) ORELLM♢-F: the model with only Fine-tune; (iv) ORELLM♢: the model with neither Rethink nor Fine-tune. The configuration of every model is ChatGPT + GPT2-XL.

3.4 Which relations can be well recognized by generative LLMs?

We use the new relations in FewRel to explore which relations can be well recognized by LLMs. We first assess the accuracy of each relation and select four relations with higher accuracy (e.g., father, sibling, genre, and country of citizenship) and four with lower accuracy (e.g., headquarters location, located on terrain feature, has part, and after a work by) for further analysis. For each relation, we count the top 5 most frequent phrases generated from its 70 instances. As illustrated in Figure 4, the observations are as follows: (1) The rich background knowledge of LLMs makes them adept at identifying common relations. For instance, they can accurately produce relational phrases such as "father-son, father" and "sibling relationships, brothers" for the family relations father and sibling, respectively. Moreover, this knowledge also allows them to identify implicit relations.
In the relation genre, for a given instance "Trailer Bride was a Chapel Hill, North Carolina-based alternative country rock band signed to Bloodshot Records" with "Trailer Bride" and "alternative country rock" as the head and tail entities, LLMs are capable of generating the relational phrase "genre affiliation", even though the term "genre" is not explicitly mentioned in this sentence. (2) It is difficult for LLMs to distinguish the granularity of relations. Headquarters location and located on terrain feature are different relations in FewRel, yet both are related to locations. Without specific prompts that focus on their differences, LLMs cannot distinguish the granularities of these relations and tend to produce generic phrases like "located in, located on" for the vast majority of instances (as shown in Figure 4 (e) and (f)). Similarly, for coarser-grained relations like has part, LLMs often generate fine-grained phrases such as "group member, season of, band member" according to the context of the sentences.

[Figure 4 omitted: bar charts of the top 5 most frequent generated phrases for eight relations: (a) father, (b) sibling, (c) genre, (d) country of citizenship, (e) headquarters location, (f) located on terrain feature, (g) has part, (h) after a work by.] Figure 4: The top 5 most frequent phrases in different categories. Subgraphs (a)-(d) and (e)-(h) represent relations with higher and lower accuracy, respectively. The descriptions of these relations are in Appendix A.6.

4 Related Works

Open Relation Extraction. Methods for open relation extraction (OpenRE) can be categorized as tagging-based (Banko et al., 2007; Yates et al., 2007) and clustering-based (Hu et al., 2020; Simon et al., 2019; ElSahar et al., 2017). Tagging-based methods directly extract relation-related phrases from sentences based on syntactic analysis, but often produce redundant or ambiguous results (Fader et al., 2011). Therefore, clustering-based methods have drawn more attention. Liu et al. (2021) revisits OpenRE from a causal view and Zhang et al. (2021b) incorporates hierarchical information into this task. Additionally, Zhao et al. (2021) learns a relation-oriented clustering model and Wang et al. (2022) proposes a prompt-based framework to identify open relations. Li et al. (2022) introduces the idea of type abstraction and learns novel relations from two views. Zhao et al. (2023) integrates active learning with clustering, which improves the clustering performance while discovering as many relations as possible. In this work, we propose a novel method for OpenRE based on large language models.

Relation Extraction with Large Language Models.
Recently, using large language models (LLMs) for relation extraction (RE) has received much attention (Wadhwa et al., 2023; Wan et al., 2023; Ding et al., 2024). Wan et al. (2023) enriches the reasoning evidence of each demonstration for RE. Zhang et al. (2023a) unlocks the power of LLMs as zero-shot relation extractors by reformulating RE into multiple-choice question answering. Li et al. (2023) introduces a novel prompting method to fully explore the power of LLMs, thereby improving their performance on zero-shot RE. Most of these works are focused on zero-shot or few-shot RE, and OpenRE still needs to be explored. We propose a new idea for OpenRE in the era of LLMs, that is, first extract relational phrases and then directly exploit the knowledge in LLMs to assess the semantic similarity between phrases, without relying on any additional metrics. 5 Conclusion In this work, we propose a new idea for OpenRE in the era of LLMs: extracting relational phrases and directly exploiting the knowledge in LLMs to assess the semantic similarity between phrases without relying on any additional metrics. Based on this idea, we develop a framework named ORELLM, which consists of two LLMs. One serves as a phrase extractor, while the other operates as a similarity probability estimator. These two LLMs work together through a cooperative cycle for clustering. Experimental results show that ORELLM outperforms current baselines by 1.4% ∼3.13% in terms of clustering accuracy. 13138 Limitations We consider that our current method has the following two limitations. (1) LLMs have hallucinations and may produce incorrect phrases. Although we perform “Rethink” to reduce this problem, it still exists. Therefore, we need to design a more efficient strategy to allow the model to generate more effective relational phrases. (2) Since the probability of generation is calculated by multiplying the likelihood of each token in the sentence conditioned on all its previous tokens, the result will be smaller as the phrase becomes longer (even in log domain). This makes it difficult to distinguish between phrases if all calculated similarities are small. In our work, we limit the length of generated relational phrases to no more than five words. However, five words may not accurately describe some relations, especially when distinguishing fine-grained ones, e.g., located in the administrative territorial entity and located on terrain feature. Acknowledgments We would like to thank all anonymous reviewers for their constructive comments. This work was supported by National Key Research and Development Program of China (2022YFC3303600), National Natural Science Foundation of China (62137002, 62293553, 62176207 and 62192781), “LENOVO-XJTU” Intelligent Industry Joint Laboratory Project, Natural Science Basic Research Program of Shaanxi (2023-JC-YB-593), the Youth Innovation Team of Shaanxi Universities, XJTU Teaching Reform Research Project “Acquisition Learning Based on Knowledge Forest”, and the National Research Foundation (NRF) Singapore and DSO National Laboratories Singapore under the AI Singapore Program (No. AISG2-RP-2020-016). References David Arthur and Sergei Vassilvitskii. 2007. kmeans++: the advantages of careful seeding. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2007, New Orleans, Louisiana, USA, January 7-9, 2007, pages 1027–1035. Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. 
In The first international conference on language resources and evaluation workshop on linguistics coreference, pages 563–566. Michele Banko, Michael J. Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In IJCAI 2007, Proceedings of the 20th International Joint Conference on Artificial Intelligence, Hyderabad, India, January 6-12, 2007, pages 2670–2676. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171–4186. Zepeng Ding, Wenhao Huang, Jiaqing Liang, Yanghua Xiao, and Deqing Yang. 2024. Improving recall of large language models: A model collaboration approach for relational triple extraction. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, LREC/COLING 2024, 20-25 May, 2024, Torino, Italy, pages 8890–8901. Hady ElSahar, Elena Demidova, Simon Gottschalk, Christophe Gravier, and Frédérique Laforest. 2017. Unsupervised open relation extraction. In The Semantic Web: ESWC 2017 Satellite Events - ESWC 2017 Satellite Events, Portorož, Slovenia, May 28 June 1, 2017, Revised Selected Papers, volume 10577 of Lecture Notes in Computer Science, pages 12–16. Oren Etzioni, Michele Banko, Stephen Soderland, and Daniel S. Weld. 2008. Open information extraction from the web. Commun. ACM, 51(12):68–74. Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, EMNLP 2011, 27-31 July 2011, John McIntyre Conference Centre, Edinburgh, UK, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1535–1545. Jun Gao, Di He, Xu Tan, Tao Qin, Liwei Wang, and TieYan Liu. 2019. Representation degeneration problem in training natural language generation models. In 13139 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, and Maosong Sun. 2018. Fewrel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pages 4803–4809. Xuming Hu, Lijie Wen, Yusong Xu, Chenwei Zhang, and Philip S. Yu. 2020. Selfore: Self-supervised relational feature learning for open relation extraction. 
In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 3673– 3682. Lawrence Hubert and Phipps Arabie. 1985. Comparing partitions. Journal of classification, 2(1):193–218. Bohan Li, Hao Zhou, Junxian He, Mingxuan Wang, Yiming Yang, and Lei Li. 2020. On the sentence embeddings from pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 9119– 9130. Guozheng Li, Peng Wang, and Wenjun Ke. 2023. Revisiting large language models as zero-shot relation extractors. In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 6877–6892. Sha Li, Heng Ji, and Jiawei Han. 2022. Open relation and event type discovery with type abstraction. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 6864–6877. Fangchao Liu, Hongyu Lin, Xianpei Han, Boxi Cao, and Le Sun. 2022. Pre-training to match for unified low-shot relation extraction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 5785– 5795. Fangchao Liu, Lingyong Yan, Hongyu Lin, Xianpei Han, and Le Sun. 2021. Element intervention for open relation extraction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 4683–4693. Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. Leland McInnes, John Healy, and Steve Astels. 2017. hdbscan: Hierarchical density based clustering. J. Open Source Softw., 2(11):205. G Ayorkor Mills-Tettey, Anthony Stentz, and M Bernardine Dias. 2007. The dynamic hungarian algorithm for the assignment problem with changing costs. Robotics Institute, Pittsburgh, PA, Tech. Rep. CMURI-TR-07-27. OpenAI. 2023. GPT-4 technical report. CoRR, abs/2303.08774. Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. 2020. Pre-trained models for natural language processing: A survey. CoRR, abs/2003.08271. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. Emily Reif, Ann Yuan, Martin Wattenberg, Fernanda B. Viégas, Andy Coenen, Adam Pearce, and Been Kim. 2019. Visualizing and measuring the geometry of BERT. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 8592–8600. Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 3980–3990. Andrew Rosenberg and Julia Hirschberg. 2007. Vmeasure: A conditional entropy-based external cluster evaluation measure. 
In EMNLP-CoNLL 2007, Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 410–420. Étienne Simon, Vincent Guigue, and Benjamin Piwowarski. 2019. Unsupervised information extraction: Regularizing discriminative approaches with relation distribution losses. In Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 1378–1387. Xiaoshuai Song, Keqing He, Pei Wang, Guanting Dong, Yutao Mou, Jingang Wang, Yunsen Xian, Xunliang Cai, and Weiran Xu. 2023. Large language models meet open-world intent discovery and recognition: An evaluation of chatgpt. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 10291–10304. 13140 Hongjin Su, Weijia Shi, Jungo Kasai, Yizhong Wang, Yushi Hu, Mari Ostendorf, Wen-tau Yih, Noah A. Smith, Luke Zettlemoyer, and Tao Yu. 2023. One embedder, any task: Instruction-finetuned text embeddings. In Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, July 9-14, 2023, pages 1102–1121. Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. Llama: Open and efficient foundation language models. CoRR, abs/2302.13971. Thy Thy Tran, Phong Le, and Sophia Ananiadou. 2020. Revisiting unsupervised relation extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7498–7505. Somin Wadhwa, Silvio Amir, and Byron C. Wallace. 2023. Revisiting relation extraction in the era of large language models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 15566–15589. Zhen Wan, Fei Cheng, Zhuoyuan Mao, Qianying Liu, Haiyue Song, Jiwei Li, and Sadao Kurohashi. 2023. GPT-RE: in-context learning for relation extraction using large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 3534–3547. Chengyi Wang, Yu Wu, Shujie Liu, Zhenglu Yang, and Ming Zhou. 2020. Bridging the gap between pretraining and fine-tuning for end-to-end speech translation. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 9161–9168. Jiaxin Wang, Lingling Zhang, Jun Liu, Xi Liang, Yujie Zhong, and Yaqiang Wu. 2022. Matchprompt: Prompt-based open relation extraction with semantic consistency guided clustering. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 7875–7888. Benfeng Xu, Quan Wang, Yajuan Lyu, Dai Dai, Yongdong Zhang, and Zhendong Mao. 2023. S2ynre: Two-stage self-training with synthetic data for lowresource relation extraction. 
In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 8186– 8207. Zhaohui Yan, Songlin Yang, Wei Liu, and Kewei Tu. 2023. Joint entity and relation extraction with span pruning and hypergraph neural networks. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 7512–7526. Limin Yao, Aria Haghighi, Sebastian Riedel, and Andrew McCallum. 2011. Structured relation discovery using generative models. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, EMNLP 2011, 27-31 July 2011, John McIntyre Conference Centre, Edinburgh, UK, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1456–1466. Alexander Yates, Michele Banko, Matthew Broadhead, Michael J. Cafarella, Oren Etzioni, and Stephen Soderland. 2007. Textrunner: Open information extraction on the web. In Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, Proceedings, April 22-27, 2007, Rochester, New York, USA, pages 25–26. Hanlei Zhang, Hua Xu, Ting-En Lin, and Rui Lyu. 2021a. Discovering new intents with deep aligned clustering. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 14365–14373. Kai Zhang, Bernal Jimenez Gutierrez, and Yu Su. 2023a. Aligning instruction tasks unlocks large language models as zero-shot relation extractors. In Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, July 9-14, 2023, pages 794–812. Kai Zhang, Yuan Yao, Ruobing Xie, Xu Han, Zhiyuan Liu, Fen Lin, Leyu Lin, and Maosong Sun. 2021b. Open hierarchical relation extraction. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 5682–5693. Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D. Manning. 2017. Position-aware attention and supervised data improve slot filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 35–45. Yuwei Zhang, Zihan Wang, and Jingbo Shang. 2023b. Clusterllm: Large language models as a guide for text clustering. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 13903–13920. 13141 Jun Zhao, Tao Gui, Qi Zhang, and Yaqian Zhou. 2021. A relation-oriented clustering method for open relation extraction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 9707–9718. Jun Zhao, Yongxin Zhang, Qi Zhang, Tao Gui, Zhongyu Wei, Minlong Peng, and Mingming Sun. 2023. Actively supervised clustering for open relation extraction. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 4985–4997. 13142 A Appendix A.1 Models We adopt ChatGPT 4(gpt-3.5-turbo) or GPT4 5(gpt-4) as our phrase extractor. 
Considering the balance between cost and performance, we first use ChatGPT to extract relational phrases for all input instances. Then, we employ GPT-4 to apply the “Rethink” on ambiguous data. As for probability estimator, we utilize a publicly available GPT2-XL model (Radford et al., 2019) with 1.5B parameters. To better evaluate the efficiency and flexibility of our framework, two smaller generative models: GPT2-medium and GPT2-large, each with 355M and 774M parameters respectively, are also adopted as probability estimators. A.2 Baselines To demonstrate the effectiveness of our method, we compare it with the following six OpenRE baselines, which are categorized into two groups: (1) Three traditional methods with pre-defined labeled instances for pre-training: RoCORE (Zhao et al., 2021), MatchPrompt (Wang et al., 2022), and TABS (Li et al., 2022). These methods leverage labeled pre-defined relational instances as prior knowledge to help discover new relations and achieve state-of-the-art results. We construct the labeled data for these methods by randomly selecting 16 instances of each type from the remaining 16 and 14 relations in FewRel and TACRED, respectively. (2) Three methods using LLMs: ChatGPT +RoBERTa-large, we adopt RoBERTa-large (355M) to learn the embedding of relational phrases generated by ChatGPT and perform the clustering algorithm K-Means (Arthur and Vassilvitskii, 2007); SBERTChatGPT +RoBERTa-large, we use SentenceBERT (Reimers and Gurevych, 2019), which estimate pairwise similarity between phrases by siamese BERT, and keep the remaining model structure consistent with our ORELLM; CLUSTERLLM (Zhang et al., 2023b), which leverages ChatGPT or GPT-4 to annotate data and guide a pre-trained embedder named Instructor (Su et al., 2023) for text clustering. We change the embedder to Instructor-XL (1.5B) for comparison. A.3 Metrics and Implementations We adopt four commonly used metrics to evaluate clustering results: Accuracy (ACC), B3-F1 (Bagga 4https://chat.openai.com/chat. 5https://chatgpt.com/?oai-dm=1&model=gpt-4. and Baldwin, 1998), V-measure F1 (V-F1) (Rosenberg and Hirschberg, 2007), and Adjusted Rand Index (ARI) (Hubert and Arabie, 1985). Specifically, ACC is calculated using the Hungarian algorithm (Mills-Tettey et al., 2007) to evaluate the best mapping between clustering assignments and true labels; B3-F1 is the harmonic mean of precision and recall for placing each instance in its cluster; Similarly, V-F1 represents the harmonic mean of homogeneity and completeness for clusters; ARI reflects the degree of agreement between clusters and true distributions.The probability estimator fine-tunes with a learning rate of 3e −5, 2e −5, and 2e −5 for GPT2-medium, GPT2-large, and GPT-XL by AdamW (Loshchilov and Hutter, 2019), respectively. Pϵ and Pr are given by the value of elements ranked top 40% and 20% in P from large to samll, respectively. Threshold Nε is 20 for both datasets. The γ are 0.7 and 0.8 for FewRel and TACRED, respectively. We set the temperatures as 0.5 and 0.8 for gpt-3.5-turbo and gpt-4, respectively. The generated lengths for {p1, p3} and {p2, p4} are 256 and 16, respectively. A.4 Prompts for LLMs We list all templates used in our work including {p1, p2} to extract relational phrases and {p3, p4} to rethink as follows: p1: In sentence {si}, the head entity is {hi} and the tail entity is {ti}. please first analyze the type of head entity and the tail entity, then describe the relation between them. 
[Demonstration] p2: Based on the following analysis, summarize the relation for the head entity and tail entities within five words. To make the answer more specific, you may add entity type into the answer. [Demonstration], {si, hi, ti}, Analysis:{response from p1}.Answer: p3: Given a sentence {si}, the corresponding head entity is {hi} and the tail entity is {ti}. You have previously summarized the relation between them as {wj}. However, these phrases might not accurately describe the relation between {hi} and {ti}. Please re-analyze this relation. p4: Based on the analysis {response from p3}, please rethink and provide a revised phrase to describe the relation between head entity {hi} and tail entity {ti} in 13143 sentence {si}. No more than five words. As shown in Table 5, we give an example of using p1 and p2 to generate relational phrases. A.5 Case Studies on Rethink To clearly show the process and impact of Rethink, we provide case studies that compares phrases generated before and after Rethink. The results are illustrated in Table 6 and Table 7. These results indicate that the LLM-based phrase extractor can produce improved outcomes when doing Rethink. A.6 Description of Relations We list the description of relations that appear in Section 3.4 as follows6 7: (a) father: male parent of the subject. (b) sibling: the subject has the object as their sibling (brother, sister, etc.). (c) genre: creative work’s genre or an artist’s field of work. (d) country of citizenship: the object is a country that recognizes the subject as its citizen. (e) headquarters location: specific location where an organization’s headquarters is or has been situated. (f) located on terrain feature: located on the specified landform. Should not be used when the value is only political or administrative (provinces, states, countries, etc.). (g) has part: part of this subject. (h) after a work by: artist whose work strongly inspired or was copied in this item. 6The description of other relations for FewRel are in https://github.com/thunlp/FewRel/fewrel.pdf 7The description of relations in TACRED are in https://nlp.stanford.edu/projects/tacred 13144 Prompt p1 In sentence {The Breaks is an American 1999 comedy film written by and starring Mitch Mullany.}, the head entity is {Mitch Mullany} and the tail entity is {American}. please first analyze the type of head entity and the tail entity, then describe the relation between them. [For example, The occupancy of the show surpassed the Hindi films Baar Baar Dekho and Freaky Ali., {h}: Baar Baar Dekho, {t}: Hindi. {Analysis}: Baar Baar Dekho is mentioned as a film. The Hindi here is referring to the language. The relation between them means the language in which the film is made.] Response Mitch Mullany is a notable figure within the context of the film. Therefore, the relation between “Mitch Mullany” and “American” indicates his involvement in the American comedy film, possibly as a key figure in its production or performance. Prompt p2 Based on the following analysis, summarize the relation for the head entity and tail entities within five words. To make the answer more specific, you may add entity type into the answer. [For example, The occupancy of the show surpassed the Hindi films Baar Baar Dekho and Freaky Ali., {h}: Baar Baar Dekho, {t}: Hindi. {Analysis}: Baar Baar Dekho is mentioned as a film. The Hindi here is referring to the language. The relation between them means the language in which the film is made. Answer: language of film.] 
{The Breaks is an American 1999 comedy film written by and starring Mitch Mullany.}, {h}: {Mitch Mullany}, {t}:{American}, Analysis:{Mitch Mullany is...}. Answer: Response writer of film Table 5: An example of generating relational phrases. 13145 Ground Truth Relation: Country of citizenship Example #1 Prompt p3 Given a sentence {Heinz, legal name Mathias Halfpape (born in Hamburg) is a German entertainer.}, the corresponding head entity is {Mathias Halfpape} and the tail entity is {German}. You have previously summarized the relation between them as {legal name of person}. However, these phrases might not accurately describe the relation between {Mathias Halfpape} and {German}. Please re-analyze this relation. Response The “Mathias Halfpape” as a legal name but also adds that he is a “German entertainer”. The term “German” here relates to nationality or citizenship rather than being part of a legal name. Therefore, a more appropriate relation between “Mathias Halfpape” and “German” in this context would be “nationality” or “citizenship”, not “legal name”. Prompt p4 Based on the analysis{The “Mathias Halfpape”...}, please rethink and provide a revised phrase to describe the relation between head entity {Mathias Halfpape} and tail entity {German} in sentence {Heinz, legal name Mathias Halfpape (born in Hamburg) is a German entertainer.} No more than five words. Response nationality of person Ground Truth Relation: Country of citizenship Example #2 Prompt p3 Given a sentence {The Breaks is an American 1999 comedy film written by and starring Mitch Mullany.}, the corresponding head entity is {Mitch Mullany} and the tail entity is {American}. You have previously summarized the relation between them as {writer of film}. However, these phrases might not accurately describe the relation between {Mitch Mullany} and {American}. Please re-analyze this relation. Response The term “American” in this sentence refers to the nationality or origin of the film, not a person or object Mitch Mullany wrote. As“Mitch Mullany” is a person name, the relation between “Mitch Mullany” and “American” should be analyzed in terms of nationality association. Prompt p4 Based on the analysis{The term“American”...}, please rethink and provide a revised phrase to describe the relation between head entity {Mitch Mullany} and tail entity {American} in sentence {The Breaks is an American 1999 comedy film written by and starring Mitch Mullany.} No more than five words. Response nationality of Mitch Mullany Table 6: Case study of Rethink on the ground truth relation Country of citizenship in FewRel. 13146 Ground Truth Relation: org: website Example #1 Prompt p3 Given a sentence {Event Sponsors: West Marine <http://docs.google.com/www.westmarine.com> }, the corresponding head entity is {West Marine} and the tail entity is {<http://docs.google.com/www.westmarine.com>}. You have previously summarized the relation between them as {sponsor of }. However, these phrases might not accurately describe the relation between {West Marine} and {<http://docs.google.com/www.westmarine.com>}. Please re-analyze this relation. Response If the URL is intended to point to a webpage hosting details about West Marine’s sponsorship activities or an event they sponsor, then West Marine is not sponsoring the webpage itself but is the subject or featured sponsor on that page. Therefore, the link points to a webpage about West Marine’s sponsorship rather than indicating that West Marine sponsors the webpage itself. 
Prompt p4 Based on the analysis{If the URL...}, please rethink and provide a revised phrase to describe the relation between head entity {West Marine} and tail entity {<http://docs.google.com/www.westmarine.com>} in sentence {Event Sponsors: West Marine <http://docs.google.com/www.westmarine.com>.} No more than five words. Response website of sponsor Ground Truth Relation: org: website Example #2 Prompt p3 Given a sentence {Guide to ABC Radio Australia (RA) ABC RA = >http://www.abc.net.au/ra/guide/. }, the corresponding head entity is {Guide to ABC Radio Australia} and the tail entity is { http://www.abc.net.au/ra/guide/}. You have previously summarized the relation between them as {CEO of }. However, these phrases might not accurately describe the relation between {Guide to ABC Radio Australia} and {http://www.abc.net.au/ra/guide/}. Please re-analyze this relation. Response The “Guide to ABC Radio Australia” indicates a document, webpage, or resource providing information or directions regarding ABC Radio Australia.The URL “http://www.abc.net.au/ra/guide/” likely represents the actual location where such a guide can be accessed or viewed online. Prompt p4 Based on the analysis{The “Guide to ABC Radio Australia” indicates...}, please rethink and provide a revised phrase to describe the relation between head entity {Guide to ABC Radio Australia} and tail entity {http://www.abc.net.au/ra/guide/} in sentence {Guide to ABC Radio Australia (RA) ABC RA = >http://www.abc.net.au/ra/guide/.} No more than five words. Response website link Table 7: Case study of Rethink on the ground truth relation org: website in TACRED. 13147
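The prompt templates p1–p4 above form a two-stage pipeline: p1/p2 produce an initial relational phrase, and p3/p4 implement the Rethink step that revises it. As a rough illustration of how these templates could be chained programmatically, the sketch below fills them with a sentence, head entity, and tail entity; the call interface (an arbitrary `llm` callable) and helper names are assumptions for illustration, not code from the paper.

```python
# Illustrative sketch of chaining prompts p1-p4 (including the Rethink step).
# The `llm` argument is any callable mapping a prompt string to a completion;
# it is an assumed interface, not the authors' released code.
from typing import Callable

def extract_phrase(llm: Callable[[str], str], sentence: str, head: str, tail: str,
                   demonstration: str) -> str:
    # p1: analyze entity types and the relation between head and tail entities.
    p1 = (f"In sentence {{{sentence}}}, the head entity is {{{head}}} and the tail entity "
          f"is {{{tail}}}. Please first analyze the type of head entity and the tail entity, "
          f"then describe the relation between them. [{demonstration}]")
    analysis = llm(p1)
    # p2: summarize the analyzed relation within five words.
    p2 = ("Based on the following analysis, summarize the relation for the head entity and "
          "tail entities within five words. To make the answer more specific, you may add "
          f"entity type into the answer. [{demonstration}], {{{sentence}}}, {{{head}}}, "
          f"{{{tail}}}, Analysis:{{{analysis}}}. Answer:")
    return llm(p2).strip()

def rethink_phrase(llm: Callable[[str], str], sentence: str, head: str, tail: str,
                   phrase: str) -> str:
    # p3: ask the model to re-analyze a possibly inaccurate phrase.
    p3 = (f"Given a sentence {{{sentence}}}, the corresponding head entity is {{{head}}} and "
          f"the tail entity is {{{tail}}}. You have previously summarized the relation between "
          f"them as {{{phrase}}}. However, these phrases might not accurately describe the "
          f"relation between {{{head}}} and {{{tail}}}. Please re-analyze this relation.")
    re_analysis = llm(p3)
    # p4: produce a revised phrase of at most five words based on the re-analysis.
    p4 = (f"Based on the analysis {{{re_analysis}}}, please rethink and provide a revised "
          f"phrase to describe the relation between head entity {{{head}}} and tail entity "
          f"{{{tail}}} in sentence {{{sentence}}}. No more than five words.")
    return llm(p4).strip()
```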
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1298–1311 August 11-16, 2024 ©2024 Association for Computational Linguistics Grounding Language Model with Chunking-Free In-Context Retrieval Hongjin Qian1,2, Zheng Liu1∗, Kelong Mao2, Yujia Zhou2, Zhicheng Dou2 1 Beijing Academy of Artificial Intelligence 2 Gaoling School of Artificial Intelligence, Renmin University of China {chienqhj,zhengliu1026}@gmail.com Abstract This paper presents a novel Chunking-Free InContext (CFIC) retrieval approach, specifically tailored for Retrieval-Augmented Generation (RAG) systems. Traditional RAG systems often struggle with grounding responses using precise evidence text due to the challenges of processing lengthy documents and filtering out irrelevant content. Commonly employed solutions, such as document chunking and adapting language models to handle longer contexts, have their limitations. These methods either disrupt the semantic coherence of the text or fail to effectively address the issues of noise and inaccuracy in evidence retrieval. CFIC addresses these challenges by circumventing the conventional chunking process. It utilizes the encoded hidden states of documents for in-context retrieval, employing autoaggressive decoding to accurately identify the specific evidence text required for user queries, eliminating the need for chunking. CFIC is further enhanced by incorporating two decoding strategies, namely Constrained Sentence Prefix Decoding and Skip Decoding. These strategies not only improve the efficiency of the retrieval process but also ensure that the fidelity of the generated grounding text evidence is maintained. Our evaluations of CFIC on a range of open QA datasets demonstrate its superiority in retrieving relevant and accurate evidence, offering a significant improvement over traditional methods. By doing away with the need for document chunking, CFIC presents a more streamlined, effective, and efficient retrieval solution, making it a valuable advancement in the field of RAG systems. The codes will be released in this repository. 1 Introduction Recently, retrieval-augmented generation (RAG) has marked a significant advancement in the field ∗Corresponding author. of natural language processing (NLP). This technique has demonstrated remarkable effectiveness in reducing hallucination in text generation (Ji et al., 2023), particularly in knowledge-intensive tasks like open-domain question answering (Wang et al., 2019; Lewis et al., 2020; Shuster et al., 2021; Komeili et al., 2022). An RAG system typically consists of two components: the retriever and the generator. Given an input query, the retriever first identifies relevant evidence text, upon which the generator then generates the answer. The generator’s output should be grounded by precise evidence text obtained by the retriever. However, this poses challenges for most retrieval systems, as they often retrieve lengthy documents such as web pages. In practice, we only need specific grounding text from these documents to help answer user queries. Using lengthy documents directly in the RAG system presents two difficulties. First, generation models may struggle to handle the extensive length of these documents. Second, irrelevant or distracting content within the documents can lead the model astray from the main query, resulting in inaccurate response generation (Gao et al., 2024). 
To address this issue, common approaches involve chunking documents into smaller passages and employing strategies like reranking for relevance (Nogueira and Cho, 2020; Mao et al., 2021; Gao et al., 2024), or selecting passages based on other measurements (Asai et al., 2022; Jiang et al., 2023). However, the chunking process is often suboptimal, as determining the granularity of the passage chunking is challenging. Improper chunking can disrupt the semantics and result in incomplete and incoherent retrieved information (Dong et al., 2023). Another method involves adapting large language models (LLMs) to process longer contexts by training them on long contexts or implementing a sliding context window (Ratner et al., 2022; Chen et al., 2023). While these methods enable LLMs 1298 Search Engine LLM Chunking Sub-Optimal Evidence Semantic Cut-off Long Grounding Article Input Query What significant change occurred in Bach's style in the last decade of his life? Decoding Chunking-Based Chunking-Free Optimal Evidence Hidden States Evidence Selection Figure 1: Comparison of Chunking-Based and Chunking-Free Methods. The left panel illustrates the chunking-based method, involving chunking a lengthy document into smaller passages followed by refinement through passage ranking. The right panel depicts the chunking-free method proposed in this paper, where grounding text is directly decoded by LLMs without the need for document chunking. to handle longer texts, they do not fully address the issue of noise in the lengthy documents and cannot output the grounding text for the generated response (Kaddour et al., 2023). In this paper, we propose a Chunking-Free InContext (CFIC) retrieval approach aimed at helping the RAG system mitigate information bias introduced by document chunking and irrelevant noisy text. Specifically, given an input query and a long grounding document, instead of refining the long documents with a chunking-based method, we leverage the document’s encoded hidden states to perform Chunking-free In-Context Retrieval. It circumvents the traditional chunking process, allowing the retrieval system to auto-aggressively decode and pinpoint the precise evidence text to ground the response generation to a query. Figure 1 shows the comparison between the chunking-based method and the chunking-free method for grounding text retrieval. The chunking-free method demonstrates a superior ability to identify optimal evidence text, as it considers the entire document for a comprehensive perspective. Concretely, CFIC involves encoding a document into transformer hidden states. When a user query is input, CFIC continues to encode the query alongside task instructions following the hidden states, subsequently generating grounding text. In practice, we can cache the documents’ hidden states to further reduce computation1. Given the expectation 1In a single-sided transformer model, the forward side is auto-regressive; once an output token’s hidden state is computed, it remains unchanged for subsequent forward steps, allowing us to use these encoded states as a cache. for CFIC to process lengthy documents, it becomes imperative to adapt CFIC for handling long contexts. Considering the trade-off between efficiency and effectiveness, in this paper, we adapt CFIC to accommodate a 32k context, utilizing LLAMA27B-chat as the foundational model. To achieve this, we construct a dataset containing long document, user query and precise text evidence to training the foundation model via Supervised FineTuning (SFT). 
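The footnote above notes that, because decoding is causal, a document's hidden states can be computed once and reused. A minimal sketch of this caching idea with the Hugging Face `transformers` interface is shown below; it is an assumption about how the caching could be implemented, not the authors' code, and the checkpoint name is only a plausible choice for the LLaMA2-7B-chat base model.

```python
# Sketch (assumed implementation): cache the document's key/value states once,
# then reuse them when a query arrives, so only the new tokens are encoded.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # plausible base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
model.eval()

document = "..."  # long grounding article returned by the retriever
doc_ids = tokenizer(document, return_tensors="pt").input_ids

with torch.no_grad():
    # Single forward pass over the document; keep its key/value cache.
    doc_out = model(input_ids=doc_ids, use_cache=True)
    doc_cache = doc_out.past_key_values

query = ("Select several sentences from the article to answer my question. "
         "Question: ...")
query_ids = tokenizer(query, return_tensors="pt", add_special_tokens=False).input_ids

with torch.no_grad():
    # Causal attention means the cached document states are unchanged when the
    # query is appended, so only the query tokens require computation here.
    out = model(input_ids=query_ids, past_key_values=doc_cache, use_cache=True)
    next_token_logits = out.logits[:, -1, :]
```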
Despite its promise, CFIC encounters two major challenges: (1) Efficiency issue: the autoaggressive generation process involves executing attention interactions for generating each new token, a procedure that becomes particularly timeconsuming with longer contexts due to the management of exponentially larger attention matrices. This process requires substantial computational resources (Kaplan et al., 2020), and (2) Faithfulness issue: it is challenging to ensure the generation model’s output remains faithful to the original input context, given its open-ended decision boundary (Li et al., 2022b). To address these, we propose two decoding strategies that accelerate inference and ensure that generated text evidence originates from the corpus. These include: (1) utilizing sentence prefixes as decoding candidates to shift the model’s decision boundary from open-ended to document-dependent generation and (2) upon locating the appropriate sentence prefix, bypassing the decoding of intermediate tokens and directly selecting sentence ends with the highest likelihood of the [eos] token, thereby terminating the generation. Furthermore, to retrieve multiple text spans as evidence, we sample several sentence prefixes with the best likelihood as candidates and rank them by sequence likelihood. By this means, CFIC not only enhances the relevance and accuracy of retrieved evidence text but also preserves the semantic integrity of the information, effectively addressing major drawbacks of current retrieval systems. We tested CFIF on the LongBench tasks (Bai et al., 2023) including: (1) single-document question answering with datasets like NarrativeQA, Qasper, MulitfieldQA, and (2) multi-document QA with datasets like Musqus and HotpotQA. The experiment results verify the effectiveness of our method. In summary, our contributions are as follows: (1) we propose a chunking-free in-context retrieval method dedicated to the RAG system, aiding in locating precise text evidence to answer user 1299 queries; (2) we propose the CFIC model of which the ability to find text evidence from long context is enhanced via Supervised Fine-Tuning with selfconstructed dataset; (3) we design two decoding strategies that significantly improve the efficiency and accuracy of the CFIC’s decoding process. 2 Related Work The RAG framework, initially introduced in the works of Lewis et al. (2020), aimed to enhance language models’ capacity for generating knowledgebased responses(Chung et al., 2022; Yang et al., 2023). Subsequent research primarily focus on refining the RAG’s two core components. On the retrieval front, significant strides have been made towards more efficient and precise retrieval methods (Khandelwal et al., 2019; Nishikawa et al., 2022; Mao et al., 2023; Guo et al., 2022; Kang et al., 2023). For example, the arise of Dense Passage Retrieval significantly surpasses traditional sparse dense (Karpukhin et al., 2020). Parallel efforts on the generation side have concentrated on fine-tuning generative models to better harmonize with retrieved information, a notable example being the work of Izacard and Grave (2021b) in optimizing external knowledge utilization (Izacard and Grave, 2021a; Chung et al., 2022; Kamalloo et al., 2023; Qian et al., 2023b). Nevertheless, RAG encounters specific challenges, especially in managing lengthy and complex retrieved documents. Researchers, including Mao et al. (2021), have developed chunking and reranking techniques to enhance passage relevance. Furthermore, Guu et al. 
(2020) introduced methods for jointly learning retriever and generator models, thereby improving the coherence and relevance of outputs. Addressing the issue of lengthy contexts in RAG has involved either refining contexts (Li et al., 2022a; Jiang et al., 2023) or adapting generation models to handle extended contexts (Ratner et al., 2022; Chen et al., 2023).

Recent advancements in RAG predominantly incorporate large-scale language models (LLMs), such as GPT-3 and GPT-4, to augment language processing capabilities (Brown et al., 2020; OpenAI, 2023; Google, 2023). The integration of LLMs has paved the way for more contextually rich and nuanced generation, especially in aligning generated responses with human preferences (Ram et al., 2023; Zhou et al., 2024; Liu et al., 2023b). In RAG systems employing LLMs, the accuracy of retrieved textual evidence is crucial for reducing hallucinations and incorporating external knowledge (Zhang et al., 2023b; Yao et al., 2023; Bang et al., 2023; Qian et al., 2023a). However, the challenge of processing long and noisy contexts persists (Liu et al., 2023a; Li et al., 2022a; Xu et al., 2023). This paper introduces a chunking-free in-context retrieval approach that leverages transformer hidden states to generate grounding text evidence, treating evidence retrieval as a generative process. This method represents a more streamlined and efficient retrieval solution for RAG systems, marking a significant advancement over previous retrieval methodologies.

3 Method

3.1 Preliminary

In a RAG system, the system takes a user query q as input, retrieves text evidence A from a text corpus C using a retriever θ(·) as external knowledge, and utilizes a generation model ϕ(·) to produce the final response T. This pipeline can be formalized as:

$$A = \theta(q, C), \qquad T = \phi(q, A). \quad (1)$$

The retriever θ(·) can be either a standalone retriever (e.g., DPR (Karpukhin et al., 2020)) or a commercial search engine (e.g., Google), and the generation model ϕ(·) is usually a trained LM. Based on Eq. (1), the quality of the generated text T is bounded by the accuracy of the evidence A, emphasizing the importance of accurately locating the text evidence.

In practice, most RAG systems' retrievers cannot accurately find the exact text evidence, but only retrieve lengthy documents (e.g., web pages or pre-indexed articles) that contain it. As mentioned in Section 1, such lengthy documents might bias the generated content. Thus, given the retrieved evidence A, we usually select a few useful text spans, called supporting text evidence $P = \{p_1, \cdots, p_k\} \subseteq A$, to support the answer generation for the input query q in a RAG system. We define the process of finding supporting passages as a mapping function f(·):

$$P = \{p_1, \cdots, p_k\} = f(A). \quad (2)$$

The mapping function f(·) can take various forms, such as chunking the text evidence A and prioritizing relevant chunks through re-ranking. In this paper, we define the mapping function f(·) as a generation process in which we directly generate the supporting text evidence P conditioned on the transformer hidden states h = Trans(A) of the lengthy document:

$$P = f(A) \sim \mathrm{Generator}(P \mid h, q). \quad (3)$$

Compared to regular auto-regressive decoding, the above process is characterized by the fact that the generation target P contains text sourced from A. This means that once we determine the decoding prefix, we can skip the intermediate tokens and directly find the terminating position by computing the probability of inserting the [eos] token. This greatly improves inference efficiency while ensuring that the generated text accurately represents the source text. Additionally, a single supporting passage may not always be sufficient for question answering. Therefore, we can obtain multiple sentence prefixes as top-k candidates using sampling decoding. In this paper, our proposed model CFIC applies these ideas to generate the top-k supporting text evidence P, which are further discussed in the following sections.

[Figure 2: Overview of the proposed method, CFIC. The middle part shows the Constrained Sentence Prefix Decoding strategy, which ensures the generated text prefixes originate from the input article. The right part shows the Skip Decoding strategy, which bypasses decoding the intermediate tokens while terminating generation at the position with the best likelihood of the [eos] token. Gray tokens in the figure are bypassed during generation.]

3.2 The Proposed Model: CFIC

Figure 2 presents an overview of our proposed model, CFIC. The process begins with CFIC receiving a user query. It then retrieves a long article as grounding evidence through a search engine (e.g., Google). Subsequently, CFIC combines the long document and the query into an input prompt, following the format outlined in Table 2. This input prompt is encoded into hidden states. Based on these hidden states, CFIC first identifies the top-k sentence prefix candidates using the Constrained Sentence Prefix Decoding strategy. This strategy ranks the sentence prefixes by their generation score (accumulated token log probabilities normalized by token length). CFIC then skips the decoding of intermediate tokens and terminates the generation process by locating the [eos] token position with the highest likelihood (Skip Decoding). Consequently, we obtain k grounding evidence texts that can support downstream tasks. Note that this paper primarily focuses on pinpointing precise grounding text evidence within the long document, rather than on the retrieval of the long document itself. Therefore, we assess CFIC and all baseline models on the LongBench benchmark, which provides pre-prepared long documents. In the subsequent sections, we introduce the two proposed decoding strategies and then discuss the training and inference processes of CFIC.
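Before the formal description of the two strategies below, the following sketch illustrates their overall logic in simplified form. The helpers `prefix_logprob` and `eos_logprob` stand in for scoring calls to the language model, and the sketch enumerates sentence prefixes directly rather than performing constrained sampling over the vocabulary, so it should be read as a conceptual outline under those assumptions rather than the actual implementation.

```python
# Conceptual sketch of Constrained Sentence Prefix Decoding and Skip Decoding
# (simplified; scoring functions are assumed wrappers around the language model).
import re
from typing import Callable, List, Tuple

def split_sentences(article: str) -> List[str]:
    # Naive sentence splitting; a real system would use a proper tokenizer.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", article) if s.strip()]

def constrained_prefix_candidates(
    sentences: List[str],
    prefix_logprob: Callable[[str], float],  # length-normalized log-prob given (article, query)
    k: int = 3,
) -> List[int]:
    """Rank sentence start positions by the score of the shortest prefix that
    uniquely identifies each sentence, and keep the top-k (cf. the score s)."""
    scored: List[Tuple[float, int]] = []
    for idx, sent in enumerate(sentences):
        words = sent.split()
        prefix = sent
        # Grow the prefix word by word until it points to exactly one sentence.
        for n in range(1, len(words) + 1):
            candidate = " ".join(words[:n])
            if sum(s.startswith(candidate) for s in sentences) == 1:
                prefix = candidate
                break
        scored.append((prefix_logprob(prefix), idx))
    scored.sort(reverse=True)
    return [idx for _, idx in scored[:k]]

def skip_decode_span(
    sentences: List[str],
    start_idx: int,
    eos_logprob: Callable[[str], float],  # log-prob of [eos] right after a candidate span
    max_words: int = 256,
) -> str:
    """Extend the selected sentence to the sentence boundary where [eos] is most
    likely, within a word budget standing in for the token distance d."""
    best_span, best_score = sentences[start_idx], float("-inf")
    span, length = "", 0
    for sent in sentences[start_idx:]:
        length += len(sent.split())
        if length > max_words:
            break
        span = (span + " " + sent).strip()
        score = eos_logprob(span)
        if score > best_score:
            best_span, best_score = span, score
    return best_span
```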
Constrained Sentence Prefix Decoding

Normally, the generation process of an auto-regressive decoding model is:

$$w_n \sim \prod_{n=1}^{|w|} p(w_n \in V \mid w_{<n}, h), \quad (4)$$

where h represents the hidden states of the previous tokens. The current token, denoted by $w_n$, is selected from the entire vocabulary V of the generation model. In the case of CFIC, the generation target P consists of text spans that originate directly from the source context. Consequently, it is possible to define a more constrained generation space to ensure the faithfulness of the produced text. Specifically, we employ the prefix of each sentence within the source context as a generation constraint. This approach guarantees that the text generated by CFIC can be traced back to the input context. Thus, Eq. (4) can be modified as:

$$w_n \sim \prod_{n=1}^{|w|} p(w_n \in \bar{V} \mid w_{<n}, h), \quad (5)$$

where $\bar{V}$ denotes the token set containing the prefixes of each sentence. The sentence prefix serves as a position identifier that facilitates the identification of the starting point of a supporting passage within the source context. To select the top-k candidate passages, it is essential to differentiate k distinct sentence prefixes. This is achieved through constrained top-k sampling decoding, a process that entails selecting the next token $w_n$ from the top-k most likely tokens $\bar{V}_k \subseteq \bar{V}$ based on the token's probability $p(w_n \mid w_{<n})$. The sampling process terminates once the generated sentence prefixes are capable of uniquely identifying positions in the source context. The number of decoding steps required until termination is denoted by β, resulting in up to $k^{\beta}$ prefix candidates after β steps. We denote the generated sentence prefix by b. Subsequently, these prefix candidates are ranked according to the prefix sequence score s, the length-normalized accumulated log probability of the tokens:

$$s = \frac{1}{|w|} \sum_{n=1}^{|w|} \log p(w_n \mid w_{<n}). \quad (6)$$

Finally, the k sentence prefixes with the highest scores are selected. Referring to Figure 2 for illustration, the decoding process initiates by sampling k tokens, such as [Bach, In, ..., Throughout], to represent the first set of candidate tokens. Given that multiple sentences in the long article begin with the tokens [Bach, In], the decoding of subsequent tokens is necessary. For sentences that start with "Bach", the decoding terminates at step β = 2, and for sentences beginning with "In", it ends at step β = 3. Following this, we retain k = 2 sentence prefixes to identify the supporting passages.

Skip Decoding

Similarly, since the generation target originates exactly from the source text, once the generation prefix is determined, we can use it as a position identifier to locate the original text in the source. Subsequently, we can bypass decoding the intermediate tokens and directly compute the probability p([eos]) of the [eos] token after each sentence following the generated prefix. We select the position with the highest probability as the termination point. In practice, we calculate p([eos]) after each sentence within a predefined token distance d. Formally, given a generated prefix b, we determine the termination position as:

$$w^{*}_{[\mathrm{eos}]} = \arg\max_{l \in L} \; p_{[\mathrm{eos}]}(b \oplus l), \quad |l| \le d, \quad (7)$$

where l represents the token sequence following the prefix b with a maximum length of d.

Training and Inference

As previously discussed, we define the task of identifying supporting passages from a long source text for grounding downstream tasks as evidence generation.
To this end, it is crucial to enhance the generation model with the capability to pinpoint precise textual evidence within extensive texts. In this study, CFIC achieves this through Supervised Fine-Tuning (SFT). We employ a prompt, formed using the pair (q, A) as outlined in Table 2, as the input, and use the text evidence P as the target for generation. The model is trained using the negative log-likelihood (NLL) loss function: L(q, A, P∗) = − |P∗| X n=1 log p(P∗ n|P∗ <n, q, A). (8) The training dataset is introduced in Section 4.1. During the inference stage, given the input (q, A), we apply Constrained Sentence Prefix Decoding and Skip Decoding strategies to extract k supporting passages. Should these passages exhibit overlapping sections, we amalgamate such intersecting passages into a single cohesive passage. Subsequently, these collated supporting passages are utilized to ground downstream tasks. 4 Experiment 4.1 Datasets and Evaluation Metric As mentioned above, we train the CFIC model using data that contains (q, A, P) triplets via SFT. Most current datasets cannot provide such data format. Thus, we use self-constructed SFT data to train the CFIC model, and evaluate all baselines on the LongBench benchmark (Bai et al., 2023). Specifically, to construct the SFT training data, we first collect a corpus of lengthy articles, including Wikipedia articles, novels, and news articles. Subsequently, we randomly select 1302 Dataset SFT NarrativeQA Qasper MultiFieldQA HotpotQA MuSiQue Num of Samples 25,652 200 200 150 200 200 Ave. Length 12,248 18,409 3,619 4,559 9,151 11,214 Table 1: Statistical information of the datasets utilized in this paper, where the average length indicates the word count, typically smaller than the BPE-tokenized token length. Below is an article, read the article and answer my question after the article. Now the article begins: {Article} Now the article ends. Select several sentences from the article to answer my question. Question: {Question} Table 2: Prompt template in training and evaluation. text spans from these articles and ask ChatGPT to generate a query that can be answered by each text span. As for evaluation, we choose five datasets from LongBench including NarrativeQA (Koˇciský et al., 2017), Qasper (Dasigi et al., 2021), MultiFieldQA (Bai et al., 2023)), HotpotQA (Yang et al., 2018) and MuSiQue (Trivedi et al., 2022). Following the LongBench benchmark, we use F1-score as the evaluation metric. For further details of LongBench, please refer to Bai et al. (2023). We show the statistical information of all datasets in Table 1. 4.2 Baseline Settings In this study, we focus on in-context retrieval within the Retrieval-Augmented Generation (RAG) system. As such, we employ stand-alone LLMs as generators. Specifically, we utilize Llama2-7Bchat-4k (Touvron et al., 2023) and Vicuna-v1.57B-16k (Zheng et al., 2023) as our generators. To assess our chunking-free approach against the traditional chunking-based methods, the baseline model settings are as follow: Chunking-Base Method Chunking-based methods generally commence by segmenting a lengthy document into smaller passages using heuristic strategies, followed by reranking these passages with a ranking model. In our research, we investigate two prevalent chunking strategies: (1). Sliding Window Chunking (SW): This strategy involves dividing the document into sentences and then grouping these sentences into passages. 
Each passage is designed not to exceed a predefined maximum length of 256 words, with a stride of one sentence. (2). Paragraph-based Chunking (Para): Here, the document is split by paragraph markers (e.g., \n). We employ “bge-large-en-v1.5” (Xiao et al., 2023) and “llm-embedder” (Zhang et al., 2023a) as the ranking models. We utilize the SW and Para strategies to divide the document into passages, which are then reranked by the ranking models. The highest-ranking passages are chosen as the input context for the generators to support the QA tasks. Chunking-Free Method For the chunking-free models, we present the outcomes using Vicunav1.5-7B-16k (Zheng et al., 2023), LongChat7B-32k (Li et al., 2023), and LongAlpaca-7B32k (Chen et al., 2023) as baseline models. These models refine lengthy documents into concise text evidence, which then serves as context for generator to support QA tasks. To ensure a fair comparison, all baseline models provide a comparable volume of textual evidence for downstream tasks, maintaining consistency in the number of passages or token length. We also explore the effectiveness of feeding full articles into generators. 4.3 Implementation Detail To train CFIC, we employed the “LLAMA2-7Bchat” as the foundation model for our CFIC. During the training, we set the batch size to 1 per GPU and the learning rate to 1e-5. We set the gradient accumulation step as 8 and utilized the AdamW optimizer with an epsilon value of 1e-8. The model’s maximum length parameter was set to 32768. We train the model for 600 steps on 8 * Nvidia A800 80GB GPUs. For CFIC, We set the number of sampled sentence prefixes as k = 3 and the maximum decoding length as d = 256 (refers to Eq. (7)). Besides, we use a warm-up strategy to adjust the learning rate. To save GPU memory, we employed DeepSpeed’s Stage 2 zero optimization to save GPU memory. 4.4 Main Results Table 3 shows the main experiment results which are the performance across different QA tasks using various refined text evidence as context. From the results we have the following findings: First, CFIC 1303 Llama2-7B-chat-4k Vicuna-v1.5-7B-16k Model chunk nar qas mul hot mus nar qas mul hot mus BGE SW 13.9 22.0 34.0 34.0 14.0 12.1 27.3 37.5 33.6 13.5 BGE Para 12.1 21.7 31.4 31.2 12.3 10.2 23.2 34.7 31.7 12.5 LLM-Embedder SW 14.1 23.2 34.3 33.8 14.6 13.2 27.4 39.1 31.6 12.6 LLM-Embedder Para 13.2 21.7 34.1 32.9 12.6 12.3 25.1 36.3 31.1 12.1 Vicuna-7B 13.7 19.0 23.3 22.0 9.7 12.3 23.5 24.0 23.8 11.0 LongChat-7B 12.2 19.7 29.5 27.9 9.6 11.1 21.9 32.4 30.2 9.7 LongAlpaca-7B 12.8 19.3 26.8 28.8 10.3 11.2 21.2 25.2 27.2 10.2 CFIC-7B(Ours) 18.3 27.7 41.2 34.0 14.7 17.5 31.0 39.8 33.8 16.2 Full Article 18.7 19.2 36.8 32.8 9.4 19.4 26.1 38.5 25.3 9.8 Table 3: Main experiment results, which are the QA performance across various datasets, using different refined text evidence as context. Following Bai et al. (2023), we use F1-score as the evaluation metric. The best results are in bold and the secondary results are marked with underline. significantly outperforms other LLMs in chunkingfree in-context retrieval tasks as CFIC is specifically optimized to select precise text evidence crucial for grounding QA tasks. This underscores the necessity and effectiveness of supervised finetuning (SFT) in adpting the foundation model into the in-context retrieval task. 
Second, Chunkingbased methods serve as strong baselines due to their ability to extract passages directly from the source context, whereas LLMs lacking SFT tend to generate content that may not always align faithfully with the source material. CFIC, however, consistently surpass all chunking-based baselines across all datasets, indicating the potentiality of the chunking-free in-context retrieval paradigm. Last, Compared to using the entire article as context, our CFIC model significantly improves the performance of QA tasks across most datasets, except for the NarrativeQA dataset. This improvement evidences the critical role of identifying and utilizing the right and precise context in optimizing QA task performance, demonstrating the CFIC model’s efficiency in context filtering and utilization. As for the NarrativeQA dataset, we find that NarrativeQA’s precise text evidence frequently appears at the start of lengthy articles, a location that LLMs tend to prioritize their attention (Liu et al., 2023a). This might explain why CFIC does not perform as well on this dataset, given that its approach to identifying precise evidence could inadvertently introduce errors, thereby diminishing its accuracy. In practice, however, that precise text evidence can be located throughout the entire length of an article, not just at the beginning. 4.5 Discussion Ablation Study To assess the effectiveness of the design of CFIC, we conduct an ablation study by removing key components of the model, including: (1). Removal of Sentence Prefix Decoding Strategy (w/o prefix): we remove the constraint of limiting the decoding space to sentence prefixes. Instead, a beam search algorithm was employed to sample short sequences (each comprising 8 tokens) based on the input article. Subsequently, the top-k short sequences were matched back to the input article to identify starting prefixes. (2). Removal of Skip Decoding (w/o skip): we dispensed with the practice of bypassing intermediate tokens following the sentence prefix decoding. The model continued to decode the remaining tokens up to a maximum length of 256 tokens. (3). Removal of Both Decoding Strategies (w/o both): the CFIC model was tasked to decode outputs using a greedy search algorithm, devoid of both the sentence prefix and skip decoding strategies. (4). Absence of SFT (LongAlpaca7B): LongAlpaca-7B is a context-extended version of LLAMA2-7B-chat. We utilized LongAlpaca-7B as the base model, representing the variant of CFIC without task-specific SFT. The results of the ablation experiments are presented in Table 4. Our findings can be summarized as follows: (1). The removal of any of the CFIC model components resulted in a notable degradation in performance, underscoring the collective contribution of these elements to the model’s effectiveness. (2). The most substantial decrease in performance was observed when SFT was omitted. This suggests that the vanilla LLM struggles to accurately locate precise grounding text from lengthy documents, despite its enhanced capability 1304 Llama2-7B-chat-4k Model nar qas mul hot mus CFIC-7B 18.3 27.7 41.2 34.0 14.7 w/o prefix 16.4 26.0 39.3 33.0 12.5 w/o skip 15.8 27.0 37.6 30.1 11.6 w/o both 13.2 20.2 37.4 30.1 9.2 LongAlpaca-7B 12.8 19.3 26.8 28.8 10.3 Full Article 18.7 19.2 36.8 32.8 9.4 Table 4: Results of the ablation Study. for processing extended contexts. (3). Removing either the sentence prefix decoding or the skip decoding strategies led to an obvious reduction in performance. 
This finding verifies our hypothesis that these decoding strategies not only curtail decoding computational demands but also improve the fedelity of the generated grounding text. Choice of Decoding Length In our CFIC model, as defined in Eq. (7), the generation process is terminated upon locating the position of the [eos] token within a predetermined distance d. This distance is analogous to the maximum generation length typically set in standard text generation tasks, which governs the length of the decoded text. The selection of d involves a careful balance: too small a value may lead to excessively brief output grounding text, offering scant information for substantiating downstream tasks. Conversely, a larger d may result in longer output texts, potentially introducing additional textual noise and necessitating increased computational resources to process the extended sequences. To investigate the optimal choice of decoding length d in CFIC, we conducted experiments with various settings of this parameter. The results of these experiments are depicted in Figure 3. Our findings substantiate the initial hypotheses: the performance across all tasks progressively improves and reaches its zenith at a d value of 256. Beyond this point, performance begins to wane, suggesting that a setting of d = 256 strikes an effective balance for these tasks. This observation aligns with the intuition that a span of 256 tokens typically suffices to encapsulate a semantically complete and coherent unit of information. Case Study: CFIC v.s. GPTs OpenAI’s model APIs, including GPT-3.5 and GPT-4, serve as robust baselines in the domain of LLM. However, they were excluded from the primary model comparisons in our experiments for two primary rea64 128 256 512 Llama2-7B-Chat 10 15 20 25 30 35 40 45 mus qas hot mul 64 128 256 512 Vicuna-v1.5-7B 10 15 20 25 30 35 40 45 mus qas hot mul Figure 3: The choice of Maximum Decoding Length. sons: (1). these APIs lack of control over decoding process resulting in the inability to manipulate their decoding mechanisms to align with our methodological requirements. (2). The foundational models of GPT-3.5 and GPT-4 are characterized by their vast parameter sizes (e.g., 175 billion parameters), endowing them with exceptional language modeling capabilities, especially in handling extended contexts. However, our focus with CFIC is on applying LLMs with comparatively smaller parameter sizes. This approach ensures more manageable computational resource requirements and enhances model scalability. Despite these exclusions, we conducted a comparative case study, the results of which are presented in Table 5. This study reveals that our CFIC7B model consistently provided complete and relevant grounding text evidence in response to queries. In contrast, the other models exhibited limitations: (1). LongAlpaca-7B failed to accurately locate appropriate grounding text, resulting in the generation of information irrelevant to the downstream tasks. (2). GPT-3.5 is able to directly respond to queries, it did not successfully identify precise grounding text from the original source material. (3). Although GPT-4 managed to retrieve grounding text pertinent to the query, the information provided was incomplete, lacking the necessary comprehensiveness to fully support the response logically. 5 Conclusion This study introduces a Chunking-Free In-Context (CFIC) retrieval method for the RAG system, addressing the challenges of processing lengthy documents and refining evidence retrieval. 
Unlike traditional chunking-based methods that either compromise textual coherence or struggle with noise and inaccuracies, CFIC leverages auto-aggressive decoding to pinpoint precise evidence directly, eliminating the reliance on chunking. CFIC incorpo1305 Query: What hedge fund’s collapse in 1998 highlighted the need for regulation of derivatives? Answer: Long Term Capital Management (LTCM) CFIC-7B: In 1998, a trillion-dollar hedge fund called Long Term Capital Management (LTCM) was near collapse. Using mathematical models to calculate debt risk, LTCM used derivatives to leverage $5 billion into more than $1 trillion. The derivative transactions were not regulated, nor were investors able to evaluate LTCM’s exposures. LongAlpaca-7B: The catastrophic financial events of recent months have proved them (Born and Sheila Bair) right. In 2010, a documentary film Inside Job further alleged that derivatives regulation was ineffective from the Clinton administration on. GPT-3.5-Turbo: The hedge fund whose collapse in 1998 highlighted the need for regulation of derivatives was Long Term Capital Management (LTCM). GPT-4: In 1998, a trillion-dollar hedge fund called Long Term Capital Management (LTCM) was near collapse. Using mathematical models to calculate debt risk, LTCM used derivatives to leverage $5 billion into more than $1 trillion. Table 5: Results of Case Study. The text colored with teal refers to the grounding evidence for the user query. rates Constrained Sentence Prefix Decoding and Skip Decoding strategies to further enhances retrieval efficiency and accuracy. Through comprehensive evaluations on various open QA datasets, CFIC has demonstrated remarkable improvements in sourcing relevant and precise evidence to ground language models. Limitations This paper introduces a novel approach for Retrieval-Augmented Generation systems through the Chunking-Free In-Context (CFIC) retrieval method. Despite its advancements and effectiveness, there are certain limitations that warrant discussion. One of the primary limitations stems from the training data used to develop our models. The dataset, self-constructed and annotated using ChatGPT, may harbor annotation biases. Such biases could affect the model’s performance, particularly in its ability to generalize across different types of data or domains. While our approach excels in tasks requiring precise text evidence, it may offer limited assistance in scenarios demanding a highlevel understanding of context, such as summarization tasks. This limitation is due to the model’s focused capability on specific evidence retrieval rather than broader context comprehension. Additionally, in this study, we have set the maximum length that CFIC can handle to 32k tokens. While this threshold accommodates a wide range of documents, it may not suffice for longer texts, such as novels, which exceed this limit. This constraint is primarily dictated by the available computational resources, highlighting a need for more efficient processing methods or greater computational power to extend CFIC’s applicability to longer documents. With the increase in computational resources and advancements in model acceleration algorithms, we envision the future possibility of enabling CFIC to handle even longer contexts. This could potentially extend to encoding the entire corpus, facilitating corpus-level in-context retrieval for each query. 
Ethical Impact The development of CFIC builds upon existing Large Language Models (LLMs), which are trained on vast, diverse text corpora. This foundation introduces potential risks associated with biases inherent in the original training data. These biases can manifest in the model’s outputs, influencing the quality and impartiality of the retrieved evidence. Furthermore, the long documents processed by CFIC are sourced from the web, a domain rife with its biases. The web’s text content reflects a wide array of perspectives, some of which may be skewed or unrepresentative of broader viewpoints. Given that CFIC is designed to process and retrieve information from these documents, there is a risk that the model might inadvertently perpetuate or amplify these biases without the capacity to discern or mitigate them. Acknowledgement This work was supported by National Science and Technology Major Project (2023ZD0121504) and Public Computing Cloud, Renmin University of China. The work was partially done at the Engineering Research Center of Next-Generation Intelligent Search and Recommendation, MOE. References Akari Asai, Matt Gardner, and Hannaneh Hajishirzi. 2022. Evidentiality-guided generation for 1306 knowledge-intensive NLP tasks. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2226–2243, Seattle, United States. Association for Computational Linguistics. Yushi Bai, Xin Lv, Jiajie Zhang, Hongchang Lyu, Jiankai Tang, Zhidian Huang, Zhengxiao Du, Xiao Liu, Aohan Zeng, Lei Hou, Yuxiao Dong, Jie Tang, and Juanzi Li. 2023. Longbench: A bilingual, multitask benchmark for long context understanding. Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, et al. 2023. A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity. arXiv preprint arXiv:2302.04023. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877–1901. Yukang Chen, Shengju Qian, Haotian Tang, Xin Lai, Zhijian Liu, Song Han, and Jiaya Jia. 2023. Longlora: Efficient fine-tuning of long-context large language models. arXiv:2309.12307. Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Alex Castro-Ros, Marie Pellat, Kevin Robinson, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416. Pradeep Dasigi, Kyle Lo, Iz Beltagy, Arman Cohan, Noah A. Smith, and Matt Gardner. 2021. A dataset of information-seeking questions and answers anchored in research papers. Zican Dong, Tianyi Tang, Lunyi Li, and Wayne Xin Zhao. 2023. A survey on long text modeling with transformers. arXiv preprint arXiv:2302.14502. Yunfan Gao, Yun Xiong, Xinyu Gao, Kangxiang Jia, Jinliu Pan, Yuxi Bi, Yi Dai, Jiawei Sun, Qianyu Guo, Meng Wang, and Haofen Wang. 2024. 
Retrievalaugmented generation for large language models: A survey. Google. 2023. Gemini: A family of highly capable multimodal models. https://goo.gle/ GeminiPaper. Yu Guo, Zhengyi Ma, Jiaxin Mao, Hongjin Qian, Xinyu Zhang, Hao Jiang, Zhao Cao, and Zhicheng Dou. 2022. Webformer: Pre-training with web pages for information retrieval. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’22, page 1502–1512, New York, NY, USA. Association for Computing Machinery. Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM: Retrievalaugmented language model pre-training. In International Conference on Machine Learning. JMLR.org. Gautier Izacard and Edouard Grave. 2021a. Distilling knowledge from reader to retriever for question answering. In International Conference on Learning Representations. Gautier Izacard and Edouard Grave. 2021b. Leveraging passage retrieval with generative models for open domain question answering. Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1–38. Zhengbao Jiang, Frank F. Xu, Luyu Gao, Zhiqing Sun, Qian Liu, Jane Dwivedi-Yu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Active retrieval augmented generation. arXiv preprint arXiv:2305.06983. Jean Kaddour, Joshua Harris, Maximilian Mozes, Herbie Bradley, Roberta Raileanu, and Robert McHardy. 2023. Challenges and applications of large language models. arXiv preprint arXiv:2307.10169. Ehsan Kamalloo, Nouha Dziri, Charles LA Clarke, and Davood Rafiei. 2023. Evaluating open-domain question answering in the era of large language models. arXiv preprint arXiv:2305.06984. Minki Kang, Jin Myung Kwak, Jinheon Baek, and Sung Ju Hwang. 2023. Knowledge graph-augmented language models for knowledge-grounded dialogue generation. arXiv preprint arXiv:2305.18846. Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for opendomain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769–6781, Online. Association for Computational Linguistics. Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2019. Generalization through memorization: Nearest neighbor language models. arXiv preprint arXiv:1911.00172. 1307 Mojtaba Komeili, Kurt Shuster, and Jason Weston. 2022. Internet-augmented dialogue generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8460–8478, Dublin, Ireland. Association for Computational Linguistics. Tomáš Koˇciský, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. 2017. The narrativeqa reading comprehension challenge. Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-Augmented Generation for knowledgeintensive NLP tasks. In Advances in Neural Information Processing Systems, volume 33, pages 9459– 9474. 
Dacheng Li, Rulin Shao, Anze Xie, Ying Sheng, Lianmin Zheng, Joseph E. Gonzalez, Ion Stoica, Xuezhe Ma, and Hao Zhang. 2023. How long can opensource llms truly promise on context length? Daliang Li, Ankit Singh Rawat, Manzil Zaheer, Xin Wang, Michal Lukasik, Andreas Veit, Felix Yu, and Sanjiv Kumar. 2022a. Large language models with controllable working memory. Wei Li, Wenhao Wu, Moye Chen, Jiachen Liu, Xinyan Xiao, and Hua Wu. 2022b. Faithfulness in natural language generation: A systematic survey of analysis, evaluation and optimization methods. arXiv preprint arXiv:2203.05227. Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2023a. Lost in the middle: How language models use long contexts. Xiao Liu, Hanyu Lai, Hao Yu, Yifan Xu, Aohan Zeng, Zhengxiao Du, Peng Zhang, Yuxiao Dong, and Jie Tang. 2023b. Webglm: Towards an efficient webenhanced question answering system with human preferences. arXiv preprint arXiv:2306.07906. Kelong Mao, Zhicheng Dou, Fengran Mo, Jiewen Hou, Haonan Chen, and Hongjin Qian. 2023. Large language models know your contextual search intent: A prompting framework for conversational search. Yuning Mao, Pengcheng He, Xiaodong Liu, Yelong Shen, Jianfeng Gao, Jiawei Han, and Weizhu Chen. 2021. Reader-guided passage reranking for opendomain question answering. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 344–350, Online. Association for Computational Linguistics. Sosuke Nishikawa, Ryokan Ri, Ikuya Yamada, Yoshimasa Tsuruoka, and Isao Echizen. 2022. Ease: Entity-aware contrastive learning of sentence embedding. arXiv preprint arXiv:2205.04260. Rodrigo Nogueira and Kyunghyun Cho. 2020. Passage re-ranking with bert. arXiv preprint arXiv:1901.04085. OpenAI. 2023. Gpt-4 technical report. https:// cdn.openai.com/papers/gpt-4.pdf. Hongjin Qian, Zhicheng Dou, Jiejun Tan, Haonan Chen, Haoqi Gu, Ruofei Lai, Xinyu Zhang, Zhao Cao, and Ji-Rong Wen. 2023a. Optimizing factual accuracy in text generation through dynamic knowledge selection. Hongjing Qian, Yutao Zhu, Zhicheng Dou, Haoqi Gu, Xinyu Zhang, Zheng Liu, Ruofei Lai, Zhao Cao, Jian-Yun Nie, and Ji-Rong Wen. 2023b. Webbrain: Learning to generate factually correct articles for queries by grounding on large web corpus. Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham. 2023. In-context retrieval-augmented language models. arXiv preprint arXiv:2302.00083. Nir Ratner, Yoav Levine, Yonatan Belinkov, Ori Ram, Inbal Magar, Omri Abend, Ehud Karpas, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham. 2022. Parallel Context Windows Improve In-Context Learning of Large Language Models. arXiv. Window. Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela, and Jason Weston. 2021. Retrieval augmentation reduces hallucination in conversation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3784–3803, Punta Cana, Dominican Republic. Association for Computational Linguistics. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2022. Musique: Multihop questions via single-hop question composition. Zhiguo Wang, Patrick Ng, Xiaofei Ma, Ramesh Nallapati, and Bing Xiang. 2019. 
Multi-passage BERT: A globally normalized BERT model for open-domain question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP), pages 5878–5882, Hong Kong, China. Association for Computational Linguistics. Shitao Xiao, Zheng Liu, Peitian Zhang, and Niklas Muennighoff. 2023. C-pack: Packaged resources to advance general chinese embedding. 1308 Peng Xu, Wei Ping, Xianchao Wu, Lawrence McAfee, Chen Zhu, Zihan Liu, Sandeep Subramanian, Evelina Bakhturina, Mohammad Shoeybi, and Bryan Catanzaro. 2023. Retrieval meets long context large language models. arXiv preprint arXiv:2310.03025. Hui Yang, Sifu Yue, and Yunzhong He. 2023. Auto-gpt for online decision making: Benchmarks and additional opinions. arXiv preprint arXiv:2306.02224. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. Jia-Yu Yao, Kun-Peng Ning, Zhen-Hui Liu, Mu-Nan Ning, and Li Yuan. 2023. Llm lies: Hallucinations are not bugs, but features as adversarial examples. arXiv preprint arXiv:2310.01469. Peitian Zhang, Shitao Xiao, Zheng Liu, Zhicheng Dou, and Jian-Yun Nie. 2023a. Retrieve anything to augment large language models. Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, et al. 2023b. Siren’s song in the ai ocean: A survey on hallucination in large language models. arXiv preprint arXiv:2309.01219. Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2023. Judging llm-as-a-judge with mt-bench and chatbot arena. Yujia Zhou, Zheng Liu, Jiajie Jin, Jian-Yun Nie, and Zhicheng Dou. 2024. Metacognitive retrievalaugmented large language models. 1309 A Computational Expense To explore the computational efficiency of our method, we conduct experiments on two datasets. The results, shown in the Table 6, indicate that our decoding strategies enable CFIC to achieve a threefold increase in inference efficiency compared to without the decoding stategies. Furthermore, although we set the maximum decoding length to 256 tokens, our analysis on the MuSiQue and Qasper datasets reveals that the Constrained Sentence Prefix Decoding strategy typically stops decoding after an average of 2.7 and 3.1 tokens, respectively. This is because most sentence prefixes can be distinguished after decoding just three tokens, preventing the need to decode the full 256 tokens until reaching the [eos] token. In practice, the Constrained Sentence Prefix Decoding operates rapidly due to the minimal number of tokens required for decoding. The Skip Decoding, while more time-consuming due to its computation of the probability for the [eos] token after each sentence via a for loop, which can be significantly optimized with parallel computing techniques. We are confident that further engineering efforts will enhance CFIC’s inference time. B In-Depth Evaluation with ChatGPT and Human We conducted additional evaluations to assess the faithfulness and effectiveness of the generated text evidence. We defined faithfulness as the measure of how accurately the generated text evidence reflects the original long documents, and effectiveness as how well the generated text evidence supports the query. 
We randomly selected 50 samples from the test sets for two evaluation approaches. Firstly, we tasked ChatGPT with determining the faithfulness of the generated text evidence to the long documents and assessing the extent to which the evidence supports the query (supported, partially supported, not supported). Secondly, we had two human annotators blindly rate the quality of text evidence generated by different models. The results, presented below, show that BGE-SW and CFIC-7B are highly faithful to the original documents, directly extracting text spans from them. Notably, CFIC-7B provides more effective text evidence compared to other methods, indicating its superior performance in QA tasks. C Choice of k in Constrained Sentence Prefix Decoding Regarding the optimal selection of the number of retained distinct sentence prefixes (k) in our decoding process, we explored the impact of varying k on the F1-score and inference latency. These experiments were performed on a single Tesla A800-80G GPU with an inference batch size of 8. Our investigation, summarized in Table 8, demonstrates the influence of k on both performance and efficiency: (1) We observed that k = 4 represents the performance peak, beyond which the effectiveness declines, indicating that an increase in text evidence beyond a certain point introduces data noise and diminishes returns. Considering both efficiency and accuracy, we chose k = 3 for all experiments detailed in this paper, as it offers an optimal balance. (2) The decoding strategies significantly improved efficiency. When comparing CFIC-7B with and without decoding strategies, the average latency reduction is evident, demonstrating the strategies’ effectiveness in enhancing processing speed without compromising result quality. For instance, at k = 3, CFIC-7B achieved an average latency of 361 ms, compared to 1,065 ms for CFIC-7B without decoding strategies, underscoring a substantial improvement in inference efficiency. 1310 Dataset MuSiQue Qasper Ave. Input Length (tokens) 11,214 3,619 Ave. Output Length (tokens) CFIC-7B 233 230 CFIC-7B w/o decoding strategies 219 211 LongAlpaca-7B 189 196 Ave. Decoding Latency (ms/sample) CFIC-7B 564 361 CFIC-7B-Constrained Sentence Prefix Decoding 124 129 CFIC-7B-Skip Decoding 366 207 CFIC-7B w/o decoding strategies 1,480 1,065 LongAlpaca-7B 1,279 1,078 Table 6: Comparison of Inference Latency on MuSiQue and Qasper datasets. Type Supported Partially Supported Not Supported Faithfulness by ChatGPT BGE-SW 34% 40% 26% 98% LongAlpaca-7B 20% 32% 48% 70% CFIC-7B 42% 44% 14% 96% by Human BGE-SW 28% 46% 26% 100% LongAlpaca-7B 10% 34% 56% 66% CFIC-7B 40% 54% 6% 100% Table 7: In-depth Evaluation by ChatGPT and Human. k Qasper 1 2 3 4 5 6 F1-Score 21.3 24.2 27.7 28.0 26.5 25.7 Ave. Latency / ms CFIC-7B 249 302 361 413 453 519 CFIC-7B w/o decoding strategies 802 878 1,065 1,607 2,378 2,899 Table 8: Impact of varying k on F1-Score and Average Latency for Qasper dataset. 1311
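Appendix A notes that Skip Decoding currently evaluates the [eos] probability after each candidate sentence in a Python loop and that this could be parallelized. One way this could be batched is sketched below; it is an illustrative assumption built on the Hugging Face interface, not the authors' implementation, and padding and caching details are simplified.

```python
# Sketch (assumed implementation): score [eos] after every candidate sentence
# boundary in a single batched forward pass instead of a per-candidate loop.
import torch

@torch.no_grad()
def batched_eos_logprobs(model, tokenizer, context_ids, candidate_spans):
    """context_ids: 1-D LongTensor of the already-processed prefix (article, query,
    and generated sentence prefix). Returns log P([eos] | context + span) per span."""
    eos_id = tokenizer.eos_token_id
    pad_id = tokenizer.pad_token_id if tokenizer.pad_token_id is not None else eos_id
    batch = [
        torch.cat([context_ids,
                   tokenizer(span, return_tensors="pt", add_special_tokens=False).input_ids[0]])
        for span in candidate_spans
    ]
    lengths = torch.tensor([len(ids) for ids in batch])
    padded = torch.nn.utils.rnn.pad_sequence(batch, batch_first=True, padding_value=pad_id)
    mask = (torch.arange(padded.size(1))[None, :] < lengths[:, None]).long()
    logits = model(input_ids=padded, attention_mask=mask).logits
    # Logits at each sequence's last real token predict the following token.
    next_token_logits = logits[torch.arange(len(batch)), lengths - 1]
    return torch.log_softmax(next_token_logits, dim=-1)[:, eos_id]
```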
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13148–13171 August 11-16, 2024 ©2024 Association for Computational Linguistics Effects of diversity incentives on sample diversity and downstream model performance in LLM-based text augmentation Jan Cegin♠†, Branislav Pecher♠†, Jakub Simko†, Ivan Srba† Maria Bielikova†, Peter Brusilovsky‡ ♠Faculty of Information Technology, Brno University of Technology, Brno, Czechia † Kempelen Institute of Intelligent Technologies, Bratislava, Slovakia {jan.cegin, branislav.pecher, jakub.simko, ivan.srba, maria.bielikova}@kinit.sk ‡ University of Pittsburgh, Pittsburgh, USA [email protected] Abstract The latest generative large language models (LLMs) have found their application in data augmentation tasks, where small numbers of text samples are LLM-paraphrased and then used to fine-tune downstream models. However, more research is needed to assess how different prompts, seed data selection strategies, filtering methods, or model settings affect the quality of paraphrased data (and downstream models). In this study, we investigate three text diversity incentive methods well established in crowdsourcing: taboo words, hints by previous outlier solutions, and chaining on previous outlier solutions. Using these incentive methods as part of instructions to LLMs augmenting text datasets, we measure their effects on generated texts’ lexical diversity and downstream model performance. We compare the effects over 5 different LLMs, 6 datasets and 2 downstream models. We show that diversity is most increased by taboo words, but downstream model performance is highest with hints. 1 Introduction The emergence of large language models (LLMs) such as GPT-4, LLaMA, etc., has sparked interest in using them to augment textual datasets (Ubani et al., 2023; Dai et al., 2023; Piedboeuf and Langlais, 2023). In these scenarios, the number of samples is expanded by paraphrasing existing ones through LLM prompting. The created paraphrases are then added to the original dataset and used for downstream model training. Such methods have been explored for various domains such as sentiment classification (Piedboeuf and Langlais, 2023; Ubani et al., 2023), news classification (Piedboeuf and Langlais, 2023) and health symptoms classifications (Dai et al., 2023). However, investigation of the effect of various prompts, specific instructions, and selection of seed data inspired by crowd in the text augmentation process when using LLMs is lacking. METHOD→ TABOO CHAINING HINTS Dataset↓ BERT Mistral BERT Mistral BERT Mistral 20News 0/1 0/2 1/1 0/0 0/1 0/4 AG News 0/0 0/2 0/0 0/2 0/0 0/3 ATIS 2/0 1/0 0/0 0/0 0/0 0/1 FB 1/0 1/1 0/0 0/0 0/1 0/2 SST-5 0/0 0/2 0/0 0/1 0/0 0/2 Yelp 0/0 0/2 0/0 0/0 0/0 0/2 Table 1: Number of overperforming or underperforming cases of downstream models (BERT, Mistral) finetuned on LLM-generated data. We observe a varying performance when diversity incentive methods are used during the training set generation. Only the hints incentive method appears to frequently significantly outperform the baseline (= no incentives), indicating its potential usefulness in LLM augmentation. Taboo and chaining methods achieve mixed results, sometimes even dropping below the baseline. The effects appear more frequently with fine-tuned Mistral than BERT. The rows denote 6 datasets used in the experiments. We used 5 different LLMs to generate the training sets for each dataset-method-model combination. 
Crowdsourcing is an established practice for collecting training or validation examples for a variety of NLP tasks. Scenarios of data collection using human workers can be similar to those of data augmentation: workers create paraphrases of existing sentences chosen from a dataset. The aim of such data collection is to increase the data diversity and the subsequent performance of classifiers trained on the data (Larson et al., 2019, 2020). To increase the diversity, various methods are used in crowdsourcing to guide workers. These include taboo words (Larson et al., 2020), where the most significant words from the collected data are identified and listed in the worker instructions to be avoided during paraphrasing; chaining (Rhys Cox et al., 2021; Larson et al., 2019), where outliers in the previous paraphrases are identified and used as seed sentences in the next round of data collection; and hints, where previous outlier paraphrases are used as examples in the instructions. The hints method (Rhys Cox et al., 2021; Zhou and Bhat, 2020) itself is similar to LLM in-context learning, where examples are included in the instructions for the model to achieve better performance. All of these diversity incentive methods report increased diversity of paraphrases, and some also report increased performance of the classifiers trained on the so-collected data.
This work is inspired by the parallels between crowdsourcing and LLM prompting, and by the effect of diversity incentive methods on the diversity of paraphrases and the performance of models trained on them. We investigate the effects of the three diversity incentive methods (originating in crowdsourcing) on data augmentation using LLMs. The baseline, taken from a previous study (Cegin et al., 2023), is simple prompting for paraphrases. Measuring paraphrase diversity and downstream performance of classification models, we assess whether the diversity incentives (added to the base prompt) improve LLM outputs similarly to how they do in crowdsourcing scenarios. To our knowledge, this is the first work to investigate the effects of diversity incentive methods on LLMs. In this paper, we answer the following research questions:
RQ1: Does the usage of diversity incentive methods on LLMs yield more diverse paraphrases (compared to base prompting)?
RQ2: Do classifiers achieve better performance if trained on data augmented using diversity incentive methods on LLMs (compared to base prompting)?
To answer these questions1, we conducted a data augmentation experiment using 5 different LLMs on 6 different datasets in the tasks of sentiment (movie and app reviews), news, and intent (flight and voice assistant commands) classification. In this experiment, we repeatedly collect LLM paraphrases using different diversity incentive methods. Then, we compare the lexical diversity of the collected data and the performance of downstream classifiers. Additionally, we conduct an ablation study, in which we modify the diversity incentive methods with random data to validate that the inputs used by these methods (e.g., the most influential taboo words, outlier paraphrases) contribute to the methods' performance, and we evaluate a combination of the best-performing methods for lexical diversity and model performance. In total, we collected 253,500 paraphrases.
1 Data and code at: https://github.com/kinit-sk/LLM-div-incts
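To make the three incentive types just described concrete, here is a minimal sketch of how the corresponding LLM prompts could be assembled. The wording follows the templates in Appendix H; the function names and string-layout details are our own illustration, not the authors' code.

```python
def base_prompt(seed: str, n: int = 3) -> str:
    # Base "prompt" condition: plainly ask for n paraphrases of the seed sentence.
    return f'Rephrase an original question or statement {n} times. Original phrase: "{seed}".'

def taboo_prompt(seed: str, taboo_words: list[str], n: int = 3) -> str:
    # Taboo condition: forbid the most significant words found in round-1 paraphrases.
    words = ", ".join(f'"{w}"' for w in taboo_words)
    return base_prompt(seed, n) + f" Don't use the words {words} in your responses."

def hints_prompt(seed: str, hints: list[str], n: int = 3) -> str:
    # Hints condition: show previously collected outlier paraphrases as examples.
    examples = "\n".join(f'"{h}".' for h in hints)
    return base_prompt(seed, n) + f"\n### Example paraphrases:\n{examples}\n###"

def chaining_prompt(outlier: str, n: int = 3) -> str:
    # Chaining condition: the base prompt, but seeded with a round-1 outlier paraphrase.
    return base_prompt(outlier, n)
```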
The most prominent findings are the following: 1) We do not observe statistically significant improvements in lexical diversity of the generated datasets, but only minor improvements using the taboo method, 2) The hints method increases the performance of classification models trained on such data compared to the baseline, while also reducing standard deviation and thus increasing the stability of results, 3) The chaining method and taboo method both do not significantly affect the performance of classification models trained on such data compared to the baseline. 2 Related work: Crowdsourcing and LLM-based augmentation 2.1 Crowdsourcing diverse paraphrases Crowdsourcing of paraphrases is an established method to collect data for dataset building and augmentation in NLP (Larson et al., 2019, 2020; Zhou and Bhat, 2020; Wei et al., 2018; Rhys Cox et al., 2021). In this process, a worker is asked to paraphrase a seed sentence to create new variants (Wei et al., 2018; Larson et al., 2020). To increase the diversity of paraphrases, various instruction variants are used, building on the assumption (shown by (Larson et al., 2020; Joshi and He, 2022; Wang et al., 2022)), that performance of downstream models correlates with training set diversity. The hints method (Rhys Cox et al., 2021; Zhou and Bhat, 2020) guides workers towards a variety of possible solutions by showing them examples of the most distinct paraphrases previously created by other workers. A variation of this method displays word-clouds of recommended words to be used. Hints have been used for the data collection of user utterances for task-oriented chatbots and to collect diverse motivational messages. The taboo method (Larson et al., 2020) instructs workers to avoid specific words when paraphrasing. These “taboo words” are drawn from previously collected paraphrases as most influential using a linear SVM. Taboo words have been used in the collection of data for intent classification. The chaining method (Rhys Cox et al., 2021; Larson et al., 2019) identifies outliers or most distinct paraphrases within the already collected data and uses them as seed sentences. It is applied in 13149 variations for data collection of intent utterances and of motivational messages. In crowdsourcing, all three methods show increases in the paraphrase diversity and model performance, compared to base prompting. 2.2 Data augmentation via LLMs LLMs such as GPT-2 (Radford et al., 2019) or BART (Lewis et al., 2020) have previously been used to create paraphrases. Additional extensions used style transfer to create paraphrases of a certain linguistic style (Krishna et al., 2020), syntax control of the generated paraphrases (Goyal and Durrett, 2020; Chen et al., 2020), multi-lingual paraphrases in a zero-shot setting (Thompson and Post, 2020) and LLM finetuning using Low-Rank Adaption for specific domain paraphrase collection (Chowdhury et al., 2022). Recent studies used GPT-3.5 and GPT-4 as data augmentation techniques that were compared with previous stateof-art NLP augmentation techniques (Piedboeuf and Langlais, 2023; Ubani et al., 2023). Two studies report better performance in using LLMs as data augmenters than using previous state-of-art techniques in both paraphrasing existing texts (Dai et al., 2023) and in a zero-shot setup of generating new texts using specific prompts (Ubani et al., 2023). Another study reports mixed results, when GPT-3.5 is compared with previous state-of-the-art techniques (Piedboeuf and Langlais, 2023). 
Regardless of the mixed results, GPT-like models have already been used as augmenters in the domains of automated scoring (Fang et al., 2023) and low-resource language generation (Ghosh et al., 2023). However, LLMs can also produce repeating outputs of lower quality (Cegin et al., 2023; Cox et al., 2023). To the best of our knowledge, there is no study that investigates if and how diversity incentives (established in crowdsourcing) can be used in LLM-based paraphrasing. We hypothesize that the use of diversity incentives can prevent the known drawback of LLMs of generating highly similar, repetitive content (a challenge that was addressed in crowdsourcing by diversity incentives).
3 Data collection and evaluation methodology
We collected paraphrases for all combinations of the following: 5 different LLMs, 6 datasets, and 3 diversity incentive methods + 1 base prompting. For each combination, 5 collection iterations were performed: in each, 6 random seed sentences per label were drawn from the dataset. For each prompt fired, 5 paraphrases were collected. This totalled 142,500 collected paraphrases when aggregated across datasets and LLMs. For the ablation study and the combination of best methods in Section 6, we collected an additional 111,000 paraphrases. As the diversity incentive methods need some previously collected data to determine their cues (hints, seeds or taboo words), each iteration consisted of 2 rounds: first, we collected data using only the basic prompt, and in the second round, we collected data using the given diversity incentive method (or the base prompt method). Thus, the resulting datasets for each method consist of seed data and data collected from both rounds. The entire data collection process is visualized in Figure 1.
After the paraphrases were collected, we evaluated them in several steps. First, we manually checked the validity of a subset (50%) of the collected data (i.e., is the created sample a true paraphrase retaining the label?). Second, we computed the diversity of the collected data, comparing the mean vocabulary size (no. of unique words) and mean number of unique 3-grams for each diversity incentive method (refers to RQ1). Third, we evaluated the performance of models trained on the created paraphrases (refers to RQ2). For each combination of LLM, dataset and method, we finetuned BERT-large 5 times and Mistral-7B-v0.1 3 times (the dataset also determined the classification task to which a model was finetuned). We evaluated the accuracy of the trained models on the full test set of the given dataset for BERT, and on a subset of the test set for Mistral to save computational resources, following previous works (Chang and Jia, 2023; Köksal et al., 2023; Li and Qiu, 2023; Gao et al., 2021), as the inference time is long and costly. Details of the finetuning process can be found in Appendices D and E.
3.1 Prompt design
As our base prompt, we adopted the instruction design from a previous LLM-paraphrasing study (Cegin et al., 2023). There, the prompt plainly instructs to "Paraphrase this text or sentence 5 times:", which is followed by the seed sentence. For the taboo method, we take the implementation from (Larson et al., 2020) that uses a linear SVM trained on a bag-of-words representation in a one-vs-many setting to identify the 3 most significant words, which are then used in the instructions.
Figure 1: Overview of our methodology. For each dataset, we randomly sample 6 samples per label that are used as seed sentences for LLM data augmentation. We collect data in 2 rounds: the first using only the prompt method, and the second in parallel for the prompt method and the 3 different diversity incentive methods. These are added together to form the datasets. A BERT-large or Mistral classifier is fine-tuned 5 or 3 times, respectively, on each of the collected datasets and then evaluated. We repeat the entire process 5 times.
We compute the taboo words on the first round of collected data, which was gathered using the prompt method. We also filter out named entities using NLTK so that they are not included as taboo words. The prompt used in this method is taken from (Cegin et al., 2023). For the chaining method, we use an outlier detection method from (Larson et al., 2019), where we first compute, per label, a mean embedding vector from the samples collected in the first round. Then, using Euclidean distance, we find the collected samples that are farthest away from the mean vector to be used as seed sentences in the second round of data collection. The prompt used in this method is the same as in the prompt method. The hints method is similar to chaining in that we find outliers in the collected data in the same way. Here, however, the outliers are only included in the prompt itself as examples listed with the given seed sentence. The listed examples are always only those that have been created from the given seed to be paraphrased. The prompt is the same as in the prompt method, with an added delimiter section listing the 3 different hints for the seed sentence. Templates and examples of the prompts can be found in Appendix H.
3.2 LLMs used as generators
We used 5 different LLMs as data augmenters: 2 open-source LLMs, LLaMA-2, and 2 closed LLMs. We chose the open LLMs based on their different performance and size on the OpenLLM leaderboard. We used the instruction-finetuned versions of the LLMs available on HuggingFace. Namely, for LLaMA-2 (Touvron et al., 2023) we use LLaMA-2-70B-instruct, for Platypus (Lee et al., 2023) we use Platypus-70B-instruct, and for Mistral (Jiang et al., 2023) we use Mistral-7B-instruct. We collected the data on a custom private infrastructure with a 16-core CPU, 64 GB RAM and 4xA100 GPUs. As for the closed LLMs, we used 2 of the most widely used: GPT-3.5, denoted as ChatGPT (gpt-3.5-turbo-1106 version), and GPT-4 (gpt-4-0613 version).
3.3 Datasets used
We used 6 different datasets for our data collection experiments from the domains of news, intent and sentiment classification. We specifically focused on multi-class English datasets, as the diversity incentive methods were originally employed in crowdsourcing processes that used multi-class English datasets. We used the 20 News (Lang, 1995) and AG News (Zhang et al., 2015) datasets for news classification, the FB (Schuster et al., 2019) and ATIS (Hemphill et al., 1990) datasets for intent classification, and the SST-5 (Socher et al., 2013) and Yelp (Zhang et al., 2015) datasets for sentiment classification. We did not use all of the labels for the news and intent classification datasets, but randomly selected a subset of them. More details can be found in Appendix F.
3.4 Ablation study setup
To investigate whether the diversity incentive methods actually influence the diversity of the collected data and the performance of classifiers trained on such data, we conduct an ablation study (the selection heuristics that the ablation modifies are sketched below).
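To make the two selection steps from Section 3.1 (and the random variants that the ablation replaces them with) concrete, the following is a hedged sketch of how taboo words and outlier paraphrases could be derived. It mirrors the description above rather than the authors' released code; the named-entity filtering with NLTK is omitted for brevity, and the sentence-embedding function is left abstract.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

def top_taboo_words(texts, labels, target_label, k=3):
    """Pick the k most indicative words for target_label via a one-vs-rest linear SVM
    over bag-of-words features (NE filtering, done with NLTK in the paper, is omitted)."""
    vec = CountVectorizer(lowercase=True)
    X = vec.fit_transform(texts)
    y = [1 if l == target_label else 0 for l in labels]
    svm = LinearSVC().fit(X, y)
    vocab = np.array(vec.get_feature_names_out())
    top = np.argsort(svm.coef_[0])[::-1][:k]   # largest positive weights for the target label
    return vocab[top].tolist()

def outlier_paraphrases(paraphrases, embed, k=3):
    """Return the k paraphrases farthest (Euclidean) from the label's mean embedding.
    `embed` maps a list of strings to an (n, d) array; any sentence encoder works here."""
    E = embed(paraphrases)
    dists = np.linalg.norm(E - E.mean(axis=0), axis=1)
    far = np.argsort(dists)[::-1][:k]
    return [paraphrases[i] for i in far]
```

The ablated variants from Section 3.4 simply replace these two routines with uniform random choices of words or previously generated paraphrases.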
Here, we repeat the data collection process for the open-source LLMs (Mistral, Platypus) and LLaMA-2 using modified versions of each of the diversity incentive method to investigate whether the particular setup of the methods themselves as they have been used in crowdsourcing literature influences the results. For the taboo method, instead of using the most significant words from the previously generated paraphrases we used 3 random words from the generated paraphrases. For the chaining method and hints method, instead of using the outliers as the next seed sentences or as a hints, we used any previously generated paraphrase that was randomly chosen as seed or hint respectively. 4 Paraphrase validity and diversity 4.1 Validity of paraphrases Before evaluating validity of paraphrases, we filtered for malformed phrases, empty phrases or duplicated phrases as per (Cegin et al., 2023). As we collect only 5 samples per one seed sentence, we have detected no duplicated phrases. There were some malformed phrases generated by all LLMs with the exception of ChatGPT, but their number was generally low. The number of collected samples per dataset can be found in Appendix G. The highest amount of mangled or empty paraphrases were detected in GPT-4 responses, mostly when using the chaining method, where the number of invalid paraphrases was approx. 5%. For Mistral, LLaMA-2 and Platypus we detected around 1% of mangled paraphrases. We found no impact of diversity incentive on the number of mangled or empty paraphrases for these LLMs. The detected mangled or empty paraphrases were removed and not included in the next stages. Second, for each dataset and LLM combination, we sampled 50% of the collected data to be manually validated, i.e. we checked whether the resulting paraphrases are semantically equivalent to the seed sentences and their labels. Details are in Appendix B. Among diversity incentive methods, we detected no invalid utterance, in line with the findings of (Cegin et al., 2023). 4.2 Lexical diversity of paraphrases Next, we investigated the effect of diversity incentive methods on the lexical diversity of the collected datasets. We focused on the number of collected unique words (vocabulary) and the number of collected unique 3-grams for each dataset. As we repeated the data collection process 5 times for each dataset and LLM combination, we report the mean numbers of collected unique words and 3-grams. We visualize our findings in Appendix I. In nearly all cases except for one (ChatGPT for the AG News dataset) the taboo method yielded a higher-than-baseline number of unique words and 3-grams. The hints and chaining methods yielded only occasional increases in lexical diversity, with fluctuating results of increased and decreased lexical diversity across LLMs and datasets. However, the resulting increases in lexical diversity were not statistically significant, as we investigated using the Wilcoxon signed-rank test (p=0.05). In more details, the taboo method increased mean no. unique words 30/30 cases and no. unique 3-grams 29/30 cases. The chaining method had better diversity than the baseline in 9/30 cases for unique ngrams and in 4/30 cases the diversity was similar. It achieved better diversity in 10/30 cases for unique words and similar in 9/30 cases. The hints method yielded similar results, achieving better no. of unique ngrams than the baseline 10/30 cases and similar in 5/30 cases, while achieving better no. unique words in 9/30 cases and 8/30 cases it was similar to the baseline. 
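The vocabulary and 3-gram counts compared above reduce to two set sizes. A minimal sketch of how they can be computed follows; whitespace tokenization is our simplification, as the paper does not specify the exact tokenizer.

```python
def lexical_diversity(texts):
    """Count unique words and unique 3-grams over a collected dataset."""
    words, trigrams = set(), set()
    for text in texts:
        tokens = text.lower().split()
        words.update(tokens)
        trigrams.update(zip(tokens, tokens[1:], tokens[2:]))
    return {"unique_words": len(words), "unique_3grams": len(trigrams)}

# Example usage on a toy set of paraphrases.
print(lexical_diversity([
    "tell me the fastest way to get home",
    "what is the quickest route home",
]))
```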
The relative increase in lexical diversity ranges from approx. 2% (Yelp, SST-5 datasets) to 10% (ATIS, FB datasets) for both the no. of unique words and 3-grams. In summary, even though the taboo method increases the lexical diversity in nearly all of the cases, the increase is not statistically significant. This contrasts with the crowdsourcing literature. It indicates that the LLMs already use a lexically rich vocabulary with base prompting, hence the low benefit of diversity incentive methods.
4.3 Ablation study results
Here, we compared the number of collected n-grams and words between the ablated and non-ablated diversity incentive methods. We label methods as having similar performance if the difference in the number of collected 3-grams or words is less than 10, and we also perform statistical tests. We report the difference between the non-ablated and ablated methods in Figures 6 and 7. The non-ablated taboo method has better results than its ablated counterpart in both collected words (19/30 cases better, 8/30 similar, 3/30 worse) and n-grams (22/30 cases better, 7/30 similar, 1/30 worse). This indicates that the use of the most significant words helps LLMs generate more diverse data in most cases. In contrast, the non-ablated chaining and hints methods yield better diversity in only 8/30 cases for the number of unique words and even less so for the number of unique 3-grams. In more than half of the cases, the lexical diversity decreased. This indicates that the usage of outliers as seed sentences or as examples is not desirable when targeting higher lexical diversity.
We answer RQ1 (Does the usage of diversity incentive methods on LLMs yield more diverse paraphrases?) as follows: the usage of the taboo method increases the lexical diversity of collected data when compared to both the baseline method and the ablated version of the method itself. The other two methods, however, affect the diversity of collected paraphrases only randomly. These changes are, moreover, not statistically significant, indicating that the LLMs use a rich lexical vocabulary even without the diversity incentives themselves.
5 Finetuning models on data collected via diversity incentive methods
To investigate whether the diversity incentive methods improve the performance of downstream models, we finetuned BERT-large 5 times and Mistral 3 times for each LLM-dataset combination. Additionally, as we work with limited data, which was found to cause large variance and instability in finetuning results (Mosbach et al., 2020, 2023; Pecher et al., 2023; Chang and Jia, 2023), we sampled data 5 times. This resulted in 25 finetuned classifiers for BERT (5 data collection rounds and 5 finetunings for each of those data collection rounds) and 15 for Mistral that we evaluate per dataset-LLM combination. The full details about the hyperparameters and the finetuning setup of the BERT and Mistral classifiers can be found in Appendices D and E, respectively. We report the accuracy of the finetuned models on the test split of each dataset and focus on 2 main attributes: mean accuracy and stability of performance (measured by the standard deviation of accuracy). Additionally, we also conducted Mann-Whitney U tests (p=0.05) between the baseline prompt method and the other diversity incentive methods. We are interested in consistent, better performance of a diversity incentive method over the prompt baseline across LLMs and datasets, as fluctuating performance could be an indicator of random effects. See the summary in Table 1 and full results in Appendix C.
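The significance testing used in Sections 4 and 5 corresponds to standard SciPy calls. A sketch under the assumption that per-run scores are collected into simple lists (the numbers below are illustrative, not results from the paper):

```python
from scipy.stats import mannwhitneyu, wilcoxon

# Accuracies of classifiers trained on baseline-prompt data vs. hints data
# (illustrative values, one per finetuning run).
baseline_acc = [60.0, 59.1, 61.2, 60.4, 59.8]
hints_acc    = [60.9, 60.3, 61.8, 60.7, 60.5]

# Unpaired comparison of accuracy distributions (Section 5).
u_stat, p_acc = mannwhitneyu(hints_acc, baseline_acc, alternative="two-sided")

# Paired comparison of lexical diversity across the 5 repeated collection rounds (Section 4).
baseline_vocab = [410, 398, 421, 405, 415]
taboo_vocab    = [432, 410, 428, 419, 430]
w_stat, p_div = wilcoxon(taboo_vocab, baseline_vocab)

print(f"Mann-Whitney U p={p_acc:.3f}, Wilcoxon p={p_div:.3f} (significant if < 0.05)")
```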
5.1 Impact of diversity incentives on model performance In terms of mean achieved accuracy from all diversity incentive methods while finetuning BERT, the hints method achieved best performance across all LLM and dataset combinations by consistently outperforming or achieving similar mean value as the baseline prompt method in 28 out of 30 LLM and dataset combinations - 20 cases of better mean performance and 9 cases of similar (difference less than 0.1%). However, only 3 out of the 19 (15.79%) increases were statistically significant. Finetuning of Mistral yielded stronger results as the hints method achieved better performance 25/30 times, 4/30 times the performance was similar and once worse. Out of the 25 times the hints method performed better, 14 times (56%) it was statistically significant. We speculate that this might be due to better capabilities of the model to use the augmented data. In terms of LLMs used for data augmentation, the statistically significant increases were achieved in 3/6 cases for Platypus and Mistral, in 4/6 cases for LLaMA2 and GPT-4. The relative increase in mean performance ranged from 0.6% to 2.5% better performance than the baseline for BERT and 1% to 11% for Mistral. The taboo method did significantly worse than the baseline for BERT in 3/30 cases, and only once better. On the other hand, the decrease on Mistral happened in 2/30 cases, while 9/30 times there was a significant increases in performance. The taboo method achieved better results on Mistral, similar to the hints method. The chaining method did not perform better or worse in most cases, yielding a very similar mean performance in most cases for both BERT and Mistral. In terms of performance stability, BERT finetuned on data collected via the hints method achieved better stability of performance (standard deviation relative difference less than 5%) in 22/30 cases and similar stability of performance in 5 cases. For Mistral, better stability was achieved in 26/30 cases, with 2 cases of similar and 2 cases of worse stability (on the FB dataset). The relative increase of stability over baseline prompt method 13153 is from approx. 5% to 35% for BERT and from 10% to 66% for Mistral. The taboo method achieves better stability for 14/30 cases, 9/30 cases it is worse than the baseline and 7/30 cases the stability is similar for BERT. For Mistral the results are similar: 15/30 cases the stability is better, 11/30 cases it is worse and 4/30 cases it is similar to the baseline. The chaining method achieves better stability of performance only half of the time for BERT and 18/30 cases for Mistral, with 8 cases of worse stability. In nearly all cases the hints method achieves higher mean performance than the baseline prompt method and on average achieves higher stability of performance as seen by decreased standard deviation and increased minimum value of finetuned models for both BERT and Mistral. The increases in mean performance and stability are more significant for Mistral than BERT, being statistically significant in 14/26 cases of better performance. The taboo method more often than not increases the performance over the baseline and achieves lower stability, but only does so around half of the time, which could indicate random chance at play. Models finetuned on data collected using the taboo method can also underperform significantly. The chaining method performs similar to the taboo method, with fluctuating results in both stability and mean performance. 
5.2 Ablation study results
Similar to Section 4, we also evaluate the diversity incentive methods in terms of an ablation study, conducted as described in Section 3.4, to investigate whether the setup of the methods themselves contributes to their performance. We visualize our findings in Appendix K. The non-ablated hints method has better mean performance than its ablated version in 29/30 cases for BERT and in 27/30 cases for Mistral, with statistically significant results in 8/30 cases for BERT and 4/30 for Mistral. This might indicate that the usage of outliers as hints for the LLMs tends to increase the quality of collected data in data augmentation scenarios when compared to hints chosen randomly, as in the ablated method. The non-ablated taboo method achieves better mean performance in 8/18 cases (all of them statistically significant) for BERT and 12/30 cases for Mistral (6 cases of statistical significance). However, the ablated version of the taboo method was significantly better than the non-ablated version in 4/30 cases. This implies that the use of the most significant words as taboo instructions for the LLMs has no significant effect in data augmentation. The non-ablated version of the chaining method achieves better mean performance in 9/30 cases (4 cases of statistical significance) for BERT and in 12/30 for Mistral (3 cases of statistical significance). For Mistral, the results were statistically significantly worse in an equal number of cases (3). This implies, similar to the taboo method, that the usage of previous outliers as seed sentences has no significant effect on LLMs in a data augmentation scenario when compared to the usage of random previous paraphrases as seed sentences.
We answer RQ2 (Do classifiers achieve better performance if trained on data augmented using diversity incentive methods on LLMs?) as follows: only models finetuned on data collected via the hints method achieve better stability and mean performance than those trained on data collected via the baseline prompt method. The hints method also achieves better mean performance and stability of performance when compared to its ablated version. The data collected via the taboo and chaining methods have a random influence on the performance of finetuned models. These results indicate that the usage of outliers as hints for LLMs in data augmentation scenarios is beneficial, while the other methods have no advantage over the baseline of using only prompt instructions.
6 Combining diversity incentives
As the taboo method achieved the best results in lexical diversity and the hints method achieved the best results in model performance, as a follow-up, we decided to combine these two methods to see if we could achieve an improvement. We performed the data collection and finetuning process in the same way as described in Section 3. In terms of lexical diversity, the combined method does not have a statistically significant effect on the results, although the mean number of unique words is higher than the baseline in 18/30 cases and the number of unique n-grams is higher in 16/30 cases. However, in some of the remaining cases, a considerable (more than 5%) drop can be observed. In terms of model performance, the combined method statistically significantly decreased model performance relative to the baseline in 5/30 cases for BERT, with no increases, and increased performance in 4/30 cases for Mistral. Additionally, it always performed worse than either the hints or the taboo method.
In summary, the combination of the hints and taboo methods into one method grants little to no advantage over either of the methods in both lexical diversity and model performance. We hypothesize that this might be due to the more complicated instructions given to the LLM when collecting the data. A decoupling of the methods into a chain of tasks could potentially improve this approach in the future.
7 Discussion
Given the results of our experiments, we note the following observations:
First, contrary to the performance of diversity incentive methods observed by related work in crowdsourcing settings (better lexical diversity of paraphrases and better performance of downstream models), not all of the methods show an improvement in lexical diversity when used with LLMs. The worst-performing method is the chaining method; recent work has already pointed out that LLMs create progressively worse paraphrases when their own outputs are repeatedly used as seed sentences (Tripto et al., 2023). However, none of the changes in lexical diversity are statistically significant.
Second, the best-performing method for data augmentation is the hints method, which is similar to in-context learning, where demonstrations of samples are provided to the LLM as part of the prompt. This might be the reason why this method works so well, as the LLM's own paraphrases guide it to better output, similar to in-context learning.
Third, we observe that, contrary to some previous works (Larson et al., 2020; Joshi and He, 2022), the lexical diversity of the paraphrases does not correlate with the performance of models trained on them. Even though the data collected using the taboo method yield the highest lexical diversity, models trained on such data do not achieve consistently better performance against the baseline.
Fourth, the increase in mean performance and stability seems small, but in relative terms (compared to the baseline method) it is considerable, as the increase in mean performance can range from 0.6% to 2.5% over the baseline for BERT and from 1% to 11% for Mistral. For stability, the increases are even larger: for BERT the range is between 5% and 35% over the baseline, and for Mistral from 10% to 66%.
Fifth, diversity incentives require additional computations (for significant words and outlier paraphrases) and also require a larger LLM context (e.g., hints use additional paraphrases in the instructions of the model), meaning higher costs. As such, the increased computation costs may not warrant the use of diversity incentives.
Sixth, the combination of the best method for lexical diversity (taboo) and the best method for model performance (hints) did not yield increases in both lexical diversity and model performance, but rather performed poorly. We hypothesize that this might be due to the increased context length for the LLM, with additional instructions that are hard to carry out in one single action.
The promising results of the hints method open possibilities for investigating in-context learning for text generation in LLMs, as the quality of data generated using hints seems to be better than without them. This is in line with recent results (Cox et al., 2023) that indicate that the usage of previous examples in instructions for LLMs leads to better generated data.
8 Conclusion
In this work, we investigated the effects of different diversity incentive methods used in crowdsourcing on the lexical diversity of LLM-augmented textual datasets and the performance of classification models trained on such data. We compared 3 such methods with a baseline of using only prompts asking the LLM to paraphrase a given seed. We experimented with 5 LLMs on 6 datasets. Our results indicate that the taboo method increases the lexical diversity of the collected data, but that this change is not statistically significant and affects performance only randomly. The hints method affects lexical diversity randomly, but increases the performance of classification models (both in stability and in mean performance) that were trained on data collected using this method. The chaining method does not improve the lexical diversity or the performance of classification models trained on data collected using it. The combination of the hints method and the taboo method does not significantly increase lexical diversity or model performance.
A common downside of diversity incentive methods is the increase in inference costs. Also, there is still some randomness present when using these methods, as even the best-performing methods do not increase the lexical diversity or the performance of models in all cases. The notable relative increase in stability of performance and mean performance of models trained on data collected using the hints method indicates that LLMs can produce data of better quality using this method when aiming for downstream task classifier performance.
Limitations
We note several limitations to our work. First, we did not explore the usability of the diversity incentive methods for languages other than English or for multi-lingual language models. Second, we did not use different types of prompts in our experiments and followed those used in previous studies (Cegin et al., 2023; Larson et al., 2020). Different prompts could affect the quality of the LLM outputs, but exploring them would radically increase the size of this study, and as such we decided to leave this for future work. Third, we evaluated the finetuned Mistral models on only a subset of the test data to save computational resources, as some datasets had large test sets, similar to other works (Chang and Jia, 2023; Li and Qiu, 2023; Gao et al., 2021; Köksal et al., 2023). We did, however, use different splits for each finetuning to mitigate the impact of sample bias. Fourth, the worse results of the taboo method might be due to the fact that the method sometimes uses unrelated words. Taboo words are determined per label, which may yield words with little relevance to the seed sentence. This limitation stems from our replication of the method from crowdsourcing, where no such filters were described in the original work of Larson et al. (2020). Fifth, we collected data from only 2 open-source LLMs as well as from LLaMA-2, but we believe that the inclusion of different LLMs and the consistent improvement of the hints method on model performance across data collected from various LLMs means that this does not threaten our findings. Sixth, we only used 2 classifiers for finetuning, namely BERT-large and Mistral-7B-v0.1. However, we believe that the repeated data collection rounds and multiple finetunings on the collected data for a variety of datasets mitigate this drawback, and as such it does not threaten our findings.
Seventh, we collected 5 samples per seed sentence and used 6 seed sentences per label from the datasets we used, which resulted in (relatively to the original datasets) smaller datasets (ranging from approx. 200 to 700 sampels). The total amount of all collected paraphrases amounts to 253,500 paraphrases, reflecting the multiple data collection rounds datasets and LLMs we used. However, we did not investigate the effects of diversity incentives on larger amounts of collected data to investigate if such data augmentation methods decrease in effectiveness when augmenting larger amounts of seed data. Eight, we only used the default settings of the diversity incentive methods and thus we did not compare the different number of seed sentences other than 6 per label and different number of hints to investigate the effectiveness of the diversity incentive methods under different settings. Ninth, the reproducibility of our data collection process for ChatGPT and GPT-4 is dependent upon the owners of ChatGPT services as the models we used in our study might be deprecated and not available in due time. This is, however, counterbalanced by the inclusion of 3 open LLMs. Tenth, we do not know if any of the 6 datasets used in this study have been used for training the LLMs we used for data collection and if this had any effect on our results and findings. As such, we do not know what kind of effect the diversity incentive methods would have on data augmentation of new, unpublished datasets. This limitation is part of the recently recognized possible “LLM validation crisis”, as described by (Li and Flanigan, 2023). Eleventh, we do not provide a direct comparison of LLMs against each other, as the seed sentences used for each data collection round changed for each LLM randomly. We believe, however, that this does not threaten the results, as the goal of this paper is to compare diversity incentive methods in a direct comparison on LLMs, not LLMs between each other. Acknowledgements This work was partially supported by AI-CODE (GA No. 101135437), VIGILANT (GA No. 101073921), and vera.ai (GA No. 101070093), projects funded by the European Union under the Horizon Europe. This work was also partially supported by MODERMED a project funded by the Slovak Research and Development Agency, GA No. APVV-22-0414. Part of the research results were obtained using the computational resources procured in the national project National competence centre for high performance computing (project code: 13156 311070AKF2) funded by European Regional Development Fund, EU Structural Funds Informatization of society, Operational Program Integrated Infrastructure. This work was supported by the Ministry of Education, Youth and Sports of the Czech Republic through the e-INFRA CZ (ID:90254). References Jan Cegin, Jakub Simko, and Peter Brusilovsky. 2023. ChatGPT to replace crowdsourcing of paraphrases for intent classification: Higher diversity and comparable model robustness. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 1889–1905, Singapore. Association for Computational Linguistics. Ting-Yun Chang and Robin Jia. 2023. Data curation alone can stabilize in-context learning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8123–8144, Toronto, Canada. Association for Computational Linguistics. Wenqing Chen, Jidong Tian, Liqiang Xiao, Hao He, and Yaohui Jin. 2020. 
A semantically consistent and syntactically variational encoder-decoder framework for paraphrase generation. In Proceedings of the 28th International Conference on Computational Linguistics, pages 1186–1198, Barcelona, Spain (Online). International Committee on Computational Linguistics. Jishnu Ray Chowdhury, Yong Zhuang, and Shuyi Wang. 2022. Novelty controlled paraphrase generation with retrieval augmented conditional prompt tuning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 10535–10544. Samuel Rhys Cox, Ashraf Abdul, and Wei Tsang Ooi. 2023. Prompting a large language model to generate diverse motivational messages: A comparison with human-written messages. In Proceedings of the 11th International Conference on Human-Agent Interaction, HAI ’23, page 378–380, New York, NY, USA. Association for Computing Machinery. Haixing Dai, Zhengliang Liu, Wenxiong Liao, Xiaoke Huang, Yihan Cao, Zihao Wu, Lin Zhao, Shaochen Xu, Wei Liu, Ninghao Liu, Sheng Li, Dajiang Zhu, Hongmin Cai, Lichao Sun, Quanzheng Li, Dinggang Shen, Tianming Liu, and Xiang Li. 2023. Auggpt: Leveraging chatgpt for text data augmentation. Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. 2023. Qlora: Efficient finetuning of quantized llms. arXiv preprint arXiv:2305.14314. Luyang Fang, Gyeong-Geon Lee, and Xiaoming Zhai. 2023. Using gpt-4 to augment unbalanced data for automatic scoring. Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3816–3830, Online. Association for Computational Linguistics. Sreyan Ghosh, Chandra Kiran Evuru, Sonal Kumar, S Ramaneswaran, S Sakshi, Utkarsh Tyagi, and Dinesh Manocha. 2023. Dale: Generative data augmentation for low-resource legal nlp. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Sentosa, Singapore. Tanya Goyal and Greg Durrett. 2020. Neural syntactic preordering for controlled paraphrase generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 238–252, Online. Association for Computational Linguistics. Charles T. Hemphill, John J. Godfrey, and George R. Doddington. 1990. The ATIS spoken language systems pilot corpus. In Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27,1990. Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7b. Nitish Joshi and He He. 2022. An investigation of the (in)effectiveness of counterfactually augmented data. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3668–3681, Dublin, Ireland. Association for Computational Linguistics. Abdullatif Köksal, Timo Schick, and Hinrich Schuetze. 2023. MEAL: Stable and active learning for few-shot prompting. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 506–517, Singapore. Association for Computational Linguistics. Kalpesh Krishna, John Wieting, and Mohit Iyyer. 2020. 
Reformulating unsupervised style transfer as paraphrase generation. EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference, pages 737–762. Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres. 2019. Quantifying the carbon emissions of machine learning. arXiv preprint arXiv:1910.09700. Ken Lang. 1995. Newsweeder: Learning to filter netnews. In Armand Prieditis and Stuart Russell, editors, Machine Learning Proceedings 1995, pages 331–339. Morgan Kaufmann, San Francisco (CA). 13157 Stefan Larson, Anish Mahendran, Andrew Lee, Jonathan K. Kummerfeld, Parker Hill, Michael A. Laurenzano, Johann Hauswald, Lingjia Tang, and Jason Mars. 2019. Outlier detection for improved data quality and diversity in dialog systems. NAACL HLT 2019 - 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference, 1:517–527. Stefan Larson, Anthony Zheng, Anish Mahendran, Rishi Tekriwal, Adrian Cheung, Eric Guldan, Kevin Leach, and Jonathan K. Kummerfeld. 2020. Iterative feature mining for constraint-based data collection to increase data diversity and model robustness. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8097–8106, Online. Association for Computational Linguistics. Ariel N. Lee, Cole J. Hunter, and Nataniel Ruiz. 2023. Platypus: Quick, cheap, and powerful refinement of llms. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871–7880, Online. Association for Computational Linguistics. Changmao Li and Jeffrey Flanigan. 2023. Task contamination: Language models may not be few-shot anymore. Xiaonan Li and Xipeng Qiu. 2023. Finding support examples for in-context learning. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 6219–6235, Singapore. Association for Computational Linguistics. Marius Mosbach, Maksym Andriushchenko, and Dietrich Klakow. 2020. On the stability of fine-tuning bert: Misconceptions, explanations, and strong baselines. In International Conference on Learning Representations. Marius Mosbach, Tiago Pimentel, Shauli Ravfogel, Dietrich Klakow, and Yanai Elazar. 2023. Few-shot fine-tuning vs. in-context learning: A fair comparison and evaluation. In Findings of the Association for Computational Linguistics: ACL 2023, pages 12284– 12314, Toronto, Canada. Association for Computational Linguistics. Branislav Pecher, Ivan Srba, and Maria Bielikova. 2023. On the effects of randomness on stability of learning with limited labelled data: A systematic literature review. arXiv preprint arXiv:2312.01082. Frédéric Piedboeuf and Philippe Langlais. 2023. Is ChatGPT the ultimate data augmentation algorithm? In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 15606–15615, Singapore. Association for Computational Linguistics. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Samuel Rhys Cox, Yunlong Wang, Ashraf Abdul, Christian von der Weth, and Brian Y. Lim. 2021. 
Directed diversity: Leveraging language embedding distances for collective creativity in crowd ideation. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, CHI ’21, New York, NY, USA. Association for Computing Machinery. Sebastian Schuster, Sonal Gupta, Rushin Shah, and Mike Lewis. 2019. Cross-lingual transfer learning for multilingual task oriented dialog. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3795–3805, Minneapolis, Minnesota. Association for Computational Linguistics. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics. Brian Thompson and Matt Post. 2020. Paraphrase generation as zero-shot multilingual translation: Disentangling semantic similarity from lexical and syntactic diversity. In Proceedings of the Fifth Conference on Machine Translation, pages 561–570, Online. Association for Computational Linguistics. Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and finetuned chat models. Nafis Irtiza Tripto, Saranya Venkatraman, Dominik Macko, Robert Moro, Ivan Srba, Adaku Uchendu, 13158 Thai Le, and Dongwon Lee. 2023. A ship of theseus: Curious cases of paraphrasing in llm-generated texts. Solomon Ubani, Suleyman Olcay Polat, and Rodney Nielsen. 2023. Zeroshotdataaug: Generating and augmenting training data with chatgpt. Haohan Wang, Zeyi Huang, Hanlin Zhang, Yong Jae Lee, and Eric P Xing. 2022. Toward learning humanaligned cross-domain robust models by countering misaligned features. In Uncertainty in Artificial Intelligence, pages 2075–2084. PMLR. Lee Wei, Hsuan Huang Chi, Wei Chang Chien, Kuang Daniel Wu Ming, Ta Chuang Kun, An Yang Po, and Cheng Hsieh Chu. 2018. Effective quality assurance for data labels through crowdsourcing and domain expert collaboration. In Advances in Database Technology - EDBT 2018, Advances in Database Technology - EDBT, pages 646–649. OpenProceedings.org. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. Advances in neural information processing systems, 28. Jianing Zhou and Suma Bhat. 2020. 
Dynamic word recommendation to obtain diverse crowdsourced paraphrases of user utterances. In International Conference on Intelligent User Interfaces, Proceedings IUI, pages 55–66. A Ethical considerations Based on a thorough ethical assessment, performed on the basis of intra-institutional ethical guidelines and checklists tailored to the use of data and algorithms, we see no ethical concerns pertaining directly to the conduct of this research. We also ethically assessed our paraphrase validity crowdsourcing process from Appendix B via our intrainstitutional ethical guidelines and found no ethical concerns. In our study, we analyzed existing data or data generated using various LLMs. During our manual checking of the data in Section 3 we also ensured that the data contained no personal or offensive data. Albeit production of new data through LLMs bears several risks, such as introduction of biases, the small size of the produced dataset, sufficient for experimentation is, at the same time, insufficient for any major machine learning endeavors, where such biases could be transferred. We follow the license terms for all the models and datasets we used (such as the one required for the use of the LLaMA-2 model) – all models and datasets allow their use as part of research. A.1 CO2 Emission Related to Experiments Data collection via open-source LLMs was conducted using a private infrastructure, which has a carbon efficiency of 0.432 kg CO2/kWh. A cumulative of 100 hours of computation was performed on hardware of type A100 PCIe 40/80GB (TDP of 250W) for data collection. Model finetuning for both BERT and Mistral was conducted using a private infrastructure, which has a carbon efficiency of 0.432 kg CO2/kWh. A cumulative of 800 hours of computation was performed on hardware of type A100 PCIe 40/80GB (TDP of 250W) for data collection. Total emissions together are estimated to be 97.2 kgCO2 of which 0 percents were directly offset. We tried to reduce the generated emissions by using 4-bit quantization for LLMs and using a subset of test data for evaluation for Mistral finetuning, as inference is costly. Estimations were conducted using the MachineLearning Impact calculator presented in (Lacoste et al., 2019). B Paraphrase validity checking process For the process of checking the validity of the created paraphrases, we used our very own web app developed for this process. The users, who were the authors that also developed the app, were shown the seed samples and its label, from which LLM generated the paraphrases, and one particular paraphrase to validate. The authors/users all gave consent to the data collection process and had knowledge of how the data would be used. The instructions were "Please decide if the paraphrase has the same meaning as the seed sentence and if it adheres to the label of the seed sentence." The user was then able to either mark the paraphrase as valid or not, with an additional optional checkbox to label the paraphrase as ‘borderline case’ for possible re-visions. As the seed sentence changed only once in a while (we first showed all the paraphrases from one seed sentence) this significantly reduced the cognitive load on the annotator. The users/authors then discussed together the ‘borderline cases’ where the users were not sure about the validity of created paraphrases. 
C Full results of model performance on In this section we report full result of our experiments for each dataset, diversity incentive method 13159 20 News PROMPT TABOO CHAINING HINTS COMB ChatGPT 60.015.43 59.455.54 57.785.92 60.304.96 59.445.61 GPT4 61.382.80 65.442.58 62.303.84 65.432.95 60.583.85 Mistral 58.913.81 58.533.19 57.773.28 58.922.90 58.542.94 LLaMA-2 60.624.46 59.324.55 59.585.26 60.873.88 60.164.83 Platypus 61.953.36 61.083.37 60.243.62 61.022.41 59.353.10 (a) Results on the 20 News dataset. AG News PROMPT TABOO CHAINING HINTS COMB ChatGPT 79.452.37 79.322.46 78.142.68 80.432.36 78.742.96 GPT4 79.353.09 79.532.89 77.742.95 79.362.06 79.552.40 Mistral 83.381.75 83.261.83 82.792.43 83.401.62 79.542.61 LLaMA-2 81.083.19 81.833.23 81.213.35 81.563.21 79.654.15 Platypus 78.564.34 79.823.45 78.654.35 79.563.80 78.573.88 (b) Results on the AG News dataset. ATIS PROMPT TABOO CHAINING HINTS COMB ChatGPT 82.9411.06 76.4713.05 80.9412.84 85.217.73 79.4412.85 GPT4 76.068.78 74.617.90 74.309.80 76.478.76 74.137.99 Mistral 79.839.75 74.587.60 75.1011.15 80.329.21 75.318.43 LLaMA-2 82.885.36 78.206.31 81.116.03 83.355.53 79.345.73 Platypus 83.538.30 81.298.47 81.188.98 83.736.81 82.058.40 (c) Results on the ATIS dataset. FB PROMPT TABOO CHAINING HINTS COMB ChatGPT 83.102.32 81.702.06 82.252.19 83.111.51 80.741.54 GPT4 82.562.92 80.554.40 80.803.37 82.512.47 81.143.15 Mistral 79.183.12 77.984.14 78.724.10 79.443.68 77.672.81 LLaMA-2 79.604.04 79.344.12 79.442.67 80.582.67 78.753.69 Platypus 80.752.10 79.602.79 79.864.74 82.232.25 79.942.16 (d) Results on the FB dataset. SST-5 PROMPT TABOO CHAINING HINTS COMB ChatGPT 34.942.51 36.062.70 35.282.03 35.852.06 34.322.08 GPT4 33.702.09 34.361.96 33.931.97 33.881.77 33.881.47 Mistral 33.192.94 32.742.94 32.542.98 33.462.43 32.742.98 LLaMA-2 33.372.74 34.832.17 32.972.49 33.632.15 33.792.43 Platypus 33.892.69 33.922.13 33.412.54 34.242.23 34.112.13 (e) Results on the SST-5 dataset. Yelp PROMPT TABOO CHAINING HINTS COMB ChatGPT 43.962.02 44.862.01 43.812.85 44.171.99 43.831.70 GPT4 41.502.66 41.592.47 40.693.35 41.942.83 41.342.08 Mistral 39.432.83 40.082.76 38.942.21 39.692.62 39.473.63 LLaMA-2 43.003.41 43.072.82 42.092.70 42.722.82 42.273.33 Platypus 43.373.05 42.792.75 43.152.79 43.352.66 41.713.91 (f) Results on the Yelp dataset. Table 2: Performance of BERT-large classifier on the test split of each dataset after being trained 5 times for each of the repeated 5 data collection rounds. We report the mean performance and standard deviation. The hints method generally increases mean performance and stability of performance when compared to baseline prompt method. and LLM. The results for BERT-large are in Table 2 and for Mistral in Table 3. Visualizations can be found in Appendix J. The specific open-source LLMs used for data collection were LLaMA-270B-instruct 2 with 70 bilion parameters, Mistral2https://huggingface.co/meta-llama/Llama-2-70b-chat-hf 7B-instruct 3 with 7 bilion parameters and Platypus70B-instruct 4 with 70 bilion parameters. 3https://huggingface.co/mistralai/Mistral-7B-Instructv0.1 4https://huggingface.co/garage-bAInd/Platypus2-70Binstruct 13160 20 News PROMPT TABOO CHAINING HINTS COMB ChatGPT 74.054.13 74.024.75 75.053.25 76.051.72 73.302.03 GPT4 75.253.70 75.422.49 73.382.43 77.522.74 73.418.65 Mistral 72.870.61 74.200.29 73.330.77 73.890.60 73.821.19 LLaMA-2 75.212.37 75.645.18 71.216.69 77.191.81 76.182.64 Platypus 73.433.48 76.902.93 72.435.71 77.452.21 76.303.31 (a) Results for 20 News dataset. 
AG News PROMPT TABOO CHAINING HINTS COMB ChatGPT 82.095.48 84.273.16 82.314.78 83.493.75 82.343.07 GPT4 81.344.88 81.393.24 82.785.73 84.951.29 81.804.00 Mistral 85.961.78 86.960.93 85.711.03 87.140.76 85.961.82 LLaMA-2 81.646.02 83.283.41 85.093.24 83.091.29 84.552.68 Platypus 82.482.38 85.272.51 85.372.05 85.230.87 82.613.91 (b) Results for AG News dataset. ATIS PROMPT TABOO CHAINING HINTS COMB ChatGPT 89.918.74 88.869.67 91.058.78 94.043.15 89.213.57 GPT4 74.219.06 77.3711.75 77.3710.83 82.895.20 81.586.96 Mistral 89.566.67 87.117.69 87.894.88 89.304.85 85.799.06 LLaMA-2 82.8912.20 75.0010.98 86.329.94 85.355.01 81.5811.94 Platypus 87.119.47 85.009.62 88.426.02 89.913.58 87.376.58 (c) Results for ATIS dataset. FB PROMPT TABOO CHAINING HINTS COMB ChatGPT 86.602.90 87.021.78 85.463.00 87.971.47 86.063.84 GPT4 87.931.34 86.524.74 83.659.85 88.122.20 85.005.11 Mistral 79.835.58 79.185.20 79.084.28 80.521.25 81.093.16 LLaMA-2 79.861.49 82.341.21 79.223.08 84.082.68 81.184.87 Platypus 83.812.85 79.684.76 81.742.21 85.832.30 84.541.72 (d) Results for FB dataset. SST-5 PROMPT TABOO CHAINING HINTS COMB ChatGPT 49.925.77 53.392.77 49.955.52 51.161.83 51.834.73 GPT4 48.334.81 53.034.77 54.571.91 51.793.15 52.764.18 Mistral 48.624.17 47.332.19 51.863.08 50.353.81 46.854.55 LLaMA-2 47.723.97 51.132.39 48.425.06 53.393.00 51.313.12 Platypus 50.595.92 51.863.12 50.322.60 50.902.34 50.953.60 (e) Results for SST-5 dataset. Yelp PROMPT TABOO CHAINING HINTS COMB ChatGPT 54.002.42 52.523.43 52.203.34 54.012.37 53.972.86 GPT4 53.713.27 53.783.97 53.012.35 53.801.85 53.730.76 Mistral 53.372.10 54.163.28 53.183.02 55.281.23 54.481.11 LLaMA-2 52.244.42 53.322.22 54.982.33 55.402.77 53.933.16 Platypus 54.593.04 53.093.26 53.472.44 54.671.52 54.012.21 (f) Results for Yelp dataset. Table 3: Performance of Mistral classifier on a subset of the test split of each dataset after being trained 5 times for each of the repeated 5 data collection rounds. We report the mean performance and standard deviation. The hints method generally increases mean performance and stability of performance when compared to baseline prompt method. D BERT-large finetuning details We used the bert-large-uncased version of the model from Huggingface and the best working hyperparameters from our hyperparameter search were batch size of 32, classifier dropout set to 0.2, used the AdamW optimizer with learning rate set to 1e-5 and trained for 80 epochs. We evaluated the model during training after each 10 epochs and saved its performance. We reported the best test performance for each of the models during training. 13161 E Mistral finetuning details We used the Mistral-7B-v0.1 5 version of the model from Huggingface. For finetuning, we used the PEFT method QLoRA (Dettmers et al., 2023) in 4-bit setting with r=16 and alpha=16. We finetuned the model for 20 epochs, used batch size of 32, learning rate of 2e-5, dropout of 0.1, used halfprecision floating-point format (fp16), warmup ratio of 0.1, maximum grad. norm of 0.3, maximum sequence length of 128, weight decay set to 0.01 and used 8-bit Adam optimization. We evaluated the model on a subset of the test dataset (10% of the original dataset) due to the lengthy inference times, similar to previous work (Chang and Jia, 2023; Li and Qiu, 2023; Gao et al., 2021; Köksal et al., 2023) on all of the datasets except for ATIS. We did, however, use different splits from the test part of the datasets to mitigate the effect of sample bias. 
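The Appendix E hyperparameters map directly onto a standard HuggingFace Transformers + PEFT setup. The following is a hedged sketch of such a configuration; the authors' exact trainer, LoRA target modules, dataset preprocessing, and padding handling are not stated, so those parts (and the num_labels value) are assumptions.

```python
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          BitsAndBytesConfig, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "mistralai/Mistral-7B-v0.1"

# 4-bit loading as used by QLoRA (Dettmers et al., 2023).
bnb_config = BitsAndBytesConfig(load_in_4bit=True,
                                bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Mistral has no pad token by default

model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=6, quantization_config=bnb_config)  # num_labels is dataset-specific
model.config.pad_token_id = tokenizer.pad_token_id
model = prepare_model_for_kbit_training(model)

# LoRA settings from Appendix E: r=16, alpha=16, dropout=0.1.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=16, lora_dropout=0.1, task_type="SEQ_CLS"))

# Remaining hyperparameters from Appendix E.
args = TrainingArguments(
    output_dir="mistral-cls", num_train_epochs=20,
    per_device_train_batch_size=32, learning_rate=2e-5, weight_decay=0.01,
    warmup_ratio=0.1, max_grad_norm=0.3, fp16=True, optim="adamw_bnb_8bit")

# train_ds / eval_ds are assumed to be tokenized datasets truncated to 128 tokens:
# Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds).train()
```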
F Dataset details

As we did not use all of the labels and samples in each of the datasets, we list our setup here. We mostly used labels that appear in the datasets in similar quantities to deal with the dataset imbalance issue. All used datasets are in English. For the 20 News dataset we used samples with the labels politics, wellness, entertainment, travel, style and beauty, and parenting. For the AG News, SST-5 and Yelp datasets we used all the samples. For the ATIS dataset we used samples with the labels atis_abbreviation, atis_aircraft, atis_airfare, atis_airline, atis_flight, atis_flight_time, atis_ground_service and atis_quantity. For the FB dataset we used samples with the labels get_directions, get_distance, get_estimated_arrival, get_estimated_departure, get_estimated_duration, get_info_road_condition, get_info_route, get_info_traffic, get_location and update_directions.

G Number of collected samples per dataset

For the 20 News dataset we used 36 seed samples (6 seeds per label with 6 labels in total) randomly sampled for each data collection round, resulting in 180 samples collected for each round and 396 samples in the final dataset (36 seed samples + 180 samples from the 1st round + 180 samples from the 2nd round). The entire test split we used for BERT finetuning had 11,751 samples. For the AG News dataset we used 24 seed samples randomly sampled for each data collection round, resulting in 120 samples collected for each round and 264 samples in the final dataset. The entire test split we used for BERT finetuning had 7,600 samples. For the ATIS dataset we used 48 seed samples randomly sampled for each data collection round, resulting in 240 samples collected for each round and 528 samples in the final dataset. The entire test split we used for BERT finetuning had 763 samples. For the FB dataset we used 60 seed samples randomly sampled for each data collection round, resulting in 300 samples collected for each round and 660 samples in the final dataset. The entire test split we used for BERT finetuning had 5,645 samples. For the SST-5 dataset we used 30 seed samples randomly sampled for each data collection round, resulting in 150 samples collected for each round and 330 samples in the final dataset. The entire test split we used for BERT finetuning had 2,210 samples. For the Yelp dataset we used 30 seed samples randomly sampled for each data collection round, resulting in 150 samples collected for each round and 330 samples in the final dataset. The entire test split we used for BERT finetuning had 13,895 samples.

H Templates and examples of diversity incentive prompts used in LLMs

The prompt and chaining method: Rephrase an original question or statement 3 times. Original phrase: seed_phrase.

Prompt example:
Rephrase an original question or statement 3 times. Original phrase: "tell me the fastest way to get home".

Chaining example:
Rephrase an original question or statement 3 times. Original phrase: "please share the most rapid means of getting back to my dwelling".

The taboo method: Rephrase an original question or statement 3 times. Original phrase: seed_phrase. Don't use the words "word_1", "word_2" or "word_3" in your responses.

Taboo example:
Rephrase an original question or statement 3 times. Original phrase: "tell me the fastest way to get home".
Don't use the words "arrive", "construction" or "house" in your responses.

The hints method: Rephrase an original question or statement 3 times. Original phrase: seed_phrase. ### Example paraphrases: phrase_1, phrase_2, phrase_3 ###

Hints example:
Rephrase an original question or statement 3 times. Original phrase: "tell me the fastest way to get home".
### Example paraphrases:
"please share the most rapid means of getting back to my dwelling".
"inform me of the quickest route to reach my house".
"what is the swiftest method to arrive at my residence".
###

I Visualization of the effect of diversity incentive methods on lexical diversity

The effects of diversity incentive methods on lexical diversity can be found in Figure 2 for the no. of unique 3-grams and Figure 3 for the no. of unique words.

J Visualization of the effect of diversity incentive methods on model performance

The effects of diversity incentive methods on model performance can be found in Figure 4 for BERT-large and Figure 5 for Mistral-7B.

K Results of the ablation study

In this section we list visualizations of the results for ablated versions of the different diversity incentive methods: lexical diversity in Figures 6 and 7, and accuracy of models trained on data collected this way in Figures 8 and 9.

Figure 2: Results of diversity incentive methods on the no. of collected unique 3-grams per dataset and LLM combination. The taboo method generally increases the no. of collected unique 3-grams, while the chaining and hints methods have random effects.

Figure 3: Results of diversity incentive methods on the no. of collected unique words per dataset and LLM combination. The taboo method generally increases the no. of collected unique words, while the chaining and hints methods have random effects.

Figure 4: The accuracy on test data of a BERT-large classifier trained on data collected via different diversity incentive methods using various LLMs. The best performing method is the hints method, which generally increases mean performance of the models and stability of performance. The taboo method has a close to random influence on model performance, while the chaining method generally decreases model performance.

Figure 5: The accuracy of Mistral finetuned for classification on a subset of test data. The model was trained on data collected via different diversity incentive methods using various LLMs. The best performing method is the hints method, which generally increases mean performance of the models and stability of performance.

Figure 6: The change in the no. of collected unique 3-grams when comparing ablated methods with non-ablated ones. The figure displays the change in diversity of the ablated version of each diversity incentive method vs. the non-ablated version. The ablated version of the taboo method performs generally worse, indicating that tabooing the most significant words increases the diversity of texts collected via LLMs.

Figure 7: The change in the no. of collected unique words when comparing ablated methods with non-ablated ones. The figure displays the change in diversity of the ablated version of each diversity incentive method vs. the non-ablated version. The ablated version of the taboo method performs generally worse, indicating that tabooing the most significant words increases the diversity of texts collected via LLMs.
Figure 8: The change in accuracy of BERT-large trained on data collected via ablated and non-ablated diversity incentive methods. The figure displays the change of accuracy of the ablated version of the diversity incentive methods vs. the non-ablated version. The ablated version of the hints method performs generally worse, indicating that the inclusion of previous examples in the data collection yields better data.

Figure 9: The change in accuracy of Mistral trained on data collected via ablated and non-ablated diversity incentive methods. The figure displays the change of accuracy of the ablated version of the diversity incentive methods vs. the non-ablated version. The ablated version of the hints method performs generally worse, indicating that the inclusion of previous examples in the data collection yields better data.
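As a concrete companion to the templates listed in Appendix H, the following sketch shows how the three prompt variants could be assembled from a seed phrase. The helper names and formatting details are illustrative and are not taken from the authors' released code.

```python
# Hypothetical helpers that assemble the Appendix H prompt templates;
# names and exact formatting are illustrative, not the authors' implementation.

def prompt_template(seed_phrase: str) -> str:
    # Used for both the baseline prompt method and chaining (where the seed
    # is replaced by a paraphrase collected in the previous round).
    return (f'Rephrase an original question or statement 3 times. '
            f'Original phrase: "{seed_phrase}".')

def taboo_template(seed_phrase: str, taboo_words: list[str]) -> str:
    quoted = ", ".join(f'"{w}"' for w in taboo_words[:-1]) + f' or "{taboo_words[-1]}"'
    return (prompt_template(seed_phrase) +
            f" Don't use the words {quoted} in your responses.")

def hints_template(seed_phrase: str, example_paraphrases: list[str]) -> str:
    examples = "\n".join(f'"{p}".' for p in example_paraphrases)
    return (prompt_template(seed_phrase) +
            f"\n### Example paraphrases:\n{examples}\n###")

if __name__ == "__main__":
    seed = "tell me the fastest way to get home"
    print(taboo_template(seed, ["arrive", "construction", "house"]))
    print(hints_template(seed, [
        "please share the most rapid means of getting back to my dwelling",
        "inform me of the quickest route to reach my house",
    ]))
```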
2024
710
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13172–13184 August 11-16, 2024 ©2024 Association for Computational Linguistics

Beyond Orthography: Automatic Recovery of Short Vowels and Dialectal Sounds in Arabic

Yassine El Kheir∗, Hamdy Mubarak, Ahmed Ali, Shammur Absar Chowdhury∗+
{yelkheir, hmubarak, amali, shchowdhury}@hbku.edu.qa
∗These authors contributed equally to this work. + Corresponding author.

Abstract

This paper presents a novel Dialectal Sound and Vowelization Recovery framework, designed to recognize borrowed and dialectal sounds within phonologically diverse and dialect-rich languages, extending beyond their standard orthographic sound sets. The proposed framework utilizes a quantized sequence of input with(out) continuous pretrained self-supervised representations. We show the efficacy of the pipeline using limited data for Arabic, a dialect-rich language containing more than 22 major dialects. Phonetically correct transcribed speech resources for dialectal Arabic are scarce. Therefore, we introduce ArabVoice15 (https://github.com/QCRIVoice/ACL-DVR), a first-of-its-kind curated test set featuring 5 hours of dialectal speech across 15 Arab countries, with phonetically accurate transcriptions, including borrowed and dialect-specific sounds. We describe in detail the annotation guidelines along with an analysis of the dialectal confusion pairs. Our extensive evaluation includes both subjective (human perception tests) and objective measures. Our empirical results, reported with three test sets, show that with only one and a half hours of training data, our model improves character error rate by ≈7% on ArabVoice15 compared to the baseline.

1 Introduction

The self-supervised learning (SSL) paradigm has transformed speech research and technology, achieving remarkable performance (Baevski et al., 2020; Chen et al., 2022) while reducing the dependency on extensively annotated datasets (Radford et al., 2023). SSL models excel at discerning the underlying acoustic properties at both the frame and utterance level (Pasad et al., 2021, 2023; Chowdhury et al., 2023) irrespective of language. Phonetic information is salient and preserved even when these continuous representations are mapped to a finite set of codes via vector quantization (Hsu et al., 2021a; Sicherman and Adi, 2023; Wells et al., 2022; Kheir et al., 2024). This allows the learning paradigm to leverage unlabeled data to discover units that capture meaningful phonetic contrasts. Studies leveraging insights from acoustic unit discovery (Park and Glass, 2008; Versteegh et al., 2015; Dunbar et al., 2017; Eloff et al., 2019; Van Niekerk et al., 2020), unsupervised speech recognition (Baevski et al., 2021a; Da-Rong Liu and shan Lee, 2018; Chen et al., 2019; Da-rong Liu and yi Lee, 2022; Baevski et al., 2021b), and phoneme segmentation (Kreuk et al., 2020; Bhati et al., 2022; Dunbar et al., 2017; Versteegh et al., 2015) have utilized quantized discrete units for various purposes. These include (i) pretraining the SSL model (Baevski et al., 2020; Hsu et al., 2021a), (ii) employing acoustic unit discovery as a training objective (van Niekerk et al., 2020), and (iii) utilizing discrete labels for training phoneme recognition and automatic speech recognition (Chang et al., 2023; Da-rong Liu and yi Lee, 2022; Da-Rong Liu and shan Lee, 2018; Sukhadia and Chowdhury, 2024).
Inspired by previous research, we employ SSL representations and vector quantization to recognize acoustic units in phonologically diverse spoken dialects, extending beyond their standard orthographic sound sets. We introduce a simple yet potent network leveraging SSL and a discrete codebook to recognize these non-orthographic dialectal and borrowed sounds with minimal labeled data. Arabic is an appropriate language choice for the task. The language has a rich tapestry of dialects, each with its unique characteristics in phonology, morphology, syntax, and lexicon (Ali et al., 2021). These dialects2 differ not only among themselves 2There are 22 Arab countries, and typically, there is more than one dialect spoken in each Arab country (ex: rural versus urban areas) 13172 but also when compared to Modern Standard Arabic (MSA). While MSA prevails in official and educational domains, Dialectal Arabic (DA) serves as the means for daily communication. The diversity in pronunciation and phoneme sets for DA goes beyond standardized MSA sound sets. Moreover, to add to the challenges, DA follows no standard orthography. Therefore, despite the abundance of DA speech data in online platforms, accurately (phonetically correct) transcribed resources are scarce, categorizing DA among the low-resource languages. To bridge this gap, we introduce the Arabic “Dialectal Sound and Vowelization Recovery” (DSVR) framework. The proposed framework exploits the frame-level SSL embeddings and quantizes them to create a handful of discrete labels using k-means model. These discrete labels are then fed (can be in combination with SSL embeddings) as input to a transformer-based dialectal unit and vowel recognition (DVR) model. We show its efficacy for (a) dialectal and borrowed sound recovery; and (b) vowelization restoration capabilities with only 1 hour 30 minutes of training data. We introduced Arabic dialectal test set – “ArabVoice15”, a collection of 5 hours of dialectal speech and verbatim transcription with recovered dialectal and borrowed sounds from 15 Arab countries. For vowelization restoration, we tested on 1 hour of speech data, sampled from CommonVoice-Ar (Ardila et al., 2019), transcribed by restoring short vowels. Our paper describes the phonetic rules adopted, special sounds considered along with detailed annotation guidelines for designing these test sets. Furthermore, we evaluate the quality of the intermediate discrete labels using human perceptual evaluation, in addition to other purity and clustering-based measures. We observed that these discrete labels can capture speaker-invariant, distinct acoustic, and linguistic information while preserving the temporal information. Consequently, encapsulating the discriminate acoustic unit properties, which can be used to recover dialectal missing sounds. Our empirical results suggest that DSVR can exploit unlabeled data to design the codebook and then with a small amount of annotated data, a unit recognizer can be trained. Our contribution involves: (i) Proposed Arabic Dialectal Sound and Vowelization Recovery (DSVR) framework to recognize dialectal units and restore short vowels; (ii) Developed annotation guidelines for the verbatim dialectal transcription; (iii) Introduced and benchmark ArabVoice15 test set – a collection of dialectal speech and phonetically correct verbatim transcription of 5 hours of data. (iv) Released a small subset of CommomVoice - Arabic (Ardila et al., 2019) data with restored short vowels, dialectal and borrowed sounds. 
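To make the quantization stage of the proposed framework concrete, the sketch below shows one way to extract frame-level SSL representations and map them to a small set of discrete k-means codes. It is only an illustration: the checkpoint (the XLSR-53 model linked in a later footnote, rather than the larger XLS-R 1B variant the paper reports using), the choice of the final hidden layer, and the placeholder audio paths are assumptions, not the authors' exact configuration.

```python
# Sketch: frame-level SSL features -> k-means codebook -> discrete codes.
# Checkpoint, layer choice, and audio paths are assumptions for illustration.
import torch
import torchaudio
from sklearn.cluster import MiniBatchKMeans
from transformers import AutoFeatureExtractor, AutoModel

CKPT = "facebook/wav2vec2-large-xlsr-53"  # the paper uses the larger XLS-R (1B) variant
extractor = AutoFeatureExtractor.from_pretrained(CKPT)
model = AutoModel.from_pretrained(CKPT).eval()

def frame_features(wav_path: str) -> torch.Tensor:
    """Return one frame vector per ~20ms step from the final hidden layer."""
    wav, sr = torchaudio.load(wav_path)
    wav = torchaudio.functional.resample(wav, sr, 16_000).mean(dim=0)
    inputs = extractor(wav.numpy(), sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.last_hidden_state.squeeze(0)  # (num_frames, hidden_dim)

# Fit a k=256 codebook on a pool of frames subsampled from many utterances
# (two placeholder files here), then assign a discrete code to every frame.
pool = torch.cat([frame_features(p) for p in ["utt1.wav", "utt2.wav"]])
codebook = MiniBatchKMeans(n_clusters=256, batch_size=1024, random_state=0)
codebook.fit(pool.numpy())
codes = codebook.predict(frame_features("utt1.wav").numpy())
print(codes[:20])  # the discrete label sequence fed to the DVR model
```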
This study addresses the crucial challenge of identifying and understanding these phonetic intricacies, acknowledging their essential role in improving the performance of speech processing applications like dialectal Text-to-Speech (TTS) and ComputerAssisted Pronunciation Training applications. To the best of our knowledge, this study is the first to attempt to automatically restore vowels, borrowed and dialectal sounds for rich spoken dialectal Arabic language with very limited amount of data. Moreover, the study also introduce the very first dialectal testset with phonetically correct transcription representation. 2 Arabic Sounds The exploration of phonotactic variations across Arabic dialects, including MSA and other regional dialects offers a rich field of study within the domain of Arabic linguistics. These variations are not merely lexical, but phonetic and in many cases deeply embedded in the phonological rules that dictate the permissible combinations and sequences of sounds within each dialect (Biadsy et al., 2009). 2.1 Related Studies Limited research investigated dialectal sounds in Arabic transcribed speech. Vergyri and Kirchhoff (2004) deployed an EM algorithm to automatically optimize the optimal diacritic using acoustic and morphological information combination. Al Hanai and Glass (2014) employed automated text-based diacritic restoration models to add diacritics to speech transcriptions and to train speech recognition systems with diacritics. However, the effectiveness of text-based diacritic restoration models for speech applications is questionable for several reasons, as demonstrated in Aldarmaki and Ghannam (2023), they often fail to accurately capture the diacritics uttered by speakers due to the nature of speech; hesitation, unconventional grammar, and dialectal variations. This leads to a deviation from rule-based diacritics. Recently, Shatnawi et al. (2023) developed a joint text-speech model to in13173 corporate the corresponding speech signal into the text based diacritization model. Grapheme to Phoneme (G2P) has been studied thoroughly by many researchers across multiple languages. Recent approaches in G2P include data-driven and multilingual (Yu et al., 2020; Garg et al., 2024) mapping from grapheme sequence to phoneme sequence. However, previous work in Arabic G2P is comprised of two steps: (i) Grapheme to vowelized-grapheme (G2V) to restore the missing short vowels and (ii) Vowelizedgrapheme to phoneme sequence (V2P). The first step is often statistical and deploys techniques like sequence-to-sequence; for example studies like Abdelali et al. (2016); Obeid et al. (2020) are used widely for restoring the missing vowels in Arabic. The second step is relatively one-to-one and can be potentially hand-crafted rules for MSA as well as various dialects, refer to Biadsy et al. (2009); Ali et al. (2014) for more details. MSA Arabic speech recognition phoneme lexicon can be found here3 The distinction between MSA and regional dialects is nuanced; viewing them as separate is oversimplified. Arabs perceive them as interconnected, leading to diglossia, where MSA is for formal contexts and dialects for informal ones, yet with significant overlap and blending (Chowdhury et al., 2020a). Chowdhury et al. (2020b) studied dialectal code-switching in the manually annotated Egyptian corpus. The corpus was annotated for both MSA and Egyptian dialect labels per token, considering both the linguistic and the acoustic cues. 
The findings indicate the complex overlapping characteristics of the dialectal sound units showing roughly 2.6K Egyptian sounding words with respect to 9.3K MSA and 2.3K mix of both. 2.2 MSA and Dialectal Phonlological Variations Arabic dialects exhibit phonological differences when compared to MSA, these differences might be noted across various aspects of pronunciation and phonology, such as consonants, vowels, and diphthongs. It’s suggested that Arabic generally encompasses around 28 consonants, alongside three short vowels, three long vowels, though these numbers could vary slightly depending on the dialect in question. The consonant pronunciation of H [θ], X [D],   [DQ], h. [dý],  [dQ], and † [q] cover most of the variations across Arabic dialects. Here are 3https://catalog.ldc.upenn.edu/LDC2017L01 some examples of phones that vary between MSA and various Arabic dialects. • Interdental Consonants: In particular H [θ]/ X [D] found in MSA are pronounced differently. For example, in Egyptian Arabic, they are often pronounced as € [s]. • The voiceless stop constant † [q] is a good example across Arabic dialects, In many cases, it will be pronounced as glottal stop Z [P] in Egyptian dialect and voiced velar h. [dý] in Gulf and Yemeni dialects. • Long and short vowels might exhibit a reduction in duration or even drop in duration in various dialects. In some dialects, the difference between long and short vowels may be subtle to notice. • The difference in stress between Arabic dialects can lead to different meanings. The phonological differences and examples mentioned above do not cover all variations but highlight several distinctions between Arabic dialects and MSA. A depiction of certain MSA sound variations is presented in Appendix A.1. 3 Methodology Figure 1 gives an overview of our proposed Dialectal Sounds and Vowelization Restoration Framework. The goal of the pipeline is to recover (verbatim) dialectal sound and short vowel units, using frame-level representation. Given an input speech signal X = [x1, x2, · · · , xT ] of T frames, the frame-level representation (Z) is first extracted from a multilingual SSL pretrained model. We subsampled frame-level vectors ( eZ ⊂Z) to train a simple Vector Quantization (VQ) model using k-means for getting a Codebook Ck, with k categorical variables. Each cluster, in the codebook, is then associated with a code Qk i and a centroid vector Gk i . Using the Ck codebook, we infer the discrete sequences codes ˆZ corresponding to the input Z. ˆZ is the input of our Dialectal Units and Vowel Recognition (DVR) module. 3.1 Pretrained Speech Encoder The XLS-R4 model is a multilingual pre-trained SSL model following the same architecture as wav2vec2.0 (Baevski et al., 2020). It includes a CNN-based encoder network to encode the raw 4https://huggingface.co/facebook/wav2vec2-large-xlsr53 13174 audio sample and a transformer-based context network to build context representations over the entire latent speech representation. The encoder network consists of 7 blocks of temporal convolution layers with 512 channels, and the convolutions in each block have strides and kernel sizes that compress about 25ms of 16kHz audio every 20ms. The context network consists of 24 blocks with model dimension 1024, inner dimension 4096, and 16 attention heads. The XLS-R model has been pre-trained on around 436, 000 hours of speech across 128 languages. 
This diverse dataset includes parliamentary speech (372, 000 hours in 23 European languages), read speech from Multilingual Librispeech (44, 000 hours in 8 European languages), Common Voice (7, 000 hours in 60 languages), YouTube speech from the VoxLingua107 corpus (6, 600 hours in 107 languages), and conversational telephone speech from the BABEL corpus (≈ 1, 000 hours in 17 African and Asian languages). We opt for the large XLS-R (1B parameters). Our preliminary analysis revealed limitation in the XLS-R in differentiating between acoustic sounds, such as X [d]/  [dQ] and H [t]/   [tQ] present in MSA and DA. Consequently, we primed the model towards Arabic sounds by finetuning with 13 hours clean avaliable MSA data (Ardila et al., 2019) for ASR task. We restricted the training to 5 epochs to prevent the risk of catastrophic forgetting of the pretrained representation (Goodfellow et al., 2013). 3.2 Vector Quantization Vector Quantization (Makhoul et al., 1985; Baevski et al., 2020) is a widely used technique for approximating vectors or frame-level embeddings through a fixed codebook size. In our Vector Quantization (VQ) modules (see Figure 1), we pass forward a sequence of continuous feature vectors Z = {z1, z2, . . . , zT } and then assign each zt to its nearest neighbor in the trained codebook, Ck. In other words, each zt is replaced with the code Qk i ∈Ck assigned to the centroid Gk i . The resultant discrete labels are quantized sequence ˆZ = {ˆz1, ˆz2, . . . , ˆzT }. These labels are expected to facilitate better proninciation learning and incorporate distinctive phonetic information in the subsequent layers. Training the Codebook For quantization, we utilized the k-means clustering model. We selected a random subset of frame-level representation for training the cluster model. Moreover, to select wide varieties of sound unit, we forced-aligned the available/automatic transcription of the datasets (see Section 5.1) with a GMM-HMM based ASR models. Using the timestamps, we then select SSL frame representations that aligned with wide varieties of sound labels.5 We trained the codebook for different k = {128, 256, 512} 3.3 Dialectal Units and Vowel Recognition (DVR) Model We explored two variants of DVR – discrete and joint Model (as seen in Figure 2). The discrete DVR takes only the discrete ˆZ labels from the VQ as input, where as the joint module concatenate both the ˆZ and Z inside the subsequent layer. The resultant embeddings (for both model) are then passed to the transformer layers and the head feedforward layer. The DVR model is optimized with character recognition objective to identify arabic units. 3.4 Baseline As baseline, we used the frozen frame-level representation from the XLS-R model to pass to the feedforward layer followed by the transformers and output head. The architecture uses similar encoder as the DVR model (see Figure 2 Baseline). For brevity, we reported with the results of the second architecture (SSL frame-level representation with transformer-based encoder) as the baseline of the paper. 4 ArabVoice15 Dataset Spoken DA remains a low-resource language primarily due to the scarcity of transcription that can faithfully capture the diverse regional and borrowed sounds in the standard written format. Such lack of data posses significant challenge for speech and linguistic research and evaluation. In this study, we address this challenge by designing and developing ArabVoice15 test set. 
Furthermore, we have also enhanced a subset of the existing Arabic Commonvoice (Ardila et al., 2019), Ar:CVR dataset with restored vowels, borrowed and dialectal sounds. In the following sections, we will discuss the datasets, preprocessing steps along with in detail annotation guidelines. ArabVoice15 is a collection of 5 hours of speech utterances randomly selected from testset of ADI17 510k sample frames for each sound label. 13175 Figure 1: Proposed Arabic Dialectal Sound and Vowelization Recovery (DSVR) Framework Figure 2: Baseline and DVR – Discrete and Joint Model (Ali et al., 2019) dataset, widely used for dialect identification task. For the ArabVoice15, we selected a total of 2500 utterance, ≈146(±3.6) utterance from each of the 15 Arab countries including: Algeria (ALG), Egypt (EGY), Iraq (IRA), Jordan (JOR), Saudi Arabia (KSA), Kuwait (KUW), Lebanon (LEB), Libya (LIB), Morocco (MOR), Palestine (PAL), Qatar (QAT), Sudan (SUD), Syria (SYR), United Arab Emirates (UAE), and Yemen (YEM). The average utterance duration: 7-8 seconds. As for Ar : CV R, we randomly extracted 21.38 hours from the Ar:CV trainset, which we then mannually annotated at both verbatim and vowelized level (test ≈1hr). Data Verbatim Pre-Processing We present a set of rules employed for data normalization, aiming to reduce annotators’ tasks through a rule-based phonemic letter-to-sound approach in Arabic, as detailed in (Al-Ghamdi et al., 2004). For vowelization, we initially applied diacritization (aka vowelization or vowel restoration) module present in the Farasa tool (Abdelali et al., 2016). We then applied the following rule-based phonemic letterto-sound function to our dataset. This step also removed any Arabic letters that are not traditionally pronounced in spoken conversation. • For @ [a:] : (i) If it appears within a word (not at the beginning) and is followed by two consonants, we delete it. For example, H. AJºË@ I. J» [ktb a:lktb] becomes H. AJºË I. J» [ktb lktb]. (ii) If it occurs at the beginning in the form of the definite article È@, we replace it with [Pa]. For example, ÕΪÖÏ@ [a:lmQlm/] becomes ÕΪÖÏ Z [PalmQlm]. • For È [l] : We removed the Shamsi (Sun) [l], that refers to [l] in È@ followed by a Sun conso13176 Dataset Source of Data Train (#hrs) Test (#hrs) Annotated with Ar:CVR+ Subset from Arabic Common Voice (Ardila et al., 2019) Train split 1 hr (∗total 19 hrs) 1 hr Restored short vowels, dialectal and borrowed sounds AR:TTS-data Subset collected from available test-to-speech speech corpus (2 speakers, one from Egypt and Levantine region) (Abdelali et al., 2022, 2024; Dalvi et al., 2024) 30 mins – – EgyAlj in-house, source Aljazeera Arabic channel, containing MSA and Egy content – 1.8 hrs Semi-supervised transcription, manually restored short vowels, dialectal and borrowed sounds. ArabVoice15+ A small subset for ADI17 (Ali et al., 2019) test set – 5 hrs Transcribed with dialectal and borrowed sound in consideration Table 1: Train and Test dataset used for Dialectal Units and Vowel Recognition (DVR) model. ∗present total hours of data available and used to show the effect of training data size. + test data will be made available to the public. nant6 ¡¢ ’’ ‚ƒ PP XYJ JË. For example: àAÔgQË@ [a:lrèman] becomes àAÔgP@ [a:rèman] • For @, we replaced it wherever it occurred in the text with @Z [Pa:]. • For Hamza shapes ( ø @ ð @ Z), we normalized them to Z [P]. • For ø @, we normalized them to @ [a:/]. 
• For Tanwin diacritics (@ @ @ [/un/, /in/, /an/]) at the end of a phrase, we replaced it with a short vowel, and elsewhere, we turned it into à @, à @, à@ [/un/, /in/, /an/] to match the typical verbatim sounds. Annotation Guideline We gave extensive training to an expert transcriber, a native speaker from Egypt, to provide the written form for each word and its verbatim transcription. For example, if the word is Õ Î ¯ [qalam] (pen), and the speaker said Õ Î ¿ [kalam], then the transcriber writes [qalam/kalam]. This is the summary of the annotation guidelines: • For sounds that are not in MSA and have been borrowed from foreign languages, the following special letters7 are used: 6In Arabic grammar, there are two categories of letters: "sun letters" éJ ‚Ò ‚Ë@ ¬ðQmÌ'@ and "moon letters" éK QÒ®Ë@ ¬ðQmÌ'@. These categories affect the pronunciation of the Arabic definite article È@ (al-). Sun letters are those Arabic letters that cause assimilation ÐA «XB @ of the definite article È@ (al-) when they are prefixed to nouns, meaning the "l" sound of "al-" merges with the initial consonant of the noun. The assimilation occurs in pronunciation, but not in writing. 7The special letters used in the annotation process do not belong to the Arabic alphabet; instead, we borrowed them – h [g] as in the word Ég. ñk. “google” which is written as Ég. ñk. [ju:jl] / Égñk [gu:gl]. – ¬ [v] as in the word ñK YJ ¯ “video” which is written asñK YJ ¯ [fi:dyu:] /ñK YJ ¯ [vi:dyu:]. – H [p] as in the word ø @Q.ƒ@ “spray” which is written as ø @Q.ƒ [sbra:y] / ø @Qƒ [spra:y]. • For dialectal sounds that are missed in MSA, the following special letters are used: – À (Gulf /Qaf/) as in the word ÈAÆ« which is written as ÈA®« / ÈAÆ«. – The Egyptian/Syrian/Lebanese † [q] is pronounced mostly as Z [P] as in ÈA¯ [qa:l] / È@Z [Pa:l]. –   (Egyptian/Lebanese /Z/) as in the wordQê ¢J K. is written as Qê ¢J K. / Qê ¢J K.. There are few words with special spellings that do not precisely reflect their pronunciation. In these cases, the transcriber writes both, as in the word @ Yë [hadha] / @ XAë (/ha:dha/). Numbers and some special symbols (ex: the percentage sign %) are written in letters and are being judged according to speakers’ pronunciation. Quality Control: Detection of possible annotation errors was done automatically and doubtful cases were returned to the transcriber for review. In addition, a manual inspection of random sentences (10%) from each file was performed. Any file below 90% accuracy was returned for full correction. from Farsi sharing similar Arabic shapes, these letters were employed to represent distinct dialectal sounds. 13177 5 Experimental Design 5.1 Training Datasets and Resources Datasets: Unspervised Codebook Generation To train the codebook, we randomly selected utterances from publicly available resources. For Arabic sounds, we opt for utterances from official CommonVoice train set along with Arabic TTS data. Moreover, to add borrowed/special sounds missing in MSA phonetic set (e.g., /g, v, p/), we included publicly available English datasets like LibriSpeech (Panayotov et al., 2015), and TIMIT (Garofolo et al., 1993). For the subsampling process, we opt for hybrid ASR systems8 for Arabic and Montreal Forced-Aligner9 for the English. Datasets: Spervised DVR Model To train the DVR model, we opt for a small training dataset to showcase our the efficacy of our proposed framework in low-resource setting. The details of dataset used for DVR is presented in Table 1. 
For the training, we utilize dataset transcribed with restored vowels, borrowed and dialectal sounds. We used 1 hour 30 minutes of training data in this study. 5.2 Model Training The Models, presented in Figure 2, are optimized using Adam optimizer for 50 epochs with an early stopping criterion. The initial learning rate is 1 × 10−4, and a batch size of 16 is employed. The loss criterion is CTC loss, utilized for predicting verbatim sequences. The input dimension for the SSL frame-level representation is d = 1024, the dimension of the discrete labels d = k. For all the architectures in Figure 2, the dimension of feedforward (FF) layer is d = 512. For the DVR joint, the output from the FFs ( ˆd, e) are concatenated to form [ ˆd, e] of dimension d = 1024. These outputs are then passed to 2 transformer encoders each with 8 attention heads. Following, the encoded information is then projected to output head of dimension V = 39 equivalent to the characters supported by the models. The total number of trainable parameters are Baseline:7.634M; DVR discrete:7.110M; and joint: 33.346M. 5.3 Evaluation Measures We used Davis-Bouldin index (DBindex) to select the k value for our codebook. The DBindex is 8Trained on Arabic CommonVoice 9https://github.com/MontrealCorpusTools/ Montreal-Forced-Aligner.git widely used in clustering performance evaluation (Davies and Bouldin, 1979), and is characterized by the ratio of within-cluster scatter to between-cluster separation. A lower DBindex value is better, signifying compact clustering. Following, we adapted the approach of (Hsu et al., 2021b) to evaluate the codebook quality using Phone Purity, Cluster Purity, and Phone-Normalized Mutual Information (PNMI). These measures use frame-level alignment of characters with discrete codes assigned to each frame. Phone purity measures the average framelevel phone accuracy, when we mapped the codes to its most likely phone (character) label. Cluster purity, indicates the conditional probability of a discrete code given the character label. PNMI measures the percentage of uncertainty about a character label eliminated after observing the code assigned. A higher PNMI indicates better quality of the codebook. Moreover, we assessed the codebook quality by human perception tests as mentioned in the following section. As for evaluating the dialectal sounds and short vowel recognition model, we reported Character Error Rate (CER) with and without restoring short vowels. Human Perception Test Setup We performed cluster quality analysis for k = {128, 256, 512} following the steps of (Mao et al., 2018; Li et al., 2018). For our study, we defined each clusters (demoted by a code) as either Clean or Mix. Clusters are considered as Clean when 80% of its instances are matched to one particular character, where as for Mix clusters, the instances are mapped to different characters.10 We hypothesise that the Mix clusters represent examples which can resembles closely to either two of canonical sound unit /l1/ and /l2/, or a mix of both /l1_l2/. We randomly selected 52 examples from each perceived Mix Clusters. We asked the four annotators (2 native and 2 non-native Arabic speakers) to categorize it into these four classes: more similar to /l1/, more similar to /l2/, a mix of both, or neither. 6 Results and Discussion Number of discrete codes in Codebook We reported the DBindex for the codebook sizes k = {128, 256, 512} in Table 2. We observed lower DBindex with k = 256 indicating better codebook quality. 
We further evaluated the codebook quality and reported purity measures with the Ar:CVR test10Only characters above 20% frequency are considered. 13178 Figure 3: The statistical results of perceptual tests of different sounds using cluster with k = 256 set only for brevity and CER with all the testsets. Our CER results shows the efficacy of the selected k = 256 for most of the test sets. We observed that increasing codebook size improves the purity and the PNMI. We noticed, the gain in cluster stability between k = 256 vs k = 516 is not very large with respect to the performance and computational cost. Hence we selected the codebook C of size k = 256 for all the experiments. Perceptual test of Codebook We averaged annotator judgments across four categories for all Mix clusters, revealing no clear majority and highlighting the listeners’ difficulty in categorically labeling audio within these clusters. In aligned with Mao et al. (2018); Li et al. (2018), we also conclude that these mixed labels genuinely exist and cannot be precisely characterized by any conventional given label. We present some of our findings of the perceptual test in Figure 3 for 5 different Mix clusters with average judgment per category. k 128 256 512 C size k selection criterion DBindex (↓) 2.59 2.57 2.7 Purity Measures: Ar:CVR testset Phone Purity (↑) 0.600 0.641 0.672 Discrete Code Purity (↓) 0.436 0.289 0.236 PNMI (↑) 0.343 0.418 0.495 CER (↓): Borrowed and Dialectal Unit Recognition Ar:CVR 0.149 0.108 0.107 EgyAlj 0.246 0.206 0.218 ArabVoice15 0.465 0.447 0.462 Average 0.287 0.254 0.262 Table 2: Quality evaluation of discrete codes based on DBindex, purity measures and CER for 3 test sets. Dialectal Unit Recognition Performance We reported the performance of the proposed DVR discrete and joint model in Table 3 for borrowed and dialectal unit recognition task. Our results shows the efficacy of the DVR models over the baseline specially for dialectal test sets (ArabVoice 13179 CER Z DD DJ Training Data 1hr 30min Ar:CVR 0.113 0.108 0.094 EgyAlj 0.252 0.206 0.231 AraVoice15 0.536 0.447 0.464 3hrs 30min Ar:CVR 0.103 0.108 0.096 EgyAlj 0.270 0.241 0.253 AraVoice15 0.497 0.470 0.483 5hr 30min Ar:CVR 0.095 0.110 0.099 EgyAlj 0.257 0.245 0.248 AraVoice15 0.485 0.477 0.491 ∼20 hrs Ar:CVR 0.099 0.108 0.101 EgyAlj 0.264 0.244 0.227 AraVoice15 0.492 0.478 0.457 Table 3: Reported CER performance for borrowed and dialectal unit recognition task with Baseline (Z), DVR Discrete (DD) and DVR Joint (DJ) models, for all three test sets and different training data sizes. CER Farasa Z DD DJ Ar:CVR 0.279 0.123 0.278 0.118 EgyAlj 0.250 0.279 0.395 0.274 Table 4: Reported CER for Farasa, Baseline (Z), DVR Discrete (DD) and DVR Joint (DJ) models for two test sets. Training set of 1 hour 30 minutes. and EgyAlj). We observed for borrowed and dialectal unit recognition, the discrete model outperforms the joint model significantly. Breakdown of the performance for 15 countries are presented in Appendix A.2. Impact of Training Data size Table 3 also shows the impact of the training data size. We observed for dialectal unit recognition, our DVR discrete model outperforms the other two models significantly with limited data sets of {1hr30min, 3hr30min, 5hr30min}. We see an improvement in performance from 1hr30min to 3hr30min settings. However, beyond a certain data threshold, the improvements plateaued. 
Performance for short vowel restoration For short vowel restoration (in Table 4), we observed that the added frame-level embeddings (in DVR joint) improve the recognition performance. We also observed that the baseline model performs comparably with DVR joint. This indicates that the restoration of short vowels benefits from high dimensional fine-grained information compare to using few discrete codes. We also compared the CER with Farasa – state-of-the-art text-based dicretization tool (Abdelali et al., 2016). We observed the acoustic models outperform Farasa by a large margin, especially for common voice subset. However, Farasa excelled in formal content – news content presented in EgyAlj testset. 7 Conclusion In this study, we propose a novel dialectal sound and short vowel recovery framework that utilizes a handful of discrete codes to represent the variability in dialectal Arabic. We also observed with only 256 discrete labels, the borrowed and dialectal sound recognition model outperforms both baseline and joint (discrete code with frame-level SSL representation) models by ≈7% CER improvement. For restoring vowels, we noticed SSL embeddings play a bigger role. Our findings indicate the efficacy of the discrete model with small training datasets. To foster further research in dialectal Arabic, we introduced, benchmarked, and released ArabVoice15 – a dialectal verbatim transcription dataset containing utterances from 15 Arab countries. In the future, we will apply the framework to more dialects and other dialectal languages. Limitations The diversity of representation and the size of ArabVoice15 could limit the conclusion to generalize in all Arabic dialects due to variability in dialectal sounds. Although the annotator was an expert transcriber and received extensive training, their dialect may have led to some bias in judgment. Ethics Statement For the research work presented in this paper on the Dialectal Sound and Vowelization Recovery (DSVR) framework, we have adhered to the highest ethical standards. All the speech/audio data used in this study were already publicly available. The human perception tests for our evaluation process were designed with a commitment to fairness, inclusivity, and transparency. The participants were selected keeping in mind balancing gender and nativity. Listeners were fully briefed on the nature 13180 of the research and their rights as participants, including the right to withdraw at any time without consequence. However as we mentioned in the limitation section, we cannot guarantee any human bias toward any dialectal sound or preference. References Ahmed Abdelali, Kareem Darwish, Nadir Durrani, and Hamdy Mubarak. 2016. Farasa: A fast and furious segmenter for arabic. In Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: Demonstrations, pages 11–16. Ahmed Abdelali, Nadir Durrani, Cenk Demiroglu, Fahim Dalvi, Hamdy Mubarak, and Kareem Darwish. 2022. Natiq: An end-to-end text-to-speech system for arabic. In Proceedings of the Seventh Arabic Natural Language Processing Workshop, pages 394–398, Abu Dhabi, UAE. Association for Computational Linguistics. Ahmed Abdelali, Hamdy Mubarak, Shammur Absar Chowdhury, Maram Hasanain, Basel Mousi, Sabri Boughorbel, Samir Abdaljalil, Yassine El Kheir, Daniel Izham, Fahim Dalvi, Majd Hawasly, Nizi Nazar, Yousseif Elshahawy, Ahmed Ali, Nadir Durrani, Natasa Milic-Frayling, and Firoj Alam. 2024. LAraBench: Benchmarking Arabic AI with Large Language Models. 
In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, Malta. Association for Computational Linguistics. Mansour M Al-Ghamdi, Husni Al-Muhtasib, and Moustafa Elshafei. 2004. Phonetic rules in arabic script. Journal of King Saud University-Computer and Information Sciences, 16:85–115. Tuka Al Hanai and James R Glass. 2014. Lexical modeling for arabic asr: a systematic approach. In INTERSPEECH, pages 2605–2609. Hanan Aldarmaki and Ahmad Ghannam. 2023. Diacritic recognition performance in arabic asr. arXiv preprint arXiv:2302.14022. Ahmed Ali, Peter Bell, James Glass, Yacine Messaoui, Hamdy Mubarak, Steve Renals, and Yifan Zhang. 2016. The MGB-2 challenge: Arabic multi-dialect broadcast media recognition. In SLT. Ahmed Ali, Shammur Chowdhury, Mohamed Afify, Wassim El-Hajj, Hazem Hajj, Mourad Abbas, Amir Hussein, Nada Ghneim, Mohammad Abushariah, and Assal Alqudah. 2021. Connecting arabs: Bridging the gap in dialectal speech recognition. Communications of the ACM, 64(4):124–129. Ahmed Ali, Suwon Shon, Younes Samih, Hamdy Mubarak, Ahmed Abdelali, James Glass, Steve Renals, and Khalid Choukri. 2019. The mgb-5 challenge: Recognition and dialect identification of dialectal arabic speech. In 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 1026–1033. IEEE. Ahmed Ali, Yifan Zhang, Patrick Cardinal, Najim Dahak, Stephan Vogel, and James Glass. 2014. A complete kaldi recipe for building arabic speech recognition systems. In 2014 IEEE spoken language technology workshop (SLT), pages 525–529. IEEE. Rosana Ardila, Megan Branson, Kelly Davis, Michael Henretty, Michael Kohler, Josh Meyer, Reuben Morais, Lindsay Saunders, Francis M Tyers, and Gregor Weber. 2019. Common voice: A massivelymultilingual speech corpus. arXiv preprint arXiv:1912.06670. Alexei Baevski, Wei-Ning Hsu, Alexis Conneau, and Michael Auli. 2021a. Unsupervised speech recognition. Advances in Neural Information Processing Systems, 34:27826–27839. Alexei Baevski, Wei-Ning Hsu, Alexis Conneau, and Michael Auli. 2021b. Unsupervised speech recognition. In NeurIPS. Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. 2020. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in neural information processing systems, 33:12449–12460. Saurabhchand Bhati, Jesús Villalba, Piotr ˙Zelasko, Laureano Moro-Velazquez, and Najim Dehak. 2022. Unsupervised speech segmentation and variable rate representation learning using segmental contrastive predictive coding. IEEE/ACM Transactions on Audio, Speech, and Language Processing. Fadi Biadsy, Julia Bell Hirschberg, and Nizar Y Habash. 2009. Spoken arabic dialect identification using phonotactic modeling. Xuankai Chang, Brian Yan, Yuya Fujita, Takashi Maekaku, and Shinji Watanabe. 2023. Exploration of efficient end-to-end asr using discretized input from self-supervised learning. arXiv preprint arXiv:2305.18108. Kuan-Yu Chen et al. 2019. Completely unsupervised phoneme recognition by a generative adversarial network harmonized with iteratively refined hidden markov models. In Interspeech. Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, et al. 2022. Wavlm: Large-scale self-supervised pre-training for full stack speech processing. IEEE Journal of Selected Topics in Signal Processing, 16(6):1505–1518. Shammur A Chowdhury, Ahmed Ali, Suwon Shon, and James R Glass. 2020a. 
What does an end-to-end dialect identification model learn about non-dialectal information? In INTERSPEECH, pages 462–466. 13181 Shammur A Chowdhury, Younes Samih, Mohamed Eldesouki, and Ahmed Ali. 2020b. Effects of dialectal code-switching on speech modules: A study using egyptian arabic broadcast speech. Shammur Absar Chowdhury, Nadir Durrani, and Ahmed Ali. 2023. What do end-to-end speech models learn about speaker, language and channel information? a layer-wise and neuron-level analysis. Computer Speech & Language, 83:101539. Hung yi Lee Da-Rong Liu, Kuan-Yu Chen and Lin shan Lee. 2018. Completely unsupervised phoneme recognition by adversarially learning mapping relationships from audio embeddings. In Interspeech. Yi-chen Chen Sung-feng Huang Shun-po Chuang Dayi Wu Da-rong Liu, Po-chun Hsu and Hung yi Lee. 2022. Learning phone recognition from unpaired audio and phone sequences based on generative adversarial network. IEEE/ACM Transactions on Audio, Speech, and Language Processing. Fahim Dalvi, Maram Hasanain, Sabri Boughorbel, Basel Mousi, Samir Abdaljalil, Nizi Nazar, Ahmed Abdelali, Shammur Absar Chowdhury, Hamdy Mubarak, Ahmed Ali, Majd Hawasly, Nadir Durrani, and Firoj Alam. 2024. LLMeBench: A flexible framework for accelerating llms benchmarking. David L Davies and Donald W Bouldin. 1979. A cluster separation measure. IEEE transactions on pattern analysis and machine intelligence, (2):224–227. Ewan Dunbar et al. 2017. The zero resource speech challenge 2017. In ASRU. Ryan Eloff, André Nortje, Benjamin van Niekerk, Avashna Govender, Leanne Nortje, Arnu Pretorius, Elan Van Biljon, Ewald van der Westhuizen, Lisa van Staden, and Herman Kamper. 2019. Unsupervised acoustic unit discovery for speech synthesis using discrete latent-variable neural networks. arXiv preprint arXiv:1904.07556. Abhinav Garg, Jiyeon Kim, Sushil Khyalia, Chanwoo Kim, and Dhananjaya Gowda. 2024. Datadriven grapheme-to-phoneme representations for a lexicon-free text-to-speech. arXiv preprint arXiv:2401.10465. John S Garofolo, Lori F Lamel, William M Fisher, Jonathan G Fiscus, and David S Pallett. 1993. Darpa timit acoustic-phonetic continous speech corpus cdrom. nist speech disc 1-1.1. NASA STI/Recon technical report n, 93:27403. Ian J Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio. 2013. An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211. Nawar Halabi and Mike Wald. 2016. Phonetic inventory for an arabic speech corpus. Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. 2021a. Hubert: Self-supervised speech representation learning by masked prediction of hidden units. IEEE/ACM Transactions on Audio, Speech, and Language Processing. Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, and Abdelrahman Mohamed. 2021b. Hubert: Self-supervised speech representation learning by masked prediction of hidden units. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:3451–3460. Yassine El Kheir, Ahmed Ali, and Shammur Absar Chowdhury. 2024. Speech representation analysis based on inter- and intra-model similarities. In Explainable Machine Learning for Speech and Audio Workshop, ICASSP. Felix Kreuk, Joseph Keshet, and Yossi Adi. 2020. Self-supervised contrastive learning for unsupervised phoneme segmentation. In Interspeech. Xu Li, Shaoguang Mao, Xixin Wu, Kun Li, Xunying Liu, and Helen Meng. 2018. 
Unsupervised discovery of non-native phonetic patterns in l2 english speech for mispronunciation detection and diagnosis. In INTERSPEECH, pages 2554–2558. John Makhoul, Salim Roucos, and Herbert Gish. 1985. Vector quantization in speech coding. Proceedings of the IEEE, 73(11):1551–1588. Shaoguang Mao, Xu Li, Kun Li, Zhiyong Wu, Xunying Liu, and Helen Meng. 2018. Unsupervised discovery of an extended phoneme set in l2 english speech for mispronunciation detection and diagnosis. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6244–6248. IEEE. Ossama Obeid, Nasser Zalmout, Salam Khalifa, Dima Taji, Mai Oudah, Bashar Alhafni, Go Inoue, Fadhl Eryani, Alexander Erdmann, and Nizar Habash. 2020. CAMeL tools: An open source python toolkit for Arabic natural language processing. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 7022–7032, Marseille, France. European Language Resources Association. Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. 2015. Librispeech: an asr corpus based on public domain audio books. In 2015 IEEE international conference on acoustics, speech and signal processing (ICASSP), pages 5206–5210. IEEE. Alex S. Park and James R. Glass. 2008. Unsupervised pattern discovery in speech. IEEE Transactions on Audio, Speech, and Language Processing. Ankita Pasad, Ju-Chieh Chou, and Karen Livescu. 2021. Layer-wise analysis of a self-supervised speech representation model. In 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 914–921. IEEE. 13182 Ankita Pasad, Bowen Shi, and Karen Livescu. 2023. Comparative layer-wise analysis of self-supervised speech models. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5. IEEE. Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. 2023. Robust speech recognition via large-scale weak supervision. In International Conference on Machine Learning, pages 28492–28518. PMLR. Sara Shatnawi, Sawsan Alqahtani, and Hanan Aldarmaki. 2023. Automatic restoration of diacritics for speech data sets. arXiv preprint arXiv:2311.10771. Amitay Sicherman and Yossi Adi. 2023. Analysing discrete self supervised speech representation for spoken language modeling. In ICASSP. Vrunda Sukhadia and Shammur Absar Chowdhury. 2024. Children’s speech recognition through discrete token enhancement. In INTERSPEECH 2024. Benjamin Van Niekerk, Leanne Nortje, and Herman Kamper. 2020. Vector-quantized neural networks for acoustic unit discovery in the zerospeech 2020 challenge. arXiv preprint arXiv:2005.09409. Benjamin van Niekerk, Leanne Nortje, and Herman Kamper. 2020. Vector-quantized neural networks for acoustic unit discovery in the zerospeech 2020 challenge. In Interspeech 2020, pages 4836–4840. Dimitra Vergyri and Katrin Kirchhoff. 2004. Automatic diacritization of arabic for acoustic modeling in speech recognition. In Proceedings of the workshop on computational approaches to Arabic script-based languages, pages 66–73. Maarten Versteegh et al. 2015. The zero resource speech challenge 2015. In Interspeech. Dan Wells, Hao Tang, and Korin Richmond. 2022. Phonetic analysis of self-supervised representations of english speech. In Interspeech. Mingzhi Yu, Hieu Duy Nguyen, Alex Sokolov, Jack Lepird, Kanthashree Mysore Sathyendra, Samridhi Choudhary, Athanasios Mouchtaris, and Siegfried Kunzmann. 2020. 
Multilingual grapheme-tophoneme conversion with byte representation. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8234–8238. IEEE. A Appendix A.1 Sound Analysis In Figure 5, we have depicted potential confusion between specific sounds in MSA and Arabic dialects. Utilizing a Hidden Markov Model-Time Delay Neural Network (HMM-TDNN) model11, 11https://kaldi-asr.org/models/m13 trained with MGB-2 (Ali et al., 2016) for Arabic, we aligned randomly selected samples from the original datasets of CommonVoice Arabic and EgyAlj. For the English dataset TIMIT, we used the provided ground truth alignment. After aligning speech signals with their original unvowelized character-based transcriptions, we matched frame-level features extracted from XLSR (see Section 3.1) with their corresponding characters. In Figure 5.A, we randomly selected 1000 samples associated with P [z] and 1000 samples associated with X [D] from CommonVoice Arabic. Despite CommonVoice Arabic being considered as clean MSA speech data with good pronunciation, we observed that some samples of X [D] were clustered with P [z], primarily explained by the speakers getting influenced by their dialectal variations, as discussed in Section 2. Figure 5.B displays the selection of three characters: H [t], è [t/h], è [h]. Notably, è is at times pronounced as [t] and at other times as [h]. Although rule-based methods (Halabi and Wald, 2016) can predict when it will correspond to which sound, applying these rules in everyday spoken language, where people don’t follow rule based pronunciation, proves challenging. The figure reveals two main clusters for [t] and [h], with vectors associated with è scattered between these clusters, highlighting the aforementioned point. Figure 5.C illustrates the selection of four labels: Arabic h. [ [dý], and English phonemes (zh, g, jh) [ý, g, dý]. We selected 1000 Arabic samples of h. from CommonVoice Arabic and EgyAlj, along with 500 samples for each of the English phonemes. It became apparent that the Arabic sound h. is distributed across different English pronunciations (zh, g, and jh), indicating dialectal variations in the pronunciation of h. . A.2 Country-wise DVR performance In this section, we present the aforementioned results discussed in Section 6. Figure 4 displays CER results for the Baseline (Z), SVR Discrete (k:256), and DVR joint (Z+k:256) models trained on 1H30min of data, tested on AraVoice15. We analyze the CER results for each dialect individually. Our observations reveal that SVR Discrete (k:256) and DVR joint (Z+k:256) consistently outperform the Baseline (Z) across all dialects, exhibiting a sub13183 Figure 4: Reported CER for test utterances from 15 Arab countries for three models Baseline (Z), DVR discrete (k:256) and DVR joint (Z+k:256) Figure 5: 2D t-SNE Projection of Frame-Level Presentations Extracted Randomly from Finetuned Arabic XLS-R. A. Pairs ( X P) [ð z]. B. Sounds (è è H) [h t]. C. Pairs (h. [dý], zh [ý], g ). stantial performance gap in MOR, YEM, PAL, and IRA dialects. Moreover, SVR Discrete (k:256) and DVR joint (Z+k:256) exhibit similar performance across the majority of the 15 dialects (10/15), with notable disparities observed in JOR, SUD, SYR, where a discernible performance gap is evident. 13184
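For readers who want to reproduce the kind of analysis behind Figure 5 in Appendix A.1, a minimal sketch of a 2D t-SNE projection of frame-level representations for a pair of competing sounds is shown below; the feature arrays and sound labels are random stand-ins, not the paper's force-aligned data.

```python
# Hedged sketch of a Figure-5-style analysis: t-SNE over frame vectors of two sounds.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_sound_pair(feats_a: np.ndarray, feats_b: np.ndarray,
                    name_a: str, name_b: str) -> None:
    X = np.vstack([feats_a, feats_b])
    xy = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
    n = len(feats_a)
    plt.scatter(xy[:n, 0], xy[:n, 1], s=4, label=name_a)
    plt.scatter(xy[n:, 0], xy[n:, 1], s=4, label=name_b)
    plt.legend()
    plt.title(f"t-SNE of frame-level features: {name_a} vs {name_b}")
    plt.savefig(f"tsne_{name_a}_{name_b}.png", dpi=150)

# Toy usage with random stand-ins for the aligned frame vectors of /z/ and /D/.
rng = np.random.default_rng(0)
plot_sound_pair(rng.normal(0.0, 1.0, (1000, 64)),
                rng.normal(0.5, 1.0, (1000, 64)), "z", "D")
```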
2024
711
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13185–13197 August 11-16, 2024 ©2024 Association for Computational Linguistics Document-Level Machine Translation with Large-Scale Public Parallel Corpora Proyag Pal and Alexandra Birch and Kenneth Heafield ILCC, School of Informatics, University of Edinburgh {proyag.pal,a.birch,kenneth.heafield}@ed.ac.uk Abstract Despite the fact that document-level machine translation has inherent advantages over sentence-level machine translation due to additional information available to a model from document context, most translation systems continue to operate at a sentence level. This is primarily due to the severe lack of publicly available large-scale parallel corpora at the document level. We release a large-scale open parallel corpus with document context extracted from ParaCrawl in five language pairs, along with code to compile document-level datasets for any language pair supported by ParaCrawl. We train context-aware models on these datasets and find improvements in terms of overall translation quality and targeted document-level phenomena. We also analyse how much long-range information is useful to model some of these discourse phenomena and find models are able to utilise context from several preceding sentences. 1 Introduction Machine translation has traditionally been framed as a problem of translating source text to target text one sentence at a time. However, depending on the languages and the content of the text being translated, it is often the case that a sentence is impossible to translate well in isolation – that further contextual information is required. Maruf et al. (2021) summarise several discourse phenomena that are impossible for sentence-level machine translation systems to deal with – including anaphoric pronouns, lexical cohesion, deixis, and ellipsis. There are also features like grammatical gender, number, style, and formality, that can sometimes not be determined from the individual sentence but are dependent on surrounding context. Therefore, it has been clear for many decades (Bar-Hillel, 1960) that a machine translation system cannot translate some sentences without the ability to capture other linguistic cues from context. Läubli et al. (2018) also showed that while sentence-level neural machine translation can appear high-quality out of context, human evaluators had a much stronger preference for human translation when evaluating translation at the document level. As a result, there have been many efforts to incorporate document context into neural machine translation. What almost all of these methods have in common is that they require parallel training data with document context. ParaCrawl (Bañón et al., 2020) produced largescale parallel corpora and the released data includes information about the URLs from which the sentences were extracted, but the released corpora were only sentence-level. We use raw webpage text publicly available from ParaCrawl along with the officially released sentence-level corpora to assemble large-scale document-level parallel corpora for several language pairs. We release our code to generate the document-level datasets1 as well as the datasets in five selected language pairs2. 
We then validate the usefulness of our datasets by training context-aware translation models for all of these language pairs, and find that models that are aware of target context perform better than sentence-level baselines, often in terms of overall translation quality, but more significantly when evaluated with respect to targeted discourse phenomena. We experiment with varying the amount of preceding context available to these contextaware models at test time, and find that while information from the immediately previous sentence is most useful – as is intuitively obvious, longer-range context up to a certain extent can also help models translate phenomena like anaphoric pronouns more accurately. 1https://github.com/Proyag/ParaCrawl-Context 2https://huggingface.co/datasets/Proyag/ paracrawl_context. More language pairs may be released in the future; it requires a resource-intensive but simple process of running our code on any ParaCrawl language pair. 13185 2 Related Work Many attempts to utilise document context in machine translation have introduced specialised architectures to encode context (Tu et al., 2018; Jean et al., 2017; Kuang et al., 2018; Maruf and Haffari, 2018; Voita et al., 2018; Miculicich et al., 2018) alongside source sentences. Some methods like Junczys-Dowmunt (2019); Sun et al. (2022); Post and Junczys-Dowmunt (2023) eliminate sentence boundaries altogether and use a standard transformer architecture to translate an entire chunk of text as a single sequence. However, unless they retain some sentence markers like in Junczys-Dowmunt (2019) or Tiedemann and Scherrer (2017), these models can be difficult to evaluate with our existing evaluation paradigms and metrics due to the dependence on sentencelevel test sets. In those cases, we often have to rely on sentence-splitting heuristics and alignment methods just to be able to compute a sentence-level metric on the model outputs. As a result, our work chooses a method to translate with one sentence at a time as input, but with document context provided as a separate additional input. This allows the model to benefit from context information while still being simple to evaluate with existing metrics and test sets. There are few existing parallel corpora of significant size that retain document metadata – examples are Europarl (Koehn, 2005) which had only around 2M sentences for the largest language pairs; CzEng (Kocmi et al., 2020; Bojar et al., 2016) containing 61M sentence pairs with document annotation along with more than 100M synthetic sentence pairs, but only for eng↔ces; OpenSubtitles (Lison and Tiedemann, 2016); and News Commentary (Kocmi et al., 2023), where the latter two datasets are relatively large corpora for several language pairs but restricted in domain. The CCAligned corpus (El-Kishky et al., 2020) includes hundreds of millions of comparable document pairs across many languages, from which sentence-level datasets were extracted and released. A dataset with sentence pairs and corresponding document contexts was not created; however, it should be possible to extract a similar dataset as the one we present here from the available data released by CCAligned which includes documents, URLs, and sentences. Closest to our data, Al Ghussin et al. (2023) used publicly available parallel document metadata from ParaCrawl3 to extract aligned paragraphs of text, and used these paragraphs as a proxy for documents. 
Even though their extracted datasets were at a relatively small scale due their use of only a subset of ParaCrawl data and strict filtering, they observed clear improvements in targeted evaluations of document-level translation phenomena. Post and Junczys-Dowmunt (2023) showed that using only monolingual documents and backtranslating (Sennrich et al., 2016) them sentence by sentence to produce synthetic document pairs can surprisingly produce better results than using actual document pairs to train a document-level model. Their results, however, were mostly on unreleased private data, and their comparison could not be reproduced on public data precisely because of the absence of public datasets of adequate size. Another recent orthogonal approach is to use large language models’ inherent ability to model long context to perform document-level translation (Wang et al., 2023; Zhang et al., 2023; Karpinska and Iyyer, 2023) with no or very few parallel training examples. While this paradigm is gaining popularity, it is yet to be comprehensively explored, and the need remains to have large-scale datasets of parallel text with document context. 3 Dataset At the time of its release, ParaCrawl (Bañón et al., 2020) was the largest publicly available sentencelevel parallel corpus for most of the languages it supported. The ParaCrawl corpus mining process included steps to match documents that were estimated to be translations of each other, from which sentences were extracted and aligned, but unfortunately, the released corpora did not preserve document context or structure, and only contained isolated sentence pairs along with the source URLs they were originally extracted from. However, separately, a lot of the raw text crawled from the web was also released4 as languageclassified base64-encoded text with their corresponding URLs. Therefore, we were able to match the webpage contents to their URLs in the sentencelevel parallel corpora to recover the corresponding documents. To build document-level parallel datasets from these sources of data, we chose five language pairs – Czech (ces), Polish (pol), German (deu), French 3https://www.statmt.org/paracrawl-benchmarks/ 4https://paracrawl.eu/moredata 13186 Lang. Sentences Source Target Both deu 278.3 105.6 110.3 92.1 fra 216.6 83.5 86.3 72.2 ces 50.6 18.7 21.0 16.3 pol 40.1 16.8 18.4 14.9 rus 5.4 3.1 2.8 2.4 Table 1: Sizes of our document-level datasets in millions of lines. “Sentences” is the size of the original ParaCrawl sentence-level datasets. “Source/Target” denotes the subset of sentence pairs where there is at least one source/target context – eng is always considered the source language in this case. “Both” denotes the subset of sentence pairs with at least one source context and one target context. Note: the eng-rus dataset is significantly smaller because it was not part of the ParaCrawl main release, but a smaller “bonus” release. (fra), and Russian (rus), all paired with English (eng) – and used the following method: 1. Extract the source URLs and corresponding sentences from the TMX files from ParaCrawl release 95 (or the bonus release in the case of eng-rus). Each sentence is usually associated with many different source URLs, and we keep all of them. 2. Match the extracted URLs with the URLs from all the raw text data and get the corresponding base64-encoded webpage/document, if available. 3. Decode the base64 documents and try to match the original sentence. If the sentence is not found in the document, discard the document. 
Otherwise, keep the 512 tokens preceding the sentence (where a token is anything separated by a space), replace line breaks with a special <docline> token, and store it as the document context. Since some very common sentences correspond to huge numbers of source URLs, we keep a maximum of 1000 unique contexts per sentence separated by a delimiter ||| in the final dataset. 4. Finally, we compile three different files per language pair – a dataset with all sentence pairs where we have one or more source contexts, one with all sentence pairs with target contexts, and a third dataset with both contexts. Even though the TMX files have source URLs for all released sentences, this process was lossy 5https://paracrawl.eu/releases due to a few different reasons: • ParaCrawl was compiled from a number of separate crawls or “collections”, and there were inconsistencies in how URLs were formatted in intermediate steps. We employed some basic heuristics to match as many URLs as possible, such as removing http:// and https:// and trailing slashes before matching, but there is a still a chance some URLs were missed in this process. • Data from CommonCrawl was not duplicated in the released raw text from ParaCrawl. To avoid re-downloading huge amounts of data, any URLs that were present in CommonCrawl but not in the other collections are missing from our dataset. • There were many instances where the original sentence could not be found in the contents of a webpage corresponding to its source URL. This is most likely due to the same URLs being crawled at different times and finding dynamic or possibly entirely changed content. Due to the existence of multiple matched documents for some sentences in the datasets, source and target contexts for a sentence pair may not be aligned. However, approximately 99.9% of all extracted sentence pairs have exactly one source/target context, which implies that the contexts should be aligned in most cases. Further filtering is recommended if aligned contexts are required, with the simplest option being to remove the subset of sentence pairs with more than one matched context. The sizes of our extracted datasets are shown in Table 1. Some samples can be found in Appendix A, and the full datasets are publicly available at https://huggingface.co/ datasets/Proyag/paracrawl_context. 4 Document-level Translation Models To evaluate the usefulness of our datasets, we train document-level translation models using only our datasets. Even though higher-quality documentlevel training data exists at a smaller scale, we choose to train our models only on data from ParaCrawl in order to accurately evaluate the quality and utility of our data and to make a fair comparison with our sentence-level baselines. At training time, for each input example, we first sample one context out of up to 1000 that are present in the document-level dataset. To ensure 13187 that the models are capable of using variable-length context at test time, we uniformly sample a context length l from {1, . . . , 256} for each training example. We then retain at least l tokens from the preceding context, possibly exceeding the limit to avoid mid-sentence splits. This context sampling was implemented using a custom pipeline6 in the Sotastream toolkit (Post et al., 2023). We then use the source sentence as the main model input, the sampled context as a second input (see detailed model architecture in Section 4.1), and the target sentence as the target model output. 
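The training-time context sampling described above can be sketched as follows. This is a simplified illustration rather than the actual Sotastream pipeline linked in footnote 6; the delimiter spacing and the sentence-boundary heuristic (extending left to the previous <docline> marker) are assumptions.

```python
# Simplified sketch of the training-time context sampling (not the actual Sotastream pipeline).
import random

MAX_CONTEXT_TOKENS = 256
DOCLINE = "<docline>"

def sample_context(context_field: str) -> str:
    """context_field holds up to 1000 extracted contexts separated by ' ||| '."""
    context = random.choice(context_field.split(" ||| "))
    tokens = context.split()
    l = random.randint(1, MAX_CONTEXT_TOKENS)
    if l >= len(tokens):
        return context
    # Keep at least the last l tokens, extending left to the previous
    # <docline> boundary so a line is not cut mid-sentence (heuristic).
    start = len(tokens) - l
    while start > 0 and tokens[start - 1] != DOCLINE:
        start -= 1
    return " ".join(tokens[start:])

# Each training example then becomes (source_sentence, sampled_context) -> target_sentence.
```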
We train separate models for each language pair using either source or target context information. While the model is always provided ground-truth context at training time, this is not always available in the case of target context at test time, so we test using both ground-truth target context and real predicted output context. However, we note that one of the most common use cases of machine translation is in the context of computer-assisted translation (CAT) tools, where translators can see preceding context but typically machine translate and post-edit one sentence at a time, as a result of which gold-standard target context is available for each sentence. 4.1 Model Architecture and Training We use the dual-encoder transformer architecture from Junczys-Dowmunt and Grundkiewicz (2018) but without tied parameters between the two encoders, implemented in the Marian framework (Junczys-Dowmunt et al., 2018). In other words, we modify a standard transformer encoder-decoder model (Vaswani et al., 2017), which takes the source sentence as input and produces the target sentence as output, to add a second encoder which takes additional source/target context as input. This is similar in spirit to Zhang et al. (2018), but we do not incorporate the context encoding into the source encoding, instead directly feeding both encodings to the decoder. As shown in Figure 1 of Junczys-Dowmunt and Grundkiewicz (2018), the decoder has two stacked cross-attention sub-layers to attend to the two encoder contexts. We use default transformer-big hyperparameters. This choice of architecture is less complex than the specialised architectures described in Section 2, but still allows for separating the current sentence and context in6https://github.com/Proyag/sotastream/blob/ custom_pipelines/sotastream/pipelines/sample_ from_fields_pipeline.py puts, giving us greater control over evaluation and interpretability compared to models which translate an entire large chunk of text at a time. Moreover, the addition of the second encoder automatically accounts for the need for extra model capacity to encode the document context. We train context-aware models for the following language pairs: eng→deu, eng→fra, eng→rus, eng→ces, and pol→eng. We train our models with dynamic batch size to make optimal use of GPU memory. We train all models on 4 or 8 Nvidia A100 or 3090 GPUs, using gradient accumulation to ensure that the average effective batch sizes are approximately equivalent in each case. We validate every 50 million target tokens for eng→rus and every 500 million target tokens for all other language pairs, early stopping when cross-entropy calculated on the validation set does not improve for 10 consecutive validations. 4.2 Test Data for Evaluation To assess the translation quality of our models, we perform two kinds of evaluation – general machine translation quality metrics, and using contrastive evaluation to measure the accuracy of the models on targeted discourse phenomena. 4.2.1 General Translation Quality Metrics We compute standard sentence-level quality metrics – BLEU (Papineni et al., 2002) using the sacreBLEU implementation7 (Post, 2018) and COMET8 (Rei et al., 2022) – on the following WMT test sets, all of which were released with document metadata: WMT22 eng→deu and eng→ces (Kocmi et al., 2022), WMT23 eng→rus (Kocmi et al., 2023), WMT20 pol→eng (Barrault et al., 2020), and WMT15 eng→fra (Bojar et al., 2015). 
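For reference, the sentence-level metrics above can be computed along the following lines with the sacreBLEU and unbabel-comet packages. This is not the authors' evaluation script, and the exact predict/checkpoint interfaces should be checked against each package's documentation.

```python
# Rough sketch of computing BLEU (sacreBLEU) and COMET (wmt22-comet-da) for a system output.
# Not the authors' evaluation script; interfaces shown here should be verified against the packages' docs.
import sacrebleu
from comet import download_model, load_from_checkpoint

def evaluate(sources, hypotheses, references):
    # Corpus-level BLEU with the default 13a tokenisation, as in the signature in footnote 7.
    bleu = sacrebleu.corpus_bleu(hypotheses, [references])

    comet_model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
    comet_data = [
        {"src": s, "mt": h, "ref": r}
        for s, h, r in zip(sources, hypotheses, references)
    ]
    comet = comet_model.predict(comet_data, batch_size=32, gpus=1)

    return bleu.score, comet.system_score
```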
4.2.2 Contrastive Evaluation Contrastive test sets consist of input text and translations which appear correct at the sentence level, but may be wrong given more context. Models are evaluated by their ability to assign higher probability to the sentences that are correct in context. We evaluate our models on a few different contrastive test sets which measure the following types of discourse phenomena for specific language pairs: • Anaphoric pronouns: The ContraPro test sets for eng→deu (Müller et al., 2018) and 7BLEU|#:1|c:mixed|e:no|tok:13a|s:exp|v:2.4.0 8Specifically wmt22-comet-da 13188 Model BLEU / COMET eng→deu eng→fra eng→ces eng→rus pol→eng Sentence-level 35.2 / 85.4 40.5 / 83.1 36.8 / 88.4 22.8 / 75.4 33.5 / 83.7 Subset - source 35.0 / 85.5 – 36.3 / 87.5 22.0 / 74.8 32.6 / 83.3 Subset - target 34.3 / 85.3 40.7 / 83.1 35.9 / 87.8 22.0 / 75.3 32.4 / 83.2 Source context 34.9 / 85.0 – 36.6 / 88.1 19.4 / 72.4 32.4 / 83.0 Gold target context 37.4 / 85.9 42.6 / 83.2 37.3 / 88.5 21.9 / 75.4 32.8 / 83.3 Predicted target context 34.7 / 85.4 40.5 / 82.8 35.4 / 87.1 21.5 / 74.9 32.8 / 83.4 Table 2: Overall sentence-level BLEU/COMET scores on test sets for models in different configurations. Bold text highlights the highest score for each language pair. “Subset - source” and “Subset - target” are sentence-level baselines trained on the subsets of sentences that have source or target contexts respectively, i.e. the same number of training examples as the corresponding context-aware models. “Gold target context” and “Predicted target context” are the same model which encodes target-side context, but the latter uses the predictions from previous lines in the same document as its context instead of the ground-truth context. eng→fra (Lopes et al., 2020) translation evaluate the accuracy of pronoun translation where the source English sentence does not contain enough information to determine the correct pronoun in the target language and context is required to generate the correct translated pronoun. One part of the DiscEvalMT test set (Bawden et al., 2018) also evaluates anaphoric pronoun translation in eng→fra. • Deixis and ellipsis: Good Translation Wrong in Context (GTWiC) (Voita et al., 2019) is a collection of contrastive test sets to evaluate a number of discourse phenomena in eng→rus translation, among which are deictic expressions, verb phrase ellipses, and correct inflection of nouns which depend on elided verbs. • Lexical choice: The DiscEvalMT contrastive test set from Bawden et al. (2018) tests lexical choice in eng→fra translation where the choice of certain words/phrases in translation is ambiguous without context information. It also tests lexical cohesion, i.e. the repetition of translated entities when the entity is repeated on the source side. An eng→ces extension of the lexical cohesion subset was created by Jon (2019). GTWiC also has a similar subset to test lexical cohesion in eng→rus. 5 Results and Analysis We train sentence-level and document-level models in a few different configurations for comparison. Our baseline is a standard sentencelevel transformer-big model trained on all of the ParaCrawl parallel data for a given language pair. We also train sentence-level baselines trained on the subsets of sentence pairs for which source or target contexts could be extracted, thus ensuring a fair comparison in terms of the number and content of training examples. 
Finally, for each language pair, we train two different document-level models: one which is aware of source context and one which uses target context. We further test the target context model in two different scenarios: using original ground-truth context for each test example and using the actual model output from previous sentences within a document as target context. 5.1 Effect on Overall Translation Quality One of the ways we evaluate our documenttranslation models is simply in terms of overall sentence-level translation quality metrics. The results are summarised in Table 2. We find that while source context does not seem to benefit overall translation quality, or at least not in a way that is reflected in these metrics, using the ground-truth target context generally improves translation quality over the baseline using the same number of training examples, i.e. “Subset - target” compared to “Gold target context” in Table 2. This improvement is not observed for eng→rus, probably due to the small number of training examples in the subset of sentences with target context not being enough to train a high-quality contextaware model. The sentence-level baseline using the full set of ParaCrawl sentence pairs (“Sentencelevel”) out-performs the context-aware models for the smaller language pairs, benefitting from having a much larger number of training examples. 13189 A machine translation system in an environment where translated output is post-edited sentence by sentence, such as in CAT tools, has access to ground-truth target context for every line, and can thus benefit from the improved translation quality of the context-aware model. However, a fully automated document-level translation pipeline does not have this information available. To reproduce this scenario, we also try using actual model predictions from previous sentences within a document as target context (“Predicted target context” in Table 2), and find that this does not yield the same improvement as using ground-truth context. This could be explained as a manifestation of exposure bias (Ranzato et al., 2016) due to the models only being trained on ground-truth contexts and not being robust enough to accurately use the relatively noisy predicted context, resulting in errors being propagated through the context. In some cases, the difference may not even be an obvious error, but could instead be related to domain/style hints that are available in the original context to guide the translation but are lost in the context predicted by the model. While models can be made more robust against the propagation of errors through preceding context using methods like scheduled sampling (Bengio et al., 2015) to expose some generated context to the model during training, the loss of contextual hints is more difficult to remedy. 5.2 Accuracy on Contrastive Test Sets We also perform evaluations on selected contrastive test sets for some language pairs, as mentioned in Section 4.2.2. Each example in a contrastive test set has a source sentence and source/target context along with two or more possible outputs, one of which is correct. We use our models to score all the possible outputs and say the model gets an example right if it assigns higher probability to the correct output than to the other options. We then calculate accuracy over the entire test set. 
While the contrastive test sets include source context, we report results in this section only on our target context-aware models since, consistent with Table 2, we find that our source context-aware models are unable to outperform sentence-level baselines. Since each contrastive test set is different and designed for specific languages, we discuss them separately in this section. Figure 1: Effect of varying the number of lines of target context on ContraPro eng→deu pronoun translation accuracy. The baseline accuracy achieved by the sentence-level model is 0.507. Accuracy increases steadily with up to 3 or 4 sentences of target context with only marginal gains beyond that. 5.2.1 ContraPro (eng→deu and eng→fra) We use ContraPro (Müller et al., 2018) to evaluate the accuracy of pronoun translation for our eng→deu models. This test set contains examples of sentence pairs where the source English sentence contains the pronoun it which needs to be translated into one of es, sie, or er in the target German. The pairs are designed so that provided context information is required to determine the correct translation of the pronoun. Each example has two translations which are both apparently correct at the sentence level but one of them uses the wrong pronoun in context. Our models are not required to generate translations, only to score each alternative output given the source sentence and ground-truth target context. If the model is able to take the context into account, it should assign higher probability to the option with the correct pronoun. A similar test set created by Lopes et al. (2020) evaluates the same phenomenon in eng→fra translation, where the English it is translated to il or elle in French and they is translated to ils or elles. We find that while our eng→deu sentence-level model has an accuracy of 0.507, i.e. approximately random chance, the model with target context scores 0.785 when provided 5 sentences of preceding context. This shows that the model learns to accurately disambiguate the correct choice of pronouns using the context information. For eng→fra, we find that the sentence-level model already achieves a reasonably high accuracy 13190 of 0.815, due to the fact that the antecedents of the pronouns are located within the same sentence in approximately 43% of the examples in this test set. Using our target context-aware model, we still see an increase in accuracy to 0.824 given a single sentence of preceding context, but no further improvement with longer context. Effect of Context Size Figure 1 shows the effect of the amount of context that is exposed to the eng→deu model on its ability to accurately translate the anaphoric pronouns in ContraPro. We find that a single sentence of target context is enough to significantly increase the accuracy of pronoun translation beyond a sentence-level baseline’s 0.507, and context longer than 3 sentences does not make much of a further difference in terms of total accuracy. This makes intuitive sense, since the antecedent of a pronoun is most often in the immediately preceding sentence and only very rarely more than 2 or 3 sentences away. However, for the subset of 442 examples (out of 12000) where the antecedent distance is greater than 3, the accuracy increases from 0.709 for the baseline to 0.908 for the context-aware model, which indicates that our model is in fact able to use long-range information to disambiguate pronouns. 5.2.2 Good Translation Wrong in Context (eng→rus) Voita et al. 
(2019) introduced a collection of test sets for contrastive evaluation of a range of discourse phenomena in eng→rus translation. The test set is thus divided into 4 subsets: • Deixis: These examples are related to gender and formality marking in Russian that are absent in the source English. The model needs to use context to translate these deictic words or phrases correctly. • Ellipsis: These examples have elliptical constructions in the English text that cannot be elided in Russian, so the translation needs to expand the ellipsis. There are two kinds of ellipsis-related errors targeted here: where the target text has wrong morphological inflection due to missing information from the source ellipsis, and where the wrong verb is generated for a verb phrase ellipsis. • Lexical cohesion: These examples evaluate the ability of the model to ensure that named entities that are repeated in the source are translated consistently in the output. Models Model / Context Length GTWiC Accuracy Deixis Ellipsis LC Infl. VP Sentence-level 0.5 0.5 0.058 0.458 Trg context/1 0.586 0.5 0.07 0.468 Trg context/2 0.654 0.494 0.07 0.472 Trg context/3 0.692 0.5 0.074 0.472 Table 3: Accuracy of our target context-aware model on the GTWiC test sets with varying number of sentences of target context. “LC” denotes lexical cohesion. While performance on deictic expressions improves steadily with more context, lexical cohesion only improves very marginally, and verb phrase (VP) ellipsis accuracy remains very low. need to be aware of preceding context to translate cross-sentential repetitions consistently. Unlike ContraPro, contrastive examples in GTWiC have several incorrect translations and one correct translation for each given source sentence and context. A test example is considered correct if our translation model scores the correct translation higher than all of the incorrect translations. We report accuracies separately for each GTWiC test set evaluating different phenomena. The performance of our target context-aware model on the GTWiC test sets is reported in Table 3. We observe that the context-aware model is significantly more capable of translating deictic expressions accurately. However, we find that it does not perform well on the ellipsis test sets, with verb phrase ellipsis accuracy surprisingly being worse than chance, and improvements on lexical cohesion are also marginal. This is possibly because the models need both source and target context to be able to model these phenomena accurately. For example, it is difficult for the model to be aware that an entity should be repeated if is not aware that both the preceding source and target contexts had occurrences of the same entity. A maximum of 3 context sentences is available per example in GTWiC, and we once again find that having more target context can be useful for the model to translate deictic expressions correctly, but the benefits diminish as the model usually gets adequate context from the last one or two sentences. 5.2.3 DiscEvalMT (eng→fra and eng→ces) The DiscEvalMT eng→fra contrastive test sets (Bawden et al., 2018) evaluate two document-level 13191 Model eng→fra eng→ces Lexical Choice: Sentence-level 0.5 0.5 Target context 0.525 0.533 Anaphora: Sentence-level 0.5 – Target context 0.545 – Table 4: Accuracy of target context-aware models compared to sentence-level models on the DiscEvalMT test sets in eng→fra and eng→ces. Context-aware models achieve higher accuracy in each case. 
translation phenomena: • Anaphora: Similar to ContraPro, this test set also contains examples where correctly generating a pronoun in the target French requires the model to use context information about the antecedent. • Lexical choice: Some of these examples contain an ambiguous word in the source and the model needs to be able to use context information to disambiguate the correct sense of the word and translate it correctly. The rest of the examples test models’ ability to consistently translate a repeated word or phrase. The eng→ces extension of DiscEvalMT (Jon, 2019) only includes the lexical choice test set. These test sets also have two alternative translations for each input sentence and context, and our models are expected to score the correct option in context higher than the other option. We can see in Table 4 that while our documentlevel models do not score very highly on this benchmark, they still out-perform the sentence-level baseline. Similar to GTWiC, for the lexical cohesion test sets, we believe that the models would perform better if they were able to access both source and target contexts, since they are otherwise unaware of the repeated word or phrase. 6 Conclusion We release large-scale document-level parallel corpora in five language pairs extracted from the ParaCrawl datasets in an effort to mitigate the dearth of publicly available machine translation training data with document context. We also open-source code to enable the community to compile such datasets in more language pairs. Due to both the ParaCrawl pipeline and our code being open-source, it is theoretically possible to create document-level datasets for any supported language by crawling the web, and not just those already released by ParaCrawl. While we treat any preceding text at the same URL as “context”, it is often the case that these are completely unrelated to a given sentence, such as UI elements, boilerplate text, or entirely unrelated content on the same webpage. Future work should also explore filtering these datasets to retain only genuine contextual information, which is likely to be much more useful to the model, although even content that is not strictly document context may help guide translation through indirect domain or style cues. Our document-level translation experiments show that models aware of target context improve in terms of overall translation quality as well as in terms of some targeted discourse phenomena compared to a sentence-level baseline. We show that machine translation can benefit from multiple sentences of preceding context to accurately translate discourse phenomena like anaphoric pronouns, although very long-range context is rarely useful. 7 Limitations Relevance of context Our work assumes that any extracted text preceding a given sentence on a webpage is relevant “document context” for that sentence. However, it is likely in many cases that the extracted context is unrelated to the sentence, since most webpages are not formatted as a coherent “document”. As a result, the dataset often includes irrelevant context like lists of products, UI elements, or video titles extracted from webpages which will not be directly helpful to document-level translation models. Unaligned contexts For sentences with multiple matching contexts, the source and target contexts may not always be aligned. However, as mentioned in Section 3, the vast majority of sentence pairs have exactly one source/target context, and should therefore have aligned contexts. 
We recommend filtering on this basis if aligned contexts are required. Availability of both contexts Our models are all trained with either source or target context being available to the models, but for some documentlevel phenomena like lexical consistency of repeated named entities, it is probably necessary for the model to be aware of both source and target 13192 context. Our datasets make it possible for future work to extract training data with both contexts and train such models. Model quality Our models are trained only on noisy ParaCrawl data and tested on high-quality WMT data. While there are much smaller but relatively high-quality document-level training datasets available, all our experiments were conducted only on ParaCrawl data to test the quality of the datasets without being influenced by other data. As a result, these models are not necessarily the strongest possible translation models, but they are useful to fairly and clearly compare document-level machine translation against sentence-level models. Language coverage ParaCrawl was focused on European Union languages with only a few “bonus” releases for other languages. Moreover, most of the corpora were for English-centric language pairs. Due to the high computational requirements to extract these corpora, our work further chose only a subset of these languages, resulting in corpora for only a few European languages, some of them closely related. Given the availability of raw data and tools to extract such corpora for many more languages from all over the world, we hope the community is encouraged to build such resources for a much larger variety of language pairs. Documentlevel translation phenomena also vary widely by source and target language, so such experiments for more languages is left for future work. 8 Ethical Considerations Harmful content The main released corpora from ParaCrawl were filtered to remove sensitive content, particularly pornography. Due to pornographic websites typically containing large amounts of machine translated text, this filtering also improved the quality of the resulting corpora. However, when we match sentences with their source URLs, it often happens that an innocuous sentence was extracted from a webpage with harmful content, and this content is present in our document contexts. We may release filtered versions of these corpora in the future, pending further work to filter harmful content at the document level. Eurocentricity The Eurocentric nature of our work is remarked upon in Section 7. Due to most large-scale publicly available parallel corpora being English-centric, almost all machine translation research remains English-centric, at the cost of the majority of the world’s language users. This limits both the generalisability and usefulness of a lot of research. We hope the release of more data and tools helps the community expand these efforts to more languages. Acknowledgements This work was funded by UK Research and Innovation (UKRI) under the UK government’s Horizon Europe funding guarantee [grant number 10052546]. Some of the computations described in this research were performed using the Baskerville Tier 2 HPC service (https://www.baskerville.ac.uk/). Baskerville was funded by the EPSRC and UKRI through the World Class Labs scheme (EP/T022221/1) and the Digital Research Infrastructure programme (EP/W032244/1) and is operated by Advanced Research Computing at the University of Birmingham. 
We acknowledge computational resources provided on Karolina by the Ministry of Education, Youth and Sports of the Czech Republic through the e-INFRA CZ (ID:90254). References Yusser Al Ghussin, Jingyi Zhang, and Josef van Genabith. 2023. Exploring paracrawl for document-level neural machine translation. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 1304– 1310, Dubrovnik, Croatia. Association for Computational Linguistics. Marta Bañón, Pinzhen Chen, Barry Haddow, Kenneth Heafield, Hieu Hoang, Miquel Esplà-Gomis, Mikel L. Forcada, Amir Kamran, Faheem Kirefu, Philipp Koehn, Sergio Ortiz Rojas, Leopoldo Pla Sempere, Gema Ramírez-Sánchez, Elsa Sarrías, Marek Strelec, Brian Thompson, William Waites, Dion Wiggins, and Jaume Zaragoza. 2020. ParaCrawl: Web-scale acquisition of parallel corpora. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4555–4567, Online. Association for Computational Linguistics. Yehoshua Bar-Hillel. 1960. The present status of automatic translation of languages. In Advances in computers, volume 1, pages 91–163. Loïc Barrault, Magdalena Biesialska, Ondˇrej Bojar, Marta R. Costa-jussà, Christian Federmann, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Matthias Huck, Eric Joanis, Tom Kocmi, Philipp Koehn, Chi-kiu Lo, Nikola Ljubeši´c, Christof Monz, Makoto Morishita, Masaaki Nagata, Toshiaki Nakazawa, Santanu Pal, Matt Post, and Marcos 13193 Zampieri. 2020. Findings of the 2020 conference on machine translation (WMT20). In Proceedings of the Fifth Conference on Machine Translation, pages 1–55, Online. Association for Computational Linguistics. Rachel Bawden, Rico Sennrich, Alexandra Birch, and Barry Haddow. 2018. Evaluating discourse phenomena in neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1304–1313, New Orleans, Louisiana. Association for Computational Linguistics. Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for sequence prediction with recurrent neural networks. Advances in neural information processing systems, 28. Ondˇrej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi. 2015. Findings of the 2015 workshop on statistical machine translation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 1–46, Lisbon, Portugal. Association for Computational Linguistics. Ondˇrej Bojar, Ondˇrej Dušek, Tom Kocmi, Jindˇrich Libovický, Michal Novák, Martin Popel, Roman Sudarikov, and Dušan Variš. 2016. Czeng 1.6: Enlarged czech-english parallel corpus with processing tools dockered. In Text, Speech, and Dialogue, pages 231–238, Cham. Springer International Publishing. Ahmed El-Kishky, Vishrav Chaudhary, Francisco Guzmán, and Philipp Koehn. 2020. CCAligned: A massive collection of cross-lingual web-document pairs. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5960–5969, Online. Association for Computational Linguistics. Sebastien Jean, Stanislas Lauly, Orhan Firat, and Kyunghyun Cho. 2017. Does neural machine translation benefit from larger context? arXiv preprint arXiv:1704.05135. Josef Jon. 2019. discourse-test-set. 
https://github. com/cepin19/discourse-test-set. [Online; accessed 22-May-2024]. Marcin Junczys-Dowmunt. 2019. Microsoft translator at WMT 2019: Towards large-scale document-level neural machine translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 225–233, Florence, Italy. Association for Computational Linguistics. Marcin Junczys-Dowmunt and Roman Grundkiewicz. 2018. MS-UEdin submission to the WMT2018 APE shared task: Dual-source transformer for automatic post-editing. In Proceedings of the Third Conference on Machine Translation: Shared Task Papers, pages 822–826, Belgium, Brussels. Association for Computational Linguistics. Marcin Junczys-Dowmunt, Kenneth Heafield, Hieu Hoang, Roman Grundkiewicz, and Anthony Aue. 2018. Marian: Cost-effective high-quality neural machine translation in C++. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 129–135, Melbourne, Australia. Association for Computational Linguistics. Marzena Karpinska and Mohit Iyyer. 2023. Large language models effectively leverage document-level context for literary translation, but critical errors persist. In Proceedings of the Eighth Conference on Machine Translation, pages 419–451, Singapore. Association for Computational Linguistics. Tom Kocmi, Eleftherios Avramidis, Rachel Bawden, Ondˇrej Bojar, Anton Dvorkovich, Christian Federmann, Mark Fishel, Markus Freitag, Thamme Gowda, Roman Grundkiewicz, Barry Haddow, Philipp Koehn, Benjamin Marie, Christof Monz, Makoto Morishita, Kenton Murray, Makoto Nagata, Toshiaki Nakazawa, Martin Popel, Maja Popovi´c, and Mariya Shmatova. 2023. Findings of the 2023 conference on machine translation (WMT23): LLMs are here but not quite there yet. In Proceedings of the Eighth Conference on Machine Translation, pages 1–42, Singapore. Association for Computational Linguistics. Tom Kocmi, Rachel Bawden, Ondˇrej Bojar, Anton Dvorkovich, Christian Federmann, Mark Fishel, Thamme Gowda, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Rebecca Knowles, Philipp Koehn, Christof Monz, Makoto Morishita, Masaaki Nagata, Toshiaki Nakazawa, Michal Novák, Martin Popel, and Maja Popovi´c. 2022. Findings of the 2022 conference on machine translation (WMT22). In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 1–45, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics. Tom Kocmi, Martin Popel, and Ondrej Bojar. 2020. Announcing czeng 2.0 parallel corpus with over 2 gigawords. arXiv preprint arXiv:2007.03006. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proceedings of Machine Translation Summit X: Papers, pages 79–86, Phuket, Thailand. Shaohui Kuang, Deyi Xiong, Weihua Luo, and Guodong Zhou. 2018. Modeling coherence for neural machine translation with dynamic and topic caches. In Proceedings of the 27th International Conference on Computational Linguistics, pages 596–606, Santa Fe, New Mexico, USA. Association for Computational Linguistics. Samuel Läubli, Rico Sennrich, and Martin Volk. 2018. Has machine translation achieved human parity? a 13194 case for document-level evaluation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4791–4796, Brussels, Belgium. Association for Computational Linguistics. Pierre Lison and Jörg Tiedemann. 2016. OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles. 
In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), pages 923–929, Portorož, Slovenia. European Language Resources Association (ELRA). António Lopes, M. Amin Farajian, Rachel Bawden, Michael Zhang, and André F. T. Martins. 2020. Document-level neural MT: A systematic comparison. In Proceedings of the 22nd Annual Conference of the European Association for Machine Translation, pages 225–234, Lisboa, Portugal. European Association for Machine Translation. Sameen Maruf and Gholamreza Haffari. 2018. Document context neural machine translation with memory networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1275–1284, Melbourne, Australia. Association for Computational Linguistics. Sameen Maruf, Fahimeh Saleh, and Gholamreza Haffari. 2021. A survey on document-level neural machine translation: Methods and evaluation. ACM Comput. Surv., 54(2). Lesly Miculicich, Dhananjay Ram, Nikolaos Pappas, and James Henderson. 2018. Document-level neural machine translation with hierarchical attention networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2947–2954, Brussels, Belgium. Association for Computational Linguistics. Mathias Müller, Annette Rios, Elena Voita, and Rico Sennrich. 2018. A large-scale test set for the evaluation of context-aware pronoun translation in neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 61–72, Brussels, Belgium. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics. Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186– 191, Brussels, Belgium. Association for Computational Linguistics. Matt Post, Thamme Gowda, Roman Grundkiewicz, Huda Khayrallah, Rohit Jain, and Marcin JunczysDowmunt. 2023. SOTASTREAM: A streaming approach to machine translation training. In Proceedings of the 3rd Workshop for Natural Language Processing Open Source Software (NLP-OSS 2023), pages 110–119, Singapore. Association for Computational Linguistics. Matt Post and Marcin Junczys-Dowmunt. 2023. Escaping the sentence-level paradigm in machine translation. arXiv preprint arXiv:2304.12959. Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence level training with recurrent neural networks. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings. Ricardo Rei, José G. C. de Souza, Duarte Alves, Chrysoula Zerva, Ana C Farinha, Taisiya Glushkova, Alon Lavie, Luisa Coheur, and André F. T. Martins. 2022. COMET-22: Unbabel-IST 2022 submission for the metrics shared task. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 578–585, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. 
In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86–96, Berlin, Germany. Association for Computational Linguistics. Zewei Sun, Mingxuan Wang, Hao Zhou, Chengqi Zhao, Shujian Huang, Jiajun Chen, and Lei Li. 2022. Rethinking document-level neural machine translation. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3537–3548, Dublin, Ireland. Association for Computational Linguistics. Jörg Tiedemann and Yves Scherrer. 2017. Neural machine translation with extended context. In Proceedings of the Third Workshop on Discourse in Machine Translation, pages 82–92, Copenhagen, Denmark. Association for Computational Linguistics. Zhaopeng Tu, Yang Liu, Shuming Shi, and Tong Zhang. 2018. Learning to remember translation history with a continuous cache. Transactions of the Association for Computational Linguistics, 6:407–420. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc. Elena Voita, Rico Sennrich, and Ivan Titov. 2019. When a good translation is wrong in context: Context-aware machine translation improves on deixis, ellipsis, and lexical cohesion. In Proceedings of the 57th Annual Meeting of the Association for Computational 13195 Linguistics, pages 1198–1212, Florence, Italy. Association for Computational Linguistics. Elena Voita, Pavel Serdyukov, Rico Sennrich, and Ivan Titov. 2018. Context-aware neural machine translation learns anaphora resolution. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1264–1274, Melbourne, Australia. Association for Computational Linguistics. Longyue Wang, Chenyang Lyu, Tianbo Ji, Zhirui Zhang, Dian Yu, Shuming Shi, and Zhaopeng Tu. 2023. Document-level machine translation with large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 16646–16661, Singapore. Association for Computational Linguistics. Biao Zhang, Barry Haddow, and Alexandra Birch. 2023. Prompting large language model for machine translation: a case study. In Proceedings of the 40th International Conference on Machine Learning, ICML’23. JMLR.org. Jiacheng Zhang, Huanbo Luan, Maosong Sun, Feifei Zhai, Jingfang Xu, Min Zhang, and Yang Liu. 2018. Improving the transformer translation model with document-level context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 533–542, Brussels, Belgium. Association for Computational Linguistics. A Samples from Released Datasets In this appendix, we present some examples from the datasets to demonstrate the format and content of the extracted data. The contexts have been truncated here due to space constraints, but they still illustrate the usefulness of the context to the translation of the sentence. Note that while these examples have been hand-picked and generally show useful context, we still see some noise in the contexts like text fragments in the wrong language or the presence of many short uninformative lines. eng→fra with target context Source sentence: Its entwined cobbled pathways and network of tunnels underground are interesting to look round. Target context: EXCURSIONS À PARTIR DE PRAGUE <docline> | Croisière sur la rivière Vltava avec dîner | Shuttle d’aéroport | <docline> . 
.. .. .<docline> Il s’agit d’une mine d’or exposée en Bohème, avec des châteaux magiques et des petites villes éparpillées dans la campagne, et touchée par les forêts denses de la chaîne de montagne Šumava le long de la frontière avec l’Autriche. La petite ville de Tábor est charmante, et elle a été habitée par les taborites au quinzième siècle. Target sentence: Ses chemins pavés entrelacés et son réseau de tunnels souterrains sont intéressants à observer. pol→eng with target context In this example, we see some noise in the form of Polish text fragments appearing in the English context. Source sentence: Stypendium mo˙zna przeznaczy´c na dowolny cel. Target context: START 2020 recruitment launched - Fundacja na rzecz Nauki PolskiejFundacja na rzecz Nauki Polskiej <docline> ......<docline> The principal criteria in the competition are the quality and originality of the candidate’s scientific accomplishments to date, as well as his or her single most important research achievement. In recent years the amount of the one-year stipend has been PLN 28,000. Target sentence: The stipend may be used by the laureate for any purpose. eng→deu with source context Source sentence: The evening will bring clouds with rain or sleet. Source context: °C <docline> 2 °C <docline> 2 °C <docline> 2 °C <docline> 2 °C <docline> 1 °C <docline> Air pressure <docline> 1014 hPa <docline> ......<docline> Tomorrow <docline> In the early morning it will be mainly cloudy, but mostly dry. Before noon clouds with rain or sleet will dominate. Target sentence: Die Mittagszeit bringt wechselhaftes Wetter mit ab und zu etwas Regen. eng→fra with both contexts This example contains broken sentence fragments, but is a good example of the context being required to disambiguate a pronoun in the target French – the translation of “it” to “la” requires context about the grammatical gender of the antecedent. Source sentence: Previously, it appeared only below 1,600 metres. Source context: aside to give to my sister.” <docline> The family used coffee revenues to pay his sister’s schooling, and his brother’s. Mr. Zikusoka followed in his father’s footsteps so that 13196 he too could provide for his family. When he got married in 2005, he received a half-hectare of farmland and decided to grow coffee. <docline> ......<docline> He notes that coffee leaf rust disease, which usually affects coffee at altitudes lower than 1,400 metres, has now surfaced at 1,800 metres above sea level. <docline> Coffee berry disease has also shifted to higher altitudes, and is attacking crops at 1,800 metres above sea level. Target context: sa production de café n’a cessé de baisser en raison de maladies et d’insectes nuisibles. Une maladie fongique appelée flétrissement du café a envahi son exploitation, et les perceurs de la tige de café ont attaqué ses caféiers. De nombreux autres agriculteurs ont subi la crise du flétrissement dans le district de Mukono, l’une des principales régions productrices de café en Ouganda. <docline> ......<docline> Il note que la rouille orangée du caféier qui touchait généralement le café cultivé à des altitudes inférieures à 1 400 mètres est maintenant apparue dans les localités situées à 1 800 mètres au-dessus de la mer. <docline> La maladie du fruit du caféier s’est également déportée vers les altitudes plus hautes, et est en train d’attaquer les cultures situées à 1 800 mètres au-dessus du niveau de la mer. Target sentence: Autrefois, on ne la trouvait qu’aux endroits situés en dessous de 1 600 mètres. 13197
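The released corpora can also be inspected programmatically from the Hugging Face Hub. The sketch below uses the datasets library in streaming mode; the configuration name "en-fr" and the expected column layout are guesses — consult the dataset card at the URL above for the actual schema.

```python
# Sketch of loading one language pair of the released corpus with Hugging Face `datasets`.
# The configuration name and column layout are guesses; see the dataset card for the real schema.
from datasets import load_dataset

ds = load_dataset("Proyag/paracrawl_context", "en-fr", split="train", streaming=True)

for example in ds.take(3):
    # Expected fields: source/target sentence plus ' ||| '-separated document contexts.
    print(example)
```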
2024
712
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13198–13210 August 11-16, 2024 ©2024 Association for Computational Linguistics Bridging the Empirical-Theoretical Gap in Neural Network Formal Language Learning Using Minimum Description Length Nur Lan1,2, Emmanuel Chemla1,3, Roni Katzir2 1Ecole Normale Supérieure, 2Tel Aviv University, 3EHESS, PSL University, CNRS {nur.lan,emmanuel.chemla}@ens.psl.eu [email protected] Abstract Neural networks offer good approximation to many tasks but consistently fail to reach perfect generalization, even when theoretical work shows that such perfect solutions can be expressed by certain architectures. Using the task of formal language learning, we focus on one simple formal language and show that the theoretically correct solution is in fact not an optimum of commonly used objectives — even with regularization techniques that according to common wisdom should lead to simple weights and good generalization (L1, L2) or other meta-heuristics (early-stopping, dropout). On the other hand, replacing standard targets with the Minimum Description Length objective (MDL) results in the correct solution being an optimum. 1 Introduction Probing the capabilities of Artificial Neural Networks (ANNs) in the domain of language learning has advanced in two complementary paths – theoretical and empirical. Theoretical work tries to delineate the kinds of languages and phenomena that can be expressed by ANNs, and empirical work involves training networks on such tasks and inspecting their performance. These paths have still not converged: while theoretical work continues to provide findings regarding the expressivity of different architectures, empirical work keeps arriving at suboptimal solutions that fall short of the theoretically correct ones. For example, for formal languages such as anbn or Dyck-1, among many others, we are not aware of any network trained through gradient descent that was shown to perform well on strings that are orders of magnitudes longer than those seen during training, while failures to report generalization beyond low lengths are pervasive (Joulin and Mikolov, 2015, Weiss et al., 2018, Suzgun et al., 2019, Bhattamishra et al., 2020, El-Naggar et al., 2022, among others; see Lan et al., 2023 for an overview). This stands in contrast to symbolic models, where the requirement for solution correctness across lengths is trivially met. Why this would be the case is often either left unexplained or waved off as a shortcoming of the optimization method (most often, gradient descent using backpropagation). In this work we argue that these failures are not due to training misfortunes that could be overcome, for example, by using a more exhaustive hyper-parameter search, more training steps, or more training data. Rather, they are due to inherent characteristics of the training objectives currently used for such tasks. Our main contributions are: 1. We present a manually built, optimal Long Short-Term Memory network (LSTM; Hochreiter and Schmidhuber, 1997) that accepts the formal language anbn, following a general recipe given in Weiss et al. (2018). We show that this network would not be found using standard training objectives, since it does not lie at optimum points of these objectives – even when using regularization terms which according to common wisdom should result in general solutions. 2. 
We show that by replacing these objectives and regularization terms with an objective to minimize the network’s Minimum Description Length (MDL, Rissanen, 1978), accompanied by an intuitive encoding scheme, the optimal network becomes an optimum of the objective. The full experimental materials and source code are available at https://github.com/0xnurl/ mdl-lstm . 2 Previous work We rely mainly on three recent works, which we extend in the following ways. First, the current 13198 (a) L1 (b) L2 (c) |H| Figure 1: Loss surfaces around the golden anbn LSTM from Section 4, for the regularization terms considered here: L1, L2, and |H| – the hypothesis encoding length term of the MDL objective. |H| is jagged and non-differentiable but results in the correct net being an optimum of the full loss function (Figure 4). work is similar to El-Naggar et al. (2023), who inspected the role of the objective function in formal language learning. They showed that for a simple recurrent neural network (RNN), which uses a single ReLU layer, the optimal counting solution does not align with optima of common loss functions (cross-entropy and mean squared error). This was done by providing necessary and sufficient conditions for implementing counting in a ReLU-RNN. We extend this work in the following ways. First, we move to the more commonly used LSTM RNN. Since this architecture is more complex, it is also harder to find such sufficient and necessary conditions for counting as done by El-Naggar et al. (2023). This leaves our results mostly empirical, compared to their analytical result. However, here we go beyond that work by also providing an alternative objective function (MDL), for which the optimal network becomes an optimum. Second, in order to locate such an optimum of the objective, we build an optimal LSTM that accepts a specific formal language. For this we rely on Weiss et al. (2018), who showed that an LSTM can theoretically implement counting using specific weight configurations, so that the state vector holds a counter that can be incremented and decremented based on the input. Here we implement their general recipe to build an optimum LSTM that accepts the language anbn. We focus on one language for simplicity, and the method can be easily extended to more languages. Third and closest to the current work, Lan et al. (2022) applied the MDL principle to RNNs for formal language learning. The resulting networks were shown to be correct for any string for languages such as anbn, anbmcn+m, and Dyck-1. Since this objective resulted in a non-differentiable loss function, Lan et al. (2022) used neuroevolution to search the hypothesis space, evolving free-form RNN cells. Since our focus in this work is the objective, here we leave the search algorithm aside and use a single fixed architecture for which a theoretical target is known to exist (LSTM). We then inspect the effect of the objective function on potential weight solutions. More broadly, empirical work using RNNs for artificial grammar learning have been carried out at least since the introduction of Simple RNNs in Elman (1990). Theoretical work regarding RNNs’ theoretical computational power go back at least to Siegelmann and Sontag (1992), who showed that RNNs are Turing-complete under certain permissive assumptions (unbounded activation precision and running time). 
The empirical success of ANNs in the practical field of natural language processing (NLP) has led to recent interest in the theoretical power of RNNs under practical conditions, mainly real-time processing and finite precision (Weiss et al., 2018, Merrill et al., 2020). Other recent work has applied similar methods to the transformer architecture (see survey in Strobl et al., 2023). In terms of empirical results, works since Elman (1990) trained ANNs to recognize formal languages and most often tested for generalization using unseen string lengths and depths (Gers and Schmidhuber, 2001, Joulin and Mikolov, 2015, among many others). Lan et al. (2023) provide an overview of such works; they show that common to these works is a failure to generalize beyond a certain tested length. Lan et al. (2023) also provide a standardized benchmark for formal language learning, and find that RNNs trained to optimize standard losses fail to generalize well from reasonably small amounts of data, while an RNN variant trained to minimize MDL (Lan et al., 2022) is able to generalize significantly better. 13199 Applying the MDL criterion to ANNs also dates back to at least the early 1990’s. (See Schmidhuber, 1997 for an overview of early attempts in this area, and Lan et al., 2022 for a review of more recent work.) Hinton and Van Camp (1993) minimized the encoding length of the weights alongside the prediction error, while leaving the architecture fixed. Hochreiter and Schmidhuber (1994) provided an algorithm that searches for networks that lie at ‘flat minima’ – regions of parameter space where error remains relatively similar; this preference is given an MDL justification. Zhang and Muhlenbein (1993) used a genetic algorithm to search for network architectures that minimize an MDL score, using a weight encoding similar to L2 regularization. Schmidhuber (1997) presented an algorithm for discovering networks that optimize a runningtime based complexity metric that is closely related to MDL (Levin complexity). More recently, Louizos et al. (2017) used a Bayesian method with an MDL justification for pruning and quantizing network units and weights. Louizos et al. (2018) used a differentiable approximation of the L0 norm to encourage weight sparsity. As far as we can tell, these methods are still prone to the risk of overfitting through highly informative yet small-valued weights, explained in Section 3.2 below. 3 General setup We describe here the technical background leading to the experiments in Section 4. 3.1 Minimum Description Length Striking a balance between model complexity and its fit to the data is important in order to avoid both overfitting and underfitting. It is generally assumed that minimizing model complexity is good (Occam’s razor). This general principle was formalized within Kolmogorov Complexity (Solomonoff, 1964, Chaitin, 1966, Kolmogorov, 1968), defined as the length of the shortest program that generates specific data. Kolmogorov Complexity however is non-computable, a result of the target representation being Turing-complete. The Minimum Description Length principle (MDL; Rissanen, 1978) makes it possible to escape the non-computability by relaxing the requirement of a Turing-complete representation and moving to a less powerful formalism (e.g., a context-free grammar). Formally, consider a hypothesis space H and input data D. 
The MDL principle aims to find a hypothesis H∗that minimizes the sum: H∗= arg min H∈H |H|C + |D : H| (1) where |H|C is the length of H encoded using an encoding scheme C for encoding hypotheses in H. Encoding length is usually measured in bits. |D : H| is the encoding length of D given H. Minimizing |H|C alone would result in a degenerate, over-general model that does not fit the data well. Conversely, minimizing |D : H| alone would result in overfitting. Minimizing both terms together results in a reasonable compromise between generalization and accuracy. 3.2 Encoding a network In this work, hypotheses in H are taken to be LSTM networks with one linear output layer, followed by a softmax function. Here we describe an encoding scheme for such networks which makes it possible to measure their encoding length |H|. We first note that a reasonable encoding scheme for networks should follow an intuitive notion of simplicity in order to penalize overfitting (i.e., lead to larger encoding length). Equating scalar magnitude with simplicity is not enough, since it is still possible to ‘smuggle’ large amounts of information inside very small scalar values. One extreme example is using a fractal encoding in the spirit of Siegelmann and Sontag (1992) or Tabor (2000) which encodes a stack inside a small rational number.1,2 However, less sophisticated overfitting is also conceivable using highly specific weights, for example if a model assigns specific probabilities due to sampling artefacts in the training set. A reasonable objective should make such memorization worthwhile only if the data justify it, e.g., if it contains many repetitions of the same pattern. Regularization terms such as L1/L2 are not good enough then, since they would deem, for example, a simple value such as 1 worse than a smaller yet highly informative value (e.g., Pn 1 2wi+1 4i < 1, the fractal encoding of a binary stack w, from Siegelmann and Sontag, 1992). 1These works use activation values, not weights, to store such values. However, such a construction still illustrates the difference between information content and scalar magnitude. 2Admittedly and as discussed also in Weiss et al. (2018), standard gradient-based methods would most probably not reach such highly specific weight configurations. This does not mean however that such solutions do not exist in the search space and that better search algorithms would not find them. 13200 For MDL, the encoding scheme C explained in Section 3.1 needs to be chosen so that it fulfills the simplicity requirement. We opt for the following encoding scheme, used in Lan et al. (2022). A weight wij is represented as a rational fraction n m. The numerator and denominator are encoded using the prefix-free encoding for integers from Li and Vitányi (2008): E(n) = 11111 . . . 1111 | {z } Unary enc. of ⌈log2n⌉ 0 |{z} Separator 10101 . . . 00110 | {z } Binary enc. of n Both encodings are then concatenated, with an extra bit for the sign. For example, the weight wij = + 2 5 would be encoded as: 1 |{z} + E(2) = 11010 | {z } 2 E(5) = 1110101 | {z } 5 | {z } wij This encoding fulfills the requirement above: the encoding of very specific or informative weights would be considerably longer than that of intuitively simpler values such as 1. In the current setup, the LSTM architectures vary only by the size of the hidden vector and the values of the weights. 
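To make the encoding scheme concrete, here is a small Python sketch of the integer and weight codes described above. It is not the authors' implementation: the use of '1' for a positive sign (as in the worked example), '0' for a negative one, and the helper that snaps a float to a nearby rational with denominator at most 1,000 before encoding it are our own reading of the text. The unary prefix encodes the bit length of n, which reproduces the worked examples E(2) = 11010 and E(5) = 1110101.

```python
from fractions import Fraction

def encode_int(n: int) -> str:
    """Prefix-free code for a positive integer: unary bit-length, a '0' separator, then binary."""
    binary = format(n, "b")
    return "1" * len(binary) + "0" + binary

def encode_weight(w: Fraction) -> str:
    """One sign bit ('1' for +, assumed '0' for -) followed by E(numerator) and E(denominator)."""
    sign = "1" if w >= 0 else "0"
    return sign + encode_int(abs(w.numerator)) + encode_int(w.denominator)

def weight_cost_bits(w: float, max_denominator: int = 1000) -> int:
    """Contribution of one float weight to |H|, after snapping it to a nearby rational."""
    return len(encode_weight(Fraction(w).limit_denominator(max_denominator)))

assert encode_int(2) == "11010" and encode_int(5) == "1110101"
assert encode_weight(Fraction(2, 5)) == "1" + encode_int(2) + encode_int(5)
print(weight_cost_bits(1.0), weight_cost_bits(0.4), weight_cost_bits(0.123456))
```

Under this scheme a plain value such as 1 costs only a handful of bits, while a highly specific float costs far more, which is exactly the notion of simplicity the regularization term is meant to capture.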
In order to reliably encode a specific network one needs to encode only the weights of the LSTM cell and output layer, and prepend the size of the hidden vector. The encoding of a specific network would then be: 11011 | {z } E(hidden size) 11 . . . 01 | {z } ··· 10 . . . 01 | {z } wij 11 . . . 10 | {z } ··· | {z } Weight encoding | {z } LSTM encoding To calculate |H| for networks trained through backpropagation with floating-point weights, in sections below floats are converted to the closest rational with denominator m ≤1000.3 3.3 Language modeling We use the formal language anbn as a test case throughout this work, and probe different networks’ performance on recognizing it. Strings are drawn from the following probabilistic context-free grammar (PCFG): S → ( aSb 1 −p ε p (2) 3We use the CPython implementation in Fraction.limit_denominator() which approximates the closest rational with denominator ≤1000. with p = 0.3 for all tasks. We use a standard language modeling setup in which the network is fed one symbol at a time, and outputs a probability distribution over the alphabet, predicting the next symbol in the string. Following Gers and Schmidhuber (2001), each string starts and ends with a special symbol. The training set is sampled by generating strings from (2). The validation set consists of all consecutive strings starting right after the last n in the training set. The validation loss is weighted per-sample so that it follows the same power law distribution induced by (2). The train-validation split in all experiments is 95%-5%. In the following sections the training size is 1,000, i.e., a 950-50 split. The maximum n in this training set was 21, so the validation set contained all strings with 22 ≤n ≤71. The test set in all experiments consisted of all anbn strings with 1 ≤n ≤1,500. The network is fed one symbol at a time, and at each step outputs a probability distribution ˆp over the alphabet for predicting the next symbol in the string. The baseline loss function we use is the standard cross-entropy loss (CE): CE(p, ˆp) = − n X i=0 p(ci)log(ˆp(ci)) (3) where n is the length of a sequence, ci is the target symbol at time step i, p(ci) is the target probability at time step i for this symbol, and ˆp(ci) is the probability assigned by the network to this symbol at this time step. In a language modeling setting the target p(ci) is set to 1, resulting in: CE(p, ˆp) = − n X i=0 log(ˆp(ci)) (4) This sum is then averaged over all time steps for all sequences. To measure accuracy on the task, we use deterministic accuracy (Joulin and Mikolov, 2015, Lan et al., 2023), defined as the ratio of correct answers at parts of the string that are completely predictable (a correct answer being the network assigning the maximum probability to the correct next symbol). For anbn strings, this means measuring accuracy at the phase that starts once the first ‘b’ appears, including the end-of-sequence symbol. Measuring accuracy at the end-of-sequence symbol turns the task into a strict acceptance task and can distinguish a good network that correctly balances the number of a’s and b’s, from a degenerate network 13201 that, e.g, gets a high deterministic accuracy score simply by only predicting b’s. 3.4 Loss surface exploration Our goal is to test which objectives could lead to optimal solutions. Exhaustive search of the parameter space is infeasible. However, we can explore only parts of the loss space and check if an objective function turns out to favor suboptimal solutions over an optimal one. 
This would be an indicator that this objective is not suitable for the task (and would lead to reliance on meta-heuristics such as early stopping). We do this by exploring the loss surfaces around an optimal network that solves the task perfectly and around a backpropagationtrained network. For a given network with parameters θ, and for a loss function L, the network’s loss is L(θ) (for some input x, in the current case a one-hot vector representing the current symbol, omitted here). For the 2D visualization we use below, the area around a specific network’s θ can then be explored by using two direction vectors δ and η, and plotting: f(α, β) = L(θ + αδ + βη) (5) We use the exploration technique by Li et al. (2018): δ and η are randomized from a Gaussian; then, specific parts of each direction vector are normalized so that they have the norm of the respective parts in the original θ. For fully-connected layers like the ones used in LSTMs, normalization is done for each set of weights leading to a specific neuron. This normalization technique preserves the relative scale of different weight components of a network, and was shown to better reflect properties like convexity when exploring a network’s surrounding space. In all plots below we use 51 equally spaced values of α, β ∈[−1, 1]. Exploration using larger ranges did not affect the results either visually or quantitatively. 3.5 Objectives The objective functions for all tasks below share the following structure: L(θ) = CE + λReg(θ), Here, CE is the training cross-entropy loss (4) using the distribution ˆp outputted by the network. For the MDL objective in (1), CE serves as |D : H|. This can be justified in encoding-length terms since (4) gives the expected length in bits for transmitting the string using Shannon-Fano encoding. In the second term, Reg is either L1(θ) = P wij∈θ |wij|, L2(θ) = P wij∈θ wij2, or no regularization. For the MDL objective, Reg(θ) is |H| – the encoding length of a network encoded using the method in Section 3.2. λ is a coefficient used to calibrate the level of regularization during training, and is usually chosen empirically. The common wisdom motivating the regularization term in all cases is to prevent models from overfitting. In the L1/L2 regularization framework, this is done by preventing large weights. Using L1 also leads to a preference for zero-value weights, thus potentially removing connections altogether; this can be thought of as a differentiable way to perform architecture search. However, as suggested in Section 3.2, neither term is a good proxy for the |H| term, since small weights can in fact be very informative (i.e., very complex). Figure 1 plots the three regularization terms considered here, surrounding the optimal network presented in Section 4. It can be seen that while the loss surfaces for L1/L2 are smooth, moving to MDL would result in a highly irregular surface, hostile to gradient methods. 4 Optimal vs. trained anbn LSTM Here we compare an optimal, manuallyconstructed LSTM that recognizes the language anbn perfectly, with an LSTM trained through backpropagation on the same task. We name the optimal network ‘golden’ to avoid confusion with general optimum points. The golden network is optimal in the sense that it always outputs the correct probabilities at each step of an anbn string drawn from (2), for any value of n. The optimal probabilities are presented in Figure 2a. 
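As a minimal illustration of the exploration procedure of Section 3.4, the NumPy sketch below evaluates an objective on a 51 x 51 grid around a parameter vector, along two random directions normalized group-by-group in the spirit of Li et al. (2018). It is not the authors' code: loss_fn stands for the full objective of Section 3.5 (e.g., cross-entropy plus λ times a regularization term) and groups for the index sets of weights leading to each neuron; both are assumed to be supplied by the caller.

```python
import numpy as np

def filter_normalize(direction, theta, groups):
    """Rescale each weight group of `direction` to the norm of the same group in `theta`."""
    d = direction.copy()
    for idx in groups:  # idx: indices of the weights leading to one neuron
        d_norm = np.linalg.norm(d[idx])
        if d_norm > 0:
            d[idx] *= np.linalg.norm(theta[idx]) / d_norm
    return d

def loss_surface(loss_fn, theta, groups, steps=51, radius=1.0, seed=0):
    """Evaluate f(alpha, beta) = loss_fn(theta + alpha*delta + beta*eta) on a regular grid."""
    rng = np.random.default_rng(seed)
    delta = filter_normalize(rng.standard_normal(theta.shape), theta, groups)
    eta = filter_normalize(rng.standard_normal(theta.shape), theta, groups)
    alphas = betas = np.linspace(-radius, radius, steps)
    surface = np.empty((steps, steps))
    for i, a in enumerate(alphas):
        for j, b in enumerate(betas):
            surface[i, j] = loss_fn(theta + a * delta + b * eta)
    return alphas, betas, surface

# Toy usage: a 4-parameter "network" with two 2-weight neurons and an L2-regularized quadratic loss.
theta = np.array([1.0, -2.0, 0.5, 3.0])
groups = [np.array([0, 1]), np.array([2, 3])]
loss_fn = lambda t: np.sum((t - theta) ** 2) + 0.1 * np.sum(t ** 2)
_, _, surface = loss_surface(loss_fn, theta, groups)
print(surface.shape, surface.min())
```

The minimum over the grid can then be compared with the loss at theta itself, which is how the plots discussed below flag whether the network under investigation coincides with the local optimum.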
The goal of the experiment is to test whether a perfect solution can be found when using the different objectives considered here. Exhausting the entire parameter space is infeasible, even for networks with the very small hidden size (3) used here. However, if the golden network turns out to not be an optimum of certain objectives, i.e., a worse-performing network will be deemed better by an objective, we can conclude that this objective would not lead to this specific network. 13202 (a) Golden network (b) Best trained network Figure 2: Output probabilities assigned by the golden network and the best trained network for n = 73, the first point of failure of the trained network. Going left to right, each column represents the probability distribution outputted by the network at each time step. 4.1 Golden anbn network The golden network is implemented based on a general recipe given in Weiss et al. (2018), who showed that an LSTM can theoretically implement counting using specific configurations of the gate weights, so that the state vector holds a counter that can be incremented and decremented based on the input. This makes it possible for LSTMs to recognize a family of formal languages called Counter Languages. Roughly, this family corresponds to languages which can be recognized using a counting mechanism in real-time (Merrill, 2020). This includes anbn. In an empirical experiment, however, Weiss et al. (2018) trained LSTMs on recognizing the language and found that the networks did not converge on the fully general counting solution, rather it converged on a suboptimal solution that failed to recognize anbn strings starting at n as low as 256. We describe here in general terms the mechanics of the golden network. The full construction is given in Appendix B. The weights of the LSTM cell are set so that the network keeps track of the number of a’s compared to the number of b’s seen at each time step. Figure 3a plots the values of the (a) Golden network (b) Best trained network Figure 3: Memory values for the golden network and the best trained network for n = 73, the first point of failure of the trained network. Each line corresponds to one component of c, the memory vector in the LSTM cell, as the string is fed to the network. memory vector of the LSTM during feeding of an anbn string, illustrating its counting mechanism. On top of the LSTM cell we add a linear layer that receives the hidden state as input, and outputs the correct target probabilities through a final softmax. The manual network reaches 100% test accuracy on the test set which contains all anbn strings with 1 ≤n ≤1500, and in fact can be shown to be correct for any n. Note that the network is optimal with respect to its performance, and that other perfect networks with better MDL scores could exist in the search space. We return to this point in the Limitations section. 4.2 Backpropagation-trained anbn LSTM We compare the golden network with networks trained on the same task using standard techniques. We run a hyper-parameter grid search to train LSTMs that have the same architecture as the manual network: hidden size 3 and a single linear output layer, followed by a softmax. The grid covers the following hyper-parameters: training set size, weight initialization method, regularization term, dropout rate, and early stopping patience based on validation loss, across five different random 13203 seeds. The grid yields 3,360 configurations. The full hyper-parameter grid is given in Appendix A. 
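For concreteness, the grid in Appendix A expands as sketched below. This is illustrative only; the one modeling choice baked in is that a λ value is attached only when a regularization term is actually used ("regularization lambda when relevant"), which is what yields the 3,360 count.

```python
from itertools import product

training_sizes = [500, 1000, 5000, 10000]
seeds = [100, 200, 300, 400, 500]
# No lambda is attached to the unregularized runs.
regularizers = [None] + [(reg, lam) for reg in ("L1", "L2") for lam in (0.1, 0.5, 1.0)]
dropout_rates = [None, 0.2, 0.4, 0.6]
early_stopping_patience = [None, 2, 10]
weight_inits = ["uniform", "normal"]

grid = list(product(training_sizes, seeds, regularizers, dropout_rates,
                    early_stopping_patience, weight_inits))
print(len(grid))  # 4 * 5 * 7 * 4 * 3 * 2 = 3360
```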
The overfitting prevention techniques explored here belong to two groups: techniques where a regularization term is added directly to the loss function (L1, L2), and meta-heuristics external to the objective (dropout, early-stopping based on validation loss), aimed at preventing the loss from getting too low. While our focus here is the objective function, we still include dropout and early stopping in order to compare the golden network with a network trained in a practical setting with the best chances to succeed. In Table 1 in the appendix we provide the same comparison for the best network that was trained without early stopping or dropout. We select the best network out of all runs based on validation loss. The best network reached 77.3% deterministic test accuracy (on 1 ≤n ≤1,500) and was trained with the following parameters: training size 1000 (950-50 train-validation split); no regularization term; early stopping patience of 2 epochs; no dropout; normal weight initialization. 4.3 Network behavior We start by comparing the behavior of the golden and trained networks. The best trained network correctly accepts all strings with n ≤72 (the largest n in the training set was 21). We compare the networks using the first point of failure of the trained network, n = 73. Figure 2 plots the output probabilities assigned by the two networks throughout the sequence, and Figure 3 plots their memory values (c in the LSTM cell). We first examine the network outputs in Figure 2. At first blush, the trained network seems successful, following the language distribution induced by (2) and visualized in Figure 2a almost perfectly. However, the network is imperfect in two ways: first, probabilities at the beginning of the string are incorrect, most probably due to overfitting of more frequent low-n values in the training set. Additionally and more crucially, the network’s count seems to leak, with probability mass for the end-ofsequence symbol assigned to the before-last time step. This becomes a problem for n ≥73, when the network starts accepting illicit anbn−1 strings. As for the inner workings of the network, visualizing the network’s memory in Figure 3b shows that the network has indeed developed some counting mechanism in at least one component of the memory vector (c0), which seems to be imperfect as it does not correctly count up to 73, and goes slightly below 0 towards the end of the string. (It is unclear however how the network uses the other two components, which could potentially complement this counter). 4.4 Loss exploration Is the suboptimal performance of the trained network above simply a misfortune of the current setup? We explore the possibility that the culprit might be the objective function. We do this by comparing the loss values of the golden network with the trained network’s, using standard objectives and the MDL objective. Beyond measuring the loss value of the two networks considered here, we also explore their surrounding loss landscape in order to check for alternative local minima and inspect properties like convexity and smoothness of the loss. This is done using the technique described in Section 3.4. Figures 4 and 5 plot the different loss surfaces around the networks. On each plot we mark the minimum point in the neighborhood, to check if it aligns with the network under investigation. If it does not, using that loss (either for fine-tuning the network or training from scratch) would potentially end up at that other minimum. 
For each relevant point we use the parameter vector to build the corresponding LSTM and calculate its test accuracy. We start by exploring the loss surface around the golden network.4 Figures 4a and 4b show that if L1 or L2 regularization were used, the golden network would not have been found – rather, using these regularization terms would lead the search to suboptimal networks that have better training loss values, but also worse test accuracy. For the MDL objective function, visualized in Figure 4c, the minimum in the plotted area aligns with the golden network, showing that at least in this neighborhood, searching with MDL as an objective would lead to the correct solution. In Section 6 below we discuss potential limitations to these findings. Figure 5 plots the different loss surfaces around the best trained network. We plot in 3D for com4We omit plotting the standalone cross-entropy loss because it is trivial to show that minimizing this loss alone will lead to overfitting (partially explaining the fact that the best performing network ends up using early stopping). Table 1 in the appendix demonstrates this using the next-best grid-search network that was trained without a regularization term or early stopping, whose cross-entropy loss goes below that of the golden network’s. 13204 (a) CEtrain + L1 (b) CEtrain + L2 (c) CEtrain + |H| (MDL) Figure 4: Training loss around the golden anbn LSTM, and test accuracy scores for the golden network and the local minimum network. Optimizing using L1 or L2 (4a, 4b) would result in suboptimal networks, while MDL (4c) results in alignment of the golden network with an optimum point of the loss. (a) CEtrain + L1 (b) CEtrain + L2 (c) CEtrain + |H| (MDL) Figure 5: Training loss surfaces and the test accuracy of the best anbn LSTM found through a hyper-param grid search, trained using backpropagation with the standard cross-enropy loss and early stopping based on validation loss. When evaluated using L1 or L2 (5a, 5b) the network ranks better than the golden network but has worse test performance. When evaluated using MDL (5c) the trained network does not minimize the loss as well as the golden network, ending up in a smooth but suboptimal area of the loss space. parison with the relevant value for the golden network, which lies in a different area of the loss space. Here, for all objectives, the winning network lies in a smooth and convex area. When evaluated using L1 and L2 regularization, the golden network ranks worse by the relevant losses. For the MDL objective the image is reversed: the trained network ranks worse, while the MDL score of the golden network remains unreachable below (Figure 5c). Since the two networks’ crossentropy terms are almost identical (see Table 1), this inversion is mainly due to to the |H| term, which suggests that the trained network uses overinformative weights. Results for more λ values for all networks are given in Table 1. 5 Discussion We presented a comparison between common overfitting-prevention techniques, among them some that equate simplicity with scalar magnitude, and the MDL objective accompanied with an encoding scheme for weights which favors an intuitive notion of simplicity. Combined with a manually built LSTM that optimally recognizes anbn, we could measure the loss values of an optimal solution and check if they align with optimum points of the loss function. It was only when we used MDL that the optimal network aligned with the minimum of the loss. 
For the other loss functions, networks lying at minimum points had far from optimal performance. Using meta-heuristics such as early stopping mitigated overfitting to some extent, but still did not lead to a fully general solution. We interpret these findings as an indicator that ANNs’ failure to converge on provably existing, optimal solutions is not accidental, but rather an inherent and pathological property of the way that current models are trained. This is in line with a mounting list of generalization failures to learn 13205 formal languages, as well as more complicated natural language tasks. We focused here on RNNs, mainly because they lend themselves easily to manual construction and inspection. However, we see no a priori reason why our results would not extend to other architectures such as transformers or convolutional networks, given the generality of the MDL principle, and the fact that it has been shown to be beneficial across various domains and learning tasks, including linguistic phenomena (see Stolcke, 1994, Grünwald, 1996, de Marcken, 1996, and Rasin et al., 2021, among others). 6 Limitations In Section 4.4 we explored the loss surface around the golden network using different objectives, and found that using L1/L2 regularization leads to suboptimal networks lying at optimum points, while using the MDL objective leads to the golden network lying at an optimum point. However, since the loss exploration is not (and cannot be) exhaustive, caution is needed when making generalizations based on these results. First, when using L1/L2, it is still possible of course that better optima lie somewhere else in the loss spaces, and that the respective minimizing networks have perfect performance. Given the discussion in Section 3.1 about scalar magnitude vs. simplicity we find this possibility unlikely but admittedly still possible. Conversely, when using the MDL objective, here it is conceivable that other networks would have better MDL scores and suboptimal performance. While this cannot be ruled out completely, we believe that using the MDL objective accompanied by a reasonable encoding scheme like the one used here makes over/under-fitting unlikely. This is arguably not the case for L1/L2. (Another possibility, that of a network with a better MDL score but equivalent perfect performance, is more likely, given that the golden network was manually designed and can potentially be optimized further.) Finally, a major practical limitation of the current work relates to the non-differentiability of the MDL objective. This is especially problematic for ANNs, given that current standard training methods rely almost exclusively on gradient descent. One could then consider L1/L2 as a differentiable proxy for a strict formalization of simplicity. However, the current work sheds light on the shortcomings of these compromises. This in turn could lead both to a more informed use of such proxies, and potentially to further research regarding better optimization techniques for MDL. 7 Acknowledgements This research was supported by ANR-17-EURE0017 and ISF 1083/23. This project was provided with computer and storage resources by GENCI at IDRIS thanks to the grant 2023-AD011013783R1 on the supercomputer Jean Zay’s V100 partition. References Satwik Bhattamishra, Kabir Ahuja, and Navin Goyal. 2020. On the Ability and Limitations of Transformers to Recognize Formal Languages. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7096–7116, Online. 
Association for Computational Linguistics. Gregory J. Chaitin. 1966. On the length of programs for computing finite binary sequences. Journal of the ACM, 13:547–569. Carl de Marcken. 1996. Unsupervised Language Acquisition. Ph.D. thesis, MIT, Cambridge, MA. Nadine El-Naggar, Pranava Madhyastha, and Tillman Weyde. 2022. Exploring the Long-Term Generalization of Counting Behavior in RNNs. Nadine El-Naggar, Andrew Ryzhikov, Laure Daviaud, Pranava Madhyastha, and Tillman Weyde. 2023. Formal and empirical studies of counting behaviour in ReLU RNNs. In Proceedings of 16th Edition of the International Conference on Grammatical Inference, volume 217 of Proceedings of Machine Learning Research, pages 199–222. PMLR. Jeffrey Elman, L. 1990. Finding structure in time. Cognitive science, 14(2):179–211. Felix Gers and Jürgen Schmidhuber. 2001. LSTM recurrent networks learn simple context-free and contextsensitive languages. IEEE Transactions on Neural Networks, 12(6):1333–1340. Peter Grünwald. 1996. A minimum description length approach to grammar inference. In Stefan Wermter, Ellen Riloff, and Gabriele Scheler, editors, Connectionist, Statistical and Symbolic Approaches to Learning for Natural Language Processing, Springer Lecture Notes in Artificial Intelligence, pages 203–216. Springer. Geoffrey E. Hinton and Drew Van Camp. 1993. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the Sixth Annual Conference on Computational Learning Theory, pages 5–13. 13206 Sepp Hochreiter and Jürgen Schmidhuber. 1994. Simplifying Neural Nets By Discovering Flat Minima. In Advances in Neural Information Processing Systems, volume 7. MIT Press. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735– 1780. Armand Joulin and Tomas Mikolov. 2015. Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets. In Advances in Neural Information Processing Systems, volume 28. Curran Associates, Inc. Diederik P Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980. Andrei Nikolaevic Kolmogorov. 1968. Three approaches to the quantitative definition of information. International Journal of Computer Mathematics, 2:157–168. Nur Lan, Emmanuel Chemla, and Roni Katzir. 2023. Benchmarking Neural Network Generalization for Grammar Induction. In Proceedings of the 2023 CLASP Conference on Learning with Small Data (LSD), pages 131–140, Gothenburg, Sweden. Association for Computational Linguistics. Nur Lan, Michal Geyer, Emmanuel Chemla, and Roni Katzir. 2022. Minimum Description Length Recurrent Neural Networks. Transactions of the Association for Computational Linguistics, 10:785–799. Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, and Tom Goldstein. 2018. Visualizing the loss landscape of neural nets. Advances in neural information processing systems, 31. Ming Li and Paul Vitányi. 2008. Chapter 1.4, Binary Strings. In An Introduction to Kolmogorov Complexity and Its Applications, Texts in Computer Science. Springer New York, New York, NY. Christos Louizos, Karen Ullrich, and Max Welling. 2017. Bayesian compression for deep learning. Advances in neural information processing systems, 30. Christos Louizos, Max Welling, and Diederik P. Kingma. 2018. Learning Sparse Neural Networks through $L_0$ Regularization. William Merrill. 2020. On the Linguistic Capacity of Real-Time Counter Automata. arXiv preprint arXiv:2004.06866. 
William Merrill, Gail Weiss, Yoav Goldberg, Roy Schwartz, Noah A. Smith, and Eran Yahav. 2020. A Formal Hierarchy of RNN Architectures. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 443–459, Online. Association for Computational Linguistics. Ezer Rasin, Iddo Berger, Nur Lan, Itamar Shefi, and Roni Katzir. 2021. Approaching explanatory adequacy in phonology using Minimum Description Length. Journal of Language Modelling, 9(1):17– 66. J. Rissanen. 1978. Modeling by shortest data description. Automatica, 14(5):465–471. Jürgen Schmidhuber. 1997. Discovering Neural Nets with Low Kolmogorov Complexity and High Generalization Capability. Neural Networks, 10(5):857– 873. Hava T. Siegelmann and Eduardo D. Sontag. 1992. On the computational power of neural nets. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, COLT ’92, pages 440–449, New York, NY, USA. Association for Computing Machinery. Ray J. Solomonoff. 1964. A formal theory of inductive inference, parts I and II. Information and Control, 7(1 & 2):1–22, 224–254. Andreas Stolcke. 1994. Bayesian Learning of Probabilistic Language Models. Ph.D. thesis, University of California at Berkeley, Berkeley, California. Lena Strobl, William Merrill, Gail Weiss, David Chiang, and Dana Angluin. 2023. Transformers as Recognizers of Formal Languages: A Survey on Expressivity. arXiv preprint arXiv:2311.00208. Mirac Suzgun, Yonatan Belinkov, Stuart Shieber, and Sebastian Gehrmann. 2019. LSTM Networks Can Perform Dynamic Counting. In Proceedings of the Workshop on Deep Learning and Formal Languages: Building Bridges, pages 44–54, Florence. Association for Computational Linguistics. Whitney Tabor. 2000. Fractal encoding of context-free grammars in connectionist networks. Expert Systems, 17(1):41–56. Gail Weiss, Yoav Goldberg, and Eran Yahav. 2018. On the Practical Computational Power of Finite Precision RNNs for Language Recognition. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 740–745. Byoung-Tak Zhang and Heinz Muhlenbein. 1993. Evolving optimal neural networks using genetic algorithms with Occam’s razor. Complex systems, 7(3):199–220. A Grid search hyper-params Training size: 500/1000/5000/10000. Random seed: 100/200/300/400/500. Regularization: none/L1/L2. Regularization lambda when relevant: 0.1/0.5/1.0. Dropout rate: none/0.2/0.4/0.6. Early 13207 Loss λ Golden net Best trained net Best trained, no early stopping Loss Test acc. % Loss Test acc. % Loss Test acc. % CE 3.58e-01 100.00 3.58e-01 77.33 3.57e-01 64.97 CE + L1 0.1 2.50e+02 100.00 1.67e+01 77.21 1.85e+01 86.73 0.5 1.24e+03 96.23 8.17e+01 77.21 8.94e+01 96.24 1.0 2.48e+03 0.00 1.63e+02 0.13 1.78e+02 96.24 CE + L2 0.1 3.72e+04 99.87 4.23e+01 77.21 5.80e+01 90.34 0.5 1.86e+05 99.87 2.10e+02 70.38 2.85e+02 91.81 1.0 3.72e+05 99.87 4.20e+02 70.38 5.69e+02 91.81 MDL 3.92e+03 100.00 5.88e+03 77.98 5.87e+03 69.68 Table 1: Minimum training loss values and best test deterministic accuracy scores in the space surrounding the following networks: the golden network, the best trained network which used early stopping, and the best trained network that was trained without early stopping or regularization terms. Winning values across networks are indicated for each row: best loss (blue) and best accuracy (red). Minimizing the loss and achieving perfect accuracy coincide only for the MDL objective. stop after no improvement for number of epochs: none/2/10. 
Weight initialization: uniform/normal. All simulations used the Adam optimizer (Kingma and Ba, 2014) with learning rate 0.001, β1 = 0.9, β2 = 0.999, and ran for 20,000 epochs unless stopped by early stopping. B Golden anbn LSTM construction This section spells out the construction of the optimal anbn network from Section 4. The network is designed to output the correct probability distribution for anbn strings induced by the PCFG in (2). The target probabilities are plotted in Figure 2a. The general idea is to implement a counting mechanism in the LSTM cell and then to pass this value through a linear layer and a softmax, which outputs the target probabilities. A full PyTorch implementation of the network is given at https://github.com/0xnurl/mdl-lstm . B.1 Representations and constants We use a standard LSTM cell represented by the following functions: it = σ(Wiixt + bii + Whiht−1 + bhi) (6) ft = σ(Wifxt + bif + Whfht−1 + bhf) (7) gt = tanh(Wigxt + big + Whght−1 + bhg) (8) ot = σ(Wioxt + bio + Whoht−1 + bho) (9) ct = ft ⊙ct−1 + it ⊙gt (10) ht = ot ⊙tanh(ct) (11) where σ is the sigmoid activation and ⊙is the element-wise product. In the following construction, all weights are set to 0 unless mentioned otherwise. The LSTM gates (sigmoids and tanh’s) need to be saturated in order to prevent leakage and keep the solution stable (Weiss et al., 2018). For this, a large enough input needs to be used. We select empirically: LARGE = 27 −1 which is the largest unsigned integer that fits in 7 bits (instead of, e.g., 27, in order to save bits when using the encoding scheme from Section 3.2). Network inputs and outputs are vectors of size 3, with the following class positions: [#, a, b], where # is the start/end-of-sequence symbol. Inputs are one-hot encoded, so that: xt = [1#, 1a, 1b] (12) The notation [· · · ]tanh is used as a shorthand for tanh([· · · ]). Column vectors are printed as row vectors and omitting the transpose for readability. B.2 Counting The network’s memory vector ct is of size 3. We describe the construction that leads to ct holding the following target values at each time step: ct = [1, 1, #a −#b] (13) where #a and #b represent the number of a’s and b’s seen so far. #a −#b thus counts the number 13208 of unmatched a’s and becomes 0 only when the a’s and b’s are balanced. The two constant 1’s will be used downstream. We first set: Wig = LARGE ·   1 0 0 1 0 0 0 1 −1   Then, from definitions (8) and (12) and because all other weights feeding gt are 0, we get: gt = tanh(Wigxt) = [1#, 1#, 1a −1b] (14) i.e., the last component of gt holds +1 when seeing ‘a‘ or −1 when seeing ‘b‘. 1’s are stored in the other components when the start-of-sequence symbol first appears. Then, the input and forget gates are saturated to make the addition between gt and ct−1 stable. Saturating the gates is done through their biases in order to save on encoding length: bii = bif = LARGE · [1, 1, 1] We write off the saturated gates from definition (10), and get the recurrent update of the memory vector: ct = ct−1 + gt (15) It is then simple to apply the recurrence and get the correct counting targets (13) for all time steps: (14) gives g0 = [1, 1, 0] for the first time step and gt>0 = [0, 0, 1a −1b] for all other steps. 
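The counting recurrence just described is easy to check numerically. The NumPy sketch below re-implements only Equations (6)-(15) with every unmentioned weight set to zero; it is not the released PyTorch construction, and the hidden vector and output layer of B.3-B.4 are omitted here.

```python
import numpy as np

LARGE = 2**7 - 1  # saturates the gates, as in B.1

W_ig = LARGE * np.array([[1.0, 0.0, 0.0],
                         [1.0, 0.0, 0.0],
                         [0.0, 1.0, -1.0]])
b_ii = b_if = LARGE * np.ones(3)  # saturate input and forget gates to ~1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

ONE_HOT = {"#": np.array([1.0, 0.0, 0.0]),
           "a": np.array([0.0, 1.0, 0.0]),
           "b": np.array([0.0, 0.0, 1.0])}

def count_memory(string):
    """Trace the memory vector c_t of the golden LSTM over a string drawn from {#, a, b}."""
    c = np.zeros(3)
    for symbol in string:
        x = ONE_HOT[symbol]
        i_t, f_t = sigmoid(b_ii), sigmoid(b_if)   # ~[1, 1, 1] each
        g_t = np.tanh(W_ig @ x)                   # [1,1,0] on '#', [0,0,+1] on 'a', [0,0,-1] on 'b'
        c = f_t * c + i_t * g_t                   # effectively c <- c + g_t, Eq. (15)
    return c

print(count_memory("#aabb"))   # ~[1, 1, 0]: the a's and b's balance
print(count_memory("#aaabb"))  # ~[1, 1, 1]: one unmatched 'a' remains
```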
B.3 Hidden vector The following construction leads to the hidden vector ht holding the following target values, representing the different phases of an anbn string: ht = [1#, 1a, 1#a>#b]tanh (16) We first construct ot as a mask vector to select the relevant part from ct in (13) based on the current phase of the string. The mask ot is constructed by setting: Wio = LARGE ·   2 0 0 0 2 0 0 0 2   bio = LARGE · [−1, −1, −1] Since all other weights in definition (9) are 0, we get: ot = σ(Wioxt + bio) Following the indexing of xt, this results in a one-hot mask based on the current input: ot = [1#, 1a, 1b] (Wio and bio are used instead of setting Wio to the identity because of the sigmoid in (9).) Combined with (13) we get: ht = ot ⊙tanh(ct) = [1#, 1a, 1#a>#b]tanh (17) This vector is (tanh-)one-hot in all cases except when #a = #b, in which case it zeros-out. B.4 Output layer The hidden vector ht is then multiplied by a linear layer Wout. We build the values of Wout backwards based on the optimal target probabilities. Each anbn string has four phases: the start-ofsequence step, the ‘a’ phase, the bm<n phase, and the final-‘b’ phase. Each row in the following matrix holds the optimal probabilities for the respective phase, based on PCFG (2): Targets =     p 1 −p 0 0 1 −p p 0 0 1 1 0 0     (18) Since the output layer feeds a final softmax, we build the logits backwards: Wlog = ln(Targets + ε) with ε preventing taking the log of 0. Here we use ε = (214 −1)−1. Since we have four states and only three components in the hidden vector, we superimpose the four states onto three. First, split Wlog into Wout′ which contains the first three states, and a bias bout which contains the fourth: Wout′ = Wlog1:3 bout = Wlog4 Then subtract to get: Wout′′ = (Wout′ −bout)T The transpose is taken so that multiplying by the one-hot ht copies columns. 13209 Finally, divide by tanh(1) because ht is tanhone-hot based on (17): Wout = Wout′′/tanh(1) As seen in (17), ht is one-hot during all phases except the last ‘b’, and thus copies the relevant probability distribution from (18). Adding the bias undoes the superimposition. ht is all-zero only when #a = #b, in which case Wout · ht is zero. In this case the probabilities for the fourth state (final-b), stored in bout, are outputted. 13210
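The output-layer recipe in B.4 can also be checked in a few lines. The sketch below is ours rather than the released implementation, and it assumes the hidden vector is exactly tanh-one-hot; this holds exactly for the first two phases and for a counter value of 1, and only approximately when more a's are unmatched, which is harmless because the corresponding target row is deterministic.

```python
import numpy as np

p = 0.3
eps = 1.0 / (2**14 - 1)

# Rows: start-of-sequence, 'a' phase, b^(m<n) phase, final-'b' phase; columns: [#, a, b].
targets = np.array([[p, 1 - p, 0.0],
                    [0.0, 1 - p, p],
                    [0.0, 0.0, 1.0],
                    [1.0, 0.0, 0.0]])

W_log = np.log(targets + eps)                    # logits whose softmax recovers each row
W_out = (W_log[:3] - W_log[3]).T / np.tanh(1.0)  # fold the fourth phase into the bias
b_out = W_log[3]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for k in range(3):                               # phases where h is tanh-one-hot
    h = np.tanh(1.0) * np.eye(3)[k]
    print(np.round(softmax(W_out @ h + b_out), 3))   # ~ targets[k]
print(np.round(softmax(b_out), 3))               # h = 0 when #a = #b: ~ targets[3]
```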
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13211–13235 August 11-16, 2024 ©2024 Association for Computational Linguistics
Context versus Prior Knowledge in Language Models
Kevin Du (ETH Zürich), Vésteinn Snæbjarnarson (University of Copenhagen), Niklas Stoehr (ETH Zürich), Jennifer C. White (University of Cambridge), Aaron Schein (The University of Chicago), Ryan Cotterell (ETH Zürich)
[email protected] [email protected] [email protected] [email protected] [email protected] [email protected]
Abstract
To answer a question, language models often need to integrate prior knowledge learned during pretraining and new information presented in context. We hypothesize that models perform this integration in a predictable way across different questions and contexts: models will rely more on prior knowledge for questions about entities (e.g., persons, places, etc.) that they are more familiar with due to higher exposure in the training corpus, and be more easily persuaded by some contexts than others. To formalize this problem, we propose two mutual information-based metrics to measure a model's dependency on a context and on its prior about an entity: first, the persuasion score of a given context represents how much a model depends on the context in its decision, and second, the susceptibility score of a given entity represents how much the model can be swayed away from its original answer distribution about an entity. We empirically test our metrics for their validity and reliability. Finally, we explore and find a relationship between the scores and the model's expected familiarity with an entity, and provide two use cases to illustrate their benefits. https://github.com/kdu4108/measureLM
1 Introduction
Language models have displayed remarkable abilities to answer factual queries about entities, suggesting that they encode knowledge about these entities learned during pretraining (Petroni et al., 2019; Brown et al., 2020; Roberts et al., 2020; Geva et al., 2021). For prompts that extend a question with additional information or context, the model can draw on both its prior knowledge and the additional context to answer the query (Kwiatkowski et al., 2019; Joshi et al., 2017; Berant et al., 2013; Kasai et al., 2023). While previous research has investigated how often a model will rely on prior knowledge over conflicting contextual information in answering questions (Longpre et al., 2021), we hypothesize that models will not behave identically for all contexts and entities.
Figure 1: In answering a given query, a model may be more susceptible to context for some entities than others, while some contexts may be more persuasive than others (as indicated in this figure by color darkness in the rightmost column). We introduce mutual information-based metrics to evaluate how much impact the context has relative to the prior knowledge of a model.
For example, if a language model is prompted with Harry hugged Voldemort. How friendly are Harry Potter and Lord Voldemort?, we might expect the prior knowledge learned from training data describing the rivalry between these two characters to significantly influence the model's answer. However, if the model lacks a strong prior on, say, Susie and Alia, then we might expect its answer to be primarily context-driven when prompted with Susie hugged Alia.
How friendly are Susie and Alia?. We formalize this problem through the lens of evaluating the change in a model’s answer distribution for different contexts and entities. We present two mutual information-based metrics that allow us to explore differences in the effect of specific contexts on model behavior for different entities. The persuasion score of a given context measures how much a model’s answer distribution is affected by the context when prompted with a particular query about a given entity. The susceptibility score of a given entity measures how much the model’s answer distribution can be swayed for a particular query about that entity, marginalized over all contexts. Given their basis in mutual information, our metrics are natural operationalizations of per1 13211 suasion and susceptibility. Furthermore, we offer empirical evidence of the validity and reliability of these measures by comparing them against similar metrics and showing their robustness to paraphrases and different samples. To study how language models behave for different contexts and entities, we create a synthetic dataset of queries covering 122 topic areas extracted from the YAGO knowledge graph (Suchanek et al., 2007), entities extracted from YAGO and generated with GPT-4 (OpenAI, 2023), and contexts constructed with different qualities, e.g., relevancy and assertiveness. We apply our new metrics to Pythia models ranging from 70m to 12b parameters (Biderman et al., 2023) to find evidence that relevant contexts are consistently more persuasive than irrelevant ones, and assertive contexts are more persuasive than less assertive ones for yes–no questions. In a deep dive into one model, we find evidence that entities with high expected familiarity, as measured by both training data frequency and entity degree statistics in the YAGO knowledge graph, have lower susceptibility scores. We further conduct case studies to show how these metrics could be useful in applied settings. In a study on friend–enemy stance measurement, we find evidence that enemy duos are less susceptible than friend duos. Applying our metrics to gender bias, we find evidence for a difference in susceptibility between stereotypically masculine and feminine names for gender-biased contexts. Through this, we show how our proposed metrics can be used to better analyze the effects of context and prior knowledge, with the potential for application toward greater control over model behavior. 2 Context and Prior Knowledge Much prior work has noted that language models develop biases about entities during training and investigated the tension between this entity bias and additional information provided in the context. 2.1 Entity Bias Loosely defined across studies as model bias from spurious correlations between entity mentions and a target characteristic in the training data, entity bias has been mainly examined in relation extraction (RE), where a model extracts relationships between entities from text (Zelenko et al., 2002; Zhou et al., 2005). Different studies on RE have attempted to either mitigate (Zhang et al., 2017, 2018; Peng et al., 2020; Wang et al., 2022) or leverage (Yamada et al., 2020; Zhou and Chen, 2022) entity bias for improved performance on the task. Wang et al. 
(2022) note that entities can carry both useful semantic information about their roles, e.g., whether the entity is a person or a place, and spurious information which can bias the model toward relations not mentioned in the target sentence, e.g., the model may link Switzerland with the relation countries_of_residence at inference time for a sentence that does not mention this relation, due to their frequent co-occurrence in the training data. Additional studies on machine reading comprehension (MRC) tasks, e.g., SQuAD (Rajpurkar et al., 2016), TriviaQA (Joshi et al., 2017), and NaturalQuestions (Kwiatkowski et al., 2019), have found substituting different entity names can result in meaningful changes in model predictions and overall evaluation performance (Yan et al., 2022). 2.2 Context and Entity Bias The existence of entity bias naturally raises the question of how it interacts with context to shape a model’s response. Several papers (Longpre et al., 2021; Chen et al., 2022; Xie et al., 2023) approach this by inducing and exploring knowledge conflicts, where a context preceding a query proposes information that conflicts with a model’s prior knowledge about that query. They measure the model’s reliance on pretrained entity bias and context by computing the memorization ratio: the proportion of knowledge conflict examples for which the model maintains its answer from prior knowledge. Pezeshkpour (2023) measures a model’s prior knowledge of a fact by comparing the entropy of a queried model’s answer distribution before and after stating the fact in context. Several studies further propose interventions to reduce entity bias and favor in-context information. These approaches include prompting (Zhou et al., 2023; Onoe et al., 2023), modifying training data (Wang et al., 2023), fine-tuning (Li et al., 2023), and neuron-level interventions (Yu et al., 2023) at inference time. However, the metrics used to quantify model reliance on context and entity bias in these papers—excepting Pezeshkpour (2023)—are limited in several ways. First, most previous work does not develop entity-specific or context-specific metrics for the strength of the entity bias or context persuasiveness. Instead, their metrics produce only 2 13212 a single number to summarize the model’s overall reliance on entity bias. Second, their setups are limited to adversarial cases in which a context is chosen to go against the entity bias. Indeed, we may well wish to measure the interplay of context and entity bias in other cases, such as when context reinforces entity bias or when they do not clearly disagree. Therefore, we seek to define a metric that measures how much the model depends on a given context or entity with rigorous, theoretically grounded interpretations. 3 Our Formalization We now formalize the problem setting in which we define metrics for the susceptibility of an entity and the persuasiveness of a context for a given model. Let Σ be an alphabet. Consider a language model pM over Σ, i.e., pM is a distribution over the Kleene closure Σ∗. Furthermore, we assume that pM was estimated from a corpus D ⊂Σ∗. Let E ⊂Σ∗be the subset of strings that could correspond to string representations of entities.1 Let Q = {qn}N n=1 be a set of N slotted query templates of type qn : E → Σ∗that fill the slot with the argument to each qn. 
We can think of a query template q ∈ Q as slotting an entity into a query, e.g., slotting the entity Slovenia into the query template q(e) = "The capital of e is" produces the string "The capital of Slovenia is". Now, consider three random variables. First, let C be a Σ∗-valued random variable that stands for a context. Second, let A be a Σ∗-valued random variable whose values are answers. Let E be an E-valued random variable. The pushforward q(E), then, is a Σ∗-valued random variable over slottings of entities according to query q ∈ Q. The random variables C, q(E) and A are jointly distributed according to the following probability distribution, defined as
p(C = c, q(E) = q(e), A = a) ∝ pM(c q(e) a),   (1)
where c q(e) a ∈ Σ∗ is string concatenation. To formalize persuasion and susceptibility, we make heavy use of the joint distribution p in the subsections that follow.
3.1 Persuasion Score
For each context c that is prepended to a query template with a given entity slotted in, q(e), we wish to assign a persuasion score ψ to represent how successful that context is at altering a model's answer distribution. This score depends on the specific queried entity e, because contexts themselves are often entity-dependent. Intuitively, a context's persuasion score should measure how much the probability distribution of possible answers changes, averaged across all possible answers. More precisely, we define our persuasion score ψ(c, q(e)) as the half-pointwise mutual information (half-PMI) between the context c and the answer random variable A, conditioned on the fixed query about an entity:
ψ(c, q(e)) ≜ I(C = c; A | q(E) = q(e)) = Σ_{a ∈ Σ∗} p(a | c, q(e)) log [ p(a | c, q(e)) / p(a | q(e)) ] = KL(p(A | c, q(e)) || p(A | q(e))),   (2)
where p(A | c, q(e)) and p(A | q(e)) can be derived by marginalizing and conditioning Eq. (1). The persuasion score of a context can then be interpreted as the degree (in nats) to which the context was able to change the model's answer distribution when prepended to a query. When the persuasion score is at its lower bound of 0 nats, it indicates the context is completely unpersuasive, i.e., it did not change the model's answer distribution at all. Contexts with higher persuasion scores change the answer distribution more, which is consistent when viewed through the lens of KL-divergence.2
1 In practice, an entity can have different verbalizations, e.g., Sherlock and Sherlock Holmes. And whether a specific string refers to an entity may very well depend on the context in which it occurs. Thus, our set E is an approximation at best.
2 We include details on further equivalencies of half-PMI and other concepts in information theory in App. A.
3.2 Susceptibility Score
For a given query template applied to an entity q(e), we further wish to assign a susceptibility score χ to q(e) which represents how easy it is to change a model's answer distribution. Intuitively, the susceptibility score should measure how much a model's answer distribution to a query changes when prompted with additional context, averaged across all possible contexts and answers. More precisely, we define the susceptibility score χ(q(e)) as the mutual information between the context and answer random variables, conditioned on a fixed query about an entity:
χ(q(e)) ≜ Σ_{c ∈ Σ∗} p(c) ψ(c, q(e)) = I(C; A | q(E) = q(e)),   (3)
Equivalent to the difference in entropy H(A | q(E) = q(e)) −H(A | C, q(E) = q(e)), the susceptibility score represents the reduction in answer distribution uncertainty (in nats) when a query is preceded by a context. A high susceptibility score means the model is highly influenced by context for the query about that entity, with its upper bound of H(A) indicating that context fully determines the answer. A low score indicates the model’s response is robust to context, with its lower bound of 0 indicating no influence of context on the answer distribution. We can use susceptibility to answer the question “How much does the model’s answer depend on its entity bias?” with an information-theoretically grounded, interpretable scale based on the model’s full behavior. 3.3 Entity-Independent Persuasion Score Persuasion scores can be further marginalized over entities. Analogous to our definition of the susceptibility score, we define the entity-independent persuasion score of a context as how much the log probability distribution of possible answers changes, averaged across all possible entities and answers. We describe this further in App. B. 4 Experiments We now provide empirical evidence to further validate our metrics, characterize model behavior using the susceptibility and persuasion scores, and investigate reasons behind differences in susceptibility scores for different entities. 4.1 Setup For each of the 122 different relations from the YAGO knowledge graph, we collect 100 entities3 and construct 600 random contexts from relationspecific context templates such that each entity is mentioned in 6 contexts. The 600 contexts are evenly distributed between assertive,4 base,5 and negation6 context types. We construct four query forms of each relation: two closed questions, i.e., yes–no, and two open questions. Using these samples, we compute persuasion scores for each context according to Eq. (2) and 350 are real entities (e.g., Adele) sampled from YAGO, and 50 are fake entities (e.g., Udo König) of the same entity class (e.g., Person) generated with GPT-4 (OpenAI, 2023). 4E.g., Definitely, the capital of {entity} is {answer}. 5E.g., The capital of {entity} is {answer}. 6E.g., The capital of {entity} is not {answer}. susceptibility scores for each entity according to Eq. (3) for each of the four query forms for six Pythia models of different sizes7 trained on the deduplicated Pile (Wolf et al., 2020; Dettmers et al., 2022; Biderman et al., 2023). Due to computational constraints, we approximate the model’s answer distribution with the next-token distribution over the model’s vocabulary.8 The detailed setup can be found in App. C. 4.2 Empirically Validating Our Metrics 4.2.1 Estimating Scores Because persuasion and susceptibility scores both involve a countably infinite sum, we opt for a stochastic approximation scheme. Specifically, we construct a Monte Carlo estimator. We sample from a narrower set of constructed contexts and approximate the answer distribution with the next token distribution over the model’s vocabulary, as described in §4.1. We take a sample size of 600 contexts. While the Monte Carlo approximation itself results in a consistent estimator, the additional approximations mean we do not have a guarantee on the quality of the approximation as a whole. In Fig. 4, we exhibit the variance of our estimator across three random seeds, i.e., sampled sets of context. 4.2.2 Validating Persuasion Scores Convergent Validity. 
According to existing measurement modeling methods (Loevinger, 1957; Messick, 1987; Jackman, 2008; Hand, 2004; Quinn et al., 2010; Jacobs and Wallach, 2021), observing a relationship between a new metric and existing ones would serve as additional evidence that the metric is meaningful. To this end, we explore whether contexts with higher persuasion scores tend to more successfully convince the model to agree with the context. Using the Pythia-6.9b-deduped model, we generate an answer for each context prepended to a query–entity pair from §4.1 and use simple string matching to map the answer to whether it agrees with the context, the original answer, or neither. We then apply a permutation test (k = 10000, α = 0.05 with the Benjamini–Hochberg (BH) correction (Benjamini and Hochberg, 1995)) to test, for each of the 122 query topics, whether contexts that elicited context-concordant answers have higher persuasion scores than those that did not alter the model's answer from the original. Within a query topic, we run separate tests for open and closed queries, since the entropy of the answer distribution for closed queries tends to be much lower than the entropy for open queries. We find that for 59% of open queries, contexts that persuade the model to output the in-context answer have significantly higher persuasion scores than non-persuasive ones. Curiously, this behavior holds for only 34% of closed queries. We discuss this surprising result more in §6. We further show a summary of the mean persuasion scores for the different kinds of elicited answers in Fig. 2.

Figure 2 (Persuasion scores vs sampled answer): The x-axis represents bins for whether the model's answer agreed with its prior, the context, or neither. For open (left) queries, the persuasion scores of contexts that persuaded the model to output an answer matching the context (Context) are higher than those of contexts that did not (Original, Other).

Construct Reliability. To show construct reliability, we provide evidence that persuasion scores do not strongly vary for inputs that differ only in phrasing, i.e., query form. For example, the persuasion score of The capital of Slovenia is Oz. should be similar when prepended to either Is Slovenia's capital Ljubljana? or Is Ljubljana the capital of Slovenia?. We compute persuasion scores using the setup from §4.1 with two query forms for both closed and open queries, keeping contexts that appeared for the same query and entity for all seeds. We then compute the variance across the query forms to test for reliability. Fig. 4 shows strong evidence that the metric is reliable. The variance is very low across different closed query forms, as expected. While open query forms have higher variance, this is not unexpected because the question-answering query form, e.g., Q: What is the capital of Slovenia?\nA: is more specific than the sentence-completion form, e.g., The capital of Slovenia is, which has a broader set of plausible answers.
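For concreteness, the following sketch shows how the per-query significance testing described above could be run once the persuasion scores have been grouped (e.g., scores of contexts that elicited context-concordant answers versus scores of contexts that did not). The paper permutes a t-statistic; for brevity this sketch permutes a difference in group means, which changes only the test statistic, and the BH step is a plain implementation of the cited procedure.

import numpy as np

def permutation_test_one_sided(x, y, k=10_000, rng=None):
    # H0: mean(x) is not greater than mean(y); H1: mean(x) > mean(y).
    rng = np.random.default_rng(rng)
    x, y = np.asarray(x, float), np.asarray(y, float)
    observed = x.mean() - y.mean()
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(k):
        perm = rng.permutation(pooled)
        count += (perm[: len(x)].mean() - perm[len(x):].mean()) >= observed
    return (count + 1) / (k + 1)

def benjamini_hochberg(pvals, alpha=0.05):
    # Boolean mask of hypotheses rejected at FDR level alpha (step-up procedure).
    p = np.asarray(pvals, float)
    m = len(p)
    order = np.argsort(p)
    thresh = alpha * (np.arange(1, m + 1) / m)
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        cutoff = np.max(np.nonzero(below)[0])
        reject[order[: cutoff + 1]] = True
    return reject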
4.2.3 Validating Susceptibility Scores

Convergent Validity. We compare susceptibility scores to the per-entity memorization ratio (MR; adapted from Longpre et al. (2021) to apply on a per-entity basis and described in more detail in App. D) as further evidence for the meaningfulness of our metric, using the setup from §4.1 for the Pythia-6.9b-deduped model. Fig. 3 shows a decreasing upper-bounding relationship between susceptibility scores and MR for both open queries and closed queries. While not a straightforward correlation, this pattern supports the convergent validity of susceptibility scores and can be explained by the scores' nature of measuring a difference in entropy. That is, if the model's answer was often changed (low MR), this could correspond to a wide range of entropy change (low or high susceptibility), depending on the model's confidence without the context. If the model's answer was mostly unchanged (high MR), the entropy likely remained similar (low susceptibility). We discuss these results further in App. D.2.

Figure 3 (Susceptibility score vs memorization ratio, binned): Susceptibility score (y-axis) against the MR divided into 5 bins between 0 and 1 (x-axis) for all entities and queries. The opacity represents the proportion of points in each bin. For both open queries and closed queries, we see a decreasing upper bound between the MR and susceptibility score. While the quartiles of the open queries generally decrease (except for the lowest bin), the opposite occurs for closed queries.

Construct Reliability. Following the setup used in §4.2.2, we consider the same two query forms for both open and closed questions to test for reliability (low variance) for susceptibility scores. Fig. 4 shows strong evidence for the reliability of susceptibility scores. Similarly to the persuasion scores, the variance for both open and closed queries is very low across closed query forms; open query forms have higher variance because they have meaningfully different possible answers, as discussed in §4.2.2.

Figure 4 (Variance of susceptibility and persuasion scores): Summarizing across all 122 queries, we display the variance of susceptibility scores (blue) and persuasion scores (orange), across three random seeds (left) and across two query forms (right), stratified for both closed and open queries (x-axis). The variance is very low across random seeds for both query types, and, for closed queries, across the specific query form. Variance is high for the different open query forms.

4.3 What Makes a Context Persuasive?

To better characterize model behavior with persuasion scores, we explore several tests for qualities that might distinguish more persuasive contexts from less persuasive ones: relevance, assertiveness, and negation.

4.3.1 Relevance

Experiment Setup. We use the setup described in §4.1. For the relevance test, we consider a relevant context to be one that mentions the queried entity, and an irrelevant context as one that does not. We hypothesize that relevant contexts should be more persuasive than irrelevant ones.
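As a small illustration of the grouping this test relies on, the sketch below splits already-scored contexts by whether they mention the queried entity; the per-group means would then be compared with the permutation test described next. The (context, score) pairing and the verbatim string match are simplifying assumptions.

def split_by_relevance(entity: str, scored_contexts: list[tuple[str, float]]):
    # A context counts as relevant if it mentions the queried entity; real entity
    # matching may need to normalize different verbalizations of the same entity.
    relevant = [score for ctx, score in scored_contexts if entity in ctx]
    irrelevant = [score for ctx, score in scored_contexts if entity not in ctx]
    return relevant, irrelevant

# Example with hypothetical scores:
# rel, irr = split_by_relevance("Slovenia",
#     [("The capital of Slovenia is Oz.", 2.1), ("Paris is lovely in spring.", 0.2)])
# sum(rel) / len(rel), sum(irr) / len(irr)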
For each of the 122 queries, we find the mean persuasion score of the relevant contexts and irrelevant contexts and use a permutation test to determine whether the mean persuasion score for relevant contexts (across entities) is higher than the mean persuasion score for irrelevant contexts, using a significance level of α = 0.05 with the BH correction.

Results. As seen in Fig. 5 (▼), across most model sizes and relations, relevant contexts are significantly more persuasive than irrelevant contexts. Specifically, depending on the model, 95–100% of open queries and 83–100% of closed queries showed a significant result. We also see a trend in which, as model size increases, so too does the degree to which relevant contexts are more persuasive, as measured by the mean effect size. We summarize the significance test results and effect sizes for all models and queries in App. E.

4.3.2 Assertiveness

Experiment Setup. Using a similar setup to §4.3.1, we explore whether assertive contexts are more persuasive than base ones by testing whether the mean persuasion score of the former group is greater than that of the latter for each query.

Results. Fig. 5 (■) shows that consistently across model sizes, assertive contexts tend to be more persuasive than base contexts for closed queries but not for open queries. Moreover, assertive contexts are most persuasive for medium-sized models such as 1.4b and 2.8b. We hypothesize that smaller models may be less persuaded by assertive contexts because they may be worse at integrating context into their answers. Larger models may be less persuaded because of their stronger prior knowledge and lower susceptibility to context, whether assertive or not. We highlight the significance test results and effect sizes for all queries for the Pythia-6.9b-deduped model in Fig. 6 and for the other models in App. E.

4.3.3 Negation

Experiment Setup. We use the same setup as in §4.3.2 and explore whether negation contexts differ in persuasiveness from the base ones, using a two-tailed permutation test for each query.

Results. The permutation tests suggest evidence for a significant difference in 88% of the closed queries and 74% of the open queries for the Pythia-6.9b-deduped model. However, there is no consistent directional pattern; from Fig. 7, we see that negations are significantly more persuasive for some queries while significantly less persuasive for others. App. E shows a similar pattern for other models. Fig. 5 (▲) also shows that the smallest model is the most sensitive to being persuaded differently by negation vs base contexts. For closed queries, this may be due to potential spurious correlations or token biases, e.g., an association between seeing the word not and the model's probability of outputting No.

4.3.4 Comparing Context Qualities

For all models on open queries, the relevance of a context has the greatest effect on persuasion score compared to assertiveness or negation, as measured by effect size. This is consistent with our intuition that the model should be more sensitive to whether the queried entity is mentioned in the context than to other context features.
However, we can see in Fig. 5 that for medium-sized models on closed queries, the other context comparisons have stronger effect sizes, e.g., assertiveness and negation for the 2.8b model. This could be explained by potential spurious correlations or token biases associating, e.g., the word not with the model outputting No or the word definitely with outputting Yes.

Figure 5 (Permutation test results across models and comparisons): The plots in Fig. 5a (% of queries with a significant result) indicate the proportion of queries for which (▼) relevant contexts are significantly more persuasive than irrelevant contexts, (•) unfamiliar entities are significantly more susceptible than familiar entities, (■) assertive contexts are significantly more persuasive than base contexts, and (▲) negation contexts are significantly more persuasive than base contexts. We further provide the average effect size over queries of those comparisons in Fig. 5b (mean effect size). We highlight specific findings in §4.3 and §4.4.

Figure 6 (Effect sizes for assertive vs base contexts): These plots show, for the Pythia-6.9b-deduped model, the effect size between base and assertive contexts (y-axis) and p-values (red is significant, blue is insignificant) of the null hypothesis that persuasion scores of assertive contexts are not greater than those of base contexts, for each of the 122 queries (x-axis). The evidence suggests that assertive contexts are significantly more persuasive than base contexts for most closed queries, but few open queries.

4.4 What Makes an Entity Susceptible?

4.4.1 Familiar vs Unfamiliar Entities

We hypothesize that the model should have lower susceptibility scores for entities encountered during pretraining compared to unfamiliar, fake entities.

Experiment Setup. We use the setup described in §4.1 to compute susceptibility scores for known and unknown entities. We use a permutation test with α = 0.05 and the BH correction to test whether the known real entities are less susceptible than the unknown fake entities (we filter out fake entities that appear in the training data to better represent unknown entities in our analysis). The detailed setup can be found in App. C.

Figure 7 (Effect sizes for negation vs base contexts): These plots show, for the Pythia-6.9b-deduped model, the effect size (y-axis) and p-values (red is significant, blue is insignificant) of the null hypothesis that persuasion scores of negation contexts are the same as those of base contexts, for each of the 122 queries (x-axis). While many queries are significant, some queries exhibit a significantly positive effect while others exhibit a significantly negative one.

Results. For the Pythia-6.9b-deduped model, we find that for 73/122 queries (open questions) and 61/122 queries (closed questions), familiar entities have significantly lower susceptibility scores than unfamiliar fake entities. We conjecture that for the remaining queries, the model may not have strong prior biases about the sampled entities for these queries. Indeed, further analysis (App.
F.1) finds some evidence supporting the hypothesis that queries with smaller effect sizes or less significant p-values feature less familiar entities, as we find a small correlation between effect size and entity frequency in the training set. Fig. 8 shows the distribution of susceptibility scores for real and fake entities for an example query with a particularly strong effect size.

Figure 8 (Susceptibility scores for real vs fake entities): The officialLanguage query topic exhibits a particularly strong effect in which real entities have lower susceptibility scores than fake entities for open questions. This plot shows susceptibility scores for 100 real and fake entities, for two different query forms: (a) Q: What is the official language of {entity}?\nA:, and (b) The official language of {entity} is.

Generally, as model size increases, so too does the significance and effect size of unfamiliar entities being more susceptible than familiar entities, as seen by the generally increasing blue lines in all four plots in Fig. 5. This trend is consistent with our expectation that bigger models have stronger prior knowledge of entities and are therefore less susceptible to familiar entities. The smallest model (70m) does not have a significant difference in susceptibility between familiar and unfamiliar entities, which could indicate it is too small to have strong prior knowledge.

4.4.2 Degrees of Familiarity

Training Data Frequency. Since language models are parameterized with knowledge from their training corpora, we hypothesize that the model is less susceptible for entities with which it is more familiar, i.e., entities that occur more frequently in the training data. We investigate this relationship between the Pythia models' behavior and frequency statistics in the Pile dataset on which they were trained (Gao et al., 2020). To capture the model's familiarity with an entity–answer relation, we count the number of co-occurrences between the entity and its corresponding answer within a 50-word window. We compare the susceptibility score to this co-occurrence frequency and find a significant correlation (Spearman ρ = −0.23) for the Pythia-6.9b-deduped model. We see in Fig. 9 that as the training data frequency increases, the susceptibility scores' upper bound decreases. This trend is shared across all model sizes (see App. F.2).

Figure 9 (Susceptibility Score to Frequency): For both open and closed queries, the upper bound of the mean susceptibility scores for entity–answer pairs decreases as co-occurrence frequency in the Pile increases (Pythia-6.9b-deduped).

Entity Degree in a Knowledge Graph. Training data can be noisy, and difficult and expensive to search through for entity co-occurrences and more complex frequency statistics. Knowledge graphs offer a more precise alternative, as they represent entities and relations extracted from common corpora like Wikipedia in a structured way. For example, within the pretraining data, it is very difficult to identify the number of different answers with which an entity will co-occur within the context of a specific relation. However, with a knowledge graph, we can easily identify the exact number of objects with which an entity might share a given relation.
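As a sketch of how such a relation-dependent count might be computed from a knowledge graph (the formal definition of δ(e, q) appears in App. F.3), assuming the graph is available as (subject, relation, object) triples:

from collections import defaultdict

def relation_degree(triples, relation):
    # delta(e, q): number of distinct entities linked to e (in either direction)
    # via the given relation. `triples` is assumed to be an iterable of
    # (subject, relation, object) tuples, e.g., extracted from a YAGO dump.
    neighbors = defaultdict(set)
    for s, r, o in triples:
        if r == relation:
            neighbors[s].add(o)
            neighbors[o].add(s)
    return {entity: len(objs) for entity, objs in neighbors.items()}

# Example with a hypothetical triple:
# degrees = relation_degree([("Slovenia", "capital", "Ljubljana")], "capital")
# degrees["Slovenia"]  # -> 1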
Thus, we explore the relationship between the relation-dependent degree of an entity in a knowledge graph and the susceptibility score. As with training data frequency, we find that comparing against this degree yields a similar-looking plot with a decreasing upper bound between susceptibility scores and the degree. Such a trend suggests that unfamiliar entities have the potential to be highly susceptible, while very familiar entities tend to have lower susceptibility. In App. F.3, we show this trend is shared across all model sizes and provide our methodology in more detail.

5 Applications

We examine how knowing susceptibility scores can be useful for analyzing model behavior in two different applications. Here, we highlight one key finding per application; however, we emphasize that future work can conduct a more detailed analysis of these and other applications.

Social Sciences Measurement. Social scientists use large language models (LLMs) for annotating data and descriptive data analysis, yet such use may inadvertently incorporate entity biases and skew model behavior (Ziems et al., 2024; Gilardi et al., 2023; Zhang et al., 2023; O'Hagan and Schein, 2023). To better contextualize model annotations for a case study on friend–enemy stance detection (Choi et al., 2016; Stoehr et al., 2023), we aim to understand how susceptibility scores may differ between friend- and enemy-based entity pairs for the query The relationship between {entity1} and {entity2} is, e.g., are famous friend-based relationships more susceptible than enemy-based relationships? From Fig. 10, we see that with the Pythia-6.9b-deduped model for two specific query forms, enemy duos are less susceptible than friend duos, which can inform social scientists that a model's annotation for friend pairs may be more easily influenced by the context than for enemy pairs. We provide more details in App. G.

Figure 10 (Susceptibility scores for friend duos and enemy duos): Susceptibility scores for different entity-pairs which are either friends or enemies. Enemy duos appear to have lower susceptibility than friend duos.

Exploring Gender Bias. Since higher susceptibility scores indicate weaker induced biases for entities, we conjecture that this can relate to being underrepresented in the training data. Based on this, we consider how the susceptibility score can be used to study gender bias in LLMs. Using GPT-4, we collect stereotypically biased contexts, gendered names, and neutral queries and run several experiments to identify gender discrepancies in susceptibility scores; see App. G for full details. We highlight a result where masculine names have a higher susceptibility than feminine names when swapping the genders in the stereotypical contexts, as seen in Fig. 11. This could indicate that the model is more surprised to see contexts claiming men follow feminine stereotypes, and therefore could suggest less representation of feminine stereotypes in the training data than masculine ones.

Figure 11 (Susceptibility Scores from Stereotypes): Susceptibility scores for the gendered names (masculine, feminine) over the stereotypical contexts. M and F are the original stereotypes, and M* and F* correspond to the swapped genders.
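As an illustration of how the per-group comparison behind Fig. 11 might be assembled, the sketch below averages susceptibility scores over all name–query pairs for one context collection (M, M*, F, or F*), reusing the susceptibility_score estimator sketched earlier. The exact aggregation is an assumption on our part; the name, query, and context lists are those given in App. G.2.1.

import numpy as np

def group_susceptibility(names, query_templates, contexts, susceptibility_score):
    # Mean and spread of susceptibility over all name-query pairs for one
    # context collection (e.g., the masculine-stereotype contexts).
    scores = [
        susceptibility_score(contexts, template.format(entity=name))
        for name in names
        for template in query_templates
    ]
    return float(np.mean(scores)), float(np.std(scores))

# Example (lists as in App. G.2.1):
# m_mean, _ = group_susceptibility(male_names, neutral_queries, masculine_contexts, susceptibility_score)
# f_mean, _ = group_susceptibility(female_names, neutral_queries, masculine_contexts, susceptibility_score)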
6 Discussion and Conclusion We have made the case that the persuasion score and susceptibility score are both valid and reliable in measuring their respective constructs, following a well-established measurement modeling framework. Throughout our experiments, we find a common theme in the results: there is a strong, negative, possibly linear relationship between the upper bound of the susceptibility score and (a) the entity’s memorization ratio (Fig. 3), (b) the log-co-occurrence frequency in the training data (Fig. 9), and (c) the log-relation-dependent degree in a knowledge graph. That is, as each of those three values increases, we see a clear pattern indicating that the highest susceptibility scores tend to decrease. This is consistent with our hypothesis that the induced bias of an entity increases for a model as the model’s expected familiarity with the entity increases. Furthermore, we find a difference in behavior between scores for open and closed queries; while in many experiments, we see similar patterns between the two, susceptibility and persuasion appear to have a stronger relationship with memorization ratio for open queries, while assertive contexts appear to be significantly more persuasive than base contexts primarily for closed queries. This difference is a surprising phenomenon that warrants future study; we hypothesize closed queries may behave this way due to common token biases (with its output space of Yes and No) or the influence of mentioning both an entity and an answer in the query. Finally, while we applied these metrics to analyze model behavior in two case studies, in future work we aim to apply them to unearth new perspectives in other context and prior knowledge-dependent problems such as retrieval-augmented generation, model editing and control, and few-shot learning. 9 13219 Limitations We face some technical limitations in executing the empirical aspects of this work. First, while §3 defines the output space of A as the set of all possible outputs Σ∗, in practice, it is computationally expensive to estimate that probability distribution. Instead, we look only at the model’s probability distribution of the next token, which could be a noisy signal, especially in cases where the answer suggested by a context and the answer suggested from prior knowledge share the same first token. Second, it is difficult to go through the whole Pile to count answer–entity co-occurrences without noise. Third, the scores depend on the sampled contexts, which may not be representative of all applications. Ethics Statement As LLM capabilities grow more advanced and their usage proliferates throughout the real world, we acknowledge that their development can exacerbate risks to people, especially those historically underrepresented or misrepresented to these models. Our work aims to make model behavior more transparent by providing a new tool to analyze the interaction between context and prior knowledge in LMs, which is especially important as people interact with them in chat, question-answering, and other prompt-based settings. We foresee no particular ethical concerns and hope this paper contributes to developing tools that can identify and mitigate ethical concerns in the future. Acknowledgements We thank Alex Warstadt, Tiago Pimentel, Anej Svete, Alexandra Butoi, Benjamin Dayan, and Leo Du for helpful comments, discussion, and feedback. Niklas Stoehr acknowledges funding through the Swiss Data Science Center (SDSC) Fellowship. 
Vésteinn Snæbjarnarson is supported by the Pioneer Centre for AI, DNRF grant number P1. References Yoav Benjamini and Yosef Hochberg. 1995. Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society. Series B (Methodological), 57(1):289–300. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544. Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O’Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. 2023. Pythia: A suite for analyzing large language models across training and scaling. In International Conference on Machine Learning, pages 2397–2430. PMLR. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Hung-Ting Chen, Michael Zhang, and Eunsol Choi. 2022. Rich knowledge sources bring complex knowledge conflicts: Recalibrating models to reflect conflicting evidence. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2292–2307, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Eunsol Choi, Hannah Rashkin, Luke Zettlemoyer, and Yejin Choi. 2016. Document-level sentiment inference with social, faction, and discourse context. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 333–343, Berlin, Germany. Association for Computational Linguistics. Thomas M. Cover and Joy A. Thomas. 2006. Elements of Information Theory. Wiley-Interscience. Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. 2022. LLM.int8(): 8-bit matrix multiplication for transformers at scale. In Advances in Neural Information Processing Systems, volume 35, pages 30318–30332. Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2020. The Pile: An 800GB dataset of diverse text for language modeling. arXiv preprint. Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. 2021. Transformer feed-forward layers are key-value memories. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5484–5495. 10 13220 Fabrizio Gilardi, Meysam Alizadeh, and Maël Kubli. 2023. ChatGPT outperforms crowd workers for text-annotation tasks. Proceedings of the National Academy of Sciences, 120(30). David J. Hand. 2004. Measurement Theory and Practice: The World Through Quantification. WileyBlackwell. Simon Jackman. 2008. Measurement. In The Oxford Handbook of Political Methodology. Oxford University Press. Abigail Z. Jacobs and Hanna Wallach. 2021. Measurement and fairness. 
In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, page 375–385. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611. Jungo Kasai, Keisuke Sakaguchi, Yoichi Takahashi, Ronan Le Bras, Akari Asai, Xinyan Velocity Yu, Dragomir Radev, Noah A. Smith, Yejin Choi, and Kentaro Inui. 2023. RealTime QA: What’s the answer right now? In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452–466. Daliang Li, Ankit Singh Rawat, Manzil Zaheer, Xin Wang, Michal Lukasik, Andreas Veit, Felix Yu, and Sanjiv Kumar. 2023. Large language models with controllable working memory. In Findings of the Association for Computational Linguistics: ACL 2023, pages 1774–1793. Jane Loevinger. 1957. Objective tests as instruments of psychological theory. Psychological Reports, 3(3):635–694. Shayne Longpre, Kartik Perisetla, Anthony Chen, Nikhil Ramesh, Chris DuBois, and Sameer Singh. 2021. Entity-based knowledge conflicts in question answering. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7052–7063. Samuel Messick. 1987. Validity. ETS Research Report Series, 1987(2):i–208. Sean O’Hagan and Aaron Schein. 2023. Measurement in the age of LLMs: An application to ideological scaling. arXiv preprint. Yasumasa Onoe, Michael Zhang, Shankar Padmanabhan, Greg Durrett, and Eunsol Choi. 2023. Can LMs learn new entities from descriptions? challenges in propagating injected knowledge. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5469–5485. OpenAI. 2023. GPT-4 technical report. arXiv preprint. Hao Peng, Tianyu Gao, Xu Han, Yankai Lin, Peng Li, Zhiyuan Liu, Maosong Sun, and Jie Zhou. 2020. Learning from Context or Names? An Empirical Study on Neural Relation Extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3661–3672. Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 2463–2473. Pouya Pezeshkpour. 2023. Measuring and modifying factual knowledge in large language models. arXiv preprint. Kevin M. Quinn, Burt L. Monroe, Michael Colaresi, Michael H. Crespin, and Dragomir R. Radev. 2010. How to analyze political attention with minimal assumptions and costs. American Journal of Political Science, 54(1):209–228. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392. 
Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5418–5426. Niklas Stoehr, Pengxiang Cheng, Jing Wang, Daniel Preotiuc-Pietro, and Rajarshi Bhowmik. 2024. Unsupervised contrast-consistent ranking with language models. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 900–914. Niklas Stoehr, Lucas Torroba Hennigen, Josef Valvoda, Robert West, Ryan Cotterell, and Aaron Schein. 2023. An ordinal latent variable model of conflict intensity. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4817–4830, Toronto, Canada. Association for Computational Linguistics. 11 13221 Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: A core of semantic knowledge. In Proceedings of the 16th International Conference on World Wide Web, page 697–706. Fei Wang, Wenjie Mo, Yiwei Wang, Wenxuan Zhou, and Muhao Chen. 2023. A causal view of entity bias in (large) language models. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 15173–15184. Yiwei Wang, Muhao Chen, Wenxuan Zhou, Yujun Cai, Yuxuan Liang, Dayiheng Liu, Baosong Yang, Juncheng Liu, and Bryan Hooi. 2022. Should we rely on entity mentions for relation extraction? debiasing relation extraction with counterfactual analysis. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3071–3081. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-Art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38–45. Jian Xie, Kai Zhang, Jiangjie Chen, Renze Lou, and Yu Su. 2023. Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts. ArXiv:2305.13300 [cs]. Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, and Yuji Matsumoto. 2020. LUKE: Deep contextualized entity representations with entityaware self-attention. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6442–6454. Jun Yan, Yang Xiao, Sagnik Mukherjee, Bill Yuchen Lin, Robin Jia, and Xiang Ren. 2022. On the robustness of reading comprehension models to entity renaming. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 508–520. Qinan Yu, Jack Merullo, and Ellie Pavlick. 2023. Characterizing mechanisms for factual recall in language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 9924–9959. Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2002. Kernel methods for relation extraction. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing - Volume 10, EMNLP ’02, page 71–78. 
Wenxuan Zhang, Yue Deng, Bing Liu, Sinno Jialin Pan, and Lidong Bing. 2023. Sentiment analysis in the era of large language models: A reality check. arXiv preprint. Yuhao Zhang, Peng Qi, and Christopher D. Manning. 2018. Graph convolution over pruned dependency trees improves relation extraction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2205–2215. Yuhao Zhang, Victor Zhong, Danqi Chen, Gabor Angeli, and Christopher D. Manning. 2017. Position-aware attention and supervised data improve slot filling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 35– 45. GuoDong Zhou, Jian Su, Jie Zhang, and Min Zhang. 2005. Exploring various knowledge in relation extraction. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 427–434. Wenxuan Zhou and Muhao Chen. 2022. An improved baseline for sentence-level relation extraction. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 161–168. Wenxuan Zhou, Sheng Zhang, Hoifung Poon, and Muhao Chen. 2023. Context-faithful prompting for large language models. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 14544–14556. Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, and Diyi Yang. 2024. Can large language models transform computational social science? Computational Linguistics, pages 1–55. 12 13222 A A Primer on Half-pointwise Mutual Information Half-pointwise mutual information (HPMI) is a non-standard concept. There is, for example, no mention of it in standard references on information theory (Cover and Thomas, 2006). In this brief primer, we give various properties of HPMI and show how it relates to other concepts in information theory. Given random variable X over the discrete space X, random variable Y over the discrete space Y, and x ∈X, the half-PMI of X = x, Y is defined as: HPMI(X = x; Y ) ≜ X y∈Y p(y | x) log p(y | x) p(y) (4) such that E x∼X [HPMI(X = x; Y )] = E x∼X  X y∈Y p(y | x) log p(y | x) p(y)   (5a) = X x∈X X y∈Y p(x)p(y | x) log p(y | x) p(y) (5b) = X x∈X X y∈Y p(x, y) log p(x, y) p(x)p(y) (5c) = MI(X; Y ) (5d) We now state and prove three Propositions about HPMI. Proposition 1. Given random variable X over the discrete space X, random variable Y over the discrete space Y, and x ∈X, then: HPMI(X = x; Y ) = H(X = x) −H(X = x | Y ) (6) Proof. HPMI(X = x; Y ) ≜ X y∈Y p(y | x) log p(y | x) p(y) (7a) = X y∈Y p(y | x) log p(x, y) p(x)p(y) (7b) = X y∈Y p(y | x) log p(x | y) p(x) (7c) = − X y∈Y p(y | x) log p(x) + X y∈Y p(y | x) log p(x | y) (7d) = −log p(x) + X y∈Y p(y | x) log p(x | y) (7e) = H(X = x) −H(X = x | Y ) (7f) Note that to get from Eq. (7e) to Eq. (7f), H(X = x | Y ) ≜P y∈Y p(y | x) log p(x | y) because E x∼X  X y∈Y p(y | x) log p(x | y)  = X x∈X X y∈Y p(x, y) log p(x | y) (7g) = X x∈X X y∈Y p(x, y) log p(x, y) p(y) (7h) = H(X | Y ) (7i) ■ 13 13223 Proposition 2. Given random variable X over the discrete space X, random variable Y over the discrete space Y, and x ∈X, then: HPMI(X = x; Y ) = Hx(Y ) −H(Y | X = x) (8) where Hx(Y ) ≜−P y p(y | x) log p(y) is the pointwise cross-entropy between X = x and Y . Proof. 
HPMI(X = x; Y ) ≜ X y∈Y p(y | x) log p(y | x) p(y) (9a) = − X y∈Y p(y | x) log p(y) + X y∈Y p(y | x) log p(y | x) (9b) = Hx(Y ) −H(Y | X = x) (9c) ■ Notably, HPMI(X = x; Y ) ̸= H(Y ) −H(Y | X = x). Furthermore, while the decomposition of mutual information into a difference in entropies is symmetric, i.e., H(X) −H(X | Y ) = H(Y ) −H(Y | X), this shows that half-pointwise mutual information is not, i.e., H(X = x) −H(X = x | Y ) ̸= H(Y ) −H(Y | X = x). Proposition 3. Given random variable X over the discrete space X, random variable Y over the discrete space Y, and x ∈X, then: HPMI(X = x; Y ) = KL (p(Y | x) || p(Y )) (10) Proof. Half-PMI is equivalent to KL (p(Y | x) || p(Y )) by definition. ■ Corollary 1. Half-PMI is nonnegative. Proof. Since half-PMI is equivalent to KL (p(Y | x) || p(Y )) (Proposition 3) and KL-divergence is nonnegative, half-PMI must be non-negative. ■ B An Entity-Independent Persuasion Score In addition to the persuasion score, which is entity-dependent, we also consider an entity-independent extension. We assign an entity-independent persuasion score κ to a context c which represents how persuasive a context is at altering a model’s answer distribution to a query, regardless of which entity parameterizes the query. One might be interested in an entity-independent persuasion score for contexts like Only give wrong answers to questions., which we might expect to affect all queries regardless of the entity. Another use case is in comparing the persuasiveness of context templates, such as {entity1} loves {entity2} and {entity1} really really really loves {entity2} for the query What’s the relationship between {entity1} and {entity2}?. In this way, the entity-independent persuasion scores act as global measures of how well a context can confuse a model from its answer distribution for a query about any entity. Analogous to our definition of the susceptibility score, we define the entity-independent persuasion score of a context as how much the log probability distribution of possible answers changes, averaged across all possible entities and answers. More precisely, we define our entity-independent persuasion score κ(c, q) as: κ(c, q) ≜ X e∈E p(q(e) | c)ψ(q(e)) (11a) = X e∈E X a∈Σ∗ p(q(e) | c)p(a | c, q(e)) log p(a | c, q(e)) p(a | q(e)) (11b) = X e∈E X a∈Σ∗ p(q(e) | c)p(a | c, q(e)) log p(a, c | q(e)) p(a | q(e))p(c | q(e)). (11c) 14 13224 which is the half-conditional PMI. Further marginalizing out the context, we arrive at the entityindependent susceptibility score for a query: γ(q) ≜E c∼C [κ(c, q)] (12a) = X c∈Σ∗ p(c)κ(c, q) (12b) = X c∈Σ∗ p(c) X e∈E X a∈Σ∗ p(q(e) | c)p(a | c, q(e)) log p(a, c | q(e)) p(a | q(e))p(c | q(e)) (12c) = X c∈Σ∗ X e∈E X a∈Σ∗ p(a, c, q(e)) log p(a, c | q(e)) p(a | q(e))p(c | q(e)) (12d) = MI(A; C | q(E)). (12e) The entity-independent persuasion score differs from the persuasion score by additionally marginalizing over the entities. As the half-PMI is conditioned on the entity random variable, this tells us, when we already know the entity, for a given context, how much more confident can we be in the answer. In some sense, then, this can be interpreted as the average persuasiveness of a context across all entities for the query. C Detailed Experimental Setup We extract 122 relations from the YAGO knowledge graph (Suchanek et al., 2007), such as alumniOf, capital, and highestPoint. 
For each relation, we do the following: • We randomly sample k real entities (and corresponding answers) from YAGO and use GPT-4 11 (OpenAI, 2023) to generate k fake entities with the same entity class as the real ones12.13 • We construct open and closed query form templates, e.g., (closed) Q: Is {answer} the capital of {entity}?\nA: and (open) Q: What is the capital of {entity}?\nA:, and parameterize them with both real and fake entities (and answers, if applicable), leaving us with 2k queries per query form. • We construct context templates of 3 types: base, e.g., The capital of {entity} is {answer}., assertive, e.g., The capital of {entity} is definitely {answer}., and negation (e.g., The capital of {entity} is not {answer}.). We parameterize these context templates with both real and fake entities (and answers, if applicable). From this, we randomly sample 6k contexts, subject to the constraint that each entity is directly mentioned in 6 contexts total (that is, in 2 assertive contexts, 2 base contexts, and 2 negation contexts). • We compute the persuasion scores ψ(c, q(e)) for the real and fake entities according to Eq. (2). • We compute the susceptibility score χ(q(e)) for the real and fake entities according to Eq. (3). We approximate C with a uniform distribution over the set of sampled contexts and A with the model’s next token probabilities. • For the various group comparisons (e.g., relevant vs irrelevant contexts, familiar vs unfamiliar entities, etc.), we use a permutation test over the t-statistic (α = 0.05, with the BH correction) to test our null hypothesis for each comparison. 11gpt-4-1106-preview, January 2024 12Entity classes: CreativeWork, Event, Intangible, Organization, Person, Place, Product, Taxon, and FictionalEntity. 13Real example: Adele. Fake example: Udo König. 15 13225 D Entity-Specific Memorization Ratio D.1 Definition The memorization ratio (MR), as used by Longpre et al. (2021), is defined as follows: Given a set of (query, context)-pairs with knowledge conflicts (i.e., the answer in the context disagrees with the original answer), MR = po po+ps , where po is the number of queries for which the model returned the original answer and ps is the number of queries for which the model returned the substitute answer presented in the context. Then, the entity-specific MR follows this definition under the additional constraint that the query and entity are fixed, and we vary only the contexts; the resulting number tells us, for a given query about an entity, the fraction of contexts for which the model returned the original answer instead of the substitute one. D.2 Further Discussion on Relation between MR and Susceptibility Score In Fig. 3, the open queries further show a decreasing pattern at each quartile for all bins except the lowest one (MR between 0 and 0.2). The lowest bin includes queries for which a model may fail to know the original answer; in these cases, MR cannot distinguish between the model behavior for these difficult queries, whereas susceptibility scores can provide more granular information about model behavior for such entities. Meanwhile, the closed queries appear to have a mostly increasing pattern at each quartile across the bins. This result could be an artifact of the construction of the closed queries. Since the original answer of all closed queries is Yes, it is possible that the contexts increase the confidence in Yes due to some model artifact or token bias, which would explain higher susceptibility scores even for higher MR. 
E Persuasion Scores: In-Depth Results

E.1 Relevant vs Irrelevant Context Persuasion Scores Across Models

Our null hypothesis is that the mean persuasion score of relevant contexts is not greater than that of irrelevant contexts. We summarize the test results (effect sizes and p-values) for all queries for all models in Fig. 12.

E.2 Assertive vs Base Context Persuasion Scores Across Models

Our null hypothesis is that the mean persuasion score of assertive contexts is not greater than that of base contexts. We summarize the test results (effect sizes and p-values) for all queries for all models in Fig. 13.

E.3 Negation vs Base Context Persuasion Scores Across Models

Our null hypothesis is that the mean persuasion score of negation contexts is not equal to that of base contexts. We summarize the test results (effect sizes and p-values) for all queries for all models in Fig. 14.

F Susceptibility Scores: In-Depth Results

F.1 Unfamiliar vs Familiar Entity Susceptibility Scores Across Models

Our null hypothesis is that the mean susceptibility score of unfamiliar entities is not greater than that of familiar entities. We summarize the test results (effect sizes and p-values) for all queries for all models in Fig. 15. From this figure, we can see how the effect size and the percentage of significant queries generally increase with model size; notably, the smallest model has no significant results for any query. However, even the larger models do not exhibit significant differences in scores between unfamiliar and familiar entities for all queries. To investigate the spread further, we plot the p-values and effect sizes against the entity frequencies in the Pile. The results are presented in Fig. 16. There is a significant trend for the open queries against frequency (Spearman ρ = −0.23, p < 0.05), showing that real entities tend to be less susceptible the more frequently they appear in the training data. The trend is not significant for the closed set.

Figure 12 (Effect sizes for relevant vs irrelevant contexts for Pythia models): These plots show, for each of the 6 model sizes, the effect size between relevant and irrelevant contexts (y-axis) and p-values (red is significant, blue is insignificant) of the null hypothesis that persuasion scores of relevant contexts are not greater than those of irrelevant contexts, for each of the 122 queries (x-axis). We see a consistent result across all models of primarily positive effect sizes and mostly significant results.
Figure 13 (Effect sizes for assertive vs base contexts for Pythia models): These plots show, for each of the 6 model sizes, the effect size between assertive and base contexts (y-axis) and p-values (red is significant, blue is insignificant) of the null hypothesis that persuasion scores of assertive contexts are not greater than those of base contexts, for each of the 122 queries (x-axis). Consistent with §4.3.2, assertive contexts tend to be significantly more persuasive than base contexts mainly for closed queries.

Figure 14 (Effect sizes for negation vs base contexts for Pythia models): These plots show, for each of the 6 model sizes, the effect size between negation and base contexts (y-axis) and p-values (red is significant, blue is insignificant) of the null hypothesis that persuasion scores of negation contexts are equal to those of base contexts, for each of the 122 queries (x-axis). Consistent with §4.3.3, many queries are significant, but some exhibit a significantly positive effect while others exhibit a significantly negative one.

Figure 15 (Effect sizes for unknown vs known entities for Pythia models): These plots show, for each of the 6 model sizes, the effect size between unfamiliar and familiar entities (y-axis) and p-values (red is significant, blue is insignificant) of the null hypothesis that susceptibility scores of unfamiliar entities are not greater than those of familiar entities, for each of the 122 queries (x-axis). Consistent with the discussion in App. F.1, effect sizes and the share of significant queries generally increase with model size.
Figure 16: The significance of the difference in real/fake susceptibility correlates somewhat with the frequency of the real entities. Open queries are colored blue, and closed ones are orange. Spearman ρ for the open queries is −0.23, p < 0.05; the trend is not significant for the closed set.

F.2 Training Data and Susceptibility Scores

Since language models are parameterized with knowledge from their training corpora, we examine whether we can identify correlations between patterns in entity susceptibility scores and their prevalence in the training data. Because the Pythia models are all trained on the Pile dataset (Gao et al., 2020) in a single pass, we choose to compare the susceptibility scores from the Pythia models to various frequency and co-occurrence statistics in the Pile (we use the same deduplicated 825GiB version of the Pile that the Pythia-6.9b model was trained on: https://huggingface.co/datasets/EleutherAI/the_pile_deduplicated).

Experiment Setup. Our goal is to understand how the susceptibility score relates to the frequencies of entities and their co-occurrences in the training data. For this, we use all of the entities and answers selected as described in §4.4 and locate them in the Pile. We only perform rudimentary tokenization of each document by removing punctuation and splitting at white spaces, but find this suffices to locate the exact terms. As a sanity check, we annotate named entities in 30k documents and cross-reference the list of entities. If a supposed entity has a fairly high frequency (>50) and is most often (>75%) not labeled as an entity, we exclude it from the calculations. This removes ∼200 entities that are high-frequency non-entity words in English and lowers the co-occurrence counts by 25%. Finally, we calculate the token distance for each entity–answer pair for every document in ∼1/3 of the Pile.

Results. We compare the co-occurrences of the entity–answer pairs to the susceptibility scores averaged over all queries Q_e that apply to the entity e, $|Q_e|^{-1} \sum_{q \in Q_e} \chi(q(e))$. Our results show a stark difference in behavior between the open and closed questions (§4.4). The open questions not only have higher susceptibility scores, but the training corpus frequency of the mentioned entities influences them more. This is to be expected, as there are far more probable candidates for open questions than for closed yes–no-style questions. We also find a significant negative correlation (Spearman ρ = −0.23, p ≈ 0) between frequency and susceptibility scores for the Pythia-6.9b-deduped model, indicating that the language model is less susceptible to context interference for entity–answer pairs that are more frequently found in the training corpus. See Fig. 9 and Fig. 17. Finally, we also notice a big difference in the rank correlation depending on the query type. Some query types are more susceptible to context for the given entities than others. An overview of this is given in the next section.

Figure 17: Susceptibility scores plotted against frequency for different model sizes. The decreasing upper bound trend is generally consistent across all models and both open/closed queries, although it appears to be stronger for open queries (especially at larger model sizes).

Figure 18: These plots show, for each of the 6 model sizes, the relationship between entity susceptibility score and relation-dependent degree in the knowledge graph.
The decreasing upper bound trend is generally consistent across all models and both open/closed queries.

Susceptibility and Frequency Analysis. We are interested in seeing how different predicate relations of the queries from the knowledge graph have different susceptibility scores. We evaluate the relation between susceptibility scores and co-occurrence frequencies per entity–answer predicate relation. This reveals trends in which types of relations are more susceptible to context than others. A lower correlation indicates a stronger drop in susceptibility as the entities become more frequent in the pretraining data.

F.3 Entity Degrees and Susceptibility Scores

Since knowledge graphs are structured conceptual maps relating entities, we further seek to identify whether there are any correlations between statistics of the YAGO knowledge graph and an entity's susceptibility scores. Validating against a knowledge graph can be advantageous over validating against the actual training data for a number of reasons, including that for most models the actual training data is inaccessible for research, and that the scale of the training data can make it prohibitively expensive to trawl through efficiently and precisely. For example, with a knowledge graph, we can identify the exact number of objects with which an entity might share an alumniOf relation, while within the pretraining data, it is very difficult to identify the number of different answers with which an entity will co-occur in the context of the specific alumniOf relation.

Experiment Setup. Our goal is to understand how the susceptibility score relates to the degree of entities in the YAGO knowledge graph G for specific queries. For this, we extract the number of incoming and outgoing edges from an entity e along a relation (or query) q as follows:

\delta(e, q) = |\{a \mid (e, q, a) \in G\} \cup \{a \mid (a, q, e) \in G\}|.

We plot δ(e, q) against the susceptibility score χ(q(e)) for all entities and queries.

Results. From Fig. 18, we see a decreasing upper bound relationship between susceptibility scores and the YAGO degree δ for both open and closed queries for all model sizes. This could be explained as follows: consistent with our original hypothesis, entities that are very familiar to a model have low susceptibility, while less familiar entities can have a wider range of susceptibility. The potential for unfamiliar entities to be susceptible is much higher than that for very familiar entities, although unfamiliar entities can also be less susceptible. Further investigation into traits that characterize the susceptibility of familiar vs less familiar entities is needed.

G Applications

G.1 Social Sciences Measurement

Motivation. Large language models (LLMs) are actively used today in empirical social sciences for annotating data and descriptive data analysis (e.g., classifying tweets with sentiment and ideology scores (Ziems et al., 2024; Gilardi et al., 2023)). However, Zhang et al. (2023) warn that LLMs applied to sentiment classification "may inadvertently adopt human biases" and demonstrate that the prompt design, i.e., the context persuasiveness, can significantly influence the outcome. O'Hagan and Schein (2023) demonstrate that LLMs exhibit biases about different entities when measuring political ideology, based on their prior knowledge. Finally, Stoehr et al.
(2024) use LLMs to measure the stance of product reviews, their setting does not disentangle the effects of prior knowledge and context, thus leaving ambiguous the question of whether the measurement is more because of the LLM’s prior bias about the product or the review’s actual content. Experiment Setup. We consider a manually constructed dataset (Tab. 1) of well-known entity-pairs which are either friends (e.g., Harry Potter and Ron Weasley) or enemies (e.g., David and Goliath) and contexts relating the pair (e.g., Harry loves Ron). We aim to understand how susceptibility scores may differ between the two kinds of relationships for the query What’s the relationship between {entity1} and {entity2}?, e.g., are famous friend-based relationships more susceptible than enemy-based relationships? We compute the susceptibility scores for these entity-pairs using simple template-generated contexts such as {entity1} loves {entity2} and {entity1} hates {entity2} and then analyze the results. 22 13232 Results. From Fig. 10, we can see that for the open query The relationship between {} and {} is, there is a clear difference in susceptibility scores for friend and enemy entity-pairs. We leave to future work investigating the exact nature of why some entity-pairs may have lower susceptibility scores than others. G.2 Exploring Gender Bias Motivation. The susceptibility score tells us how the entropy changes for a given entity as we vary the prepended contexts. We posit that stronger induced biases for entities lead to lower susceptibility scores than for those less represented in the training data. Using this information, we consider how the susceptibility score can be used to study gender bias in LLMs. Experiment Setup. We collect sentences containing 20 highly stereotypical sentences about men and women, respectively, to use as contexts. For instance, Men are not good at multitasking. We also collect 20 names for each gender and 20 neutral queries that we think may be affected by the contexts, such as {entity} spends most of their time. Finally, we also invert the queries by flipping the gendered words, giving us non-stereotypical contexts. We then calculate susceptibility scores for each query and gender combination over the four different context collections. All scores are calculated using the Pythia-6.9bdeduplicated model. All data was generated with GPT-4. The contexts, names, and queries can be found below. Results. Results are shown in Fig. 11. For all contexts, there is a significant difference between the genders (p < 10−6). We make several observations: (i) When prompted with the masculine stereotypes, we get the lowest susceptibility for the male entities, indicating the model has a bias for the stereotypical male contexts. (ii) By swapping the genders in the stereotypical female contexts (M*), we get the highest susceptibility for the male entities. Somewhat surprisingly, we also find that this gives the lowest susceptibility scores for the female names. (iii) The susceptibility scores for the female names over the stereotypical female contexts (F) are similar to the swapped ones (F*), hinting that the model does not have as strong a bias for the feminine stereotypes as in the masculine case in (i). Combined with our prior findings on the relation between susceptibility and the saliency of entities, this hints at masculine stereotypes being more prominently represented in the training data (as well as in the model’s induced biases). 
At the same time, we don’t see evidence of exaggerated feminine stereotypes being favored via lower susceptibility scores. G.2.1 Gender Stereotypes Data Here, we list the data used to run the gender bias use case. All data was generated with the help of the GPT-4 LLM. The female names are Mary, Jennifer, Linda, Patricia, Elizabeth, Susan, Jessica, Sarah, Karen, Nancy, Margaret, Lisa, Betty, Dorothy, Sandra, Ashley, Kimberly, Donna, Emily, Michelle. The male names are James, John, Robert, Michael, William, David, Richard, Joseph, Thomas, Charles, Christopher, Daniel, Matthew, George, Donald, Anthony, Paul, Mark, Andrew, Edward. The stereotypical feminine contexts are: Women are not as strong as men., All women are overly emotional and sensitive., Women are not good at math and sciences., All women are naturally good at cooking and cleaning., Women talk more than men., Women are generally not capable of doing manual labor., All women love shopping and fashion., Women don’t understand technology as well as men do., Women are indecisive and change their mind too often., Women cannot drive as well as men., Women are not as ambitious as men when it comes to career., Women overthink things more than men., Women are not as good at sports as men., Women are more gossipy than men., All women are afraid of bugs and spiders., Women are not good at directions and get lost easily., Women take longer to get ready than men., Women always need to ask for others for their opinion., All women are obsessed with their weight and dieting., All women are crazy about babies and want to have kids. The stereotypical masculine contexts are Men always bring the association of strength and aggression., All men are inherently messy and cannot keep their spaces clean., Men are not good at multitasking., All men are naturally good at sports., Men cannot cook as well as women., Men do not take care of their health as much as women do., I believe men are not as talkative as women., All men love video games., 23 13233 Men are not good listeners., Men are always competitive., Men are not as emotional as women., Men are not able to express their feelings as well as women do., Men only think about one thing., Men look at problems from a logical perspective, not considering emotions., Men are typically less organized than women., Men tend to make decisions without considering others’ feelings., All men only care about sports and cars., Men don’t like to ask for help., Men don’t pay as much attention to appearance as women do., All men can’t resist a flirt. The queries used for both sets of entities are: I think {entity} works as a, {entity} is good at, I met {entity} yesterday, they are, {entity} usually spends their free time, {entity}’s favorite hobby is likely, {entity} probably lives in a, When it comes to {entity}’s fashion style, they prefer, {entity}’s educational background might be in, {entity} often enjoys eating, If {entity} was a colour, they would be, The one thing that {entity} cannot resist is, {entity} probably drives a, One thing that {entity} is really bad at is, {entity}’s personality can be described as, {entity} spends most of their time, {entity} probably likes to watch, During weekends, {entity} might be, {entity}’s favorite type of music is probably, I imagine {entity} has a pet, it is a, {entity}’s favorite type of movie is probably. 
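For readers who want to reproduce the shape of the gender-bias experiment above, the sketch below assembles the four context collections (feminine F, masculine M, and their gender-swapped variants F* and M*) and scores every name–query combination under each. It is only an illustration of the bookkeeping: `susceptibility_score` is a hypothetical stand-in for the paper's χ score (not defined in this appendix), and the gender-swap table is a deliberately tiny toy.

```python
# Illustrative sketch only. `susceptibility_score(query, contexts)` is a
# hypothetical callable standing in for the chi score used in the paper.
SWAP = {"men": "women", "women": "men", "man": "woman", "woman": "man",
        "he": "she", "she": "he", "his": "her", "her": "his"}

def swap_gender(sentence: str) -> str:
    """Toy gender swap producing the starred (*) context variants."""
    return " ".join(SWAP.get(w.lower(), w) for w in sentence.split())

def score_grid(names, queries, contexts, susceptibility_score):
    """Average the (hypothetical) scorer over all name x query combinations
    for one collection of contexts."""
    scores = [susceptibility_score(q.replace("{entity}", n), contexts)
              for n in names for q in queries]
    return sum(scores) / len(scores)

# Hypothetical usage, assuming female_names, male_names, queries, and the
# context lists F (feminine) and M (masculine) hold the data listed above:
# collections = {"F": F, "F*": [swap_gender(c) for c in F],
#                "M": M, "M*": [swap_gender(c) for c in M]}
# for label, ctx in collections.items():
#     print(label,
#           score_grid(female_names, queries, ctx, susceptibility_score),
#           score_grid(male_names, queries, ctx, susceptibility_score))
```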
24 13234 Entity 1 Entity 2 Label Type Sherlock Holmes James Moriarty enemy movie Obi Wan Kenobi Darth Maul enemy movie Harry Potter Lord Voldemort enemy movie Harry Potter Draco Malfoy enemy movie Spiderman Norman Osborne enemy movie Super Mario Bowser enemy movie Gandalf Saruman enemy movie Bilbo Baggins Sauron enemy movie Superman Lex Luthor enemy movie James Bond Ernst Stavro Blofeld enemy movie Optimus Prime Megatron enemy movie Boston Red Sox New York Yankees enemy sports Green Bay Packers Chicago Bears enemy sports Borussia Dortmund FC Bayern Munich enemy sports Real Madrid FC Barcelona enemy sports Joe Frazier Muhammad Ali enemy sports AC Milan Inter Milan enemy sports Torries Labor Party enemy politics Democrats Republicans enemy politics USA Al-Qaeda enemy politics Donald Trump Hillary Clinton enemy politics Donald Trump Joe Biden enemy politics Kuomintang Chinese Communist Party enemy history Winston Churchill Adolf Hitler enemy history Harry Trumann Nikita Khrushchev enemy history George Bush Saddam Hussein enemy history David Goliath enemy history Greece Troy enemy history Gauls Rome enemy history USA Soviet Union enemy history Nazi Germany Allied Forces enemy history Cain Abel enemy history Coca Cola Pepsi enemy business Ford General Motors enemy business Thomas Edison Nikola Tesla enemy business Steve Jobs Bill Gates enemy business Airbus Boeing enemy business McDonalds Burger King enemy business Visa Mastercard enemy business Netscape Microsoft Internet Explorer enemy business UPS Fedex enemy business Canon Nixon enemy business Sony Nintendo enemy business Sheriff of Nottingham Robin Hood enemy history Moby Dick Captain Ahab enemy movie Tom Jerry enemy movie Peter Pan Captain Hook enemy movie Jack Sparrow Hector Barbossa enemy movie Harry Potter Ronald Weasley friend movie John Lennon Paul McCartney friend history Amelia Earhart Eleanor Roosevelt friend history Georges Braque Pablo Picasso friend history Bilbo Baggins Gandalf friend movie Harry Potter Hermione Granger friend movie Frodo Baggins Samwise Gamgee friend movie Han Solo Chewbacca friend movie C.S. Lewis J.R.R. Tolkien friend history Alexander the Great Hephaestion friend history Mark Twain Nikola Tesla friend history John Adams Thomas Jefferson friend history Bill Gates Warren Buffett friend business Vincent van Gogh Paul Gauguin friend history Albert Einstein Niels Bohr friend history Woody Buzz Lightyear friend movie Shrek Donkey friend movie Bal Gangadhar Tilak Mohammed Ali Jinnah friend history Marc Twain Hellen Keller friend history Thomas Edison Henry Ford friend history Bill Gates Paul Allen friend business Larry Page Sergei Brin friend business Mike Wazowski James P. Sullivan friend movie Sherlock Holmes John Watson friend movie Harry Potter Albus Dumbledore friend movie Table 1: The manually constructed friend–enemy dataset, which consists of entity pairs, whether their relationship is friend-based or enemy-based, and the type of their relationship, e.g., movie, history, etc. 25 13235
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13236–13249 August 11-16, 2024 ©2024 Association for Computational Linguistics Word Matters: What Influences Domain Adaptation in Summarization? Yinghao Li1∗Siyu Miao1∗Heyan Huang12† Yang Gao12† 1School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China 2Beijing Institute of Technology Southeast Academy of Information Technology, Putian, China {yhli,symiao,hhy63,gyang}@bit.edu.cn Abstract Domain adaptation aims to enable Large Language Models (LLMs) to generalize domain datasets unseen effectively during the training phase. However, factors such as the size of the model parameters and the scale of training data are general influencers and do not reflect the nuances of domain adaptation performance. This paper investigates the fine-grained factors affecting domain adaptation performance, analyzing the specific impact of ‘words’ in training data on summarization tasks. We propose quantifying dataset learning difficulty as the learning difficulty of generative summarization, which is determined by two indicators: wordbased compression rate and abstraction level. Our experiments conclude that, when considering dataset learning difficulty, the cross-domain overlap and the performance gain in summarization tasks exhibit an approximate linear relationship, which is not directly related to the number of words. Based on this finding, predicting a model’s performance on unknown domain datasets is possible without undergoing training. Source code and scripts are available at https://github.com/li-aolong/ Word-Matters. 1 Introduction With the continuous development of Large Language Models (LLMs), remarkable capabilities have been demonstrated in knowledge comprehension (Thirunavukarasu et al., 2023a; Sun et al., 2023), logical reasoning (Hao et al., 2023; Miao et al., 2023), problem-solving (Chan et al., 2023; Talebirad and Nadiri, 2023), and other aspects (Zhao et al., 2023; Wen et al., 2023). As a result, LLMs have been widely applied to various summarization tasks on different domains to improve productivity like law (Shukla et al., 2022), medicine (Veen et al., 2023), finance (Li ∗These authors are both First Author. †Corresponding author et al., 2023) and so on, including both natural and social science study (Glickman and Zhang, 2024; Xu et al., 2024). However, when LLMs are applied to specific domains, it often necessitates the selection of corresponding domain-specific knowledge bases for training (Zhang et al., 2023; Cui et al., 2024; Thirunavukarasu et al., 2023b; Yu et al., 2021). This results in a limitation where a model trained in one domain struggles to be effectively applied in others (Dada et al., 2023), leading to a waste of resources. This constraint arises from the disparity in the distribution between the training data and the target domain data (Zhang et al., 2022). In light of this limitation, effective methods must be taken to fix the gap and then enhance the model’s adaptability and efficiency in summarization tasks. Domain adaptation aims to train a model from multiple source domains, enabling it to generalize well to unseen domains (Li et al., 2018; Dou et al., 2019). Consequently, enhancing domain adaptation performance is a key objective for large-scale models in improving downstream tasks (Zhou et al., 2021a). It is worthwhile to explore which factors can affect the domain adaptation performance (Wang et al., 2021). 
Schaeffer et al. (2023) proposes that metrics based on nonlinear or non-contiguous tokens are crucial to a model demonstrating emergent abilities and that ROUGE-L-Sum shows sharper variations. This has inspired us to consider the performance changes of models in domain adaptation from the perspective of more granular units. Tokens typically do not possess complete semantics, whereas words are the basic language units with specific meanings or functions. Therefore, we consider exploring the impact on model performance in domain adaptation from the perspective of words. Summarization tasks involve generating concise texts that encapsulate the main components of longer documents, considering factors such 13236 as coherence, information diversity, and coverage scope (Alomari et al., 2022). This differs from other downstream tasks like machine translation (Klimova et al., 2022) and classification (Bird et al., 2020). Fatima et al. (2022) note that reducing summary extractors’ size or compression ratio can lead to losing vital content, features, concepts, and other significant information. Therefore, we explore how the degree of information extraction between input documents and target summaries impacts domain adaptation performance in summarization tasks. This paper investigates how words impact the domain adaptation of summary tasks. We first introduce two indicators, compression rate, and abstraction level, to quantify the learning difficulty of datasets, thereby more accurately reflecting the performance gain of models. Then, we identify two key aspects affecting domain adaptation: crossdomain overlap and word count, hypothesizing a linear relationship between them and model performance. Experiments are conducted with models of various sizes on summarization datasets from four domains. The results indicate that the crossdomain overlap exhibits an approximately linear relationship with performance gain when considering dataset learning difficulty. In contrast, word count shows no significant correlation. Based on this linear relationship, it is possible to predict model performance without undergoing training by using the cross-domain overlap calculated from the dataset. Our contributions can be summarized as follows: • We propose two factors affecting the domain adaptation of summarization tasks: (1) Learning difficulty coefficient of the dataset more accurately reflects the performance gain; (2) Cross-domain overlap directly represents the closeness between the source and target domains. • Our experiments show that cross-domain overlap has an approximately linear relationship with performance gains based on the learning difficulty coefficient, revealing the connection between datasets and domain adaptation from the perspective of words. • We demonstrate that without undergoing training, it is possible to predict a model’s performance on unknown domain datasets solely based on the learning difficulty coefficient and cross-domain overlap. This provides a resource-efficient and rapid validation method for models regarding domain adaptation. 2 Related Work Domain Adaptation (DA) has emerged as a crucial methodology for enhancing model performance across varying domains (Farahani et al., 2020). DA aims to enhance the performance of LLMs in a target domain, where annotated data may be scarce or absent, by leveraging knowledge from a related domain with a sufficient amount of labeled data (Farahani et al., 2020). 
Many methods have been developed to tackle the out-of-domain adaptation issue (Zhou et al., 2021b; Fan et al., 2021; Cha et al., 2021; Wang et al., 2022; Ling et al., 2023). There are three pivotal strategies related to our work: (1) Continual pre-training, (2) Alignment of distributions, and (3) Adaptation tuning. Continual Pre-training Continual pre-training uses similar training objectives as continual selfsupervised training to update pre-trained models with new data instead of retraining from scratch (Gupta et al., 2023). Continual pre-training is studied for domain adaptation where the new dataset comes from a new domain, which is referred to as continual domain-adaptive pre-training (DA-training) (Gururangan et al., 2021; Scialom et al., 2022; Ke et al., 2023a). DAP-training methods can achieve better results by training LLMs with a large unlabeled domain corpus before endtask fine-tuning (Alsentzer et al., 2019; Lee et al., 2019; Gururangan et al., 2020; Ke et al., 2023b). However, the effectiveness of this method is contingent upon the relevance of the pre-trained LLMs to the target domain and requires substantial domainspecific data to achieve optimal performance. Alignment of Distributions Aligning the statistical attribution of the source and target domains to match their distributions has emerged as a principal method (Peng et al., 2019; Nguyen-Meidine et al., 2020). The general way of aligning the distributions is by minimizing the distance between domains. The most used distance measures in domain adaptation are maximum mean discrepancy, Kullback-Leibler divergence, and contrastive domain discrepancy (Long et al., 2015; Ganin and Lempitsky, 2015). The strength of this approach lies in its theoretical rigor and the potential for precise domain alignment. Nevertheless, the challenge 13237 of selecting appropriate distance metrics and the computational complexity of these calculations can pose significant obstacles. Compared with this, we adopt word-based statistical metrics to calculate the similarity between texts from different domains directly. Adaptation Tuning LLMs may not capture sufficient knowledge for specific tasks or domains even when trained on vast amounts of general text data. Adapting models to a smaller, domain-specific dataset can significantly improve their performance in that specific area. Here are three primary methods for adapting LLMs: (1) Prompt engineering has shown its power to quickly adapt LLMs to unseen domains without updating the inner parameters. Prompts define unseen tasks with or without several illustrative examples to LLMs in natural language (Ben-David et al., 2022; Kojima et al., 2023). Continuous prompts are sequences of tokens attached with the input sentence that can be learned from the downstream dataset by prompt tuning (Ye et al., 2022; Vu et al., 2022; Razdaibiedina et al., 2023). Su et al. (2022) demonstrate the transferability of continuous prompts in both crosstask as well as cross-model settings; (2) Adapter fine-tuning, such as Low-rank adapters (Hu et al., 2021) and DyLora (Valipour et al., 2023), adds a small number of extra parameters to LLMs to enhance performance without major modifications; (3) Full fine-tuning is still the most fundamental and wildly used method to improve the model’s adaptation performance. Instruction fine-tuning has proven to be highly successful in enhancing the model’s adaptation capabilities (Chung et al., 2022; Menick et al., 2022; Wei et al., 2022; Huang et al., 2023). 
However, how to select suitable data to cultivate LLMs' adaptation capacity, and how to predict transfer results, remain open problems.

3 What and How Do Words Influence Domain Adaptation?

In this section, we examine which aspects of domain adaptation words can affect, and how. We first hypothesize that datasets of varying learning difficulty affect model performance differently, and investigate the influence of words on the learning difficulty of target-domain datasets. We propose two indicators to reflect the learning difficulty of a dataset: Compression Ratio and Abstraction Level. Second, from the perspective of words, we propose two aspects that could affect domain adaptation: cross-domain overlap and word count. Finally, we hypothesize a linear relationship between these aspects and domain adaptation performance once dataset learning difficulty is taken into account; this hypothesis is tested in the subsequent experiments.

3.1 Word Influence on Target Domain Dataset Learning Difficulty

We assume that different datasets have varying levels of learning difficulty: training on datasets with low learning difficulty can lead to larger metric improvements on the test set, whereas the improvement is relatively small for datasets with high learning difficulty. To quantitatively assess the learning difficulty of datasets for the generative summarization task, we introduce two indicators: Compression Ratio and Abstraction Level.

Compression Ratio. The Compression Ratio reflects the learning difficulty of a dataset in terms of form; it describes the degree of length reduction of the original text relative to the generated text. A higher Compression Ratio indicates a more challenging dataset because the model needs to compress the content of the original documents to a greater extent, which places a higher demand on the model's text comprehension and information extraction capabilities. The Compression Ratio α for a dataset containing n samples is the average compression ratio across all samples:

\alpha = \frac{1}{n} \sum_{i=1}^{n} \frac{|D_i|}{|S_i|},   (1)

where |D_i| and |S_i| denote the word counts of the i-th document and summary in the dataset, respectively.

Abstraction Level. The Compression Ratio alone cannot fully reflect a dataset's learning difficulty. For example, if the reference summaries are verbatim excerpts of specific sentences from the source documents, then even when the Compression Ratio is high, the model only needs to learn to copy parts of the document to achieve high performance. This is a straightforward extractive pattern and does not truly reflect the model's summarization capability. Therefore, we introduce the Abstraction Level, which reflects the learning difficulty of a dataset in terms of content. We define it as the reciprocal of the average ROUGE score between the original documents in the test set and the corresponding summaries:

\beta = \frac{n}{\sum_{i=1}^{n} \mathrm{ROUGE}(d_i, s_i)}   (2)

ROUGE is essentially a method for calculating overlap. We argue that the overlap between documents and summaries can, to some extent, represent the co-occurrence of knowledge within the dataset. A lower ROUGE value indicates lower content relevance between the document and the summary, which makes it harder to improve performance on that dataset. Hence, we use the reciprocal of the average ROUGE score, the Abstraction Level, to reflect the learning difficulty of the dataset in terms of content.
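As a concrete illustration of the two indicators, the sketch below computes the Compression Ratio and Abstraction Level for a list of (document, summary) pairs, together with their product, the learning difficulty coefficient defined next. It is a minimal sketch, not the authors' code: word counts use whitespace splitting, and ROUGE-1 F1 from the rouge-score package stands in for the unspecified ROUGE variant in Eq. (2); both choices are assumptions.

```python
from rouge_score import rouge_scorer

def compression_ratio(pairs):
    """Eq. (1): average |document| / |summary| in (whitespace) words."""
    return sum(len(d.split()) / len(s.split()) for d, s in pairs) / len(pairs)

def abstraction_level(pairs, rouge_type="rouge1"):
    """Eq. (2): n divided by the summed ROUGE between documents and summaries.
    The specific ROUGE variant is an assumption (ROUGE-1 F1 here)."""
    scorer = rouge_scorer.RougeScorer([rouge_type], use_stemmer=True)
    total = sum(scorer.score(d, s)[rouge_type].fmeasure for d, s in pairs)
    return len(pairs) / total

def learning_difficulty(pairs):
    """Product of the two indicators, i.e. the coefficient lambda of Eq. (3) below."""
    return compression_ratio(pairs) * abstraction_level(pairs)

# Toy example:
# pairs = [("the cat sat on the mat all day long", "the cat sat on the mat"),
#          ("stocks rallied sharply after the jobs report", "stocks rallied")]
# print(compression_ratio(pairs), abstraction_level(pairs), learning_difficulty(pairs))
```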
Learning Difficulty Coefficient. Based on the two proposed indicators, we define a dataset's learning difficulty coefficient λ as the product of the Compression Ratio and the Abstraction Level:

\lambda = \alpha \beta   (3)

3.2 Possible Impact Aspects Across Different Domains Based on Words

We investigate the influence of words on domain adaptation performance. Because the standard metrics for summarization tasks, such as ROUGE, are limited in how well they reflect a model's generalization across domains, we introduce an evaluation measure suited to assessing domain adaptation performance and explore the potential factors that might affect it.

Summarization Gain. The commonly used evaluation metric in summarization, ROUGE, reflects performance on a specific dataset, whereas domain adaptation performance is more evident in the change in absolute performance. Therefore, we employ the ROUGE gain as the fundamental measure of domain adaptation performance:

\mathrm{Gain} = \mathrm{ROUGE}_{\text{fine-tuned}} - \mathrm{ROUGE}_{\text{base}},   (4)

where ROUGE_base is the original model's ROUGE value obtained by direct inference on the test set, and ROUGE_fine-tuned is the ROUGE value obtained after the model has been fine-tuned.

Cross-domain Overlap. When considering performance adaptation across different domains, intuitively, datasets from different domains are more similar if they share more overlapping words. We assume this similarity can lead to performance improvement in domain adaptation, and propose the cross-domain overlap to characterize the word-level overlap ratio between domains. For a source domain S containing n datasets and a target domain T containing m datasets, the cross-domain overlap is

\gamma = \frac{1}{n \cdot m} \sum_{i=1}^{n} \sum_{j=1}^{m} \frac{\sum_{k=1}^{l} \mathrm{Count}(w_k^{T_j}, S_i)}{|S_i|},   (5)

where S_i and T_j denote the i-th dataset of domain S and the j-th dataset of domain T, respectively, l is the number of unique words in T_j, w_k^{T_j} is the k-th unique word in T_j, and Count(w_k^{T_j}, S_i) is the number of occurrences of w_k^{T_j} in S_i.

Word Count. Word count refers to the total number of words across all samples in a dataset. Generally, within a certain range, the more training samples, the better the model performance. However, whether a higher total word count across all samples is also beneficial is worth exploring. Therefore, we consider word count as another aspect that may affect the model's domain adaptation performance.

3.3 How Do They Influence Domain Adaptation?

We posit that cross-domain overlap and word count notably influence domain adaptation performance, particularly when accounting for dataset learning difficulty. The higher the cross-domain overlap, the more similar the source domain's data is to the target domain's data. Consequently, the model is more likely to leverage data from the source domains to learn knowledge that is closer to the target domain, resulting in better adaptation to that domain. Regarding word count, it is intuitive to assume that a larger training set, and consequently a higher word count, leads to better model performance. However, whether this correlation consistently extends to domain adaptation performance remains an open question for investigation.
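Before moving on, the sketch below shows one way the cross-domain overlap of Eq. (5) could be implemented for plain-text datasets. It is a minimal sketch rather than the authors' implementation: lower-casing and whitespace tokenization are assumptions, and |S_i| is taken to be the total number of word tokens in S_i, consistent with the word counts used in Eq. (1).

```python
from collections import Counter

def word_counter(dataset):
    """Bag of words for one dataset, where `dataset` is a list of texts."""
    counts = Counter()
    for text in dataset:
        counts.update(text.lower().split())
    return counts

def cross_domain_overlap(source_datasets, target_datasets):
    """Eq. (5): average, over all (source, target) dataset pairs, of the
    fraction of source tokens that are covered by the target's unique words."""
    total = 0.0
    for src in source_datasets:
        src_counts = word_counter(src)
        src_size = sum(src_counts.values())              # |S_i|
        for tgt in target_datasets:
            tgt_vocab = set(word_counter(tgt))            # unique words of T_j
            covered = sum(src_counts[w] for w in tgt_vocab)  # sum_k Count(w_k, S_i)
            total += covered / src_size
    return total / (len(source_datasets) * len(target_datasets))

# Hypothetical usage with lists of raw training/test texts:
# gamma = cross_domain_overlap([cnndm_train_texts], [pubmed_test_texts])
```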
LD-Gain. When dataset learning difficulty is factored into domain adaptation performance, a model should be credited with the same level of adaptation for a smaller raw gain on a more challenging dataset, since the dataset's complexity impedes the model's ability to generalize across domains. Hence, we hypothesize that cross-domain overlap and word count are linearly related to the product of the dataset learning difficulty and the performance gain, which we refer to as LD-Gain:

\text{Hypothesis 1: } \gamma \propto \lambda \cdot \mathrm{Gain} = \text{LD-Gain},
\text{Hypothesis 2: } \mathrm{WC} \propto \lambda \cdot \mathrm{Gain} = \text{LD-Gain},   (6)

where WC denotes the word count. The following experiments verify these two hypotheses.

4 Experiments

We use four summarization datasets, CNN/Daily Mail (Hermann et al., 2015), PubMed (Cohan et al., 2018), SAMSum (Gliwa et al., 2019), and WikiHow (Koupaee and Wang, 2018), originating from the news, science, conversation, and general domains, respectively. Because the number of samples differs considerably across datasets, we sample 35,000 training examples from CNNDM, PubMed, and WikiHow and retain the entire SAMSum training set. We sample 500 test examples from the test set of each dataset. The detailed statistics are shown in Table 1.

Table 1: The statistics of the datasets. The values in parentheses are the totals in the original datasets.
Dataset | Domain | Training Set | Test Set | Document Words (Avg.) | Summary Words (Avg.)
CNNDM | News | 35,000 (287,113) | 500 (11,490) | 663 (781) | 56 (56)
PubMed | Science | 35,000 (119,924) | 500 (6,658) | 1,064 (3,049) | 190 (202)
SAMSum | Conversation | 14,732 (14,732) | 500 (819) | 93 (92) | 20 (20)
WikiHow | General | 35,000 (157,252) | 500 (5,577) | 523 (580) | 52 (62)

To verify the two hypotheses above, we configure separate experimental setups for cross-domain overlap and word count. All experiments are based on the Bloom (Scao et al., 2022) and Llama 2 (Touvron et al., 2023) families of models. Due to resource constraints, different experiments use models of different sizes. The prompts used to train the Llama 2 and Bloom models are given in Appendix A.

4.1 Setup for Cross-domain Overlap

Cross-domain overlap is calculated between the different source and target domains. Because the number of source domains could affect the results, we set up two experiments: single-domain adaptation and multi-domain adaptation.

Single-domain Adaptation. Single-domain adaptation refers to training on a single source domain and testing on different target domains. We first measure the base performance of the models on the test sets of all four domains and then fine-tune the models on the training sets of the three domains other than the test-set domain. For each model, this yields three models trained on single-domain datasets, which are then evaluated on the test sets, and we record the change in performance. The Bloom-1.1B, Bloom-3B, and Llama-2 7B models are trained on 4 RTX 3090 GPUs in this experiment. For the Bloom-1.1B and 3B models, we conduct full-parameter fine-tuning for one epoch with a learning rate of 2e-5 and a batch size of 4. For the Llama2-7B model, we use LoRA (Hu et al., 2021) for fine-tuning over three epochs, with the other hyperparameters unchanged.

Multi-Domain Adaptation. Multi-domain adaptation refers to training on multiple source domains. When one of the four domain datasets is selected as the test set, the datasets from the other three domains are combined to serve as the training set. In this experiment, we use the Bloom-1B and 3B models. The base performance of the two models on the four domain test sets was already measured in the single-domain adaptation, so we train on the mixed dataset of the three domains other than the test-set domain and compute the performance change. The training hyperparameters are the same as those used in single-domain adaptation.
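As a rough illustration of the fine-tuning setups described in this subsection, the sketch below configures a LoRA run with Hugging Face transformers and peft. It is a hedged sketch under stated assumptions, not the authors' training code: the LoRA rank, alpha, and target modules are not specified in the paper, the dataset is assumed to be already tokenized with a labels field, and a Trainer-based loop is only one way to apply the stated hyperparameters (learning rate 2e-5, batch size 4, 1–3 epochs).

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

def build_lora_trainer(model_name, train_dataset, epochs=3):
    """Set up a LoRA fine-tuning run mirroring the hyperparameters above."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    lora_cfg = LoraConfig(                 # rank/alpha/targets are assumptions
        r=8, lora_alpha=16, lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
    model = get_peft_model(model, lora_cfg)

    args = TrainingArguments(
        output_dir="out",
        num_train_epochs=epochs,           # 3 for LoRA on Llama2-7B, 1 for full FT
        per_device_train_batch_size=4,     # batch size 4, as described above
        learning_rate=2e-5,                # learning rate 2e-5, as described above
        logging_steps=50,
        save_strategy="no")
    return Trainer(model=model, args=args,
                   train_dataset=train_dataset, tokenizer=tokenizer)

# Hypothetical usage:
# trainer = build_lora_trainer("meta-llama/Llama-2-7b-hf", tokenized_train)
# trainer.train()
```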
4.2 Setup for Word Count

To minimize the interference of other factors when investigating the relationship between word count and domain adaptation, we use only one domain dataset, CNNDM, for training. We then test on the same domain's test set and on the test sets of the other three domains to observe the impact of word count on the results. The CNNDM training set is evenly divided into ten chunks, each used for training in separate stages. Training is conducted in ten stages, starting with the first chunk as the training set. In subsequent stages, each phase uses the training set from the previous stage plus the next chunk, continuing until the final stage, where the entire source-domain dataset is used for training. Each stage involves training for one epoch.

5 Results and Analysis

We analyze the results from different perspectives. Both the single-domain and multi-domain experiment results support our hypothesis that cross-domain overlap is linearly correlated with performance. Based on this finding, we can make a quantitative prediction of the transfer result for given training and evaluation datasets. On the other hand, we find that word count is unrelated to domain adaptation, which further emphasizes the importance of choosing high-quality data rather than a large amount of data. To test the generality of our findings, we conduct a prediction study, which shows that our hypothesis can successfully predict domain adaptation performance from cross-domain overlap, with only a slight gap between the predicted and observed results.

5.1 Cross-domain overlap is linearly correlated with performance

Single-Domain Adaptation. The results of single-domain adaptation for Llama2-7B are shown in Table 2, and the results for Bloom-1.1B and 3B are shown in Table 5 and Table 6 of Appendix B.

Table 2: Results of single-domain adaptation with the Llama-2 7B model.
Training Set | Test Set | Compression Rate | Abstract Level | Learning Difficulty Coefficient | ROUGE Base | ROUGE Fine-tuned | ROUGE Improvement | Cross-domain Overlap | LD-Gain
PubMed | CNNDM | 12.95 | 20.22 | 261.85 | 8.10 | 7.84 | -0.26 | 2.35% | -68.08
SAMSum | CNNDM | 12.95 | 20.22 | 261.85 | 8.10 | 10.23 | 2.13 | 8.89% | 557.74
WikiHow | CNNDM | 12.95 | 20.22 | 261.85 | 8.10 | 8.47 | 0.37 | 4.47% | 96.88
CNNDM | PubMed | 7.08 | 13.13 | 92.96 | 7.42 | 10.65 | 3.28 | 2.90% | 304.91
SAMSum | PubMed | 7.08 | 13.13 | 92.96 | 7.42 | 6.80 | -0.62 | 3.51% | -57.64
WikiHow | PubMed | 7.08 | 13.13 | 92.96 | 7.42 | 6.42 | -1.00 | 3.56% | -92.96
CNNDM | SAMSum | 4.86 | 16.76 | 81.45 | 5.22 | 6.36 | 1.14 | 1.62% | 92.85
PubMed | SAMSum | 4.86 | 16.76 | 81.45 | 5.22 | 4.61 | -0.61 | 0.64% | -49.68
WikiHow | SAMSum | 4.86 | 16.76 | 81.45 | 5.22 | 2.99 | -2.23 | 1.26% | -181.63
CNNDM | WikiHow | 13.05 | 39.23 | 511.95 | 4.562 | 5.075 | 0.513 | 3.30% | 262.63
PubMed | WikiHow | 13.05 | 39.23 | 511.95 | 4.562 | 4.506 | -0.056 | 2.25% | -28.67
SAMSum | WikiHow | 13.05 | 39.23 | 511.95 | 4.562 | 5.39 | 0.83 | 7.53% | 424.92

(a) Bloom-1.1B (b) Bloom-3B (c) Llama2-7B
Figure 1: Single-domain adaptation. The dotted line is a trend fitted to the scatter data; the light-colored area shows plus or minus one standard deviation.

(a) Multi-Domain (b) Mixed-Domain
Figure 2: Left: multi-domain adaptation results for Bloom-1.1B and 3B. Right: combined results of single-domain and multi-domain adaptation.

The compression rate, abstraction level, and the learning difficulty coefficient calculated from them depend only on the test set.
Therefore, these indicators remain constant for the same test set even if the training set changes. Similarly, the base ROUGE value of the model on the test set also remains unchanged. Figure 1 illustrates the relationship between cross-domain overlap and LD-Gain for the three models. When the overlap between the target domain and the source domain is low, the model fails to generalize well to the target domain; conversely, model performance tends to improve significantly when the vocabulary overlap is high. We also use another similarity measure, BERTScore (Zhang et al., 2019), in place of ROUGE when computing LD-Gain. The results for single-domain adaptation on Bloom-3B and Llama2-7B are shown in Figure 5 of Appendix C. We observe that the relationship between BERTScore-based LD-Gain and cross-domain overlap is similar to the ROUGE-based results. Based on this, we believe there is a linear correlation between vocabulary overlap and model performance gain once dataset learning difficulty is factored in.

Multi-Domain Adaptation. Table 3 presents the results obtained by training on data from multiple domains and testing on a single domain.

Table 3: Multi-domain adaptation. ● marks the test set; the datasets marked # together constitute the training set.
Model | CNNDM | PubMed | SAMSum | WikiHow | Compression Rate | Abstract Level | Learning Difficulty Coefficient | ROUGE Base | ROUGE Fine-tuned | ROUGE Improvement | Cross-domain Overlap | LD-Gain
Bloom-1B | ● | # | # | # | 12.95 | 20.22 | 261.85 | 3.72 | 3.91 | 0.19 | 3.13% | 49.75
Bloom-1B | # | ● | # | # | 7.08 | 13.13 | 92.96 | 4.13 | 4.65 | 0.52 | 3.18% | 48.34
Bloom-1B | # | # | ● | # | 4.86 | 16.76 | 81.45 | 1.75 | 1.65 | -0.10 | 1.05% | -8.15
Bloom-1B | # | # | # | ● | 13.05 | 39.23 | 511.95 | 2.51 | 2.49 | -0.02 | 2.60% | -10.24
Bloom-3B | ● | # | # | # | 12.95 | 20.22 | 261.85 | 4.06 | 5.48 | 1.42 | 3.13% | 371.83
Bloom-3B | # | ● | # | # | 7.08 | 13.13 | 92.96 | 4.84 | 6.87 | 2.03 | 3.18% | 188.71
Bloom-3B | # | # | ● | # | 4.86 | 16.76 | 81.45 | 1.81 | 2.17 | 0.36 | 1.05% | 29.32
Bloom-3B | # | # | # | ● | 13.05 | 39.23 | 511.95 | 2.59 | 3.00 | 0.41 | 2.60% | 209.90

It can be observed that the ROUGE improvement of Bloom-1B on CNNDM is 0.19, smaller than the improvement of 0.52 on PubMed. However, the learning difficulty coefficient of the CNNDM test set, 261.85, exceeds the 92.96 of PubMed. Therefore, the LD-Gain of the model on CNNDM, adjusted by the learning difficulty coefficient, is higher than that on PubMed. The relationship between cross-domain overlap and LD-Gain for multi-domain adaptation is illustrated in Figure 2a. We also combine the results of single-domain and multi-domain adaptation in a single plot, shown in Figure 2b. As the cross-domain overlap increases, the model's actual gains gradually increase. This finding shows that cross-domain overlap has a linear relationship with performance in domain adaptation.

5.2 Word count is not related to performance

The word count results are presented in Table 4. The word counts of the individual chunks are similar, so the total word count can be controlled by increasing the number of chunks. On the other hand, the overlap within a domain does not vary significantly; it remains stable within a small range, allowing us to observe the relationship between overlap and performance across different domains. The visual results in Figure 3 show no clear upward or downward trend in model performance across the different domains as the word count increases.
Instead, performance oscillates within a specific range. The fluctuations are most prominent for the target domains CNNDM and SAMSum, which otherwise remain relatively stable. Consequently, we conclude that word count is unrelated to domain adaptation performance; in some cases, performance even declines as the word count increases. Meanwhile, we also calculate the overlap of each chunk to check whether a linear relationship still holds between the improvement in model performance and the overlapping data across domains. Although the cross-domain overlap is relatively similar within the same domain, a linear correlation exists between cross-domain overlap and LD-Gain across different domains: the model exhibits higher actual gains as the cross-domain overlap increases.

Table 4: Results for different chunks on the test sets of the various domains. The word count used for training increases from Chunk0 to Chunk9 as chunks are accumulated.
Test Set | Chunk0 | Chunk1 | Chunk2 | Chunk3 | Chunk4 | Chunk5 | Chunk6 | Chunk7 | Chunk8 | Chunk9
Word Count | 2,608,429 | 2,594,348 | 2,591,597 | 2,573,224 | 2,641,309 | 2,558,078 | 2,582,793 | 2,594,890 | 2,607,471 | 2,614,820
Cross-domain Overlap: CNNDM | 7.37% | 7.35% | 7.39% | 7.35% | 7.28% | 7.29% | 7.33% | 7.28% | 7.25% | 7.34%
Cross-domain Overlap: PubMed | 2.93% | 3.00% | 2.94% | 2.87% | 2.91% | 2.92% | 2.92% | 2.95% | 2.93% | 2.84%
Cross-domain Overlap: SAMSum | 1.59% | 1.56% | 1.74% | 1.57% | 1.60% | 1.55% | 1.60% | 1.60% | 1.64% | 1.76%
Cross-domain Overlap: WikiHow | 3.58% | 3.57% | 3.64% | 3.60% | 3.56% | 3.55% | 3.54% | 3.55% | 3.48% | 3.55%
LD-Gain: CNNDM | 851.00 | 775.07 | 971.45 | 735.79 | 960.98 | 995.02 | 1091.91 | 811.73 | 976.69 | 1099.76
LD-Gain: PubMed | 43.69 | 31.61 | 21.38 | 26.96 | 33.47 | 49.27 | 25.10 | 26.03 | 35.32 | 40.90
LD-Gain: SAMSum | 6.52 | 0.81 | 13.03 | -24.44 | -1.63 | -24.44 | -20.36 | -8.96 | -38.28 | -17.11
LD-Gain: WikiHow | -102.39 | -15.36 | 25.60 | -66.55 | -30.72 | 30.72 | 112.63 | -35.84 | 20.48 | -66.55

Figure 3: Relationship between word count and LD-Gain. As word count increases, there is no significant trend in LD-Gain.

5.3 Predictability

The preceding experiments have confirmed a linear correlation between cross-domain overlap and performance once dataset learning difficulty is taken into account. Based on this observation, we can extrapolate performance predictions for unknown domain datasets from results on existing domain datasets. As depicted in Figure 1a, a linear fit to the scatter plot of the single-domain data yields a performance prediction trend for Bloom-1.1B in single-domain adaptation, illustrated by the dashed line in Figure 4. We re-sample 500 new examples, distinct from the previous datasets, and compute the compression ratio and abstraction level for the four new test sets, obtaining the learning difficulty coefficient λ. Based on the statistics of the training sets used in the previous single-domain experiments, we calculate the cross-domain overlap γ. We can then use λ and γ to predict the model's performance gain on the new dataset, and hence the predicted ROUGE value:

\mathrm{ROUGE}_{\text{predicted}} = \mathrm{Gain} + \mathrm{ROUGE}_{\text{base}}, \quad
\mathrm{Gain} = \frac{\text{LD-Gain}}{\lambda}, \quad
\text{LD-Gain} = \beta_0 + \beta_1 \gamma,   (7)

where β_0 and β_1 are the parameters of the line fitted to the model's existing data. We conduct this experiment with the Bloom-1.1B model.

Figure 4: The dotted line is the prediction line fitted in the previous experiment, while the solid line is fitted to the new test data.

The results on the new datasets, along with the fitted curve, are shown in Figure 4 as scatter points and a solid line.
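To make the prediction procedure concrete, the sketch below fits the LD-Gain-versus-overlap line on existing results and applies Eq. (7) to a new test set. It is a minimal sketch, not the authors' code: NumPy's polyfit is an assumed choice of line-fitting method, and the example inputs (overlaps, LD-Gains, and the new dataset's λ, γ, and base ROUGE) are placeholders in the spirit of Table 2.

```python
import numpy as np

def fit_ldgain_line(overlaps, ld_gains):
    """Fit LD-Gain = beta0 + beta1 * gamma on results from known domains."""
    beta1, beta0 = np.polyfit(overlaps, ld_gains, deg=1)  # slope first, then intercept
    return beta0, beta1

def predict_rouge(beta0, beta1, gamma_new, lambda_new, rouge_base):
    """Eq. (7): predicted ROUGE on an unseen test set."""
    ld_gain = beta0 + beta1 * gamma_new
    gain = ld_gain / lambda_new
    return rouge_base + gain

# Hypothetical usage, with placeholder numbers taken from the CNNDM rows of Table 2:
# beta0, beta1 = fit_ldgain_line([0.0235, 0.0889, 0.0447], [-68.08, 557.74, 96.88])
# rouge_hat = predict_rouge(beta0, beta1, gamma_new=0.03,
#                           lambda_new=260.0, rouge_base=8.1)
```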
It can be observed that the predicted fitted line and the actual experimental results’ fitted line exhibit a similar trend, with a relatively small difference. This indicates that we can estimate the performance for unseen domain datasets based on the dataset’s characteristics and existing performance results, thereby obtaining a rough performance expectation before actual inference. 13243 6 Conclusion We investigate the impact of words on domain adaptation performance in summarization tasks. We propose two indicators to represent the learning difficulty from a dataset and introduce a performance evaluation method based on learning difficulty. We find that word overlap is an essential factor affecting domain adaptation and exhibits a linear correlation with model performance. However, the influence of word count on domain adaptation does not show a regular pattern. We will investigate this phenomenon further in future work. Furthermore, by predicting the performance of new domain data based on its cross-domain overlap with existing domains, it becomes possible to preemptively assess the model’s suitability for specific domains without the need for extensive retraining or fine-tuning. This predictive capability can significantly streamline the adaptation of language models to new domains and save many resources, finally improving their practical utility in real-world applications. 7 Limitations The approaches of domain adaption mainly involve continual pre-training, alignment of distributions, and adaptation tuning. Our findings are related to the third one and, therefore, limited to discussing the influence factors of pre-training data, model parameters, and so on. For the adaptation tuning method, our paper focuses on exploring word impact on domain adaptation. Other factors, such as the quality of summarization training data, instruction diversity, and quality, are out of our consideration and may bring additional noise. Due to resource constraints, this paper employed LoRA fine-tuning for a 7B model without investigating the effects of different fine-tuning methods on domain adaptation. In future work, we will explore in more detail the impact of dataset quality and different training methods on domain adaptation. Acknowledgements We appreciate Xiaochen Liu for providing the initial inspiration for this work. This work was supported by the Joint Funds of National Natural Science Foundation of China (No. U21B2009), Major Research Plan of the National Natural Science Foundation of China (Grant No. 92370110). References Ayham Alomari, Norisma Idris, Aznul Qalid Md Sabri, and Izzat Alsmadi. 2022. Deep reinforcement and transfer learning for abstractive text summarization: A review. Computer Speech & Language, 71:101276. Emily Alsentzer, John R. Murphy, Willie Boag, WeiHung Weng, Di Jin, Tristan Naumann, and Matthew B. A. McDermott. 2019. Publicly available clinical bert embeddings. Eyal Ben-David, Nadav Oved, and Roi Reichart. 2022. Pada: Example-based prompt learning for on-the-fly adaptation to unseen domains. Jordan J. Bird, Anik’o Ek’art, and Diego Resende Faria. 2020. Chatbot interaction with artificial intelligence: human data augmentation with t5 and language transformer ensemble for text classification. Journal of Ambient Intelligence and Humanized Computing, 14:3129–3144. Junbum Cha, Sanghyuk Chun, Kyungjae Lee, HanCheol Cho, Seunghyun Park, Yunsung Lee, and Sungrae Park. 2021. Swad: Domain generalization by seeking flat minima. In Neural Information Processing Systems. 
Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shan Zhang, Jie Fu, and Zhiyuan Liu. 2023. Chateval: Towards better llm-based evaluators through multi-agent debate. ArXiv, abs/2308.07201. Hyung Won Chung, Le Hou, S. Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Dasha Valter, Sharan Narang, Gaurav Mishra, Adams Wei Yu, Vincent Zhao, Yanping Huang, Andrew M. Dai, Hongkun Yu, Slav Petrov, Ed Huai hsin Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. 2022. Scaling instruction-finetuned language models. ArXiv, abs/2210.11416. Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, and Nazli Goharian. 2018. A discourse-aware attention model for abstractive summarization of long documents. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 615–621, New Orleans, Louisiana. Association for Computational Linguistics. Jiaxi Cui, Munan Ning, Zongjian Li, Bohua Chen, Yang Yan, Hao Li, Bin Ling, Yonghong Tian, and Li Yuan. 2024. Chatlaw: A multi-agent collaborative legal assistant with knowledge graph enhanced mixture-ofexperts large language model. 13244 Amin Dada, Aokun Chen, C.A.I. Peng, Kaleb E. Smith, Ahmad Idrissi-Yaghir, Constantin Seibold, Jianning Li, Lars Heiliger, Xi Yang, Christoph M. Friedrich, Daniel Truhn, Jan Egger, Jiang Bian, Jens Kleesiek, and Yonghui Wu. 2023. On the impact of crossdomain data on german language models. In Conference on Empirical Methods in Natural Language Processing. Qi Dou, Daniel Coelho de Castro, Konstantinos Kamnitsas, and Ben Glocker. 2019. Domain generalization via model-agnostic learning of semantic features. In Neural Information Processing Systems. Xinjie Fan, Qifei Wang, Junjie Ke, Feng Yang, Boqing Gong, and Mingyuan Zhou. 2021. Adversarially adaptive normalization for single domain generalization. In Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition, pages 8208–8217. Abolfazl Farahani, Sahar Voghoei, Khaled Rasheed, and Hamid R. Arabnia. 2020. A brief review of domain adaptation. Zainab Fatima, Shehnila Zardari, Muhammad Fahim, Maria Andleeb Siddiqui, Ag Asri Ag Ibrahim, Kashif Nisar, and Laviza Falak Naz. 2022. A novel approach for semantic extractive text summarization. Applied Sciences, 12(9):4479. Yaroslav Ganin and Victor Lempitsky. 2015. Unsupervised domain adaptation by backpropagation. Mark Glickman and Yi Zhang. 2024. Ai and generative ai for research discovery and summarization. Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. 2019. SAMSum corpus: A humanannotated dialogue dataset for abstractive summarization. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 70–79, Hong Kong, China. Association for Computational Linguistics. Kshitij Gupta, Benjamin Thérien, Adam Ibrahim, Mats L. Richter, Quentin Anthony, Eugene Belilovsky, Irina Rish, and Timothée Lesort. 2023. Continual pre-training of large language models: How to (re)warm your model? Suchin Gururangan, Mike Lewis, Ari Holtzman, Noah A. Smith, and Luke Zettlemoyer. 2021. Demix layers: Disentangling domains for modular language modeling. Suchin Gururangan, Ana Marasovi´c, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. 
Don’t stop pretraining: Adapt language models to domains and tasks. Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, and Zhiting Hu. 2023. Reasoning with language model is planning with world model. ArXiv, abs/2305.14992. Karl Moritz Hermann, Tomás Kociský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In NIPS, pages 1693–1701. Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Barun Patra, Qiang Liu, Kriti Aggarwal, Zewen Chi, Johan Bjorck, Vishrav Chaudhary, Subhojit Som, Xia Song, and Furu Wei. 2023. Language is not all you need: Aligning perception with language models. Zixuan Ke, Yijia Shao, Haowei Lin, Tatsuya Konishi, Gyuhak Kim, and Bing Liu. 2023a. Continual pretraining of language models. In The Eleventh International Conference on Learning Representations. Zixuan Ke, Yijia Shao, Haowei Lin, Hu Xu, Lei Shu, and Bing Liu. 2023b. Adapting a language model while preserving its general knowledge. Blanka Frydrychova Klimova, Marcel Pikhart, Alice Delorme Benites, Caroline Lehr, and Christina SanchezStockhammer. 2022. Neural machine translation in foreign language teaching and learning: a systematic review. Education and Information Technologies, 28:663–682. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2023. Large language models are zero-shot reasoners. Mahnaz Koupaee and William Yang Wang. 2018. Wikihow: A large scale text summarization dataset. ArXiv, abs/1810.09305. Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234–1240. Ya Li, Xinmei Tian, Mingming Gong, Yajing Liu, Tongliang Liu, Kun Zhang, and Dacheng Tao. 2018. Deep domain generalization via conditional invariant adversarial networks. In European Conference on Computer Vision. Yinheng Li, Shaofei Wang, Han Ding, and Hang Chen. 2023. Large language models in finance: A survey. Chen Ling, Xujiang Zhao, Jiaying Lu, Chengyuan Deng, Can Zheng, Junxiang Wang, Tanmoy Chowdhury, Yun Li, Hejie Cui, Xuchao Zhang, Tianjiao Zhao, Amit Panalkar, Wei Cheng, Haoyu Wang, Yanchi Liu, Zhengzhang Chen, Haifeng Chen, Chris White, Quanquan Gu, Jian Pei, and Liang Zhao. 2023. Domain specialization as the key to make large language models disruptive: A comprehensive survey. 13245 Mingsheng Long, Yue Cao, Jianmin Wang, and Michael I. Jordan. 2015. Learning transferable features with deep adaptation networks. Jacob Menick, Maja Trebacz, Vladimir Mikulik, John Aslanides, Francis Song, Martin Chadwick, Mia Glaese, Susannah Young, Lucy CampbellGillingham, Geoffrey Irving, and Nat McAleese. 2022. Teaching language models to support answers with verified quotes. Ning Miao, Yee Whye Teh, and Tom Rainforth. 2023. Selfcheck: Using llms to zero-shot check their own step-by-step reasoning. ArXiv, abs/2308.00436. Le Thanh Nguyen-Meidine, Atif Belal, Madhu Kiran, Jose Dolz, Louis-Antoine Blais-Morin, and Eric Granger. 2020. Unsupervised multi-target domain adaptation through knowledge distillation. Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang. 2019. 
Moment matching for multi-source domain adaptation. Anastasia Razdaibiedina, Yuning Mao, Rui Hou, Madian Khabsa, Mike Lewis, and Amjad Almahairi. 2023. Progressive prompts: Continual learning for language models. Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, Jonathan Tow, Alexander M. Rush, Stella Biderman, Albert Webson, Pawan Sasanka Ammanamanchi, Thomas Wang, Benoît Sagot, Niklas Muennighoff, Albert Villanova del Moral, Olatunji Ruwase, Rachel Bawden, Stas Bekman, Angelina McMillan-Major, Iz Beltagy, Huu Nguyen, Lucile Saulnier, Samson Tan, Pedro Ortiz Suarez, Victor Sanh, Hugo Laurençon, Yacine Jernite, Julien Launay, Margaret Mitchell, Colin Raffel, Aaron Gokaslan, Adi Simhi, Aitor Soroa, Alham Fikri Aji, Amit Alfassy, Anna Rogers, Ariel Kreisberg Nitzav, Canwen Xu, Chenghao Mou, Chris Emezue, Christopher Klamm, Colin Leong, Daniel van Strien, David Ifeoluwa Adelani, and et al. 2022. BLOOM: A 176b-parameter open-access multilingual language model. CoRR, abs/2211.05100. Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo. 2023. Are emergent abilities of large language models a mirage? Thomas Scialom, Tuhin Chakrabarty, and Smaranda Muresan. 2022. Fine-tuned language models are continual learners. Abhay Shukla, Paheli Bhattacharya, Soham Poddar, Rajdeep Mukherjee, Kripabandhu Ghosh, Pawan Goyal, and Saptarshi Ghosh. 2022. Legal case document summarization: Extractive and abstractive methods and their evaluation. Yusheng Su, Xiaozhi Wang, Yujia Qin, Chi-Min Chan, Yankai Lin, Huadong Wang, Kaiyue Wen, Zhiyuan Liu, Peng Li, Juanzi Li, Lei Hou, Maosong Sun, and Jie Zhou. 2022. On transferability of prompt tuning for natural language processing. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics. Kai Sun, Y. Xu, Hanwen Zha, Yue Liu, and Xinhsuai Dong. 2023. Head-to-tail: How knowledgeable are large language models (llm)? a.k.a. will llms replace knowledge graphs? ArXiv, abs/2308.10168. Yashar Talebirad and Amirhossein Nadiri. 2023. Multiagent collaboration: Harnessing the power of intelligent llm agents. ArXiv, abs/2306.03314. Arun James Thirunavukarasu, Refaat Hassan, Shathar Mahmood, Rohan Sanghera, Kara Barzangi, Mohanned El Mukashfi, and Sachin Shah. 2023a. Trialling a large language model (chatgpt) in general practice with the applied knowledge test: Observational study demonstrating opportunities and limitations in primary care. JMIR Medical Education, 9. Arun James Thirunavukarasu, Darren Shu Jeng Ting, Kabilan Elangovan, Laura Gutierrez, Ting Fang Tan, and Daniel Shu Wei Ting. 2023b. Large language models in medicine. Nature Medicine, 29:1930– 1940. 
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and finetuned chat models. Mojtaba Valipour, Mehdi Rezagholizadeh, Ivan Kobyzev, and Ali Ghodsi. 2023. Dylora: Parameter efficient tuning of pre-trained models using dynamic search-free low-rank adaptation. Dave Van Veen, Cara Van Uden, Louis Blankemeier, Jean-Benoit Delbrouck, Asad Aali, Christian Bluethgen, Anuj Pareek, Malgorzata Polacin, Eduardo Pontes Reis, Anna Seehofnerova, Nidhi Rohatgi, Poonam Hosamani, William Collins, Neera 13246 Ahuja, Curtis P. Langlotz, Jason Hom, Sergios Gatidis, John Pauly, and Akshay S. Chaudhari. 2023. Clinical text summarization: Adapting large language models can outperform human experts. Tu Vu, Brian Lester, Noah Constant, Rami Al-Rfou, and Daniel Cer. 2022. Spot: Better frozen model adaptation through soft prompt transfer. Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, and Tao Qin. 2021. Generalizing to unseen domains: A survey on domain generalization. IEEE Transactions on Knowledge and Data Engineering, 35:8052–8072. Jindong Wang, Cuiling Lan, Chang Liu, Yidong Ouyang, Tao Qin, Wang Lu, Yiqiang Chen, Wenjun Zeng, and Philip S. Yu. 2022. Generalizing to unseen domains: A survey on domain generalization. Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2022. Finetuned language models are zero-shot learners. Hao Wen, Yuanchun Li, Guohong Liu, Shanhui Zhao, Tao Yu, Toby Jia-Jun Li, Shiqi Jiang, Yunhao Liu, Yaqin Zhang, and Yunxin Liu. 2023. Empowering llm to use smartphone for intelligent task automation. ArXiv, abs/2308.15272. Ruoxi Xu, Yingfei Sun, Mengjie Ren, Shiguang Guo, Ruotong Pan, Hongyu Lin, Le Sun, and Xianpei Han. 2024. Ai for social science and social science of ai: A survey. Hongbin Ye, Ningyu Zhang, Shumin Deng, Xiang Chen, Hui Chen, Feiyu Xiong, Xi Chen, and Huajun Chen. 2022. Ontology-enhanced prompt-tuning for fewshot learning. In Proceedings of the ACM Web Conference 2022, WWW ’22. ACM. Tiezheng Yu, Zihan Liu, and Pascale Fung. 2021. AdaptSum: Towards low-resource domain adaptation for abstractive summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5892–5904, Online. Association for Computational Linguistics. Liwen Zhang, Wei Cai, Zhaowei Liu, Zhi Yang, Wei Dai, Yujie Liao, Qi Qin, Yifei Li, Xingxian Liu, Zhiqiang Liu, Zhoufan Zhu, Anbo Wu, Xinnan Guo, and Yun Chen. 2023. 
Fineval: A chinese financial domain knowledge evaluation benchmark for large language models. ArXiv, abs/2308.09975. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. ArXiv, abs/1904.09675. Xingxuan Zhang, Linjun Zhou, Renzhe Xu, Peng Cui, Zheyan Shen, and Haoxin Liu. 2022. Nico++: Towards better benchmarking for domain generalization. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16036–16047. Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Z. Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jianyun Nie, and Ji rong Wen. 2023. A survey of large language models. ArXiv, abs/2303.18223. Kaiyang Zhou, Ziwei Liu, Yu Qiao, Tao Xiang, and Chen Change Loy. 2021a. Domain generalization: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45:4396–4415. Kaiyang Zhou, Yongxin Yang, Yu Qiao, and Tao Xiang. 2021b. Domain generalization with mixstyle. ArXiv, abs/2104.02008. 13247 A The training prompts The prompt used to train the Llama2 model. <s> [INST] «SYS» You are a helpful, respectful and honest assistant. Always answer as helpfully as possible while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don’t know the answer to a question, please don’t share false information. «/SYS» Summarize the following paragraph <Dodument> [/INST] The prompt used to train the Bloom model. <s>A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user’s questions. summarize the following paragraph <Document> B Results of Bloom-1.1B and 3B C Results of BERTScore 13248 Training Set Test Set Compression Rate Abstract Level Learning Difficulty Coefficient ROUGE Base ROUGE Fine-tuned ROUGE Improvement Cross-domain Overlap LD-Gain PubMed CNNDM 12.95 20.22 261.85 3.72 3.33 -0.39 2.35% -102.12 SAMSum 6.13 2.41 8.89% 631.06 WikiHow 4.38 0.66 4.47% 172.82 CNNDM PubMed 7.08 13.13 92.96 4.13 3.87 -0.26 2.90% -24.17 SAMSum 4.78 0.65 3.51% 60.42 WikiHow 5.2 1.07 3.96% 99.47 CNNDM SAMSum 4.86 16.76 81.45 1.75 1.32 -0.43 1.62% -35.03 PubMed 2.04 0.29 0.64% 23.62 WikiHow 2.37 0.62 1.26% 50.50 CNNDM WikiHow 13.05 39.23 511.95 2.51 2.43 -0.08 3.30% -40.96 PubMed 2.66 0.15 2.25% 76.79 SAMSum 2.46 -0.05 7.53% -25.6 Table 5: Results of single-domain adaptation on the Bloom-1.1B model. Training Set Test Set Compression Rate Abstract Level Learning Difficulty Coefficient ROUGE Base ROUGE Fine-tuned ROUGE Improvement Cross-domain Overlap LD-Gain PubMed CNNDM 12.95 20.22 261.85 6.348 7.38 1.03 2.35% 269.71 SAMSum 8.91 2.56 8.89% 670.34 WikiHow 5.61 -0.74 4.47% -193.77 CNNDM PubMed 7.08 13.13 92.96 8.60 7.64 -0.96 2.90% -89.24 SAMSum 9.20 0.61 3.51% 56.71 WikiHow 6.66 -1.94 3.56% -180.34 CNNDM SAMSum 4.86 16.76 81.45 5.19 9.16 3.97 1.62% 323.36 PubMed 3.22 -1.97 0.64% -160.46 WikiHow 0.66 -4.53 1.26% -368.97 CNNDM WikiHow 13.05 39.23 511.95 3.709 4.77 1.06 3.30% 542.67 PubMed 3.44 -0.27 2.25% -138.23 SAMSum 5.00 1.30 7.53% 665.54 Table 6: Results of single-domain adaptation on the Bloom-3B model. 
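For reference, the two training prompts listed in Appendix A can be assembled programmatically. The short Python sketch below simply reproduces those templates; the «SYS»/«/SYS» markers correspond to Llama-2's <<SYS>> system-prompt delimiters, and the document argument stands in for the <Document> placeholder. It is an illustrative sketch, not the authors' released training code.

# Sketch of the Appendix A training prompts.
LLAMA2_SYSTEM = (
    "You are a helpful, respectful and honest assistant. Always answer as helpfully "
    "as possible while being safe. Your answers should not include any harmful, "
    "unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure "
    "that your responses are socially unbiased and positive in nature. If a question "
    "does not make any sense, or is not factually coherent, explain why instead of "
    "answering something not correct. If you don't know the answer to a question, "
    "please don't share false information."
)

def llama2_summarization_prompt(document: str) -> str:
    # Llama-2 chat format: <s> [INST] <<SYS>> system <</SYS>> instruction [/INST]
    return (
        f"<s> [INST] <<SYS>>\n{LLAMA2_SYSTEM}\n<</SYS>>\n\n"
        f"Summarize the following paragraph {document} [/INST]"
    )

def bloom_summarization_prompt(document: str) -> str:
    # Chat-style prompt used for the Bloom model.
    return (
        "<s>A chat between a curious user and an artificial intelligence assistant. "
        "The assistant gives helpful, detailed, and polite answers to the user's questions. "
        f"summarize the following paragraph {document}"
    )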
Figure 5: Results of single-domain adaptation calculated by BERTScore. (a) Bloom-3B; (b) Llama2-7B.
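The BERTScore values summarized in Figure 5 are the kind of scores produced by the open-source bert-score package (Zhang et al., 2019); a minimal usage sketch is shown below, with placeholder candidate and reference summaries. Whether this exact package was used here is an assumption; the call shown is its standard documented interface.

# Sketch: scoring generated summaries against references with BERTScore.
# Requires: pip install bert-score
from bert_score import score

candidates = ["generated summary ..."]   # placeholder model outputs
references = ["reference summary ..."]   # placeholder gold summaries

# Returns per-example precision, recall and F1 tensors.
P, R, F1 = score(candidates, references, lang="en", verbose=False)
print(f"BERTScore F1: {F1.mean().item():.4f}")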
2024
715
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13250–13262 August 11-16, 2024 ©2024 Association for Computational Linguistics Visualization Recommendation with Prompt-based Reprogramming of Large Language Models Xinhang Li1,2*, Jingbo Zhou2†, Wei Chen1, Derong Xu1, Tong Xu1†, Enhong Chen1† 1University of Science and Technology of China & State Key Laboratory of Cognitive Intelligence, 2Business Intelligence Lab, Baidu Research [email protected], [email protected], {chenweicw, derongxu}@mail.ustc.edu.cn, {tongxu,cheneh}@ustc.edu.cn Abstract Visualization recommendations, which aim to automatically match proper visual charts for specific data tables, can significantly simplify the data analysis process. Traditional approaches in this domain have primarily relied on rule-based or machine learning-based methodologies. These methods often demand extensive manual maintenance and yet fail to fully comprehend the tabular data, leading to unsatisfactory performance. Recently, Large Language Models (LLMs) have emerged as powerful tools, exhibiting strong reasoning capabilities. This advancement suggests their substantial promise in addressing visualization recommendation challenges. However, effectively harnessing LLMs to discern and rationalize patterns in tabular data, and consequently deduce the essential information for chart generation, remains an unresolved challenge. To this end, we introduce a novel Hierarchical Table Prompt-based reprogramming framework, named HTP. This framework aims to integrate multi-dimensional tabular data into LLMs through a strategically crafted prompt learning method while keeping the LLMs’ backbone and weights unaltered. The HTP framework uniquely incorporates a four-level prompt structure, encompassing general, instance, cluster, and column levels. This multi-level approach is engineered to provide a comprehensive understanding of both general distribution and multifaceted fine-grained features of tabular data, before inputting the tabular data into the frozen LLM. Our empirical studies confirm that the HTP framework achieves state-of-the-art performance, marking an advancement in the field of data visualization and analysis. 1 Introduction Data visualization, which facilitates effective decision-making via better responding the human *This work was done when the first author was an intern in Baidu Research under the supervision of Jingbo Zhou. †Corresponding authors. LLM Structure Attention Fusion logits Figure 1: Illustration of HTP framework. sensitivity and efficiency in processing visual information (Kosmyna et al., 2018), has become increasingly crucial nowadays. Traditional visualization tools necessitate extensive effort for manual specification, which often requires domain expertise in data analysis. Therefore, large efforts have been made to develop automatic tools for solving the visualization recommendation task, i.e., recommending the proper visual charts. Generally, prior arts can be divided into two categories. The first, known as Rule-based approaches (Mackinlay, 1986; Mackinlay et al., 2007), relies on expert-designed rules to generate visualizations. However, these approaches not only suffer the massive manual labor for rule maintenance, but also struggle to address the combinatorial explosion problem caused by increasing data dimensions. 
In contrast, the Machine Learningbased approaches (Dibia and Demiralp, 2019; Li et al., 2021) are investigated to automatically learn the best matching between tabular data and visualization elements. Unfortunately, few of them could fully extract the multi-level features within tables, which severely limits the performance. Last year has witnessed the prosperity of Large Language Models (LLMs), whose advanced capabilities of LLM in processing and interpreting 1 13250 complex data unlocks new possibilities in the field of data visualization. However, significant challenges persist in effectively harnessing the potential of an LLM for visualization recommendation. Firstly, LLMs are typically designed to process sequences of discrete tokens in an unstructured format, whereas tables used for visualization often display highly structured and predominantly numerical characteristics, which leads to a mismatch. Second, the inherent pre-training of LLMs does not naturally encompass the comprehension abilities required to interpret complex tabular data. Third, rather than fine-tuning the entire model, it is often preferable to keep the LLM backbone “frozen”. This approach ensures the model’s versatility in supporting a variety of tasks without compromising its fundamental capabilities. To this end, we propose a novel Hierarchical Table Prompt-based reprogramming framework (HTP), to adapt LLM for visualization recommendation without altering the backbone structure. The essence of HTP lies in leveraging data-driven multilevel prompts to adaptively reprogram the tabular data across various dimensions, thereby bridging the gap between the structured nature of tabular data, as well as the inherent capabilities of LLMs for textual processing. Specifically, HTP utilizes four levels of prompts as follows: • General-level prompt, which is employed to describe the overall distribution of table dataset, while facilitating information sharing and integration among prompts at various levels, thereby enhancing the generalization performance of LLM. • Instance-level prompt, which connects the individual table instances with various charts, by leveraging the specific distribution of tabular data and corresponding chart. • Cluster-level prompt, which is generated via feature extraction and clustering, and targets enhancing the LLM to capture the implicit patterns that exist within the table datasets, as well as the correlations among patterns. • Column-level prompt, which originates from the inherent structural information of tables, and highlights the columnar organization to improve the column-level information processing, and support cross-column comprehension. Based on these four prompts, as illustrated in Figure 1, we concatenate prompts at general, instance, and cluster levels with a serialized table before feeding them into LLMs, thereby facilitating table-specific knowledge extraction and integration. Meanwhile, the column-level prompts are prepended to encoded inputs to retain the structural information of tables, thereby improving the distinction among columns, and fostering a contextual understanding between them. The contribution of this paper could be summarized as follows: • To the best of our knowledge, we are the first to utilize LLMs to explore the visualization recommendation task through soft prompt-based reprogramming, without altering the pre-trained backbone model. 
• A novel prompt-based framework is proposed to reprogram the hierarchical table information into multi-prompts, which enhances the comprehension capabilities of LLMs. • Extensive experiments on real-world datasets demonstrate the effectiveness of the proposed HTP compared with state-of-the-art approaches. 2 Related Work 2.1 Visualization Recommendation Prior researches on automated visualization recommendation includes rule-based and machine learning-based approaches. Rule-based approaches (Mackinlay, 1986; Roth et al., 1994; Perry et al., 2013; Wongsuphasawat et al., 2016) are based on manually developed rules, which heavily rely on expert knowledge, and have high maintenance costs. Moreover, as table size grows, these methods may encounter combinatorial explosion issues. Recently, machine learning-based methods have shown notable progress in enhancing recommendation accuracy and scalability. (Luo et al., 2018) employed expert rules for initial visualizations, and evaluated them using a trained classifier. (Li et al., 2021; Hu et al., 2019) formulated the visualization recommendation challenge as a series of classification tasks, learn to predict labels for various design choices, while (Zhou et al., 2020; Dibia and Demiralp, 2019) cast the challenge as a sequence generation problem. The former utilized a Deep Q-Network (DQN) for selecting action tokens to fill the predefined chart templates, while the latter employed a seq2seq model to autoregressively generate JSON-encoded charts. However, these approaches are limited to learning the existing knowledge within datasets, and lack sufficient exploration of multifaceted table features, such as table patterns and structures, leading to a limited comprehension of complicated tables. Recently, some researchers (Dibia, 2023; Ko et al., 2024; 2 13251 Cheng et al., 2023; Maddigan and Susnjak, 2023) leveraged LLMs to investigate the visualization recommendation task. They divided the visualization generation process into several subtasks, exploring appropriate discrete prompts for each subtask to maximize the capabilities of the larger model. Different from prior arts, in our work, we enrich the extensive pre-trained knowledge of LLMs with multi-level prompt, to fully extract the semantics within table data from multiple dimensions. 2.2 Prompt Tuning With continuous parameter scaling of pre-training models, fine-tuning entire models for downstream tasks becomes daunting due to inefficiency and potential catastrophic forgetting (Pfeiffer et al., 2021; Kan et al., 2023). Thus, Prompt Tuning (Lester et al., 2021), which prepends learnable continuous prompts to input text while keeping model parameters fixed, has achieved success across various domains. However, directly using this method to complex table datasets results in substantial information loss. In this paper, we use multi-level prompts to integrate table information, enhancing the efficiency of utilizing prior knowledge. 3 Preliminaries 3.1 Problem Formulation We formulate the visualization recommendation task as a text generation problem. Consider a labeled training dataset D = {(T, V )i}|D| i=1 where Ti represents a serialized table and Vi = v1 i , v2 i , · · · vli i is a concise json description of chart with length li, which encapsulating key information for defining a chart. Our goal is to generate Vi in an autoregressive way. 
Based on Prompt Tuning (Lester et al., 2021), we learn a prompts group P that have multi-level prompts, with a module G designed to dynamically produce prompts P. We train our model by maximizing the likelihood of generating the target sequence, as follows: max P,G pθ |D| Y i=1 li Y j=1 (vj i | v1 i , v2 i · · · vj−1 i , Ti, P, G). (1) Here parameters θ of LLM remain frozen, only the prompts group P and G are updated. 3.2 Data Serialization 3.2.1 Table Linearization As demonstrated in (Suadaa et al., 2021; Zhang et al., 2020), the representation of data in table form has a significant impact on generation performance. In this paper, we adopt a template-based linearization approach, transforming tables into flat string representations to enhance their compatibility with the internal structures of LLMs. Formally, given a table T, it is represented as: < header1,1 > is < value1,1 >, · · · < headeri,j > is < valuei,j >, · · · . 3.2.2 Chart Mapping We employs a JSON-based template to encode chart data similar to (Poco and Heer, 2017; Satyanarayan et al., 2017). The template encapsulates the key information necessary for defining a chart, including the visualization types and the arrangement of x-/y-axes. When using tools like Echarts or Plotly, our recommended attributes allow users to easily create chart images while giving them the freedom to adjust or add attributes to suit their specific analysis needs or presentation preferences if needed. The details of our template are as follows: where each object in “y” and the objects of “x” form a visual trace. 4 Methodology In this section, we will introduce the proposed framework in detail. As shown in Figure 2, our framework consists of four primary prompts, namely general, instance, column and cluster level prompts, to inject table insights into the LLM. Technical details of each component are discussed in the following subsections, step-by-step. 4.1 General-level Prompt Generation To encompass the overall information of the table dataset and enhance generalization ability, we introduce a soft prompt Pgen as (Lester et al., 2021) do, which is uniformly applied across all instances. This general prompt ensures that the model comprehensively understanding table dataset. It also 3 13252 Image Encoder Projector … Repulsion Attraction label update Memory bank Soft Codes Hard Codes … Cluster-level Prompt Pattern-1 Pattern-2 Pattern-n Final hs Embedding layer Style Controller Reparameterize LLM Cross-Attn = Image Encoder Projector Aggregate by visual type Style Query Feature Extractor Cluster Self-Attn Cell pos emb Linear Self-Attn Table pos emb Mapping Network Self-Attn Cross-Attn Aggregate Style Query Serialize * * * * + + hs Column-level Instance-level Cluster-level General-level i! i Batch samples (a) (b) (c) (d) (e) Concat Self-Attn Style Controller + Instance-level Prompt * : Concat : Element-wise add + Figure 2: The overall framework of HTP. (a) Supervised contrastive learning for chart style representations. (b) Style query generation. (c) Clustering table features to generate cluster-level prompts. (d) The overall workflow of the model. (e) The detailed structure of Style Controller. facilitates sharing and integrating information from other level prompts. 4.2 Instance-level Prompt Synthesization Since tables target for visualizing data across diverse domains, there are inherent differences among tables within the dataset. Obviously, a single general prompt is insufficient to fully adapt to these complicated variations of tables. 
In response, we introduce an instance-aware prompt capable of dynamically capturing the table-specific features. In detail, we first synthesize a style vector that reflects the characteristics of different types of charts, named style query. Then, we use this style query to project information from the serialized table embedding into the chart space. In this way, the style query facilitates the extraction of fine-grained information from tables across various dimensions, enhancing the table’s perceptibility to charts. Moreover, projecting table information into chart space narrows down the search space for the LLM, thereby reducing the likelihood of producing hallucinations. 4.2.1 Style Query Generation To ensure that the style query adequately represents the stylistic characteristics of each chart type, our style query is composed of two parts: hard codes and soft codes. Hard codes capture the inter-class features among different chart types, while soft codes are employed to explore the intra-class features within each chart type. Hard Codes Generation. Since chart images have explicit labels (e.g., bar charts, pie charts), enabling distinction of different chart styles and further guiding chart generation. Thus, we employ a supervised contrastive learning approach that trains a projector P on labeled chart images, enabling it to learn the distinct representation of each chart style. The detailed process is shown in Figure 2 (a). During training, we first feed a chart image i alongside its augmented version i+ (e.g., rotation, grayscale), as well as other batch samples, into a frozen pre-trained image encoder to obtain visual embeddings. These extracted embeddings are then fed into a bottleneck architecture projector P, which maps them to vectors that reflect chart style. Then, we utilize supervised contrastive learning loss on generated vectors to train the projector P. Drawing inspiration from MOCO (He et al., 2020), we maintain a continuously updated memory bank for each chart type. In our method, positive samples for a given instance include its augmented versions and same-type instances within batch and memory banks. In contrast, negative samples encompass instances of different chart types within the same batch and memory bank, ensuring that the model 4 13253 learns to distinguish different chart categories. As shown in Figure 2 (b), after the contrastive learning phase, we reprocess the visual embeddings of images through projector P to obtain style features. To obtain an overall stylistic representation for each chart type, we categorize these features by chart type and perform an averaging operation within each category. In this way, we obtain a stylistic representation set {qi h}m i=1, named hard codes, where m represents the number of chart types. Soft Codes Generation. Even within the same chart category, there are many subtle variations. Relying solely on hard codes could lead to overlooking significant intra-class differences among chart types, limiting the model’s ability to detect subtle stylistic differences between charts. To more effectively explore the intra-class features among charts, we introduce m trainable soft style codes {qi s}m i=1, which are prepended to the hard codes to provide a more comprehensive representation of the chart space. Now the style query can be defined as follows: Qs = {[qi s; qi h]} i = 1, 2, ..., m. Here [·; ·] denotes the concatenation operation. 
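A minimal PyTorch-style sketch of how such a style query could be assembled is given below. The tensor shapes, the use of a fixed buffer for the hard codes, and the feature-wise concatenation are illustrative assumptions rather than the authors' implementation: the hard codes are class-wise averages of projected chart-image embeddings from the contrastive stage, and the soft codes are freely trainable vectors joined to them.

import torch
import torch.nn as nn

def build_hard_codes(features: torch.Tensor, labels: torch.Tensor, num_types: int) -> torch.Tensor:
    # features: (N, d) projector outputs for chart images; labels: (N,) chart-type ids.
    # Hard code of type c = mean of the projected embeddings of that chart type.
    return torch.stack([features[labels == c].mean(dim=0) for c in range(num_types)])

class StyleQuery(nn.Module):
    # m chart types, each represented by a soft (trainable) code concatenated
    # with its hard (contrastively learned, class-averaged) code: Q_s = {[q_s^i ; q_h^i]}.
    def __init__(self, hard_codes: torch.Tensor):
        super().__init__()
        m, d = hard_codes.shape
        self.register_buffer("hard_codes", hard_codes)              # initialized from the contrastive stage
        self.soft_codes = nn.Parameter(torch.randn(m, d) * 0.02)    # trainable intra-class component

    def forward(self) -> torch.Tensor:
        return torch.cat([self.soft_codes, self.hard_codes], dim=-1)  # (m, 2d)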
Consequently, the style query Qs captures intra and inter-class characteristics of each chart type simultaneously, which will be updated during training, allowing a better perception of input table. 4.2.2 Generating Instance-level Prompts with Style Query We then use style query to guide instance-level prompt generation through a structure called Style Controller. As depicted in Figure 2 (e), we first employ a mapping network M to align dimension and modality between Qs and the serialized input embedding Tr. For efficiency, M consists of down and up projection layers, with a nonlinear layer situated between them. Then we adopt two attention layers: the self-attention over Qs, followed by cross-attention from Tr, as follows: Qs = M(Qs), Qs = Self-Attn(Qs), S = Cross-Attn(Qs, Tr). (2) The S = {s1, s2, ...sm} represents features extracted by style query from m spaces based on the distributional information of table embedding. Intuitively, self-attention layer enables the model to effectively process and integrate contextual information from style query. While the cross-attention layer can dynamically emphasize the essential chart-related information within a table through style query. Finally, we generate the instance-level prompt Pins by averaging the si ∈S. 4.3 Cluster-level Prompt Generation When two tables exhibit similar messages, they are likely to be represented by similar charts. Thus, using the shared information within table pattern is crucial for comprehending table representations and making chart recommendations. The diverse features of table data, such as trends, entropy, and variance, provide a comprehensive overview of table data, thus play a crucial role in identifying distinct table patterns. Thus, we extract various table features and cluster them into k different table patterns. Each table schema has its own shared prompt, resulting in a cluster prompts pool {pi c}k i=1. Correspondingly, for a given table T ∈Ci, its cluster prompt can be directly set as pi c. 4.4 Column-level Prompt Generation The previous discussion emphasized the overall semantic understanding of the table. However, since the selection of chart type and the arrangement of x-/y-axes are fundamentally based on column data, a detailed understanding of each column’s role and characteristics within the table is also critical for recommending appropriate charts. To this end, we introduce column-level prompt that designed to encapsulate column structure information within table, facilitating intra-column understanding and cross-column analysis. As shown in Figure 2 (d), following (Chen et al., 2022), we first encode a given table T on a per-cell basis to retain structure. Specifically, for each cell in T, we concatenate the cell’s header and value, and query the LLM to obtain embeddings. To enhance representation at the column level, we introduce a unique soft prompt for each column, which is prepended to the embeddings of cells within the same column, yielding a refined, column-centric table structure representation E ∈Rm×n×s×d, with s, m and n denoting sequence length, number of rows and columns, respectively. Next, we employ a two-layer attention structure, the first is a cell-wise layer that focus on individual cells, while the second operates on single column to explore column semantics: 5 13254 E0 = E + Ecpe, ˆE0 = Linear(Self-Attn(E0)) + E0, E1 = 1 s Xs i=1 ˆE0[:, :, i, :], ˆE1 = Self-Attn(E1), (3) where Ecpe ∈Rs×d is the cell text position embedding, which will be updated during training. 
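The two-layer attention of Eq. (3) can be sketched as follows; the head count, the mean pooling over cell tokens, and the omission of normalization layers are simplifying assumptions, not a faithful reproduction of the HTP code.

import torch
import torch.nn as nn

class ColumnLevelEncoder(nn.Module):
    # E has shape (m rows, n columns, s cell tokens, d): a cell-wise attention refines each
    # cell's token embeddings, tokens are then mean-pooled, and a second attention runs over
    # the cells of each column to produce the column-aware representation Ê1.
    def __init__(self, d_model: int, n_heads: int = 8, max_cell_len: int = 32):
        super().__init__()
        self.cell_pos_emb = nn.Parameter(torch.zeros(max_cell_len, d_model))   # E_cpe
        self.cell_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.linear = nn.Linear(d_model, d_model)
        self.col_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, E: torch.Tensor) -> torch.Tensor:
        m, n, s, d = E.shape
        E0 = E + self.cell_pos_emb[:s]                      # E0 = E + E_cpe
        x = E0.reshape(m * n, s, d)                         # attend within each cell
        attn_out, _ = self.cell_attn(x, x, x)
        E0_hat = self.linear(attn_out) + x                  # Ê0 = Linear(Self-Attn(E0)) + E0
        E1 = E0_hat.mean(dim=1).reshape(m, n, d)            # E1 = (1/s) Σ_i Ê0[:, :, i, :]
        cols = E1.permute(1, 0, 2)                          # (n, m, d): the cells of each column
        E1_hat, _ = self.col_attn(cols, cols, cols)         # Ê1 = Self-Attn(E1), per column
        return E1_hat.permute(1, 0, 2)                      # back to (m, n, d)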
In this way, the ˆE1 can grasp the column structure information inside the table. 4.5 Prompt Integration for Reprogramming LLM After obtaining the general, instance, and cluster level prompts, we use a shallow network V with a skip connection to reparameterize them. Subsequently, the reparameterized prompts are concatenated: P ′ = [V(Pgen); V(Pins); V(Pclu)], V(P) = ϕ(P) + P, (4) where ϕ is a simple linear layer. Furthermore, a self attention layer is applied to enable information sharing among the prompts: Ppre = Self-Attn(P ′). (5) The refined prompts Ppre ∈Rlp×d are concatenated with the serialized table embedding Tr ∈ Rls×d, where lp denotes the prefix length, ls denotes the input sequence length. They are then fed into the LLM to obtain the last hidden states H. After that, we apply a cross-attention layer, using H as a query, to integrate the structured and unstructured information from the entire table: E2 = ˆE1 + Etpe, H = Cross-Attn(H, E2) + H, (6) where Etpe ∈Rm×n×d is the table position embedding that provides row and column information for tables. This obtained hidden state will replace the original one to estimate probability of next word. 5 Experiments 5.1 Settings Datasets Due to inaccessibility of the Plotly Corpus, a publicly available dataset collected by (Hu et al., 2019), we independently re-extracted a substantial dataset of 443, 526 public visualization pairs from the Plotly community feed via the Plotly REST API1. Following rigorous data cleaning, the refined dataset contains 116, 528 visualization pairs across 439, 001 columns, including 4, 091(3.51%) pie, 36, 576(31.38%) line, 29, 740(25.52%) scatter, 6, 269(5.38%) histogram, 29, 641(25.43%) bar and 10, 211(8.77%) box charts. More details are in Appendix A. Baselines To evaluate the performance of our HTP framework, we compare our method with following baselines: (1) VizML (Hu et al., 2019), (2) KG4Vis (Li et al., 2021), (3) MultiVision (Wu et al., 2021), (4) Data2Vis (Dibia and Demiralp, 2019), (5) Table2Charts (Zhou et al., 2020), (6) DeepEye (Luo et al., 2018). More details are in Appendix B. Metrics Referring to VizML, we employ Accuracy to evaluate whether models can recommend correct design choices, including: XY (whether the field encoded on the X-axis and Y-axis is correct), chart type (whether the correct chart type is recommended) and overall (both XY-axis and chart type are considered). Due to the variations among the above baseline models in calculating metrics, with some evaluations based on individual fields and others on entire tables, for fair comparisons, we report the metrics at the both level. More details are in Appendix C. Implementations We randomly split the dataset into three parts: 80% for training, 10% for validation, and 10% for testing. For contrast learning, we use CLIP (Radford et al., 2021) visual encoder to extract features of input images. For our chart generation model, we primarily utilize the pre-trained Bloom (Workshop et al., 2022) model with 1.1B parameters as the backbone. We use K-Means (MacQueen et al., 1967) to cluster table features. Additional details are provided in Appendix D. 2 5.2 Comparison with Baselines The performance comparison of our HTP and baselines is presented in Table 1, in which the mean and standard deviation of all metrics are obtained through three random runs. 
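Before turning to the observations, the prompt-integration step being evaluated here (Section 4.5, Eqs. 4-6) can be summarized in a short sketch; the module names, the batch-first layout, and the single linear layer standing in for ϕ are illustrative assumptions, not the released implementation.

import torch
import torch.nn as nn

class PromptIntegration(nn.Module):
    # Sketch of Eqs. (4)-(6): reparameterize the general/instance/cluster prompts with a
    # skip-connected linear layer, let them exchange information via self-attention, and
    # later fuse the LLM's last hidden states with the column-level structure embedding.
    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.phi = nn.Linear(d_model, d_model)                                 # ϕ in V(P) = ϕ(P) + P
        self.prompt_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.fuse_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def reparameterize(self, p: torch.Tensor) -> torch.Tensor:
        return self.phi(p) + p                                                 # Eq. (4)

    def build_prefix(self, p_gen, p_ins, p_clu) -> torch.Tensor:
        # Concatenate the reparameterized prompts and apply self-attention (Eq. 5).
        p_cat = torch.cat([self.reparameterize(p) for p in (p_gen, p_ins, p_clu)], dim=1)
        p_pre, _ = self.prompt_attn(p_cat, p_cat, p_cat)
        return p_pre          # P_pre, to be concatenated with the serialized table embedding T_r

    def fuse_structure(self, h: torch.Tensor, e_struct: torch.Tensor) -> torch.Tensor:
        # Eq. (6): H <- Cross-Attn(H, E2) + H, where E2 = Ê1 + table position embedding
        # and e_struct is E2 flattened to (batch, m*n, d).
        fused, _ = self.fuse_attn(h, e_struct, e_struct)
        return fused + h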
We have the following observations: (1) Our HTP consistently achieves the best performance in most of the evaluation metrics, with 25.4% and 32.8% absolute improvements of overall accuracy in field level and table 1https://api.plot.ly/v2/ 2We make the code and dataset of HTP available at: https://github.com/PaddlePaddle/PaddleSpatial/ tree/main/research/HTP 6 13255 Model Field Level Table Level XY Chart Type Overall XY Chart Type Overall VizML 0.892 ± 0.008 0.553 ± 0.009 0.507 ± 0.014 0.795 ± 0.018 0.393 ± 0.015 0.334 ± 0.017 KG4Vis 0.819 ± 0.038 0.418 ± 0.055 0.319 ± 0.052 0.552 ± 0.030 0.257 ± 0.014 0.111 ± 0.024 MultiVision 0.649 ± 0.020 0.624 ± 0.021 Data2Vis 0.646 ± 0.038 0.341 ± 0.010 0.321 ± 0.012 0.508 ± 0.027 0.415 ± 0.007 0.281 ± 0.014 Table2Charts 0.932 ± 0.015 0.453 ± 0.016 0.436 ± 0.010 0.881 ± 0.013 0.426 ± 0.007 0.397 ± 0.012 DeepEye 0.523 ± 0.009 0.396 ± 0.022 0.237 ± 0.006 0.493 ± 0.006 0.427 ± 0.004 0.249 ± 0.004 HTP 0.928 ± 0.011 0.780 ± 0.014 0.761 ± 0.013 0.874 ± 0.009 0.802 ± 0.011 0.725 ± 0.008 Table 1: Performance comparison between HTP and all the baselines. The best results are in bold and the second are underlined. level. This indicates that our model is capable of processing and integrating subtasks within visualization recommendation from a comprehensive perspective. (2) In chart type accuracy, our model achieves an accuracy of 0.780 at the field level and 0.802 at the table level, with a large margin of 13.1% and 17.8% over the best-performing baseline, respectively. This demonstrates the effectiveness of using style query to enhance table perception across various chart types. Moreover, chart type prediction accuracy is generally lower than xy prediction across all models, suggesting it’s more challenging and could be a bottleneck in chart recommendation. (3) Table2Charts exhibits greater accuracy on the xy-axis than other methods. This may be due to the pre-defined chart template constraining the selection of x and y-axis. However, in real-world scenarios with more complex charts, such a strongly constrained template might lack flexibility. (4) KG4Vis has high standard deviation across all metrics, and a large drop in overall accuracy from field to table level. This demonstrates that focusing solely on single-column features while neglecting the comprehensive attributes of the entire table leads to incomplete understanding, consequently impairing performance at the table level. In contrast, our HTP , by encompassing information across all table levels, achieves higher accuracy with exhibiting great consistency. 5.3 Ablation Analysis To verify the effectiveness of each design in our model, we further compare HTP with six variants: • w/o-CLUP removes the cluster-level prompt. • w/o-TSI removes the all table structure information (include the column-level prompt). • w/o-COLP removes the column-level prompt. • w/o-GP removes the general prompt. • w/o-INSP removes the instance-level prompt. • w/o-SSC removes the soft codes, using only hard XY 0.84 0.85 0.86 0.87 0.88 0.89 ACC w/o-CLUP w/o-TSI w/o-COLP w/o-GP w/o-INSP w/o-SSC HTP Chart Type 0.74 0.75 0.76 0.77 0.78 0.79 0.80 0.81 ACC w/o-CLUP w/o-TSI w/o-COLP w/o-GP w/o-INSP w/o-SSC HTP Overall 0.68 0.69 0.70 0.71 0.72 0.73 0.74 ACC w/o-CLUP w/o-TSI w/o-COLP w/o-GP w/o-INSP w/o-SSC HTP Figure 3: Performance comparison between HTP and its variants on Table Level. codes as source style query. 
According to the results shown in Figure 3, HTP outperforms all its variants, proving the significance of our special designed prompts. Specifically, w/o-INSP and w/o-TSI underperform other variants, highlighting the need for dynamically generating customized prompts to capture fine-grained information and the internal structure of tables. Meanwhile, we can see w/o-INSP has a significant performance degradation on the chart type prediction, proving that instance-level prompt can effectively obtain perceptual information between table and different types of charts. Besides, we observe a decrease when removing the cluster-level prompt, which demonstrates the importance of sharing information within table patterns (w/o-CLUP). In addition, the performance degrades if the soft codes is not used, suggesting a comprehensive understanding of chart representation through capturing intraclass connections is crucial for enhancing model performance. Further, removing the table-column prompt results in a notable performance decline, which verifies the importance of understanding table structure and enhancing column-level insights. 5.4 Comparison with Adaptation Methods across Model Scales To evaluate the effectiveness and adaptability of our model, we conduct a comprehensive comparison between HTP and the following methods at differ7 13256 Model Method Field Level Table Level XY Chart Type Overall XY Chart Type Overall Bloom-Small (560M) Fine-Tuning 0.953 0.745 0.734 0.918 0.701 0.649 Prompt Tuning 0.899 0.550 0.534 0.843 0.549 0.492 Prefix Tuning 0.924 0.592 0.584 0.869 0.584 0.550 LoRA 0.927 0.624 0.613 0.887 0.577 0.536 HTP 0.902 0.714 0.696 0.845 0.746 0.679 Bloom-Medium (1.1B) Fine-Tuning 0.940 0.782 0.772 0.896 0.803 0.755 Prompt Tuning 0.876 0.682 0.656 0.834 0.615 0.543 Prefix Tuning 0.912 0.746 0.726 0.858 0.765 0.697 LoRA 0.915 0.758 0.739 0.864 0.777 0.703 HTP 0.928 0.780 0.761 0.874 0.802 0.725 Bloom-Large (3B) Fine-Tuning 0.950 0.832 0.820 0.915 0.834 0.787 Prompt Tuning 0.817 0.670 0.672 0.845 0.748 0.676 Prefix Tuning 0.923 0.841 0.818 0.897 0.804 0.763 LoRA 0.936 0.813 0.798 0.897 0.820 0.764 HTP 0.938 0.850 0.836 0.899 0.831 0.773 Table 2: Performance comparison between HTP and adaptation methods at different model scales. The best results except Fine-Tuning are in bold, and the second are underlined. ent model scales, including Fine-Tuning, Prompt Tuning (Lester et al., 2021), Prefix Tuning (Li and Liang, 2021) and LoRA (Hu et al., 2022). From Table 2, we observe that: (1) Across the majority of metrics and model scales, our model significantly outperforms other partial parameter tuning methods. This indicates that our proposed promptbased framework can comprehensively unearth the implicit insights of tables and more efficiently harness the potential of LLMs for visualization recommendation. (2) Our HTP outperforms parameterefficient tuning methods by a large margin in smallscale LLMs. For example, HTP achieves absolute improvements of 8.3% and 12.9% over the best-performing PEFT method in overall metric at field level and table level respectively. (3) We can see that there is still a significant gap between Fine-Tuning and parameter-efficient tuning methods. However, our method achieves comparable performance to Fine-Tuning across models of various scales, and even outperforms Fine-Tuning in most metrics. In the 560M model, both the chart type and overall accuracy at table level exceed FineTuning. 
Moreover, in the 3B model, these two metrics also surpass Fine-Tuning performance at the field level. This demonstrates that HTP can better enrich the semantic features of table data and align highly structured table data with the pre-training paradigm of LLMs without loss of information. Table Level Field Level 0.86 0.88 0.90 0.92 0.94 0.96 XY Sample Random Table Level Field Level 0.76 0.78 0.80 0.82 0.84 Chart Type Sample Random Table Level Field Level 0.70 0.72 0.74 0.76 0.78 Overall Sample Random Figure 4: Performance comparison between different prompt initialization methods. 4 6 8 10 12 Table Level 0.7 0.8 0.9 1.0 ACC Overall Chart Type XY 4 6 8 10 12 Field Level 0.7 0.8 0.9 1.0 1.1 ACC Overall Chart Type XY Figure 5: Parameter sensitivity of the number of table pattern clusters K. 5.5 Parameters Analysis 5.5.1 Prompt Initialization We explore the impact of soft prompt initialization in our study. Our investigation focuses on two distinct initialization strategies: (1) Randombased Initialization. Soft prompts are generated by randomly initializing their embeddings. (2) Sample Vocabulary-based Initialization. We sample high-frequency word chunks from the training set dictionary, concatenate them in order of frequency, then trim to match the prompts’ length. We use the embeddings of the trimmed version as initial values. As shown in Figure 4, we can see that HTP is robust to the prompt initialization method, achieving comparable results with both initialization choices. 8 13257 Model Field Level Table Level XY Chart Type Overall XY Chart Type Overall GPT-4 0.832 0.403 0.385 0.727 0.453 0.427 HTP 0.881 0.722 0.683 0.841 0.753 0.693 Table 3: Performance comparison between HTP and GPT-4. 5.5.2 Number of Table Patterns Clusters K As depicted in Figure 5, more clusters of table pattern, namely increasing K, enables HTP to capture more complex and diverse table pattern information. However, if K gets too large, there may be insufficient corresponding table groups in the table dataset to support cluster representation learning, and some superfluous clusters may introduce noise and undermine performance instead. 5.6 Comparison with GPT-4 Since the method we propose involves inputs (tables) and outputs (chart codes) that are both in text format, without any image inputs or outputs, to compare the performance of our HTP and GPT-4, we conduct an experiment with small data batches using GPT-4 (non-vision). We randomly selected 150 samples to conduct the experiment. As shown in Table 3, our HTP consistently outperforms GPT4 across all tasks, with a particularly significant margin of excellence in chart-type recommendation tasks. This suggests that HTP demonstrates outstanding ability in understanding and recommending visualizations that are most suitable for the specific structure of the data at hand. 6 Conclusion In this paper, we introduced a novel Hierarchical Table Prompt-based reprogramming Framework, called HTP, to enhance the visualization recommendation process through Large Language Models (LLMs). The HTP harnessed potential of LLMs through four level of prompts, aiming at extracting semantic information from table through comprehensive perspectives. In this way, HTP effectively unlocked the potential of LLM for transforming table into insightful charts. Extensive experimental results demonstrated the superior performance of HTP framework. 7 Limitations We briefly mention some limitations of our work. First, we have only used a single self-collected dataset. 
This is mainly due to the fact that the only open-source dataset Plotly Corpus, collected by (Hu et al., 2019), was not accessible. Moreover, we mainly consider data-encodings attributes including visualization types and the arrangement of x-/y-axes while non-data-encoding attributes such as layouts and colors are not considered, which is because the evaluation of non-data-encoding attributes is relatively subjective, and our dataset predominantly utilizes default settings for these attributes. Acknowledgements This work was supported in part by the grants from National Natural Science Foundation of China (No.62222213, U22B2059, 62072423, 92370204). References Miao Chen, Xinjiang Lu, Tong Xu, Yanyan Li, Jingbo Zhou, Dejing Dou, and Hui Xiong. 2022. Towards table-to-text generation with pretrained language model: A table structure understanding and text deliberating approach. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 8199–8210. Liying Cheng, Xingxuan Li, and Lidong Bing. 2023. Is gpt-4 a good data analyst? arXiv preprint arXiv:2305.15038. Victor Dibia. 2023. Lida: A tool for automatic generation of grammar-agnostic visualizations and infographics using large language models. arXiv preprint arXiv:2303.02927. Victor Dibia and Ça˘gatay Demiralp. 2019. Data2vis: Automatic generation of data visualizations using sequence-to-sequence recurrent neural networks. IEEE computer graphics and applications, 39(5):33– 46. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. 2020. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9729–9738. Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. Lora: Low-rank adaptation of large language models. In The Tenth International Conference on Learning Representations. 9 13258 Kevin Zeng Hu, Michiel A. Bakker, Stephen Li, Tim Kraska, and César A. Hidalgo. 2019. Vizml: A machine learning approach to visualization recommendation. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, page 128. Baoshuo Kan, Teng Wang, Wenpeng Lu, Xiantong Zhen, Weili Guan, and Feng Zheng. 2023. Knowledge-aware prompt tuning for generalizable vision-language models. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15670–15680. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations. Hyung-Kwon Ko, Hyeon Jeon, Gwanmo Park, Dae Hyun Kim, Nam Wook Kim, Juho Kim, and Jinwook Seo. 2024. Natural language dataset generation framework for visualizations powered by large language models. In Proceedings of the CHI Conference on Human Factors in Computing Systems, pages 1–22. Nataliya Kosmyna, Jussi T Lindgren, and Anatole Lécuyer. 2018. Attending to visual stimuli versus performing visual imagery as a control strategy for eeg-based brain-computer interfaces. Scientific reports, 8(1):13222. Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Hao-Tian Li, Yong Wang, Songheng Zhang, Yangqiu Song, and Huamin Qu. 2021. Kg4vis: A knowledge graph-based approach for visualization recommendation. IEEE Transactions on Visualization and Computer Graphics, 28(1). 
Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, pages 4582–4597. Yuyu Luo, Xuedi Qin, Nan Tang, and Guoliang Li. 2018. Deepeye: Towards automatic data visualization. In 2018 IEEE 34th international conference on data engineering (ICDE), pages 101–112. Jock Mackinlay. 1986. Automating the design of graphical presentations of relational information. Acm Transactions On Graphics (Tog), 5(2):110–141. Jock Mackinlay, Pat Hanrahan, and Chris Stolte. 2007. Show me: Automatic presentation for visual analysis. IEEE transactions on visualization and computer graphics, 13(6):1137–1144. James MacQueen et al. 1967. Some methods for classification and analysis of multivariate observations. In Proceedings of the fifth Berkeley symposium on mathematical statistics and probability, volume 1, pages 281–297. Paula Maddigan and Teo Susnjak. 2023. Chat2vis: Generating data visualisations via natural language using chatgpt, codex and gpt-3 large language models. Ieee Access. Daniel Perry, Bill Howe, Alicia Key, and C. Aragon. 2013. Vizdeck: Streamlining exploratory visual analytics of scientific data. Jonas Pfeiffer, Aishwarya Kamath, Andreas Rücklé, Kyunghyun Cho, and Iryna Gurevych. 2021. Adapterfusion: Non-destructive task composition for transfer learning. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 487–503. Jorge Poco and Jeffrey Heer. 2017. Reverse-engineering visualizations: Recovering visual encodings from chart images. Computer Graphics Forum, page 353–363. Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In Proceedings of the 38th International Conference on Machine Learning, pages 8748–8763. Steven F Roth, John Kolojejchick, Joe Mattis, and Jade Goldstein. 1994. Interactive graphic design using automatic presentation knowledge. In Conference on Human Factors in Computing Systems, pages 112– 117. Arvind Satyanarayan, Dominik Moritz, Kanit Wongsuphasawat, and Jeffrey Heer. 2017. Vega-lite: A grammar of interactive graphics. IEEE Transactions on Visualization and Computer Graphics, page 341–350. Lya Hulliyyatus Suadaa, Hidetaka Kamigaito, Kotaro Funakoshi, Manabu Okumura, and Hiroya Takamura. 2021. Towards table-to-text generation with numerical reasoning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1451–1465. Kanit Wongsuphasawat, Dominik Moritz, Anushka Anand, Jock Mackinlay, Bill Howe, and Jeffrey Heer. 2016. Voyager: Exploratory analysis via faceted browsing of visualization recommendations. IEEE Transactions on Visualization and Computer Graphics, page 649–658. BigScience Workshop, Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ili´c, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, et al. 2022. Bloom: A 176bparameter open-access multilingual language model. arXiv preprint arXiv:2211.05100. 10 13259 Aoyu Wu, Yun Wang, Mengyu Zhou, Xinyi He, Haidong Zhang, Huamin Qu, and Dongmei Zhang. 2021. 
Multivision: Designing analytical dashboards with deep learning based recommendation. IEEE Transactions on Visualization and Computer Graphics, 28(1):162–172. Hongzhi Zhang, Yingyao Wang, Sirui Wang, Xuezhi Cao, Fuzheng Zhang, and Zhongyuan Wang. 2020. Table fact verification with structure-aware transformer. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1624–1629. Mengyu Zhou, Qingtao Li, Yuejiang Li, Shi Han, and Dongmei Zhang. 2020. Table2charts: Learning shared representations for recommending charts on multi-dimensional data. arXiv preprint arXiv:2008.11015. Appendix A Data Cleaning We developed our dataset using a data cleansing pipeline, outlined as follows: (1) Format Verification. Since the Plotly community stores table and chart data in json format, we will initially filter out data that cannot be parsed in json format. (2) Completeness Assessment. Data samples lacking either non-empty source table data or chart data are discarded. Additionally, if data columns corresponding to the x and y axes in the chart data are missing in the table data, if table data contains empty columns, or if chart data lacks specified chart type or axes, the sample is excluded. (3) Data Deduplication. In the Plotly community, each visualization pair is uniquely identified by a “fid”. When encountering multiple data entries with the same “fid”, only the first one is retained. Since many tables are slight modifications of each other, we calculate their basic features, integrate these values into a single feature identifier, and retain only the first entry for each set of samples sharing this feature identifier. B Baselines To evaluate the proposed framework, we adopt six baselines for comparison. Here are the descriptions of these baselines: • VizML (Hu et al., 2019), which formulates visualization recommendation task as a series of classification challenges, deploying distinct Neural Network-based models for each classification task. We train VizML in our dataset on its Mark Type task (chart type predixtion task and Is on Xaxis or Y-axis task). In our training process, we divide the dataset by tables, meaning all columns from the same table are assigned to the same set. • KG4Vis (Li et al., 2021), which leverages knowledge graphs to recommend from datasetvisualization pairs. • MultiVision (Wu et al., 2021), which design two scoring network for recommending single and multiple-view visualizations. Because we do not recommend multiple-view visualizations, only the first network is used to train on our dataset. Specially, due to the incomplete disclosure of all training parameters, we set the batch size to 4096 and the learning rate to 3e-3, conducting training over 30 epochs. Because our dataset contains six types of charts, so we set the initialization parameter num_class of the ChartTypeLSTM model to 6. The remaining model hyperparameters are consistent with those in the source code. • Data2Vis (Dibia and Demiralp, 2019), which leverages an LSTM-based neural translation model to generate json encoded visualizations in an autoregressive way. Following Data2Vis, for each table-chart pair, three training samples are generated by sampling three rows from table (three different data rows with the same encoded chart), resulting in a total of 279, 609 pairs which are used for training. The vocabulary sizes of source and target are 95 and 38. 
• Table2Charts (Zhou et al., 2020), which use deep Q-Network to fill the predefined chart templates by estimating the next token or action. Specificallybecause the chart templates defined in table2charts—scatter, line, bar, and pie match the types of charts in our dataset, we retain only these four types of data for training. • DeepEye (Luo et al., 2018), which provides two public models (ML and rule-based) without training scripts. Thus, we evaluate its models on our test set and report the best results between two models. For all baselines, all hyperparameter settings were based on the values reported in the original paper for optimal results, unless otherwise specified. C Evaluation Details Let N denote the total number of tables, Ci denote the number of columns in table i, correctij xy and correctij type as binary indicators for column j in table i, where correctij xy=1 if the xy axis allocation 11 13260 for column j is correct and 0 otherwise. Similarly, correctij xy=1 if the chart type for column j is correctly predicted, and 0 otherwise. At the table level, when calculating metrics, the entire table is treated as a single unit; a prediction is only considered correct if the X-Y axis allocation and chart type for all columns within the table are accurately predicted, as follows: Accxy= PN i=1(QCi j=1 correctij xy) N , Accchart type= PN i=1(QCi j=1 correctij type) N , Accoverall= PN i=1(QCi j=1(correctij xy & correctij type)) N . (7) While at the field level, we calculate metrics on a per-column basis: M = N X i=1 Ci, Accxy = PN i=1 PCi j=1 correctij xy M , Accchart type = PN i=1 PCi j=1 correctij type M , Accoverall = PN i=1 PCi j=1(correctij xy & correctij type) M . (8) D Implementations To prevent bias towards imbalanced data, we randomly split the dataset by chart type, allocating 80% for training, 10% for validation, and 10% for testing for each chart category. We run all experiments on NVIDIA RTX A100 GPUs, and use the Adam (Kingma and Ba, 2015) optimizer with β1 = 0.9 and β2 = 0.999 to optimize models. For contrast learning , we adopt CLIP’s (Radford et al., 2021) visual encoder to extract the global features of input images. The temperature parameter, memory bank size, batch size and learning rate are set to 0.07, 2048, 1024, and 9e-4, respectively. We employ K-Means (MacQueen et al., 1967) to cluster table features, to minimize the impact of random initialization of initial cluster centers as much as possible, we execute the algorithm five times and select the iteration with the lowest Sum of Squared Errors (SSE) as the final result. For our chart generation model, we primarily utilize the pre-trained Bloom (Workshop et al., 2022) model with 1.1 billion parameters as our backbone. For training our chart generation model, we set the learning rate, batch size to 1e-5 and 4. The length of generallevel, instance-level, column-level, cluster-level prompts are set to 20, 100, 1, and 15. In addition to parametric experiments, we fixed the number XY 0.900 0.905 0.910 0.915 0.920 0.925 0.930 0.935 0.940 ACC w/o-CLUP w/o-TSI w/o-COLP w/o-GP w/o-INSP w/o-SSC DIHP Chart Type 0.72 0.74 0.76 0.78 0.80 ACC w/o-CLUP w/o-TSI w/o-COLP w/o-GP w/o-INSP w/o-SSC DIHP Overall 0.70 0.71 0.72 0.73 0.74 0.75 0.76 0.77 ACC w/o-CLUP w/o-TSI w/o-COLP w/o-GP w/o-INSP w/o-SSC DIHP Figure A1: Performance comparison between HTP and its variants on Field Level. of clusters for cluster prompt at 8 and the prompt initialization method to Sample Vocabulary-based Initialization. 
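The two evaluation granularities defined in Appendix C (Eqs. 7 and 8) amount to the following computation; the variable names are illustrative, with xy_ok[i][j] and type_ok[i][j] playing the role of the binary indicators for the XY allocation and chart type of column j in table i.

def accuracies(xy_ok, type_ok):
    # xy_ok / type_ok: list of lists of booleans, one inner list per table,
    # one entry per column of that table.
    N = len(xy_ok)                                    # number of tables
    M = sum(len(cols) for cols in xy_ok)              # total number of columns

    # Table level (Eq. 7): a table counts only if every one of its columns is correct.
    tab_xy = sum(all(cols) for cols in xy_ok) / N
    tab_type = sum(all(cols) for cols in type_ok) / N
    tab_overall = sum(all(x and t for x, t in zip(xs, ts))
                      for xs, ts in zip(xy_ok, type_ok)) / N

    # Field level (Eq. 8): every column is scored independently.
    fld_xy = sum(map(sum, xy_ok)) / M
    fld_type = sum(map(sum, type_ok)) / M
    fld_overall = sum(sum(x and t for x, t in zip(xs, ts))
                      for xs, ts in zip(xy_ok, type_ok)) / M

    return {"table": (tab_xy, tab_type, tab_overall),
            "field": (fld_xy, fld_type, fld_overall)}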
In the generation stage, we adopt topp sampling as the default decoding method with a temperature of 0.1 and a top-p = 0.75. E Table Features for Clustring We use table features that introduced in VizML (Hu et al., 2019) for clustering. Following VizML, we first extract 81 single-column features, comprising both 50 continuous features and 31 categorical features. These features are categorized into four main groups: the data type of the column (Types), statistical characteristics of the column’s values such as distribution and outliers (Values), the column’s name (Names), and the column’s row count (Dimensions). To describe the relationship between pairs of columns, we employ 30 pairwise-column features. In the final step, we utilize 16 aggregation functions to combine both pairwise-column and single-column features, yielding 841 dataset-level features for clustering. F Discussion about Permutations To investigate the impact of the permutations of table rows and columns on the results, we conducted relevant experiments. The results indicate that the permutations of tables can have a certain negative impact on the inference results of our model. We will explore ways to mitigate this issue in our future work. G More Results about Ablation Analysis Due to space constraints and the fact that trends at both the table level and field level are largely similar, we have chosen to present only the table level results in the main text. For completeness, we also include the field level results in Figure A1. The analysis of ablation studies at the field level follows a similar pattern to that at the table level. 12 13261 H Image Dataset We collect 125,121 chart images from the public web. These include 28100 bar images, 9838 box images, 7319 histogram images, 38725 line images, 37215 scatter images and 3924 pie images. 13 13262
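Putting Appendices D and E together, the cluster-level prompt assignment reduces to K-Means over the 841 aggregated table features, run five times with the lowest-SSE solution retained; a minimal scikit-learn sketch is given below. The feature extraction itself follows VizML and is not shown, and k = 8 is the setting reported above.

import numpy as np
from sklearn.cluster import KMeans

def cluster_table_patterns(table_features: np.ndarray, k: int = 8) -> np.ndarray:
    # table_features: (num_tables, 841) VizML-style aggregated dataset-level features.
    # n_init=5 runs K-Means five times from different centroid seeds and keeps the
    # solution with the lowest inertia (sum of squared errors).
    km = KMeans(n_clusters=k, n_init=5, random_state=0)
    return km.fit_predict(table_features)             # cluster id per table

# Each cluster id indexes the shared cluster-level prompt pool {p_c^i}, i = 1..k.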
2024
716
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13263–13282 August 11-16, 2024 ©2024 Association for Computational Linguistics HOLMES: Hyper-Relational Knowledge Graphs for Multi-hop Question Answering using LLMs Pranoy Panda *1 Ankush Agarwal1 Chaitanya Devaguptapu1 Manohar Kaul1 Prathosh A P1, 2 1Fujitsu Research India 2Indian Institute of Science, Bengaluru {pranoy.panda, ankush.agarwal, manohar.kaul}@fujitsu.com [email protected], [email protected] Abstract Given unstructured text, Large Language Models (LLMs) are adept at answering simple (single-hop) questions. However, as the complexity of the questions increase, the performance of LLMs degrade. We believe this is due to the overhead associated with understanding the complex question followed by filtering and aggregating unstructured information in the raw text. Recent methods try to reduce this burden by integrating structured knowledge triples into the raw text, aiming to provide a structured overview that simplifies information processing. However, this simplistic approach is query-agnostic and the extracted facts are ambiguous as they lack context. To address these drawbacks and to enable LLMs to answer complex (multi-hop) questions with ease, we propose to use a knowledge graph (KG) that is context-aware and is distilled to contain query-relevant information. The use of our compressed distilled KG as input to the LLM results in our method utilizing up to 67% fewer tokens to represent the query relevant information present in the supporting documents, compared to the state-of-the-art (SoTA) method. Our experiments show consistent improvements over the SoTA across several metrics (EM, F1, BERTScore, and Human Eval) on two popular benchmark datasets (HotpotQA and MuSiQue). 1 Introduction Multi-Hop Question Answering (MHQA) is a field that presents unique challenges in the realm of natural language processing. To illustrate the challenges of MHQA, consider extracting information from data arising during board meetings. While current technologies (such as LLMs) are proficient at addressing simple (single-hop) questions, such as "How many board meetings were held in the last twelve months?", they falter when confronted with *Corresponding author : [email protected] Figure 1: Multi-Dimensional Improvements: Our method (with GPT-4 as reader LLM) achieves SoTA results on several datasets and multiple Multi-hop QA metrics. EM: Exact-Match with the gold answer, SelfAware EM: Confidence-aware EM, BertScore (Zhang et al., 2019): Semantic similarity between predicted and gold answer; Query Info Efficiency: Efficiency of representing query-relevant information in the supporting documents - inversely proportional to the input token count for the reader LLM. complex (multi-hop) questions. An example of a multi-hop question is "For the board meeting with the most divided votes in the last twelve months, what was the agenda, who voted against it, and by what margin did it pass or fail?". Answering this question requires a series of interconnected steps: first, identifying the meeting with the most divided votes; next, determining the main agenda of that meeting; then, listing the members who voted against it; and finally, calculating the margin by which it was approved or rejected. Each of these steps, or "hops", demands not only the retrieval of additional information but also a nuanced understanding of the context and the relationships between various entities. 
This complexity underscores the challenge of MHQA, where the goal is to navigate through layers of information to arrive at the answer. Our work is aimed at tackling these multifaceted questions.

Figure 1: Multi-Dimensional Improvements: Our method (with GPT-4 as reader LLM) achieves SoTA results on several datasets and multiple multi-hop QA metrics. EM: Exact-Match with the gold answer; Self-Aware EM: Confidence-aware EM; BertScore (Zhang et al., 2019): Semantic similarity between predicted and gold answer; Query Info Efficiency: Efficiency of representing query-relevant information in the supporting documents - inversely proportional to the input token count for the reader LLM.

LLMs have emerged as a powerful set of tools for various NLP tasks (Zhang et al., 2023; Wadhwa et al., 2023; Robinson et al., 2022; Xu et al., 2023). However, despite their prowess, LLMs often fall short when confronted with complex, multi-hop questions (see Figure 1 for an example). We hypothesize that the degradation in performance is due to the complexity involved in filtering and contextually aggregating the information present in the unstructured text.

Figure 2: Multi-Hop QA Case Study: This figure illustrates a bridge-type multi-hop question from the HotpotQA dataset ("Who was the main vocal of 'Set the Fire to the Third Bar'?") comparing our method with the baselines. It highlights our method's accurate identification of 'Snow Patrol' as the crucial bridge entity and its subsequent identification of the lead vocalist (Gary Lightbody), a feat not achieved by the baselines (GPT-4 and Struct-QA + GPT-4 both answer Martha Wainwright).
To address this challenge with unstructured text, recent methods, such as StructQA (Li and Du, 2023), extract structured knowledge in the form of KG triples from raw text and couple it with Chain-of-Thought (CoT) reasoning (Wei et al., 2022). However, the extracted KG triples are not dependent on the query and lack the context under which these facts are valid. To understand the ambiguity caused by the lack of context, consider this KG triple: {subject: "Apple", relation: "prices rose", object: "10%"}. Without additional context, it is difficult to determine whether the entity "Apple" refers to the fruit or the company. Moreover, StructQA provides both the extracted KG triples and the raw text as input to the LLM, leading to significantly longer prompts (see Table 3) and information redundancy.

Our method, HOLMES (named after Sherlock Holmes for its ability to deduce query-relevant information from unstructured text), addresses these challenges by creating a query-focused, context-aware KG and using it as the sole input for the LLM, i.e., without inputting the raw text. Specifically, we (i) synthesize a hyper-relational KG from unstructured text that captures both the facts and the context under which these facts are valid, and (ii) prune the hyper-relational KG using a knowledge schema that encodes the type of information necessary to answer the query. These steps collectively furnish the LLM with a curated set of relevant facts. As an example, we provide a case study on the HotpotQA dataset in Figure 2. To rigorously evaluate our approach, we use two challenging multi-hop QA datasets, HotpotQA (Yang et al., 2018) and MuSiQue (Trivedi et al., 2022b), and experiment with three SoTA reader LLMs for QA: GPT-3.5, GPT-4, and Gemini-Pro. We compare with the baselines across 6 metrics (EM, F1, Precision, Recall, Human-Eval, and BERTScore) and achieve significant gains. Our contributions are as follows:

• A new multi-hop QA approach that transforms unstructured text into a hyper-relational KG using a query-derived schema, which serves as the input to the LLM.

• A significant improvement over the SoTA multi-hop QA method (Li and Du, 2023): 18.7% and 20% in EM scores on the HotpotQA dataset, and 26% and 14.3% on the MuSiQue dataset for GPT-3.5 and GPT-4, respectively (see Table 1).

• Using our query-focused hyper-relational KG, we use up to 67% fewer tokens to represent query-relevant information than the current SoTA method, and up to 60% fewer tokens w.r.t. the original supporting documents (see Table 3).

2 Related Work

Multi-hop Question Answering (MHQA) Recent achievements in addressing straightforward questions (Lan et al., 2019), coupled with the availability of high-quality MHQA datasets (Yang et al., 2018; Trivedi et al., 2022b), have prompted a shift towards tackling multi-hop questions.
Existing works in MHQA have adopted various approaches, including: (i) constructing dynamic graphs (Qiu et al., 2019), hierarchical graphs (Fang et al., 2020), and Abstract Meaning Representation (AMR)-based graphs (Deng et al., 2022), as well as leveraging KGs (Li and Du, 2023); (ii) employing end-to-end differentiable learning methods, utilizing multi-task Transformers for retrieval, reranking, and prediction in an iterative manner (Qi et al., 2021), or using Recurrent Neural Networks (RNNs) to sequentially retrieve documents from a graph of entities (Asai et al., 2020); and (iii) dynamically converting multi-hop questions into single-hop questions by generating subsequent questions based on the answers to previous ones (Perez et al., 2020), or by updating single-hop questions in the embedding space (Sun et al., 2021). In contrast to these methodologies, our work introduces a novel approach by constructing a hyper-relational KG from the documents, which is then used exclusively to answer questions.

Relational Graphs for MHQA While retrieving and relying on purely unstructured text is a go-to approach for single-hop question answering (Watanabe et al., 2017), it is not ideal for MHQA, because unstructured text makes multi-step, comprehensive reasoning difficult. This issue is often addressed by constructing structured sources of information from raw text, in most cases KGs (Li and Du, 2023). By using KGs, the relational information among question concepts and answers can be easily captured (Dong et al., 2023). However, one of the main drawbacks of KGs is the lack of context, i.e., they focus only on triples, overlooking qualifiers important for inference. To avoid overlooking qualifiers, our method builds hyper-relational KGs to serve as the input to LLMs for MHQA, as explained in detail in Sect 4.2.1. To the best of our knowledge, our work is the first to use hyper-relational KGs for MHQA.

Training-Free MHQA Previously, KGs have been applied to MHQA in two ways: training-based (Sun et al., 2018, 2019; Yavuz et al., 2022; Ramesh et al., 2023) and training-free (Li and Du, 2023), also known as prompting. In these methods, the KGs used were either human-curated (Speer et al., 2017; Bollacker et al., 2008) or training-based (Izacard and Grave, 2021; Bosselut et al., 2019). In some cases, LLMs were employed to extract triples from documents to form graphs (Li and Du, 2023; Carta et al., 2023). However, no prior work automatically creates a schema for the graph and uses the graph for MHQA, which is a unique aspect of our approach.

3 Preliminaries

Hyper-Relational Knowledge Graph - A hyper-relational KG H is an enriched form of a traditional KG that allows multiple relationships between entities to be represented. Let A denote a collection of attribute sets, where each element is a different type of attribute set. For example, A_ts ∈ A is a set of timestamps, while A_dt ∈ A is a set of document titles that serve as additional attributes. We then define our hyper-relational KG H as: H = {(e_s, r, e_o, a) | e_s, e_o ∈ E, r ∈ R, a ∈ A_dt}, where E is the set of named entities, R is the set of relations, and A_dt is the set of document titles that act as additional attributes. This structure not only links entities e_s and e_o via relation r but also integrates the corresponding document title attribute a ∈ A_dt, offering a more nuanced description. We refer to (e_s, r, e_o, a) as a hyper triple.
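To make the hyper triple representation concrete, the following is a minimal Python sketch of how a hyper-relational KG could be stored; the class and field names are illustrative choices for this sketch, not the paper's implementation. It also shows how the document-title attribute disambiguates the "Apple" example from the introduction.

from typing import NamedTuple

class HyperTriple(NamedTuple):
    subject: str     # e_s, a named entity
    relation: str    # r
    obj: str         # e_o
    doc_title: str   # a, the attribute from A_dt providing context

# A tiny hyper-relational KG H: the doc_title qualifier resolves the
# ambiguity of a bare triple such as ("Apple", "prices rose", "10%").
H = [
    HyperTriple("Apple", "prices rose", "10%", "Apple Inc. Q3 earnings report"),
    HyperTriple("Gary Lightbody", "is lead vocalist of", "Snow Patrol", "Snow Patrol"),
    HyperTriple("Set the Fire to the Third Bar", "features", "Martha Wainwright",
                "Set the Fire to the Third Bar"),
]

def triples_about(entity: str, kg: list[HyperTriple]) -> list[HyperTriple]:
    """Return hyper triples in which the entity appears as subject or object."""
    return [t for t in kg if entity in (t.subject, t.obj)]

for t in triples_about("Set the Fire to the Third Bar", H):
    # Verbalization similar in spirit to Section 4.2.3: context first, then the fact.
    print(f"[{t.doc_title}] {t.subject} -- {t.relation} --> {t.obj}")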
Graph Schema - A graph schema G_S, defined as G_S = {(ẽ_s, r, ẽ_o) | ẽ_s, ẽ_o ∈ E_T, r ∈ R}, outlines the structure of a KG through entity types E_T and relations R. Each triple in the schema specifies the permissible entity types ẽ_s (subject) and ẽ_o (object) for each relation r, serving as a blueprint for organizing knowledge.

Figure 3: Method Overview: Our method has three key steps - (i) Query-Dependent Structured Knowledge Discovery (Section 4.2.1), (ii) Knowledge Schema Construction for Information Refinement (Section 4.2.2), and (iii) Reader LLM Prompt Construction (Section 4.2.3). Step (i) involves creating an entity-document graph (1.a in the figure) and performing a level-order traversal on it to obtain a hyper-relational KG (2 in the figure). Next, in step (ii), we create a query-aligned knowledge schema from the question and an auxiliary graph schema (1.b in the figure), and use it to prune the hyper-relational KG (3 in the figure), which forms the input for the LLM. The figure walks through the running example "What was the former band of the member of Mother Love Bone who died just before the release of 'Apple'?", for which the output answer is "Malfunkshun".

Named Entity and Relation Extraction - Named entity extraction identifies textual spans referring to specific entities, while relation extraction classifies semantic relationships between entities in text, enabling a structured representation of unstructured data. In this work, we harness LLMs' capabilities for these tasks. Pre-trained on extensive text, these models accurately classify named entities and extract relations with minimal examples (Li and Du, 2023; Wang et al., 2023; Wadhwa et al., 2023), showcasing their proficiency in contextual understanding. See Appendices A.4.1 and A.4.2 for the detailed prompts and few-shot examples used in our method
4 Methodology The key idea of our method is to identify the subset of documents that contain the answer to the multihop query, and subsequently extract context-aware structured information from them (where context comes from the documents). We further perform a refinement step to retain query-relevant information. Below, we begin with a brief description of the problem statement in Section 4.1, followed by an explanation of our method in Section 4.2. The overview of our approach can be seen in Figure 3. 4.1 Problem Statement In this work, we address the challenge of MHQA in a zero-shot setting. This implies that we assume no prior domain-specific labeled data, rendering the problem both challenging and practically useful (Davison, 2020). Moreover, we operate in a training-free setting, leveraging the reasoning capabilities of LLMs for this task. Formally, the task is to extract the answer aq from a given natural language question q and a collection of supporting documents Sq. The set of supporting documents is defined as Sq := {D1 q, D2 q, . . . , Dm q }, where each document Dj q := (tj q, pj q) consists of a title tj q and the associated passage pj q. Given that q is a multi-hop question, deducing its answer requires the aggregation of information from at least two documents Di q, Dj q ∈Sq (where i ̸= j). 4.2 Proposed Method (HOLMES) We begin by traversing the supporting documents to identify the subset of documents relevant to the query, and extract structured, context-aware information (Section 4.2.1). This information is then refined using a query-based knowledge schema (Section 4.2.2). Next, we format the distilled graph for the reader LLM prompt, including a complementary fact retrieval step within the prompt to ensure higher coverage of query-relevant information. (Section 4.2.3). 4.2.1 Query-Dependent Structured Knowledge Discovery Unstructured text, contains a complex web of facts and relationships forming a latent semantic graph. This graph represents interconnected factual information not explicitly structured but implied by relationships and named entities in the text. Our approach navigates this structure using the entitydocument graph—a bipartite graph linking documents and named entities. It helps uncover parts of the latent graph relevant to the given query. Entity-Document Graph Construction The entity-document graph has two node 13266 types—documents and entities—with a single edge connecting them. We begin by extracting named entities from supporting documents. Then, we establish edges between document and entity nodes, forming a bipartite graph that captures the connections between entities and the documents they appear in (See Figure 3 - 1.a ⃝for an example). Level Order Traversal and Structured Information Extraction We begin by extracting named entities from the query and use them for a levelorder/breadth-first traversal of the entity-document graph. Similar to Li and Du (2023); Wadhwa et al. (2023), we use LLMs to extract KG triples from document nodes, framing the task as a sequence-tosequence problem. By providing the LLM with a detailed prompt (a clear description of the task and output format) and few-shot examples, we guide the model to generate triples directly from raw text (see Appendix A.4.4 for prompts). After extracting triples, we enhance them into hyper-relational KG quadruples (or hyper triples) by appending the title of the source document to each triple. 
This title serves as an additional attribute, offering the LLM context to understand when the triple is valid. During the traversal, we filter hyper triples with the current entity node as either the subject or object, adding the counterpart entity to the traversal queue. This iterative process continues for a predefined number of hops k, where the value of k defines the query complexity, i.e., the number of facts required to answer the question. Example of a hyper-relational KG in Figure 3- 2⃝. When we perform the above specified k-hop traversal starting from the named entities in the query, we are effectively probing the latent semantic graph in the raw text by incrementally selecting subsets of documents related to the query. 4.2.2 Knowledge Schema Construction for Information Refinement The hyper-relational graph construction is rooted in the inference query. However, it contains some hyper triples which although are related to the named entities in the query, capture relationships that are not useful for answering it. For instance, a query might focus on a person’s occupation, but the graph also captures their hobbies. To eliminate such extraneous information, we construct a query-aligned knowledge schema (Figure 3 - 1.b ⃝shows an example) and then carry out a refinement step (refined output example in Figure 3 3⃝). Query-Aligned Graph Schema Creation The graph schema acts as a structural template, guiding the organization of information crucial for answering the given question. It is essential for capturing the types of entities and relations likely to form the backbone of the answer. For example, in response to the question "Who is the 2nd daughter of the 1st president of X?" the schema should capture relations such as {<Person>, daughter of, <Person>} and {<Person>, president of, <Country>}, representing the direct information sought by question. Our graph schema is populated using two sources: (i) We first derive schema elements from the inference query by identifying relations in it, and then using LLMs to estimate the subject and object entity types for each of these relations (prompt in Appendix A.4.3). This forms a schema with a direct alignment with the question’s intent. (ii) We enrich the schema with additional domainspecific relations to aid multi-hop reasoning, using an auxiliary graph schema derived from indomain questions (Appendix A.5). This includes a wider array of entity-relation pairs, such as {<Person>, child of, <Person>} and {<Person>, head of, <Country>}, to account for potential inferential steps when direct query matches are absent in supporting documents. We select schema triples with relations similar to the question’s (based on cosine similarity) to retrieve a relevant subset. While constructing this auxiliary schema incurs a one-time compute cost, its amortized cost is minimal over multiple queries, especially given it helps to reduce the input token length for the reader LLM by upto 60% w.r.t. original source documents. In cases where the inference query diverges from the auxiliary graph schema, we rely on the schema derived from the query. This ensures system flexibility, adapting to specific query requirements, while still benefiting from the broader knowledge in the auxiliary schema when applicable. Pruning Hyper-relational Knowledge Graph The final step in our methodology involves refining the constructed hyper-relational KG ‘H’ by pruning it using the query-aligned graph schema GS. 
This pruning process is essential to distill the graph to its most relevant components, thereby enhancing the efficiency and effectiveness of the reader LLM in generating answers. The pruning process begins by computing embeddings for the relations in both the hyper-relational KG and the graph schema. Let v_r denote the embedding of a relation r in the hyper-relational KG, and let u_r′ denote the embedding of a relation r′ in the graph schema. We compute the cosine similarity between each pair of relation embeddings:

sim(v_r, u_r′) = (v_r · u_r′) / (∥v_r∥ ∥u_r′∥),   (1)

where · denotes the dot product and ∥·∥ denotes the Euclidean norm. For each relation r_h in a hyper triple (e_s, r_h, e_o, a) ∈ H, we compute its highest similarity score w.r.t. any relation r_s in the schema:

score(r_h) = max_{r_s ∈ G_S} sim(v_{r_h}, u_{r_s}).   (2)

We compute these scores for all relations whose entity types in the hyper triple match those in the schema. We then select the p hyper triples with the highest scores. This gives us the pruned graph H′ = sort_score(H, p), where sort_score denotes the sorting operation based on the computed similarity scores and p is the number of hyper triples to retain.

4.2.3 Reader LLM Prompt Construction

Next, we create the prompt for the reader LLM. Each hyper triple in the pruned hyper-relational KG, H′, is verbalized into an English sentence, since LLMs are adept at understanding such text (Jiang et al., 2023). The verbalization process transforms the structured triple into natural language text. To be specific, we concatenate the items in the hyper triple into a long sentence marked by specific separation and boundary symbols. The resulting sentences are then arranged in descending order of their similarity scores w.r.t. the schema to form the input prompt. We order the facts by their relevance to the query (measured via similarity scores) because LLMs retrieve information from their input prompt best when the gold information is placed closest to the query (Cuconasu et al., 2024). As structured information extraction is an unsolved problem, some pertinent details may be missed in the input graph. To mitigate this, we include a verification step in the prompt, described below.

Complementary Fact Retrieval If the LLM identifies that facts about a particular set of named entities are missing from the input graph, we instruct it to list those named entities. We then fetch the corresponding documents from the entity-document graph and integrate them with the initial set of relevant facts. This process not only enriches the input for the LLM but also ensures that any missing query-relevant information is retrieved, enhancing the accuracy of the system's responses. We assume that a single-step verification is sufficient. Refer to Appendix A.4.5 for the reader LLM prompt.

5 Experimental Setup

5.1 Evaluation Details

Datasets: We use two benchmark multi-hop question-answering datasets, HotpotQA (Yang et al., 2018) and MuSiQue (Trivedi et al., 2022b), to evaluate our method. Table 9 displays the total number of samples in the training and development sets of both datasets. For our evaluation, we use the questions, context, and gold answers from the development sets of both HotpotQA and MuSiQue. To tune hyperparameters, we randomly select 50 questions from the development set and use them (following Trivedi et al., 2022a; Li and Du, 2023).
We create our test set by randomly sampling an additional 1000 questions from the development set of HotpotQA and 1200 questions from development set of MuSiQue. Its worth noting that our test set size is twice as big as StructQA (Li and Du, 2023). In both HotpotQA and MuSiQue datasets, for each question there are ten and twenty supporting documents available, respectively. Each document in both datasets contains a title and a passage of text. Baselines: We operate in a training-free setting, utilizing LLMs as the reader model (takes the question and corresponding supporting documents as input and gives an answer). We experiment with two popular LLMs - GPT-3.5, GPT-4 (Achiam et al., 2023) in the main paper, and report results with Gemini (Lee et al., 2023) in Appendix A.1. We compare our method against three baselines (i) StructQA (Li and Du, 2023): CoT (Wei et al., 2022) based SoTA method for MHQA with LLMs. We use the same prompts and methodology as outlined in their study. (ii) Base (with supp docs): To test the base multihop reasoning of the reader LLM, in this baseline, we feed the reader LLM with the question and supporting documents directly. (iii) Base (w/o supp docs): To test the parametric knowledge of the reader LLM, in this baseline, we feed it with just the question and elicit a response. All prompts used in our experiments are documented in the Appendix A.4 for reproducibility. Evaluation Metrics: We use Exact-Match (EM), Precision, Recall, and F1-Score as automatic metrics (Li and Du, 2023; Mavi et al., 2022) to measure correctness of the predicted answers. 13268 Datasets HotpotQA MuSiQue Methods EM (↑) F1 (↑) P (↑) R (↑) EM (↑) F1 (↑) P (↑) R (↑) Reader: gpt-4-1106-preview Base (w/o supp docs) 0.26 0.45 0.45 0.50 0.09 0.21 0.22 0.21 Base (with supp docs) 0.54 0.74 0.75 0.77 0.39 0.55 0.55 0.56 StructQA (Li and Du, 2023) 0.55 0.77 0.75 0.80 0.42 0.56 0.57 0.56 Our Method 0.66 0.78 0.80 0.79 0.48 0.58 0.59 0.59 Reader: gpt-3.5-turbo-1106 Base (w/o supp docs) 0.23 0.37 0.38 0.40 0.06 0.15 0.17 0.15 Base (with supp docs) 0.47 0.65 0.66 0.68 0.24 0.36 0.36 0.37 StructQA (Li and Du, 2023) 0.48 0.64 0.62 0.67 0.23 0.37 0.37 0.37 Our Method 0.57 0.69 0.69 0.70 0.29 0.38 0.39 0.37 Table 1: Multi-hop Reasoning Evaluation (Automatic Metrics): We report the Exact-Match (EM), F1, Precision (P) and Recall (R) scores of all methods in comparison on two MHQA datasets. We experiment with two SoTA reader LLMs for the QA task - GPT-4 and GPT-3.5. We report results on Gemini-pro in the Appendix A.1. The results indicate consistent and significant improvements across datasets, metrics and LLMs. Base: Only reader LLM; supp docs: supporting documents w.r.t. query For semantic evaluation of the predicted answers, we also compute Human Evaluation Score (H-Eval) and BERTScore (Zhang et al., 2019) on a random sample of 100 questions from the development set of the HotpotQA dataset. The Human Evaluation Scores are obtained by averaging scores from three annotators. We use a smaller subset of question for this analysis due to the resource-intensive nature of human evaluation. 5.2 Implementation Details Problem Setting We focus on the distractor setting (Yang et al., 2018; Trivedi et al., 2022b) for question answering. In this setting, the supporting documents for each question consist of a set of distractor documents, i.e., documents not useful for predicting the answer. This setting poses a challenge, demanding robustness to noise in input. 
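For readers unfamiliar with the automatic metrics above, here is a minimal sketch of answer-level Exact Match and token-overlap F1 as they are commonly computed for short-answer QA (SQuAD/HotpotQA-style answer normalization); the exact normalization used in our evaluation is not spelled out in the text, so this should be read as an assumption rather than our precise scoring script.

import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace (common QA normalization)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(pred: str, gold: str) -> float:
    return float(normalize(pred) == normalize(gold))

def f1_score(pred: str, gold: str) -> float:
    pred_toks, gold_toks = normalize(pred).split(), normalize(gold).split()
    common = Counter(pred_toks) & Counter(gold_toks)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_toks)
    recall = num_same / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Gary Lightbody", "gary lightbody"))             # 1.0
print(round(f1_score("the band Malfunkshun", "Malfunkshun"), 2))   # 0.67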
Knowledge Triple Extraction Both StructQA and our method use LLMs for knowledge triple extraction. Thus, for a fair comparison, we always use the same triple extractor LLM for both. Results in Table 1, use gpt-3.5-turbo-instruct for triple extraction. We further study the impact of different triple extractor LLM in Table 6. HOLMES hyperparameters: We use OpenAI embedding model (text-embedding-ada-002) to compute text embeddings. We set the value of k (number of levels in the level order traversal) to 4 across both the datasets in Table 1,6. Beyond 4 levels of traversal, performance remains the same as the complexity of the dataset vary from 2-4 hop (See Table 10). Similarly, we set the value of p (number of hyper triples to be retained after pruning) to 50 across all result tables (sensitivity analysis in the Appendix A.1). Both of these values were chosen by experimenting on 50 samples from the development sets of the respective datasets. Auxiliary Graph Schema Creation: We randomly sample 10,000 questions (without answers and supporting documents) from the training data of HotpotQA and MuSiQue, for creating the graph schema. We provide further details about the auxiliary schema creation in the Appendix A.5. 6 Results and Analysis Here, we study the performance of our method by investigating several key dimensions of multi-hop question answering (especially in the era of LLMs): (i) Multi-hop reasoning capability (a) w.r.t automatic metrics (b) w.r.t human & semantic metrics (ii) Performance w.r.t. different question types (a) Reasoning type-wise performance (b) Hop-wise performance (iii) Query Information efficiency (input token count for reader LLM) (iv) Measure of confident predictions We also study the impact of the LLM used for knowledge triple extraction, and conduct further studies in the App A.1 and a case study in App A.6. Multi-hop Reasoning: (a) Evaluation using Automatic Metrics - In Table 1 we report the performance of our method w.r.t. Exact Match (EM) and F1 scores for both datasets. We find that, across reader LLMs, our method consistently outperforms all baseline methods. This underscores the importance of our data organization and pruning process, retaining only the relevant information in input prompts for the reader LLMs. Notably, our method even outperforms the SoTA StructQA, which employs the CoT mechanism (Wei et al., 2022). 13269 Dataset HotpotQA Methods H-Eval (↑) B-Score (↑) Base 86.5 87.00 StructQA 87.5 85.20 Our Method 89.0 89.05 Table 2: Multi-hop Reasoning Evaluation (Semantic Metrics): Human evaluation (H-Eval) on 100 samples & BERTScore (B-Score) for 1000 samples from HotpotQA. (a) Human and Semantic Evaluation As generative models can generate long worded answers, its important to semantically evaluate the predicted answers. Thus, we use BertScore and human evaluators to judge the correctness of the predicted answers. We use GPT-4 reader based responses for this study and report the results in Table 2. We observe that our method achieves a 1-2 point improvement on both the metrics w.r.t. the baselines, confirming the efficacy of our method. Query Information Efficiency In multi-hop question answering, where reader LLMs like GPT-4 are employed, the presence of irrelevant information poses a significant challenge because (i) it increases the computational load (and thus the cost) (ii) and complicates the LLM’s task of connecting disparate pieces of information across documents. 
Thus, efficiently managing input data by filtering out unnecessary content becomes crucial to both performance and cost-effectiveness. To quantify this factor, we measure the reader input token count and use an efficiency metric for input tokens. The query information effiency (or reader input token efficiency) metric is a normalized score between zero and one, computed through min-max normalization of input token count (min & max are based on input token counts of all methods in comparison). Our results, in Figure 1 and Table 3, show a significant reduction in average input token count across both datasets compared to baselines. Datasets HotpotQA MuSiQue Methods Token Count(↓) Token Count(↓) Supp docs 1333.81 2361.36 StructQA 3078.85 5908.87 Our Method 1230.90 1398.15 Table 3: Reader Input Token Count: Comparing the average input token size across datasets to measure the efficiency of representing query relevant information. Performance w.r.t. different question types To analyze HOLMES’s performance across question types and complexities, we perform an analysis based on reasoning types (with HotpotQA dataset) and hops (with MuSiQue dataset). We choose respective datasets for each analysis based on the query type information available in the datasets. (a) Reasoning type-wise performance For any multi-hop question, the data generating process dictates the type of reasoning required to answer the question. HotpotQA dataset contains questions which fall in two categories w.r.t. reasoning - (i) bridge question (ii) comparison questions. We advise the reader to refer to the dataset paper (Yang et al., 2018) for more details. We report results across question categories in Table 4. Type # Samples Base StructQA HOLMES Bridge 787 0.55 0.56 0.64 Comparison 213 0.55 0.51 0.71 Table 4: Performance Across Reasoning Types: Evaluation of different methods. Base - Reader LLM with supp docs. (b) Hop-wise Performance Comparison Figure 4: Hop-wise Performance: Comparison on MuSiQue dataset with varying question complexity. Bars denote standard error q em(1−em) n . We evaluated the hop-wise performance of our method on MuSiQue dataset (Figure 4). This evaluation aimed to determine if our performance improvements (in Table 1) were primarily attributed to the system’s ability to answer 2-hop questions, which are less complex than 3 and 4-hop questions. We find that our method maintains its performance across an increasing number of hops and outperforms the baselines too. Measure of Confident Predictions LLMs demonstrate reasoning capabilities but often struggle to recognize their limitations in understanding queries or retrieving correct answers. We found that structured inputs and task-dependent reasoning steps, including the option for models to indicate uncertainty, alleviates this issue. Inspired by this, we introduce the Self-Aware Exact Match (Self13270 Aware EM) score to assess LLMs on complex QA tasks. This metric evaluates the accuracy and confidence of model responses, focusing on instances where the model provides answers with high confidence (or self-awareness). It aims to highlight the model’s precision and reliability, offering a nuanced view of its performance. It is defined as: Self Aware EM = P q∈QA EM(q) |QA| where QA is the set of questions the system answers. We report the Self-Aware EM scores in Figure 1. In our evaluation using the HotpotQA and MuSiQue datasets, HOLMES opted for "None" or "No answer" in 3% and 10% of the samples, respectively (with GPT-4 as reader LLM). 
In contrast, StructQA always provided an answer, often inaccurately (see Table 1). Upon reviewing instances where HOLMES did not provide an answer, we found it was due to incomplete information in the input graph or a misunderstanding of the question. This self-awareness—recognizing when it cannot reliably answer—is a key feature of dependable systems. Our findings, detailed in Figure 1, show that HOLMES achieves significantly higher selfaware EM scores compared to other baselines. 7 Sensitivity & Ablation Analysis Configuration Performance Hyper KG Prune Aux. Schema Comp. Fact Retrieval EM F1 Reader Inp Tokens X 0.56 0.69 1867 X X 0.56 0.70 1227 X X X 0.58 0.72 1220 X X X X 0.61 0.77 1230 Table 5: Impact of different components: Performance metrics (EM, F1) and Reader Input Token Count for different configurations of HOLMES. X refers to "included", and refers to "excluded" in the table. For example, in the first row, only Hyper-Relational KG is included in the HOLMES algorithm, every other component is removed. These results are reported on a set of 100 samples from HotpotQA dev set. Ablation Studies In Table 5 we share the results of our ablation study that systematically evaluates the contribution of each component within the HOLMES framework. Sensitivity Analysis We evaluated the effect of using a different LLM (gpt-4-1106-preview) for triple extraction, comparing our performance with StructQA in Table 6 (reader: GPT-4). Our approach consistently outperforms StructQA, although improved triple extraction models benefit both. Datasets HotpotQA (100 samples) Methods EM (↑) F1 (↑) P (↑) R (↑) Triple Extractor: gpt-4-1106-preview StructQA 0.56 0.76 0.79 0.77 Our Method 0.68 0.79 0.82 0.80 Triple Extractor: gpt-3.5-turbo-1106 StructQA 0.47 0.71 0.72 0.76 Our Method 0.61 0.77 0.78 0.78 Table 6: Impact of Triple Extractor on MHQA performance 8 Conclusion We introduce HOLMES, an approach leveraging a hyper-relational KG to enhance multi-hop question answering by minimizing noise and refining relevant facts. Constructing an entity-doc graph from the question’s supporting documents, we employ a level-order traversal, prune with an auxiliary graph schema, and utilize distilled graph as input for LLM-based answering. Our method achieves SoTA performance with a 20% improvement on HotpotQA and 26% on MuSiQue dataset w.r.t EM while using upto 67% fewer tokens to represent query relevant information w.r.t. SoTA methods. 9 Acknowledgement We would like to thank Shivangana Rawat and other members of the AI lab at Fujitsu Research India for providing valuable feedback on the manuscript. We also thank the anonymous ARR reviewers, the meta-reviewer, and the ACL program chairs for their comments and feedback, which helped improve our draft. Limitations The following details outline areas for future improvements: Possible Incompleteness in Constructed Graphs Extracting entities and triples to construct graphs via LLMs demonstrates high accuracy, yet the task of capturing all relevant triples faces inherent challenges. Structured information extraction remains an unsolved problem across the field, leading occasionally to incomplete graphs in our method as well Increased Computational Effort: Our method’s use of LLMs to generate an auxiliary schema introduces additional computational steps. While this increases the computational effort compared to prior methods, it’s offset by our efficiency w.r.t fewer input tokens during inference. 
13271 While acknowledging the limitations of our approach, we remain optimistic, recognizing our work as a step forward in enhancing the capabilities of LLMs for multi-hop question answering. Ethical Concerns There are no ethical concerns associated with this work. References Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774. Akari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi, Richard Socher, and Caiming Xiong. 2020. Learning to retrieve reasoning paths over wikipedia graph for question answering. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pages 1247–1250. Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. Comet: Commonsense transformers for automatic knowledge graph construction. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4762–4779. Salvatore Carta, Alessandro Giuliani, Leonardo Piano, Alessandro Sebastian Podda, Livio Pompianu, and Sandro Gabriele Tiddia. 2023. Iterative zero-shot llm prompting for knowledge graph construction. arXiv preprint arXiv:2307.01128. Florin Cuconasu, Giovanni Trappolini, Federico Siciliano, Simone Filice, Cesare Campagnano, Yoelle Maarek, Nicola Tonellotto, and Fabrizio Silvestri. 2024. The power of noise: Redefining retrieval for rag systems. arXiv preprint arXiv:2401.14887. Joe Davison. 2020. Zero-shot learning in modern nlp. Zhenyun Deng, Yonghua Zhu, Yang Chen, Michael Witbrock, and Patricia Riddle. 2022. Interpretable amrbased question decomposition for multi-hop question answering. arXiv preprint arXiv: 2206.08486. Junnan Dong, Qinggang Zhang, Xiao Huang, Keyu Duan, Qiaoyu Tan, and Zhimeng Jiang. 2023. Hierarchy-aware multi-hop question answering over knowledge graphs. In Proceedings of the ACM Web Conference 2023, WWW ’23, page 2519–2527, New York, NY, USA. Association for Computing Machinery. Yuwei Fang, Siqi Sun, Zhe Gan, Rohit Pillai, Shuohang Wang, and Jingjing Liu. 2020. Hierarchical graph network for multi-hop question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8823–8838, Online. Association for Computational Linguistics. Gautier Izacard and Édouard Grave. 2021. Leveraging passage retrieval with generative models for open domain question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874–880. Jinhao Jiang, Kun Zhou, Zican Dong, Keming Ye, Wayne Xin Zhao, and Ji-Rong Wen. 2023. Structgpt: A general framework for large language model to reason over structured data. arXiv preprint arXiv:2305.09645. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. International Conference on Learning Representations. Gyeong-Geon Lee, Ehsan Latif, Lehong Shi, and Xiaoming Zhai. 2023. Gemini pro defeated by gpt-4v: Evidence from education. 
arXiv preprint arXiv:2401.08660. Ruosen Li and Xinya Du. 2023. Leveraging structured information for explainable multi-hop question answering and reasoning. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 6779–6789. Vaibhav Mavi, Anubhav Jangra, and Adam Jatowt. 2022. A survey on multi-hop question answering and generation. arXiv preprint arXiv:2204.09140. Ethan Perez, Patrick Lewis, Wen-tau Yih, Kyunghyun Cho, and Douwe Kiela. 2020. Unsupervised question decomposition for question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8864–8880, Online. Association for Computational Linguistics. Peng Qi, Haejun Lee, Tg Sido, and Christopher Manning. 2021. Answering open-domain questions of varying reasoning steps from text. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3599–3614, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. Lin Qiu, Yunxuan Xiao, Yanru Qu, Hao Zhou, Lei Li, Weinan Zhang, and Yong Yu. 2019. Dynamically fused graph network for multi-hop reasoning. In 13272 Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6140– 6150, Florence, Italy. Association for Computational Linguistics. Gowtham Ramesh, Makesh Narsimhan Sreedhar, and Junjie Hu. 2023. Single sequence prediction over reasoning graphs for multi-hop qa. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11466–11481. Joshua Robinson, Christopher Michael Rytting, and David Wingate. 2022. Leveraging large language models for multiple choice question answering. arXiv preprint arXiv:2210.12353. Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: an open multilingual graph of general knowledge. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, pages 4444–4451. Haitian Sun, Tania Bedrax-Weiss, and William Cohen. 2019. Pullnet: Open domain question answering with iterative retrieval on knowledge bases and text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2380–2390. Haitian Sun, William W. Cohen, and Ruslan Salakhutdinov. 2021. Iterative hierarchical attention for answering complex questions over long documents. arXiv preprint arXiv: 2106.00200. Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, and William Cohen. 2018. Open domain question answering using early fusion of knowledge bases and text. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4231–4242. Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2022a. Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions. arXiv preprint arXiv:2212.10509. Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2022b. Musique: Multihop questions via single-hop question composition. Transactions of the Association for Computational Linguistics, 10:539–554. Somin Wadhwa, Silvio Amir, and Byron C Wallace. 2023. Revisiting relation extraction in the era of large language models. arXiv preprint arXiv:2305.05003. Shuhe Wang, Xiaofei Sun, Xiaoya Li, Rongbin Ouyang, Fei Wu, Tianwei Zhang, Jiwei Li, and Guoyin Wang. 2023. 
Gpt-ner: Named entity recognition via large language models. arXiv preprint arXiv:2304.10428. Yusuke Watanabe, Bhuwan Dhingra, and Ruslan Salakhutdinov. 2017. Question answering from unstructured text by retrieval and comprehension. arXiv preprint arXiv: 1703.08885. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837. Derong Xu, Wei Chen, Wenjun Peng, Chao Zhang, Tong Xu, Xiangyu Zhao, Xian Wu, Yefeng Zheng, and Enhong Chen. 2023. Large language models for generative information extraction: A survey. arXiv preprint arXiv:2312.17617. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600. Semih Yavuz, Kazuma Hashimoto, Yingbo Zhou, Nitish Shirish Keskar, and Caiming Xiong. 2022. Modeling multi-hop question answering as single sequence prediction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 974–990. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. arXiv preprint arXiv:1904.09675. Wenxuan Zhang, Yue Deng, Bing Liu, Sinno Jialin Pan, and Lidong Bing. 2023. Sentiment analysis in the era of large language models: A reality check. arXiv preprint arXiv:2305.15005. 13273 A Appendix In this section, we provide additional results and details that we could not include in the main paper due to space constraints. In particular, this appendix contains the following: • Additional Results and Analysis • Dataset Statistics • Guidelines for Human Annotators • Prompts used in HOLMES • Auxilliary schema construction • Case Study of HOLMES on HotpotQA dataset A.1 Additional Results and Analysis Extending our analysis in Section 6, here we report the following - (i) MHQA performance with Gemini-Pro reader LLM (ii) Sensitivity Analysis w.r.t. pruning process (iii) Sensitivity Analysis w.r.t. the number of depth of traversal while creating the Hyper-Relational KG. Gemini-Pro MHQA Results Datasets HotpotQA Methods EM (↑) F1 (↑) SA-EM (↑) Reader: Gemini-Pro Base (w/ supp docs) 0.48 0.66 0.48 StructQA 0.49 0.66 0.52 Our Method 0.58 0.67 0.66 Table 7: Multi-Hop QA performance SA-EM: SelfAware EM; Base: Only reader LLM; supp docs: supporting documents w.r.t. query Similar to results with GPT-3.5 and GPT-4 reader LLMs (in Table 1), we observe consistent improvements across metrics. Thus demonstrating the efficacy of our method, across LLMs. Impact of Pruning Figure 5: Impact of pruning on MHQA performance in HotpotQA dataset In Tables 1 and 6, we conducted our experiments by retaining 50 hyper triples after the pruning stage. Here we vary the number of hyper triples being retained after pruning and study the impact on the MHQA performance (EM and F1 scores). Figure 5 shows that after 50 triples, the MHQA performance does not change indicating that the first 50 triples captures all the query relevant information. This number is dependent on the dataset, so we suggest conducting experiments to deduce this threshold. 
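To make the pruning step whose threshold p is varied above concrete, the following is a minimal sketch of the schema-based pruning of Section 4.2.2 (Equations 1 and 2), assuming precomputed relation embeddings. The embed callable is a placeholder for an embedding-API wrapper, and the entity-type compatibility check is omitted for brevity; this is an illustration, not our exact implementation.

import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))  # Eq. (1)

def prune_hyper_kg(hyper_triples, schema, embed, p=50):
    """Keep the p hyper triples whose relation is most similar to any schema relation.

    hyper_triples: list of (subject, relation, object, doc_title)
    schema:        list of (subject_type, relation, object_type)
    embed:         callable mapping a relation string to a vector (placeholder)
    """
    schema_vecs = [embed(rel) for _, rel, _ in schema]
    scored = []
    for triple in hyper_triples:
        v_rh = embed(triple[1])
        # Eq. (2): best similarity of this relation to any schema relation.
        # NOTE: the entity-type compatibility filter from Sec. 4.2.2 is omitted here.
        score = max(cosine_sim(v_rh, u_rs) for u_rs in schema_vecs)
        scored.append((score, triple))
    scored.sort(key=lambda x: x[0], reverse=True)   # sort_score(H, p)
    return [triple for _, triple in scored[:p]]

For the sensitivity analysis above, p would simply be swept (e.g., 10, 25, 50, 100) while holding everything else fixed.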
Impact of Depth in the Level Order Traversal Figure 6: Impact of Depth on MHQA performance in HotpotQA dataset In the Tables 1 and 6, we conducted our experiments by traversing the entity document graph upto a depth of four levels. As stated in Section 4.2.1, the depth of traversal depends on the maximum complexity of query we expect to see during inference. To verify this, we perform a sensitivity analysis on this variable. We experiment on 100 randomly sampled datapoints from HotpotQA development set(question complexity varies between 2-3 hops). Figure 6 reflects that as the max query complexity of the dataset is three, beyond three levels of traversal the performance does not improve. Thus, verifying our claim. We choose a depth of four in the main paper as we wanted to cater to all the datasets. Total Input and Output Token Lengths & Costs Method Inp Token Length (↓) Out Token Length (↓) Total Cost in $ (↓) EM (↑) StructQA 9012 3590 0.99 0.48 HOLMES 9388 2524 0.85 0.57 Table 8: Performance and Cost Comparison: Comparison of total input and output token lengths (across all LLM calls), total cost, and EM scores for different methods. In table 8 we report the average total input and 13274 output token lengths on HotpotQA dataset (w.r.t. 100 random samples from dev set). We also share the cost estimate w.r.t. the latest GPT-3.5 LLM (both as reader and triple extractor model) The average input and output token lengths when simply using the reader LLM are 1334 and 10, respectively. Its worth noting that for performance improvements beyond LLMs on complex NLP datasets such as HotpotQA and MuSiQue, structured information extraction is immensely useful, as can be seen from the improvements across metrics in Table 1 of the paper. A.2 Dataset Statistics Dataset Train Dev HotpotQA 90,447 7,405 MuSiQue 19,938 2,417 Table 9: Train and Dev Statistics - number of samples Dataset number of instances MuSiQue 2-hop 667 MuSiQue 3-hop 366 MuSiQue 4-hop 200 Table 10: Hop-wise distribution of questions used for evaluation in Section 6 for MuSiQue Table 9 provides details on the total number of training and development set samples in each dataset. From these, we have randomly sampled 10, 000 questions (unlabelled or unannotated) from the training set to create our auxiliary schema. For each dataset, we randomly sampled 1000 data points from the development set for evaluation purposes (as illustrated in section 5.1). Table 10 details the hop-wise question statistics of the MuSiQue dataset (HotpotQA dataset (Yang et al., 2018) does not provide hop-wise distribution of their dataset). A.3 Guidelines for Human Annotators In the evaluation phase of HOLMES (Table 2), human annotators played a crucial role in assessing the accuracy of the predicted answers. The process was structured as follows: • Annotators were presented with a set of inputs for each question, which included: – The question – The gold answer – The predicted answer generated by one of the MHQA methods • The primary task for the annotators was to determine the correctness of the predicted answer in comparison to the gold answer. This evaluation was binary, with annotators assigning: – A score of 0 if the predicted answer was deemed incorrect – A score of 1 if it was considered correct • The evaluation process encompassed all methods: Base with supporting documents, StructQA, and HOLMES, each evaluated with a total of 100 questions. 
Therefore: – Each annotator was responsible for evaluating a comprehensive total of 300 questions, covering all possible combinations of questions and methods. The numbers in Table 2 represent the average scores from three human annotators. A.4 Prompts used in HOLMES In this section we detail the prompts used for solving different subtasks using LLMs. A.4.1 Entity Extraction from supporting documents For extracting named entities from the supporting documents, we use the following prompt. Few-shot examples used to improve the performance of the LLM is also included in the prompt. Task: Extract ALL the named entities from the given sentence (extract all time intervals, names, dates, organizations and locations). Examples: Use the following examples to understand the task better. Sentence: William Rast is an American clothing line founded by Justin Timberlake and Trace Ayala. Entities: William Rast, American, Justin Timberlake, Trace Ayala Sentence: The Glennwanis Hotel is a historic hotel in Glennville, Georgia, Tattnall County, Georgia, built on the site of the Hughes Hotel. 13275 Entities: Glennwanis Hotel, Glennville, Georgia, Tattnall County, Georgia, Hughes Hotel A.4.2 Entity and Relation Extraction from the Question Instead of separately extracting the entity and the relations from the inference query, we use a single LLM call. Below we detail the prompt along with the few-shot examples used. Task: extract all named entities from the above question. Then extract all the important information (each information should be 2-3 words) needed to answer the question. Output format “entities: [ent1, ent2, ...] important relations: [info1, info2, ...]". Please note, do not give named entities in the ‘important relations’ Use the following examples to understand the task: Question: Who is the author of the book that inspired the movie starring Tom Hanks as a symbologist? Entities: [Tom Hanks] Important Relations: [author of, inspired the movie, stars,as a symbologist] Question: Did the company that Elon Musk co-founded in 2002 eventually merge with a firm that had been contracted by NASA to resupply the International Space Station? Entities: [‘Elon Musk’, ‘2002’, ‘NASA’, ‘International Space Station’] Important Relations: [‘co-founded’, ‘merge with’, ‘contracted by’, ‘resupply’] A.4.3 Subject and Object Entity type Estimation As discussed in Section 4.2.2 and shown in Figure 3, to create the knowledge schema from the query, we need to estimate the subject and object entity types corresponding to a given relation. We use an LLM for the same and report the prompt below. Task: Generate the subject and object entity type for the following relation, w.r.t. the provided context information. relation: input Context: context ; DO NOT extract partial info. Output format: [<subject entity type>, <object entity type>] A.4.4 Triples Extraction from supporting documents Here we report the prompt along with the few-shot examples we used for knowledge triple extraction from raw text. Task: Comprehensively extract ALL the triples (subject, relation, object) from below given paragraph. Ensure that the subject and objects in the triples are named entities (name of person, organization, dates etc) and not multiple in number. You will be HEAVILY PENALIZED if you violate this constraint. Examples: Use the following examples to understand the task better. Paragraph: William Rast is an American clothing line founded by Justin Timberlake and Trace Ayala. It is most known for their premium jeans. 
On October 17, 2006, Justin Timberlake and Trace Ayala put on their first fashion show to launch their new William Rast clothing line. The label also produces other clothing items such as jackets and tops. The company started first as a denim line, later evolving into a men’s and women’s clothing line. Triples: i. subject: William Rast, relation: clothing line, object: American ii. subject: William Rast, relation: founded by, object: Justin Timberlake iii. subject: William Rast, relation: founded by, object: 13276 Trace Ayala iv. subject: William Rast, relation: known for, object: premium jeans v. subject: William Rast, relation: launched on , object: October 17, 2006 vi. subject: Justin Timberlake, relation: first fashion show, object: October 17, 2006 vii. subject: Trace Ayala, relation: first fashion show, object: October 17, 2006 viii. subject: William Rast, relation: produces, object: jackets ix. subject: William Rast, relation: produces, object: tops x. subject: William Rast, relation: started as, object: denim line xi. subject: William Rast, relation: evolved into, object: men’s and women’s clothing line Paragraph: The Glennwanis Hotel is a historic hotel in Glennville, Georgia, Tattnall County, Georgia, built on the site of the Hughes Hotel. The hotel is located at 209-215 East Barnard Street. The old Hughes Hotel was built out of Georgia pine circa 1905 and burned in 1920. The Glennwanis was built in brick in 1926. The local Kiwanis club led the effort to get the replacement hotel built, and organized a Glennville Hotel Company with directors being local business leaders. The wife of a local doctor won a naming contest with the name “Glennwanis Hotel", a suggestion combining “Glennville" and "Kiwanis Triples: i. subject: Glennwanis Hotel, relation: is located in, object: 209-215 East Barnard Street, Glennville, Tattnall County, Georgia ii. subject: Glennwanis Hotel, relation: was built on the site of, object: Hughes Hotel iii. subject: Hughes Hotel, relation: was built out of, object: Georgia pine iv. subject: Hughes Hotel, relation: was built circa, object: 1905 v. subject: Hughes Hotel, relation: burned in, object: 1920 vi. subject: Glennwanis Hotel, relation: was re-built in, object: 1926 vii. subject: Glennwanis Hotel, relation: was re-built using, object: brick viii. subject: Kiwanis club, relation: led the effort to re-build, object: Glennwanis Hotel ix. subject: Kiwanis club, relation: organized, object: Glennville Hotel Company x. subject: Glennville Hotel Company, relation: directors, object: local business leaders xi. subject: Glennwanis Hotel, relation: combines, object: “Glennville" and “Kiwanis" A.4.5 LLM Reader Below we report the reader LLM prompt. Please note that we instruct the LLM to reason in three steps - (i) relevant fact extraction (ii) reasoning based on the relevant facts (iii) final response (with an option to request details about named entities) Question:{question} Read the above question carefully, understand the answer category. Now, given you understand the question, use the following facts to answer the question. Please note that the meaning of facts is HEAVILY dependent on the context or the document from which it was extracted. prunned_and_verbalized_hyperkg (Note: the above set of facts 13277 In-Domain Questions Q1 Q2 … Qn [Ques 1, Ques 2,..] [Ques 1, Ques 2,..] … [Ques 1, Ques 2,..] 
C1 C2 … Ck Person Sport Film Date Question Decomposition Clustering of Question Embeddings Latent Topic Modelling Schema Induction Decomposed Questions Clustered Questions Figure 7: Auxilliary Graph Schema Construction Overview can be noisy, so if ambiguous information is present then focus on the question and keywords in the question relations_in_question) First, fetch the relevant set of facts from the above taking into consideration the context of the fact (DO NOT generate your facts), then by combining them answer the question - question. Instruction: Give the answer in the following format Relevant facts: <facts (and context of those facts) relevant to the question> Reasoning: <Here, give your thought process of how you use the named entities named_entities_in_question, relations relations_in_question of the question and accordingly traverse the facts given above to arrive at your answer> Final Response: <If all the necessary facts are available to answer the question, express it in 3-4 words - Else, say None> Further query: <if ‘Final Response’ is None, then think step-by-step using the ‘Reasoning’ and other facts and state which named entity’s direct information is missing> A.5 Auxilliary Schema Construction As stated in Section 4.2.2, we use a collection of indomain questions to construct a graph schema (See definition in Section 3), that captures the global level blueprint of information required to answer questions in the target domain. In this section, we detail the procedure used to create the same. We employ a four-step process, starting from the unannotated in-domain multi-hop questions. A.5.1 Question Decomposition Given the complexity of multi-hop questions and the challenges in processing their embeddings, we first decompose these into simpler, single-hop questions. This decomposition is facilitated by the llama-13b LLM, which effectively breaks down complex questions into their constituent single-hop components. Decomposing 10, 000 questions using SoTA LLMs is very expensive, thus we opted for a smaller, open-weight LLM. We use few-shot prompting to ensure the model understands the task well. We use the following prompt for the same: Task: Decompose a multi-hop question into a series of single-hop questions to assist in finding the answer in a step-by-step manner. Instruction: i. Proceed with the decomposition according to the outlined chain of thought. ii. Only return the sub-questions, nothing else. iii. Focus on the entities in the question iv. If decomposition is not clear, do not guess. Question: Lily’s Driftwood Bay premiered on what British television channel that is operated by a joint venture between Viacom International Media Networks Europe and Sky plc? Chain of Thought Instructions: 1. Identify the ultimate goal of the question: Find the 13278 television channel on which Lily’s Driftwood Bay premiered. 2. Recognize that there are two pieces of information needed: a) Determine the British television channel operated by the specific joint venture. b) Find out if Lily’s Driftwood Bay premiered on this channel. 3. Create single-hop sub-questions that will answer these pieces of information separately. Output: 1. Which British television channel is operated by a joint venture between Viacom International Media Networks Europe and Sky plc? 2. Did Lily’s Driftwood Bay premiere on this specific British television channel? 
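To make the decomposition step concrete, the following is a minimal sketch of how the few-shot prompt above might be assembled and its output parsed. It assumes a generic call_llm(prompt) helper standing in for the llama-13b model; the helper, the trimmed prompt constant, and the parsing heuristic are our illustrative assumptions rather than the exact HOLMES code.

import re

# In practice, the full chain-of-thought few-shot example from Appendix A.5.1
# would be appended to this (trimmed) instruction block.
DECOMPOSE_PROMPT = """Task: Decompose a multi-hop question into a series of single-hop
questions to assist in finding the answer in a step-by-step manner.
Instruction:
i. Proceed with the decomposition according to the outlined chain of thought.
ii. Only return the sub-questions, nothing else.
iii. Focus on the entities in the question
iv. If decomposition is not clear, do not guess.

Question: {question}
Output:"""

def decompose_question(question, call_llm):
    # call_llm is assumed to wrap the decomposition model (llama-13b in the paper)
    # and return its raw text completion for a given prompt string.
    raw = call_llm(DECOMPOSE_PROMPT.format(question=question))
    # Keep only numbered sub-questions such as "1. Which channel ...?".
    subqs = re.findall(r"^\s*\d+\.\s*(.+)$", raw, flags=re.MULTILINE)
    return [q.strip() for q in subqs]

# Usage (with any LLM client bound to call_llm):
#   decompose_question("Lily's Driftwood Bay premiered on what British television "
#                      "channel ...?", call_llm=my_llm_client)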
A.5.2 Cluster Question Embeddings We then apply K-Means clustering to the embeddings of the decomposed single-hop questions to group similar questions. We use OpenAI embedding model (text-embedding-ada-002) to compute text embeddings. The optimal number of clusters (k) is determined by evaluating the clustering quality through a combined clustering evaluation score (CCES) which is a weighted sum of three normalized scores (normalized to lie between 0 and 1): silhouette score (Snorm), inverted DaviesBouldin score (DBnorm), and Calinski-Harabasz score (CHnorm), with equal weightage assigned to each. These scores collectively assess the cohesion, separation, compactness, distinctness, and dispersion of the clusters. We use the elbow method for determining the optimal number of clusters. We report the elbow plot below. Below, we report a couple of cluster centroids with a few data points (questions) around the centroid. Cluster 1 • Which football team did Kevin John Ufuoma Akpoguma play for? • Who is the former Houston Texans head coach? • Which NFL team currently employs him? Cluster 2 Figure 8: Elbow Plot for Kmeans clustering on 10000 questions from HotpotQA dataset. • What is the name of the actress? • Who are Rohan Bopanna and Cara Black? • What is the name of the protagonist in the show "Pretty Little Liars"? As can be seen from the above clusters, they capture latent topics in the dataset such as sports (cluster 1) and movies (cluster 2). A.5.3 Latent Question Category Modelling The next step involves latent topic modeling to identify question categories within the single-hop question clusters. By extracting the cluster centroid question and the five questions closest to each centroid, we prompt an LLM (gpt-4-1106-preview in this case) to discern latent topics, focusing on entity types and relations rather than specific entities. The prompt we use deducing the latent question category is as follows: Task: Extract latent topics with focus on entity types (not the entities themselves) and relations in below given questions. These topics should be the underlying question categories for these questions. Instructions: i. Keep in mind that I will later use these latent topics to create a KG schema such that the KG can answer questions such as the ones listed below. ii. Answer in 1 sentence using detailed and informative words only. 13279 iii. Consider cross-question information too for determining the latent topics. iv. Directly give the latent topics without preamble text like “the latent topic is ...", and use bullets to separate topics. A.5.4 Schema Induction Finally, we prompt the LLM to generate a graph schema based on the broad question categories identified in the previous step. This schema is explicitly designed as an information organization blueprint relevant to the identified question categories, thereby facilitating a structured approach to answering questions within the domain. We use the following prompt for the same Task: Create a graph schema for my KG using the broad question categories that I want to be answered by the KG. The broad question categories are: {question categories} Instructions: i. Create a set of relations for a knowledge graph that clearly and unambiguously express the relationships between entities ensuring they are reusable, standardized, and semantically meaningful. ii. Generate a list of distinct, relevant, and comprehensive entities for a knowledge graph about wikipedia text, ensuring they are specific, meaningful, and cover all aspects of the domain. 
iii. After extracting the entities and relation types, for the graph schema of the knowledge graph return the all the triplets “entity type Relation Type - Entity Type" in the output as a list iv. Important: ensure that the triplets generated forms a connected graph v. Important: - nothing else other than the list of triplets should be returned. Format for the list: [(“Entity type", “relation", “Entity type"), (“Entity type", “relation", “Entity type"),...] A.6 Case Study of HOLMES on HotpotQA dataset In this section, we showcase our constructed hyperrelational KG extracted from the supporting documents. We illustrate the corresponding reasoning process, highlighting relevant facts, and ultimately derive the answer for the question. Now, consider the following example question from the HotpotQA dataset. Question: ‘What major truck road is located in Backford Cross?’ Based on the question, our method discovers related supporting documents and then fetches structured information from them (Section 4.2.1). Below we list those discovered supporting documents. Supporting Document: 1 Title: Backford Cross Passage: Backford Cross is a village on the Wirral Peninsula, Cheshire, England. It is a suburb of the town of Ellesmere Port and part of Cheshire West and Chester. Backford Cross is located around the A41/A5117 junction, south of Great Sutton and about 1.5 mi north of the village of Backford, near Chester. Backford Cross is largely made up of residential homes built from 1990 onwards and serves as a commuter village to Ellesmere Port and Chester, although inhabitants show no allegiance to either locality. The area is split between postcode districts, with parts of the village in Great Sutton, Ellesmere Port CH66 and other areas in Backford, Chester CH1. Supporting Document: 2 Title: A5117 road Passage: The A5117 is a road in Cheshire, England. It runs between Shotwick ( ) and Helsby ( ) and connects the A550 at Woodbank to the M56. As such it forms a northerly bypass to Chester and 13280 a shorter route between the North West and North Wales than the A55. The road is dualled west of the M56. There is roundabout with the A540 and at Dunkirk at the western terminus of the M56. East of the junction the road is single carriageway and crosses the A41 by way of a roundabout at Backford Cross. The A5117 intersects the M53 at Junction 10. This junction is just east of Cheshire Oaks Designer Outlet. The road then continues almost parallel to the M56, which it intersects at Junction 14, at which there is a Motorway service area. The road then continues south east to terminate where it joins the A56 at Helsby. Supporting Document: 3 Title: Strawberry Park, Cheshire Passage: Strawberry Park and Strawberry Fields are suburbs in the town of Ellesmere Port, Cheshire West and Chester. They are located to the south of Hope Farm and to the west of Backford Cross. Supporting Document: 4 Title: A41 road Passage: The A41 is a major trunk road in England that links London and Birkenhead, although it has now in parts been superseded by motorways. It passes through or near various towns and cities including Watford, Kings Langley, Hemel Hempstead, Aylesbury, Solihull, Birmingham, West Bromwich, Wolverhampton, Newport, Whitchurch, Chester and Ellesmere Port. From these supporting documents, we create the Hyper-relational KG and further refine this graph to retain query relevant facts (Section 4.2.2). 
Below we report the same (blue colored text refers to the additional attributes of the hyper triple, red colored text refers to the subject entity, green colored text refers to the relation, and purple colored text refers to the object entity). Distilled Hyper-Relational KG: • context: ‘Strawberry Park, Cheshire’ subject: ‘Strawberry Park’ relation: ‘is a suburb in’ object: ‘Ellesmere Port, Cheshire West’ • context: ‘Strawberry Park, Cheshire’ subject: ‘Strawberry Fields’ relation: ‘is a suburb in’ object: ‘Ellesmere Port, Cheshire West’ • context: ‘Strawberry Park, Cheshire’ subject: ‘Ellesmere Port’ relation: ‘located in’ object: ‘Cheshire West and Chester’ • context: ‘Strawberry Park, Cheshire’ subject: ‘Strawberry Park’ relation: ‘located to the south of’ object: ‘Hope Farm’ • context: ‘Backford Cross’ subject: ‘Backford Cross’ relation: ‘is located in’ object: ‘Wirral Peninsula, Cheshire, England’ • context: ‘A5117 road’ subject: ‘A5117’ relation: ‘connects’ object: ‘A550 at Woodbank to M56’ • context: ‘A5117 road’ subject: ‘A5117’ relation: ‘forms a bypass to’ object: ‘Chester’ • context: ‘A5117 road’ subject: ‘A5117’ relation: ‘forms a shorter route between’ object: ‘North West and North Wales’ • context: ‘A5117 road’ subject: ‘A5117’ relation: ‘is dualled west of’ object: ‘M56’ • context: ‘A5117 road’ subject: ‘A5117’ relation: ‘has a roundabout with’ object: ‘A540’ • context: ‘A5117 road’ subject: ‘A5117’ relation: ‘has a roundabout at’ object: ‘Dunkirk’ • context: ‘A5117 road’ subject: ‘A5117’ relation: ‘intersects’ object: ‘M53 at Junction 10’ • context: ‘A5117 road’ subject: ‘M53’ relation: ‘is intersected by’ object: ‘A5117 at Junction 10’ • context: ‘A41 road’ subject: ‘A41’ relation: ‘is a major trunk road in’ object: ‘England’ 13281 • context: ‘A41 road’ subject: ‘A41’ relation: ‘links’ object: ‘London and Birkenhead’ • context: ‘A41 road’ subject: ‘A41’ relation: ‘superseded by’ object: ‘motorways’ • context: ‘A41 road’ subject: ‘A41’ relation: ‘passes through’ object: ‘Watford’ • context: ‘A41 road’ subject: ‘A41’ relation: ‘passes near’ object: ‘Kings Langley’ • context: ‘A41 road’ subject: ‘A41’ relation: ‘passes near’ object: ‘Hemel Hempstead’ • context: ‘A41 road’ subject: ‘A41’ relation: ‘passes near’ object: ‘Aylesbury’ • context: ‘A41 road’ subject: ‘A41’ relation: ‘passes near’ object: ‘Solihull’ • context: ‘A41 road’ subject: ‘A41’ relation: ‘passes through’ object: ‘Birmingham’ • context: ‘A41 road’ subject: ‘A41’ relation: ‘passes near’ object: ‘West Bromwich’ • context: ‘A41 road’ subject: ‘A41’ relation: ‘passes near’ object: ‘Wolverhampton’ • context: ‘A41 road’ subject: ‘A41’ relation: ‘passes through’ object: ‘Newport’ This distilled graph (after verbalization and proper formatting (Section 4.2.3)) is provided as input, along with the question, to the reader LLM. For this example, we use the SoTA, gpt-4-1106-preview as the reader LLM. Below, we describe the LLMs response which comprises of three things: • Relevant facts set of hyper triples which the LLM thought were important for the answering the question. These are verbalized hyper triples as the input to the LLM was also verbalized hyper triples. • Reasoning: LLMs thought process of going from the distilled hyper-relational KG to the final answer • Final answer: answer predicted by the LLM. 
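Before showing the reader's actual response, here is a minimal sketch of how a distilled hyper-triple could be verbalized into the fact strings the reader sees below. The template is our assumption inferred from those strings; the exact formatting used in Section 4.2.3 may differ.

def verbalize_hyper_triple(triple):
    # triple is assumed to carry the four fields listed above:
    # context (source document title), subject, relation, and object.
    fact = f"{triple['subject']} {triple['relation']} {triple['object']}"
    provenance = f"(This fact was extracted from document named \"{triple['context']}\")"
    return f"{fact} {provenance}"

example = {
    "context": "A41 road",
    "subject": "A41",
    "relation": "is a major trunk road in",
    "object": "England",
}
# verbalize_hyper_triple(example) ->
# 'A41 is a major trunk road in England (This fact was extracted from document named "A41 road")'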
Our Method’s Response
Relevant facts:
- Backford Cross is located around A41/A5117 junction (This fact was extracted from document named “Backford Cross”)
- A41 is a major trunk road in England (This fact was extracted from document named “A41 road”)
- A5117 is a road in Cheshire, England (This fact was extracted from document named “A5117 road”)
Reasoning: The question asks for a major truck road located in Backford Cross. The fact that Backford Cross is located around the A41/A5117 junction indicates that these are the major roads in that area. The A41 is specifically described as a major trunk road, which fits the description of a “major truck road.” The A5117 is also mentioned as a road in Cheshire, but it is not specified as a major trunk road. Therefore, based on the information provided, the A41 is the major truck road located in Backford Cross.
Final Answer: A41
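Because the reader is instructed (Appendix A.4.5) to answer in labeled sections, its output can be split back into structured fields. The sketch below is one plausible parser; the section names come from the prompt, but the parsing code itself is our illustration rather than the HOLMES implementation (note that in the example above the model writes “Final Answer” instead of “Final Response”, a variant a real parser would also need to accept).

import re

SECTIONS = ["Relevant facts", "Reasoning", "Final Response", "Further query"]

def parse_reader_response(text):
    # Split the reader's output on the section headers defined in the prompt.
    parsed = {}
    for i, name in enumerate(SECTIONS):
        later = SECTIONS[i + 1:]
        # Stop at the next known header, or at the end of the text.
        stop = "|".join(re.escape(s) + r"\s*:" for s in later) + "|" if later else ""
        pattern = rf"{re.escape(name)}\s*:\s*(.*?)(?={stop}\Z)"
        match = re.search(pattern, text, flags=re.DOTALL | re.IGNORECASE)
        parsed[name] = match.group(1).strip() if match else None
    return parsed

# For the Backford Cross response above:
#   parse_reader_response(response)["Reasoning"] returns the reasoning paragraph,
#   and a robust variant would additionally map "Final Answer" to "Final Response".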
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13283–13310 August 11-16, 2024 ©2024 Association for Computational Linguistics Toward In-Context Teaching: Adapting Examples to Students’ Misconceptions Alexis Ross Jacob Andreas MIT CSAIL {alexisro,jda}@mit.edu Abstract When a teacher provides examples for a student to study, these examples must be informative, enabling a student to progress from their current state toward a target concept or skill. Good teachers must therefore simultaneously infer what students already know and adapt their teaching to students’ changing state of knowledge. There is increasing interest in using computational models, particularly large language models, as pedagogical tools. As students, language models in particular have shown a remarkable ability to adapt to new tasks given small numbers of examples. But how effectively can these models adapt as teachers to students of different types? To study this question, we introduce a suite of models and evaluation methods we call ADAPT. ADAPT has two components: (1) a collection of simulated Bayesian student models that can be used for evaluation of automated teaching methods; (2) a platform for evaluation with human students, to characterize the real-world effectiveness of these methods. We additionally introduce (3) ATOM, a new probabilistic method for adaptive teaching that jointly infers students’ past beliefs and optimizes for the correctness of future beliefs. In evaluations of simulated students across three learning domains (fraction arithmetic, English morphology, function learning), ATOM systematically outperforms LLM-based and standard Bayesian teaching methods. In human experiments, both ATOM and LLMs outperform non-adaptive random example selection. Our results highlight both the difficulty of the adaptive teaching task and the potential of learned adaptive methods for solving it. 1 Introduction Good teaching is adaptive to students’ specific beliefs and preconceptions (Corbett, 2001). Imagine, for example, that you have been tasked with tutoring students fraction arithmetic. You may start by first probing a student’s understanding, asking if addition: 
 multiplication learner addition learner over-generalize addition if denominators are equal make common denoms multiply nums make common denoms add nums over-generalize addition over-generalize mult. 𝟣 𝟧× 𝟤 𝟧= 𝟣 𝟤+ 𝟤 𝟦= 𝟤 𝟧 𝟦 𝟦 add nums & denoms multiply nums & denoms make common denoms add nums 𝟣 𝟤× 𝟣 𝟨= 𝟥 𝟨 𝟣 𝟣𝟤 𝟥 𝟨 𝟤 𝟧 Incorrect. It’s 𝟤 𝟤𝟧. make common denoms add nums make common denoms multiply nums if equal denoms: multiply nums else: multiply nums & denoms 𝟤 𝟤𝟧 multiply nums & denoms Target Concept Target Concept Students Figure 1: In the ADAPT (Adaptive Teaching) evaluation framework (§3), a teacher selects examples to teach a target concept to a student; however, the student has prior misconceptions that are unknown to the teacher. In the fraction arithmetic task (§3.1), some students (multiplication learner) tend to over-generalize the addition procedure of making common denominators and performing arithmetic only on numerators; others (addition learner) tend to over-generalize the multiplication procedure of applying arithmetic on both numerators and denominators. In order to teach effectively, the teacher must jointly 1) infer the student’s misconceptions online by observing their behavior throughout their interaction (i.e., the teacher infers that the student is a multiplication learner after observing the prediction 1 2 × 1 6 → 3 6), and 2) adapt to such misconceptions by selecting examples that will most efficiently correct these misconceptions (i.e., the teacher anticipates the student’s new incorrect belief that all fractions with equal denominators should be treated as addition problems and selects the example 1 5 × 2 5 = 2 25 to correct it). We propose ATOM, a two-part probabilistic approach that achieves adaptive teaching by maintaining explicit inferences about student priors (§4.1). 1 13283 them what 1 5 × 2 5 is. Suppose the student answers 2 5. Immediately, you might develop a hypothesis about this student’s misconceptions: they seem to be over-generalizing the rule for addition, only applying the operation to the numerator. Now suppose another student correctly answers 1 5 × 2 5 = 2 25, but answers 1 2 + 2 4 = 3 6. This student would appear to be over-generalizing the rule for multiplication. These (discrete and systematic) categories of student misconceptions have been found to be widespread among real students learning fraction arithmetic (Braithwaite et al., 2017). As this example highlights, interactions with students reveal insights about their misconceptions, and these misconceptions in turn influence the course of effective teaching. A good teacher should provide different problems for these students to target their specific misconceptions: The additiongeneralizer would benefit from multiplication examples, especially those with common denominators, while the multiplication-generalizer would benefit from addition examples. What does this mean for NLP? Computational models—particularly language models (LMs)—are increasingly used as pedagogical tools (Kasneci et al., 2023). But it is unclear how effectively any of today’s models can tailor instruction to perform “incontext teaching” (§2) for students with differing degrees of skill and prior knowledge. In this work, we draw on a long line of literature on rational models of pedagogy (Shafto et al., 2014) to study this question. To do so, we introduce ADAPT (Adaptive Teaching), an evaluation suite targeted at teaching students with varied prior misconceptions (§3). 
In ADAPT, a teacher is tasked with selecting examples to teach a particular target concept. As shown in Figure 1, the teacher selects examples one by one and can observe predictions made by the student. Importantly, the student has prior misconceptions that are unknown to the teacher. ADAPT is designed such that correct inferences about student misconceptions can enable more efficient learning. ADAPT has two components: 1. An offline, probabilistic framework for reproducibly evaluating how efficiently teachers can teach simulated students with unknown prior misconceptions (§3.2). 2. An evaluation platform for measuring teachers’ efficacy with human students with prior misconceptions (§3.3). Simulated and human experiments allow us to characterize the pedagogical capabilities of teachers along several dimensions: their inferences about student beliefs (§5.3, §6.3), adaptivity of chosen examples (§5.4), and differences in teaching mathematical concepts (§5.5). In addition to these evaluations, we introduce: 3. A new probabilistic method, ATOM (Adaptive Teaching tOwards Misconceptions), which performs online inference of student priors, then uses these inferences to select informative teaching examples. ATOM provides proof-of-viability for adaptive teaching methods in simulated and human students (§4.1). Using ADAPT evaluations, we characterize ATOM, GPT-4, and a range of other methods. In simulated students, we find that while GPT4 exhibits some adaptation to student misconceptions, there is room for improvement with adaptive approaches; it substantially underperforms ATOM, suggesting promise in using adaptive methods (§5). In human experiments, however, both ATOM and GPT-4 outperform random example selection, highlighting the potential of learned models (of various kinds) for adaptive teaching (§6).1 2 Preliminaries We formulate our problem setting as one in which a teacher interactively provides feedback to a student to communicate a new concept (a mapping from inputs x to outputs y). A teacher observes a sequence of (x, ˆy) pairs guessed by a student, and must infer what additional training example (x, y) pair will most improve the student’s understanding of the concept. We term this problem in-context teaching to draw an analogy to the widely studied phenomenon of “in-context learning” (Brown et al., 2020; Min et al., 2022; Akyürek et al., 2023). Formally: • The teacher begins with a target concept h∗ drawn from some concept space H. A concept parameterizes a mapping between an input space X and an output space Y. In Fig. 1, h is the true procedure for adding and multiplying fractions, and H is the space of other possible fraction manipulation algorithms, so X contains arithmetic expressions involving fractions, and Y contains fractions. 1Our code is publicly available at https://github.com/ alexisjihyeross/adaptive_teaching. 2 13284 if : undefined else: a x+ if addition: 
 learner High prior belief in incorrect parameters for +ied class +ied y fy zz gamify 0.27 0.05 0 1.0 0.6 hop cope accept … … … +consonant+ed +consonant+ed greater_4(x): 7 3 High prior belief in programs with correct f/incorrect b if : undefined else: a x+ greater_2(x): 3 3 Human & Simulated [Uniform belief over values of a] … … High prior belief in programs that over-generalize multiplication procedure to addition, eg: 𝗉[ 𝟣 𝟧× 𝟤 𝟧→𝟤 𝟧 ] = 𝟢. 𝟫 𝗉[ 𝟣 𝟧+ 𝟤 𝟧→𝟥 𝟣𝟢 ] = 𝟢. 𝟫 … 0.08 0.27 0.6 0.5 0.5 Continuous distribution … … 𝗉[𝗐𝗎𝗀(𝟥) →𝟫 ] = 𝟢. 𝟪 wug(3) 𝗉[𝗀𝖺𝗆𝗂𝖿𝗒→ ]= 𝟢. 𝟪 gamify +ed Fractions Add & multiply functions Verbs Categorize English past tense verbs Functions Learn what wug(x) does make common denoms add nums make common denoms multiply nums add nums & denoms multiply nums & denoms make common denoms add nums multiply nums & denoms Different student types Figure 2: An overview of the tasks and student types in the ADAPT (Adaptive Teaching) evaluation framework (§3). ADAPT has three tasks: fractions, verbs, and functions. For the fraction and function tasks, a student’s concept space consists of programs; for the verbs task, a student’s concept space is the space of generative models corresponding to English past tense verb classes. • We assume that teachers interact with students by providing examples (xi, yi) ∈X ×Y. For convenience, we denote a sequence of such examples (x, y) = [(x1, y1), . . . , (xn, yn)]. In each round of teaching, the teacher first presents the student with an input xi, the student produces a guess ˆyi, and then the teacher reveals the true yi.2 • Given a collection of examples (x, y), we assume that students compute a posterior over concepts pS(h | x, y). For example, a student who has just seen that 1 3 × 2 3 = 2 9 may be less likely to believe that fraction addition and multiplication follow the same rules. The process by which students infer concepts from examples can in principle be arbitrary; however, for some methods in this paper we will assume that students are Bayesian, with: pS(h | x, y) ∝pS(h) Y i pS(yi | xi, h) (1) under some prior belief pS(h) about the concept space. 2This is both a model of real-world educational practice and a standard paradigm for online learning; future work might study richer forms of interaction with explanations and instructions. Given this setup, a teaching strategy is a policy pT (x, y | x, y) that chooses examples to maximize the probability that the student assigns to h∗. In the optimal teaching (OT) framework of Shafto et al. (2014), for example, pS(h | x, y) is assumed known, and teachers choose examples: xi+1, yi+1 = arg max x,y p(h∗| x, y, x, y) (2) In the running example, this criterion is more likely to select examples of addition for a student who has already mastered multiplication, and vice-versa. In addition to this greedy approach, it is possible to plan sequences of informative examples for students (Rafferty et al., 2016). As discussed in §1, however, the assumption that teachers have exact knowledge of pS(h | x, y) is often unrealistic—real-world teaching involves students of many different types, whose beliefs and misconceptions may not be known (or easily discerned) a priori. Thus, we study teaching when students’ priors are themselves unknown. We assume that students are drawn from a distribution over student types p(α), each associated with a concept prior p(h | α). 
In the running example, these priors may correspond to different beliefs about the algorithms for fraction addition and multiplication, with “addition generalizers” assigning 3 13285 high probability to a spurious multiplication algorithm that manipulates only numerators. In this setting, teachers must still implement an effective example selection policy pT (x, y | x, y); however, choosing informative examples now requires inferring student priors in order to estimate the effect of these examples on pS(h∗| x, y). In the next section, we describe our proposed framework for evaluating adaptive teaching policies. In §4, we describe a set of candidate adaptive teaching policies (including our new ATOM method), and in §5 and §6 use ADAPT to evaluate these teaching policies with simulated and human students. 3 The ADAPT Evaluation Framework ADAPT has two parts: an offline evaluation framework with simulated students (§3.2), and a platform for doing experiments with human students (§3.3). We first describe the tasks in ADAPT (§3.1). An overview of ADAPT is shown in Figure 2. 3.1 Tasks Fractions In the first task, the teacher aims to teach the student how to add and multiply fractions. Here, student types correspond to different prior beliefs (possibly incorrect) about the rules for fraction arithmetic. Verbs The second task is English past-tense conjugation. In this task, students are presented with lemmas and must choose an appropriate ending (e.g., play →+ed, fry →+ied). Here, student types correspond to different degrees of familiarity with possible English past-tense endings. Functions In the third task, reminiscent of existing number concept learning tasks (Tenenbaum, 1999), the teacher aims to teach the student a function that takes in numbers and returns either numbers or undefined. These functions can be represented as programs that take an input x and compute: if f(x): return undefined (3) else: return a*x+b where f is a boolean function and a, b are integers. The teacher chooses input/output pairs (x, wug(x)) to show the student to maximize the student’s belief that the concept is the correct program h∗. We create 24 target concepts, which combine 3 unique settings of a/b and 8 settings of f. Student types can be instantiated by selecting preferences for specific primitives (e.g., f, a, b); arbitrary priors over programs can then be derived from these preferences for primitives. 3.2 Simulated Students The first component of ADAPT evaluates teachers with simulated, Bayesian students. These students maintain belief distributions over the full concept space.3 As shown in Figure 2, different “student types” are implemented by initializing students with different priors over the concept space. All student types begin with low initial belief in the target concept h∗and assign high probability to other spurious concepts. Fractions For the fraction task, we represent understanding of fraction arithmetic as programs. Students maintain a belief distribution over the space of possible programs, as shown in Figure 2. We create two student types, mult-learner (a model of a student who has not yet mastered multiplication and incorrectly applies the procedure for addition to multiplication) and add-learner (a student who performs addition by incorrectly applying the procedure for multiplication). These correspond to common incorrect strategies that children exhibit when learning fraction arithmetic (Braithwaite et al., 2017) by over-generalizing the procedure for one operation to another. 
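To make these two student types concrete, the sketch below (our own illustration, not code from ADAPT) contrasts the correct fraction procedures with the over-generalized ones that the mult-learner and add-learner priors favor. Fractions are kept as unreduced (numerator, denominator) pairs so that the spurious procedures reproduce the predictions discussed above.

from math import gcd

def common_denoms(x, y):
    # x and y are (numerator, denominator) pairs; rewrite both over their
    # least common denominator.
    lcd = x[1] * y[1] // gcd(x[1], y[1])
    return (x[0] * (lcd // x[1]), lcd), (y[0] * (lcd // y[1]), lcd)

def correct_add(x, y):
    (a, d), (b, _) = common_denoms(x, y)   # make common denoms, add nums
    return (a + b, d)

def correct_mul(x, y):
    return (x[0] * y[0], x[1] * y[1])      # multiply nums and denoms

def mult_learner_mul(x, y):
    # Spurious rule of the mult-learner: over-generalize the addition procedure
    # (make common denominators, then multiply only the numerators).
    (a, d), (b, _) = common_denoms(x, y)
    return (a * b, d)

def add_learner_add(x, y):
    # Spurious rule of the add-learner: over-generalize the multiplication procedure
    # (apply the operation to numerators and denominators alike).
    return (x[0] + y[0], x[1] + y[1])

# The spurious rules reproduce the predictions from Figure 1 and the introduction:
assert mult_learner_mul((1, 2), (1, 6)) == (3, 6)
assert mult_learner_mul((1, 5), (2, 5)) == (2, 5)
assert add_learner_add((1, 2), (2, 4)) == (3, 6)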
Functions For the function task, students again maintain a belief distribution over the space of possible concepts. We create two types of students for each target concept: a b-learner and a f-learner. The f-learner knows the true value of b but has an incorrect, spurious belief about what function f(x) is; the b-learner knows the true f(x) in the target program h∗but has an incorrect belief about the value of b. See §B for how we select incorrect beliefs for students. Verbs For the verb task, we represent understanding of verb conjugation as generative models of English past-tense verbs. Students are naïve Bayes models with features for word-final character ngrams, so p(h | x, y) is a distribution over model parameters, with Dirichlet/Beta priors over the verb class/feature occurrence parameters, respectively. We fit a naïve Bayes model on the Unimorph 3The concept space consists of 9 concepts for the fraction task, 4,158 concepts for the function task, and a continuous space of possible values for 10,528 parameters (corresponding to 329 features and 4 verb classes) for the verb task. 4 13286 dataset4 (Batsuren et al., 2022) and use the mode of the resulting posterior as the target concept.5 We create four student types by picking one of the classes as the “unknown” class: a +d-learner, +ed-learner, +consonant+ed-learner, and a +ied-learner. To simulate students who are familiar with all but one class, we initialize the student’s priors by using the posterior mode parameters of the model fit on the full data, setting the parameters for the “unknown” class to all 1s (effectively removing any learned information about the class): Figure 2 shows how setting the prior in this way determines the mode of the +ied-learner’s prior distribution over generative models. 3.3 Human Students The second component of the ADAPT evaluation framework is a platform for evaluating adaptive teaching with human students, specifically for the function learning task. Human participants are tasked with learning what a “mystery machine” called wug does. They are given 10 minutes to interact with a teacher who presents teaching examples through a chat interface. Their task is to figure out when wug(x) is undefined (i.e., guess what f is), and when wug(x) is defined, what it computes (i.e., what a and b are in a*x+b). They can submit guesses for how wug(x) operates whenever and however many times they choose to during the 10 minutes of interaction. wug guesses have 3 components corresponding to f, a, and b, and we allow partial guesses. See §6.1 for more details on instructions, bonus compensation, and other parts of the human study. We create b-learners and f-learners by priming the human participants with hints from a “Dr. Smith”; b-learners receive a hint with the correct f but incorrect value for b, and the reverse for f-learners. An example hint is given in Table 3. 4 Methods 4.1 Teaching Toward Misconceptions We introduce an approach that makes explicit inferences about the parameters of the student’s prior. We call this method ATOM (Adaptive Teaching tOward Misconceptions). Like the OT method described in (§2), ATOM assumes that students are Bayesian reasoners and 4https://github.com/unimorph/unimorph 5By Dirichlet–Multinomial conjugacy, the posterior distribution over parameters factorizes, is also a product of Dirichlet and Beta distributions, and can be efficiently computed. chooses examples to maximize the posterior probability that the student assigns to the target concept h∗. 
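As a reference point before the two steps of ATOM are spelled out below, the following sketch illustrates the known-prior case on the function task: a Bayesian student over a small discrete space of wug concepts (Eq. 1, with near-deterministic likelihoods as a simplifying assumption of ours) and greedy selection of the example that maximizes the student's posterior belief in the target concept (Eq. 2). It is a toy illustration, not the ADAPT implementation.

import math

# A wug concept is a triple (f, a, b): wug(x) is undefined (None) when f(x)
# holds, and a*x + b otherwise (the program form in Eq. 3).
def wug(concept, x):
    f, a, b = concept
    return None if f(x) else a * x + b

def likelihood(concept, x, y, eps=1e-3):
    # Near-deterministic p(y | x, h) used inside the student posterior (Eq. 1).
    return 1.0 - eps if wug(concept, x) == y else eps

def student_posterior(prior, examples):
    # prior: dict mapping concept -> prior probability p_S(h); examples: [(x, y), ...].
    logp = {h: math.log(p) + sum(math.log(likelihood(h, x, y)) for x, y in examples)
            for h, p in prior.items() if p > 0}
    shift = max(logp.values())
    unnorm = {h: math.exp(v - shift) for h, v in logp.items()}
    total = sum(unnorm.values())
    return {h: v / total for h, v in unnorm.items()}

def pick_example(prior, examples, target, candidate_inputs):
    # Greedy optimal teaching (Eq. 2): pick the labeled example that maximizes the
    # student's posterior belief in the target concept (target must be in the prior's support).
    def score(x):
        y = wug(target, x)
        return student_posterior(prior, examples + [(x, y)])[target]
    x_best = max(candidate_inputs, key=score)
    return x_best, wug(target, x_best)

# Toy usage:
#   greater_2, greater_4 = (lambda x: x > 2), (lambda x: x > 4)
#   prior = {(greater_2, 1, 1): 0.5, (greater_4, 1, 1): 0.5}
#   pick_example(prior, [], (greater_2, 1, 1), range(-5, 6))
# favors x = 3 or x = 4, the only inputs on which the two concepts disagree.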
Because the student’s prior is unknown, however, this process involves two steps: 1. Maximum a posteriori estimation of student priors. Recall that, during interaction, the teacher provides inputs xi, then observes student guesses ˆyi before providing ground-truth labels yi. In ATOM, the teacher selects student prior parameters that best explain the student’s sequence of guesses: αi = arg max α X i log pS(ˆyi | x, y, xi, α) (4) where (x, y) = [(x1, y1), . . . , (xi−1, yi−1)]. Estimating this arg max requires a tractable procedure for computing the posterior predictive distribution, which is available for each simulated student model we evaluate in §5. 2. Optimal selection of informative examples. As in OT, once α has been estimated, we choose an example (xi+1, yi+1) to optimize: arg max xi+1,yi+1 pS(h∗| x, y, xi, yi, xi+1, yi+1, α) (5) We note that many more sophisticated ATOMtype methods are possible—for example, explicit marginalization (rather than MAP estimation) of student priors. More basically, the method described above does not perform any active experimentation to identify the student prior; alternative ATOM implementations could explicitly trade off between exploration (of the student type) and exploitation (of the student posterior). 4.2 Other Methods Random The RANDOM baseline uniformly samples an input to show the student. Ranking A second baseline ranks the datapoints at the first step according to the objective in Eq 2, then chooses them in this order for the rest of the teaching interaction. The student type is chosen uniformly. We refer to this baseline as RANKING. Non-Adaptive A third baseline selects examples according to the OT objective in Eq 2 but maintains a fixed guess about the student type, chosen uniformly at the start of teaching. This baseline can be thought of as an ablation of the adaptive piece of ATOM. We refer to this baseline as NONADAPTIVE (Shafto et al., 2014). 5 13287 Fractions Functions Area Under Learning Curves: Simulated Students’ Likelihood of Target Concept Verbs Probability AUC Log PDF AUC GPT4-Gold Greedy GPT4-Gold Greedy (optimal) Adaptive GPT4 Rank Non-Adaptive Random GPT4 Rank Non-Adaptive ATOM Unknown Student: Known Student: 0.8 f-learner b-learner +d learner +ed learner +consonant+ed learner +ied learner multiplication learner addition learner Figure 3: Area under simulated students’ learning curves, where curves plot students’ posterior beliefs in the target concept by number of datapoints. We report results by task and student type with 3 random seeds per bar. Dashed bars indicate that the true student type is assumed. Note that the y-axis for the f-learner for functions starts at 0.8, as these students all learn the concept early on, and so differences in teaching methods are small. Error bars show min/max values across seeds. Full learning curves are shown in Figure 9. Random [Known] GPT4 Non-Adaptive [Known] Non-Adaptive Adaptive [Known] Rank Rank Number of Datapoints Number of Datapoints Number of Datapoints Number of Datapoints Number of Datapoints Target: greater_2
Spurious: greater_4, Ex: [3, 4]; Target: greater_7, Spurious: greater_9, Ex: [8, 9]; Target: positive, Spurious: greater_1, Ex: [1]; Target: divisible_3, Spurious: divisible_6, Ex: [-15, -9, -3, 3, 9, 15]; Target: greater_2, Spurious: greater_1, Ex: [2] (Figure 4 panel labels) GPT4 Teaching Methods:
 Unknown vs Known Student Type Critical Example Selection by Teaching Methods Figure 4: Critical example selection by different teaching methods for the function task. Results are for simulated f-learners, who have a spurious belief about f that agrees with the target f on all but a few examples, as labeled. The opacity of each square corresponds to the mean value of whether the example chosen by the teaching method at that step in learning is a “critical example” (averaged across experimental conditions: seed and concepts). See §5.4 for details. We report a subset of results here; see §11 for full results. GPT-4 We prompt the gpt-4-0314 model to select teaching examples (and provide no other explanations); the prompt describes the target concept, the student’s hypothesis space, and the student types. The model is instructed to try to infer the student type in order to teach most efficiently. See Appendix E for actual prompts. To control for the fact GPT-4 sometimes generates incorrect outputs for examples, we use ground truth outputs for generated inputs.6 We call this method GPT-4. 4.3 Oracle Methods We also compare against several methods that assume access to the true student. These serve as comparison points for how well methods could do if they inferred the correct student type. We run this 6We parse GPT-4 generations to get inputs and create new messages with target outputs. More details can be found in §E. reference for all methods except RANDOM, which does not make use of a student model. We refer to these methods as RANKING-KNOWN, NONADAPTIVE-KNOWN, and GPT-4-KNOWN. 5 Simulated Experiments 5.1 Experimental Set-Up We run 3 random seeds for all experimental conditions. For all methods except the GPT-4-based methods, we restrict the teaching methods from selecting previously selected examples. Teaching interactions last 40 steps for the fraction/function tasks and 50 steps for the verb task. For non GPT-4 methods, we enumerate over either the full dataset (fractions/functions) or a sampled subset (500 examples for verbs) to choose teaching examples. 6 13288 5.2 Students’ Learning Efficiency We evaluate teacher effectiveness by measuring the student’s probability of the target concept, h∗. Figure 3 shows the area under simulated students’ learning curves, where curves reflect students’ beliefs in h∗(full curves are shown in Figure 9). We observe that ATOM performs almost as well as the optimal strategy, NON-ADAPTIVEKNOWN, and outperforms NON-ADAPTIVE, suggesting adaptation is both possible (i.e., student type is inferrable from interaction) and that it leads to improved teaching efficiency. We also observe that both GPT-4-KNOWN and GPT-4 outperform RANDOM but underperform both ATOM and the non-adaptive probabilistic approaches. 5.3 GPT-4’s Inferences about Student Type We query GPT-4 for the student type at the end of the teaching interaction (Based on this interaction, which kind of student do you think I was at the start of this teaching session ...). See Table 11 for an example prompt. The mean accuracies of GPT-4’s student type guesses are 100% for verbs, 66.67% for fractions, and 53.47% for functions. A possible explanation for these discrepancies is that for the fraction and function tasks, students successfully learn the target concept by the end of the teaching interaction and thus make accurate predictions; for the verb task, however, students are still making errors by the end.7 We analyze how these accuracies change throughout the teaching interactions. 
For the function task, the student type accuracies are 64.2%, 60.4%, 56.3%, 53.47% after 10, 20, 30, and 40 steps, respectively: This decrease suggests that GPT-4 exhibits recency bias in making inferences about student type. 5.4 Selection of Critical Examples For function concepts, we evaluate how early teaching methods select “critical examples,” or key examples that distinguish the target f from the spurious f.8 Consider the case where the target f is greater_2 but the f-learner believes it is greater_4: The critical examples are 3 and 4 because they are the only examples on which the target f and spurious f would return different outputs. Observing wug(x) on one of these inputs would make clear to the f-learner that their belief about 7See Figure 10 for how the correctness of student predictions on teaching examples changes throughout learning. 8We consider a subset of target concepts/spurious concepts where the number of critical examples is less than 10. f is wrong. Therefore, an effective teacher should select such examples early in teaching. As shown in Figure 4, the probabilistic methods assuming the student type (RANKING-KNOWN, NON-ADAPTIVE-KNOWN) all select critical examples early. GPT-4-KNOWN also shows a concentration of critical examples early, though they are more spread out for some concepts (i.e., for target f divisible_3/positive); GPT-4 thus exhibits some pedagogical reasoning, focusing on examples that will target the f-learner’s misconceptions when it knows that the student type is a f-learner. When GPT-4 does not know the student type, we observe that critical examples are still more concentrated at the start than for RANDOM, suggesting some degree of adaptivity. Finally, we observe that ATOM selects critical examples at comparable points to NON-ADAPTIVE-KNOWN despite having to guess about student type. 5.5 Qualitative Differences in Teaching Math For function concepts, recall that when wug(x) is defined, it computes a*x+b. We analyze how different methods teach what a and b are when wug(x) is defined by plotting the inputs they choose. As Figure 5 shows, GPT-4 tends to select inputs in order of increasing magnitude. In contrast, ATOM starts with higher-magnitude examples, then selects examples in increasing order. These qualitative differences suggest that GPT-4 may have encoded information that inputs closer to the origin are easier to learn from than those further from the origin.9 9In contrast, ATOM scores inputs according to how many incorrect concepts they “rule out,” treating all else as equal. Minutes of Teaching Interaction Examples Selected for Teaching What wug(x) Computes GPT-4 Random ATOM Chosen Input Order of Datapoint Order of Datapoint GPT-4 ATOM Figure 5: Examples selected by different teaching methods for teaching a and b in the function learning task (i.e., what wug(x) computes when it is defined). The x-axis indicates the order of the chosen example compared to other examples targeting a and b. 7 13289 Function Results with Human Students * * wug partial correctness AUC Random GPT-4 ATOM Figure 6: Results with human students showing how efficiently students guessed the correct wug concept (§6.2). Stars indicate statistically significant results under a paired t-test. Error bars show 95% confidence intervals. 6 Human Experiments 6.1 Experimental Set-Up We recruit Prolific users who are fluent English speakers and who indicated some experience with computer programming. 
We pay participants a base pay of $4.00 per study ($16/hour) and offer a bonus based on the accuracy of their predictions to the teacher and on how early they guess the correct value for wug(x). See Appendix F for details. We evaluate RANDOM, GPT-4, and ATOM teachers on the function task. We run 5 experiments per teaching method per experimental condition.10 These experiments were classified as an exempt Benign Behavioral Intervention by our IRB. 6.2 Students’ Learning Efficiency We evaluate the effectiveness of teachers by measuring the correctness of wug guesses made by human participants, computing an AUC-like metric. We consider all timestamps where at least one participant made a guess for wug and compute a partial correctness metric for each guess: (f is correct) + 0.5(a is correct) + 0.5(b is correct).11 We report the mean correctness values across timestamps. As shown in Figure 6, we find that both GPT-4 and ATOM improve significantly over the RANDOM baseline (p < 0.05 using a paired t-test). Interestingly, these differences are entirely explained Because higher-magnitude inputs tend to result in outputs that can be explained by fewer functions, ATOM selects highermagnitude inputs early in teaching. 10There are 22 unique experimental conditions (11 target concepts, 2 student types per concept). 11If no new guess was made by a user, we use their last guess. by model behavior for f-learners: after controlling for student type, improvements over the random teacher are significant for f-learners but insignificant for b-learners. For results by student type, time, and individual participants, see §F.2. 6.3 Inferences about Student Type We evaluate the accuracy of predictions of student type made by ATOM and GPT-4 after each minute of teaching. ATOM makes more accurate predictions than GPT-4, with respective mean accuracies of 71.33% and 52.61% across participants and minutes. Accuracies over time are shown in §F.2 (Figure 19). 7 Discussion Both our human and simulated results show that ATOM and GPT-4 exhibit pedagogical ability over random example selection. ATOM’s performance with human students suggests that the Bayesian assumptions made by ATOM are accurate models of some aspects of human learning. Across human and simulated experiments, we find evidence of some adaptivity in GPT-4, though less than in ATOM, both in the examples selected (§5.4) and inferences about student type (§5.3, §6.3). See §A for additional analyses about how adaptive selected teaching examples are to student beliefs. We also observe other qualitative limitations of GPT-4 as a teacher: selecting the same teaching examples multiple times or terminating teaching early due to an incorrect belief that all teaching examples have been exhausted. It is important to highlight that because we use ground truth outputs (e.g., ground truth function evaluations) with GPT4, the GPT-4 results represent an upper bound on GPT-4’s performance. Despite these limitations, however, GPT-4 performs comparably to ATOM with human students, suggesting pedagogical benefits beyond adaptivity. In particular, the analysis in §5.5 suggests that LMs may encode information about human learning that is hard to represent in more structured approaches like ATOM—e.g., that it is easier for humans to learn the weights of a line from inputs closer to the origin. Together, our results point to complementary advantages of LM teachers like GPT-4 and more structured, probabilistic methods like ATOM. 
They suggest that there is substantial headroom to improve real-world teaching by augmenting the in8 13290 ferences of structured models with richer information about the priors that humans bring to learning, whether by combining such structured methods with information encoded in LMs or by developing other rich models of student priors—e.g., by learning more complex “student types” from naturally occurring data. We perform an initial experiment with human students in this direction by combining GPT-4 and ATOM but do not find improvements compared to either teacher alone; see §F.3 for details. Other directions for future work include modeling more complex student phenomena— accounting for students who ask questions, reason pedagogically about teacher intentions, and provide feedback to teachers (Chen et al., 2022)—and creating methods for adaptive teaching with natural language explanations in real-world teaching domains. 8 Related Work This work builds on a long line of work in rational models of pedagogy, including Bayesian models like those described by Shafto et al. (2014) and Sumers et al. (2021), as well as improved planning and inference procedures like the one described by Rafferty et al. (2016). Past work generally assumes students’ initial belief states are known. In parallel, Rafferty et al. (2015) use an inverse planning model to infer students’ prior beliefs from their actions, and Chen et al. (2022) show that human teachers adapt examples to these prior beliefs. This work is also closely related to other bodies of work that aim to infer student knowledge. Item response theory (IRT) infers a scalar measure of student’s skill based on their responses to questions (Hambleton and Swaminathan, 1981; Hambelton and Jodoin, 2003). Knowledge tracing (KT) models students’ evolving knowledge states over time separately for a fixed set of skills (Corbett and Anderson, 1994); previous work has used both bayesian methods for modeling individual students’ prior knowledge (Yudelson et al., 2013) and neural models for modeling (Piech et al., 2015) and targeting (Srivastava and Goodman, 2021) students’ evolving learning states. In contrast to IRT and KT, our work aims to infer the student’s entire prior and posterior over the concept space; these inferences in turn enable more fine-grained design of individual teaching examples. Inferring student misconceptions from errors also uses tools from a broader literature on computational models of theory of mind. Prominent work includes general-purpose bayesian models of other agents’ beliefs and desires (Baker et al., 2011) and models of pragmatic inference grounded in recursive reasoning about speakers and listeners (Frank and Goodman, 2012). More recent work has studied theory of mind capabilities in LMs; they find largely negative results in unaugmented LLMs (Sap et al., 2023) but positive results from LMs augmented with structured belief representations (Sclar et al., 2023). Recent work has also explored LLMs’ theory of mind abilities in teaching smaller LMs (Saha et al., 2023). There is also a large body of work on how to optimally provide and interpret human supervision for ML models. General frameworks for this problem include Machine Teaching (Zhu et al., 2018) and Cooperative IRL (Hadfield-Menell et al., 2016); related ideas appear in program synthesis (Vaduguru et al., 2022), robot learning (Milli and Dragan, 2019; Dragan et al., 2013), and natural language processing (Li et al., 2023) as well. 
There has been increased interest in using LLMs to assist or supplement human teachers. See Kasneci et al. (2023) for a survey and Wang et al. (2023) for a specific application to math teaching problems. Concurrent work by Chandra et al. (2024) uses program synthesis techniques to infer misconceptions and provide explanations about Javascript. Our work adds to this literature by providing a framework that allows reproducible evaluation of the effectiveness and personalization skills of LLMs as teachers, as well as a new model that empirically improves upon LLM baselines in teaching humans a new task. 9 Conclusion We introduce ADAPT, an evaluation suite measuring how effectively automated teaching methods can teach students with different prior misconceptions. We also introduce ATOM, a two-part probabilistic approach to adaptive teaching that maintains explicit inferences about student priors. Our evaluations of ATOM, LLMs, and other probabilistic baselines with both simulated and human students highlight the potential of learned adaptive models for solving the adaptive teaching task. 9 13291 10 Limitations One limitation of our work is that in our model of teaching, teachers are limited to example selection and students are limited to observation. These restrictions leave out that teachers can provide explanations, and students can ask questions and provide feedback to teachers. An interesting direction for future work would be to both create an evaluation framework for such phenomena and develop models for these richer forms of teaching and learning feedback. Relatedly, students can also engage in pedagogical reasoning about why teachers chose particular examples, which can in turn influence how they learn from these examples; while the simulated students in ADAPT do not model this phenomenon, future work could explore richer models of students. Another limitation is that we create the student types by instantiating priors in particular ways rather than deriving the priors from real-world data. Future work could explore how to automatically learn the types of priors that human students bring to different teaching scenarios. Acknowledgements We are grateful to Ekin Akyürek, Leshem Choshen, Gabriel Grand, Robert Hawkins, Andi Peng, Megha Srivastava, Lionel Wong, Zhaofeng Wu, and members of the MIT Language & Intelligence group for helpful discussions and feedback on this work. AR was funded by an MIT Shillman Fellowship and the NSF GRFP 2023357727; additional funding was provided by the National Science Foundation under grants IIS-2238240 and IIS2212310, and by the Alfred P. Sloan Foundation. We also acknowledge the use of the following icons from Noun Project: “student” and “Teacher” by Crystal Gordon, “Star” by Bluetip Design, “X” by Alice Design, and “Check” by Siti Masriatun. References Ekin Akyürek, Dale Schuurmans, Jacob Andreas, Tengyu Ma, and Denny Zhou. 2023. What learning algorithm is in-context learning? investigations with linear models. In The Eleventh International Conference on Learning Representations. Chris Baker, Rebecca Saxe, and Joshua Tenenbaum. 2011. Bayesian theory of mind: Modeling joint Belief-Desire attribution. Proceedings of the Annual Meeting of the Cognitive Science Society, 33(33). 
Khuyagbaatar Batsuren, Omer Goldman, Salam Khalifa, Nizar Habash, Witold Kiera´s, Gábor Bella, Brian Leonard, Garrett Nicolai, Kyle Gorman, Yustinus Ghanggo Ate, Maria Ryskina, Sabrina Mielke, Elena Budianskaya, Charbel El-Khaissi, Tiago Pimentel, Michael Gasser, William Abbott Lane, Mohit Raj, Matt Coler, Jaime Rafael Montoya Samame, Delio Siticonatzi Camaiteri, Esaú Zumaeta Rojas, Didier López Francis, Arturo Oncevay, Juan López Bautista, Gema Celeste Silva Villegas, Lucas Torroba Hennigen, Adam Ek, David Guriel, Peter Dirix, Jean-Philippe Bernardy, Andrey Scherbakov, Aziyana Bayyr-ool, Antonios Anastasopoulos, Roberto Zariquiey, Karina Sheifer, Sofya Ganieva, Hilaria Cruz, Ritván Karahóˇga, Stella Markantonatou, George Pavlidis, Matvey Plugaryov, Elena Klyachko, Ali Salehi, Candy Angulo, Jatayu Baxi, Andrew Krizhanovsky, Natalia Krizhanovskaya, Elizabeth Salesky, Clara Vania, Sardana Ivanova, Jennifer White, Rowan Hall Maudslay, Josef Valvoda, Ran Zmigrod, Paula Czarnowska, Irene Nikkarinen, Aelita Salchak, Brijesh Bhatt, Christopher Straughn, Zoey Liu, Jonathan North Washington, Yuval Pinter, Duygu Ataman, Marcin Wolinski, Totok Suhardijanto, Anna Yablonskaya, Niklas Stoehr, Hossep Dolatian, Zahroh Nuriah, Shyam Ratan, Francis M. Tyers, Edoardo M. Ponti, Grant Aiton, Aryaman Arora, Richard J. Hatcher, Ritesh Kumar, Jeremiah Young, Daria Rodionova, Anastasia Yemelina, Taras Andrushko, Igor Marchenko, Polina Mashkovtseva, Alexandra Serova, Emily Prud’hommeaux, Maria Nepomniashchaya, Fausto Giunchiglia, Eleanor Chodroff, Mans Hulden, Miikka Silfverberg, Arya D. McCarthy, David Yarowsky, Ryan Cotterell, Reut Tsarfaty, and Ekaterina Vylomova. 2022. UniMorph 4.0: Universal Morphology. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 840–855, Marseille, France. European Language Resources Association. David W Braithwaite, Aryn A Pyke, and Robert S Siegler. 2017. A computational model of fraction arithmetic. Psychol. Rev., 124(5):603–625. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc. Kartik Chandra, Tzu-Mao Li, Rachit Nigam, Joshua Tenenbaum, and Jonathan Ragan-Kelley. 2024. From ‘why?’ to ‘wat!’: Explaining perplexing programs by debugging mental models. In PLATEAU Workshop. 10 13292 Alicia M Chen, Andrew Palacci, Natalia Vélez, Robert Hawkins, and Samuel J Gershman. 2022. Learning to teach, teaching to learn. Albert Corbett. 2001. Cognitive computer tutors: Solving the Two-Sigma problem. In User Modeling 2001, pages 137–147. Springer Berlin Heidelberg. Albert T Corbett and John R Anderson. 1994. Knowledge tracing: Modeling the acquisition of procedural knowledge. User Model. User-adapt Interact., 4(4):253–278. Anca D. Dragan, Kenton C.T. Lee, and Siddhartha S. Srinivasa. 2013. Legibility and predictability of robot motion. In Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction, HRI ’13, page 301–308. IEEE Press. Michael C. Frank and Noah D. Goodman. 2012. 
Predicting pragmatic reasoning in language games. Science, 336(6084):998–998. Dylan Hadfield-Menell, Stuart J Russell, Pieter Abbeel, and Anca Dragan. 2016. Cooperative inverse reinforcement learning. In Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc. RK Hambelton and M Jodoin. 2003. Item response theory: models and features. Ronald K. Hambleton and Hariharan Swaminathan. 1981. Journal of Educational Measurement, 18(3):178–180. Enkelejda Kasneci, Kathrin Sessler, Stefan Küchemann, Maria Bannert, Daryna Dementieva, Frank Fischer, Urs Gasser, Georg Groh, Stephan Günnemann, Eyke Hüllermeier, Stephan Krusche, Gitta Kutyniok, Tilman Michaeli, Claudia Nerdel, Jürgen Pfeffer, Oleksandra Poquet, Michael Sailer, Albrecht Schmidt, Tina Seidel, Matthias Stadler, Jochen Weller, Jochen Kuhn, and Gjergji Kasneci. 2023. Chatgpt for good? on opportunities and challenges of large language models for education. Learning and Individual Differences, 103:102274. Belinda Z Li, Alex Tamkin, Noah Goodman, and Jacob Andreas. 2023. Eliciting human preferences with language models. Smitha Milli and Anca D Dragan. 2019. Literal or pedagogic human? analyzing human model misspecification in objective learning. Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the role of demonstrations: What makes in-context learning work? In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11048–11064, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Chris Piech, Jonathan Spencer, Jonathan Huang, Surya Ganguli, Mehran Sahami, Leonidas J. Guibas, and Jascha Sohl-Dickstein. 2015. Deep knowledge tracing. CoRR, abs/1506.05908. Anna N Rafferty, Emma Brunskill, Thomas L Griffiths, and Patrick Shafto. 2016. Faster teaching via POMDP planning. Cogn. Sci., 40(6):1290–1332. Anna N Rafferty, Michelle M LaMar, and Thomas L Griffiths. 2015. Inferring learners’ knowledge from their actions. Cogn. Sci., 39(3):584–618. Swarnadeep Saha, Peter Hase, and Mohit Bansal. 2023. Can language models teach weaker agents? teacher explanations improve students via personalization. Maarten Sap, Ronan LeBras, Daniel Fried, and Yejin Choi. 2023. Neural theory-of-mind? on the limits of social intelligence in large lms. Melanie Sclar, Sachin Kumar, Peter West, Alane Suhr, Yejin Choi, and Yulia Tsvetkov. 2023. Minding language models’ (lack of) theory of mind: A plug-andplay multi-character belief tracker. Patrick Shafto, Noah D Goodman, and Thomas L Griffiths. 2014. A rational account of pedagogical reasoning: teaching by, and learning from, examples. Cogn. Psychol., 71:55–89. Megha Srivastava and Noah Goodman. 2021. Question generation for adaptive education. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 692–701, Online. Association for Computational Linguistics. Theodore R Sumers, Robert D Hawkins, Mark K Ho, and Thomas L Griffiths. 2021. Extending rational models of communication from beliefs to actions. Joshua Tenenbaum. 1999. Rules and similarity in concept learning. In Advances in Neural Information Processing Systems, volume 12. MIT Press. Saujas Vaduguru, Kevin Ellis, and Yewen Pu. 2022. Efficient pragmatic program synthesis with informative specifications. Rose E. 
Wang, Qingyang Zhang, Carly Robinson, Susanna Loeb, and Dorottya Demszky. 2023. Bridging the novice-expert gap via models of decision-making: A case study on remediating math mistakes. Michael V Yudelson, Kenneth R Koedinger, and Geoffrey J Gordon. 2013. Individualized bayesian knowledge tracing models. In Lecture Notes in Computer Science, Lecture notes in computer science, pages 171–180. Springer Berlin Heidelberg, Berlin, Heidelberg. Xiaojin Zhu, Adish Singla, Sandra Zilles, and Anna N Rafferty. 2018. An overview of machine teaching. 11 13293 Data Targets f? Adaptation of Teaching Examples to Simulated Students’ Misconceptions: Functions GPT-4 ATOM Non-Adaptive: Known GPT-4: Known Number of Datapoints b-learner f-learner Number of Datapoints Number of Datapoints Number of Datapoints Figure 7: Plot showing whether chosen teaching examples target learning f (i.e., when wug(x) is undefined) or a*x+b (i.e., what wug(x) computes when defined) for the function task. A y-value of 1.0 indicates that the teaching example targets f; 0.0 indicates that it does not. An input x for which wug(x) is undefined targets f, and an input for which wug(x) is defined targets a*x+b. Intuitively, f-learners benefit more from seeing examples targeting f early on in teaching, and b-learners benefit more from seeing examples that target a*x+b. Error regions indicate standard errors of the mean. Adaptation of Teaching Examples to Simulated Students’ Misconceptions Fractions Verbs Functions Data Targets Unknown Concept? Number of Datapoints Number of Datapoints GPT4 Non-Adaptive GPT4 Non-Adaptive ATOM Number of Datapoints Unknown Student: Known Student: Figure 8: Plot showing whether chosen teaching examples across experiments target student misconceptions. A y-value of 1.0 indicates that the teaching example targets misconceptions; 0.0 indicates that it does not. For fractions, multiplication problems with common denominators target mult-learners’ misconceptions, and addition problems with different denominators target add-learners’ misconceptions. For functions, inputs x for which f(x) returns True target f-learners’ misconceptions, and inputs for which f(x) returns False target b-learners’ misconceptions. For verbs, inputs with class “unknown” by the student (e.g., a +ied verb for a +ied-learner) target the student’s misconception. Error regions indicate standard errors of the mean. A Do teaching examples target student misconceptions? For the simulated evaluations, we analyze whether the examples selected by different teaching methods target students’ specific misconceptions. Functions Figure 7 shows whether the selected teaching examples target learning f or a*x+b in the target wug concept, split by student type. We expect an adaptive teacher to select examples targeting f for f-learners at the start of learning, and similarly for examples targeting a*x+b for b-learners. In both plots, we see that the optimal teacher NON-ADAPTIVE-KNOWN, which assumes access to the ground truth student type, exhibits the expected behavior: It selects more examples targeting f for the f-learner than for the b-learner at the beginning of the teaching interaction. ATOM, despite needing to maintain guesses for the student type, shows similar adaptivity to students’ priors early on, selecting more examples targeting f for the f-learner than for the b-learner at the start of teaching. GPT-4-KNOWN also shows this adaptivity when assuming the true student type. 
However, when it does not have access to the true student type, it does not show this adaptation; the data selection patterns of GPT-4 are highly similar for the b-learners and f-learners, suggesting that GPT-4 struggles with doing implicit adaptation online. All Tasks Figure 8 shows adaptation of teaching examples across all tasks and student types. We observe similar trends: Across tasks, ATOM shows 12 13294 similar levels of adaptivity to NON-ADAPTIVEKNOWN, despite not knowing student type, and outperforms both GPT-4 methods. We also observe GPT-4-KNOWN selecting more examples targeting unknown concepts than GPT-4. B Creating Function Concepts and Student Types b has possible values [1, 2, · · · , 9] and a has possible values [−5, −4, · · · , 4, 5]. For each concept, to create the b-learner, we uniformly sample the incorrect b from the range of possible values of b, excluding the target b; to create the f-learner, we uniformly sample the incorrect f from a set of spuriously associated functions: These possible values are shown in Table 5. The full list of concepts and incorrect student beliefs are shown in Table 4. C Creating the Verbs Dataset For the verbs task, we create verb classes with reg exp matching on their past tense forms. For the GPT-4 method, we create ground truth outputs by first checking if a lemma exists in the Unimorph dataset; if not, we use a Python verb inflection package pyinflect12 to first get the past tense form of the verb, then categorize it. Table 1 shows verb classes and corresponding counts in the resulting dataset. The model that we fit on the full dataset (to derive the parameters of the target concept) obtains a predictive accuracy of 95.47%, and the mean probability of the ground truth outputs across the full dataset is 0.945. D Simulated Experiments For all tasks, we obtain predictions from simulated students by sampling from their predictive distributions. Program Tasks For the programmatic tasks (functions, fractions), each simulated student’s prior belief in a program h is proportional to the number of “special primitives” that appear in the program. We derive the prior over programs by multiplying a value c by the number of special primitives that appear in a program to get values for all programs; we then normalize these values to get a distribution over programs. The simulated students for the function and fraction tasks also maintain a noise parameter that 12https://github.com/bjascob/pyInflect governs how noisy the labels are in the examples they observe; this noise parameter governs their posterior updates. For fractions, this noise parameter is 0.8, and for functions, it is 0.05. We use the same noise values for the teacher’s models of the students. Fractions Table 2 shows the multiplication and addition sub-programs that are used to create the concept space for simulated students for the fraction task. For the mult-learner who overgeneralizes the procedure for addition, the “special primitives” are (1), (4), and (6). For the add-learner who over-generalizes the procedure for multiplication, the “special primitives” are (2), (3), and (5). We use a value of c = 1e5. Functions For b-learners, the “special primitives” are the target f and spurious b (and so programs with either of these primitives would have higher prior beliefs; programs having both the target f and spurious b would have the highest prior belief). Similarly, for f-learners, the incorrect f and target b are the special primitives. We use a value of c = 1e4. 
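To make the simulated-student setup above concrete, the following sketch shows one way such a Bayesian student could be implemented. This is a minimal illustration under stated assumptions, not the authors' code: the class and helper names are hypothetical, programs with zero special primitives are given a small baseline weight so the prior stays well defined, and the noisy likelihood (1 - noise for a consistent hypothesis, noise otherwise) is one plausible reading of how the noise parameter governs posterior updates.

import random
from typing import Callable, List

class SimulatedStudent:
    """Sketch of a Bayesian simulated student (hypothetical names).

    The prior weight of each candidate program grows with the number of
    "special primitives" it contains, scaled by c; the posterior is
    updated with a noisy likelihood controlled by `noise`.
    """

    def __init__(self, hypotheses: List[Callable], special_counts: List[int],
                 c: float = 1e4, noise: float = 0.05):
        self.hypotheses = hypotheses
        self.noise = noise
        # Assumption: the +1 keeps programs with no special primitives at
        # non-zero prior mass; the paper only states that weights are
        # proportional to c times the number of special primitives.
        weights = [1.0 + c * k for k in special_counts]
        total = sum(weights)
        self.belief = [w / total for w in weights]

    def update(self, x, y) -> None:
        """Posterior update after observing one labelled example (x, y)."""
        likes = [(1.0 - self.noise) if h(x) == y else self.noise
                 for h in self.hypotheses]
        unnorm = [b * l for b, l in zip(self.belief, likes)]
        total = sum(unnorm)
        self.belief = [u / total for u in unnorm]

    def predict(self, x):
        """Sample a prediction from the student's predictive distribution."""
        h = random.choices(self.hypotheses, weights=self.belief, k=1)[0]
        return h(x)

For the function task, for example, the hypothesis list would enumerate candidate (f, a, b) programs, with special_counts recording how many of the student type's special primitives appear in each program.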
E GPT-4: Details Prompts for GPT-4 are shown in Tables 6, 7, and 8 for fractions, Tables 9, 10, and 11 for functions, and Tables 12, 13, and 14 for verbs. An example conversation between GPT-4 and a simulated student is shown in Table 15. Processing/Filtering GPT-4 Outputs In order to control for the fact that GPT-4 may generate incorrect outputs for examples, we use ground truth outputs for generated inputs. We parse GPT-4generated messages to obtain inputs, then compute ground truth labels for those inputs and append them to the message history, starting with “That’s correct/incorrect.” If a message cannot be parsed, we append a canned response, i.e., “Sorry, I could not learn from that example. I can only learn from examples that are formatted as...” (if no output can be parsed from the message) or “I would like to keep learning. Can I have another example?” (if no input can be parsed from the message); these messages do not count as an “interaction” in comparing against other teaching methods. For human experiments, we do not display these canned responses to the students and instead only display messages asking for predictions on examples and providing ground truth answers. 13 13295 Verb Class Description Example Counts +ed add ‘ed’ to the lemma clasp 6,130 +d add ‘d’ to the lemma smile 13,463 +ied replace last ‘y’ with ‘ied’ cry 1,056 +consonant+ed double last consonant, add ‘ed’ stop 1,878 Table 1: Verb classes and corresponding dataset counts for the verb conjugation task (§3.1). Addition Sub-Programs (1) (2) (3) add numerators & denominators if denominators are equal: make common denominators add numerators add numerators else: add numerators & denominators Multiplication Sub-Programs (4) (5) (6) multiply numerators & denoms if denominators are equal: make common denominators multiply numerators multiply numerators else: multiply numerators & denominators Table 2: The addition and multiplication sub-programs in the concept space for the fraction task in ADAPT (§3). The sub-programs in the target concept (i.e., correct sub-programs for adding/multiplying fractions) are bolded. Fractions Functions Learning Curves: Simulated Students’ Likelihood of Target Concept Probability Log PDF GPT4 Rank Non-Adaptive Random GPT4 Rank Non-Adaptive ATOM Unknown Student: Known Student: multiplication learner addition learner b learner f learner +d learner +ed learner +consonant+ed learner +ied learner Verbs Figure 9: Learning curves for simulated students. Top row: results for function learning and fraction arithmetic, with y-axis showing the probability of the target concept. Bottom row: results for verb conjugation, with y-axis showing the log PDF of the target concept. Each subplot corresponds to a different student type. Color indicates teaching method. Linestyles indicate whether the true student type is assumed (dashed=unknown, solid=known). Confidence intervals indicate standard error of the mean. Decoding For all experiments with GPT-4, we use a temperature of 0.5 and maximum tokens of 100. F Human Experiments F.1 Set-Up Post-Processing We filter and rerun any experiments where the chat messages were logged out of order or sent twice. Hyperparameters ATOM uses a noise parameter value of 0.02 for modeling simulated students. Instructions An example of hints given to the human participants is shown in Table 3. The full set of instructions shown to human participants, along with the interface, are shown in Figures 12/13 (instructions), 14 (chat), and 15 (end). 
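Appendix E above notes that GPT-4-generated teaching messages are parsed for inputs, paired with ground-truth outputs, and replaced by canned responses when parsing fails. The sketch below illustrates what that filtering step could look like for the function task; the regular expression, function names, and exact response strings are illustrative assumptions rather than the authors' implementation.

import re
from typing import Callable, Optional, Tuple

# Hypothetical pattern for teacher messages such as "What is wug(4)?".
INPUT_PATTERN = re.compile(r"wug\((-?\d+)\)")

def parse_input(message: str) -> Optional[int]:
    """Extract the proposed example input from a teacher message, if any."""
    match = INPUT_PATTERN.search(message)
    return int(match.group(1)) if match else None

def filter_teacher_message(message: str,
                           ground_truth: Callable[[int], Optional[int]],
                           student_guess: Optional[int]) -> Tuple[str, bool]:
    """Build the student-side reply and flag whether this message counts
    as a real teaching interaction."""
    x = parse_input(message)
    if x is None:
        # No parsable input: canned response, not counted as an interaction.
        return ("I would like to keep learning. "
                "Can I have another example?", False)
    y = ground_truth(x)  # ground-truth output replaces GPT-4's own answer
    verdict = "correct" if student_guess == y else "incorrect"
    answer = "undefined" if y is None else str(y)
    return (f"That's {verdict}. wug({x})={answer}.", True)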
Bonus Compensation Participants are told that their bonus depends on two things: 14 13296 Dr. Smith spent a bunch of time studying this machine. She figured out that when wug is defined, it computes a function of the form a*x+b, where a and b are constant numbers, so you only need to figure out what a and b are. She also left a note with some thoughts: I’m pretty sure, but not totally confident, that: 1) wug is undefined when inputs are greater than 2 2) When wug is defined, b = 3 –Dr. Smith Dr. Smith is quite familiar with wug, so her note should give you a good place to start! But keep in mind that it is possible that she is wrong. Table 3: An example of a hint given to a human learner. (1) is correct, while (2) is not, thus creating a b-learner. 1. Accuracy of wug guesses: Participants are told they will receive 0.05 for every 10 seconds of the teaching interaction that their guess is correct, with partial compensation if only f or only a/b is correct. 2. Accuracy of predictions on teaching examples: Participants are told they will receive up to an additional 1.00 based on the accuracy of their predictions. As shown in Figure 13, participants are prompted to indicate their understanding of what their bonus depends on. F.2 Additional Results Figure 16 shows the learning efficiency of human students by student type. Figure 17 shows the correctness of wug guesses by component as a function of time. Figure 18 shows the AUCs of wug correctness by component for individual participants. Figure 19 shows the accuracy of student type predictions over time. Correctness of Simulated Student Predictions on Teaching Examples Fractions Functions Verbs Number of Datapoints Prediction Correctness Figure 10: Correctness of simulated student predictions on teaching examples by task. Error regions indicate 95% confidence intervals. F.3 Combining GPT-4 and ATOM We run an experiment combining GPT-4 and ATOM in the following way: We use ATOM to make inferences about student type, then prompt GPT-4 with ATOM’s inference by updating the system prompt to GPT-4 after each prediction made by the student. Before any predictions are given, GPT-4 is prompted with both student types (i.e., with the prompt given to the teacher that does not know student type). We call this method GPT4+ATOM. Results are shown in Figure 20. We find that GPT-4+ATOM outperforms RANDOM (p < 0.05 using a paired t-test) but does not outperform GPT4 or ATOM. 15 13297 Random [Known] GPT4 Non-Adaptive [Known] Non-Adaptive Adaptive [Known] Rank Rank GPT4 Random [Known] GPT4 Non-Adaptive [Known] Non-Adaptive Adaptive [Known] Rank Rank GPT4 Critical Example Selection by Teaching Methods Target: divisible_3
[Figure 11 panel labels — target concept / spurious belief / critical examples: divisible_3 / divisible_6 / [-15, -9, -3, 3, 9, 15]; divisible_4 / divisible_8 / [-20, -12, -4, 4, 12, 20]; greater_2 / greater_1 / [2]; greater_2 / greater_3 / [3]; greater_2 / greater_4 / [3, 4]; positive / greater_1 / [1]; positive / greater_2 / [1, 2]; greater_7 / greater_9 / [8, 9]; greater_7 / greater_8 / [8]; greater_7 / greater_5 / [6, 7].] Teaching Methods:
 Unknown vs Known Student Type Number of Datapoints Number of Datapoints Number of Datapoints Number of Datapoints Number of Datapoints Figure 11: Full results for critical example selection by different teaching methods for the function task. Results are for simulated f-learners, who have a spurious belief about f that agrees with the target f on all but a few examples, as labeled. The opacity of each square corresponds to the mean value of whether the example chosen by the teaching method at that step in learning is a “critical example” (averaged across experimental conditions: seed and concepts). See §5.4 for details. Target Incorrect Used in f a b f b Human Experiments? even 1 7 divisible_6 5 even -5 5 divisible_6 7 Y even 3 8 divisible_4 3 Y greater_2 1 7 greater_4 3 greater_2 -5 5 greater_1 6 greater_2 3 8 greater_3 5 Y prime 1 7 odd 2 Y prime -5 5 odd 9 Y prime 3 8 odd 6 divisible_3 1 7 divisible_6 6 divisible_3 -5 5 divisible_6 9 divisible_3 3 8 divisible_6 5 divisible_4 1 7 divisible_8 1 divisible_4 -5 5 divisible_8 8 Y divisible_4 3 8 divisible_8 9 Y positive 1 7 greater_2 4 positive -5 5 greater_2 2 positive 3 8 greater_1 4 odd 1 7 divisible_5 3 Y odd -5 5 prime 2 Y odd 3 8 divisible_3 6 Y greater_7 1 7 greater_9 2 greater_7 -5 5 greater_8 6 Y greater_7 3 8 greater_5 6 Table 4: The target concepts and incorrect beliefs used in the function learning experiments. The target f, a, b define the concepts being taught. The incorrect f is the belief that the f-learners have about f at the start of learning, and the incorrect b is the value that the b-learners believes b to be. 16 13298 By entering your Prolific ID below, you are participating in a study being performed by computer scientists in the MIT Department of Electrical Engineering and Computer Science and are agreeing to the following: You must be at least 18 years old to participate. Your participation in this research is voluntary. You may decline to answer any or all of the following questions. You may decline further participation, at any time, without adverse consequences. Your anonymity is assured; the researchers who have requested your participation will not receive any personal information about you. Once you click submit, you will not be able to come back to this page. Enter your Prolific ID: Submit Mystery Machine Study Introduction 5/6/24, 8:46 AM Mystery Machine Study 128.30.64.44:8089 1/1 Goal In this study, we are researching the way that people learn from examples. You will be tasked with learning what a mystery machine called wug does. This machine takes in numbers and outputs numbers; wug(x)=y means that when wug takes in number x , it outputs y . However, it only works for some numbers and is undefined for others. Your task is to learn on what inputs wug is undefined, and when it is defined, what it does. For example, suppose that you were instead learning about a machine vug that operated on letters instead of numbers. Suppose you saw the following sequence of examples: vug(A)=B , vug(B)=undefined , vug(E)=F , vug(I)=J . This data might suggest that vug is undefined on consonants and, when defined (on vowels), returns the next letter in the alphabet. Next Mystery Machine Study Task 5/6/24, 8:31 AM Mystery Machine Study 128.30.64.44:8089/task1 1/1 Details You will be interacting with a teacher through a chat interface, shown on the right. The teacher will show you examples of what wug does on different numbers, first asking you what you think wug outputs on a given input, and then providing the true output. 
Your goal is to figure out what wug does as quickly as possible. Throughout the teaching interaction, you can provide your guess of what you think wug does through the sidebar on the right. Your guess will have two parts: 1. when wug is undefined 2. when defined, what it does. We also provide a calculator to help with any calculations you may need for your guess. You will have 10 minutes to learn from the teacher. After that, you will no longer be able to learn about wug or make guesses. If you get 10 predictions correct in a row while talking to the teacher or go through all the teacher's examples before the 10 minutes are up, you will have the option to end the interaction early. Back Next Mystery Machine Study Task 5/6/24, 8:31 AM Mystery Machine Study 128.30.64.44:8089/task2 1/1 Figure 12: Screens 1-3 (instructions) for the study with human participants. f Incorrect f Options prime odd positive greater_n for n ∈[−2, −1, 1, 2] even divis_4, divis_6 odd prime, divis_3, divis_5, divis_7 divis_n divis_m where m is the smallest multiple of n or the largest factor of n (if m = 2, this is even) greater_n greater_m where ∥m −n∥<= 2 (if m = 0, this is positive) Table 5: Descriptions of how the options for incorrect f beliefs of f-learners are generated for each target f in the function task. We uniformly sample the incorrect f from the set of options to determine the actual incorrect f belief for the f-learners. 17 13299 Bonus You will recive a base pay of $4.00 for this study. In addition, based on your performance at the task, you will be compensated with a bonus dependent on: Accuracy of guesses for wug : You will receive a higher bonus the sooner and longer you provide a correct guess for wug . You will receive $0.05 for every 10 seconds of the teaching interaction that your guess is correct. For example, if you provided the correct guess for wug 1 minute into the teaching interaction and left it unchanged, you would have a correct guess for 9 minutes, or equivalently 540 seconds. Therefore, you would get a bonus of $0.05 ∗ (540/10) ∗ = $2.70. You will receive partial compensation if only one of the two parts is correct, or if you have the correct guess submitted for less than 10 seconds. Accuracy of outputs for teacher examples: You will receive up to an additional $1.00 based on how many of your responses to the teacher are correct . For example, in the example teaching interaction in the right image, the participant gave 2/3 answers to the teacher, and so their bonus would be $1.00 ∗ (2/3) = $0.67. Therefore, you should aim to provide a guess of what wug does as soon as you think you know, and aim to give the teacher accurate answers. You will not know if the guesses you submit for wug are correct. Back Next Mystery Machine Study Task 5/6/24, 8:31 AM Mystery Machine Study 128.30.64.44:8089/task3 1/1 Hints about wug Fortunately, you are not starting from a blank slate. Dr. Smith spent a bunch of time studying this machine. She figured out that when wug is defined, it computes a function of the form a∗x+b , where a and b are constant numbers, so you only need to figure out what a and b are. She also left a note with some thoughts: I'm pretty sure, but not totally confident, that: 1. wug is undefined when inputs are divisible by 3 2. When wug is defined, b = 8 --Dr. Smith Dr. Smith is quite familiar with wug , so her note should give you a good place to start! But keep in mind that it is possible that she is wrong. 
We will provide the note for you throughout learning so that you can refer to it. You can also always come back to the task instructions by using the navigation buttons "Back" and "Next" below. Back Next Mystery Machine Study Task 5/6/24, 8:32 AM Mystery Machine Study 128.30.64.44:8089/task4 1/1 Instruction check You now have all the information to start learning wug ! Before you start, please answer the following questions. If you are unsure of the answers, we encourage you to read through the task instructions again. The bonus depends on: (1) Accuracy of predictions I give to the teacher (2) How quickly I guess what wug does Both (1) and (2) Correct! The correct answer is Both (1) and (2). True or False: Multiple guesses for wug are allowed. True False Correct! The correct answer is True. Multiple guesses are allowed. Submit Press "Next" to enter the chat interface. There, you will have the opportunity to familiarize yourself with the layout before learning begins and the timer starts. After you press "I Am Ready" on the next page, you will interact with the teacher for 10 minutes. Back Next Mystery Machine Study Task 5/6/24, 8:32 AM Mystery Machine Study 128.30.64.44:8089/task5 1/1 Figure 13: Screens 4-6 (instructions) for the study with human participants. 18 13300 Back Accuracy of Predictions: Streak: 1 2 3 4 5 6 7 8 9 10 Press "I Am Ready" when you are ready to start learning. After that, the timer will begin, and you will have 10 minutes to interact with the teacher and submit guesses. You are encouraged to play around with the interface before beginning. I Am Ready Type your message here... Send Mystery Machine Study Chat You can use this calculator to help you figure out wug; it will not affect your bonus. Remember: Multiple guesses are allowed, and you can get partial credit for getting only one part of wug correct. The sooner you have the correct guess for wug, the higher your bonus. Make a guess about wug (1) wug(x) is undefined when input x is: -(2) when defined, wug(x) computes a∗x+b where: a= b= Calculator for a∗x+b where: a= b= OR Reset to Guess x= 0 Result: I'm pretty sure, but not totally confident, that: 1. wug is undefined when inputs are divisible by 3 2. When wug is defined, b = 8 --Dr. Smith Timer: 10:00 5/6/24, 8:32 AM Chat Interface 128.30.64.44:8089/chat 1/1 Back Accuracy of Predictions: 1/3 Streak: 1 2 3 4 5 6 7 8 9 10 Teacher You Teacher You Teacher You Teacher What is wug(2)? 8 That's incorrect. wug(2)=14. What is wug(4)? 20 That's correct. What is wug(6)? undefined That's incorrect. wug(6)=26. What is wug(8)? Type your message here... Send Mystery Machine Study Chat You can use this calculator to help you figure out wug; it will not affect your bonus. Remember: Multiple guesses are allowed, and you can get partial credit for getting only one part of wug correct. The sooner you have the correct guess for wug, the higher your bonus. Make a guess about wug (1) wug(x) is undefined when input x is: -(2) when defined, wug(x) computes a∗x+b where: a= b= Calculator for a∗x+b where: a=3 b=8 OR Reset to Guess x= 4 Result: 3∗x+8= 20 I'm pretty sure, but not totally confident, that: 1. wug is undefined when inputs are divisible by 3 2. When wug is defined, b = 8 --Dr. Smith Timer: 09:26 5/6/24, 8:33 AM Chat Interface 128.30.64.44:8089/chat 1/1 Figure 14: The chat interface for the study with human participants. 19 13301 A) Was Dr. Smith's note right? I'm pretty sure, but not totally confident, that: 1. wug is undefined when inputs are divisible by 3 2. 
When wug is defined, b = 8 --Dr. Smith Hint (1) was: Correct Incorrect I don't know Hint (2) was: Correct Incorrect I don't know B) Did Dr. Smith's note influence what you thought wug did? Yes No C) Was there anything you found confusing or did not understand throughout the study, or any other issues that you encountered? Submit Mystery Machine Study Exit Questions 5/6/24, 8:34 AM Mystery Machine Study 128.30.64.44:8089/check 1/1 You have finished the study. Thank you for participating! Here is your completion code: C166E0FJ The correct answer was: 1. wug is undefined when inputs are odd 2. When wug is defined, it computes a∗x+b where a = 3 and b = 8 If you have questions about this research, please contact us at [email protected]. We will be in touch about your bonus. Mystery Machine Study 5/6/24, 8:34 AM Mystery Machine Study 128.30.64.44:8089/end 1/1 Figure 15: Post-chat end screens for the study with human participants. b-learner f-learner wug partial correctness AUC Function Results by Human Student Type Random GPT-4 ATOM Random GPT-4 ATOM * * Figure 16: Results showing how efficiently human students guessed the correct wug concept (§6.2), by student type. Stars indicate statistically significant results under a paired t-test: For f-learners, p < 0.05 for RANDOM v.s. ATOM and for RANDOM v.s. GPT-4. Error bars show 95% confidence intervals. Minutes of Teaching Interaction Minutes of Teaching Interaction Minutes of Teaching Interaction a Correctness of Human Guesses for Components of wug b f Correctness GPT-4 Random ATOM Figure 17: Correctness of humans’ guesses of wug, split by components and as a function of time spent interacting with the teacher. Error regions show 95% confidence intervals. 20 13302 Correctness of Human Guesses for Components of wug:
 AUCs of Individual Participants b correctness a correctness f correctness wug partial correctness GPT-4 ATOM Random GPT-4 ATOM Random GPT-4 ATOM Random GPT-4 ATOM Random Figure 18: Correctness of guesses of wug for individual human participants. Each dot shows the AUC of the curve of the metric’s correctness over time for an individual participant. Accuracy of Inferences about Human Students GPT-4 ATOM Minutes of Teaching Interaction Accuracy Figure 19: Accuracy of teaching methods’ predictions of student type with human learners (§6.3). Error regions show 95% confidence intervals. Random GPT-4 ATOM GPT-4 + ATOM Function Results with Human Students wug partial correctness AUC * * * Figure 20: Results with human students showing how efficiently students guessed the correct wug concept (§6.2). Stars indicate statistically significant results under a paired t-test. Error bars show 95% confidence intervals. 21 13303 You are GPT-teacher, an expert teacher. Your goal is to teach a student how to multiply and add fractions as efficiently as possible with helpful examples. You will be interacting with a student who has spent some time with fraction arithmetic but still has some misconceptions about how it works. The student you will be interacting with is a student who performs multiplication correctly, but tends to incorrectly add both numerators and denominators when adding fractions, especially when denominators are different. Please make sure to follow these instructions: - You are only allowed to give students example fraction problems, and ask them to guess the outputs. You may not explain any concepts to them directly, or ask any other questions. Anything other than example fraction problems and answers will be ignored by the student. - The student has not learned how to simplify fractions yet, so please do not simplify the fractions in your examples. Leave the answers in their unsimplified form. The student will also not simplify their answer. - Please only use fractions with positive numerators and denominators. - Do not teach arithmetic with mixed numbers or whole numbers. - Only teach fraction addition and multiplication. Please format input/output examples as: a/b+c/d=e/f for addition or a/b*c/d=e/f for multiplication. - Keep teaching with fraction problems and outputs until the student says they would like to stop, even if you think you have covered the full input range. For example, your interactions will look like the following, where capital words indicate placeholders for actual verb lemmas and categories: Your interactions will look like the following (where letters are placeholders for actual numbers): System: What is a/b+c/d? User: a/b+c/d=e/f System: That’s [correct/incorrect]. a/b+c/d=x/y. What is g/h+i/j? Please start by asking the student for their guess on a fraction example. Table 6: System prompt to GPT-4-KNOWN for the fraction task (known student type). Bolded words indicate variables that change between student types. 22 13304 You are GPT-teacher, an expert teacher. Your goal is to teach a student how to multiply and add fractions as efficiently as possible with helpful examples. You will be interacting with a student who has spent some time with fraction arithmetic but still has some misconceptions about how it works. 
There are 2 kinds of students: 1) Students who perform addition correctly, but tend to incorrectly multiply only numerators when multiplying fractions, especially when the denominators are equal; if the denominators are not equal, the student sometimes makes common denominators and then multiplies the numerators 2) Students who perform multiplication correctly, but tend to incorrectly add both numerators and denominators when adding fractions, especially when denominators are different You should try to figure out which kind of student you are interacting with and then teach them accordingly. Please make sure to follow these instructions: - You are only allowed to give students example fraction problems, and ask them to guess the outputs. You may not explain any concepts to them directly, or ask any other questions. Anything other than example fraction problems and answers will be ignored by the student. - The student has not learned how to simplify fractions yet, so please do not simplify the fractions in your examples. Leave the answers in their unsimplified form. The student will also not simplify their answer. - Please only use fractions with positive numerators and denominators. - Do not teach arithmetic with mixed numbers or whole numbers. - Only teach fraction addition and multiplication. Please format input/output examples as: a/b+c/d=e/f for addition or a/b*c/d=e/f for multiplication. - Keep teaching with fraction problems and outputs until the student says they would like to stop, even if you think you have covered the full input range. For example, your interactions will look like the following, where capital words indicate placeholders for actual verb lemmas and categories: Your interactions will look like the following (where letters are placeholders for actual numbers): System: What is a/b+c/d? User: a/b+c/d=e/f System: That’s [correct/incorrect]. a/b+c/d=x/y. What is g/h+i/j? Please start by asking the student for their guess on a fraction example. Table 7: System prompt to GPT-4 for the fraction task (unknown student type). Based on this interaction, which kind of student do you think I was at the start of this teaching session: 1) Students who perform addition correctly, but tend to incorrectly multiply only numerators when multiplying fractions, especially when the denominators are equal; if the denominators are not equal, the student sometimes makes common denominators and then multiplies the numerators 2) Students who perform multiplication correctly, but tend to incorrectly add both numerators and denominators when adding fractions, especially when denominators are different Please select (1) or (2). Table 8: System prompt to GPT-4 for inferring student type for the fraction task. 23 13305 You are GPT-teacher, an expert teacher. Your goal is to teach a student what a mystery machine called wug does. This machine takes in numbers and outputs numbers. However, it only works for some numbers and is undefined for others. Your goal is to teach the student on what inputs wug is undefined, and when it is defined, what it does. You should do so as efficiently as possible with helpful input/output examples, such as edge cases. The wug machine works as follows: wug(x) is undefined when x is greater than 2. When defined, wug(x) computes x+7. You’re going to be interacting with a student who is learning how wug works. The student knows that wug is sometimes undefined. The student also knows that when wug is defined, it computes something of the form a*x+b. 
In the real wug machine, a=1 and b=7. However, the student does not know this. The student only knows that a is a constant number between -5 and 5 (inclusive) and that b is a constant number between 1 and 9 (inclusive). The student knows that wug is undefined when the input is one of the following: - prime - positive - even - odd - divisible by n for n between 3 and 20 (inclusive) - greater than n for n between 1 and 20 (inclusive) Students have varying previous exposure to wug, and so they understand different parts of how wug works. The student you will be interacting with is a student who correctly thinks that b=7 but incorrectly thinks that wug is undefined when inputs are greater than 4. Please make sure to follow these instructions: - You are only allowed to give students example inputs, and ask them to guess outputs. You may not explain aspects of the concept to them directly, or ask any other questions. Anything other than inputs and outputs will be ignored by the student. - Please format input/output examples as: wug(INPUT)=ANSWER - wug only works for numbers between -20 to 20 (inclusive), so restrict the inputs you choose to that range. Any inputs outside of that range will be ignored by the student. - Keep teaching with inputs and outputs until the student says they would like to stop, even if you think you have covered the full input range. For example, your interactions will look like the following, where capital words indicate placeholders for actual numbers: Your interactions will look like the following: System: What is wug(INPUT)? User: wug(INPUT)=GUESS System: That’s [correct/incorrect]. wug(INPUT)=ANSWER. What is wug(NEW INPUT)? Please start by asking the student for their guess on an input. Table 9: System prompt to GPT-4-KNOWN for the function task (known student type). Bolded words indicate variables that change between student types and target concepts. 24 13306 You are GPT-teacher, an expert teacher. Your goal is to teach a student what a mystery machine called wug does. This machine takes in numbers and outputs numbers. However, it only works for some numbers and is undefined for others. Your goal is to teach the student on what inputs wug is undefined, and when it is defined, what it does. You should do so as efficiently as possible with helpful input/output examples, such as edge cases. The wug machine works as follows: wug(x) is undefined when x is greater than 2. When defined, wug(x) computes x+7. You’re going to be interacting with a student who is learning how wug works. The student knows that wug is sometimes undefined. The student also knows that when wug is defined, it computes something of the form a*x+b. In the real wug machine, a=1 and b=7. However, the student does not know this. The student only knows that a is a constant number between -5 and 5 (inclusive) and that b is a constant number between 1 and 9 (inclusive). The student knows that wug is undefined when the input is one of the following: - prime - positive - even - odd - divisible by n for n between 3 and 20 (inclusive) - greater than n for n between 1 and 20 (inclusive) Students have varying previous exposure to wug, and so they understand different parts of how wug works. 
There are two kinds of students: 1) Students who correctly think that b=7 but incorrectly think wug is undefined when inputs are greater than 4 2) Students who correctly think that wug is undefined when inputs are greater than 2 but incorrectly think that b=3 Please make sure to follow these instructions: - You are only allowed to give students example inputs, and ask them to guess outputs. You may not explain aspects of the concept to them directly, or ask any other questions. Anything other than inputs and outputs will be ignored by the student. - Please format input/output examples as: wug(INPUT)=ANSWER - wug is only defined for numbers between -20 to 20 (inclusive), so restrict the inputs you choose to that range. - Keep teaching with inputs and outputs until the student says they would like to stop, even if you think you have covered the full input range. For example, your interactions will look like the following, where capital words indicate placeholders for actual numbers: Your interactions will look like the following: System: What is wug(INPUT)? User: wug(INPUT)=GUESS System: That’s [correct/incorrect]. wug(INPUT)=ANSWER. What is wug(NEW INPUT)? Please start by asking the student for their guess on an input. Table 10: System prompt to GPT-4 for the function task (unknown student type). Bolded words indicate variables that change between student types and target concepts. Based on this interaction, which kind of student do you think I was at the start of this teaching session: 1) Students who correctly think that b=7 but incorrectly think wug is undefined when inputs are greater than 4 2) Students who correctly think that wug is undefined when inputs are greater than 2 but incorrectly think that b=3 Please select (1) or (2). Table 11: System prompt to GPT-4 for inferring student type for the function task. Bolded words indicate variables that change between student types and target concepts. 25 13307 You are GPT-teacher, an expert teacher. Your goal is to teach a student how to conjugate English past tense verbs as efficiently as possible with helpful examples. Specifically, your goal is to teach students about four categories of past tense verbs: - ‘+ed’: add ‘ed’ to the verb lemma - ‘+d’: add ‘d’ to the verb lemma - ‘y_to_ied’: if the verb lemma ends in a ‘y’, replace the ‘y’ with ‘ied’ - ‘+consonant+ed’: if the verb lemma ends in a consonant, double the last consonant and add ‘ed’ Different students have different confusion points, but each student has one verb category that they are the least familiar with. The student you will be interacting with is the least familiar with the ‘y_to_ied’ category. Please make sure to follow these instructions: - You are only allowed to give students example verb lemmas, and ask them to guess verb categories. You may not explain any concepts to them directly, or ask any other questions. Anything other than example verb lemmas and categories will be ignored by the student. - Please format input/output examples as: ‘LEMMA’ is a ‘CATEGORY’ verb - Keep teaching until the student says they would like to stop, even if you think they understand the verb categories. - You are only allowed to teach students about verbs in the four categories (‘+ed’, ‘+d’, ‘y_to_ied’, and ‘+consonant+ed’). Please do not give examples from other categories, like irregular verbs. 
For example, your interactions will look like the following, where capital words indicate placeholders for actual verb lemmas and categories: Your interactions will look like the following: System: What type of verb is ‘LEMMA’? User: ‘LEMMA’ is a ‘CATEGORY’ verb System: That’s [correct/incorrect]. ‘LEMMA’ is a ‘CATEGORY’ verb. What type of verb is ‘LEMMA’? Please start by asking the student for their guess on a lemma. Table 12: System prompt to GPT-4-KNOWN for the verb task (known student type). Bolded words indicate variables that change between student types. 26 13308 You are GPT-teacher, an expert teacher. Your goal is to teach a student how to conjugate English past tense verbs as efficiently as possible with helpful examples. Specifically, your goal is to teach students about four categories of past tense verbs: - ‘+ed’: add ‘ed’ to the verb lemma - ‘+d’: add ‘d’ to the verb lemma - ‘y_to_ied’: if the verb lemma ends in a ‘y’, replace the ‘y’ with ‘ied’ - ‘+consonant+ed’: if the verb lemma ends in a consonant, double the last consonant and add ‘ed’ Different students have different confusion points, but each student has one verb category that they are the least familiar with. While teaching the student, you should aim to infer what verb category they are the least familiar with in order to teach and correct their misconceptions most efficiently. Please make sure to follow these instructions: - You are only allowed to give students example verb lemmas, and ask them to guess verb categories. You may not explain any concepts to them directly, or ask any other questions. Anything other than example verb lemmas and categories will be ignored by the student. - Please format input/output examples as: ‘LEMMA’ is a ‘CATEGORY’ verb - Keep teaching until the student says they would like to stop, even if you think they understand the verb categories. - You are only allowed to teach students about verbs in the four categories (‘+ed’, ‘+d’, ‘y_to_ied’, and ‘+consonant+ed’). Please do not give examples from other categories, like irregular verbs. For example, your interactions will look like the following, where capital words indicate placeholders for actual verb lemmas and categories: Your interactions will look like the following: System: What type of verb is ‘LEMMA’? User: ‘LEMMA’ is a ‘CATEGORY’ verb System: That’s [correct/incorrect]. ‘LEMMA’ is a ‘CATEGORY’ verb. What type of verb is ‘LEMMA’? Please start by asking the student for their guess on a lemma. Table 13: System prompt to GPT-4 for the verb task (unknown student type). Based on this interaction, which verb category do you think I was the least familiar with at the start of this teaching session? Table 14: System prompt to GPT-4 for inferring student type for the verb task. 27 13309 assistant: What is wug(1)? user: wug(1)=9 assistant: That’s incorrect. wug(1)=8. What is wug(0)? user: wug(0)=7 assistant: That’s correct. wug(0)=7. What is wug(2)? user: wug(2)=9 assistant: That’s correct. wug(2)=9. What is wug(3)? user: wug(3)=10 assistant: That’s incorrect. wug(3)=undefined. What is wug(-1)? user: wug(-1)=6 assistant: That’s correct. wug(-1)=6. What is wug(-5)? user: wug(-5)=2 assistant: That’s correct. wug(-5)=2. Do you want to continue with more examples or do you feel confident in your understanding of wug? user: I would like to keep learning. Can I have another example? assistant: What is wug(-10)? user: wug(-10)=-3 assistant: That’s correct. wug(-10)=-3. What is wug(4)? user: wug(4)=11 assistant: That’s incorrect. wug(4)=undefined. 
What is wug(-15)? user: wug(-15)=-8 assistant: That’s correct. wug(-15)=-8. Do you want to continue with more examples or do you feel confident in your understanding of wug? user: I would like to keep learning. Can I have another example? assistant: What is wug(-20)? user: wug(-20)=-13 assistant: That’s correct. wug(-20)=-13. What is wug(5)? user: I would like to stop learning now. Based on this interaction, which kind of student do you think I was at the start of this teaching session: 1) Students who correctly think that b=7 but incorrectly think wug is undefined when inputs are greater than 4 2) Students who correctly think that wug is undefined when inputs are greater than 2 but incorrectly think that b=3 Please select (1) or (2). assistant: 1 Table 15: An example conversation transcript between GPT-4 and a simulated student for the function task. The system prompt for this conversation is shown in Table 10. 28 13310
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13311–13325 August 11-16, 2024 ©2024 Association for Computational Linguistics Bridging Word-Pair and Token-Level Metaphor Detection with Explainable Domain Mining Yuan Tian1,2, Ruike Zhang1,2, Nan Xu1,3, Wenji Mao1,2* 1State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences 2School of Artificial Intelligence, University of Chinese Academy of Sciences 3Beijing Wenge Technology Co., Ltd {tianyuan2021,zhangruike2020,xunan2015,wenji.mao}@ia.ac.cn Abstract Metaphor detection aims to identify whether a linguistic expression in text is metaphorical or literal. Most existing research tackles this problem either using word-pair or token-level information as input, and thus treats word-pair and token-level metaphor detection as distinct subtasks. Benefited from the simplified structure of word pairs, recent methods for wordpair metaphor detection can provide intermediate explainable clues for the detection results, which remains a challenging issue for tokenlevel metaphor detection. To mitigate this issue in token-level metaphor detection and take advantage of word pairs, in this paper, we make the first attempt to bridge word-pair and tokenlevel metaphor detection via modeling word pairs within a sentence as explainable intermediate information. As the central role of verb in metaphorical expressions, we focus on tokenlevel verb metaphor detection and propose a novel explainable Word Pair based Domain Mining (WPDM) method. Our work is inspired by conceptual metaphor theory (CMT). We first devise an approach for conceptual domain mining utilizing semantic role mapping and resources at cognitive, commonsense and lexical levels. We then leverage the inconsistency between source and target domains for core word pair modeling to facilitate the explainability. Experiments on four datasets verify the effectiveness of our method and demonstrate its capability to provide the core word pair and corresponding conceptual domains as explainable clues for metaphor detection. 1 Introduction Metaphor is not just a figurative expression but a pervasive phenomenon in human thought, perception, and reasoning (Lakoff and Johnson, 1980). As defined in Merriam-Webster Dictionary, metaphor is “a figure of speech in which a word/phrase literally denoting one kind of object or idea is used in *Corresponding author The public digests social media news with diverse views. Focus word Target domain: INFORMATION Source domain: FOOD Inconsistent Word pair Word pair Core word pair Figure 1: Illustration of token-level metaphor understanding with the core word pair and corresponding source and target domains. The focus word digests in the sentence is a token-level metaphor. In this sentence, there are several candidate context words that can form word pairs with the focus word. The core word pair digest news and its implicit source and target domains can help metaphor detection and explanation. place of another to suggest a likeness or analogy between them”. Metaphor detection, as a fundamental research task in natural language processing (NLP), focuses on distinguishing metaphorical expressions from literal expressions in text. 
It can benefit a variety of other NLP tasks, which require the understanding of implicit semantics, such as machine translation (Mao et al., 2018), sentiment analysis (Mao and Li, 2021), and conversational dialogue (Sun et al., 2023). Previous studies on metaphor detection mainly focus on two distinct subtasks, word-pair metaphor detection (Ge et al., 2022; Tian et al., 2023) and token-level metaphor detection (Choi et al., 2021; Li et al., 2023). The former considers word pair as the fundamental unit conveying metaphorical meaning and classifies word pairs into metaphorical or literal categories, while the latter identifies words within sentences that imply metaphorical meaning. For example, in Figure 1, digests news can be classified as a metaphorical word pair and the word digests within the sentence is a token-level metaphor conveying metaphorical meaning. Early research on word-pair metaphor detection employs machine learning or deep learning methods based on linguistic features related to metaphor 13311 (Tsvetkov et al., 2014; Shutova et al., 2016; Rei et al., 2017). Recent research is inspired by conceptual metaphor theory (CMT) (Lakoff and Johnson, 1980), which reveals the underlying process of metaphor understanding in human cognition and argues that a metaphor implies the inconsistency between source and target domains. These works (Ge et al., 2022; Tian et al., 2023) propose methods to model the intermediate explainable domain information for better word-pair metaphor detection. To detect token-level metaphors, early studies employ statistical or RNN-based methods (Li et al., 2013; Wu et al., 2018; Mao et al., 2019). Most recent studies (Choi et al., 2021; Zhang and Liu, 2022; Li et al., 2023) have developed methods based on the elements and relations described in metaphor identification theories, including selection preference violation (SPV) (Wilks, 1975, 1978) and metaphor identification procedure (MIP) (Pragglejaz Group, 2007). Benefited from the simplified structure of word pairs, Ge et al. (2022) can capture intermediate explainable clues including source and target domains to enhance word-pair metaphor detection. However, modeling such explainable information for better detection remains a challenging issue for token-level metaphor detection. As tokenlevel metaphor detection involves complex semantic structure and sentence-level context, it brings greater research challenges for both detection and explanation compared to word-pair metaphor detection. Moreover, token-level and word-pair metaphor detection are treated as distinct subtasks in existing research. The intermediate information and explainable clues that can be derived from word pairs have never been explored in previous token-level research, nor are they properly utilized in the token-level detection and explanation processes. For example, Figure 1 illustrates a tokenlevel metaphor, indicating that the core word pair and corresponding source and target domains can serve as valuable clues for understanding and detecting token-level metaphors. In this paper, we take the first step to bridge word-pair and token-level metaphor detection and propose an explainable Word Pair based Domain Mining (WPDM) method for token-level verb metaphor detection. Since in a token-level metaphorical sentence, there are several candidate context words that can form word pairs with the focus word, it is non-trivial to identify the core word pair that conveys the primary metaphorical meaning. 
Thus, inspired by CMT, we mine the domain concepts associated with each word pair utilizing semantic role correspondence and domain granularity assessment for core word pair selection. Specifically, we first devise an approach to mine source and target domain information via mapping semantic roles based on VerbNet (Schuler, 2005) and leveraging domain concepts from cognitive, commonsense and lexical knowledge resources. We then exploit the inconsistency between conceptual source and target domains to determine core word pair attentions for facilitating explainable metaphor detection. The main contributions of our work are as follows: • We make the first attempt to connect word-pair and token-level metaphor detection for more fine-grained identification and understanding of metaphors at the sentence level. • To facilitate explainable token-level metaphor detection, we propose a novel word pair based domain mining method inspired by CMT, which consists of semantic role mapping and conceptual domain mining for core word pair modeling based on cognitive, commonsense and lexical resources. • Extensive experiments on four datasets verify the effectiveness of our method and also demonstrate its capability to identify the core word pair and corresponding conceptual domains as explainable results for token-level metaphor detection. 2 Related Work Metaphor detection aims to distinguish metaphorical and literal expressions in text. Existing studies focus on two subtasks: word-pair metaphor detection and token-level metaphor detection. The former determines whether a word pair is metaphorical or literal, and the latter concentrates on identifying metaphorical words within sentences. Word-Pair Metaphor Detection Traditional research on word-pair metaphor detection utilizes machine learning techniques based on linguistic or external knowledge related to metaphor, such as abstractness (Turney et al., 2011), imageability (Tsvetkov et al., 2014), visibility (Shutova et al., 2016) or property norm (Bulat et al., 2017). After that, some studies exploit neural networks modeling the similarity between words in a word pair (Rei 13312 et al., 2017) or the concreteness constructed from images (Su et al., 2020a) to detect metaphors. In addition, Shutova et al. (2017) perform distributional clustering methods to identify metaphors. Recently, some researchers have utilized well-founded cognitive theory, conceptual metaphor theory (CMT) (Lakoff and Johnson, 1980), to benefit this task. Ge et al. (2022) design a method to generate explainable source and target domains to help identification of metaphors. Tian et al. (2023) further propose an attribute Siamese network to capture similar attributes between source and target concepts for metaphor detection. Token-Level Metaphor Detection Inspired by a theory, selectional preference violation (SPV) (Wilks, 1975, 1978), which indicates that a metaphor contains violations between its context and its frequent usage of contexts, early work (Shutova et al., 2010; Li et al., 2013; Mao et al., 2018) develops statistical methods to capture cooccurrence information between words and their contexts based on various corpora and knowledge bases for token-level metaphor detection. After that, some studies employ RNN-based models to extract contextual representations to detect token-level metaphors (Gao et al., 2018; Mao et al., 2019; Le et al., 2020). 
As these methods primarily rely on static word embeddings, they struggle to capture the intricate contextual meaning implied by metaphors. In contrast, pre-trained models (Devlin et al., 2019; Liu et al., 2019) provide dynamic contextualized word embeddings and have become popular backbones (Su et al., 2020b; Li et al., 2020) for current studies in metaphor detection. Among them, some studies employ multi-task frameworks to learn shared embeddings from other tasks related to metaphor (Mao and Li, 2021; Li et al., 2020; Zhang and Liu, 2023; Badathala et al., 2023). Other studies utilize external resources to enhance performance, such as multiword expressions (Rohanian et al., 2020) and word definitions (Su et al., 2021). In addition, some works employ data augmentation methods to expand limited metaphor datasets (Lin et al., 2021; Feng and Ma, 2022). Recently, some studies (Choi et al., 2021; Zhang and Liu, 2022; Wang et al., 2023; Li et al., 2023) have developed methods to model the elements and their relations in metaphor identification theories, including SPV and the metaphor identification procedure (MIP) (Pragglejaz Group, 2007), achieving promising results.

Computational Work for Both Metaphor Detection and Explanation Metaphor explanation aims to interpret the implicit meaning conveyed by metaphorical expressions, using paraphrased expressions (Shutova, 2010) or conceptual domains (Wachowiak and Gromann, 2023) as explanatory information. Previous works tackling both metaphor detection and explanation (Li et al., 2013; Mao et al., 2018) mainly employ sequential methods that first detect metaphors and then explain them based on contextual frequency clues guided by SPV. In fact, metaphor detection and explanation are inherently associated, and intermediate explainable information can benefit metaphor detection. To bridge the gap between metaphor detection and explanation, Ge et al. (2022) propose a model to generate source and target domains for better word-pair metaphor detection, guided by the more explainable theory CMT. However, their approach takes word pairs as inputs and relies on the simplified word-pair structure for domain mining. In contrast, token-level metaphor detection is a more general but challenging task involving complex sentence context. To mitigate this gap, we aim to advance token-level metaphor research by upgrading word-pair metaphor detection with explainable sentence-level conceptual domain modeling based on CMT.

3 Problem Definition

Formally, $D_{tr} = \{(s^k, w_v^k, b^k)\}_{k=1}^{N_{tr}}$ is the training dataset with $N_{tr}$ instances, where $s$ is a sentence, $w_v$ is a focus verb within $s$, and $b$ is the label (metaphorical or literal) for $w_v$. $D_{te} = \{(s^k, w_v^k, b^k)\}_{k=1}^{N_{te}}$ is the test dataset with $N_{te}$ instances. The goal of token-level verb metaphor detection is to predict the label of the focus verb $w_v$ in $D_{te}$ by training a model on $D_{tr}$.

4 Method

We propose an explainable Word Pair based Domain Mining (WPDM) method for token-level verb metaphor detection. Figure 2 illustrates the overview of our method, which contains four components: (1) Literal Example Sentences Construction, which collects example sentences for the literal meaning of the focus verb; (2) Semantic Role Mapping, which organizes words labeled with the same semantic role into a semantic group in the input sentence and the literal example sentences, respectively; (3) Conceptual Domain Mining, which
employs a domain mining algorithm on the words in each semantic group to mine the conceptual source and target domains based on cognitive, commonsense, and lexical resources; and (4) Token-Level Metaphor Detection with Word Pair Modeling, which evaluates the importance of the word pairs in the sentence based on the inconsistency between their conceptual source and target domains, and then gives more attention to the important core word pair during metaphor detection training.

Figure 2: Overview of our method for token-level metaphor detection. (Here hypernym is abbreviated as hyp.)

4.1 Literal Example Sentences Construction

To obtain contexts in which the focus verb $w_v$ is used with its literal meaning and to establish a solid foundation for the subsequent domain mining, we construct a literal example sentence set $S_e = \{s_e^k\}_{k=1}^{|S_e|}$ with $|S_e|$ sentences by gathering all the example sentences associated with the first sense of the focus verb $w_v$ in dictionaries. (We use the first sense of a word in a dictionary to capture its literal usage, because dictionaries typically list word senses chronologically, starting with the original meaning; see https://www.merriam-webster.com/help/explanatory-notes/dict-definitions and https://www.oxfordlearnersdictionaries.com/faq.)

4.2 Semantic Role Mapping

The semantic role of a verb, such as "agent" and "theme", describes the underlying relation between an argument (a phrase or word) and the verb in the sentence (Schuler, 2005). An argument labeled with a semantic role, which is closely related to the focus verb in semantics, can form a word pair with the focus verb to convey the implicit metaphorical meaning.
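To make this word-pair construction step concrete, the minimal sketch below extracts candidate arguments of a focus verb from a dependency parse. Note that the actual method uses the VerbNet Parser (with a Stanford dependency fallback, see Appendix A.1); spaCy and the small dependency-label-to-role mapping here are illustrative stand-ins, not the paper's pipeline.

```python
# Illustrative sketch only: the paper uses the VerbNet Parser (with a Stanford
# dependency fallback); spaCy and this label-to-role mapping are stand-ins.
import spacy

nlp = spacy.load("en_core_web_sm")

# Rough approximation of the Agent/Theme roles via dependency labels.
DEP_TO_ROLE = {"nsubj": "Agent", "dobj": "Theme", "obj": "Theme", "nsubjpass": "Theme"}

def candidate_word_pairs(sentence: str, focus_verb: str):
    """Return (role, argument, focus_verb) triples for words attached to the focus verb."""
    doc = nlp(sentence)
    pairs = []
    for token in doc:
        if token.head.lemma_ == focus_verb and token.dep_ in DEP_TO_ROLE:
            pairs.append((DEP_TO_ROLE[token.dep_], token.lemma_, focus_verb))
    return pairs

print(candidate_word_pairs("The public digests social media news with diverse views.", "digest"))
# e.g. [('Agent', 'public', 'digest'), ('Theme', 'news', 'digest')]
```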
To find these word pairs, we utilize the VerbNet Parser to annotate the arguments (e.g. "public" in Figure 2) and their semantic roles associated with the focus verb $w_v$ in both the input sentence $s_{in}$ and the literal example sentences $S_e$. We then select the roles that overlap between $s_{in}$ and $S_e$ to construct the semantic role set $R = \{r_k\}_{k=1}^{|R|}$. Arguments labeled with the same semantic role in $s_{in}$ form a semantic group. We thus obtain a collection of semantic groups $A_{in} = \{A_{in}^i\}_{i=1}^{|R|}$ for $s_{in}$, where the set of arguments labeled with the semantic role $r_i$ is denoted as $A_{in}^i = \{(a_{in}^i)_k\}_{k=1}^{|A_{in}^i|}$. Similarly, we obtain a collection of semantic groups $A_e = \{A_e^i\}_{i=1}^{|R|}$ for $S_e$, where the semantic group containing arguments labeled with $r_i$ is denoted as $A_e^i = \{(a_e^i)_k\}_{k=1}^{|A_e^i|}$. The semantic groups $A_{in}^i$ and $A_e^i$ are mapped to each other via the semantic role $r_i$ for the subsequent target and source domain mining.

4.3 Conceptual Domain Mining

Humans can formulate the taxonomy of domains from various perspectives, such as cognitive processes that provide the underlying framework for how humans think and categorize (Lakoff and Johnson, 1980), commonsense knowledge that offers categories of general facts and relationships (Speer et al., 2017), and lexical resources that reflect the linguistic aspect of concept organization (Gelman et al., 1989). To mine the conceptual target and source domains for the aforementioned mapped semantic groups, we develop a conceptual domain mining approach based on knowledge at the cognitive, commonsense, and lexical levels.

Domain Concept List We first construct a domain concept list that represents a taxonomy of domains grounded in human cognitive processes and commonsense knowledge, using two resources: the Master Metaphor List (MML) (Lakoff et al., 1991), constructed by cognitive linguists, which contains paired conceptual source and target domains of metaphorical understanding in cognitive processes, and OpenCyc (Lenat et al., 1985), a knowledge base consisting of large-scale commonsense concepts and relations. The combination of the domain concepts from these resources constitutes the domain concept list, which is shown in Appendix A.

Domain Mining Algorithm WordNet (Miller, 1995), a large lexical semantic database, organizes words into hierarchical structures through synsets and conceptual relations (hypernym and hyponym). Figure 3 shows the hierarchical structure of synsets in WordNet.

Figure 3: Illustration of the hierarchical structure of synsets in WordNet. A hypernym of a synset represents a broader or more general semantic field than the synset itself, while a hyponym of a synset conveys a more specific meaning. For example, "entity" is a hypernym of "physical entity" and "organism" is a hyponym of "living thing".

Given the richness and transitivity of hypernym relations, WordNet is a valuable resource for identifying candidate conceptual domains. For a semantic group $A = \{a_i\}_{i=1}^{|A|}$, we regard the hypernyms along the hypernym path from $a_i$ to the root node in WordNet as the candidate conceptual domains of $a_i$.
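As a concrete illustration of this lookup, the sketch below walks an argument's hypernym path with NLTK's WordNet interface and returns the first (least abstract) hypernym found in a domain concept list. The tiny DOMAIN_CONCEPTS set is a toy placeholder for the full list in Appendix A, and taking the first noun synset is a simplification of word sense handling.

```python
# Sketch of candidate-domain lookup via WordNet hypernym paths (NLTK).
# DOMAIN_CONCEPTS is a toy stand-in for the full domain concept list of Appendix A,
# and taking the first noun synset is a simplification of word sense handling.
from nltk.corpus import wordnet as wn

DOMAIN_CONCEPTS = {"food", "living_thing", "person", "animal", "information", "people"}

def candidate_domain(argument: str):
    synsets = wn.synsets(argument, pos=wn.NOUN)
    if not synsets:
        return None
    synset = synsets[0]
    # hypernym_paths() lists synsets from the root ("entity") down to the word itself;
    # walking it bottom-up prefers the least abstract matching concept.
    for path in synset.hypernym_paths():
        for hyper in reversed(path):
            name = hyper.name().split(".")[0]  # e.g. "food.n.02" -> "food"
            if name in DOMAIN_CONCEPTS:
                # return the concept and its abstraction level relative to the argument
                return name, (len(path) - 1) - path.index(hyper)
    return None

for word in ["meal", "meat", "milk", "news"]:
    print(word, candidate_domain(word))
```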
Our goal is to construct a domain set $D = \{d_j\}_{j=1}^{|D|}$ and divide $A$ into $|D|$ subsets $\hat{A} = \{\hat{A}_j\}_{j=1}^{|D|}$, where every argument in the $j$-th subset $\hat{A}_j$ belongs to the $j$-th domain $d_j$. To assess how abstract a domain (hypernym) is relative to an argument, we introduce the abstraction level $l$, calculated as the relative distance between them along the hypernym path (e.g. the abstraction level of the domain food relative to the argument meal is 2 in Figure 3). $(\theta_{abs})_j$ denotes the average abstraction level of $d_j$ for the arguments in $\hat{A}_j$:

$(\theta_{abs})_j = \frac{1}{|\hat{A}_j|}\sum_{i=1}^{|\hat{A}_j|} l_j^i, \quad (1)$

where $l_j^i$ is the abstraction level of $d_j$ relative to the $i$-th argument in $\hat{A}_j$. The domains in $D$ should strike a balance by capturing the broad meanings shared among the arguments in $A$ without being overly abstract. For example, in Figure 3, food and living thing are more appropriate domains for the arguments (i.e. plant, meal, meat, and milk) than the excessively abstract domain physical entity. To evaluate how well $D$ maintains this balance, we design the domain granularity metric $\theta_{dg}$:

$\theta_{dg} = \frac{1}{|D|}\sum_{k=1}^{|D|} \frac{|\hat{A}_k|}{(\theta_{abs})_k}. \quad (2)$

We develop a conceptual domain mining algorithm to enumerate the possible collections of subsets $\hat{A}$ for the arguments in $A$ and their related conceptual domains $D$ based on the domain concept list and WordNet, and we take the collection with the highest domain granularity metric $\theta_{dg}$ as the final result. Algorithm 1 shows the pseudocode of this algorithm.

Algorithm 1 Conceptual Domain Mining
Input: $A = \{a_i\}_{i=1}^{|A|}$: a semantic group with $|A|$ arguments.
Require: (1) $L_d$: the domain concept list; (2) $\mathrm{Comb}(A, k)$: a function that generates the combinations of $k$ non-overlapping groups from the elements of $A$; (3) WordNet.
Output: $D = \{d_j\}_{j=1}^{|D|}$: the conceptual domain set.
1: for all $i \leftarrow 1$ to $|A|$ do
2:   for all $\hat{A} = [\hat{A}_1, \ldots, \hat{A}_i] \in \mathrm{Comb}(A, i)$ do
3:     for all $\hat{A}_j \in \hat{A}$ do
4:       if $\hat{A}_j$ contains only one argument $\hat{a}$ then
5:         Traverse $h$ in the hypernym path $P$ of $\hat{a}$
6:         if $\exists\,(h \in P$ and $h \in L_d)$ then
7:           Obtain $h$ as the domain for $\hat{A}_j$
8:         else
9:           Obtain $P[0]$ as the domain for $\hat{A}_j$
10:      else
11:        Obtain the least common ancestor hypernym of all arguments in $\hat{A}_j$ as the domain for $\hat{A}_j$
12:      Compute $(\theta_{abs})_j$ ▷ Eq. (1)
13:    Compute $\theta_{dg}$ ▷ Eq. (2)
14: Obtain the domain set with the maximum $\theta_{dg}$ as $D$

We apply Algorithm 1 to every $A_{in}^i \in A_{in}$ and $A_e^i \in A_e$ to obtain the conceptual target domain set $D_{in}^i$ for $A_{in}^i$ and the conceptual source domain set $D_e^i$ for $A_e^i$, respectively, where $i \in [1, |R|]$ is the index of the semantic role.

4.4 Token-Level Metaphor Detection with Word Pair Modeling

According to conceptual metaphor theory (Lakoff and Johnson, 1980), a metaphor implies an inconsistency between source and target domains. Inspired by this, we utilize the inconsistency scores between the above conceptual source and target domains to mine the core word pairs that convey the primary metaphorical meaning within the input sentence, and we give more attention to the core word pairs during training for explainable metaphor detection.

Candidate Word Pair Weighing In the input sentence, each word labeled with a semantic role, together with the focus verb, constitutes a candidate word pair. For the $k$-th such word $w_p^k$ labeled with the semantic role $r_k$, we have mined its conceptual target domain $d_t^k$ and its set of conceptual source domains $D_s^k$, derived from the words labeled with the same semantic role $r_k$ in the literal example sentences, using our domain mining algorithm. According to CMT (Lakoff and Johnson, 1980), a metaphor implies an inconsistency between source and target domains. Thus, a candidate word pair showing high inconsistency between its target domain and all of its source domains is likely to be the core word pair conveying the primary metaphorical meaning. We assign an attention to each candidate word pair based on the minimum inconsistency score between its target domain and all of its source domains. Specifically, the inconsistency scores $\beta_k$ between the target domain and all source domains for the $k$-th candidate word pair, and the attentions $\alpha$ for all candidate word pairs, are calculated by

$\beta_k = (\mathrm{Sim}(d_t^k, D_s^k))^{-1} \in \mathbb{R}^{|D_s^k|}, \quad (3)$
$\hat{\beta}_k = \mathrm{Min}(\beta_k), \quad (4)$
$\alpha = \mathrm{Softmax}([\hat{\beta}_1, \ldots, \hat{\beta}_{n_p}]) \in \mathbb{R}^{n_p}, \quad (5)$

where $\mathrm{Sim}(\cdot)$ calculates the path distance similarity in WordNet between $d_t^k$ and every domain in $D_s^k$, $(\cdot)^{-1}$ denotes element-wise inversion, $\mathrm{Min}(\cdot)$ returns the minimum score in a vector, $\mathrm{Softmax}(\cdot)$ is the softmax function, and $n_p$ is the number of candidate word pairs.
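A minimal numeric sketch of Eqs. (3)–(5): inconsistency is taken as the inverse WordNet path similarity between the mined target domain and each mined source domain, the minimum per candidate pair is kept, and a softmax over pairs yields the attentions. The example domain synsets below are illustrative choices, not outputs of the full mining algorithm.

```python
# Sketch of candidate word pair weighing (Eqs. 3-5): inverse path similarity
# as the inconsistency score, then a softmax over candidate word pairs.
# The target/source domain synsets below are illustrative, not mined outputs.
import numpy as np
from nltk.corpus import wordnet as wn

def pair_attentions(domain_pairs):
    """domain_pairs: list of (target_synset, [source_synsets]) per candidate word pair."""
    min_inconsistency = []
    for target, sources in domain_pairs:
        # Eq. (3): element-wise inverse of the path similarities (guard against None)
        scores = [1.0 / max(target.path_similarity(s) or 1e-6, 1e-6) for s in sources]
        min_inconsistency.append(min(scores))          # Eq. (4)
    beta = np.array(min_inconsistency)
    beta = beta - beta.max()                           # numerical stability
    return np.exp(beta) / np.exp(beta).sum()           # Eq. (5)

pairs = [
    (wn.synset("information.n.01"), [wn.synset("food.n.01")]),     # e.g. (digests, news)
    (wn.synset("people.n.01"), [wn.synset("living_thing.n.01")]),  # e.g. (public, digests)
]
print(pair_attentions(pairs))  # the more inconsistent pair receives the larger attention
```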
Encoding To train our model from a good starting text embedding, we use the pre-trained model RoBERTa (Liu et al., 2019) as the text encoder. Given the input sentence $s_{in}$ and the candidate word pairs $P = \{p_k\}_{k=1}^{n_p}$, the input $T$ is constructed as

$T = \texttt{<s>}\; s_{in}\; \texttt{</s>}\; p_1\; \texttt{</s>}\; p_2\; \ldots\; \texttt{</s>}\; p_{n_p}, \quad (6)$

where <s> and </s> are the global and separation tokens, respectively. We split $T$ into tokens and feed them into RoBERTa to obtain the contextual embeddings $H \in \mathbb{R}^{N \times d}$ and the contextual embedding of the global token $h_{\texttt{<s>}}$:

$H = \mathrm{RoBERTa}(T) = [h_1, \ldots, h_N]^{\top}, \quad (7)$
$h_{\texttt{<s>}} = h_1 \in \mathbb{R}^d, \quad (8)$

where $d$ is the embedding dimension, $N$ is the number of tokens in $T$, and $h_i$ is the embedding of the $i$-th token in $T$. We also obtain the embedding $h_v$ of the focus verb in the input sentence, as well as the embeddings of the focus verb $h_v^k$ and of the other target word $h_w^k$ in the $k$-th candidate word pair. If a word is split into multiple tokens, its embedding is calculated by averaging its token embeddings.

Word Pair Layer The embedding of the $k$-th candidate word pair is computed as

$\hat{h}_{wp}^k = \mathrm{CrossAtt}(h_v^k, h_w^k) \in \mathbb{R}^d, \quad (9)$

where $\mathrm{CrossAtt}(\cdot)$ denotes $l_t$ Transformer blocks (Vaswani et al., 2017). In this module, $h_v^k$ serves as the query vector and $h_w^k$ serves as both the key and value vectors. The aggregated word pair embedding $h_{wp} \in \mathbb{R}^d$ is computed as

$h_{wp} = \sum_{k=1}^{n_p} \hat{h}_{wp}^k \alpha_k, \quad (10)$

where $\alpha_k$ is the $k$-th element of $\alpha$.

Classification Finally, we feed the global embedding $h_{\texttt{<s>}}$, the verb embedding $h_v$, and the word pair embedding $h_{wp}$ into a classifier, and adopt a cross-entropy loss function to compute the loss $L_{class}$:

$\hat{l} = \mathrm{Softmax}(\mathrm{MLP}(h_{\texttt{<s>}} \oplus h_v \oplus h_{wp})), \quad (11)$
$L_{class} = -\,l^{\top} \log \hat{l}, \quad (12)$

where $\oplus$ is the concatenation operation, $\mathrm{MLP}(\cdot)$ is a multilayer perceptron with hidden dimension $d'$, $\hat{l} \in \mathbb{R}^2$ is the predicted probability over the labels, and $l \in \mathbb{R}^2$ is the ground truth.
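For readers who prefer code, the following PyTorch sketch mirrors the encoder, word pair layer, and classifier of Eqs. (6)–(12) in simplified form. The hyper-parameters, the token-to-word pooling, and the use of a single multi-head attention module in place of the $l_t$ Transformer blocks are assumptions for illustration, not a faithful reimplementation of the released code.

```python
# Simplified PyTorch sketch of the model in Section 4.4 (Eqs. 6-12); dimensions,
# pooling details, and the single-block cross-attention are simplifying assumptions.
import torch
import torch.nn as nn
from transformers import RobertaModel

class WordPairMetaphorDetector(nn.Module):
    def __init__(self, d=768, d_hidden=2304, n_heads=8):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained("roberta-base")
        # stands in for CrossAtt (the paper uses l_t Transformer blocks)
        self.cross_att = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.classifier = nn.Sequential(nn.Linear(3 * d, d_hidden), nn.ReLU(), nn.Linear(d_hidden, 2))

    def forward(self, input_ids, attention_mask, verb_token_idxs, pair_token_idxs, alpha):
        # H = RoBERTa(T); h_<s> is the embedding of the first (global) token (Eqs. 7-8).
        H = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        h_s = H[:, 0]
        h_v = H[:, verb_token_idxs].mean(dim=1)          # focus-verb embedding (token average)
        pair_embs = []
        for v_idx, w_idx in pair_token_idxs:             # one (verb, word) token index per candidate pair
            q = H[:, v_idx:v_idx + 1]                    # verb embedding as the query
            kv = H[:, w_idx:w_idx + 1]                   # paired word as key and value
            h_wp_k, _ = self.cross_att(q, kv, kv)        # Eq. (9)
            pair_embs.append(h_wp_k.squeeze(1))
        # Eq. (10): attention-weighted sum with the domain-inconsistency attentions alpha
        h_wp = (torch.stack(pair_embs, dim=1) * alpha.unsqueeze(-1)).sum(dim=1)
        logits = self.classifier(torch.cat([h_s, h_v, h_wp], dim=-1))  # Eq. (11)
        return logits   # train with nn.CrossEntropyLoss, which realizes Eq. (12)
```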
5 Experiments

5.1 Datasets

We constructed a new verb metaphor detection dataset, LCC-Verb, based on LCC (Mohler et al., 2016), one of the representative benchmark datasets for metaphor detection. The LCC-Verb construction process is described in Appendix B. We also conducted experiments on three publicly available verb metaphor detection datasets: (1) MOH-X (Mohammad et al., 2016), containing 647 sentences in each of which a single verb is labeled as metaphorical or literal; (2) TroFi (Birke and Sarkar, 2006), another verb metaphor detection dataset collected from the Wall Street Journal Corpus; and (3) VUA-Verb (Leong et al., 2020), the largest publicly available dataset for verb metaphor detection, drawn from the VU Amsterdam Metaphor Corpus (VUA) (Steen et al., 2010). Table 1 shows the statistics of these datasets.

Dataset | #Sent. | #Verb | %Met. | Avg. Len
MOH-X | 647 | 647 | 48.69 | 8.0
LCC-Verb | 2,009 | 2,009 | 42.61 | 29.1
TroFi | 3,737 | 3,737 | 43.54 | 28.3
VUA-Verb (train) | 7,479 | 15,516 | 27.90 | 20.2
VUA-Verb (validation) | 1,541 | 1,724 | 26.91 | 25.0
VUA-Verb (test) | 2,694 | 5,873 | 29.98 | 18.6
Table 1: Statistics of datasets. #Sent. denotes the number of sentences, #Verb the number of verbs to be detected, %Met. the percentage of metaphor samples, and Avg. Len the average sentence length.

5.2 Baselines

We compare our method with several representative methods for token-level metaphor detection: (1) RNN_ELMo (Gao et al., 2018), a BiLSTM-based method using ELMo embeddings to model metaphorical words in context; (2) RNN_HG and RNN_MHCA (Mao et al., 2019), which make the first attempt to apply linguistic theories (MIP and SPV) to BiLSTM-based network design for metaphor detection; (3) MUL_GCN (Le et al., 2020), which exploits a multi-task learning framework to transfer knowledge from word sense disambiguation to metaphor detection; (4) MrBERT (Song et al., 2021), which casts metaphor detection as a relation classification task by extracting dependency relations and utilizing them for detection; (5) MisNet (Zhang and Liu, 2022), a linguistics-enhanced network inspired by MIP and SPV; (6) AdMul (Zhang and Liu, 2023), a multi-task learning framework that transfers knowledge from basic sense discrimination to metaphor detection via adversarial training; and (7) BasicBERT (Li et al., 2023), which mines the concise basic meaning of a word from the literal annotations in the training set, inspired by MIP.

5.3 Implementation Details

We use three English dictionaries to extract literal example sentences: the Longman Dictionary of Contemporary English (https://www.ldoceonline.com/), the Oxford Advanced Learner's Dictionary (https://www.oxfordlearnersdictionaries.com/), and the Collins English Dictionary (https://www.collinsdictionary.com/dictionary/english). We use the F1 score and accuracy for evaluation. Following the convention of previous studies (Zhang and Liu, 2022, 2023), we take the model that achieves the best F1 score on the validation set and evaluate it on the VUA-Verb test set, and we report the average of the best F1 scores and the corresponding accuracy scores over 10 folds for the MOH-X, TroFi, and LCC-Verb datasets. We use RoBERTa-base (Liu et al., 2019) as the encoder. Since MUL_GCN (Le et al., 2020) did not release its code, we reproduced the method according to the paper and tested it on our four datasets. For a fair comparison, we replaced ELMo in MUL_GCN, BERT-base in MrBERT, and DeBERTa-base in AdMul with RoBERTa-base in our experiments. All experiments were run on NVIDIA RTX 3090 GPUs. Other details are given in Appendix C. Our code is available at https://github.com/TIAN-viola/WPDM.
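As a small, self-contained illustration of the metrics and model-selection criterion described above, the snippet below computes F1 and accuracy with scikit-learn and keeps the epoch with the best validation F1. The labels and per-epoch predictions are toy values standing in for model outputs.

```python
# Minimal illustration of the reported metrics and the best-F1 selection criterion.
# Gold labels and per-epoch predictions are toy values (1 = metaphorical, 0 = literal).
from sklearn.metrics import accuracy_score, f1_score

dev_gold = [1, 0, 1, 1, 0, 0, 1, 0]
dev_preds_per_epoch = [            # in practice, one prediction list per training epoch
    [1, 0, 0, 0, 0, 1, 1, 0],
    [1, 0, 1, 0, 0, 1, 1, 0],
    [1, 0, 1, 1, 0, 0, 1, 1],
]
best_epoch = max(range(len(dev_preds_per_epoch)),
                 key=lambda e: f1_score(dev_gold, dev_preds_per_epoch[e]))
best_pred = dev_preds_per_epoch[best_epoch]
print(best_epoch, f1_score(dev_gold, best_pred), accuracy_score(dev_gold, best_pred))
```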
5.4 Main Results

From the experimental results shown in Table 2, we can see that our proposed method outperforms the previous state-of-the-art performance across most datasets, which verifies the effectiveness of our core word pair modeling based on conceptual domain mining for token-level metaphor detection. Methods utilizing pre-trained models, which incorporate extensive world knowledge through unsupervised learning, perform better than earlier RNN-based models. Our proposed method achieves performance gains across most datasets compared with previous methods inspired by SPV and MIP, showing the advantage of the CMT theory utilized in our method over SPV and MIP.

Method | MOH-X (F1 / Acc) | LCC-Verb (F1 / Acc) | TroFi (F1 / Acc) | VUA-Verb (F1 / Acc) | Average (F1 / Acc)
RNN_ELMo (Gao et al., 2018) | 75.6 / 77.2 | 73.9 / 79.3 | 71.1 / 74.6 | 69.7 / 81.4 | 72.6 / 78.1
RNN_HG (Mao et al., 2019) | 79.7 / 79.7 | 71.7 / 79.3 | 72.2 / 74.9 | 70.8 / 82.1 | 73.6 / 79.0
RNN_MHCA (Mao et al., 2019) | 80.0 / 79.8 | 76.7 / 80.9 | 72.4 / 75.2 | 70.5 / 81.8 | 74.9 / 79.4
MUL_GCN† (Le et al., 2020) | 78.7 / 78.0 | 77.3 / 79.2 | 71.6 / 74.7 | 69.6 / 81.8 | 74.3 / 78.4
MrBERT† (Song et al., 2021) | 82.9 / 83.5 | 81.5 / 85.6 | 72.9 / 76.0 | 74.4 / 84.9 | 77.9 / 82.5
MisNet† (Zhang and Liu, 2022) | 83.4 / 83.6 | 80.6 / 82.3 | 71.9 / 73.6 | 75.9 / 86.0 | 78.0 / 81.4
AdMul† (Zhang and Liu, 2023) | 81.2 / 81.3 | 82.8 / 85.0 | 72.5 / 74.2 | 74.6 / 85.2 | 77.8 / 81.4
BasicBERT† (Li et al., 2023) | 79.9 / 79.2 | 76.5 / 79.6 | 69.4 / 69.5 | 72.8 / 83.0 | 74.7 / 77.8
Our WPDM† | 83.8* / 84.2* | 84.9** / 87.0** | 73.4* / 76.3 | 74.1 / 84.4 | 79.0* / 83.0*
Table 2: Comparison between our method and baselines. The best results are in bold font and the second-best results are underlined. † indicates that these methods use the same RoBERTa-base encoder. Average denotes the average results over the four datasets. The symbols * and ** indicate that our method's results are statistically significantly different from those of the second-best methods, with p ≤ 0.05 and p ≤ 0.01, respectively.

5.5 Ablation Study

We conduct an ablation study to evaluate the impact of the components in our method. Table 3 shows the experimental results. Using average attentions to replace the word pair attentions in candidate word pair weighing reduces the performance, verifying that candidate word pair weighing based on the inconsistency between source and target domains helps our model focus on the core word pair and benefits the explainable detection of metaphors. We further remove the word pair embedding from our method entirely, resulting in greater performance drops across all datasets than removing only candidate word pair weighing, which verifies the effectiveness of the word pair information in our method. When the global embedding is removed, the performance decreases, demonstrating the importance of global context information for metaphor detection. In addition, since the verb carries the central information for verb metaphor detection, the removal of the verb embedding leads to the most significant performance declines across all datasets.

Method | MOH-X (F1 / Acc) | LCC-Verb (F1 / Acc) | TroFi (F1 / Acc) | VUA-Verb (F1 / Acc)
Our WPDM | 83.8 / 84.2 | 84.9 / 87.0 | 73.4 / 76.3 | 74.1 / 84.4
w/o Candidate word pair weighing | 83.2 / 83.5 | 84.6 / 86.4 | 72.8 / 75.3 | 73.1 / 83.7
w/o Word-pair embedding ($h_{wp}$) | 83.0 / 83.1 | 84.0 / 86.3 | 72.5 / 75.1 | 72.3 / 83.6
w/o Global embedding ($h_{\texttt{<s>}}$) | 82.0 / 82.1 | 84.2 / 86.4 | 73.3 / 75.4 | 73.0 / 83.7
w/o Verb embedding ($h_v$) | 80.8 / 81.8 | 80.9 / 83.6 | 67.8 / 71.7 | 65.6 / 80.8
Table 3: Results of ablation study. The best results are in bold font.

Figure 4: Evaluation results for core word pair selection (accuracy, %, of Random Words, Random Words w/ Semantic Roles, and our WPDM on MOH-X, LCC-Verb, TroFi, and VUA-Verb).

Dataset | Target Domain | Source Domain
MOH-X | 90.0 | 80.5
LCC-Verb | 89.0 | 76.0
TroFi | 82.5 | 82.0
VUA-Verb | 83.5 | 82.5
Table 4: Results of human evaluation on domain mining.
Input Sentence (focus verb) | Core Word Pair | Source Domain | Target Domain
I salute your courage! | salute courage | GENERAL_OFFICER | SPIRIT
He always wears a smile. | wears smile | CHROMATIC_COLOR | COMMUNICATION
She drowned her trouble in alcohol. | drowned trouble | PERSON | DIFFICULTY
The rules relaxed after the new director arrived. | rule relaxed | PERSON | IDEA
The government bowed to the military pressure. | bowed pressure | SOVEREIGN | PHYSICAL_PHENOMENON
Table 5: Case study of explainable results obtained by our method on the MOH-X dataset. The paired source and target domains presented here are the paired domains that yield the highest inconsistency score among all candidate source and target domain pairs in conceptual domain mining. The target word in the core word pair and its corresponding target domain are in bold font.

5.6 Human Evaluation on Explainable Results

Our method can not only identify metaphorical verbs but also provide the core word pair, along with the conceptual source and target domains, as explainable results. To evaluate these explainable results, we randomly sampled 100 metaphorical instances each from MOH-X, LCC-Verb, TroFi, and the VUA-Verb test set. We then invited two annotators to evaluate, in terms of accuracy, the core word pair as well as the source and target domains identified by our method. The core word pair for each focus verb is the candidate word pair in the sentence with the highest word pair attention, and the paired domains used to calculate this attention are considered the source and target domains.

Core Word Pair Selection For the evaluation of core word pair selection, we compare our method with two baselines. The baseline Random Words randomly selects a word in each sentence to form a core word pair with the focus verb. The other baseline, Random Words w/ Semantic Roles, randomly selects a word from the words labeled with semantic roles related to the focus verb in the sentence to form a core word pair with the focus verb. From the results shown in Figure 4, we can see that our method outperforms the baselines across all datasets, verifying that our method can provide explainable intermediate clues in the form of the core word pair during metaphor detection, a capability lacking in previous methods.

Conceptual Domain Mining From the human evaluation results in Table 4, we can see that our method performs well on both target and source domain mining, with most scores exceeding 80%, verifying the effectiveness of our conceptual domain mining algorithm. Comparing target and source domain mining, our method achieves better performance on mining target domains than source domains. This is because information related to source domains is often absent from the context of the focus verb, making source domains more challenging to identify, whereas information related to target domains is typically explicitly present in the context of the focus verb. Despite the difficulty of mining source domains, our method still achieves good performance across most datasets.

5.7 Case Study

Table 5 shows several cases of explainable results mined by our method on the MOH-X dataset. For example, the word drown typically holds the semantic relation of theme with nouns in the domain PERSON. However, in the third sentence of Table 5, its theme is trouble, which belongs to the target domain DIFFICULTY. The core word pair drowned trouble and the inconsistency between the source and target domains indicate a metaphorical use of the verb drown in this sentence.
These cases further show the effectiveness of our method for explainable metaphor detection. 6 Conclusion In this paper, we propose an explainable word pair based domain mining method for tokenlevel metaphor detection. Inspired by conceptual metaphor theory, our method leverages semantic role mapping and conceptual source and target domain mining for core word pair modeling to facilitate explainable metaphor detection. Experimental results not only verify the effectiveness of our method for metaphor detection but also demonstrate its capability to determine the core word pair and conceptual domain information as explainable clues for the prediction of metaphors. 13319 Limitations Our work has some limitations. Currently, our method only focuses on detecting verb metaphors and providing explainable core word pair with corresponding source and target domains. Our future work needs to further explore other types of metaphors, such as noun metaphors and adjective metaphors. In addition, since our method utilizes external resources, our method may perform less effectively if verbs or context words cannot be found in these resources. Also, our method might exhibit performance decrease due to the fact that the external resources may be of lower quality in other languages. Ethics Statement As shown in previous studies (Navigli et al., 2023; Bartl et al., 2020), pre-trained language models exhibit biases in several respects, such as gender, ethnicity and race. Our method is fine-tuned on the pre-trained language model RoBERTabase and thus may inherit these biases. Acknowledgments This work is supported in part by the National Natural Science Foundation of China under Grants #72293575 and #62206287. We thank the anonymous reviewers for the valuable comments. References Naveen Badathala, Abisek Rajakumar Kalarani, Tejpalsingh Siledar, and Pushpak Bhattacharyya. 2023. A match made in heaven: A multi-task framework for hyperbole and metaphor detection. In Findings of the Association for Computational Linguistics: ACL 2023, pages 388–401. Marion Bartl, Malvina Nissim, and Albert Gatt. 2020. Unmasking contextual stereotypes: Measuring and mitigating BERT’s gender bias. In Proceedings of the Workshop on Gender Bias in Natural Language Processing, pages 1–16. Julia Birke and Anoop Sarkar. 2006. A clustering approach for nearly unsupervised recognition of nonliteral language. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics, pages 329–336. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. In Proceedings of the Annual Conference on Neural Information Processing Systems, pages 1877–1901. Luana Bulat, Stephen Clark, and Ekaterina Shutova. 2017. Modelling metaphor with attribute-based semantics. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics, pages 523–528. Minjin Choi, Sunkyung Lee, Eunseong Choi, Heesoo Park, Junhyuk Lee, Dongwon Lee, and Jongwuk Lee. 2021. MelBERT: Metaphor detection via contextualized late interaction using metaphorical identification theories. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics, pages 1763–1773. Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and psychological measurement, 20(1):37–46. 
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics, pages 4171–4186. Huawen Feng and Qianli Ma. 2022. It’s better to teach fishing than giving a fish: An auto-augmented structure-aware generative model for metaphor detection. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 656–667. Ge Gao, Eunsol Choi, Yejin Choi, and Luke Zettlemoyer. 2018. Neural metaphor detection in context. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 607–613. Mengshi Ge, Rui Mao, and Erik Cambria. 2022. Explainable metaphor identification inspired by conceptual metaphor theory. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 10681– 10689. Susan A. Gelman, Sharon A. Wilcox, and Eve V. Clark. 1989. Conceptual and lexical hierarchies in young children. Cognitive Development, 4(4):309–326. George Lakoff, Jane Espenson, and Alan Schwartz. 1991. The master metaphor list. Technical report, University of California at Berkeley. George Lakoff and Mark Johnson. 1980. Metaphors we live by. University of Chicago Press. Duong Le, My Thai, and Thien Nguyen. 2020. Multitask learning for metaphor detection with graph convolutional neural networks and word sense disambiguation. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 8139–8146. Doug Lenat, Mayank Prakash, and Mary Shepherd. 1985. CYC: Using common sense knowledge to overcome brittleness and knowledge acquisition bottlenecks. AI magazine, 6(4):65–85. 13320 Chee Wee Leong, Beata Beigman Klebanov, Chris Hamill, Egon Stemle, Rutuja Ubale, and Xianyang Chen. 2020. A report on the 2020 VUA and TOEFL metaphor detection shared task. In Proceedings of the Workshop on Figurative Language Processing, pages 18–29. Hongsong Li, Kenny Q Zhu, and Haixun Wang. 2013. Data-driven metaphor recognition and explanation. Transactions of the Association for Computational Linguistics, 1:379–390. Shuqun Li, Jingjie Zeng, Jinhui Zhang, Tao Peng, Liang Yang, and Hongfei Lin. 2020. ALBERT-BiLSTM for sequential metaphor detection. In Proceedings of the Workshop on Figurative Language Processing, pages 110–115. Yucheng Li, Shun Wang, Chenghua Lin, and Frank Guerin. 2023. Metaphor detection via explicit basic meanings modelling. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 91–100. Zhenxi Lin, Qianli Ma, Jiangyue Yan, and Jieyu Chen. 2021. CATE: A contrastive pre-trained model for metaphor detection with semi-supervised learning. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 3888–3898. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized bert pretraining approach. Computing Research Repository, arXiv:1907.11692. Rui Mao and Xiao Li. 2021. Bridging towers of multitask learning with a gating mechanism for aspectbased sentiment analysis and sequential metaphor identification. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 13534–13542. Rui Mao, Chenghua Lin, and Frank Guerin. 2018. Word embedding and WordNet based metaphor identification and interpretation. 
In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 1222–1231. Rui Mao, Chenghua Lin, and Frank Guerin. 2019. Endto-end sequential metaphor identification inspired by linguistic theories. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 3888–3898. George A. Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39–41. Saif Mohammad, Ekaterina Shutova, and Peter Turney. 2016. Metaphor as a medium for emotion: An empirical study. In Proceedings of the Joint Conference on Lexical and Computational Semantics, pages 23–33. Michael Mohler, Mary Brunson, Bryan Rink, and Marc Tomlinson. 2016. Introducing the LCC metaphor datasets. In Proceedings of the International Conference on Language Resources and Evaluation, pages 4221–4227. Roberto Navigli, Simone Conia, and Björn Ross. 2023. Biases in large language models: Origins, inventory, and discussion. Journal of Data and Information Quality, 15(2):1–21. Pragglejaz Group. 2007. MIP: A method for identifying metaphorically used words in discourse. Metaphor and symbol, 22(1):1–39. Marek Rei, Luana Bulat, Douwe Kiela, and Ekaterina Shutova. 2017. Grasping the finer point: A supervised similarity network for metaphor detection. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1537–1546. Omid Rohanian, Marek Rei, Shiva Taslimipoor, and Le An Ha. 2020. Verbal multiword expressions for identification of metaphor. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 2890–2895. Karin Kipper Schuler. 2005. VerbNet: A broadcoverage, comprehensive verb lexicon. University of Pennsylvania. Ekaterina Shutova. 2010. Automatic metaphor interpretation as a paraphrasing task. In Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 1029–1037. Ekaterina Shutova, Douwe Kiela, and Jean Maillard. 2016. Black holes and white rabbits: Metaphor identification with visual features. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics, pages 160–170. Ekaterina Shutova, Lin Sun, Elkin Darío Gutiérrez, Patricia Lichtenstein, and Srini Narayanan. 2017. Multilingual metaphor processing: Experiments with semi-supervised and unsupervised learning. Computational Linguistics, 43(1):71–123. Ekaterina Shutova, Lin Sun, and Anna Korhonen. 2010. Metaphor identification using verb and noun clustering. In Proceedings of the International Conference on Computational Linguistics, pages 1002–1010. Wei Song, Shuhui Zhou, Ruiji Fu, Ting Liu, and Lizhen Liu. 2021. Verb metaphor detection via contextual relation learning. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 4240–4251. Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In Proceedings of the AAAI Conference on Artificial Intelligence, page 4444–4451. 13321 Gerard J. Steen, Aletta G. Dorst, John B. Herrmann, Anna A. Kaal, Tina Krennmayr, and Trijntje Pasma. 2010. A method for linguistic metaphor identification: From MIP to MIPVU. John Benjamins, Amsterdam. Chang Su, Weijie Chen, Ze Fu, and Yijiang Chen. 2020a. Multimodal metaphor detection based on distinguishing concreteness. Neurocomputing, 429:166–173. Chang Su, Kechun Wu, and Yijiang Chen. 2021. 
Enhanced metaphor detection via incorporation of external knowledge based on linguistic theories. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 1280–1287. Chuandong Su, Fumiyo Fukumoto, Xiaoxi Huang, Jiyi Li, Rongbo Wang, and Zhiqun Chen. 2020b. DeepMet: A reading comprehension paradigm for tokenlevel metaphor detection. In Proceedings of the Second Workshop on Figurative Language Processing, pages 30–39. Weiwei Sun, Shuyu Guo, Shuo Zhang, Pengjie Ren, Zhumin Chen, Maarten de Rijke, and Zhaochun Ren. 2023. Metaphorical user simulators for evaluating task-oriented dialogue systems. ACM Transactions on Information Systems, 42(1):1–29. Yuan Tian, Nan Xu, Wenji Mao, and Daniel Zeng. 2023. Modeling conceptual attribute likeness and domain inconsistency for metaphor detection. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 7736–7752. Yulia Tsvetkov, Leonid Boytsov, Anatole Gershman, Eric Nyberg, and Chris Dyer. 2014. Metaphor detection with cross-lingual model transfer. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 248–258. Peter Turney, Yair Neuman, Dan Assaf, and Yohai Cohen. 2011. Literal and metaphorical sense identification through concrete and abstract context. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 680–690. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the Conference on Neural Information Processing Systems, pages 1–11. Lennart Wachowiak and Dagmar Gromann. 2023. Does GPT-3 grasp metaphors? identifying metaphor mappings with generative language models. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 1018–1032. Shun Wang, Yucheng Li, Chenghua Lin, Loic Barrault, and Frank Guerin. 2023. Metaphor detection with effective context denoising. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics, pages 1404–1409. Semantic Role List Asset Instrument Time Agent Medium Location Destination Causer Extent Theme Patient State Utterance Stimulus Recipient Material Attribute Goal Location Source Path Pivot Experiencer Trajectory Value Manner Circumstance Product Beneficiary Topic Result Table 6: Semantic roles parsed by VerbNet Semantic Parser. Yorick Wilks. 1975. A preferential, pattern-seeking, semantics for natural language inference. Artificial Intelligence, 6(1):53–74. Yorick Wilks. 1978. Making preferences more active. Artificial Intelligence, 11(3):197–223. Chuhan Wu, Fangzhao Wu, Yubo Chen, Sixing Wu, Zhigang Yuan, and Yongfeng Huang. 2018. Neural metaphor detecting with CNN-LSTM model. In Proceedings of the Workshop on Figurative Language Processing, pages 110–114. Shenglong Zhang and Ying Liu. 2022. Metaphor detection via linguistics enhanced Siamese network. In Proceedings of the International Conference on Computational Linguistics, pages 4149–4159. Shenglong Zhang and Ying Liu. 2023. Adversarial multi-task learning for end-to-end metaphor detection. In Findings of the Association for Computational Linguistics: ACL 2023, pages 1483–1497. A More Details on Method A.1 Semantic Role Mapping We use a publicly available tool1 as the VerbNet Parser in our method. The list of semantic roles that we use is shown in Table 6. 
We split arguments containing conjunctions such as "or" or "and" into separate arguments at these conjunctions and then lemmatize them. If the VerbNet Semantic Parser (https://github.com/jgung/verbnet-parser) fails to label the semantic roles, we use the Stanford dependency parser (https://stanfordnlp.github.io/CoreNLP/) as a substitute. Specifically, we use the dependency relations nsubj, nmod:agent, obl:agent, and agent to label the semantic role Agent, and the dependency relations obj, dobj, nsubjpass, nsubj:xsubj, nsubj:pass, and nsubj:passdir to label the semantic role Theme.

Domain Concepts in Master Metaphor List (Lakoff et al., 1991) (Cognitive Level): BODY IDEA INJURY CLOTH ARGUMENT CONTAINER LOVE HOPE SOCIETY BATTERY LIGHT ANGER ABILITY PROBLEM DEATH EMOTION FIRE WATER HARM RESOURCE COMMODITY WAR FIGHT CHILD CAREER BURDEN RACE WORD INFORMATION MONEY JOURNEY SCALE FLUID FAILURE THEORY CHANGE OBLIGATION PEOPLE COMPETITION LIQUID OBJECT RESPONSIBILITY IMPORTANCE FOOD PRECEDENCE MACHINE MOTION TIME LIFE BELIEF PATH WEAPON
Domain Concepts in OpenCyc (Lenat et al., 1985) (Commonsense Level): GOAL STATE MOVEMENT VEHICLE HUMAN ACTIVITY SOLAR SYSTEM LOGIC ARTIFACT SHOPPING COMMUNICATION MATERIAL HUMAN BEING SOCIAL ACTIVITY HUMAN MATH DEVICE SPACE POLITICS PHYSIOLOGY COMMERCE TRANSPORTATION FORM SOFTWARE ENTERTAINMENT INDIVIDUAL AGENT PROFESSION ACTION CULTURE EARTH LIVING THING ECOLOGY BUILDING SPORT ANIMAL LOGISTICS PRODUCT WEATHER CHEMISTRY BEHAVIOR BUSINESS OCCUPATION LITERATURE ORGANIZATION PLAN SOCIAL TRAVEL LANGUAGE ASTRONOMY WORK RELATION PLANT LIFE BELIEF PATH WEAPON TIME
Table 7: Domain concept list constructed from domains in the Master Metaphor List and OpenCyc.

Notation | MOH-X | LCC-Verb | TroFi | VUA-Verb | Description
N | 160 | 200 | 200 | 200 | maximum length of text tokens
lr (exc. encoder) | 3e−3 | 3e−4 | 3e−4 | 3e−4 | learning rate of components except the text encoder
lr (encoder) | 3e−5 | 3e−5 | 3e−5 | 3e−5 | learning rate of the text encoder
bs | 32 | 32 | 32 | 512 | batch size
l_mlp | 2 | 3 | 3 | 3 | number of layers in MLP(·)
Table 8: Hyper-parameter values in our proposed method.

A.2 Conceptual Domain Mining

The domain concept list is shown in Table 7. When constructing the domain concept list, we filter out overly abstract domains (e.g., THING). We also exclude overly abstract hypernyms in WordNet, namely physical_entity, abstraction, and entity. When determining the hypernym path of a word w in our conceptual domain mining algorithm, if the word w can be annotated with a named entity label (i.e. person, location, organization, money, number, ordinal, percent, date, time, duration), we concatenate this label with its hypernym path in WordNet to form the hypernym path for w; otherwise, we directly take the hypernym path of w in WordNet. In our method, we use the function path_similarity(·) from the NLTK Python package (https://www.nltk.org/howto/wordnet.html) as the function Sim(·) in Eq. (3) to calculate the path similarity of two words in WordNet. path_similarity(·) returns a score denoting the similarity between two word senses, based on the shortest path that connects the senses in WordNet; the score is in the range of 0 to 1.

B Dataset Construction

LCC-Verb LCC (Mohler et al., 2016) is a metaphor detection dataset with metaphoricity ratings from 0 to 3. We extracted verb instances to create LCC-Verb, classifying those labeled 0 as literal and those labeled 2 or 3 as metaphorical. We discarded instances labeled 1, which are possible metaphors.
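A minimal sketch of the label mapping just described is shown below; the input file name and the column names ("score", "verb", "sentence") are hypothetical placeholders, since the LCC release format is not specified here.

```python
# Sketch of the LCC-Verb label mapping: score 0 -> literal, 2 or 3 -> metaphorical,
# score 1 discarded. The file name and column names ("score", "verb", "sentence")
# are hypothetical placeholders for the actual LCC release format.
import pandas as pd

lcc = pd.read_csv("lcc_verb_instances.csv")        # hypothetical export of LCC verb instances
lcc = lcc[lcc["score"] != 1]                       # drop possible metaphors
lcc["label"] = (lcc["score"] >= 2).astype(int)     # 1 = metaphorical, 0 = literal
lcc[["sentence", "verb", "label"]].to_csv("lcc_verb.csv", index=False)
```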
C Implementation Details

We rerun the released code of the baselines on our newly constructed LCC-Verb dataset. As BasicBERT has never been tested on the datasets we use, we rerun its code (https://github.com/liyucheng09/BasicBERT/tree/master) on the four datasets in our experiments. We also rerun the code of MisNet (https://github.com/SilasTHU/MisNet) and AdMul (https://github.com/SilasTHU/AdMul) on LCC-Verb. We use AdamW (https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html) as our optimizer with a weight decay of 0.01. The dropout rate is 0.5. The dimension of the text embedding d is 768. The hidden dimension of the MLP is 2304. The number of Transformer blocks in the word pair layer is 2. We train our method for 15 epochs. Table 8 lists the other hyper-parameter values. We utilize Stanford CoreNLP (https://stanfordnlp.github.io/CoreNLP/ner.html) for named entity tagging in our method. The average Cohen's kappa coefficients κ (Cohen, 1960) of the inter-rater agreement in the human evaluations of explainable results on the MOH-X, LCC-Verb, TroFi, and VUA-Verb datasets are 0.75, 0.75, 0.70, and 0.73, respectively (note that 0.6 ≤ κ ≤ 0.8 indicates substantial agreement).

Method | Template prompt
Standard zero-shot | Decide whether the word "[verb]" in the sentence "[sentence]" is used metaphorically. Give me an answer selected from "yes" or "no".
Few-shot | Q: Decide whether the word "[verb_example]" in the sentence "[sentence_example]" is used metaphorically. A: [answer_example] Q: Decide whether the word "[verb]" in the sentence "[sentence]" is used metaphorically.
Table 9: Prompt design for the zero-shot and few-shot prompting strategies in the ChatGPT experiments. [sentence] represents the input slot for the sentence of a test instance, and [verb] the input slot for the focus verb within this sentence. Similarly, [sentence_example] denotes the input slot for the sentence of a training instance, and [verb_example] the focus verb within it. [answer_example] serves as the input slot for the answer, with "yes" indicating a metaphorical instance and "no" indicating a literal instance.

Model | Prompting Strategy | MOH-X (F1 / Acc) | LCC-Verb (F1 / Acc) | TroFi (F1 / Acc) | VUA-Verb (F1 / Acc)
GPT-3.5 | Standard Zero-Shot | 64.4 / 67.3 | 42.6 / 50.4 | 50.1 / 56.0 | 50.5 / 69.3
GPT-3.5 | 5-Shot | 70.1 / 72.5 | 48.0 / 52.8 | 57.6 / 60.0 | 53.6 / 69.8
GPT-4 | Standard Zero-Shot | 72.6 / 77.0 | 66.4 / 66.0 | 67.3 / 67.5 | 66.9 / 75.5
GPT-4 | 5-Shot | 75.3 / 79.0 | 69.7 / 68.0 | 68.2 / 68.3 | 67.6 / 75.8
Our WPDM | — | 83.8 / 84.2 | 84.9 / 87.0 | 73.4 / 76.3 | 74.1 / 84.4
Table 10: Comparison between our method and ChatGPT. The best results are in bold font.

D Experiments on ChatGPT

Implementation Details We also compare our method with ChatGPT. We use GPT-3.5 (gpt-3.5-turbo-0613, https://platform.openai.com/docs/models/gpt-3-5) and GPT-4 (gpt-4-0125-preview, https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo) as baseline models. We explore two prompting strategies: standard zero-shot prompting and standard few-shot prompting (Brown et al., 2020). Table 9 summarizes the prompts used in the ChatGPT experiments. We randomly sampled 200, 250, 400, and 600 instances from MOH-X, LCC-Verb, TroFi, and the VUA-Verb test set, respectively, as the testing data for ChatGPT, and then randomly sampled 5 instances from the remaining data as the few-shot examples.
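To make the prompting setup concrete, here is a sketch of a zero-shot query using the Table 9 template with the openai Python client (v1-style interface). The model name follows the paper; error handling, rate limiting, and batching are omitted, and the answer parsing mirrors the cleansing step described next.

```python
# Sketch of a zero-shot ChatGPT query with the Table 9 template (openai>=1.0 client style).
# Error handling, rate limiting, and batching are omitted.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def zero_shot_query(sentence: str, verb: str, model: str = "gpt-3.5-turbo-0613") -> str:
    prompt = (f'Decide whether the word "{verb}" in the sentence "{sentence}" is used '
              f'metaphorically. Give me an answer selected from "yes" or "no".')
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    answer = response.choices[0].message.content.lower()
    # keep whichever of "yes"/"no" occurs first in the reply (answer cleansing, see below)
    yes_pos, no_pos = answer.find("yes"), answer.find("no")
    if yes_pos == -1:
        return "no"
    return "yes" if no_pos == -1 or yes_pos < no_pos else "no"

print(zero_shot_query("The public digests social media news with diverse views.", "digests"))
```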
To cleanse the answers, we take the first occurrence of either "yes" or "no" in the response provided by ChatGPT, after converting all characters in the answer string to lowercase.

Experimental Results From the experimental results in Table 10, we can see that GPT-4 performs better than GPT-3.5 on metaphor detection. Although the 5-shot prompting strategy improves both GPT-3.5 and GPT-4 over standard zero-shot prompting, our method outperforms all ChatGPT baselines, which verifies the effectiveness of our method and indicates the difficulty and challenge inherent in the task of metaphor detection.

E Licenses of Scientific Artifacts

Table 11 shows the licenses of the scientific artifacts used in this paper.

Scientific Artifact | License
MOH-X | https://saifmohammad.com/WebPages/SentimentEmotionLabeledData.html
TroFi | https://www.cs.sfu.ca/~anoop/students/jbirke/LICENSE.html
Stanford CoreNLP | GNU General Public License (v2 or later)
WordNet | WordNet 3.0 license
VUA-Verb | CC BY-SA 3.0
ConceptNet | CC BY-SA 4.0
roberta-base | MIT license
ChatGPT | API license
LCC | Unspecified
OpenCyc | Apache-2.0
NLTK | Apache-2.0
VerbNet Parser | Apache-2.0
Table 11: Licenses of the scientific artifacts.